
I love football. I'm no longer young, but I still play in a community game every week. My favorite club is AC Milan, and my favorite player is Marco van Basten.
Sometime in the early 2000s, I was managing Wigan Athletic in Championship Manager: a small third-tier club with a threadbare squad and no business dreaming of titles. I sank countless hours into that save. Season by season, signing by signing, Wigan climbed the divisions, won the league, reached the Champions League, and finally stood on top of Europe. It was the most satisfying thing I had ever built on a computer. And that golden run was ended not by a stronger opponent but by a 32-bit integer: the club's cash balance grew so large that it overflowed, the books flipped to an absurd negative number, and the game crashed.
That experience never really left me. Football has a particular structure — specialized roles, dueling formations, a chain of interlocking decisions packed into ninety minutes — that makes it one of the best arenas humans have invented for testing strategy under pressure. 2026 is a World Cup year, and the feeling came back.
So I built AgentPitch.

AgentPitch match replay
AgentPitch is an LLM-driven football simulator in which every player on the pitch is an AI agent. It is not rule-based and not trained with reinforcement learning: an LLM writes the code that controls each player's decisions, and that code evolves automatically after every match.
The whole system exposes one minimal interface:

def decide(game_state: dict, player_state: dict, history: list) -> Action:

The LLM writes exactly this one function. The sandbox runs it every tick, and the result feeds back into the next round of evolution. The system closes into a loop: generate → play → observe → evolve → repeat.
The LLM here is not narrating "how a midfielder should play"; it is writing executable code that runs hundreds of times per second against a deterministic physics engine. Real-time inference is simply too slow for a fast-running football simulation.
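To make the contract concrete, here is a minimal strategy in that shape — a sketch only, with Move and Hold defined locally as stand-ins for the action classes the sandbox actually injects (their exact fields are my assumption):

```python
# Stand-ins for the sandbox-injected action classes (assumed fields).
class Move:
    def __init__(self, dx, dy, speed):
        self.dx, self.dy, self.speed = dx, dy, speed

class Hold:
    pass

def decide(game_state, player_state, history):
    # Chase the ball when it is far away; otherwise hold position.
    ball = game_state["ball"]["position"]
    me = player_state["position"]
    dx, dy = ball[0] - me[0], ball[1] - me[1]
    d = (dx * dx + dy * dy) ** 0.5  # no math.sqrt inside the sandbox
    if d < 0.5:
        return Hold()
    return Move(dx=dx / d, dy=dy / d, speed=1.0)

# One tick of the loop the engine runs hundreds of times per second:
state = {"ball": {"position": (30.0, 20.0)}}
action = decide(state, {"position": (0.0, 20.0)}, [])
```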
The design centers on two pipelines that bracket every match.
Before the match: the code generation pipeline (CGP). The system uses Jinja2 to build a complete prompt — the pitch geometry, the nine player attributes (speed, skill, stamina, passing, shooting, and so on), the action space (move / pass / shoot / tackle / hold), the formation mechanics, and the sandbox constraints — then asks the LLM to generate a complete decide() function. The generated code is compiled in the sandbox before kickoff. If compilation fails, the error message is fed back into a repair prompt, with up to three retries.
After the match: the post-match evolution pipeline (PMEP). When the final whistle blows, the engine picks up to five key events from the match log — goals, interceptions, fallback substitutions, shots, tackles — and builds a strategy-evolution prompt: here is your previous strategy, here is what happened, now improve it. The LLM returns a revised decide(), which is compiled, validated, and written to disk as the team's strategy for the next match. Over a full series, the revisions stack up round after round.
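The event-selection step could be sketched like this — the event type names and priority order are illustrative assumptions, not AgentPitch's actual log schema:

```python
# Hypothetical priorities: goals matter most, tackles least (assumption).
PRIORITY = {"goal": 0, "interception": 1, "fallback_substitution": 2,
            "shot": 3, "tackle": 4}

def pick_key_events(log, limit=5):
    """Keep up to `limit` highest-priority events, in original match order."""
    key = [e for e in log if e["type"] in PRIORITY]
    key.sort(key=lambda e: PRIORITY[e["type"]])  # stable: ties keep log order
    chosen = key[:limit]
    chosen.sort(key=lambda e: e["tick"])         # present chronologically
    return chosen
```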
By design, the two pipelines share the same code-extraction logic and the same compile-error retry mechanism. A strategy that looks plausible but won't run in the sandbox gets repaired automatically, with no human in the loop.
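A sketch of what that shared extract-compile-retry helper might look like, assuming a hypothetical llm_fix callback that returns a repaired reply (the names here are mine, not the project's):

```python
import re

def extract_code(reply: str) -> str:
    """Pull the decide() source out of a fenced code block, if present."""
    m = re.search(r"```(?:python)?\n(.*?)```", reply, re.DOTALL)
    return m.group(1) if m else reply

def compile_with_retry(reply: str, llm_fix, max_retries: int = 3) -> str:
    """Compile the extracted strategy; on failure, feed the error back."""
    source = extract_code(reply)
    for _ in range(max_retries):
        try:
            compile(source, "<strategy>", "exec")  # stand-in for sandbox compile
            return source
        except SyntaxError as err:
            source = extract_code(llm_fix(source, str(err)))
    raise RuntimeError("strategy failed to compile after retries")
```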

The sandbox system took a long time to design. Each decide() call has a 5-millisecond budget; on timeout, that player returns Hold() for the tick. After ten consecutive timeouts a circuit breaker trips and the player is disabled for the rest of the match. PMEP can see these failure patterns and correct them in the next round of evolution.
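The timeout bookkeeping described above can be sketched as a small circuit breaker; the class and method names are my own, not AgentPitch's:

```python
class CircuitBreaker:
    """Trips after N consecutive per-tick timeouts for one player."""
    def __init__(self, max_consecutive_timeouts=10):
        self.max = max_consecutive_timeouts
        self.consecutive = 0
        self.tripped = False

    def record(self, timed_out: bool):
        if timed_out:
            self.consecutive += 1
            if self.consecutive >= self.max:
                self.tripped = True  # player disabled for the rest of the match
        else:
            self.consecutive = 0  # any successful call resets the streak
```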
Three sandbox backends are supported:
Python (RestrictedPython) — the default. The namespace exposes 28 safe built-ins plus the five action classes, and all imports are banned at runtime. Code that writes import math silently returns Hold() every tick; the generation prompt explicitly warns the LLM about this.
JavaScript (QuickJS) — each player gets an independent JS context. Persistent state across ticks is supported, because module-level variables in the same context survive between execute() calls. Timeouts are enforced via ctx.set_time_limit (5 ms).
Rust compiled to WebAssembly — strategies are compiled to wasm32-wasip1 at kickoff, with results cached by sha256. Each player gets its own linear memory and Wasmtime Store. Timeouts are enforced by an epoch interrupt firing every 1 ms. Randomness is deterministic: a splitmix64 host function seeded with the strategy hash XORed with the player ID hash.
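splitmix64 itself is a tiny algorithm. Here is a Python rendition of the core of such a host function, with the seeding shown the way the article describes it (the seeding inputs in the comment are illustrative):

```python
MASK = (1 << 64) - 1  # keep everything in 64-bit unsigned arithmetic

def splitmix64(seed):
    """Yield a deterministic stream of 64-bit values from one seed,
    e.g. seed = strategy_hash ^ player_id_hash."""
    state = seed & MASK
    while True:
        state = (state + 0x9E3779B97F4A7C15) & MASK
        z = state
        z = ((z ^ (z >> 30)) * 0xBF58476D1CE4E5B9) & MASK
        z = ((z ^ (z >> 27)) * 0x94D049BB133111EB) & MASK
        yield z ^ (z >> 31)
```

The same seed always reproduces the same stream, which is what makes a match replayable tick for tick.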
In a single match, team A can run Python while team B runs Rust. The tick engine calls through a uniform Sandbox protocol and never needs to know which language is underneath.
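A sketch of what such a uniform protocol could look like in Python typing terms — the method names here are assumptions, not the project's actual interface:

```python
from typing import Protocol

class Sandbox(Protocol):
    def load(self, player_id: str, source: str) -> None: ...
    def execute(self, player_id: str, game_state: dict,
                player_state: dict, history: list) -> object: ...

class EchoSandbox:
    """Trivial backend showing that the engine only sees the protocol."""
    def __init__(self):
        self.sources = {}
    def load(self, player_id, source):
        self.sources[player_id] = source
    def execute(self, player_id, game_state, player_state, history):
        return ("hold", player_id)  # placeholder action

def run_tick(sandbox: Sandbox, players: list) -> list:
    # The engine never knows whether Python, JS, or wasm is underneath.
    return [sandbox.execute(p, {}, {}, []) for p in players]
```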
The generation prompt has fifteen sections: the callback contract, the full game_state schema (tick, score, ball position, player positions, team phase), the player_state schema (nine attributes, cooldown state, formation zone), the history schema (actions and outcomes for the last 10 ticks), the formation and snapshot mechanics, the sandbox constraints, and finally the task itself.
The prompt ends like this:
Write the complete decide() function. One function only. Start with def decide(. Return only the decide() function inside a Python fenced code block. No explanation, no other text.
The constraints are strict: a single function, no imports, inline helper functions only, and an explicit whitelist of available built-ins. The LLM cannot import numpy, cannot define module-level classes, cannot call math.sqrt. Need a square root? Write x ** 0.5. Vector geometry? Basic arithmetic only.
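For example, a normalize helper in the style this constraint forces, using nothing beyond arithmetic operators:

```python
def normalize(dx, dy):
    """Unit direction vector using only arithmetic (no math module)."""
    d = (dx * dx + dy * dy) ** 0.5  # x ** 0.5 instead of math.sqrt
    if d < 1e-9:
        return 0.0, 0.0
    return dx / d, dy / d
```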
Strategies that survive several rounds of evolution tend to develop concrete behavioral traits: goalkeepers learn to stay put until a shot enters their save radius; midfielders start checking cooldown_remaining before passing; forwards start reading the history buffer to judge whether a passing pattern has already failed several times in a row. None of this is guaranteed — it depends on the model — but it is observable: open the evolved code and you can read the reasoning directly.
The UI is styled after the 2D mode of early Championship Manager, so you can watch LLM strategies play each other in a familiar way.

I have been working on AI agents for a while, mostly in real-time data systems and security monitoring. One problem keeps coming back: evaluation. How do you know an agent is actually getting better?
Football is a rare domain where that question has a simple answer. The score is the score. A strategy that beats opponents it has never seen must be doing something right. A strategy that generates plausible-sounding code but leaves eleven players standing still must be doing something wrong, and the match log tells you exactly where.
The three-language support has research value here too. Ask the same LLM to generate a Python strategy and a Rust strategy and it faces different constraints and produces very differently structured code. Comparing performance across languages — and across models, since AgentPitch supports OpenAI, Anthropic, Gemini, DeepSeek, OpenRouter, and local Ollama — makes for a clean natural experiment.
The bigger question I want to explore: put an LLM in a constrained executable environment with clear win conditions and a feedback loop, and does it develop coherent strategy? Or does it just pattern-match on surface features of the schema and produce code that looks tactical but moves players at random? The answer so far: it depends on the model, and on how many rounds of evolution you run.
I picked several mainstream models and had each generate a batch of strategies from the default prompt. Let's look at the results.
Models tested: Anthropic Sonnet 4.6, Anthropic Opus 4.7, OpenAI GPT 5.5, OpenAI GPT 5.4 nano, Gemini 3.1 Pro, Gemini 3.1 Flash, OpenAI GPT OSS, DeepSeek V4 Flash, and others.
In my environment, Gemini 3.1 Flash was surprisingly fast, finishing a strategy in about 3 seconds, while OpenAI GPT 5.5 was noticeably slow. The spread in the data is large: two orders of magnitude between the fastest and the slowest, and the provider matters far more than the target language.

"More tokens do not mean more strategy code."

Sonnet 4.6 produced the densest code: roughly 6K tokens yielded about 20KB of source in both JS and Python, the highest bytes-per-token ratio of any model. GPT-5.5 spent 14,789 tokens but produced only 11.7KB, much of it reasoning rather than code; the actual code volume was quite restrained. Gemini Flash sat at the other extreme, with only 2.6–3.9KB of output per language — fast but lean, which is also why its latency is so low.
Averaged by language: JavaScript 42.0s of generation latency, Python 54.5s, Rust 95.2s. Rust took 2.3x as long to generate as JS and consumed 60% more tokens, yet produced smaller source — models think more slowly and carefully when writing Rust, but each line carries more information.

I entered all the Python strategies in a knockout cup, and Opus 4.7 won it.

From the bracket: Sonnet 4.6 demolished GPT-OSS-120b 11:0, Opus 4.7 beat DeepSeek V4 Flash 5:0 in the semifinal, then edged Sonnet 4:2 in the final.
A cup carries a lot of randomness; a league plays more matches and lets every strategy face every other, which is more telling. I pulled in every strategy I had and ran a single round-robin league. The results:

Anthropic dominated: four of the top six places, averaging 47 points, while the free models on OpenRouter managed only 13. You get what you pay for. Averaged by provider: Anthropic 47.3 points, Gemini 33.3, OpenAI 28.8, DeepSeek 16.0, and the free OpenRouter models 13.0 — a 3.6x gap in points on the same schedule with the same prompt, changing nothing but the model.
Opus is the undisputed king of this pitch: 53.7 points averaged across all three languages.

The model-by-language cross table reveals a pattern: fix the model and the language barely changes the outcome; swap the model and the table can shift by more than 30 points. Model choice matters far more than language choice.

The most surprising finding was Rust's overall showing: 35.3 points on average, above Python's 29.7 and JS's 26.4. I had expected Python to be stronger — it is closer to natural language, and LLMs presumably have better command of it. But Rust's strict type system may be exactly what forces models to write code with fewer bugs. Slow and careful wins.

Note: the league ran on pre-generated strategies; no strategy evolution took place between league matches. Adding evolution would slow the simulation down enormously.
With the numbers in hand, the more interesting question is what happens at the strategy level. I put the bottom club (DeepSeek) and the champion (Opus) head to head to see how they actually play. Final score: 2:11 to Opus.

Watching the match, Opus keeps an impressively tidy shape: defenders, midfielders, and forwards each hold their own zones. DeepSeek's goalkeeper, by contrast, wanders all the way into midfield and leaves the goal completely empty, while both forwards charge into the opponent's six-yard box — eager to attack, perhaps, but the haste gets them nothing.

Both strategies' code is attached below; interested readers can dig into it themselves.
Opus 4.7
def decide(game_state, player_state, history):
    # Helpers
    def dist(a, b):
        return ((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5

    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    field = game_state["field"]
    fw = field["width"]
    fh = field["height"]
    my_team = game_state["my_team"]
    my_id = game_state["my_player_id"]
    ball = game_state["ball"]
    ball_pos = ball["position"]
    ball_vel = ball["velocity"]
    players = game_state["players"]
    me = player_state
    my_pos = me["position"]
    my_role = me["role"]
    # Goals
    own_goal_x = field["team_a_goal_x"] if my_team == "team_a" else field["team_b_goal_x"]
    opp_goal_x = field["team_b_goal_x"] if my_team == "team_a" else field["team_a_goal_x"]
    goal_top = field["goal_top"]
    goal_bottom = field["goal_bottom"]
    goal_y = (goal_top + goal_bottom) / 2.0
    opp_goal = (opp_goal_x, goal_y)
    own_goal = (own_goal_x, goal_y)
    attacking_dir = 1.0 if opp_goal_x > own_goal_x else -1.0
    # Teammates and opponents
    teammates = []
    opponents = []
    for pid, p in players.items():
        if pid == my_id:
            continue
        if p["team"] == my_team:
            teammates.append((pid, p))
        else:
            opponents.append((pid, p))

    # Nearest opponent to me
    def nearest_opp(pos):
        best = None
        bd = 1e18
        for pid, p in opponents:
            d = dist(pos, p["position"])
            if d < bd:
                bd = d
                best = (pid, p)
        return best, bd

    # Predict ball next position
    next_ball = (ball_pos[0] + ball_vel[0], ball_pos[1] + ball_vel[1])
    # Match phase handling
    phase = game_state.get("match_phase", "in_play")
    if phase in ("pre_match", "half_time", "full_time", "goal_scored"):
        return Hold()
    cooldown = me.get("cooldown_remaining", 0)
    # ================ GOALKEEPER ================
    if my_role == "GK":
        # Stay near goal line, intercept shots
        penalty_x = own_goal_x + attacking_dir * (fw * 0.12)
        # If ball is in danger zone (close to own goal), come out
        ball_dist_to_goal = abs(ball_pos[0] - own_goal_x)
        if me["has_ball"]:
            if cooldown == 0:
                # Find best teammate to pass to (upfield)
                best_t = None
                best_score = -1e18
                for pid, p in teammates:
                    if p["role"] == "GK":
                        continue
                    tp = p["position"]
                    # prefer upfield, away from opponents
                    upfield = (tp[0] - my_pos[0]) * attacking_dir
                    _, od = nearest_opp(tp)
                    score = upfield + od * 0.5
                    if score > best_score:
                        best_score = score
                        best_t = p
                if best_t:
                    power = clamp(8 + dist(my_pos, best_t["position"]) * 0.15, 8, me["strength"])
                    return Pass(target_pos=best_t["position"], power=power)
            return Hold()
        # Position: between ball and goal
        target_x = own_goal_x + attacking_dir * min(ball_dist_to_goal * 0.1, fw * 0.08)
        target_y = clamp(ball_pos[1], goal_top + 1, goal_bottom - 1)
        # If ball very close and loose, try to pickup/intercept
        if ball["carrier_id"] is None and dist(my_pos, ball_pos) < fw * 0.15:
            dx = ball_pos[0] - my_pos[0]
            dy = ball_pos[1] - my_pos[1]
            d = (dx*dx + dy*dy) ** 0.5
            if d > 0.01:
                return Move(dx=dx/d, dy=dy/d, speed=1.0)
        dx = target_x - my_pos[0]
        dy = target_y - my_pos[1]
        d = (dx*dx + dy*dy) ** 0.5
        if d < 0.5:
            return Hold()
        return Move(dx=dx/d, dy=dy/d, speed=min(1.0, d/3))
    # ================ FIELD PLAYERS ================
    # If I have the ball
    if me["has_ball"]:
        dist_to_goal = dist(my_pos, opp_goal)
        nearest, nd = nearest_opp(my_pos)
        # Shoot if close to goal
        shoot_range = fw * 0.28
        if dist_to_goal < shoot_range and cooldown == 0:
            # Compute angle toward goal
            dx = opp_goal_x - my_pos[0]
            dy = goal_y - my_pos[1]
            d = (dx*dx + dy*dy) ** 0.5
            if d > 0.01:
                # angle in radians: use atan2-like via direction
                # Shoot uses angle; approximate with simple computation
                # angle = atan2(dy, dx)
                # Implement atan2 approximation
                def atan2_approx(y, x):
                    if x == 0 and y == 0:
                        return 0.0
                    ax = abs(x)
                    ay = abs(y)
                    if ax >= ay:
                        a = ay / ax
                        # atan(a) approx
                        r = a / (1 + 0.28 * a * a)
                    else:
                        a = ax / ay
                        r = 1.5707963267948966 - a / (1 + 0.28 * a * a)
                    if x < 0:
                        r = 3.141592653589793 - r
                    if y < 0:
                        r = -r
                    return r
                angle = atan2_approx(dy, dx)
                power = min(me["strength"], 18)
                return Shoot(angle=angle, power=power)
        # If under pressure, try to pass
        if nd < 4.0 and cooldown == 0:
            # Find open teammate, prefer forward
            best_t = None
            best_score = -1e18
            for pid, p in teammates:
                if p["role"] == "GK":
                    continue
                tp = p["position"]
                _, od = nearest_opp(tp)
                upfield = (tp[0] - my_pos[0]) * attacking_dir
                d_to_me = dist(my_pos, tp)
                if d_to_me < 3 or d_to_me > fw * 0.5:
                    continue
                score = upfield * 1.5 + od - d_to_me * 0.1
                if score > best_score:
                    best_score = score
                    best_t = p
            if best_t:
                power = clamp(6 + dist(my_pos, best_t["position"]) * 0.3, 6, me["strength"])
                return Pass(target_pos=best_t["position"], power=power)
        # Otherwise dribble toward goal
        dx = opp_goal_x - my_pos[0]
        dy = goal_y - my_pos[1]
        # Avoid nearest opponent slightly
        if nearest and nd < 6.0:
            opp_pos = nearest[1]["position"]
            ax = my_pos[0] - opp_pos[0]
            ay = my_pos[1] - opp_pos[1]
            ad = (ax*ax + ay*ay) ** 0.5
            if ad > 0.01:
                dx += (ax/ad) * 3
                dy += (ay/ad) * 3
        d = (dx*dx + dy*dy) ** 0.5
        if d > 0.01:
            return Move(dx=dx/d, dy=dy/d, speed=1.0)
        return Hold()
    # I don't have the ball
    carrier_id = ball["carrier_id"]
    possession = ball["possession"]
    # Distance to ball
    my_d_ball = dist(my_pos, ball_pos)
    # Find closest teammate to ball
    closest_team_d = my_d_ball
    for pid, p in teammates:
        if p["role"] == "GK":
            continue
        d = dist(p["position"], ball_pos)
        if d < closest_team_d:
            closest_team_d = d
    am_closest = my_d_ball <= closest_team_d + 0.01
    # Loose ball: closest player chases
    if possession is None or carrier_id is None:
        if am_closest or my_d_ball < fw * 0.15:
            dx = next_ball[0] - my_pos[0]
            dy = next_ball[1] - my_pos[1]
            d = (dx*dx + dy*dy) ** 0.5
            if d > 0.01:
                return Move(dx=dx/d, dy=dy/d, speed=1.0)
    # Opponent has ball
    if possession and possession != my_team:
        # Try to tackle if very close
        if carrier_id and carrier_id in players:
            carrier = players[carrier_id]
            cd = dist(my_pos, carrier["position"])
            if cd < 1.8 and cooldown == 0:
                return Tackle(target_player_id=carrier_id)
            # Closest defender presses
            if am_closest:
                # Move toward carrier, intercept line
                target = carrier["position"]
                dx = target[0] - my_pos[0]
                dy = target[1] - my_pos[1]
                d = (dx*dx + dy*dy) ** 0.5
                if d > 0.01:
                    return Move(dx=dx/d, dy=dy/d, speed=1.0)
    # Default: move to formation position with attack/defense bias
    form_pos = me["formation_position"]
    # If our team has possession, push forward and offer support
    if possession == my_team:
        # Bias toward attacking position
        target_x = form_pos[0] + attacking_dir * fw * 0.05
        target_y = form_pos[1]
        # Spread: avoid clustering with carrier
        if carrier_id and carrier_id in players:
            cpos = players[carrier_id]["position"]
            if dist((target_x, target_y), cpos) < 5:
                target_y += 5 if my_pos[1] < cpos[1] else -5
    else:
        target_x = form_pos[0]
        target_y = form_pos[1]
    # Drift toward ball y a bit
    target_y = target_y * 0.7 + ball_pos[1] * 0.3
    target_x = clamp(target_x, 1, fw - 1)
    target_y = clamp(target_y, 1, fh - 1)
    dx = target_x - my_pos[0]
    dy = target_y - my_pos[1]
    d = (dx*dx + dy*dy) ** 0.5
    if d < 1.0:
        return Hold()
    speed = min(1.0, d / 4.0)
    if speed < 0.5:
        speed = 0.5
    return Move(dx=dx/d, dy=dy/d, speed=speed)

DeepSeek Pro
def decide(game_state, player_state, history):
    # --------------------------------------------------------------
    # HELPER: basic math
    # --------------------------------------------------------------
    def dist(p1, p2):
        return ((p1[0] - p2[0])**2 + (p1[1] - p2[1])**2) ** 0.5

    def norm(vx, vy):
        length = (vx**2 + vy**2) ** 0.5
        if length == 0.0:
            return 0.0, 0.0, 0.0
        return vx / length, vy / length, length

    # atan2 & degree conversion (Taylor-based, handles all quadrants)
    PI = 3.141592653589793

    def atan2(y, x):
        if x == 0:
            if y > 0:
                return PI / 2.0
            if y < 0:
                return -PI / 2.0
            return 0.0
        z = y / x
        if abs(z) > 1.0:
            inv = 1.0 / z
            a = inv
            term = inv
            arc = term
            for i in range(1, 10):
                term *= -a * a
                arc += term / (2 * i + 1)
            res = PI / 2.0 - arc
            if z < 0:
                res = -res
        else:
            a = z
            term = a
            arc = term
            for i in range(1, 6):
                term *= -a * a
                arc += term / (2 * i + 1)
            res = arc
        if x < 0:
            if y >= 0:
                return res + PI
            else:
                return res - PI
        return res

    def to_deg(rad):
        return rad * 180.0 / PI

    # --------------------------------------------------------------
    # FIELD & TEAM INFO
    # --------------------------------------------------------------
    field = game_state["field"]
    my_team = game_state["my_team"]
    my_id = game_state["my_player_id"]
    role = player_state["role"]
    ball = game_state["ball"]
    ball_pos = ball["position"]
    ball_carrier = ball["carrier_id"]
    attacking_goal_x = field["team_b_goal_x"] if my_team == "team_a" else field["team_a_goal_x"]
    defending_goal_x = field["team_a_goal_x"] if my_team == "team_a" else field["team_b_goal_x"]
    goal_mid_y = (field["goal_top"] + field["goal_bottom"]) / 2.0
    my_pos = player_state["position"]
    cooldown = player_state["cooldown_remaining"]
    health = player_state["current_health"]
    health_factor = 0.6 + 0.4 * (health / 100.0)

    # --------------------------------------------------------------
    # HELPER: get players list
    # --------------------------------------------------------------
    def get_all_players():
        return list(game_state["players"].values())

    def get_teammates(exclude_self=True):
        return [p for p in get_all_players() if p["team"] == my_team and (not exclude_self or p["player_id"] != my_id)]

    def get_opponents():
        return [p for p in get_all_players() if p["team"] != my_team]

    def get_player_by_id(pid):
        return game_state["players"].get(pid, None)

    # --------------------------------------------------------------
    # HELPER: find opponent GK
    # --------------------------------------------------------------
    def find_opponent_gk():
        for p in get_opponents():
            if p["role"] == "GK":
                return p
        return None

    # --------------------------------------------------------------
    # HELPER: nearest opponent within a distance
    # --------------------------------------------------------------
    def nearest_opponent(pos, max_dist=5.0):
        best = None
        best_d = max_dist + 1
        for p in get_opponents():
            d = dist(pos, p["position"])
            if d < best_d:
                best_d = d
                best = p
        if best_d <= max_dist:
            return best, best_d
        return None, None

    # --------------------------------------------------------------
    # HELPER: shoot evaluation
    # --------------------------------------------------------------
    def should_shoot():
        # distance to goal line
        dx_goal = attacking_goal_x - ball_pos[0]
        # we are facing attacking direction: for team_a x increases, for team_b x decreases
        # only shoot if we are in opponent's half and not too far
        if my_team == "team_a":
            if ball_pos[0] < field["width"] * 0.6:
                return False, 0.0, 0.0
        else:
            if ball_pos[0] > field["width"] * 0.4:
                return False, 0.0, 0.0
        dist_to_goal_line = abs(ball_pos[0] - attacking_goal_x)
        if dist_to_goal_line > field["width"] * 0.45:
            return False, 0.0, 0.0
        # Choose target y based on GK position
        gk = find_opponent_gk()
        if gk is not None:
            gk_y = gk["position"][1]
            if gk_y < goal_mid_y:
                target_y = (goal_mid_y + field["goal_bottom"]) / 2.0
            else:
                target_y = (goal_mid_y + field["goal_top"]) / 2.0
        else:
            target_y = goal_mid_y
        angle_rad = atan2(target_y - ball_pos[1], attacking_goal_x - ball_pos[0])
        angle_deg = to_deg(angle_rad)
        # power: base on distance, strength, shooting
        base_power = min(20, 8 + dist_to_goal_line * 0.2)
        # adjust by attributes (shooting+skill blend) and health
        blend_shooting = (2 * player_state["shooting"] + player_state["skill"]) / 3
        power = base_power * (blend_shooting / 15) * health_factor
        power = min(20, max(5, power))
        return True, angle_deg, power

    # --------------------------------------------------------------
    # HELPER: pass evaluation
    # --------------------------------------------------------------
    def choose_pass():
        teammates = get_teammates()
        if not teammates:
            return None, None, None
        best_score = -1
        best_target = None
        best_target_pos = None
        for tm in teammates:
            pos = tm["position"]
            d = dist(ball_pos, pos)
            if d < 1.5 or d > field["width"] * 0.7:
                continue
            # openness: count opponents within 4m
            close_opp = sum(1 for op in get_opponents() if dist(op["position"], pos) < 4.0)
            openness = max(0, 3 - close_opp)
            # prefer forward passes (toward opponent goal)
            to_goal = attacking_goal_x - pos[0]
            forwardness = to_goal if my_team == "team_a" else -to_goal
            forward_score = max(0, forwardness / field["width"] * 5)
            # role bonus
            role_bonus = 2 if tm["role"] == "FWD" else 1 if tm["role"] == "MID" else 0.5
            # passing skill blend
            pass_skill = (2 * tm["passing"] + tm["skill"]) / 3
            # final score (heuristic)
            score = (openness * 1.5 + forward_score * 1.0 + role_bonus * 1.8) * (pass_skill / 15)
            # penalty if too close to me
            if d < 8:
                score *= 0.5
            if score > best_score:
                best_score = score
                best_target = tm
                # target position: a bit ahead of teammate towards goal
                aim_x = pos[0] + (attacking_goal_x - pos[0]) * 0.2
                aim_y = pos[1]
                best_target_pos = (aim_x, aim_y)
        if best_target and best_score > 0.8:
            # compute pass power based on distance
            d_pass = dist(ball_pos, best_target_pos)
            power = min(20, max(5, d_pass * 0.4))
            return best_target["player_id"], best_target_pos, power
        return None, None, None

    # --------------------------------------------------------------
    # HELPER: move towards a point
    # --------------------------------------------------------------
    def move_toward(target, speed=1.0):
        dx = target[0] - my_pos[0]
        dy = target[1] - my_pos[1]
        n_dx, n_dy, length = norm(dx, dy)
        if length < 1.0:
            speed = min(speed, length)
        return Move(dx=n_dx, dy=n_dy, speed=speed)

    # --------------------------------------------------------------
    # HELPER: move towards a point with snap bypass (speed>=0.5)
    # --------------------------------------------------------------
    def active_move(target):
        return move_toward(target, speed=1.0)

    # --------------------------------------------------------------
    # HELPER: choose strategic formation-based position
    # --------------------------------------------------------------
    def formation_influence():
        zone = player_state["formation_zone"]
        center_x = (zone["x"][0] + zone["x"][1]) / 2
        center_y = (zone["y"][0] + zone["y"][1]) / 2
        return (center_x, center_y)

    # --------------------------------------------------------------
    # HELPER: check if we can tackle the ball carrier
    # --------------------------------------------------------------
    def get_tackle_target():
        if ball_carrier is None or ball_carrier.startswith(my_team):
            return None
        carrier = get_player_by_id(ball_carrier)
        if carrier is None:
            return None
        d = dist(my_pos, carrier["position"])
        if d < 3.0:  # tackle range guess
            return carrier["player_id"]
        return None

    # --------------------------------------------------------------
    # MAIN DECISION LOGIC
    # --------------------------------------------------------------
    # 1. Cooldown active -> can only move/hold
    if cooldown > 0:
        # move towards formation position if idle, else press
        if role == "GK":
            # stay near goal line
            gk_y = max(field["goal_top"], min(field["goal_bottom"], my_pos[1]))
            target = (defending_goal_x, gk_y)
            return active_move(target)
        # loose ball/opponent possession -> pressure
        if ball_carrier is None or not ball_carrier.startswith(my_team):
            if ball_carrier is None:
                return active_move(ball_pos)
            else:
                opponent_pos = get_player_by_id(ball_carrier)["position"]
                return active_move(opponent_pos)
        # teammate has ball -> support
        return active_move(formation_influence())
    # 2. Player has ball (and no cooldown)
    if player_state["has_ball"]:
        # GK always clear immediately
        if role == "GK":
            # pass to nearest open teammate
            teammates = get_teammates()
            if teammates:
                best = min(teammates, key=lambda t: dist(my_pos, t["position"]))
                target = best["position"]
                power = min(20, max(10, dist(my_pos, target) * 0.5))
                return Pass(target_pos=target, power=power)
            return Hold()
        # shooting evaluation
        can_shoot, s_angle, s_power = should_shoot()
        if can_shoot:
            # also check if there's an immediate better pass? But shooting is priority when close.
            return Shoot(angle=s_angle, power=s_power)
        # passing evaluation
        pid, tpos, ppower = choose_pass()
        if pid is not None:
            return Pass(target_pos=tpos, power=ppower)
        # if under heavy pressure (opponent within 2m) and good dribbling, try to move past
        opp, opp_dist = nearest_opponent(my_pos, 2.0)
        if opp is not None and player_state["dribbling"] > 12:
            # try dribble forward (away from opponent)
            away_x = 1.0 if my_team == "team_a" else -1.0
            away_y = (my_pos[1] - opp["position"][1]) * 0.5
            return Move(dx=away_x, dy=away_y, speed=1.0)
        # Otherwise hold or dribble toward goal
        target = (attacking_goal_x, goal_mid_y)
        return active_move(target)
    # 3. Player does not have ball
    # Loose ball: go get it
    if ball_carrier is None:
        return active_move(ball_pos)
    # Opponent has ball
    if not ball_carrier.startswith(my_team):
        tackle_id = get_tackle_target()
        if tackle_id is not None:
            return Tackle(target_player_id=tackle_id)
        # move towards carrier to press
        carrier_pos = get_player_by_id(ball_carrier)["position"]
        return active_move(carrier_pos)
    # Teammate has ball: position support
    # If I'm FWD, make a run towards attacking goal, slightly ahead
    if role == "FWD":
        run_x = attacking_goal_x + (-0.1 * field["width"] if my_team == "team_b" else 0.1 * field["width"])
        run_y = goal_mid_y + (my_pos[1] - goal_mid_y) * 0.7
        return active_move((run_x, run_y))
    if role == "MID":
        # stay between ball and goal, slightly behind ball carrier
        support_x = (my_pos[0] + ball_pos[0]) / 2
        support_y = (my_pos[1] + ball_pos[1]) / 2
        return active_move((support_x, support_y))
    # DEF: hold shape, but push up if team phase is attacking
    if game_state["team_phase"][my_team] == "attacking":
        # push up towards midfield line
        push_x = field["width"] * 0.6 if my_team == "team_a" else field["width"] * 0.4
        return active_move((push_x, formation_influence()[1]))
    return active_move(formation_influence())

These are the detailed match statistics. By the numbers, DeepSeek actually had more possession and more shots, but Opus's dribble count was far higher — two very different styles of play, producing wildly different results.

DeepSeek's code is not short, yet it performs worse than Opus's smaller strategy. Dig into the code and you may find the deeper reasons.
AgentPitch is a personal side project for my LLM-agent research. It is meant to be a clean, well-tested sandbox for studying how coding-capable AI agents behave when dropped into a real problem with real feedback.
There are plenty of directions to extend it: plugging in real World Cup player data; adding a referee agent; building an arena where strategies from different providers compete across seasons. The arena-mode UI already exists and was designed for exactly these experiments. Some real rules are still missing — fouls, offside, substitutions — and may be added later.
If you work on AI agents or LLM evaluation, or if, like me, you once gave far too much of your youth to Championship Manager, the code is out there — come try it. You could even write a strategy of your own and test your managerial skills against the top LLMs.
The World Cup is about two months away. But you can host your own LLM cup with AgentPitch right now.
Updated: 2026-05-05