Operators Become Supervisors
Where AI and robotics converge first.
Software developers stopped writing code. They started directing agents that write code.
The same shift is coming to battlefield robotics. But we’re not there yet. The current state is mostly teleoperation—one human, one robot, direct control. The interesting question is what happens when agentic AI merges with the robotic systems already being deployed at scale.
I spent years building military robots at iRobot—PackBot, FirstLook, systems soldiers actually used in theater. Later at Charles River Analytics, I worked on autonomous ground vehicles, SLAM, computer vision. The technical trajectory was obvious then. What’s changed is the software.
Where We Are Now
Ukraine will field 30,000 unmanned ground vehicles by the end of 2026. That’s the world’s first robotized ground army at scale. But it’s not autonomous—not yet.
The robot types already deployed:
Machine gun robots like the Droid TW 12.7 mount heavy weapons on tracked platforms. One held a frontline position solo for 45 days in late 2025. The AI assists with turret stabilization, target tracking, and ballistic calculation. But a human operator was piloting it remotely the entire time. The decision to fire always remains with the human.
Bomb robots are disposable carriers with 5-6kg of explosives, costing up to $5,000 each. They destroy fortified positions. Some carry FAB-250 aerial bombs—250kg of ordnance on tracks. Human drives it in, human triggers the detonation.
Mortar robots like the Ratel S carry mortar tubes and anti-tank devices. Remote-controlled fire. Human decides when to engage.
Logistics robots handle resupply under fire. Around Pokrovsk, 90% of supplies now reach front line positions via UGV. Each robot survives 7-8 trips on average before getting hit. At $2,000-2,300 per mission, that’s cheaper than the alternatives—and no soldier dies delivering ammunition.
Drone swarms are further along. Ukraine’s Swarmer system puts 3 humans in control of 8-25 drones. Over 82,000 missions with this setup. The human designates target areas. The drones decide among themselves who takes the shot—but the human decided where and whether to engage.
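The "drones decide among themselves" step is a distributed task-allocation problem. The actual Swarmer protocol isn't public, so here's a minimal sketch of one common approach—a greedy auction where each drone bids its proximity to a human-designated target and the best bid wins. Every name here (`Drone`, `allocate_targets`, the bid function) is hypothetical.

```python
from dataclasses import dataclass
import math

@dataclass
class Drone:
    drone_id: str
    x: float
    y: float
    munitions: int

def allocate_targets(drones, targets):
    """Greedy auction over human-designated targets.

    Each available drone "bids" its distance to the target; the closest
    drone wins and expends one munition. Illustrative only—real swarm
    allocation handles comms loss, re-bidding, and conflicting claims.
    """
    assignments = {}
    available = {d.drone_id: d for d in drones if d.munitions > 0}
    for tx, ty in targets:
        if not available:
            break  # no shooters left; target goes unassigned
        best_id = min(
            available,
            key=lambda i: math.hypot(available[i].x - tx, available[i].y - ty),
        )
        assignments[(tx, ty)] = best_id
        winner = available[best_id]
        winner.munitions -= 1
        if winner.munitions == 0:
            del available[best_id]  # out of munitions, out of the auction
    return assignments
```

The human's role survives intact in this scheme: the target list is the input, set by the operator. The auction only decides *which* drone takes *which* shot.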
Russia has run coordinated multi-robot attacks. At Avdiivka, groups of robotic platforms carried out bombardments using hundreds of grenades. Multiple robots, human-coordinated. Still teleoperation at the core.
The pattern: AI assists, humans decide. “Autonomy is decades away from reliably driving vehicles in the chaos of a constantly changing battlefield”—that’s the current industry consensus.
The Transition Happening
The shift from 1:1 to 1:few is real. Three operators managing 25 drones is an 8x multiplier.
Defensive autonomy is pushing further. Ukraine’s Sky Sentinel turret shot down a Shahed drone without human input. Deploy it, feed it radar data, it handles the rest—detects, tracks, calculates trajectory, fires. The US Bullfrog system costs $10 per engagement, trained on thousands of flight profiles. Point it at a sector, and that sector becomes an autonomous kill zone.
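The "detects, tracks, calculates trajectory, fires" chain rests on textbook fire-control math. As a sketch of the trajectory-calculation step only: given a tracked target's position and velocity, solve for where to aim so the round and the target arrive at the same point. This is the standard constant-velocity lead solution, not anything specific to Sky Sentinel or Bullfrog; real systems add ballistics, drag, and sensor noise.

```python
import math

def intercept_solution(target_pos, target_vel, shot_speed):
    """Constant-velocity lead calculation for a turret at the origin.

    Solves |target_pos + target_vel * t| = shot_speed * t for the earliest
    positive intercept time t, then returns (aim_point, t). Returns None
    when no intercept exists (target outruns the round).
    """
    px, py = target_pos
    vx, vy = target_vel
    a = vx * vx + vy * vy - shot_speed * shot_speed
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py
    if abs(a) < 1e-9:  # target speed equals shot speed: equation is linear
        if abs(b) < 1e-9:
            return None
        t = -c / b
        if t <= 0:
            return None
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None
        roots = [(-b - math.sqrt(disc)) / (2 * a),
                 (-b + math.sqrt(disc)) / (2 * a)]
        positive = [t for t in roots if t > 0]
        if not positive:
            return None
        t = min(positive)  # earliest feasible intercept
    return (px + vx * t, py + vy * t), t
```

The point of the sketch: once detection and tracking are automated, the fire decision is a root of a quadratic. That is why a sector can become an autonomous kill zone at $10 per engagement.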
This is the artillery problem returning. If you’re in the target area, tough luck. The kill zone doesn’t check IDs. Old-school artillery worked this way—don’t be in the grid square when rounds land. Now it’s AI turrets creating persistent denial zones. Anything that moves gets engaged.
But offensive autonomy—robots deciding to engage humans—remains human-in-the-loop. For now.
What Agentic AI Changes
Put a compute module in each bomb robot. Not a dumb controller—an actual reasoning system. Give it objectives: this target area, this time window, these rules of engagement. Launch 500 of them, coordinated.
While connected, agentic AI enables capabilities teleoperation can’t match. Every robot shares situational awareness. The swarm understands the environment collectively—obstacles, threats, opportunities—faster than any human command structure can process. Coordinated maneuvers happen in milliseconds. The human operator sees a unified picture and directs at the formation level.
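Collective situational awareness is, at bottom, a sensor-fusion problem: many robots report overlapping detections, and the formation needs one deduplicated picture. A minimal sketch, assuming a simple grid-cell merge (real multi-robot fusion uses tracks, timestamps, and covariance; `fuse_observations` and its report format are invented here for illustration):

```python
def fuse_observations(reports, cell=10.0):
    """Merge per-robot detections into one shared picture.

    Each report is (robot_id, x, y, kind). Detections of the same kind
    that fall in the same grid cell are treated as one object, so the
    formation sees each obstacle or threat once, with a list of which
    robots confirmed it.
    """
    fused = {}
    for robot_id, x, y, kind in reports:
        key = (kind, round(x / cell), round(y / cell))
        fused.setdefault(key, []).append(robot_id)
    return [
        {"kind": kind, "cell": (cx * cell, cy * cell), "seen_by": ids}
        for (kind, cx, cy), ids in fused.items()
    ]
```

Two robots glimpsing the same threat from different angles collapse into a single confirmed contact—the unified picture the human operator directs against.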
Then comms get jammed. And they will.
This is where autonomy becomes necessary, not optional. Like soldier groups in an assault—coordinated at the start, then chaos. Jamming. Smoke. Infrequent comms windows. The robots need to continue with last-known objectives, operate independently, take opportunities the original plan couldn’t anticipate.
A sky drone could serve as local coordinator when the main link goes down. Ground units take objectives from it. When even that link fails, each robot falls back to autonomous execution within the constraints it was given. Complete the mission. Return if possible. Don’t engage friendlies. The human set the parameters; the machine handles execution in the chaos.
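The degradation chain described here—main link, then airborne relay, then independent execution within human-set constraints—is naturally a small state machine. A hypothetical sketch (mode names and the `may_fire` policy are mine, not any fielded system's):

```python
from enum import Enum, auto

class ControlMode(Enum):
    DIRECT = auto()      # main link up: human directs at formation level
    RELAY = auto()       # main link down: take objectives from sky-drone relay
    AUTONOMOUS = auto()  # fully cut off: execute last-known objectives

def select_mode(main_link_up, relay_link_up):
    """Degrade control gracefully as links fail."""
    if main_link_up:
        return ControlMode.DIRECT
    if relay_link_up:
        return ControlMode.RELAY
    return ControlMode.AUTONOMOUS

def may_fire(mode, human_authorized, target_is_friendly, within_time_window):
    """The human-set constraints bind in every mode."""
    if target_is_friendly or not within_time_window:
        return False  # hard limits: never friendlies, never outside the window
    if mode is ControlMode.AUTONOMOUS:
        return True   # engagement pre-delegated within those limits
    return human_authorized  # while connected, a human still pulls the trigger
```

Note what the structure encodes: autonomy isn't a switch the robot flips, it's the bottom rung of a ladder the robot is pushed down by jamming—and the rules of engagement travel with it.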
There’s a Star Trek Voyager episode—Dreadnought—where B’Elanna is trapped inside an autonomous missile still executing objectives from a war that ended years ago. The show played it as horror. The reality is we’re building toward exactly that capability. Weapons that execute without persistent human control. The “off switch” problem becomes very real when the whole point is operating in environments where you can’t reach them.
Why Battlefield First
This is the pattern. Military pushes technology first, commercial follows.
The internet came from ARPANET. GPS was military navigation. Drones were surveillance platforms before they were delivery vehicles. Radar, jet engines, nuclear power—major innovations come through military applications first, then diffuse to commercial use.
Conflicts accelerate iteration. Life-or-death stakes. Massive funding. The pressure to adopt is absolute—whoever doesn’t loses to whoever does. Ethical debates happen after adoption, not before. They always have.
Jamming forces autonomy in ways commercial applications don’t face. You can’t rely on constant connectivity when the adversary is actively denying it. Amazon delivery drones can phone home constantly. Military robots need to function when cut off. That requirement drives capability faster than any commercial roadmap.
Ukraine produced 4.5 million drones in 2025. The iteration cycles are measured in weeks. The agentic AI revolution will hit battlefield robotics before it hits commercial robotics. The convergence happens there first. Then it comes home.
The Framework
The pattern repeats:
Software developers: wrote every line → direct agents that write code → set architecture and approve.
Battlefield operators: control each robot directly → direct swarms within parameters → set objectives for autonomous formations.
The human doesn’t exit. The human moves up. Individual control becomes group direction becomes strategic architecture.
Operators become supervisors. Supervisors become architects. The machines handle everything else.