Army Robots Hunt Tanks in Project Convergence


An experimental Army robot, nicknamed Origin, at the Project Convergence exercise at Yuma Proving Ground, Ariz.

WASHINGTON: A pair of unprepossessing robots, looking more like militarized golf carts than Terminators, trundle across the Yuma Desert, part of the Army’s Project Convergence exercise on future warfare.

Like human troops, the machines take turns covering each other as they advance. One robot finds a safe spot, stops, and launches the tethered mini-drone it carries to look over the next ridgeline while the other bot advances; then they switch off.
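That alternating advance is the classic infantry technique of bounding overwatch: one element halts and watches while the other moves, then they swap. A hypothetical sketch of the pattern as a two-robot state loop (the Army’s actual control software is not public, and the names and waypoints here are invented for illustration):

```python
# Hypothetical sketch of bounding overwatch for two robots: one halts
# and observes (launching its drone) while the other advances; then
# the roles swap. Purely illustrative — not the Army's software.
def bounding_overwatch(waypoints, steps):
    roles = {"A": "overwatch", "B": "bounding"}
    log = []
    for i in range(steps):
        bounder = next(n for n, r in roles.items() if r == "bounding")
        watcher = next(n for n, r in roles.items() if r == "overwatch")
        log.append(f"{watcher} halts, launches drone; "
                   f"{bounder} advances to {waypoints[i % len(waypoints)]}")
        # Swap roles for the next bound.
        roles[bounder], roles[watcher] = "overwatch", "bounding"
    return log
```

Each iteration produces one “bound,” with the overwatching robot covering its partner before the roles reverse.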


Pegasus mini-drone carried on the Origin robot.

Their objective — a group of buildings on the Army’s Yuma Proving Ground, a simulated town for urban combat training. As one robot held back to relay communications to its distant human overseers, the other moved into the town – and spotted “enemy” forces. With human approval, the robot opened fire.

Then the robot’s onboard Aided Target Recognition (ATR) algorithms identified another enemy, a T-72 tank. But this target was too far away for the robot’s built-in weapons to reach. So the bot uploaded the targeting data to the tactical network and – again, with human approval – called in artillery support.

“That’s a huge step, Sydney,” said Brig. Gen. Richard Ross Coffman, the Project Convergence exercise director. “That computer vision… is nascent, but it is working.”

Algorithmic target recognition and computer vision are critical advances over most current military robots, which aren’t truly autonomous but merely remote-controlled: The machine can’t think for itself; it just relays camera feeds back to a human operator, who tells it exactly where to go and what to do.

That approach, called teleoperation, does let you keep the human out of harm’s way, making it good for bomb squads and small-scale scouting. But it’s too slow and labor-intensive to employ on a large scale. If you want to use lots of robots without tying down a lot of people micromanaging them, you need the robots to make some decisions for themselves – although the Army emphasizes that the decision to use lethal force will always be made by a human.


Brig. Gen. Richard Ross Coffman

So Coffman, who oversees the Robotic Combat Vehicle and Optionally Manned Fighting Vehicle programs, turned to the Army’s Artificial Intelligence Task Force at Carnegie Mellon University. “Eight months ago,” he told me, “I gave them the challenge: I want you to go out and sense targets with a robot — and you have to move without using LIDAR.”

LIDAR, which uses low-powered laser beams to detect obstacles, is a common sensor on experimental self-driving cars. But, Coffman noted, because it’s actively emitting laser energy, enemies can easily detect it.

So the robots in the Project Convergence experiment, called “Origin,” relied on passive sensors: cameras. That meant their machine vision algorithms had to be good enough to interpret the visual imagery and deduce the relative locations of potential obstacles, without being able to rely on LIDAR or radar to measure distance and direction precisely. That may seem simple enough to humans, whose eyes and brain benefit from a few hundred million years of evolution, but it’s a radical feat for robots, which still struggle to distinguish, say, a shallow puddle from a dangerously deep pit.
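The article doesn’t say which camera-only ranging technique Origin uses, but one standard passive approach is stereo vision: two cameras a known distance apart, with depth recovered from how far a feature shifts between the two images. A minimal sketch of that geometry (the numbers below are illustrative, not Origin’s actual sensor parameters):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from stereo disparity: Z = f * B / d.

    focal_px     -- camera focal length, in pixels
    baseline_m   -- separation between the two cameras, in meters
    disparity_px -- horizontal pixel shift of a feature between images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 0.5 m baseline, 10 px disparity
# gives a range of 50 m to the feature.
print(stereo_depth(1000.0, 0.5, 10.0))  # 50.0
```

The hard part in practice isn’t this formula but reliably matching the same feature in both images — which is exactly where the machine-vision algorithms come in.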

“Just with machine vision, they were able to move from Point A to Point B,” Coffman said. But the Army doesn’t just want robots that can find their way around: It wants them to scout for threats and targets – without a human having to constantly stare at the sensor feed.

That’s where Aided Target Recognition comes in. (ATR also stands for Automated Target Recognition, but the Army doesn’t like the implication that the software would replace human judgment, so it consistently uses Aided instead.)


Data from the Origin robot and drone updates handheld tactical maps.

Recognizing targets is another big challenge. Sure, artificial intelligence has gotten scarily good at identifying individual faces in photos posted on social media. But the private sector hasn’t invested nearly as much in, say, telling the difference between an American M1 Abrams tank and a Russian-made T-72, or between an innocent Toyota pickup and the same truck upgunned as a guerrilla “technical” with a heavy machinegun in the back. And the Army needs to be able to tell enemy from friendly from civilian in messy real-world combat zones – and not only from clear overhead surveillance shots, but from the ground, against troops trained to use camouflage and cover to break up easily recognizable silhouettes.

“Training algorithms to identify vehicles by type, it’s a huge undertaking,” Coffman told me. “We’ve collected and labeled over 3.5 million images” so far to use for training machine-learning algorithms, he said – and that labeling requires trained human analysts to look at each picture and tell the computer what it shows: “That’s someone sitting there and going, ‘that’s a T-72; that’s a BMP,’” etcetera ad nauseam, he said.
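Labeling pipelines like the one Coffman describes typically record each analyst’s judgment in a manifest that pairs every image with its assigned class. A hypothetical minimal version of such a manifest (filenames and format are invented, not the Army’s actual data):

```python
import csv
import io

# Hypothetical label manifest: one row per image, with the class a
# human analyst assigned ("that's a T-72; that's a BMP").
rows = [
    ("frame_000123.jpg", "T-72"),
    ("frame_000124.jpg", "BMP"),
    ("frame_000125.jpg", "civilian_pickup"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["image", "label"])  # header row
writer.writerows(rows)
manifest = buf.getvalue()
print(manifest)
```

At 3.5 million rows, the bottleneck is not the file format but the human time behind every label.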

But each individual robot or drone doesn’t need to carry those millions of images in its own onboard memory: It just needs the “classifier” algorithms that result from running those images through machine-learning systems. Because those algorithms themselves don’t take up a ton of memory, it’s possible to run them on a computer that fits easily on the individual bot.
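The reason the robot can carry the classifier but not the training set: a trained model is just a fixed set of weights, orders of magnitude smaller than the images it was trained on. A toy illustration using a linear classifier with made-up class names and random weights (real ATR models are deep networks, but the footprint principle is the same):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: feature vector -> vehicle class.
# The deployed artifact is just these weights, not the training images.
CLASSES = ["T-72", "BMP", "M1_Abrams"]  # illustrative labels
N_FEATURES = 128
weights = rng.standard_normal((N_FEATURES, len(CLASSES)))

def classify(features):
    # Score each class and return the best match.
    scores = features @ weights
    return CLASSES[int(np.argmax(scores))]

# Features extracted from one camera frame (random here).
features = rng.standard_normal(N_FEATURES)
print(classify(features))

# Onboard footprint of this toy model: 128 * 3 float64 weights.
print(weights.nbytes)  # 3072 bytes — trivial next to millions of images
```

The gap only widens for real networks: even a model with tens of millions of parameters fits in a few hundred megabytes, while 3.5 million labeled images run to terabytes.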

“We’ve proven we can do that with a tethered or untethered UAV. We’ve proven we can do that with a robot. We’ve proven we can do that on a vehicle,” Coffman said. “We can identify the enemy by type and location.”

“That’s all happening on the edge,” he emphasized. “This isn’t having to go back to some mainframe [to] get processed.”

The Army’s experimental “Origin” robot during Project Convergence

In other words, the individual robot doesn’t have to constantly transmit real-time, high-res video of everything it sees to some distant human analyst or AI master brain. Sending that much data back and forth is too big a strain on low-bandwidth tactical networks, which are often disrupted by terrain, technical glitches, and enemy jamming. Instead, the robot can identify the potential target itself, with its onboard AI, and just transmit the essential bits – things like the type of vehicles spotted, their numbers and location, and what they’re doing.

“You want to reduce the amount of information that you pass on the network to a tweet, as small as possible, so you’re not clogging the pipes,” Coffman told me.
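A hypothetical sketch of what such a tweet-sized target report might look like as a structured message (the field names and values are invented; the actual Army message formats are not public):

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical compact target report — the "tweet-sized" message a robot
# could send instead of streaming raw video over the tactical network.
@dataclass
class TargetReport:
    vehicle_type: str  # classifier output, e.g. "T-72"
    count: int
    lat: float
    lon: float
    activity: str      # e.g. "stationary", "moving_north"

report = TargetReport("T-72", 1, 32.84, -114.39, "stationary")
payload = json.dumps(asdict(report)).encode()
print(len(payload))  # on the order of 100 bytes
```

A live video feed runs to megabits per second; a report like this is a few hundred bits, which is the difference between clogging a jammed tactical link and slipping through it.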

But before the decision is made to open fire, he emphasized, a human being has to look at the sensor feed long enough to confirm the target and give the order to engage.

“There’s always a human that is looking at the sensor image,” Coffman said. “Then the human decides, ‘yes, I want to prosecute that target.’”

“Could that be done automatically, without a human in the loop?” he said. “Yeah, I think it’s technologically feasible to do that. But the United States Army is an ethics-based organization. There will be a human in the loop.”

Source: Breaking Defense
