Inside the Army’s Futuristic Test of Its Battlefield Artificial Intelligence in the Desert


YUMA PROVING GROUND, Ariz. — After weeks of work in the oppressive Arizona desert heat, the U.S. Army carried out a series of live-fire engagements Sept. 23 at Yuma Proving Ground to show how artificial intelligence systems can work together to automatically detect threats, deliver targeting data and recommend weapons responses at blazing speeds.

Set in the year 2035, the engagements were the culmination of Project Convergence 2020, the first in a series of annual demonstrations utilizing next-generation AI, network and software capabilities to show how the Army wants to fight in the future.

The Army was able to use a chain of artificial intelligence, software platforms and autonomous systems to take sensor data from all domains, transform it into targeting information, and select the best weapon system to respond to any given threat in just seconds.

Army officials claimed that these AI and autonomous capabilities have shortened the sensor-to-shooter timeline — the time it takes from when sensor data is collected to when a weapon system is ordered to engage — from 20 minutes to 20 seconds, depending on the quality of the network and the number of hops between where the data is collected and its destination.

“We use artificial intelligence and machine learning in several ways out here,” Brig. Gen. Ross Coffman, director of the Army Futures Command’s Next Generation Combat Vehicle Cross-Functional Team, told visiting media.

“We used artificial intelligence to autonomously conduct ground reconnaissance, employ sensors and then passed that information back. We used artificial intelligence and aided target recognition and machine learning to train algorithms on identification of various types of enemy forces. So, it was prevalent throughout the last six weeks.”

An Extended Range/Multipurpose (ER/MP) Unmanned Aircraft System (UAS) returns from functional testing during Project Convergence 20 at Yuma Proving Ground, Arizona, September 15, 2020. The ER/MP UAS autonomous weapons systems have the capacity to carry multiple payloads while delivering precise attacks against enemy forces, potentially preventing the necessity of a ground force presence. (Spc. Jovian Siders/U.S. Army)

Promethean Fire


The first exercise is illustrative of how the Army stacked together AI capabilities to automate the sensor-to-shooter pipeline. In that example, the Army used space-based sensors operating in low Earth orbit to take images of the battleground. Those images were downlinked to a TITAN ground station surrogate located at Joint Base Lewis-McChord in Washington, where they were processed and fused by a new system called Prometheus.

Currently under development, Prometheus is an AI system that takes the sensor data ingested by TITAN, fuses it, and identifies targets. The Army received its first Prometheus capability in 2019, although its targeting accuracy is still improving, according to one Army official at Project Convergence. In some engagements, operators were able to send in a drone to confirm potential threats identified by Prometheus.

From there, the targeting data was delivered to a Tactical Assault Kit — a software program that gives operators an overhead view of the battlefield populated with both blue and red forces. As new threats are identified by Prometheus or other systems, that data is automatically entered into the program to show users their location. Specific images and live feeds can be pulled up in the environment as needed.

All of that takes place in just seconds.

Once the Army has its target, it needs to determine the best response. Enter the real star of the show: the FIRES Synchronization to Optimize Responses in Multi-Domain Operations, or FIRESTORM.

“What is FIRESTORM? Simply put, it’s a computer brain that recommends the best shooter, updates the common operating picture with the current enemy situation and friendly situation, and missions the effectors that we want to eradicate the enemy on the battlefield,” said Coffman.

Army leaders were effusive in praising FIRESTORM throughout Project Convergence. The AI system works within the Tactical Assault Kit. Once new threats are entered into the program, FIRESTORM processes the terrain, available weapons, proximity, number of other threats and more to determine the best firing system to respond to a given threat. Operators can assess and follow through with the system’s recommendations with just a few clicks of the mouse, sending orders to soldiers or weapons systems within seconds of identifying a threat.
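The actual FIRESTORM logic is not public, but the selection step described above — weigh each available shooter against a threat and recommend the best one — can be sketched in a few lines. Everything in this example (the `Shooter` and `Threat` types, positions, ranges, readiness values and the scoring formula) is invented for illustration; it is a minimal stand-in, not the Army's algorithm.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Shooter:
    name: str
    x: float            # position, km (hypothetical grid)
    y: float
    max_range_km: float
    readiness: float    # 0.0-1.0, invented readiness measure

@dataclass
class Threat:
    name: str
    x: float
    y: float

def recommend(shooters, threat):
    """Return the in-range shooter with the best (distance, readiness) score."""
    candidates = []
    for s in shooters:
        dist = hypot(s.x - threat.x, s.y - threat.y)
        if dist <= s.max_range_km:
            # Lower score is better: prefer close, ready shooters.
            candidates.append((dist / s.max_range_km - s.readiness, s))
    return min(candidates, key=lambda c: c[0])[1] if candidates else None

shooters = [
    Shooter("ERCA battery", 0.0, 0.0, 70.0, 0.9),
    Shooter("Gray Eagle", 30.0, 5.0, 8.0, 0.6),
]
print(recommend(shooters, Threat("armor column", 40.0, 10.0)).name)  # → ERCA battery
```

A real system would fold in terrain, munition type, collateral-damage estimates and the number of other threats, as the article notes, but the shape of the decision — score every legal shooter-target pair and surface the best one for operator approval — stays the same.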

Just as important, FIRESTORM provides critical target deconfliction, ensuring that multiple weapons systems aren’t redundantly firing on the same threat. Right now, that sort of deconfliction would have to take place over a phone call between operators. FIRESTORM speeds up that process and eliminates any potential misunderstandings.
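The deconfliction step amounts to an assignment problem: given candidate shooter-target pairings, pick at most one shooter per target (and one target per shooter) so nothing is engaged twice. A greedy nearest-pair pass, shown below with entirely made-up shooter and target names, is the simplest way to illustrate it; FIRESTORM's actual method is not public.

```python
def deconflict(pairs):
    """pairs: list of (shooter, target, distance_km) candidates.
    Greedily assign closest pairs first, one shooter per target and
    one target per shooter, so no threat is redundantly engaged."""
    assignments = {}
    used_shooters = set()
    for shooter, target, dist in sorted(pairs, key=lambda p: p[2]):
        if target not in assignments and shooter not in used_shooters:
            assignments[target] = shooter
            used_shooters.add(shooter)
    return assignments

candidates = [
    ("howitzer-1", "tank-A", 12.0),
    ("howitzer-2", "tank-A", 15.0),   # redundant shot on tank-A, dropped
    ("howitzer-2", "tank-B", 9.0),
]
print(deconflict(candidates))  # → {'tank-B': 'howitzer-2', 'tank-A': 'howitzer-1'}
```

Greedy assignment is not optimal in general — a Hungarian-algorithm solver would minimize total cost — but it captures what the automated phone call replaces: a guarantee that each threat gets exactly one shooter.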

In that first engagement, FIRESTORM recommended the Extended Range Cannon Artillery. Operators approved the algorithm’s choice, and the cannon promptly fired a projectile at the target located 40 kilometers away. The process from identifying the target to sending those orders happened faster than it took the projectile to reach the target.

Perhaps most surprising is how quickly FIRESTORM was integrated into Project Convergence.

“This computer program has been worked on in New Jersey for a couple years. It’s not a program of record. This is something that they brought to my attention in July of last year, but it needed a little bit of work. So we put effort, we put scientists and we put some money against it,” said Coffman. “The way we used it is as enemy targets were identified on the battlefield — FIRESTORM quickly paired those targets with the best shooter in position to put effects on it. This is happening faster than any human could execute. It is absolutely an amazing technology.”

A new capability was tested on a Gray Eagle drone, such as the one seen in this file photo. (U.S. Army)

Dead Center

Prometheus and FIRESTORM weren’t the only AI capabilities on display at Project Convergence.

In other scenarios, an MQ-1C Gray Eagle drone was able to identify and target a threat using the on-board Dead Center payload. With Dead Center, the Gray Eagle was able to process the sensor data it was collecting, identifying a threat on its own without having to send the raw data back to a command post for processing and target identification. The drone was also equipped with the Maven Smart System and Algorithmic Inference Platform, a product created by Project Maven, a major Department of Defense effort to use AI for processing full-motion video.

According to one Army officer, the capabilities of the Maven Smart System and Dead Center overlap, but placing both on the modified Gray Eagle at Project Convergence helped them to see how they compared.

With all of the AI engagements, the Army ensured there was a human in the loop to provide oversight of the algorithms’ recommendations. When asked how the Army was implementing the Department of Defense’s principles of ethical AI use adopted earlier this year, Coffman pointed to the human barrier between AI systems and lethal decisions.

“So obviously the technology exists, to remove the human right the technology exists, but the United States Army, an ethical based organization — that’s not going to remove a human from the loop to make decisions of life or death on the battlefield, right? We understand that,” explained Coffman. “The artificial intelligence identified geo-located enemy targets. A human then said, ‘Yes, we want to shoot at that target.’”

Source: C4ISRNET
