Project is a M*A*S*H-up of machine learning and battlefield decision-making

Via Brandon Vigliarolo

A new DARPA initiative ultimately aims to give AI systems the same complex, rapid decision-making capabilities as military medical staff and trauma surgeons on the battlefield.

The In the Moment (ITM) program, which is currently soliciting research proposals, aims to develop the foundations of expert machine-learning models that can make difficult judgment calls – where there is no right answer – that humans can trust. This research could lead to the deployment of algorithms that help medics and other personnel make tough decisions in moments of life and death.

“DoD missions involve making many decisions rapidly in challenging circumstances and algorithmic decision-making systems could address and lighten this load on operators … ITM seeks to develop techniques that enable building, evaluating, and fielding trusted algorithmic decision-makers for mission-critical DoD operations where there is no right answer and, consequently, ground truth does not exist,” DARPA said. 

At the heart of the problem is that these sorts of AI systems need to be trained even when there is no ground truth or consensus among experts. Generals may disagree over exactly how a confrontation between two opposing units should unfold. Doctors may have differing opinions on how to treat someone. Teaching machine-learning software to work out the best course of action from these conflicting stances is non-obvious, and it is what ITM appears set up to tackle.

As DARPA put it:

ITM is taking inspiration from the medical imaging analysis field, where techniques have been developed for evaluating systems even when skilled experts may disagree on ground truth. For example, the boundaries of organs or pathologies can be unclear or disputed among radiologists. To overcome the lack of a true boundary, an algorithmically drawn boundary is compared to the distribution of boundaries drawn by human experts. If the algorithm’s boundary lies within the distribution of boundaries drawn by human experts over many trials, the algorithm is said to be comparable to human performance.
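To make that evaluation idea concrete, here is a hypothetical, heavily simplified sketch (not from DARPA's materials): reduce each radiologist's organ boundary to a single position along one scan line, then ask whether the algorithm's boundary falls inside the spread of the human experts' boundaries. The function name and the tolerance-band approach are illustrative assumptions.

```python
def within_expert_distribution(algo_pos, expert_positions, coverage=0.9):
    """Return True if algo_pos lies inside the central `coverage`
    fraction of the expert-drawn boundary positions (a crude
    stand-in for 'within the distribution of human boundaries')."""
    xs = sorted(expert_positions)
    n = len(xs)
    k = int(n * (1 - coverage) / 2)   # trim the tails symmetrically
    lo, hi = xs[k], xs[n - 1 - k]
    return lo <= algo_pos <= hi

# Five radiologists mark the same boundary at slightly different
# pixel rows; the algorithm's answer sits inside their spread.
experts = [101, 103, 104, 104, 107]
print(within_expert_distribution(103.5, experts))  # True
print(within_expert_distribution(95.0, experts))   # False
```

In a real medical-imaging evaluation the comparison would be over full 2D or 3D contours and many trials, but the logic is the same: "comparable to human performance" means landing where the humans themselves land.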

On a practical level, the program is focused on medical treatment in the field, and has two phases: part one involves small-unit triage, and part two is triage involving mass casualties.

Matt Turek, ITM’s program manager, said the plan is for an algorithmic decision-maker and human experts to each choose how to act in a given situation. Those decisions are then handed blindly to a pool of triage professionals, who have to say which decision-maker they would delegate to. 
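The blinded comparison Turek describes could be tallied with something as simple as the following sketch – a hypothetical illustration, not DARPA's actual scoring method – where each referee in the pool picks between two unlabeled decision-makers, and the algorithm's score is the fraction who chose it.

```python
def delegation_rate(picks):
    """picks: a list of 'algo' or 'human' choices made by blinded
    referees. Returns the fraction who would delegate to the algorithm."""
    return picks.count("algo") / len(picks)

# Six triage professionals each pick the decision-maker they trust,
# without knowing which response came from the machine.
picks = ["algo", "human", "algo", "algo", "human", "algo"]
print(delegation_rate(picks))  # 0.666..., i.e. two-thirds chose the algorithm
```

A score approaching what a second human expert would earn in the same blind test would be the signal that the algorithm is trusted comparably to a person.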

That’s just the testing phase. Ultimately, ITM aims to take humans out of the decision-making loop by building AIs that people will trust to make the same sorts of decisions an expert would. 

Trusting AI in a war zone

The first use case, triage in a small-unit field situation, could enable untrained soldiers to perform triage and apply trauma medicine by following an AI trained to make the same sorts of decisions as a pool of field medics. That’s a huge potential boon for the average soldier, who is often the only one available to triage and treat serious injuries during operations. 

The second case may be a harder sell for skilled surgeons, doctors, or anyone else who could have their decision-making capabilities augmented by AI. Under ITM, DARPA said autonomous triage decision makers could be “fine-tuned to a particular unit commander,” such as a trauma surgeon who “would likely be held responsible for the decisions made within [their hospital unit], including those of any autonomous system.” 

DARPA said it wants its research partners to begin their work by October 2022, and plans to work through four technical areas over the next three and a half years:

  • Developing techniques an AI can use to identify and quantify key decision-making attributes in difficult situations
  • Creating an alignment scoring system that can predict end-user trust
  • Designing and executing an evaluation program for the first two stages
  • Integrating policy and practice, including legal, moral, and ethical research into the technology

DARPA has been the source of many pieces of consumer technology, and it has no plans for the software to be locked away behind government doors. Instead, it promises to make ITM’s technology “as widely available and reused as possible.”

It encourages anyone applying to participate in ITM to embrace an unlimited-rights, open-source IP model, and said that anyone who doesn’t offer unlimited rights will have to make a convincing case for why not. ®

Via TheRegister.com
