Could this software help users trust machine learning decisions? – C4ISRNet

WASHINGTON – New software developed by BAE Systems could help the Department of Defense build confidence in decisions and intelligence produced by machine learning algorithms, the company claims.

In a July 14 announcement, BAE Systems said it recently delivered its new MindfuL software program to the Defense Advanced Research Projects Agency. Developed in collaboration with the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory, the software is designed to increase transparency in machine learning systems—artificial intelligence algorithms that learn and change over time as they are fed ever more data—by auditing them to provide insights into how they reach their decisions.

“The technology that underpins machine learning and artificial intelligence applications is rapidly advancing, and now it’s time to ensure these systems can be integrated, utilized, and ultimately trusted in the field,” said Chris Eisenbies, product line director of the company’s Autonomy, Control, and Estimation group. “The MindfuL system stores relevant data in order to compare the current environment to past experiences and deliver findings that are easy to understand.”

While machine learning algorithms show promise for DoD systems, determining how much users can trust their output remains a challenge. Intelligence officials have repeatedly noted that analysts cannot rely on black box artificial intelligence systems that simply produce a decision or piece of intelligence—they need to understand how the system came to that decision and what unseen biases (in the training data or otherwise) might be influencing that decision.

MindfuL is designed to help address that gap by providing more context around those outputs. For instance, the company says its program will issue statements such as: “The machine learning system has navigated obstacles in sunny, dry environments 1,000 times and completed the task with greater than 99 percent accuracy under similar conditions,” or “The machine learning system has only navigated obstacles in rain 100 times with 80 percent accuracy in similar conditions; manual override recommended.” Those types of statements can help users evaluate how much confidence they should place in any individual decision produced by the system.
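The behavior described above—comparing current conditions against a log of past runs and flagging when performance falls below a threshold—can be sketched in a few lines. This is a purely illustrative example, not BAE Systems’ actual implementation; the `Experience` record, the 90 percent override threshold, and the report wording are all assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    """Hypothetical log entry: past performance under one condition."""
    condition: str   # e.g. "sunny/dry", "rain"
    runs: int        # times the task was attempted under this condition
    successes: int   # times the task was completed correctly

def competency_report(history: list[Experience], condition: str,
                      override_below: float = 0.9) -> str:
    """Summarize past performance under `condition` in plain language,
    recommending manual override when accuracy falls below the threshold."""
    match = next((e for e in history if e.condition == condition), None)
    if match is None or match.runs == 0:
        return (f"No prior experience in {condition} conditions; "
                f"manual override recommended.")
    accuracy = match.successes / match.runs
    report = (f"The system has navigated obstacles in {condition} conditions "
              f"{match.runs} times with {accuracy:.0%} accuracy.")
    if accuracy < override_below:
        report += " Manual override recommended."
    return report

history = [Experience("sunny/dry", 1000, 995),
           Experience("rain", 100, 80)]
print(competency_report(history, "rain"))
# -> The system has navigated obstacles in rain conditions 100 times
#    with 80% accuracy. Manual override recommended.
```

The key design choice mirrored from the article is that the report is keyed to the operating condition, not to overall accuracy, so a system that is reliable in one environment is not trusted by default in another.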

This is the first release of the MindfuL software as part of a $5 million, three-year contract under DARPA’s Competency-Aware Machine Learning (CAML) program. BAE Systems plans to demonstrate the software in both simulation and prototype hardware later this year.
