This week, the Defense Innovation Unit (DIU), the division of the U.S. Department of Defense (DoD) that awards emerging technology prototype contracts, published a first draft of a whitepaper outlining “responsible … guidelines” that establish processes intended to “avoid unintended consequences” in AI systems. The paper, which includes worksheets for system planning, development, and deployment, is based on DoD ethics principles adopted by the Secretary of Defense and was written in collaboration with researchers at Carnegie Mellon University’s Software Engineering Institute, according to the DIU.
“Unlike most ethics guidelines, [the guidelines] are highly prescriptive and rooted in action,” a DIU spokesperson told VentureBeat via email. “Given DIU’s relationship with private sector companies, the ethics will help shape the behavior of private companies and trickle down the thinking.”
Launched in March 2020, the DIU’s effort comes as corporate defense contracts, particularly those involving AI technologies, have come under increased scrutiny. When news emerged in 2018 that Google had contributed to Project Maven, a military AI project to develop surveillance systems, thousands of employees at the company protested.
For some AI and data analytics companies, like Oculus cofounder Palmer Luckey's Anduril and Peter Thiel's Palantir, military contracts have become a top source of revenue. In October, Palantir won most of an $823 million contract to provide data and big data analytics software to the U.S. Army. And in July, Anduril said that it received a contract worth up to $99 million to supply the U.S. military with drones aimed at countering hostile or unauthorized drones.
Machine learning, computer vision, and facial recognition vendors including TrueFace, Clearview AI, TwoSense, and AI.Reverie also have contracts with various U.S. Army branches. And in the case of Maven, Microsoft and Amazon, among others, have taken Google's place.
AI development guidance
The DIU guidelines recommend that companies start by defining tasks, success metrics, and baselines “appropriately,” identifying stakeholders and conducting harms modeling. They also require that developers address the effects of flawed data, establish plans for system auditing, and “confirm that new data doesn’t degrade system performance,” primarily through “harms assessment[s]” and quality control steps designed to mitigate negative impacts.
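One of those quality control steps — confirming that new data doesn't degrade system performance — can be illustrated with a minimal sketch. The function names, toy model, and tolerance threshold below are illustrative assumptions, not part of the DIU worksheets:

```python
# Minimal sketch of one guideline step: checking that newly collected
# data does not degrade system performance relative to an established
# baseline before redeployment. All names and thresholds here are
# illustrative assumptions, not drawn from the DIU whitepaper.

def accuracy(model, dataset):
    """Fraction of (input, label) examples the model labels correctly."""
    correct = sum(1 for x, label in dataset if model(x) == label)
    return correct / len(dataset)

def passes_regression_gate(model, baseline_set, new_set, tolerance=0.02):
    """Return True if performance on new data stays within `tolerance`
    of the metric measured on the baseline dataset."""
    baseline_score = accuracy(model, baseline_set)
    new_score = accuracy(model, new_set)
    return (baseline_score - new_score) <= tolerance

# Toy model and data purely for illustration.
model = lambda x: x >= 0  # classifies non-negative numbers as True
baseline = [(1, True), (-1, False), (2, True), (-3, False)]
new_data = [(5, True), (-2, False), (0, True), (-4, False)]

print(passes_regression_gate(model, baseline, new_data))
```

In practice such a gate would wrap whatever success metric was defined during system planning, and a failure would trigger the harms assessment and auditing steps the worksheets describe rather than a silent redeploy.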
The guidelines aren’t likely to satisfy critics who argue that any guidance the DoD offers is paradoxical. As MIT Tech Review points out, the DIU says nothing about the use of autonomous weapons, which some ethicists and researchers as well as regulators in countries including Belgium and Germany have opposed.
But Bryce Goodman at the DIU, who coauthored the whitepaper, told MIT Tech Review that the guidelines aren’t meant to be a cure-all. For example, they can’t offer universally reliable ways to “fix” shortcomings such as biased data or inappropriately selected algorithms, and they might not apply to systems proposed for national security use cases that have no route to responsible deployment.
Studies indeed show that bias mitigation practices like those the whitepaper recommends aren't a panacea when it comes to ensuring fair predictions from AI models. Bias in AI also doesn't arise from datasets alone. Problem formulation, or the way researchers fit tasks to AI techniques, can also contribute, as can other human-led steps throughout the AI deployment pipeline, such as dataset selection and preparation and architectural differences between models.
Regardless, the work could change how AI is developed by the government if the DoD’s guidelines are adopted by other departments. While NATO recently released an AI strategy and the U.S. National Institute of Standards and Technology is working with academia and the private sector to develop AI standards, Goodman told MIT Tech Review that he and his colleagues have already given the whitepaper to the National Oceanic and Atmospheric Administration, the Department of Transportation, and ethics groups at the Department of Justice, the General Services Administration, and the Internal Revenue Service.
The DIU says that it’s already deploying the guidelines on a range of projects covering applications including predictive health, underwater autonomy, predictive maintenance, and supply chain analysis. “There are no other guidelines that exist, either within the DoD or, frankly, the United States government, that go into this level of detail,” Goodman told MIT Tech Review.
For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading,
AI Staff Writer