Extraordinary technological innovations have driven an expansion in artificial intelligence (AI) use. At the same time, they have brought little-understood risks to every sector of the economy.
Now, as part of a new consortium, University of Notre Dame researchers will help establish the advanced measurement techniques required to identify the risks associated with current AI systems and to develop new systems that are safer and more trustworthy.
The consortium, called the Artificial Intelligence Safety Institute Consortium (AISIC), was formed by the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce that works to develop standards for emerging technologies.
The consortium was formed in response to a presidential executive order issued in October 2023. The executive order stated: “Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”
“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” said U.S. Secretary of Commerce Gina Raimondo. “Through President Biden’s landmark executive order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”
“We are excited to join AISIC at a pivotal time for AI and for our society,” said Jeffrey F. Rhoads, vice president for research and professor of aerospace and mechanical engineering. “We know that to manage AI risks, we first have to measure and understand them. It is a grand challenge that neither technologists nor government agencies can tackle alone. Through this new consortium, Notre Dame researchers will have a place at the table where they can live out Notre Dame’s mission to seek discoveries that yield benefits for the common good.”
“A special focus for the consortium will be dual-use foundation models, the advanced AI systems used for a wide variety of purposes,” said Nitesh Chawla, the Frank M. Freimann Professor of Computer Science and Engineering and director of the Lucy Family Institute for Data and Society. Chawla, who was recently elected a fellow of the Association for the Advancement of Artificial Intelligence, explained, “Improving evaluation and measurement techniques will help researchers and practitioners gain a deeper understanding of AI capabilities endowed in a system, including risks and benefits. They will then be able to offer guidance for the industry leaders working to create AI that is safe, secure and trustworthy. It is a moment for human-machine teaming.”
The consortium includes more than 200 member companies and organizations that are on the front lines of developing and using AI systems, as well as the civil society and academic teams that are building the foundational understanding of how AI can and will transform our society.
These entities represent the nation’s largest companies and its innovative startups; creators of the world’s most advanced AI systems and hardware; key members of civil society and the academic community; and representatives of professions with deep engagement in the use of AI today. The consortium also includes state and local governments, as well as nonprofits. The consortium will also work with organizations from like-minded nations that have a key role to play in setting interoperable and effective safety standards around the world.
The full list of consortium participants is available on the NIST website.
To learn more about AISIC, visit www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute.
Contact: Brandi Wampler, associate director of media relations, 574-631-2632, [email protected]