WASHINGTON (May 16, 2023) – The George Washington University is co-leading a multi-institutional effort supported by the National Science Foundation (NSF) that will develop new artificial intelligence (AI) technologies designed to promote trust and mitigate risks, while simultaneously empowering and educating the public.
The NSF Institute for Trustworthy AI in Law & Society (TRAILS), announced on May 4, 2023, unites specialists in AI and machine learning with systems engineers, social scientists, legal scholars, educators, and public policy experts. The multidisciplinary team will work with impacted communities, private industry, and the federal government to determine how to evaluate trust in AI, how to develop technical solutions and processes for AI that can be trusted, and which policy models best create and sustain trust.
“The TRAILS Institute is a leading example of the world-class work GW faculty are doing at the intersection of technology and the social sciences, and it reflects the university’s commitment to innovation that advances knowledge, fuels the economy, and enhances equity and quality of life for all members of our global community,” said GW President Mark Wrighton.
GW’s Provost and Executive Vice President for Academic Affairs, Christopher Bracey, echoed the president’s excitement about the new collaboration. “With engineers and computer scientists working side-by-side with legal scholars and social scientists in the policy capital of the world, TRAILS is poised to enhance the opportunities and address the risks of artificial intelligence and how it is being used in our society.”
Funded by a $20 million award from NSF, the new institute is expected to transform the practice of AI by encouraging new innovations that foreground ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.
John Lach, Dean of the GW School of Engineering and Applied Science, said, “TRAILS embodies the ‘Engineering And…’ ethos of GW Engineering – working across disciplines and engaging with communities to leverage the power of engineering and computing for societal good.”
In addition to GW, TRAILS will include faculty members from the University of Maryland, Morgan State University, and Cornell University; Hal Daumé III of the University of Maryland serves as the institute’s lead principal investigator. Additional support comes from the National Institute of Standards and Technology (NIST) and private sector organizations including Arthur AI, Checkstep, FinRegLab and Techstars.
The new institute recognizes that AI is currently at a crossroads. AI-infused systems have great potential to enhance human capacity, increase productivity, catalyze innovation, and mitigate complex problems, but today’s systems are developed and deployed in a process that is opaque to the public. As a result, those most affected by the technology have little say in how it is developed.
“I think it’s important to understand that AI can be the source of significant benefits and innovations to society, but can also cause a lot of harm,” said David Broniatowski, an associate professor of engineering management and systems engineering at GW and the lead principal investigator of TRAILS at GW. “Many of those harms are felt by people who are historically underrepresented because their concerns are not reflected in the design process.”
As an example, AI systems may be trained on datasets that reflect the values—and biases—of system designers or data labelers. People using the system may not be aware of those biases, instead believing the system’s output to be “objective.” If system output is not easily interpretable, the priorities encoded into the system may reflect values that do not align with the people who are using, or otherwise being affected by, those systems, Broniatowski said.
Given these conditions—and the fact that AI is increasingly being deployed in systems that are critical to society, such as mediating online communications, determining health care options, and offering guidelines in the criminal justice system—it has become urgent to ensure that people’s trust in AI systems matches those same systems’ level of trustworthiness.
TRAILS has identified four key research thrusts to promote the development of AI systems that can earn the public’s trust through broader participation in the AI ecosystem.
The first, known as participatory AI, advocates involving human stakeholders in the development, deployment and use of these systems. It aims to create technology in a way that aligns with the values and interests of diverse groups of people, rather than being controlled by a few experts or solely driven by profit.
The second research thrust will focus on developing advanced machine learning algorithms that reflect the values and interests of the relevant stakeholders.
Broniatowski will lead the institute’s third research thrust of evaluating how people make sense of the AI systems that are developed, and the degree to which their levels of reliability, fairness, transparency and accountability will lead to appropriate levels of trust.
Susan Ariel Aaronson, a research professor of international affairs at GW, will use her expertise in data-driven change and international data governance to lead the institute’s fourth thrust of participatory governance and trust.
“There is no trust without participation and no accountability without participation; hence we believe in a participatory approach to AI at all levels from design to deployment,” said Aaronson.
Morgan State University, Maryland’s preeminent public urban research university, will lead community-driven projects related to the interplay between AI and education, while Cornell University will advance efforts focused on how people interpret their use of AI.
Federal officials at NIST will collaborate with TRAILS in the development of meaningful measures, benchmarks, test beds and certification methods—particularly as they apply to important topics essential to trust and trustworthiness such as safety, fairness, privacy, transparency, explainability, accountability, accuracy and reliability.
“The ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio.
Today’s announcement is the latest in a series of federal grants establishing a cohort of National Artificial Intelligence Research Institutes. This recent investment in seven new AI institutes, totaling $140 million, follows two previous rounds of awards.
The NSF, in collaboration with government agencies and private sector leaders, has now invested close to half a billion dollars in the AI institutes ecosystem—an investment that expands a collaborative AI research network into almost every U.S. state.
“These institutes are driving breakthrough discoveries to achieve our country’s ambition of being at the forefront of the global AI revolution. This work would not be possible without our longstanding alliances with our academic partners, government agencies, industry leaders and AI communities at large,” said NSF Director Sethuraman Panchanathan.
###
This research is supported by the National Science Foundation (Award IIS-2229885).
Media Contacts: (GW) Caitlin Douglass, [email protected]; (NSF) 703.292.7090, [email protected]
About the National Science Foundation
The U.S. National Science Foundation is an independent federal agency that supports science and engineering in all 50 states and U.S. territories. NSF was established in 1950 by Congress to promote the progress of science; advance the national health, prosperity and welfare; and secure the national defense. The organization fulfills its mission chiefly by making grants. NSF investments account for about 25% of federal support to America’s colleges and universities for basic research: research driven by curiosity and discovery. NSF also supports solutions-oriented research with the potential to produce advancements for the American people.
About The George Washington University
Chartered on February 9, 1821, and located in the heart of Washington, D.C., the George Washington University provides unparalleled access to leading international institutions, multinational corporations, global media outlets, and the governments of 177 countries via their resident embassies. This is a singular advantage: no other university offers as much potential for international engagement within steps of its doors.