A $20 million grant from the National Science Foundation (NSF) will support the creation of a new artificial intelligence research institute aimed at developing AI assistants capable of trustworthy and context-aware interactions. The initiative, known as the AI Research Institute on Interaction for AI Assistants (ARIA), is led by Brown University and includes partner institutions The University of New Mexico (UNM), the Santa Fe Institute, Colby College, Dartmouth College, New York University, Carnegie Mellon University, the University of California campuses at Berkeley and San Diego, and Data & Society.
Melanie Moses, professor of computer science in UNM’s School of Engineering, and Sonia Gipson Rankin, professor in the School of Law, will head UNM’s involvement. Their work will focus on building and evaluating AI systems that understand human reasoning and respect community standards while adhering to principles of justice.
“The law is how we address conflicts in our society, but it is difficult for the law to keep up with the rapid pace of change in computing and AI. In this project we have the opportunity to design trustworthy AI using computational methods, while considering the social and legal implications from the start,” Moses said.
Gipson Rankin highlighted the importance of integrating multiple disciplines into AI development: “Integrating law, computer science, and a range of other disciplines is essential to developing AI systems that are not only innovative, but also trustworthy, rights-respecting, and aligned with the public interest. In fields like mental health, where trust, privacy, and ethical responsibility are paramount, we have a great opportunity to design AI that truly serves people. Moving forward requires creativity, conscience, and collaboration. This is exactly the kind of work that UNM does so very well.”
Earlier this year UNM launched a Level 1 Grand Challenge team focused on developing provably trustworthy AI systems. Moses and Gipson Rankin lead this team along with Stephanie Moore from Organization, Information & Learning Sciences. This group collaborates across UNM to bridge theoretical models with real-world deployment challenges for trustworthy AI.
Ellie Pavlick from Brown University emphasized that creating safe AI systems for sensitive areas like mental health care demands more advanced capabilities than current chatbots offer. “Any AI system that interacts with people, especially those who may be in states of distress or other vulnerable situations, needs a strong understanding of the human it’s interacting with, along with a deep causal understanding of the world and how the system’s own behavior affects that world,” Pavlick said. “At the same time, the system needs to be transparent about why it makes the recommendations that it does in order to build trust with the user. Mental health is a high-stakes setting that embodies all the hardest problems facing AI today. That’s why we’re excited to tackle this and figure out what it takes to get these things absolutely right.”
Pavlick noted that ARIA’s efforts will include educational programs spanning K-12 through professional levels. These include working with Bootstrap, a computer science curriculum developed at Brown, to help train teachers nationwide. Summer programs will also bring students to ARIA campuses for hands-on research experience.
The urgency behind this work stems from the increasing use of commercial AI chatbots in mental health settings; there is evidence that users seek relationship advice and mental well-being information from tools like ChatGPT.
“The work we’ll be doing on trust, safety and responsible AI will hopefully address immediate safety concerns with these systems — for example, developing safeguards against responses that reinforce delusions or unempathetic responses that could increase someone’s distress,” Pavlick said. “We need short-term solutions to avoid harms from systems already in wide use, paired with long-term research to fix these problems where they originate.”
Today’s large language models lack an internal model of reality or intuitive understanding of users’ emotional states—limitations ARIA seeks to overcome through multidisciplinary collaboration involving legal scholars and philosophers as well as technical experts.
“You don’t just want to take for granted that any system that you can build should exist because not all of them will have a net benefit,” Pavlick said. “So we’ll be addressing questions about what systems should even be built and which should not.”
Pavlick added: “We’re addressing this critical alignment question of how to build technology that is ultimately good for society. These are extremely hard problems in AI in general that happen to have a particularly pointed use case in mental health. By working toward answers to these questions, we’ll work toward making AI that’s beneficial to all.”
ARIA is one of five national institutes sharing $100 million in funding NSF announced on July 29, a public-private investment supported by Capital One and Intel that aligns with national priorities outlined in the White House’s AI Action Plan.
“Artificial intelligence is key to strengthening our workforce and boosting U.S. competitiveness,” said Brian Stone from NSF leadership. “Through the National AI Research Institutes, we are turning cutting-edge ideas and research into real-world solutions and preparing Americans to lead in the technologies and jobs of the future.”