
CONSORTIUM LEADERSHIP
INAUGURAL MEETING
The Responsible AI Leadership Consortium held a significant meeting on April 21st, 2023, in New York City at the James Hotel.
The event centered on two panel discussions: one addressing the governance, regulatory, and policy aspects of artificial intelligence (AI), and the other exploring the implications and applications of generative AI and large language models (LLMs).
The participants hailed from diverse sectors, including technology, media, research and academia, banks and other financial services companies, market intelligence firms, governance and regulation, and the venture capital community.

There was a strong consensus on the need for AI to have “guardrails and safety valves.” Participants emphasized the importance of corporate responsibility in self-regulating AI, particularly in terms of data sources, potential biases, and continuous monitoring of machine learning models.
The group recognized the slow pace of regulatory action in Washington, D.C., and identified a need for industry expertise to help craft comprehensive AI regulations.
Success in Responsible AI was seen to lie not in regulating the technology itself but in regulating its application within specific sectors, such as the military or healthcare. This approach can lead to improved risk management and increased public trust.
Enterprises were urged to audit their data and AI systems proactively, emphasizing quality data, understanding model performance, and managing algorithmic risk to enhance product quality, innovation, and market position.
Participants also highlighted several key points:
- The necessity of ethical AI as part of data governance.
- The importance of testing and defining AI guardrails.
- The role of AI model risk in non-financial risk groups.
- The potential of LLMs in leveraging private data for organizational benefit.
- The importance of explainability in AI, especially for regulatory compliance and trust building.