
Responsible AI for Misinformation & Disinformation
The Responsible AI Leadership Consortium, in collaboration with the Permanent Mission of Liechtenstein to the United Nations in New York, will convene distinguished leaders from UN member states, along with public-, private-, and civil-society delegates, to address the escalating threat of misinformation and disinformation accelerated by AI technologies. The seminar will bring together experts, policymakers, and business leaders to explore solutions and new approaches to this growing challenge.
The seminar aims to foster collaboration across sectors and to develop innovative strategies and tools to combat AI-driven misinformation and disinformation, empowering governments, civil society, and businesses to protect the integrity of information in the digital age. It also seeks to inspire new solutions and partnerships that strengthen global efforts to counter misinformation and ensure responsible AI deployment.

Key recommendations included:
- Establish independent agencies to oversee AI governance with a focus on data quality; implement real-time fact-checking and transparent models to counter misinformation; and launch regional and sector-specific pilot projects to test and refine AI governance models.
- Promote data tagging, sharing, and verification to enhance AI reliability; balance synthetic and organic data for accuracy; and foster collaboration through global platforms for transparency and knowledge exchange.
- Verify AI’s energy demands and invest in renewable technologies; promote global discussions on sustainable AI, especially with input from the Global South; and support a global AI governance compact that prioritizes ethics, sustainability, and human rights.
- Ensure human oversight in AI, recognizing the limits of scalability; use synthetic data under strong governance for edge-case simulations; and promote public education on fact-checking to empower individuals to challenge misinformation.
Participants highlighted several key points:
- AI policies should balance safety with innovation.
- Governments, AI developers, and industry leaders must work together to create practical and enforceable AI guidelines.
- AI governance should be a global effort to ensure fairness and consistency.
- High-quality data and strong governance are critical for reliable AI systems.
While AI offers powerful tools, it also presents risks that require proactive oversight. Striking a balance between innovation and regulation is essential to ensure AI remains a force for good.
Moving forward, collaboration between governments, businesses, and organizations will be crucial in shaping policies that are adaptable, fair, and sustainable.