
UN Launches Recommendations For Urgent Action To Curb Harm From Spread Of Mis- And Disinformation And Hate Speech


Global Principles for Information Integrity address risks posed by advances in AI

United Nations, New York, 24 June 2024

The world must respond to the harm caused by the spread of online hate and lies while robustly upholding human rights, United Nations Secretary-General António Guterres said today at the launch of the United Nations Global Principles for Information Integrity.

Speaking one year after the launch of his report on information integrity on digital platforms, the Secretary-General put forward a framework for coordinated international action to make information spaces safer and more humane, calling it one of the most urgent tasks of our time.

Misinformation, disinformation, hate speech and other risks to the information ecosystem are fueling conflict, threatening democracy and human rights, and undermining public health and climate action. Their proliferation is now being supercharged by the rapid rise of readily available Artificial Intelligence (AI) technologies, increasing the threat to groups often targeted in information spaces, including children.

“The United Nations Global Principles for Information Integrity aim to empower people to demand their rights,” said the Secretary-General. “At a time when billions of people are exposed to false narratives, distortions and lies, these principles lay out a clear path forward, firmly rooted in human rights, including the rights to freedom of expression and opinion.”

The UN chief issued an urgent appeal to governments, tech companies, advertisers and the PR industry to step up and take responsibility for the spread and monetization of content that results in harm.

The United Nations’ own missions, operations, and priorities are compromised by the erosion of information integrity, including vital peacekeeping and humanitarian efforts. In a global UN staff survey, 80% of respondents said harmful information endangers them and the communities they serve.

The Principles are the result of wide-ranging consultations with Member States, the private sector, youth leaders, media, academia, and civil society.

The recommendations within are designed to foster healthier and safer information spaces that champion human rights, peaceful societies and a sustainable future.

The proposals include:

Governments, tech companies, advertisers, media and other stakeholders should refrain from using, supporting or amplifying disinformation and hate speech for any purpose.

Governments should provide timely access to information, guarantee a free, viable, independent, and plural media landscape and ensure strong protections for journalists, researchers and civil society.

Tech companies should ensure safety and privacy by design in all products, alongside consistent application of policies and resources across countries and languages, with particular attention to the needs of those groups often targeted online. They should elevate crisis response and take measures to support information integrity around elections.

All stakeholders involved in the development of AI technologies should take urgent, immediate, inclusive and transparent measures to ensure that all AI applications are designed, deployed and used safely, securely, responsibly and ethically, and uphold human rights.

Tech companies should scope business models that do not rely on programmatic advertising and do not prioritize engagement above human rights, privacy, and safety, allowing users greater choice and control over their online experience and personal data.

Advertisers should demand transparency in digital advertising processes from the tech sector to help ensure that ad budgets do not inadvertently fund disinformation or hate or undermine human rights.

Tech companies and AI developers should ensure meaningful transparency and allow researchers and academics access to data while respecting user privacy, commission publicly available independent audits, and co-develop industry accountability frameworks.

Governments, tech companies, AI developers and advertisers should take special measures to protect and empower children, with governments providing resources for parents, guardians and educators.

