Rapid advances in artificial intelligence (AI) are fueling online disinformation and enabling governments to ramp up censorship and surveillance, posing growing threats to human rights, a United States (US) non-profit organization said in a report published on Wednesday (4/10).
Global internet freedom declined for the 13th consecutive year, with China, Myanmar, and Iran ranking worst among the 70 countries surveyed in the Freedom on the Net report, which highlights the risks posed by easy access to generative AI technology.
AI enables governments to “improve and perfect online censorship” and digitally amplify repression, making surveillance and the creation and spread of disinformation faster, cheaper and more effective, according to Freedom House’s annual report.
“AI can be used to increase censorship, surveillance, and the creation and spread of disinformation,” said Michael J. Abramowitz, president of Freedom House. “AI advances are exacerbating the human rights crisis in cyberspace.”
The ChatGPT logo and the words Artificial Intelligence are visible in an illustration. (Photo: REUTERS/Dado Ruvic)
By some estimates, AI-generated content could soon account for 99 percent or more of all information on the internet, straining content moderation systems already struggling to cope with the flood of disinformation, technology experts say.
Many governments around the world have been slow to respond. Only a few countries have passed laws on the ethical use of AI, while at the same time justifying AI-based surveillance technologies such as facial recognition on security grounds.
Generative AI-based tools were used in at least 16 countries to manipulate information about political or social issues between June 2022 and May 2023, according to the Freedom House report. The organization added that the figure was likely an underestimate.
Meanwhile, in at least 22 countries, social media companies are required to use automated systems for content moderation to comply with censorship rules.
With at least 65 national elections scheduled for next year, including in Indonesia, India, and the US, misinformation could have a major impact, with fake news stories already surfacing from New Zealand to Turkey.
“Generative AI offers the power and scale to spread misinformation at a previously unimaginable level – it is a disinformation force multiplier,” said Karen Rebelo, deputy editor of BOOM Live, a Mumbai-based fact-checking organization.
While AI is "a military-grade weapon in the hands of irresponsible parties," in India it is political parties and their proxies that are the biggest spreaders of misinformation and disinformation, she said, and they have no interest in regulating AI.
While companies like OpenAI and Google have implemented safeguards to curb some malicious uses of AI-based chatbots, these can be easily bypassed, Freedom House said.
Even when false information is later exposed, it can "undermine public trust in the democratic process, give activists and journalists incentives to self-censor, and diminish reliable and independent reporting," the report said.
"AI-generated imagery… can also amplify existing polarization and tensions. In extreme cases, this can trigger violence against individuals or entire communities," the report added.
Despite its drawbacks, AI technology could be highly beneficial, the report says, as long as governments regulate its use, implement strong data privacy laws, and require better misinformation-detection tools and human rights protections.
For example, AI is increasingly being used to examine and analyze satellite imagery, social media posts and images to flag human rights violations in conflict zones. (ah/rs)