US Vice President Kamala Harris said world leaders have a “moral, ethical and social obligation” to protect humanity from the dangers posed by artificial intelligence (AI). She pushed for the creation of a global roadmap while attending an AI summit in London. Observers agree, saying that one element that must remain is human oversight.
Artificial intelligence (AI) technology is remarkable. People feed large amounts of data into machines that can perform calculations more quickly and retain information more reliably than humans. Some systems create works of art, track drowning swimmers, save lives through better medical diagnoses, or generate strange sounds.
However, like any tool, AI depends on its user’s intentions. Some people use it to deceive, spread false information, or even harm others.
Last week, US President Joe Biden signed an executive order to create new standards, including requiring large AI developers to report the results of their security tests and other important information to the US government.
Meanwhile in London, after attending the AI Safety Summit on Wednesday (1/11), US Vice President Kamala Harris announced the establishment of the US government’s AI Safety Institute, published draft policy guidelines for government use of AI, and issued a declaration on its responsible application in the military.
“To bring order and stability amidst global technological change, I firmly believe that we must be guided by a common understanding between countries. That’s why the United States will continue to work with our allies and partners to apply existing international rules and norms to AI, and to create new rules and norms,” Harris said.
Members of the US Congress held hearings on the issue earlier this year, where industry leaders such as OpenAI CEO Sam Altman expressed concerns.
“My greatest fear is that we – this field, this technology, this industry – cause significant harm to the world. I think this could happen in various ways,” he said.
At the London summit, entrepreneur and billionaire Elon Musk, who is developing his own generative AI program, said he views AI as “one of the biggest threats” to society. He called for a “third-party referee.”
“We are here, for the first time in human history, with something that will be far smarter than ourselves. It’s not clear to me whether we can really control it. But I think we can aspire to guide it in a direction that benefits humanity. I really think this is one of the existential risks we face today, and potentially the most pressing one,” Musk said.
Analysts say government officials and the tech industry don’t need universal solutions, but rather alignment on values and, most importantly, human oversight.
Jessica Brandt, policy director of the AI and Emerging Technologies Initiative at the Brookings Institution, said: “It’s OK to use a variety of different approaches, and then, where possible, coordinate to ensure that democratic values are rooted in the systems that govern technology globally.”
In the end, human nature is both AI’s strength and its weakness. At least so far, AI appears bounded by the nature of its human creators, who are capable of both good and evil. (rd/ka)