"Can rival forces collaborate to overcome the risks created by artificial intelligence?"
Last week, the first summit on artificial intelligence safety took place.
Representatives from industry, researchers, and government officials from 27 countries gathered at Bletchley Park, a country estate an hour's drive from London. The site became famous during World War II, when British mathematician Alan Turing and his team cracked the Nazis' Enigma code there. This time, the assembled group was trying to crack another code: how to deal with the risks posed by the latest advances in artificial intelligence (AI).
The Bletchley Declaration as the summit's key outcome
The most notable outcome of the meeting was the signing of the Bletchley Declaration, which emphasized the importance of international cooperation and of involving all stakeholders, as well as the responsibility of "actors developing advanced AI capabilities." Headlines focused mainly on the impressive feat: China, the European Union, and the United States signed a declaration on AI regulation even though they agree on far less in many other areas. The declaration shows that countries with very different ideologies are concerned about the potential harmful consequences of artificial intelligence systems and recognize that no single country can build the solutions alone. And so, on the same stage, U.S. Commerce Secretary Gina Raimondo stated, "Even as nations actively compete with one another, we can and must seek global solutions to global problems," while China's Vice Minister of Science and Technology Wu Zhaohui called for cooperation to mitigate the possible unintended harmful consequences of the latest AI models.
Other important agreements on AI safety
The Bletchley Declaration was not the only notable outcome of the summit.
Participation of the Global South and its limitations
The summit was also notable for including low- and middle-income countries of the Global South: Brazil, India, Indonesia, Kenya, Nigeria, the Philippines, Rwanda, and Turkey all took part in the discussions. However, participation by Global South representatives from outside government was modest at best; the attendee list was instead dominated by European and American research institutions, often funded by the same large tech conglomerates that secured a strong presence at the summit themselves.
The path to ensuring AI safety
Ensuring the safety of AI will be one of the monumental tasks of this era. Future iterations of the summit should set aside space on the agenda for working with the tech giants that hold the computational power and agenda-setting influence, and for the AI safety issues that are already affecting communities and countries today.
Source: https://www.reuters.com/technology/gpt-new-ai-summit-strategically-speaking-2023-11-02/