"Can rival forces collaborate to overcome the risks created by artificial intelligence?"
Last week, the first summit on artificial intelligence safety took place.

Last week, representatives from industry, researchers, and government officials from 27 countries gathered at Bletchley Park, a country estate an hour's drive from London. The site became famous during World War II, when the British mathematician Alan Turing and his team cracked the Nazis' Enigma code there. This time, the assembled group was trying to crack a different code: how to deal with the risks posed by the latest advances in artificial intelligence (AI).

The Bletchley Declaration: the summit's key outcome

The most notable outcome of the meeting was the signing of the Bletchley Declaration. It emphasized the importance of involving all stakeholders and of international cooperation, as well as the responsibility of "actors developing advanced AI capabilities." Headlines focused mainly on the striking achievement: China, the European Union, and the United States signed a declaration on AI regulation even though they agree on far less in most other areas. The declaration shows that countries with different ideologies are concerned about the potentially harmful consequences of artificial intelligence systems and recognize that solutions cannot be built by any one country alone. Thus, on the same stage, U.S. Commerce Secretary Gina Raimondo stated, "Even as nations actively compete with one another, we can and must seek global solutions to global problems," while China's Vice Minister of Science and Technology Wu Zhaohui called for cooperation to mitigate the possible unintended harmful consequences of the latest AI models.

Important agreements on AI safety

There were also other important events at the summit.

The UK and the US signed significant bilateral agreements on joint AI safety standards. On the US side, the National Institute of Standards and Technology will collaborate with the UK's Frontier AI Taskforce. Discussions also involved the UK's National Cyber Security Centre and the US Cybersecurity and Infrastructure Security Agency, reflecting the view that AI safety is both a technical issue and a matter of national security.

Participation of the Global South and its limits

The summit was also notable for including low- and middle-income countries from the Global South: Brazil, India, Indonesia, Kenya, Nigeria, the Philippines, Rwanda, and Turkey all took part in the discussions. However, participation from the Global South outside of government was modest at best; the attendee list was instead dominated by European and American research institutions, often funded by the same large tech companies that were themselves strongly represented at the summit.

The path to ensuring AI safety

Ensuring the safety of AI will be one of the monumental tasks of this era. Future iterations of the summit should allocate space on the agenda for working with the tech giants that possess the computational power and the ability to shape the agenda, as well as for the AI safety issues that are already affecting communities and countries today.

Source: https://www.reuters.com/technology/gpt-new-ai-summit-strategically-speaking-2023-11-02/
