

UN Calls for Global AI Governance As Meta & OpenAI Face Challenges



AI News: The United Nations has issued seven recommendations for reducing the risks of artificial intelligence (AI), drawing on input from a UN advisory body. The advisory body's final report focuses on the importance of a unified approach to AI regulation and will be considered at a UN meeting scheduled for later this month.

AI News: UN Calls for Global AI Governance

The panel of 39 experts noted that a handful of large multinational corporations have come to dominate the development of AI technologies amid its rapid growth, which it described as a major concern. The panel stressed that global governance of artificial intelligence is 'unavoidable', since the creation and use of AI cannot be left to market mechanisms alone.

To counter the information gap between AI labs and the rest of the world, the report suggests forming a panel to disseminate accurate and independent information on artificial intelligence.


The recommendations include the creation of a global AI fund to address gaps in capacity and collaboration, especially in developing countries that cannot afford to deploy AI. The report also recommends establishing a global AI data framework to increase transparency and accountability, and a policy dialogue aimed at addressing all matters concerning the governance of artificial intelligence.

While the report did not propose a new international organization for regulation, it noted that if the risks associated with the technology escalate, a more powerful global body with the mandate to enforce regulation may be needed. The United Nations' approach differs from that of some countries, including the United States, which recently approved a 'blueprint for action' to manage AI in military use – something China has not endorsed.

Calls for Regulatory Harmonization in Europe

Alongside this AI news, industry leaders, including Meta's Chief AI Scientist Yann LeCun and many CEOs and academics from Europe, have questioned how AI regulation will work in Europe. In an open letter, they argued that the EU can reap the economic benefits of AI only if its rules do not hinder the freedom of research and the ethical deployment of AI.

Meta's upcoming multimodal artificial intelligence model, Llama, will not be released in the EU due to regulatory restrictions, highlighting the tension between innovation and regulation.


The open letter argues that excessively stringent rules can hinder the EU's ability to advance in the field, and calls on policymakers to adopt measures that allow a robust artificial intelligence industry to develop while addressing the risks. The letter emphasizes the need for coherent laws that foster the advancement of AI without stifling its growth, echoing concerns such as the warning over Apple's iPhone OS reported by CoinGape.

OpenAI Restructures Safety Oversight Amid Criticism

There are also concerns about how OpenAI has positioned itself on the principles of AI safety and regulation. Following criticism from US politicians and former employees, the company's CEO, Sam Altman, stepped down from its Safety and Security Committee.


The committee, originally formed to monitor the safety of the company's artificial intelligence technology, has now been reshaped into an independent oversight body that can delay new model releases until safety risks are addressed.

The new oversight group includes Nicole Seligman, former US Army General Paul Nakasone, and Quora CEO Adam D'Angelo, and is tasked with ensuring that the safety measures put in place by OpenAI align with the organization's objectives. This United Nations AI news comes on the heels of allegations of internal strife, with former researchers claiming that OpenAI is more focused on profit-making than on genuine artificial intelligence governance.



Kelvin Munene Murithi

Kelvin is a distinguished writer with expertise in crypto and finance, holding a Bachelor’s degree in Actuarial Science. Known for his incisive analysis and insightful content, he possesses a strong command of English and excels in conducting thorough research and delivering timely cryptocurrency market updates.

Disclaimer: The presented content may include the personal opinion of the author and is subject to market conditions. Do your own market research before investing in cryptocurrencies. Neither the author nor the publication holds any responsibility for your personal financial loss.
