Global Tech Leaders Commit to AI Safety Standards
The article discusses the signing of voluntary artificial intelligence (AI) safety standards by 16 companies, including Google, Microsoft, Meta, OpenAI, and Anthropic. The signatories commit to sharing information, investing in cybersecurity, and prioritizing safety research. Critics, however, argue that these commitments lack teeth because they impose no concrete obligations on the companies. According to the article, the new commitments include publishing safety frameworks for their AI models that outline how the models could be misused by bad actors. These frameworks would need to specify the thresholds at which severe risks would be deemed intolerable. The companies also committed not to develop or deploy a model that poses unacceptable risks.
These commitments ensure that the world's leading AI companies will provide transparency and accountability in their plans to develop safe AI. They set a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.
organizations
- Google
- Meta
- Microsoft
- OpenAI
- Anthropic
- Zhipu AI
- Ada Lovelace Institute
- G42
- Google DeepMind
- Inflection AI
- International Business Machines Corp
- Naver
persons
- Anna Makanju
- Elon Musk
- Rishi Sunak
- Alex Hern
- Dan Hendrycks
- Emmanuel Macron
- Fran Bennett
- Kamala Harris
- Michelle Donelan
- Nick Clegg
- Sam Altman
- Yoon Suk-Yeol