G7 Countries to Launch AI Code of Conduct – The G7 nations plan to establish a voluntary code of conduct for AI development, offering guidelines for managing both the risks and the benefits of the technology. The agreement is reportedly set to be finalized on October 30. According to the report, the code comprises 11 points intended to promote “safe, secure, and trustworthy AI worldwide” and to help “seize” the benefits of AI while still addressing and mitigating the risks it poses.
The plan, drafted by G7 leaders in September, offers voluntary guidance on actions for “organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.” It also recommends that organizations publish detailed reports on the capabilities, limitations, appropriate uses, and potential misuse of the systems they develop.
The code further underscores the importance of implementing robust security controls for these systems. The G7 comprises Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States, with the European Union also represented. This year’s G7 meeting was held in Hiroshima, Japan, bringing together digital and technology ministers from the participating countries on April 29 and 30.
Discussions at the meeting covered topics such as emerging technologies, digital infrastructure, and AI, with a dedicated agenda item on responsible AI and global AI governance. The G7’s AI code of conduct responds to the growing need for governments to address both the opportunities and the challenges that AI presents. The European Union moved early in this direction with its EU AI Act, the first draft of which was approved in June.
Furthermore, on October 26, the United Nations established a 39-member advisory committee to address issues of global AI regulation. China introduced its own AI regulations in August, and within the industry, OpenAI, developer of the popular AI chatbot ChatGPT, announced plans to form a “preparedness” team to assess a range of AI-related risks.