At the G7 Summit in Hiroshima, Japan, in 2023, the G7 members (the UK, France, Germany, Italy, Japan, Canada, and the US, together with the EU) established the Hiroshima AI Process, with the objective of putting in place technical standards for trustworthy AI.
Under the Hiroshima AI Process, the G7 leaders have agreed on 11 Guiding Principles (the Principles) and a Code of Conduct on Artificial Intelligence (the Code). Both documents build on the OECD AI Principles.
The Principles set out the broad terms of the guidance, and the Code sets out, non-exhaustively, ways in which the Principles should be actioned (such as undertaking testing throughout a system’s lifecycle). Both are voluntary guidance for organisations developing and using advanced AI systems, including generative AI systems. The Code explains that the G7 is calling on organisations to follow its actions while governments develop more enduring and/or detailed governance and regulatory processes. The G7 is also encouraging these organisations to set up internal structures and policies that facilitate an accountable and responsible approach to implementing the actions in the Code and to AI development more broadly.
The Principles provide that the relevant organisations are to abide by the following, proportionate to the risks:
- Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.
- Monitor for patterns of misuse after deployment, including placement on the market.
- Publicly report advanced AI systems’ capabilities, limitations, and domains of appropriate and inappropriate use, to help ensure sufficient transparency and thereby increase accountability.
- Work towards responsible information sharing and reporting of incidents among organisations developing advanced AI systems, including with industry, governments, civil society, and academia.
- Develop, implement, and disclose AI governance and risk management policies grounded in a risk-based approach, including privacy policies and mitigation measures, in particular for organisations developing advanced AI systems.
- Invest in and implement robust security controls, including physical security, cybersecurity, and insider threat safeguards, across the AI lifecycle.
- Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques that enable users to identify AI-generated content.
- Prioritise research to mitigate societal, safety, and security risks and prioritise investment in effective mitigation measures.
- Prioritise the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health, and education.
- Advance the development and, where appropriate, the adoption of international technical standards.
- Implement appropriate data input measures and protections for personal data and intellectual property.
While the UK continues to work on developing its framework for AI, UK organisations that design, develop, deploy or use AI systems should look to incorporate the Principles and the Code into their processes.