Attempts to regulate AI are challenging but growing, with many countries at least considering how this might be achieved. The EU has pushed ahead with its regulation of artificial intelligence in Europe, in the form of the European Artificial Intelligence Act.
The UK government has set out its “pro-innovation” regulatory framework for AI and raised its profile by hosting the international AI Safety Summit, to discuss an international approach to the risks of AI. What is the US doing?
Apparently inspired by a rogue AI villain in the Tom Cruise film Mission: Impossible Dead Reckoning (Part One), the Biden administration announced a US AI-focused Executive Order on 30 October 2023, for the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" (the Executive Order). The guidance supporting the Executive Order is due to be implemented throughout 2024.
The Executive Order fact sheet states that the order will (amongst other things) establish new standards for AI safety and security, protect privacy, and promote innovation and competition, as well as build and promote these safeguards with the rest of the world. The Executive Order is aimed at government agencies and refers to eight key principles and priorities, which are summarised below:
- Artificial intelligence must be safe and secure, through:
- AI use and testing guidelines;
- Notification requirements;
- Infrastructure and cybersecurity; and
- Content authentication (e.g. watermarking, to label AI-generated content);
- Responsible AI innovation, competition and collaboration should be promoted, to allow the US to become a world leader in AI and solve some of society’s most difficult challenges;
- Responsible development and use of AI should support American workers;
- AI policies must be consistent with the current push to advance equity and civil rights (e.g. evaluating and keeping AI clear of discrimination or bias in underwriting models);
- Interests of US users of AI must be protected including through new consumer law protections;
- Privacy of US users of AI must be protected - to mitigate risks such as mass collection, processing and sharing of personally identifiable information;
- Managing the risks of the US Federal Government’s own development and use of AI and improving its capacity to regulate its own use of AI; and
- The US Federal Government should lead the way to global societal, economic, and technological progress and coordinate approaches with other international governments.
Considerations for US companies and UK companies operating in the US
To begin with, the impact on US privately owned companies, or UK companies operating in the US, will be limited, as these guidelines will only be adopted by federal bodies (which are ‘leading by example’ in their own use of AI) and will remain voluntary for private companies. However, as the core of the Executive Order is to try to match federal government regulation and guidance with the speed of AI development, and as further US policies are due to follow throughout 2024, there may be growing pressure on private companies to comply with the principles of this Executive Order, potentially culminating in congressional legislation and an international body or watchdog.
The Order calls for various US federal agencies to take steps and investigate the potential effects of AI in their areas, such as:
- The steps to be taken to watermark AI outputs (to minimise the risk of deepfakes and make clear when content is AI-generated);
- Evidence of actions taken to address the risk of an AI system containing an underlying bias, based on whichever dataset it was trained on; and
- Evidence of practices to ensure AI does not disrupt the job market or undercut employees.
It seems likely that, by extension, these will affect the way US-based businesses operate more generally.
Closing observation
It is clear that this Executive Order attempts to strike a careful balance between mitigating the risks of AI and avoiding stifling AI innovation, whilst leaving the door open for future guidance and standards set by the US government and other independent agencies. The real risk is that, in contrast to congressional legislation, executive orders are susceptible to rescission by future administrations and presidents. The Biden administration will therefore be keenly aware of the need for buy-in from across the US political spectrum, in order for this Executive Order to evolve in the way envisaged.
The information contained in this article is intended to be a general introductory summary of the subject matters covered only. It does not purport to be exhaustive, or to provide legal advice, and should not be used as a substitute for such advice.