The use of artificial intelligence (AI) in the life sciences industry is rapidly advancing, with promising applications in drug discovery, clinical trials, and patient diagnosis. AI is a particularly useful tool for life sciences businesses given its ability to efficiently process large amounts of data, thereby reducing costs and scope for human error, while improving development speed for new products and methods. However, AI needs to be used in a considered manner and in accordance with regulations.
On 18 July 2022, the UK government set out its proposals for the regulation of AI in the UK. The proposals aim to strike a balance between protecting the public and promoting innovation, and are another step in the UK’s National AI Strategy.
AI regulation in the UK is not currently dealt with in one place; rather, it is contained within various legal instruments, such as data protection law and the Equality Act 2010. The proposals therefore play a part in developing a UK framework, creating a more coherent approach across sectors and greater clarity for businesses and the public alike.
We have summarised the key elements of the published proposals below and expect a white paper with fuller details later this year.
What will fall into scope?
The proposals do not define AI; instead, they set out its core characteristics. This aims to retain flexibility whilst still providing an element of coherence, with the intention that individual regulators will adopt more specific definitions for their sectors where required, informed by the core characteristics.
The two core characteristics of AI according to the proposals are:
- "Adaptiveness" of the technology – i.e. can the intent or logic be explained?
- "Autonomy" of the technology – i.e. does it require instruction or oversight from a user?
What are the six core principles?
Where technology falls within scope, its developers and users will need to have regard to the six core principles set out in the proposals, which require them to:
- Ensure AI is used safely
  - This is especially applicable to life sciences businesses such as healthcare, pharmaceutical and biotechnology companies.
- Ensure AI is technically secure, and functions as designed
  - AI systems performing as their developers intended is likely to instil public confidence, allowing for the continued commercialisation of AI.
- Ensure AI is appropriately transparent and explainable
  - This principle may vary in practice between sectors. Some regulators may seek to prohibit AI decision-making which cannot be explained.
  - The proposals also suggest example transparency obligations, such as requirements to provide, proactively or retrospectively, information about the data being used and about training data.
- Consider fairness
  - Again, this may apply differently depending on the sector, and each regulator will need to consider what fairness looks like in its sector.
- Identify a legal person to be responsible for AI
  - This may be a corporate or natural person.
- Clarify routes to redress or contestability
  - Using AI should not remove the right to contest an outcome.
How will AI be regulated under the proposals?
The proposals take a sector-specific approach: sector regulators will be responsible for applying the principles within their respective sectors. For the life sciences sector, the Medicines and Healthcare products Regulatory Agency (MHRA) will apply the six principles in overseeing AI. The relevant regulatory bodies for other sectors include Ofcom, the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO) and the Financial Conduct Authority (FCA). However, it is worth noting that the principles will initially be non-statutory, so they can be monitored and updated if required.
Next steps?
Whilst the proposals are a welcome step for the UK, it should be acknowledged that they do not change the legislative framework around AI; instead, they are non-statutory “guidelines” intended to help increase public trust and innovation. The usefulness of the proposals remains to be seen; the white paper expected later this year will shed more light on their impact and is an opportunity to amend the core characteristics of AI or the six principles.
Further, it cannot be overlooked that UK businesses also operating within the EU will need to consider the EU’s regulation of AI. Please see our article regarding EU regulation here. The UK and EU frameworks take fundamentally different approaches: the EU focusses on the risks posed by AI systems generally, whilst the UK takes a sector-specific approach.
If you would like our help navigating your use of AI within the UK, or to discuss the proposals, please do get in touch.