
AI Regulations in the UK – 5 Key Points


While the EU steams ahead with its AI Act, the UK has gone to well-documented lengths to chart its own distinct regulatory path. But what does that path actually look like in practice when it comes to AI?

 

UK Information Commissioner John Edwards gave some indication of the likely direction of travel in his speech to the New Scientist’s Emerging Tech Summit on May 15. Here are the main takeaways:

 

  1. Existing tools and frameworks can be applied to new technology and systems.

 

As far as Edwards is concerned, the principles of the UK GDPR and the Data Protection Act 2018 apply in any event, regardless of the technology or medium, and are sufficient to define personal data and what it may or may not be used for. He therefore rejects the claim that generative AI is unregulated and disputes the analogy that it is like the “Wild West.” Nor, in his view, is there any push to echo the EU with a UK “AI Act”: there is no need, because the regulatory standards we already have are enough.

 

  2. People need to trust the systems they use and what happens to their personal data.

 

The same needs for transparency around privacy and the use of personal data continue to apply. Users need to be able to understand, quickly and easily, what happens to any personal data these systems may collect, and a platform should make that information easy to find and easy to grasp. The privacy standards in operation should be clear and comprehensible, not opaque or remote, so clear and cogent privacy policies will remain a necessity.

 

  3. Data protection by design and by default

 

Data protection by design is already one of the existing principles in the UK GDPR, and Edwards expects to apply it in the same way to any system being developed that uses AI. Data protection controls and compliance should be at the forefront of developers’ and system architects’ minds from the outset, not an afterthought bolted on shortly before launch.

Systems built with data protection by design in their original architecture will, as a matter of course, show that personal data is collected, processed, transmitted, retained, shared, reviewed, and eventually deleted in line with legal and regulatory standards. Administrators will be able to evidence not only that the system meets those standards in its design but that it complies with them in all of its daily operations, and that evidence can be produced with ease for the benefit of data controllers and processors, as well as regulators on inspection.

As tempting as it may be to run off to the exciting task of designing and building a system, data protection has to be considered right from the outset. My experience is that data protection compliance is the last thing entrepreneurs and developers want to consider when brainstorming and building; Edwards is clear here that it must be among their first and primary concerns.

 

  4. One-stop regulatory shop for digital innovators – the DRCF

 

If you are developing a system in digital communications or a highly regulated area such as fintech, you will be dealing with multiple regulators, which can be complex, time-consuming, and a disincentive to innovation. In an attempt to provide a one-stop solution for regulatory compliance, the ICO, FCA, Ofcom, and CMA have joined forces to create the DRCF (Digital Regulation Cooperation Forum), where these regulators can be consulted “all under one roof.” A nice idea, but it would have been helpful to have a few more under that roof, such as the FRC, SRA, and MHRA. What about those innovating in pharma, medtech, lawtech, and so on?

 

  5. We have produced guidance. So. Much. Guidance.

 

Regulators never want to be accused of being asleep at the wheel, and the ICO is no exception. The only problem is that this leads to a lot of reports, guidance, and consultations, and Edwards refers to a whole raft of them from the ICO alone. You could read through it all, but I’m not sure it would enhance your understanding of what the ICO is actually looking for when it comes to demonstrating compliance from generative AI systems.

 

 
