Exploring the AI landscape | Event summary

A month before the AI Safety Summit, leaders from across the public, private and not-for-profit sectors joined Exploring the AI Landscape, a workshop hosted by The Whitehall & Industry Group for cross-sector conversations. During the event, private and public sector speakers, including Thomas Slater, Head of AI Institutions and Partnerships at the Department for Science, Innovation and Technology (DSIT), offered an overview of the current UK AI regulatory landscape as the government plans its response to the AI regulation consultation. 

Key Takeaways: 

  • Regulation should enable rather than stifle innovation.  
  • Stakeholders are strongly focused on safety, security and robustness.  
  • Better exchange of information is needed, both among the several regulators whose remits cover areas of AI regulation and with other stakeholders. 
  • Collaboration will be key to ensuring that all voices are heard and can effectively inform central government's policy work.  

Overview 

In 2021, the National AI Strategy set out a ten-year plan to make the UK a global AI superpower. It was noted that the landscape has changed significantly since publication, notably with the introduction of generative AI technologies such as ChatGPT and other foundation models. The government is introducing a regulatory framework designed to be flexible so it can adapt to the fast rate of change. 

Enabling innovation 

Governance of AI, including regulation, standards and assurance, was a key pillar of the strategy. As part of that, the department created a regulatory framework with key characteristics to help ensure regulation meets the needs of the current AI landscape, including in the following ways: 

  • Enabling innovation: The first characteristic, “pro-innovation”, emphasises that regulation should enable rather than stifle innovation. 
  • Allowing responsiveness: The “adaptable” characteristic aims to allow regulators to respond to the rapid pace of change.  
  • Encouraging collaboration: With the goal of encouraging stakeholders to work together to facilitate AI innovation, other key characteristics of the framework include being proportionate, trustworthy, clear, and collaborative.  

 

A focus on safety 

The regulatory framework sets out five values-focused, cross-sectoral principles for implementation, which were later mentioned in the King’s Speech.  

Currently, there is a strong focus on the first principle: safety, security and robustness. The AI Safety Summit showcased this focus on a global scale. There, several countries signed The Bletchley Declaration, an international agreement committing to the safe and responsible development of frontier AI. 

Enabling better exchange of information  

There is a clear need to monitor and evaluate the impact of regulation and how it is implemented.  

Currently, the regulation that applies to AI is split across several regulators. This shared responsibility means that it is essential for them to communicate with each other. In aid of this process, the government is considering creating a central coordination function to enable better exchange of information and knowledge between regulators and other stakeholders. 

The process of setting up central functions has already started, with a risk monitoring unit aiming to build greater capacity to understand the implications of frontier AI and inform the UK’s policy approach. 

Harnessing cross-sector expertise 

The Frontier AI Taskforce served to bring together expertise from across the landscape to strengthen UK capability on safety evaluation and deliver public sector use cases. Following the Summit, the Taskforce has now evolved into the AI Safety Institute, advancing knowledge around examining, evaluating and testing new types of AI to understand the capabilities of new models. 

A joined-up approach involving stakeholders across the whole landscape, including regulators, business, academia and civil society, is needed for success. With the landscape evolving at such a pace, collaboration will be key to ensuring that all voices are heard and can effectively inform the policy work being done within central government to create an agile, secure, pro-innovation approach to AI regulation. 

What AI means for organisations 

As part of the workshop, delegates were polled to gain an understanding of where cross-sector collaboration is most beneficial across the AI ecosystem. 

When polled on which area of the AI ecosystem could benefit most from cross-sector collaboration, delegates identified regulation as the clear front-runner. Building trust and ensuring security in the use of AI followed as key areas for collaboration. 

Delegate Learnings 

Delegates discussed their thoughts on the opportunities and concerns around the adoption of AI and what these mean for their organisations as well as for themselves as individuals. 

Opportunities 
  • AI can greatly reduce the time spent on manual processes, transforming productivity in the way we work – for example, by taking minutes, creating documents, or conceptual design for engineering structures. 
  • AI can help with due diligence and initial desk-based research. 
  • There is potential to widen or narrow access to these technologies, dependent on settings and service requirements. 
Concerns 
  • There are risks associated with tech being used clumsily and without purpose. 
  • Errors at source can erode confidence in the use of AI - is the data in use accurate? 
  • With any advanced technology, there is a concern that its use can widen digital exclusion. 
Next steps for organisations 
  • It is hard to predict what this technology will do for you in the future, so employees need better training, for example in understanding prompts. 
  • Given the early stage of generative AI, human oversight is important. How do we get to the stage where the technology is trusted and how will humans then use the systems? 
  • AI is an evolution of current processes. Social acceptance and ethical use of this technology are important. There is a need to develop use cases outlining the benefits and risks. 

Conclusion 

Whilst the AI landscape continues to develop at pace, bringing together experts from across the public, private and not-for-profit sectors is vital in creating a safe and secure ecosystem that embraces the opportunities and understands the risks associated with these technologies. 

With AI evolving so rapidly, it was unsurprising that, when surveyed, 69% of delegates did not feel they understood the potential impact of AI on their roles over the next ten years. 

 WIG will continue to explore how best to bring together these voices and offer regular events in our technology and digital theme. See a listing of all our upcoming events here. 

Written by

As Event & Content Manager, Leo is responsible for producing and managing a wide range of WIG events.

 

Prior to joining WIG, Leo was Head of Events at the PRCA, the professional association for the PR and communications industry.

 

In his spare time Leo enjoys gardening and spending time in the allotment with his greyhound, Del. He also plays drums in an indie-blues band.
