An Interview with our AI Collaboration Forum Speaker, Ben Gilburt, AI Ethics Lead, Sopra Steria

In advance of WIG’s first-ever AI Collaboration Forum on 7 February 2020, we interviewed one of our speakers, Ben Gilburt, AI Ethics Lead at Sopra Steria. Ben shared his insights on current practice in AI, the future of AI and the resulting need for new regulation.

What is the current focus for your team at Sopra Steria? 

I’m part of Sopra Steria’s new Digital Ethics & Tech for Good Team. We’ve created a Digital Ethics framework and defined our own Principles and Design Conventions to guide our work and help our customers manage Digital Ethics. We’re looking at ethical issues across technologies, not just AI – hence ‘Digital Ethics’. AI will be an immensely transformational technology in our society, but it is not the only one that raises ethical issues – everything from VR to digital video to Cloud technology projects could be improved if ethics were considered more frequently. We’re also active in the AI Ethics space specifically, contributing to emerging standards and leading the conversation on these issues through publications, events and communities.

What one step can organisations take to contribute to making the UK a world leader in ethical AI? 

If organisations are to take one step, we’d suggest using a framework – perhaps ours – for assessing the projects you take on, one that guides the team through a process of responsible innovation. The idea is not to have a one-size-fits-all approach to ethics, but for organisations to proactively consider the ethical risks and opportunities of the technology they are creating, procuring and putting to use. Despite this area’s complexity, there are practical actions every organisation can take.

On a related note, we think the UK as a nation is doing pretty well at leading in ethical AI globally. We have a thriving AI sector (the third-highest VC funding after the US and China) and among the highest levels of public funding in Europe and globally (thanks to the AI Sector Deal and overlapping interests from the Industrial Strategy Fund). We are also a great example of linking institutional infrastructure to government, with the Ada Lovelace Institute and the Alan Turing Institute supporting policy decisions. Our government has clearly shown intent to continue in this direction, with the House of Lords ‘Ready, Willing and Able’ report highlighting ethical AI.

One way the UK government might help us become (or remain) a global leader in ethical AI is by aligning more clearly, or at least more explicitly, with international frameworks. Specifically, while we regard ethical AI in much the same way as the European Commission, the UK hasn’t aligned itself as obviously with ‘Trustworthy AI’ as, for example, Denmark, Luxembourg and Switzerland have. That’s not to suggest our actions go against the concept, but rather that we have not made the link obvious.

What are key considerations around effective governance of AI?  

Understanding the context – be that organisational, regional, national or international. No two governance models should be exactly the same, though we do expect to see convergent values emerge. 

Communication – this goes both ways. We need to understand customers’ and citizens’ unique needs, interests and motivations, and we need to educate our customers and citizens so they can make informed decisions about their use of AI.

Sponsorship – to ensure that the action that needs to be taken can be taken. It is meaningless to put governance of AI in the hands of people with good intentions who lack the authority, remit or budget to make necessary changes. Good intentions need to be paired with the time and space to research and understand complex questions, a remit and authority to make change, and the budget to do so.

How can we build trust in AI? 

There are emerging guidelines for Trustworthy AI from the European Commission – these focus on making AI lawful, ethical and robust, and set out a number of key requirements for achieving that.

There is still a question around operationalising Trustworthy AI. We are optimistic that the process will become clearer in 2020, with the High-Level Expert Group on AI actively engaging with businesses that have piloted the ‘Ethics Guidelines for Trustworthy AI’ to understand their experience and the challenges of putting them into action.

Also, we do not believe policy initiatives should be set entirely around red lines banning dangerous uses of AI. Though those are important, we believe trust can be built by incentivising applications of AI that support our society, protect vulnerable people and take on humanitarian challenges.

You will be speaking at the WIG AI Collaboration Forum on 7 February. What are you hoping to get out of the day?

I hope we can have a meaningful conversation about what Government and Industry can do, turning the theory – of which there is plenty – into practical action. There’s a great line-up of speakers from whom we could learn a great deal, and we hope our experience will add to the conversation too.