
The current interest in all things AI can probably be measured by the packed-out main concert hall at the Barbican last weekend for the Wired Pulse: AI event, followed by a crawl through the even more packed AI: More than Human exhibition. And if that doesn't convince you that we've all become slightly AI-obsessed, then the queues for the robot-operated bar will.

It's fitting that a weekend looking at the future of AI was accompanied by cocktails, as it sometimes feels that's what we're going to need to cope. Headlines abound of robots taking our jobs, concerns grow about AI bias, and the overreaching power of the big tech algorithm is well documented, most recently by our client the Mozilla Foundation in its latest Internet Health Report. Just this week Oxford University announced a major donation, part of which will create a new Institute for Ethics in AI.

Not surprising, then, that the day kicked off by presenting, in part, the current view of AI as either threat or panacea. At the heart of this is the recognition that we will only use it responsibly if we truly understand its impact on the human beings it's designed to help, and that machine learning and algorithmic outputs are only as good as the data we put in.

A number of presentations picked up on these themes. Terah Lyons of the Partnership on AI talked about the need for AI research and development efforts to be actively engaged with – and accountable to – a broad range of stakeholders; in particular, the need to get close to the problem you are aiming to solve and to involve the people you are trying to impact. This effectively means bringing together those affected by the technology and those creating it, which sounds simple but clearly doesn't happen often enough.

Sandra Wachter of the Oxford Internet Institute, in a presentation on fairness, privacy and advertising, talked about the importance of transparency in understanding what the algorithm is thinking or inferring about people, and the dangers of affinity profiling (grouping people according to their assumed interests).

Among a list of 12 rules for a safer AI future, Cassie Kozyrkov of Google reminded us of the importance of seeking a diversity of perspectives in developing AI solutions, of not getting distracted by the sci-fi (AI is a useful tool, but not a person…), and of the need to "wish responsibly", which I think could also be read as 'be careful what you wish for'.

Vishal Chatrath of Prowler.io also stressed that building AI must go hand in hand with building trust, but argued that, while we need to be careful of the downsides, they are outweighed by the upsides, and that this is an area Europe could lead.

In this spirit, alongside the notes of caution we heard plenty to be optimistic about. Lucy Yu and five.ai are busy navigating multi-country regulations and a Highway Code built for an analogue age in their quest to deliver a driver-free, car-sharing future: one where we're hopefully safer (or at least as safe), our roads are less congested and we're collectively a little less stressed.

The ever-brilliant Marcus du Sautoy, Professor of Mathematics at Oxford and author of The Creativity Code, recognised that fear around AI was a prevailing theme, but also argued that AI could help us push creative boundaries and take us out of our creative comfort zones.

This certainly seemed to be the case with the performance from the amazingly talented Reeps One, blending high-end beatbox skills with AI to produce organic works of art (I think).

Most important, though, was the presentation from Katharina Sophia Volz of OccamzRazor who, driven by a desire to find a cure for Parkinson's, is using machine learning to build the first ever complete map of the disease. Among other things, it allows her and her team to understand how the disease works, evaluate the effectiveness of treatments and find the quickest path to a cure. Truly inspiring stuff.

As on my last visit to the venue (Jonas Kaufmann singing German Lieder), I left the concert hall having enjoyed proceedings more than I was expecting. But probably no less concerned.

What is clear is that, carefully introduced, AI has the potential to do good and some of it can be quite a lot of fun along the way.  But the early signs, particularly highlighted in parts of the exhibition, include a range of unintended consequences.

Amidst the ambition, optimism and creativity, trust and responsibility were recurrent themes.  But, as we know, trust is earned and stems directly from trustworthy behaviour. It’s not something the AI community can just say is important, ask for or claim.  It needs to be demonstrated.

There were some clues during the day as to how that trustworthy behaviour can be built in: involving the people directly impacted by a proposed solution in the design process; engaging with a diverse range of voices; maintaining the highest levels of transparency; and making open, honest fixes when things go wrong, as they inevitably will.

One of the speakers put up a list of rules, documents and declarations on AI and ethical principles from academia, non-profits, governments and industry. It was a long list, which suggests that recognition of the problem, and goodwill, aren't in short supply. What's perhaps needed is a step back to look at some of the structural issues among those developing and introducing these new technologies, to work out what stops good intentions from being implemented in some cases.

There is undoubtedly good stuff here but, at present, also an understandable amount of anxiety and distrust, which at least parts of the AI community sound like they're listening to. What comes next is how that listening is transformed into action before, not after, the event.

I hope this doesn’t sound too “AI”, but you can be sure the world will be watching.
