Innovation

AI: Friend or Foe? (Part II)

In the second part of his report, Chris Middleton looks at the strategic challenge of implementing AI systems.

March 7, 2017

In part one, we looked at just some of the new applications of AI and how these might impact on lives and businesses. But what are the strategic challenges for organisations? Is there a risk of some rushing to implement the technology inappropriately, or without considering possible outcomes?

A global survey of C-level executives last year by consultancy Avanade found 77 per cent admitting that they had given little thought to the ethical implications of smart applications and devices, suggesting a global attitude of ‘invest first, ask questions later’.

That suggests a tactical mindset in many businesses, not a strategic one. But the fallout from the Tay experiment should be foremost in customers’ minds. You may recall that Microsoft’s chatbot was launched on Twitter last year, where it learned hate speech from internet trolls within 24 hours of its debut.

Microsoft CEO Satya Nadella called these incidents “attacks”, but the fact is that Tay simply failed to understand the nuances of human communication in a society where people are free to ask a robot whatever they want, subject to UK, US, and European laws.

Tay’s Chinese chatbot counterpart, Microsoft Xiaoice, didn’t encounter the same problems when it launched online. This reveals an inconvenient truth: cultural differences, freedom of speech, and AI are not always easy bedfellows.


So what’s the answer?

This year, the European Union announced its intention to regulate the markets for robots and AI, in an attempt to ensure that the technologies’ benefits remain targeted at a fairer society. But Brexit and the rise in anti-European sentiment in the UK, US, and even parts of Europe itself, may threaten those ‘eurobot’ ambitions, and leave all the power with private companies.

Without greater oversight and regulation, it stands to reason that any platform-wide application of AI – in a world in which Amazon, Google, Apple, Microsoft, and others compete not just for sales, but also for customer loyalty and partner rewards – risks opening the door to antitrust behaviour on an epic scale.

“Alexa! Order me some coffee!” or “Hey Siri, read me the news!” are innocent enough requests. But which coffee and which news? Why? And who benefits? Ask your Amazon Echo to book you a flight to New York and two nights in a Midtown hotel, and then consider why it has suggested one airline and one hotel group above others.
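To make the “who benefits?” question concrete, here is a minimal, hypothetical sketch (every name, field, and weight is invented for illustration, and it describes no real platform’s logic) of how a single commercial weighting inside a ranking function could quietly promote one airline over another:

```python
# A hypothetical sketch of how a commercial weighting inside a voice
# assistant's ranking function could quietly favour one partner over
# another. All names, fields, and weights here are invented for
# illustration; they do not describe any real platform's logic.
from dataclasses import dataclass

@dataclass
class Offer:
    name: str
    price: float           # what the customer pays
    relevance: float       # 0-1 match to the customer's request
    partner_reward: float  # 0-1 commission paid to the platform

def rank(offers, reward_weight=0.0):
    # The score trades customer-facing relevance and price off against
    # the platform's own reward; reward_weight=0.0 is the "neutral" ranking.
    def score(o):
        return o.relevance - 0.001 * o.price + reward_weight * o.partner_reward
    return sorted(offers, key=score, reverse=True)

flights = [
    Offer("Airline A", price=420.0, relevance=0.90, partner_reward=0.1),
    Offer("Airline B", price=450.0, relevance=0.85, partner_reward=0.9),
]

print([o.name for o in rank(flights)])                     # neutral: Airline A first
print([o.name for o in rank(flights, reward_weight=0.2)])  # Airline B rises to the top
```

The customer hears only the answer, never the weighting that produced it.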

So transparency and trust will be essential in a world where embedded intelligence helps people to make decisions about what to buy, what messages to listen to, and what information they need – or makes those decisions for them.

This is why IBM CEO Ginni Rometty believes that organisations need to adopt three principles when it comes to AI. She said, “One is understanding the purpose of when you use these technologies.

“For us, the reason we call it ‘cognitive’ rather than ‘AI’ is that it is augmenting human intelligence – it will not be ‘Man or machine’. Whether it’s doctors, lawyers, call centre workers, it’s a very symbiotic relationship conversing with this technology. So our purpose is to augment and to really be in service of what humans do.

“The second is, industry domain really matters. Every industry has its own data to bring to this and to unlock the value of decades of data combined with this. So these systems will be most effective when they are trained with domain knowledge.

“And the last thing is the business model. For any company, new or old, you’ve accumulated data. That is your asset… And that [principle] applies to how these systems are trained.”


Microsoft’s Nadella shares the belief that AI should be seen as a strategic complement to human intelligence, and not as a replacement for it. He said, “That’s a design choice. You can come at it from the point of view that replacement is the goal, but in our case it’s augmentation.”

But the problem is that many client organisations see AI, machine learning, robotics, and automation as simple replacements for human workers, allowing them to slash internal costs and leave replicable processes running 24 hours a day.

Last year Dr Anders Sandberg of Oxford University’s Future of Humanity Institute predicted that 47 per cent of all jobs will be automated, adding, “if you can describe your job, then it can – and will – be automated”.

And this mindset doesn’t just affect private sector enterprises, such as banks, law firms, and customer service centres. A recent report, ‘Work in Progress: Towards a Leaner, Smarter Public Sector Workforce’, by right-wing think tank Reform, claims that AI, robots, and automation will sweep aside 250,000 public sector jobs in the UK alone – including many teachers, doctors, and senior administrators.

The technologies will arrive like Uber in the public sector, suggests Reform, creating a ‘gig economy’ in which expert human citizens compete via reverse auction to offer their services at the lowest possible price, while the robots run the machinery of central government, along with local functions such as health and education.

The report is binary, simplistic, and ideology-driven, reflecting a world in which all the focus is on cost and very little on social value, human benefit, or risk. That said, its headline findings may appeal to organisations that relish the (apparent) prospect of easy solutions and sweeping efficiency drives.

But all this should ring some familiar bells for innovators and technology/business strategists. A decade ago, offshore outsourcing (offshoring) promised easy help desk solutions at dramatically lower cost, before disastrous customer feedback forced many organisations to engage in expensive repatriation programmes.

Domestic damage to corporate reputations wasn’t part of the original equation when organisations rushed to send their call centres overseas; but it should have been. That’s obvious with hindsight, so why didn’t anyone ask simple questions at the time, such as: “What if people don’t like it?”, and “What message does this send our customers?”

With a similar rush towards AI and automation today, there’s a risk that what AI technology companies aim to provide and what some enterprises believe they are getting are completely different things. This pushes AI’s design ethos and its underlying ethics into the spotlight. Or at least, it should do.

Joichi Ito is head of MIT’s Media Lab in the US, where he works with the next generation of technologists. Describing some of his own students as “oddballs”, he admits that serious problems can arise with AI as far back as the design stage.

Joining Rometty and Nadella onstage at Davos, he said, “I think people who are very focused on computer science and AI tend to be the ones that don’t like the messiness of the real world. They like the control of the computer. They like to be methodical and think in numbers. You don’t get the traditional philosophy and liberal arts types.”

But human society isn’t binary; it’s messy, complex, emotional, nuanced, and sometimes irrational, biased, or prejudiced.


Ito admitted that problems such as these can be perpetuated, rather than solved, by coders: “The way you get into computers is because your friends are into computers, which is generally white men. And so if you look at the demographic across Silicon Valley, you see a lot of white men.

“One of our researchers is an African American woman, and she discovered that in the core libraries for facial recognition, dark [sic] faces don’t show up. So if you’re an African American person and you get in front of it, it won’t recognise your face. And she discovered this because, probably, there was no one who had a dark face in the place where they were building and testing.”

You read that correctly: coders designed a racist AI, not because they set out to do so, but because of the lack of diversity in their own closed, inward-facing group.
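What follows is a deliberately toy sketch of the mechanism Ito describes. Everything in it is invented for illustration (the “images” are just 16 random numbers, and no real face-recognition library works this way), but it shows how a model built from unrepresentative data performs well on the over-represented group and fails on everyone else:

```python
# A toy illustration of the bias mechanism Ito describes. Everything
# here is invented: the "images" are 16 random numbers apiece, and no
# real face-recognition library is represented.
import numpy as np

rng = np.random.default_rng(42)

def make_faces(n, brightness):
    # Toy "face images": pixel intensities clustered around one skin tone.
    return rng.normal(loc=brightness, scale=0.05, size=(n, 16))

# Training data skewed heavily towards lighter faces, echoing the
# homogeneous team Ito describes.
train = np.vstack([
    make_faces(950, brightness=0.8),   # over-represented group
    make_faces(50,  brightness=0.2),   # under-represented group
])

# A crude "detector": accept anything close to the mean training face.
mean_face = train.mean(axis=0)         # dominated by the majority group
threshold = np.percentile(np.linalg.norm(train - mean_face, axis=1), 90)

def detects_face(img):
    return np.linalg.norm(img - mean_face) <= threshold

# The detector works for the group it was trained on, and fails for the other.
light_test = make_faces(200, brightness=0.8)
dark_test  = make_faces(200, brightness=0.2)
print("lighter faces detected:", np.mean([detects_face(f) for f in light_test]))
print("darker faces detected: ", np.mean([detects_face(f) for f in dark_test]))
```

The failure is statistical, not malicious: nobody coded ‘ignore darker faces’, but nobody in the room generated the data that would have caught the problem either.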

Ito explained, “One of the risks that we have in this lack of diversity of engineers is that it’s not intuitive which questions you should be asking, and even if you have design guidelines some of this other stuff is a field decision.

“So one thing we need to think about is that when the people who are actually doing the work create the tools, you get much better tools. And we don’t have that yet – AI is still somewhat of a bespoke art. Instead of creating a solution, you need to integrate the lawyers and the ethicists and the customers to get a more intuitive understanding of the tool.”

Conclusions

AI will benefit human beings in countless ways and help many of us to innovate and do our jobs better. It may help mankind to cure diseases and uncover new intelligence in research and development. But Ito is right: AI is never entirely ‘artificial’, but often a simple expression of the belief systems of its human designers, coders – and customers.

The first things to be learned by machines or automated in a robotic world aren’t repetitive, replicable tasks, but business leaders’ – and governments’ – assumptions about the societies in which we live, or the markets in which organisations operate.

But what if those assumptions are wrong?

Those assumptions can be reinforced by software designers who – in many cases – understand the on/off, yes/no world of computers rather better than they understand complex human beings. As a result, any false assumptions (together with any bad or incomplete data) can quickly be cast into algorithms that spread worldwide.


Just imagine if Tay hadn’t debuted on Twitter, but on the customer help desk of a government department or of a multinational brand: catastrophe.

This is the real ethical dimension of AI: it isn’t so much in the applications – many of which will be transformative and positive – as in the thought processes that occurred before the coders were brought in, combined with the mindset and world views of the programmers.

Add to that whatever the customers believed they were getting – which, as we’ve seen, may be completely different to what the providers set out to design. And then factor in the messiness of the real world, which includes trolls trying to break the system.

Never forget: trolls are customers too.

So: buyer beware. Innovate and get creative with AI, but think strategically – not tactically – about it, and engage the one thing that computers don’t yet have: your common sense.

Use AI to augment and complement your business, your internal data, and your hard-won human expertise. And if systems are being designed especially for you, then check your assumptions at the door.

Think like your customer actually thinks, and not how you would like them to. And consider how the world actually works, rather than how you would like it to.

If you can’t do that, then don’t expect artificial intelligence to do it for you.