Innovation

Coding ethics into technology

Computer Ethics Professor Simon Rogerson explains why ethical considerations are vital at the design stage as society becomes more and more reliant on technology.

July 3, 2017

As a young Fortran programmer in the 1970s, I was once told to incorporate a covert time bomb into a design system that was to be rolled out to a subsidiary company. At the time, I saw nothing wrong in building such a function; the ethics of the decision didn’t cross my mind. After all, I was a junior programmer and had been told to do it by the most senior member of staff in the department. You might say I was an uneducated technologist.

Today, I believe that professional practice is unprofessional without ethics, and yet it seems that little has changed in the industry. In January 2017, for example, car giant VW pleaded guilty to using a defeat device to cheat on mandatory emissions tests, as well as to lying and obstructing justice to further the scheme.

At the centre of this scandal was misinformation generated by onboard software. That system was developed and implemented by computer professionals, who must have been party to the illegal and unethical purpose behind it.

Nearly half a century passed between these two events, which suggests that the software industry has learnt little about the importance of ethics in system design. But as technology becomes more and more central to our lives, the ethical dimensions ought to become more central too.

For example, surveillance is rarely out of the news, and it is often said to have a moral purpose: to catch terrorists and abusers. Yet in the US, it has been revealed that the FBI is designing a system to gather, identify, and (via AI) contextually analyse images of tattoos – not only to identify individuals by them, but also to infer the meaning of any tattoo within the wider population. Such a system may flag innocent people as suspects – and potentially as people who share the same values and beliefs as other suspects.

Is that an ethical development? Should technology professionals help society to walk down that road? And does the very real scientific problem of confirmation bias come into play here: designing systems that confirm pre-existing prejudices? (That would seem to be self-evident in any system that links tattoos with criminality.)

Such questions are important, because IT professionals’ unethical decisions aren’t always deliberate: sometimes they arise from too little consideration of their own frames of reference.

Take this example: at the World Economic Forum 2017 in Davos, MIT Media Lab’s Joichi Ito revealed how a group of students had designed a facial recognition AI system that couldn’t recognise an African American woman. This wasn’t because they were consciously prejudiced, but because they hadn’t considered that their development and testing environment was exclusively white and male – as is common in the industry. The system was released before anyone spotted the problem.

In this example, no one stepped outside of their own frame to consider the project from a different angle. ‘The world outside the box’ can – and should – present an ethical perspective. Ito’s anecdote demonstrates how a lack of diversity has both ethical dimensions and a real-world impact.
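
One practical safeguard against this kind of blind spot is to measure a system’s performance separately for each demographic group before release, rather than relying on a single aggregate score. The sketch below is a minimal, hypothetical illustration in Python; the data layout, group labels, and the `predict` function are assumptions made for the example, not part of any real system discussed in this article.

```python
from collections import defaultdict

def accuracy_by_group(samples, predict):
    """Report accuracy separately for each demographic group.

    `samples` is a list of (image, true_label, group) tuples and
    `predict` is the model under test -- both are hypothetical
    placeholders for whatever a real evaluation pipeline provides.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for image, true_label, group in samples:
        total[group] += 1
        if predict(image) == true_label:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# A single aggregate accuracy figure can hide a group the system fails on;
# a per-group breakdown makes that failure visible before release.
# e.g. a result like {'group A': 0.98, 'group B': 0.61} would be a red flag.
```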

IT development can reinforce societal problems if it fails to consider them at the design stage. But is the industry trying to fix the problem?

In 2014, the BCS, the Chartered Institute for IT in the UK, ran a special edition of ITNOW focusing on ethics in ICT. In many ways, it was a litmus test of ethics progress by practitioners and academics working in tandem across the industry. It was a disappointing read: the lack of ethical consideration in systems design and implementation was evident, and the calls for action were neither new nor inspiring. There was virtually no evidence supplied and no pragmatic action demanded; the emphasis was all on top-down political rhetoric.

The report illustrated that, at best, the industry has stood still.

Ethical practice should be paramount among computer professionals. But what does this actually mean? Practice has two distinct elements: process and product.

  • Process concerns the activities of computer practitioners, and whether their conduct is virtuous.
  • Product concerns the outcome of their professional endeavour, and whether the systems are ethically viable.

Time bombs and defeat devices fail on both counts, while racist AIs and invasive surveillance systems fail against the second (arguably, in the case of the FBI example). But why should IT professionals care more about these problems, and what can they do about it?

First, the question of why.

Every day, society becomes more and more reliant on information and communications technology. Our innovations seem limitless, as does their scope to seep into all aspects of people’s lives. Application areas such as the internet of things (IoT), cloud computing, social media, artificial intelligence (AI), and big data analytics are commonplace – not just in enterprise contexts, but also in everyday consumer ones.

Some argue that, as a consequence, society becomes more and more vulnerable to catastrophe. Those fears are based on fact. For example, the ransomware attacks of May 2017 – themselves designed by coders, of course – caused the closure of many hospital A&E units in the UK, and in June, caused worldwide disruption in banks, retailers, an airport, and energy suppliers, including a nuclear power plant.

In the commercial world, the drive for efficiency, productivity, effectiveness, and profit is seen as the priority by strategists and business leaders, and this affects what IT professionals are asked to do.

Such pressure sometimes results in real short-term gains, but it can also lead to unscrupulous, misguided, or reckless actions (as we have seen).

Sometimes those actions are really inactions. For example, it was revealed in the technology press that the main reason for the ransomware’s ‘successful’ takedown of hospital systems was that the operating systems were out of date and unsupported, because of cost cuts. The government had been repeatedly warned of the risk.

The tempering of efficiency and profit drives with greater ethical consideration of their outcomes is often neglected – until something happens and triggers a public outcry. As a society we seem to accept this, but computer professionals don’t have to. Coders and technologists don’t have to accept playing their part in unethical behaviour, such as designing systems to fail or to mislead the public.

But what can the technology industry do about it? Several ethics tools exist that can be used in the systems design process. Three of these are DIODE, FRRIICT, and SoDIS.

DIODE is a structured meta-methodology for the ethical assessment of new and emerging technologies. It was designed by a team of academics, government experts, and commercial practitioners to help diverse organisations and individuals conduct such assessments.

The Framework for Responsible Research and Innovation in ICT (FRRIICT) is a tool that helps those involved in ICT R&D to carry out their work responsibly. It consists of a set of scaffolding questions that allow researchers, funders, and other stakeholders to consider a broad range of aspects of ICT research.

The Software Development Impact Statement (SoDIS) extends the concept of software risk in three ways. First, it moves beyond the standard limited approach of schedule, budget, and function; second, it adds qualitative elements; and third, it recognises project stakeholders beyond those considered in a typical risk analysis.

SoDIS is a proactive, feed-forward approach which enables the identification of risks in how ICT is developed (and within ICT itself). It is embedded into a decision support tool, which can be used from the start of system analysis and design.
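
To make the idea concrete, the fragment below sketches the kind of record a SoDIS-style analysis might capture for each development task: the stakeholder affected, the concern raised, and a qualitative judgement sitting alongside the usual schedule, budget, and function risks. The field names and the example entry are illustrative assumptions about the approach, not the actual SoDIS decision support tool.

```python
from dataclasses import dataclass

@dataclass
class ImpactConcern:
    """One line of a SoDIS-style impact analysis (illustrative only)."""
    task: str           # the development task being examined
    stakeholder: str    # anyone affected, not just the sponsor and users
    concern: str        # the potential ethical or social impact
    severity: str       # a qualitative judgement, e.g. "low" / "high"
    mitigation: str     # what the project will do about it

# Example: a concern that would never appear in a schedule/budget/function
# risk register, recorded before design decisions are locked in.
concerns = [
    ImpactConcern(
        task="Select training data for face matching",
        stakeholder="People under-represented in the data set",
        concern="System may fail disproportionately for this group",
        severity="high",
        mitigation="Audit data set composition; test accuracy per group",
    ),
]
```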

Why such tools have not been incorporated more into system design is open to question. Perhaps it stems from a mismatch between the pressure to complete quickly and cheaply, and the obligation to complete properly. (We can all think of applications that are rushed to market, then fixed on the fly later. In such an environment, ethics take a back seat.)

Or perhaps it’s a symptom of inappropriate education and training, where the focus is firmly on the technology at the expense of the context.

Or perhaps it’s a symptom of ‘silo thinking’ across practitioner and research communities, which prevents valuable exchange and synergy.

Or perhaps there is a general lack of awareness of the dangers associated with technology, especially among coders who prefer the binary world of computers to the messy world of human beings – an observation made by MIT’s Ito at Davos.

Whatever the reasons, the situation must change. All those in the computing profession, including new entrants, need to have the ethical tools, skills, and confidence to identify, articulate, and resist the unethical aspects of system design and implementation.

More than that, they should be free to challenge decisions and orders from their seniors that are ethically questionable, without detriment to themselves.

There should be ‘three E’s’ in technology development: effectiveness, efficiency, and ethics, and these should be applied right from the start of any programme.

The tools to do this exist, but it is the desire that seems to be missing. And that’s why the charge of professional irresponsibility within the computer profession cannot and should not be ignored.

[Additional reporting Chris Middleton]