
The AI conundrum: When morality and technology clash


In innovation, as in other walks of life, good intentions are not enough. Sometimes technology designed with the best of intentions ultimately leads to negative consequences.

This has proved true in the world of Artificial Intelligence (AI). The hope is that sophisticated algorithms can free us from the tedium of repetitive tasks we can do but don't want to, and take on tasks we want to do but currently can't. Along the way, AI might even help us overcome our own inherent biases. But it doesn't always work out that way.

Amazon, for example, deployed AI tools in its warehouses to monitor machines and workers in response to the COVID-19 pandemic. The intention was to keep workers safe by tracking their whereabouts and making sure they complied with social distancing regulations. However, the company quickly realized the tool could also be used to track task completion, which led it to focus more on productivity and push for onerous performance goals. Workers, in response, felt violated and sought redress through their union.

This is just one example that illustrates the need for a moral compass when designing and building AI technologies, especially in fast-paced organizations, to prevent well-meaning tools from becoming malevolent.

Bell Labs researchers are developing just such a compass, and we feel it is a critical step in achieving the ethical principles outlined in our 6 Pillars of Responsible AI.

In our first blog post examining these pillars, we looked at the frequent paradox between maintaining fairness and privacy. Here we turn to morality, which tracks closely with the pillars of fairness, transparency and accountability.

A moral compass for AI

Morality defines what is "right" or "wrong," "good" or "bad." In recent decades, moral psychologists have identified multiple dimensions that people use to decide what is right and what is wrong.

These dimensions, introduced by the American social psychologist Jonathan Haidt, are care/harm, fairness/cheating, loyalty/betrayal, authority/subversion and sanctity/degradation. They represent the innate ethical intuitions upon which human moral reasoning is built. Haidt and his colleagues found that individuals often approve or disapprove of hypothetical ethical scenarios based on unconscious, automatic intuitions rather than deliberate reasoning. For example, secretly using a national flag to clean a bathroom does no physical harm, yet respondents overwhelmingly felt it was wrong.

In a similar way, our researchers wondered to what extent moral foundations help explain the uptake of, and reluctance toward, AI. To answer that question, they ran a crowdsourcing study in which more than 130 respondents were asked to judge productivity-tracking technologies along the five moral dimensions.

Using statistical analyses, the researchers identified the following three criteria for judging whether an AI technology is moral:

  1. Viability – An AI application is moral if it can be easily built from existing technologies in a satisfactory manner. An AI application that is difficult to build would be considered unviable: it would inevitably fall short, be riddled with inaccuracies and biases, and therefore be considered morally unfair. For example, accurately tracking facial expressions is technically easy to do using webcams, and therefore a viable application of AI. By contrast, tracking body postures is still a hard problem because it requires complex wearable sensors that are not yet in widespread use. Body-posture tracking is therefore considered unviable AI because any system would misclassify and misinterpret the data.
  2. Non-intrusiveness – An AI application is also moral if it does not interfere with work and is "fit for purpose." An AI application that is intrusive would be considered "disloyal" to one's way of working and, at times, even authoritarian. For example, tracking facial expressions in online meetings is technically easy to do using webcams but is considered intrusive AI because it would disrupt the meeting.
  3. Responsibility – Finally, an AI application is also moral if it does not cause harm or infringe on individual rights. An AI application that has a negative effect on individuals would be considered disrespectful and, in certain cases, even harmful. For example, tracking facial expressions with cameras on office floors is viable and non-intrusive, yet would be considered harmful because it unnecessarily compromises people's privacy.

These three criteria offer a moral compass for designing AI systems. The question is how to implement them. For the first two, we already have a clear direction: how to design "viable" and "non-intrusive" technologies is a question the tech industry answers daily. The third criterion is far more complex, because "responsible" technology is an ethical concept that is hard to pin down.
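
To make that asymmetry concrete, here is a minimal, purely illustrative sketch in Python (the names and fields are hypothetical, not the system our researchers built) of how the compass might be applied in a design review. The first two criteria reduce to yes/no engineering questions, while responsibility remains a list of open ethical questions that must be worked through:

```python
from dataclasses import dataclass, field


@dataclass
class MoralCompassReview:
    """Hypothetical design-review record for a single AI feature (illustrative only)."""
    feature: str
    viable: bool                # can it be built well with existing technology?
    non_intrusive: bool         # does it avoid interfering with people's work?
    open_responsibility_questions: list[str] = field(default_factory=list)

    def passes_compass(self) -> bool:
        # Viability and non-intrusiveness are engineering questions with clear answers;
        # responsibility only clears once every harm or rights concern has been resolved.
        return self.viable and self.non_intrusive and not self.open_responsibility_questions


# Example: the meeting-tracking scenario from the list above.
review = MoralCompassReview(
    feature="facial-expression tracking in online meetings",
    viable=True,             # webcams make this technically easy
    non_intrusive=False,     # it would disrupt the meeting itself
    open_responsibility_questions=["Does it compromise participants' privacy?"],
)
print(review.passes_compass())  # False: fails on intrusiveness and responsibility
```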

In a best-case scenario, AI developers integrate ethical concepts through checklists: lists of items ensuring that ethical issues are considered during development. In practice, however, such checklists are lengthy, and each item deals with complex concepts, such as fairness or transparency, that are hard to summarize in a single sentence. As a result, developers tend to work through checklists without properly translating ethical considerations into design features.

To partially address this, in an upcoming blog post we will introduce an interactive system that prompts AI developers to think about Responsible AI principles. When building technologies that track productivity, developers will be prompted to consider, for example, individual privacy, and the system will offer potential solutions such as techniques that extract anonymized facial features instead of capturing entire faces, or that provide aggregate analytics instead of individual metrics.
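
As a rough illustration of that last idea, the sketch below (again hypothetical, with an assumed function name and group-size threshold, not taken from our system) shows what reporting aggregate analytics instead of individual metrics might look like:

```python
import statistics


def aggregate_productivity(per_person_scores: list[float], min_group_size: int = 5):
    """Report team-level statistics rather than per-person metrics.

    The minimum group size is an assumed safeguard: with too few people,
    individual scores could be inferred from the aggregate, so nothing is reported.
    """
    if len(per_person_scores) < min_group_size:
        return None  # too few people to report without exposing individuals
    return {
        "team_size": len(per_person_scores),
        "median_score": statistics.median(per_person_scores),
        "mean_score": round(statistics.mean(per_person_scores), 2),
    }


# Individual scores stay local; only the aggregate ever leaves the team.
print(aggregate_productivity([0.62, 0.71, 0.58, 0.80, 0.67, 0.74]))
print(aggregate_productivity([0.62, 0.71]))  # None: group too small to report safely
```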

Innovation often moves forward for its own sake, without taking moral considerations into account. Companies frequently deploy technologies simply because they are easy to develop and do not interfere with work, yet fail to consider complex aspects such as fairness or transparency. If moral considerations are disregarded from the start, AI could be applied blindly and could, for example, turn a pleasant working environment into a workplace where surveillance spirals out of control, not to mention lead to far more catastrophic results.

It is much harder to undo the damage later than to prevent it from the start.

Interested in learning more about Responsible AI?

Nokia has defined six principles to guide all AI research in the future

Daniele Quercia

About Daniele Quercia

Daniele Quercia is Department Head of Social Dynamics at Nokia Bell Labs Cambridge (UK). He has been named one of Fortune magazine's 2014 Data All-Stars and spoke about "happy maps" at TED. His research focuses on urban informatics and has received best paper awards from Ubicomp 2014 and ICWSM 2015, as well as an honourable mention from ICWSM 2013. He was a Research Scientist at Yahoo Labs, a Horizon senior researcher at the University of Cambridge, and a Postdoctoral Associate in the Department of Urban Studies and Planning at MIT. He received his PhD from University College London; his thesis was sponsored by Microsoft Research and was nominated for the BCS Best British PhD Dissertation in Computer Science.