Empowered, not dependent

01.11.2023 The author believes that artificial intelligence has the potential to make humanity stronger. But this will only succeed if emphasis is placed on collaboration between humans and machines. Sarah Dégallier Rochat talks about a more humane approach to digital transformation.

Are machines on the verge of replacing humans? Will they take our jobs? In the age of artificial intelligence, is there still room for human qualities? Current discourse tends to paint these new, increasingly sophisticated technologies in an apocalyptic light. This way of thinking about artificial intelligence (AI) exaggerates its abilities, however, giving rise to notions that are only loosely related to the technology's actual capabilities.

Such hyperbole slows both social progress and research. It prevents us from developing solutions that put us on an equal footing with AI.

The myth of the artificial superhuman

In common parlance, outside of the specialised academic community, the term ‘AI’ is largely used in the sense of a fully humanoid artificial general intelligence (AGI). Such anthropomorphic systems are omnipresent in popular culture: just think of HAL 9000 in 2001: A Space Odyssey, KITT in Knight Rider or the T-800 in the Terminator franchise. They make it easy to forget that this incarnation of artificial intelligence is still entirely fictional, for the moment at least.

At the other end of the spectrum stands narrow artificial intelligence (NAI). Rather than replicating human intelligence, NAI technology is designed to solve a highly complex yet highly specific problem. This type of artificial intelligence, unlike AGI, is already very advanced.

Experience with automation in industrial contexts has taught us that repetitive, strenuous manual labour is the first type of work to be taken over by machines. Work that requires an ability to improvise or solve problems continues, for now, to resist successive waves of automation. Artificial intelligence requires enormous volumes of data and extensive training; at present, it can merely imitate creativity. As adaptable human beings, we have every reason to be relatively optimistic about the future.

Mechanisation or support?

Automation technologies are unlikely to replace us in the near future, but they will almost certainly change the way we work. As AI advances, it threatens to render human decision-making increasingly superfluous.

This risk challenges us to be mindful in how we develop new technologies, to ensure that our interaction with AI supports and strengthens human workers rather than mechanising them. Rather than thinking of artificial intelligence as a replacement for humans, we should shift our focus to augmented intelligence, which supplements and supports our own abilities.

Augmented intelligence

For an example of augmented intelligence, imagine a situation in which AI-based software automatically extracts relevant information from thousands of pages of text so that a human can then draw conclusions from the selected sections. Or think of an AI highlighting anomalies on X-ray images, suggesting translations or detecting quality deficiencies so that a human can respond to the information swiftly: that, too, is augmented intelligence.

Such a division of labour between humans and machines is helpful: analysing data “manually” or identifying anomalies is a time-consuming, error-prone process for humans. Conversely, the machine cannot consider any aspects of the data beyond its own narrow view. Augmented intelligence, then, facilitates faster and more efficient decisions that are nevertheless still “human”.
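To make this division of labour concrete, the following is a minimal sketch of what such a workflow could look like in code. It is not based on any specific system mentioned here; the model stub, the score threshold and the review step are hypothetical placeholders. The point is simply that the software flags and ranks cases, while the final decision remains with a person.

# Minimal, hypothetical sketch of an "augmented intelligence" workflow:
# the model only flags and ranks suspicious cases; a human decides.

from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    data: dict

def anomaly_score(case: Case) -> float:
    """Stand-in for a trained model: returns a score between 0 and 1."""
    return float(case.data.get("deviation", 0.0))

def triage(cases: list, threshold: float = 0.7) -> list:
    """Keep only cases worth a human's attention, highest score first."""
    flagged = [c for c in cases if anomaly_score(c) >= threshold]
    return sorted(flagged, key=anomaly_score, reverse=True)

def human_review(case: Case) -> str:
    """The step that is deliberately not automated: a person decides."""
    return input(f"Decision for {case.case_id} (accept/reject): ")

if __name__ == "__main__":
    cases = [Case("A-17", {"deviation": 0.92}), Case("B-03", {"deviation": 0.12})]
    for case in triage(cases):
        print(case.case_id, "->", human_review(case))

The design choice worth noting is that the software narrows the search space but never closes it: everything the model does not flag can still be audited, and everything it does flag is decided by a person.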

Of course, certain problems arise when we try to implement such systems. Human employees struggle to question the results produced by AI. After all, AI cannot think logically: it merely computes solutions based on probabilities calculated from the training data and a specified set of target parameters. Like any new technology, augmented intelligence carries the risk of further entrenching stereotypes and power imbalances instead of resolving them.

Ethics for the black box

As a society, we must decide which moral values should underpin the interaction between human and machine in future. An entirely unbiased artificial intelligence is difficult to conceive of: alongside familiar prejudices, training data also contain patterns of which humans are not yet even aware.

If an AI tasked with pre-selecting job applicants’ CVs detects in its training data that people who happen to resemble a person with a largely negative public image were less likely to be hired, it will reproduce this pattern in its recommendations. Such patterns in the data give rise to unexpected forms of discrimination against us humans. A totally neutral AI is therefore unrealistic. What we can decide, however, is how AI should handle known forms of discrimination, marginalisation and exclusion.
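To illustrate how such a pattern can slip into a system unnoticed, here is a small, entirely hypothetical sketch in Python. The data, the feature names and the model are invented for illustration only: a model trained on historical hiring decisions will reuse even an irrelevant attribute if that attribute happened to correlate with past outcomes, unless its outputs are explicitly audited.

# Hypothetical illustration: a model trained on biased historical hiring
# decisions reproduces the bias. All data and feature names are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Feature 1: a genuinely relevant qualification score.
# Feature 2: an irrelevant attribute (e.g. a superficial resemblance)
# that past recruiters reacted to negatively.
qualification = rng.normal(0.0, 1.0, n)
attribute = rng.integers(0, 2, n)

# Historical decisions: driven by qualification, but penalising the attribute.
logits = 1.5 * qualification - 1.0 * attribute
hired = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X = np.column_stack([qualification, attribute])
model = LogisticRegression().fit(X, hired)

# Audit: identical qualification, different attribute.
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])  # second probability is clearly lower

Simply dropping the attribute from the inputs is not always enough, because other features can act as proxies for it; the more robust step, in line with the argument above, is to decide explicitly which known forms of discrimination a system must be audited for and corrected against.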

The augmented worker

With augmented intelligence, the machine is no longer a replacement for humans but rather an extension and reinforcement of the human workforce. Indeed, industry is currently undergoing a paradigm shift: in the past, machines were complex and expensive, and workers had very few opportunities to modify them or to supplement their own abilities with them. New technologies such as collaborative robotics are democratising automation, making it accessible even to those without expert training.

Such technologies turn AI and robotics into tools that make workers more efficient and support them in their work. They no longer aim to reduce the scope of human decision-making to an absolute minimum in order to prevent error; instead, they assist self-directed workers and draw their attention to potential errors.

We need an AI code of conduct

The idea that AI may soon be better at being human than we are is as fascinating as it is misguided. Instead of dwelling on utopian visions of the future, we need to work out how we want humans and machines to interact in future. Ultimately, it is not the technology itself that poses a threat to humans, only the way in which we implement and approach it.

Illustration of a medical professional working with digital tools.
Humane AI helps people to do their work better.
