Human agency in the age of algorithms

“A reductive, simplistic, yet relatively neat mathematical model is offered as a streamlined solution for dealing with messy complex human behaviour.”

The AI project
Artificial intelligence (AI) and machine learning (ML) have become pervasive terms, found in ever-expanding contexts and applications. These include tasks such as playing board games, generating text and predicting human behaviour.

Machine learning algorithms are mathematical computations: given a well-defined problem and a set of loss functions (measures of the downsides or costs of errors), they take inputs, perform well-defined calculations, and return outputs. Building mathematical models of behaviour, action and preference with these algorithms therefore first requires framing human phenomena as well-defined problems with a set of optimisation functions (criteria for selecting the best outcome from a range of alternatives). More importantly, however, we can only optimise for what we can codify and measure.
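To make this concrete, here is a minimal sketch in Python of what 'a well-defined problem with a loss function' looks like in practice. The data and the mean-squared-error loss are hypothetical inventions for illustration, not any particular deployed system:

```python
import numpy as np

# Hypothetical illustration: 'behaviour' must first be codified as numbers
# (features x, outcome y) before any learning can happen. Anything not
# captured in x and y is invisible to the model.

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))                      # three measured traits
true_w = np.array([0.5, -1.0, 2.0])
y = x @ true_w + rng.normal(scale=0.1, size=100)   # the codified outcome

def loss(w):
    """Loss function: the cost of errors, here mean squared error."""
    return np.mean((x @ w - y) ** 2)

# Optimisation: gradient descent selects the 'best' parameters under this
# loss, and only under this loss.
w = np.zeros(3)
for _ in range(500):
    grad = 2 * x.T @ (x @ w - y) / len(y)
    w -= 0.05 * grad

print(loss(w))  # small residual error on what was measured; nothing more
```

Whatever is left out of `x` and `y` simply does not exist for the optimiser; that is the force of 'we can only optimise for what we can codify and measure'.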

However remote it may seem, ML is in practice, at its core, a process of exhaustively mapping, automating, controlling and manipulating – and thereby directly or indirectly influencing – the physical, psychological and social world.

Most importantly, the attempt to map, formalise and automate (one aspect at a time, in a piecemeal manner) does not come from any desire to develop a meaningful understanding of ‘the human condition’. Often, the intent to control, manipulate, influence and predict future behaviours and actions is the major driving force. This line of thinking tends to see the human condition as a problem that needs to be solved. Sometimes, attempts to map, formalise and automate are intended to make the human redundant. The reductive, simplistic, yet relatively neat mathematical model is offered as a streamlined and cost-effective solution for dealing with messy complex human behaviour.

Computer vision, for example, uses visual input such as image and video data to measure, map, formalise and, in effect, structure and influence human cognition, behaviour and interactions, the physical environment, and eventually the social fabric. Every aspect of daily life is tracked, monitored, recorded, modelled and surveilled – from our geographic position to our facial expressions, from our body movement to our breathing, scratching, gait speed and sleep patterns – via the traces we leave, such as radio signals, location data or video footage.

Such mapping, formalising, monitoring and surveilling of human behaviour and the physical and social environment is done under the guise of convenience. ‘Frictionless’ transactions, ‘smart’ home assistants and fitness tracking systems, for example, are presented as tools that make daily life seamless. At the same time, such technologies, and the normalisation of these practices, normalise surveillance, along with the underlying, mistaken assumption that human affairs and the social world can be entirely mapped, controlled and predicted, and that the human condition is a problem that can and needs to be solved.

These attempts to measure, codify and control human behaviour and social systems are both futile and dangerous.

Human cognition
Treating human cognition in mechanistic terms, as static and self-contained, tends to mistakenly equate persons with brains. Focus on the brain as the seat of cognition has led to an overemphasis on abstract symbol manipulation and on mental capabilities. Although the brain is important, humans are more than brains. Furthermore, the brain itself is a complex, dynamic and interactive organ that is not yet fully understood.

We know that humans are not brains in a vat, nor do we exist in a social, political and historical vacuum. People are embodied beings that necessarily emerge in a web of relations. Human bodies are marked, according to cognitive scientists Ezequiel Di Paolo and colleagues, by ‘open-ended, innumerable relational possibilities, potentialities, and virtualities.’ As living bodies, which themselves change over time, we are compelled to eat, breathe, sometimes fall ill, fall in love and so on. We necessarily have points of view, moral values, commitments and lived experiences. We are sense-making organisms that relate to the world and to others in ways that are significant to us. Excitement, pain, pleasure, joy, embarrassment and outrage are some of the feelings that we are compelled to feel by virtue of our relational existence.

Human beings, according to Di Paolo, are not something that can be finalised and defined once and for all, but are always under construction and marked by ambiguities, imperfections, vulnerabilities, contradictions, inconsistencies, frictions and tensions. Human ‘nature,’ if we can say anything about it at all, is the continual struggle for sense-making and resolving tensions. In short, due to these ambiguities, idiosyncrasies, peculiarities, inconsistencies, and dynamic interactions, human beings are indeterminable, unpredictable and noncompressible (meaning that they cannot be neatly captured in data or algorithms).

Furthermore, social norms and asymmetrical power structures permeate and shape our cognition and the world around us. This means that factors such as our class, gender, ethnicity, sexuality, (dis)ability, place of birth, the language we speak (including our accents), skin colour and other similar subtle factors either present opportunities or create obstacles in how a person’s capabilities are perceived.

The fact that humans and the social world at large are not something that can be neatly mapped, formalised, automated or predicted does not stop researchers, in big tech and startups alike, from putting forward tools and models that reductively or misleadingly claim to sort, classify and predict aspects of human behaviour, characteristics and actions.

Humans are not machines and machines are not humans
Simplistic views of human cognition are not new. Historically, the brain has been compared with some of the most advanced inventions or ideas of the time. At the height of the industrial revolution, the steam engine served as an apt metaphor for the brain. Go back a few thousand years and we find that the heart (not the brain) was seen as the central organ of cognition. Our paradigms and metaphors are reflections of their time, not naturally given facts. Since the 1970s, a new and powerful metaphor has come to pervade: the brain as an information-processing machine. This is neither an argument against metaphors nor an attempt to deny that there can be parallels between brains and computers.

Metaphors are an important tool for understanding complex concepts. However, the problem arises when we forget that metaphors are just that: metaphors. As the evolutionary geneticist Richard Lewontin contends, ‘We have become so used to the atomistic machine view of the world that originated with Descartes that we have forgotten that it is a metaphor. We no longer think, as Descartes did, that the world is like a clock. We think it is a clock.’

That is, we have come to think of ourselves in terms of machines and, conversely, to think of machines as humans. To do so, we have reduced (and at times degraded) complex social and relational behaviour to its simplest form while at the same time elevating machines to a status at or above the human, as impartial arbiters of human knowledge. As researchers Alexis Baria and Keith Cross (2021) put it, ‘the human mind is afforded less complexity than is owed, and the computer is afforded more wisdom than is due.’ When we see ourselves in machinic terms, human complexities, messiness and ambiguities are seen as an inconvenience that gets in the way of ‘progress’, rather than as a reality and part of what it means to be human. Similarly, from the perspective of the autonomous vehicle developer, when human behaviour conflicts with what AI models predict, pedestrian behaviour, for example, is treated as irrational, anomalous and dangerous.

Current advanced technologies, such as generative models that produce ‘human-like’ text, images or voices, are treated as authors, poets or artists. State-of-the-art models such as ChatGPT (and its variants), Stable Diffusion and MidJourney can produce impressive outputs given carefully curated prompts. ChatGPT, for example, can write rap lyrics, code and even essays when given unambiguously framed prompts. Similarly, Stable Diffusion and MidJourney can produce images in the style of a given artist. It is important to note that these models also produce outputs that are harmful (descriptions of how adding crushed porcelain to breast milk is beneficial), discriminatory (treating race as a determining factor for being a good scientist, with the white race ending up superior), or plausible-sounding outputs that present non-existent ‘facts’ as true.

It is a mistake to treat these models as human, or human-like. Putting issues such as art forgery and plagiarism aside, what these models are doing is generating text and image outputs based on text and image data ‘seen’ previously. Human creativity, be it the production of a piece of art or music, is a process that is far from input → output. Instead, it is characterised by struggles, negotiations and frictions. We do things with intent, compassion, worry, jealousy, care, wit and humour. And these are not phenomena that can be entirely measured, datafied, mapped or automated. There is no optimisation function for love, compassion or suffering. In the words of computer scientist Joseph Weizenbaum, considered one of the fathers of modern artificial intelligence, ‘No other organism, and certainly no computer, can be made to confront genuine human problems in human terms.’
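The point about generation from previously ‘seen’ data can be illustrated with a deliberately crude toy: a bigram text generator, sketched below with a made-up corpus. Real generative models are vastly more sophisticated, but the basic move – producing fluent-looking output by recombining patterns from prior data, with no intent or lived experience behind it – is the same.

```python
import random
from collections import defaultdict

# Toy bigram 'language model': record which word follows which in a
# (hypothetical) training corpus, then generate by sampling continuations.
corpus = "the model writes what the data writes the data writes what it saw"
words = corpus.split()

follows = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)        # every observed continuation

def generate(start, length=8, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:              # dead end: this word was never 'seen'
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # fluent-looking output, assembled from seen fragments
```

The output can look coherent, even stylish, yet nothing in the process involves intent, struggle or sense-making.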

Reflecting on the lyrics ChatGPT has produced ‘in the style of Nick Cave’, Nick Cave wrote in January 2023 that:

Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder that destroys all one has strived to produce in the past. It is those dangerous, heart-stopping departures that catapult the artist beyond the limits of what he or she recognises as their known self… it is the breathless confrontation with one’s vulnerability, one’s perilousness, one’s smallness, pitted against a sense of sudden shocking discovery… This is what we humble humans can offer, that AI can only mimic.(www.theredhandfiles.com/chat-gpt-what-do-you-think)

The human condition in the age of algorithms
Generative, classification and prediction models have become commonplace, deployed in public and private spaces such as airports, offices and homes in major cities across the globe, monitoring, tracking and predicting our purchases, our movements and our actions. Search engines selectively direct information, political advertising and news towards us (based on profiles they have assigned us, often without our knowledge); recommender systems nudge our next steps; and even seemingly simple and benign applications like spellcheck tend to determine what ‘correct’ use of language is. From political processes such as law enforcement, border control and aid allocation to very personal activity such as dating and the monitoring of our health, algorithmic systems mediate, influence and direct decision-making to varying extents, often with little or no mechanism for transparency or accountability.

As we have seen, human behaviour and social relations are non-determinable, in constant flux, and therefore non-predictable and not compressible into data or models. Yet our every move, our behaviours and our actions are continually, ubiquitously tracked, surveilled, datafied, modelled and nudged in certain directions and away from others. Rather than effectively capturing non-determinable, non-predictable and complex human behaviour, these models end up modelling and amplifying societal norms, conventions and stereotypes – automating the status quo. As a result, models that often fail to work properly, models that do not work for all, or, worse, models that automate the status quo and are by definition discriminatory continue to be built and deployed. Directly or indirectly, these models serve as constraints: tools that limit options, opportunities, choices, agency and freedom.
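To see the mechanism, consider a minimal, entirely hypothetical sketch: a simple model fitted by least squares to historically biased decisions. The ‘hiring’ scenario, the data and the coefficients are all synthetic inventions for illustration, not a real system.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)   # a protected attribute (0 or 1)
skill = rng.normal(size=n)           # the trait we claim to select for

# Hypothetical historical decisions: skill matters, but group 1 was
# systematically penalised. The bias is baked into the training labels.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

# Fit a simple linear score to the historical record (least squares).
X = np.column_stack([np.ones(n), skill, group])
w, *_ = np.linalg.lstsq(X, hired.astype(float), rcond=None)

# The learned model penalises group membership just as the past did:
# optimising fit to yesterday's decisions replicates yesterday's pattern.
print(dict(zip(["intercept", "skill", "group"], np.round(w, 2))))
```

Nothing in the fitting procedure distinguishes signal from injustice; whatever pattern the historical record contains, the model reproduces and, once deployed at scale, amplifies.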

Obvious consequences of this include a gradual invasion of privacy, the normalisation of surveillance, simplification of human complexity, reduction of agency, and the perpetuation of negative stereotyping and other forms of injustice. In most cases, the fact that these technologies influence critical decisions and constrain opportunities remains hidden. Most surveillance technology producers operate in the dark and frequently go out of their way to hide their existence. One knock-on effect of this is that as we become aware that we are being watched, monitored and predicted, we alter our behaviour accordingly, or ‘game the models’. Automation of content moderation on social media platforms, for example, has forced minoritised communities, who are disproportionately censored, to alter their language in order to discuss topics without their content getting flagged or removed. This alteration has come to be known as ‘algospeak’. With the awareness that most recruiters and employers use automated filtering algorithms to screen CVs, there is now a flourishing market providing advice on how to write CVs that can pass algorithmic screening.

For the most part, market monopoly, wealth accumulation and the power to dominate are the central drives behind mass data collection and model building for big tech corporations, powerful institutions and start-ups alike. Questions of social impact, negative downstream effects and accountability are obfuscated, or diverted as irrelevant, by those in a position to build and deploy these scientifically and ethically questionable models. This situation marks the rise of unrivalled power and influence concentrated in a few wealthy hands that have neither the experience nor the expertise to understand the issues, nor the interest in alleviating them. Consequently, models that sort, classify, influence and predict are free to nudge human behaviour and social systems towards values that align with these central drives – wealth and power. But not, for example, justice.

As a result, democratic processes have been wrecked; people have been wrongfully arrested; disenfranchised data workers exploited; people denied medical care, loans, jobs, mortgages and welfare benefits; underprivileged students’ exam results downgraded; artists plagiarised; communal violence promoted (with dire results ranging from death to political instability); the list goes on.

Designers, producers and vendors of algorithmic models go out of their way to evade responsibility and accountability. Without proper accountability mechanisms – for example, to screen and assess deployed models and to ensure ethical consideration in the design of those being built – and without regulatory frameworks that protect the individuals and communities most impacted by algorithmic systems, we will continue to replicate these injustices at mass scale.

References

Baria, A. T., & Cross, K. (2021). The brain is a computer is a brain: Neuroscience’s internal debate and the social significance of the Computational Metaphor. arXiv preprint arXiv:2107.14042.

Di Paolo, E. A., Cuffari, E. C., & De Jaegher, H. (2018). Linguistic bodies: The continuity between life and language. Cambridge, MA: MIT Press.

Lewontin, R. (1996). Biology as ideology: The doctrine of DNA. Toronto: House of Anansi.

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. San Francisco: W. H. Freeman.

Abeba Birhane

Abeba Birhane is a cognitive scientist researching human behaviour, social systems, and responsible and ethical Artificial Intelligence (AI). Her interdisciplinary research sits at the intersections of embodied cognitive science, machine learning, complexity science and decoloniality theories. Her work includes audits of computational models and large-scale datasets. Birhane is a Senior Fellow in Trustworthy AI at the Mozilla Foundation and an Adjunct Assistant Professor in the School of Computer Science and Statistics at Trinity College Dublin, Ireland.

© Abeba Birhane