Who’s afraid of Frankenstein?
Beyond utopia or dystopia: building accountable AI
Artificial intelligence (AI) has entered all aspects of our lives. Who gets hired, fired, or granted a loan, and who receives medical care, is increasingly determined by algorithms. Many see AI as the new cornerstone of a bright technocentric future, in which it improves efficiency and productivity and frees people to focus on more creative and interesting tasks. For these optimists, AI can bring extensive benefits to fields as diverse as healthcare, finance, and education.
The illusion of automation
Yet AI algorithms currently often fail to generalize to situations outside their training set. Hence, in sectors where the composition of data and behaviors changes over time, the quality of predictions can degrade. In expert occupations like medicine, research has shown that doctors who were originally in favor of AI ended up not using it because of the system's constant errors. Moreover, in some cases, despite claims of an AI-powered system, the work is actually done by poorly paid humans, often based in the Global South. The story of underpaid Kenyan workers cleaning up data for OpenAI and other big tech companies, revealed by Time magazine in 2023, is one among many.
In contrast to the techno-utopians who believe in a brave new world powered by AI, techno-dystopians highlight the risks AI algorithms pose to privacy, inclusion, equity, and freedom. They see these technologies as threatening our capacity to act as critical and creative thinkers. In this dystopian vision, the algorithms of big tech companies enable a panopticon where Big Brother is powerful and manipulative, yet largely invisible.
Research has shown how algorithmic systems reflect biases in data, in design assumptions, and in implementation practices. Their influence on predictive policing, criminal justice, and hiring platforms and processes is worrisome, as is their role in amplifying politics via social media, on top of the human costs of the underpaid, exploitative labor behind AI that has recently come to light.
Who’s the real monster?
Interestingly, whether AI is seen as good or bad, there is a sense that it is inevitable and that we must accept it ‘as it is’. I would like to propose another perspective, one in which we can, and in fact should, make choices. Mary Shelley’s Frankenstein provides a great metaphor here. The creature in Shelley’s novel is not an unthinking, evil brute; rather, it has been placed in the world by a human creator unable and unwilling to acknowledge and take responsibility for his creation. Frankenstein is not the name of the creature but of its creator, Victor Frankenstein. This invites us to wonder who the monster in the relationship really is, and it emphasizes that the role of inventors and their values cannot be ignored. Victor Frankenstein pushes the boundaries of science to create ‘life’, but his failure to appreciate the qualitative implications for his creation’s lived experience results in a monstrous hybrid. Shelley’s novel thus suggests that the creature is only potentially dangerous: it becomes monstrous when not taken care of.
Similarly, we should question the origin of the biases in AI algorithms, which are reflections of our society’s biases. AI systems are designed and developed by human beings, each with their own biases, and they are fed data produced by us, reflecting our biases. So while AI can create many opportunities, if our algorithmic technology is not well thought through, mindfully designed, and taken care of, it might become monstrous like Frankenstein’s creature. And this is not only the responsibility of technologists. AI, like any technology, is socially constructed. It is embedded in a broad system, from design and development to implementation and use, and is the result of decisions made by people. Therefore, there are actions we can take to ensure that AI is built and deployed in ways that do not discriminate against minorities and that instead protect the rights of the public. We cannot simply delegate responsibility to the technology, or leave it to developers or big tech companies to decide what can be built and deployed.
Beyond technologists: an interdisciplinary future
Trustworthy AI starts with evaluative practices mandated by laws and industry standards. All of us, implementers, managers, policymakers, and users, as well as designers and developers, should be accountable for the technology we implement and should ask questions, even difficult ones: who owns the data, what data will be used, how and by whom, and who might be discriminated against or left out? In some cases, we may realize that an AI system is not the best solution and decide not to implement it in a given context.
If AI is to become a counterpart, it is important that we broaden the conversation among technologists, one focused on speed, technical fixes, and the latest release, into an interdisciplinary conversation. We need to include all users and impacted communities in reflecting on social practices, cultural norms, and the context in which the technology is integrated. This also means including philosophers, psychologists, historians, politicians, anthropologists, and artists. It is only through these conversations, in which we face our responsibilities, unlike Victor Frankenstein, that we can improve system design, advance transparency, expose discrimination, and create more positive outcomes for all.
Originally published in Portuguese in Observador, 2024.

