Generative AI Risks Entering an Uncreative Vicious Circle. How Can We Avoid It?

Author: Oliver Lukitsch

This is a sneak peek into an upcoming paper on the pitfalls of generative AI, written by our co-founder Markus Peschl. We’ll link it here once it’s published.

The AI revolution is all over the media, and rightly so. The technology is indeed impressive. Much has been said about its capabilities, but many observers and experts stress that AI cannot yet replace genuine human creativity. 

But even if there is no artificial substitute for human ingenuity yet, the power and accessibility of AI tools threaten to replace our creativity with a non-creative mechanism of recycling existing knowledge. We could end up in a vicious cycle of reproducing more of the same over and over again. But how could this happen, what would be the impact, and how can we avoid it?

Although artificial intelligence so far lacks human agency and creativity, it nevertheless has the uncanny power to shape our human environment in a disembodied way. Keeping the human mind and its extraordinary creativity in the loop will be more important than ever. 

Learning From the Past

At first glance, ChatGPT’s capabilities are impressive. We can have a seamless conversation with the bot. Its responses are fluent, intelligible, and often convincing. Not only is the knowledge it ‘possesses’ impressive; so is the way it quotes and paraphrases. 

Yet large language models (LLMs) have serious limitations. They provide convincing answers, but their relationship to the truth is unreliable and often unpredictable. The reason lies in their fundamental architecture: LLMs (and many other AI systems) do not “understand” the world around them. Strictly speaking, they do not even grasp the meaning of the sentences they produce. 

Rather, ChatGPT and its kind are very sophisticated forms of autocomplete. They complete sentences by predicting the most likely next word, and they can do this because they have been trained on unbelievably large amounts of data. This is important for understanding why so-called “generative” AI does not simply copy existing text, but comes up with its own answers to user queries. 
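The “autocomplete” idea can be made concrete with a deliberately tiny sketch. Real LLMs use neural networks over billions of parameters and subword tokens, not word-count tables, but the core move – predict a likely continuation from patterns seen in training data – is the same. The toy corpus and greedy strategy below are purely illustrative:

```python
from collections import Counter, defaultdict

# A miniature "training set" (stand-in for the internet-scale data LLMs see).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, length=4):
    """Greedily extend a prompt by always picking the most frequent next word."""
    out = [word]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break  # never seen this word followed by anything
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))
```

Note how the output can only ever recombine sequences present in the training data – the model has no way to say something the corpus never suggested.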

It is also important to understand the system’s limitations. The sentences and images it produces are derived from existing patterns found in the vast sea of data (almost exclusively provided by the Internet). In other words, generative AI seeks to consolidate these past regularities rather than add something new to the present.

Vicious Knowledge Recycling 

There is, of course, nothing wrong with a technology that can expertly amalgamate the past. It’s a fantastic tool – as long as we are aware of its foggy relationship with the truth and reality. 

But let us first take a look at the “knowledge base” generative AI draws on – in other words, where it gets its knowledge from. 

In a nutshell, systems such as ChatGPT were trained on huge amounts of data from the internet, most of it created by humans. Much of that data is the product of genuine human creativity: stories written by authors, scientific knowledge brought forth by human minds.

However, AI systems also learn from feedback on their outputs: responses that users deem positive, helpful, and serviceable are reinforced. In that sense, generative AIs also “feed” on their own output – they self-reinforce favorable responses. 

Moreover, since AI can create large amounts of content in extremely short periods, the web will soon be flooded with AI-generated content. Hence, AI will produce a significant share of the very data on which future models will be trained. 
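A deliberately crude simulation of this feedback loop: each “generation” of a model is fitted to samples of the previous generation’s output, with a slight bias toward typical results (the 0.9 factor below is an illustrative stand-in for a model’s preference for high-probability content, not a measured value). The statistical spread of the data – a rough proxy for diversity – shrinks generation after generation:

```python
import random
import statistics

random.seed(0)

# Generation 0: diverse "human" data drawn from a wide distribution.
data = [random.gauss(0, 1) for _ in range(1000)]

spreads = []
for generation in range(10):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    spreads.append(sigma)
    # The next "model" is trained only on samples of the previous one,
    # and (like greedy decoding) slightly favors typical outputs.
    data = [random.gauss(mu, sigma * 0.9) for _ in range(1000)]

print(f"diversity: {spreads[0]:.2f} -> {spreads[-1]:.2f}")
```

The point is not the exact numbers but the direction: with no fresh input from outside the loop, each round preserves less of the original variety.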

Is this a bad thing? This circle of knowledge recycling risks becoming vicious: a loop of repetitive output, with less and less plurality and originality in the content produced. But why? 

The End of Genuine Creativity

We often assume that human creativity is something that is realized by our brains and happens just inside our minds. 

But as cognitive scientists have shown time and again, creativity requires and includes our embodied engagement with our environment and its exploration. Creativity is not something that happens just inside us, but something that happens in our engagement with the world around us. And such embodied exploration means something very different from the predictive approach of LLMs. Embodied creativity is about uncovering unforeseen potential rather than predicting the most likely outcomes.

The AI systems currently being hyped have no access to the world in any relevant sense. They do not sense it directly and therefore cannot explore it to produce real novelty.

However, today’s AI systems are not completely divorced from human creativity. In fact, they draw on the vast resources of human creativity by being trained on it. So while generative AI may not be creative in itself, it still draws on the originality of embodied human creativity.

Therefore, we claim it’s not about making or keeping AI creative itself. Rather, it is about keeping human creativity in the loop. 

There’s nothing inherently wrong with using AI to create content, as long as we’re aware that it can’t replace human creativity, and as long as it’s continually fed with human creative input.

A painting depicting the complex nature of artificial intelligence, created by Dall-E, a generative AI from OpenAI.

How to Avoid the Vicious Cycle of Uncreativity?

To break the vicious circle that can emerge over time, as more and more people use AI in their creative work, we must consider the conditions and contexts in which genuine novelty emerges. Radically new knowledge arises when human agents engage with their material environment – like a sculptor carving a form out of marble. 

The physical, embodied interaction and contact with the world around us enable human beings to realize the inherent potentials that lie dormant in their surroundings. 

We humans can create novelty not because we predict, but because we act, are involved in the world, and can tap into an infinite space of unrealized possibilities. It is in this space that we realize true novelty. And we can only do this because we have bodies profoundly embedded in a biological world of which we are a part. 

To break the vicious cycle of AI knowledge recycling, it is necessary to regularly reintroduce human creativity into training datasets. The organizations that will be building and training the next versions of AI models need to keep this in mind and make sure that the data they are working with contains a significant amount of text, images, sounds, and videos that were not created using generative AI.
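What this could look like in practice is a curation step in the training-data pipeline. The sketch below assumes each record carries an `ai_generated` flag – in reality, reliably detecting AI-generated content is itself an open problem – and caps synthetic items so that human-created ones keep a fixed majority. All names and the 70% threshold are hypothetical:

```python
import random

def curate(corpus, min_human_fraction=0.7, seed=42):
    """Subsample AI-generated items so human-created ones keep a fixed majority."""
    human = [d for d in corpus if not d["ai_generated"]]
    synthetic = [d for d in corpus if d["ai_generated"]]
    # Cap synthetic items so human items make up at least min_human_fraction.
    max_synthetic = int(len(human) * (1 - min_human_fraction) / min_human_fraction)
    rng = random.Random(seed)
    kept = rng.sample(synthetic, min(len(synthetic), max_synthetic))
    return human + kept

# Illustrative corpus: half human-written, half AI-generated documents.
corpus = [{"text": f"doc{i}", "ai_generated": i % 2 == 0} for i in range(100)]
curated = curate(corpus)
human_share = sum(not d["ai_generated"] for d in curated) / len(curated)
print(f"human share after curation: {human_share:.0%}")
```

The design choice here – discarding surplus synthetic data rather than collecting more human data – is only one option; weighting samples during training would be another.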

AI creates pseudo “novelty” by reproducing what it already “knows.” Humans create novelty by shaping it in interaction with the external world. Thus, human-generated novelty must be continuously fed into the knowledge production loop to keep generative AI systems maximally useful – and to prevent them from viciously recycling old knowledge.

A Recycled World 

One final note of caution. Human-machine interaction is not confined to a virtual sphere. The artifacts we create by interacting with AI will be the “endowments” of tomorrow’s world and social environment. The technologies we create can, in turn, profoundly change the world around us and shape our lives in far-reaching ways. Think of how the internet and social media have changed the world we live in today. Generative AI promises a similar disruption. 

All the more reason to be vigilant about how AI might displace human creativity from the knowledge-creation cycle.

Who we are and what we define as human is deeply intertwined with the world we build around us. In other words, it is the world we build that makes us human: how we understand ourselves and live our lives is strongly shaped by the environment in which we live. 

Yet we are building a world in which AI is becoming incredibly important in our lives – a technology that will shape who we are. That is why we must be careful not to let a disembodied, dehumanized technology become the bedrock of our existence. Our human capacity for self-fulfillment and creative agency could be driven out by a technology that then comes to define what it means to be human.

On a Positive Note 

But generative AI is anything but a dark path. Quite the opposite: there is a plethora of ways human society can benefit. 

Cognitive technologies have brought a new level and quality of automation to knowledge work, allowing less interesting yet laborious work to be outsourced to machines. Of course, many fear for their jobs, but a socially balanced digital transition can create a better life for many of us and free up genuine human capacities. 

This takes us to another point. AI technologies have already begun to increase the pressure to radically rethink what human education, cognition, learning, teaching, and knowledge production look like in an age where humans work side by side with intelligent machines. 

Education can finally move away from its “learning-by-heart”, “know-it-all” approach toward forming “meta-skills”: the ability to make judgments in an uncertain world, to reflect critically, and, of course, to be creative. 

We stand on the cusp of a transformative moment that has already begun to unfold. It will be all the more important that we do not let the technology driving this transformation blindly lead us through it. Rather, we should embrace human agency and value human creativity as the guiding force in shaping our interaction with this exciting technology.


Image by D-Koi @ Unsplash