Author: Oliver Lukitsch
As always, we start by saying what we want to say, and then we say it. The powerful new AI systems we can’t stop discussing these days raise fears that creative human work will be outsourced. At the heart of these fears is the idea that there will be a clear division of labor between humans and machines, between us and (generative) AI systems. It’s an important debate to have.
But this blog post considers how we can interact with AI to make sense of the world and ourselves. Our point is: We do not (or should not) simply divide and conquer. Rather than letting AI do our work, we can create loops of prompting to explore our own human ideas more deeply.
Doing so will be essential to understanding the role of AI systems in knowledge creation and innovation projects. And it will matter for any business betting on AI: it can make those systems more successful in knowledge processes and especially in innovation. Let’s get started.
Where Things Currently Stand
ChatGPT was released just a few months ago. The debate has already come a long way since then. For instance, in our previous blog post, we highlighted how AI could diminish human creativity and autonomy – while not becoming creative itself.
That said, AI clearly has enormous and rapidly growing potential to change our lives. Whether it does so in good or bad ways depends on how measured and thoughtful we are about the transition to and adoption of this new technology.
As the warning signs mount, many commentators claim that AI is too complex to understand. Some argue that it will replace jobs – and since the new generation of models is generative by design, it may not replace the jobs we expected, those built on repetitive tasks, but rather those where language and communication skills are important. These jobs are expensive – if done by humans. So is the training of skilled workers.
But the outcry has a common core: the upshot is that artificial intelligence will replace the human workforce, making it redundant and irrelevant.
There is a real prospect that this will happen. Job displacement is on the horizon. Some would even say the displacement of humanity.
So it may seem strange to change perspective. But it is worth it. Follow along.
Lost in Division of Labor
When we think about the coexistence of humans and generative AI systems, our thinking revolves around the idea of a clear-cut (albeit still undefined) division of labor: some tasks are best (or only) done by humans, and some tasks are best done by AI. We take this idea for granted.
We, therefore, tend to entertain fantasies of substitution and fears of replacement. We do so with non-generative AI systems, which we think of as non-creative because they solve repetitive tasks that can be automated: self-driving vehicles, traffic management, power grid management, logistics, and construction, among others.
The same goes for generative AI, which could replace human workers in copywriting, journalism, content creation (on social media), screenwriting, or game development. Of course, this list is far from exhaustive – and it grows longer by the day.
But all too often, we are left with the impression that it is either humans or AI; it is us or it.
Where will the future boundaries between human work and AI capabilities lie? This is a question that every business even remotely dependent on (generative) AI tools has to ask itself. After all, it is a question of whether expensive human work can be “outsourced” to a highly efficient and thus “cheap” AI system. It is a question that will occupy management consultants and companies alike.
So clearly, it is understandable that this question is a focus at the moment. It is existential for businesses – and perhaps for humanity in general.
Knowledge Economy without Humans?
The current debate is anything but pointless. But it must not distract us from the fact that AI systems are rolling out, and rolling out fast. We will face them for better or worse – and we will interact with them.
We are talking about AI today because the new generation of systems accessible to so many of us is generative: these machines create content rather than just carrying out repetitive tasks. Non-generative AI is good at (or used for) analytical, non-creative tasks such as driving or predictive policing.
While non-generative AI already intrudes on the realm of “knowledge work” and the knowledge economy, generative AI does so with far greater intensity and force. So far, AI has not competed with human creativity and the human gift of interpretation – of deriving meaning from mere structure. However, generative AI systems (such as large language models, or LLMs) are a force to be reckoned with. Any job that involves (creative) writing is on the line. No wonder our fantasies of replacement and substitution are, once again, all over the place.
But things are not so simple and dire. If we look at how we interact with generative AI such as ChatGPT, we can already see forms of interaction emerging that are not just about outsourcing skills and relinquishing abilities: we are already beginning to “make sense of the world” with AI.
First, let us briefly explain what we mean by that – and second, let us show you why this will be critical for future innovation projects.
Joint Sense-Making and Emergent Meaning
While very impressive, ChatGPT cannot (yet) be used unsupervised. Humans must check the AI’s output for coherence of meaning. Furthermore, the more unexpected and novel the content, the less ChatGPT can contribute to its creation. Many people who work with the tool report that it assists their workflow, for instance by providing code, but they also often refine their prompts in repeated rounds until the bot gives them what they need. Of course, there are moments when AI magically comes up with something useful right out of the box. But more often than not, we get something we cannot use right away.
Instead, we experience a back-and-forth, a continuous loop between us and the machine. In the process, we refine what we want it to say, and often – and this is essential – we come to understand better what we want it to say in the first place. (In a way, we are learning together with the machine.)
One can describe this process as a joint “sense-making” between humans and AI. It is like a dance between man and machine, one that helps us better explore what we want to say and think.
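The back-and-forth described above can be sketched as a simple loop. The snippet below is only an illustration, not a real API: `ask_model` is a hypothetical stand-in for any generative model call (e.g., an LLM chat endpoint), and the prompt “refinement” is stubbed out, since in practice it is the human who rewrites the prompt after reading each draft.

```python
# A minimal sketch of the human-AI prompting loop: prompt, inspect the
# draft, refine the prompt, repeat. All names here are hypothetical.

def ask_model(prompt: str) -> str:
    # Placeholder for a generative model call; it simply echoes a draft.
    return f"Draft based on: {prompt}"

def refine_loop(initial_prompt: str, is_good_enough, max_rounds: int = 5) -> str:
    """Iteratively prompt, inspect, and refine until the human is satisfied."""
    prompt = initial_prompt
    draft = ask_model(prompt)
    for round_no in range(max_rounds):
        if is_good_enough(draft):
            break
        # In reality, the human rewrites the prompt in light of the draft --
        # and, in doing so, clarifies what they wanted to say in the first place.
        prompt = f"{prompt} (refined after round {round_no + 1})"
        draft = ask_model(prompt)
    return draft
```

The essential point is not the code but its shape: the human sits inside the loop as the judge of meaning, rather than handing the task over once and walking away.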
It is important to note that this form of sense-making does not live up to the complexity of what cognitive scientists call “participatory sense-making”: the process of two autonomous beings interacting with one another. Sense-making with AI systems so far involves only a single autonomous, agentive party – the human.
Still, as one can see right away, the dynamic that can emerge between a human agent and an artificial, generative partner is not just one of “division of labor”. It is an interaction in which we, together with bots, can still assume creative agency. Rather than undermining our human capacity for creation, it can help elevate it to new levels.
We are not saying that generative AI will automatically make us more creative or elevate our creativity to a new level. On the contrary, we are well aware that AI can undermine our creativity if we cede our “generative” powers to LLMs. AI has the sad “potential” to make us creation-lazy. The sheer ease of producing text, as opposed to investing the time and effort of thinking it through, is an appealing allure. Especially because the rollout of such systems is fast and confusing, we must pay attention to such risks.
But the existence of risks does not negate the existence of positive potentials. Our impulse might be to cultivate human creativity by shunning generative AI altogether: withdrawal and AI abstinence appear to be viable ways to safeguard the cultivation of the human mind.
Yet, we can be creative with AI. Generative AI can be a partner in crime for genuinely creative exploration of surprising content. We can work with AI and explore meaning together, and make sense of the world and ourselves by using these systems responsibly.
The Case for Innovation
Businesses, but also educators, should pay attention not only to how work will be divided between humans and (AI-driven) machines. They also need to focus on how we can use AI to make sense of the world and ourselves. But why should they? The mere possibility does not justify investing time and effort in cultivating such skills.
First, AI is here to stay. New applications based on generative deep learning technology will emerge faster than ever. And we will likely encounter them in every aspect of our lives.
Knowledge processes (e.g., in education systems) and innovation projects will therefore be prime targets for such new applications. Some of these applications may not deal well with the risks mentioned above: Do they enhance creativity, or replace it (with features that are not creative in themselves)? Do they pay attention to the dynamic loops that help users make sense of their own thinking? Do they support genuine, human-centered creation of novelty? Or do they simply use human prompts to “brainstorm” in a stochastic, non-creative way?
Our blog post is just the beginning of a critical conversation we need to have. In the future, we should arrive at a blueprint for how human-AI interaction must be structured to benefit human creativity. It will be a long road, but rest assured: if we want AI to be a thriving partner in meaningful innovation processes, the way we work with it must look more like a dance than a division of labor.
As we said in a previous blog post, this is also critical for keeping human creativity in the loop of AI-driven knowledge production. This is an important step in ensuring that knowledge creation evolves in a purposeful and demand-driven manner. However, this discussion is beyond the scope of this blog post.
To summarize, what we want to add to the discussion is this:
We can use generative AI systems to augment (rather than replace) human creativity. We can use AI systems to make sense of the world and ourselves. And we can use them in knowledge-creation processes that are deeply human-centered.
But the premise is that human prompting must itself be prompted. It is positively disruptive prompting by the AI that triggers thoughts and ideas we would not otherwise have had. To prompt or to be prompted – that is the question.
Images: Massimiliano Sarno and Cash Macanaya @ Unsplash