Photo story: This photo was taken in Penang, Malaysia (18 May 2025) at the end of the Tan Jetty.
The question of what humans are supposed to do in a world increasingly shaped by machines is a practical concern that underlies many of our professional, educational, and societal choices. While there is no single answer, I find myself returning to this question repeatedly, noticing how seemingly unrelated observations begin to converge toward a more coherent picture.
From Chess to AI Avatars
In a 2024 interview at the World Economic Forum, OpenAI’s CEO, Sam Altman, remarked that people still enjoy watching humans play chess because of the emotional tension and unfolding drama that come with human fallibility. Despite knowing that machines have long surpassed us in strategic precision, we continue to be drawn to the human effort, the stakes, and the unpredictable choices. This suggests that people may not always be seeking the most optimal outcome; they may be seeking connection, narrative, and resonance.
The implication is that even when machines can do something better, human involvement can still generate value. And when demand remains, supply tends to follow. Chess schools still operate, tournaments are still organised, and players continue to train. At face value, this seems to affirm a kind of enduring space for human participation, even in fields already “solved” by machines.
However, this sense of reassurance is fragile. It depends on a social appetite that may not last. In some domains, the appetite is already shifting.
Increasingly, we are seeing AI influencer avatars drive engagement and sales. In one recent example, AI avatars generated millions of dollars in revenue within hours, far outpacing their human counterparts. What felt artificial or even absurd just a few years ago may quickly become standard practice, particularly in markets where efficiency, consistency, and scale are paramount.
This shift suggests that our collective palate is more malleable than we often assume. What initially feels uncomfortable may become acceptable because it is the default option that works. This has significant implications for how human roles are defined and valued. The question of “what should I do” is not solely a matter of personal aspiration or capability; it is shaped by what others are willing to accept, engage with, or pay for.
In other words, human purpose is not only determined internally; it is also constrained and enabled by external demand. And when that demand begins to shift toward non-human alternatives, the shape of human work shifts with it.
If There Are Rules, There’s Risk
One way to understand where AI excels is to look at where humans have already created structure. Whenever a task follows rules, whether formal, informal, or even intuitive, it becomes a candidate for automation. Our natural inclination to create processes, systems, and repeatable frameworks is what makes many of our contributions transferable to machines.
This includes not just technical or analytical work, but also forms of emotional labour. Fields like education, coaching, and therapy, once considered immune to replacement by machines, are increasingly structured around observable patterns, decision trees, and outcome-based frameworks. As soon as these structures become clear enough, they can be trained into models.
I am not saying that AI will replicate every nuance of human interaction. In fact, it doesn’t need to. It may only need to be “good enough”, and in many commercial settings that threshold has already been crossed.
Given this trajectory, it is worth questioning the idea that some professions are inherently safe from disruption. Lists that rank jobs by their likelihood of being replaced offer temporary comfort, but they often reinforce a static view of value; one that assumes the system itself remains unchanged. In reality, systems evolve in response to pressure, opportunity, and experimentation.
A more constructive question might be: What would it take for a machine to replace what I do? And following that: Are the conditions already forming for that replacement to happen? Once we understand how and where value is defined, we can better position ourselves to navigate, and even shape the changes ahead.
The Future Will Be “AI-First”
Much of the current conversation around AI still revolves around automation: how to use it to streamline existing workflows, augment current roles, or increase efficiency within legacy systems. But perhaps the more profound shift will come from ventures that are AI-first, those that do not begin with existing human structures at all.
AI-first businesses are not asking how machines can help humans work better. They are asking what the work would look like if it were designed entirely around machine capabilities from the outset. These ventures are not constrained by legacy logic, and as a result, they often discover entirely new ways to solve problems, serve customers, and scale operations.
Such companies will accelerate change. Their emergence will fundamentally redraw the boundaries of what constitutes human work.
But this trajectory is not fixed, because human behaviour is not passive. People react, resist, reimagine, and create. New problems will emerge that are not easily solved by machines. New forms of value will be recognised. Cultural shifts, ethical debates, and social constraints will all play a role in shaping the paths ahead. Recognising the possibilities is not the same as surrendering to them. It is simply a way of seeing more clearly.
I don’t have a prescription for what we should do. Perhaps I can offer a set of inquiring lenses through which to view the world:
Am I still building around assumptions that were shaped in a pre-AI world? Am I solving for continuity, or actively looking for where the rules can bend? Am I questioning where human presence is truly essential?
These questions create space for rethinking. To me, it’s enough to begin again with a clearer pair of eyes.