

The CogX blog

Thought leadership on the most pressing issues of our time


Guest contributor: Future of Work

Roger Spitz


Roger Spitz, bestselling author, futurist, and leading AI expert, addresses the critical challenges of an AI future in his latest opinion piece. Spitz draws on his extensive experience in strategic foresight and AI investment to provide practical strategies to help readers adapt and stay relevant in the face of technological change.


5 Strategies to Activate Your Agency and Stay Relevant in the Age of AI


The stakes of Artificial Intelligence


Today, AI seems to be the answer to everything, irrespective of the question. AI’s potential for profound benefits comes face-to-face with existential risks. More nuanced than human extinction alone, these risks challenge our values, freedoms, even the trajectory of civilisation.


Algorithmic control, a growing shift from human judgement, subtly infiltrates our lives. It influences decisions about everything, from news feeds to job prospects, beliefs to allegiances. This erosion of agency and choice, gradual and often invisible, deserves far more attention than stereotypical doomsday scenarios.


Evolutionary pressure prioritises relevance. The pressure is on us to make more relevant decisions. But how?


The “Complex Five”: Know your Unknowns



We must understand the different types of uncertainty to anticipate our future relationships with AI.


  • Known Knowns: Things we know that we know, like “the sun rises in the morning and sets at night.” For these, we use Michele Wucker’s term “Gray Rhino.” There is no uncertainty with Gray Rhinos; we might treat them as unknown, but they are certain.


  • Unknown Knowns: Things we think we know, but find we don’t understand when they manifest. For example, increasing ocean temperatures and acidity created perfect conditions for jellyfish population growth, which then forced shutdowns when jellyfish clogged the cooling systems of nuclear reactors around the world. Here, situations we believe we understand can become complex, as small changes drive larger, less predictable impacts. To describe such unknown knowns, Postnormal Times uses the term “Black Jellyfish.”


  • Known Unknowns: Things we know we don’t know, including new diseases, impacts of climate change, and mass human migration. These are obvious, highly likely events, but few acknowledge them. We call these known unknowns “Black Elephants,” based on a term attributed to the Institute for Collapsonomics.


  • Unknown Unknowns: Things that we don’t know that we don’t know. For these unpredictable outliers, we use Nassim Nicholas Taleb’s “Black Swans.”


  • Butterfly Effects: The flapping wings of one majestic insect bring these animals together. The “Butterfly Effect,” described by meteorologist Edward Lorenz, captures how small changes can have significant and unpredictable consequences. To illustrate, Lorenz described a butterfly flapping its wings influencing tornado formation elsewhere.


All these degrees of uncertainty share a common trait: ignorance, or absence of evidence, is not evidence of absence.
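Lorenz’s point can be made concrete numerically. The sketch below (our illustration, not from the article) integrates his 1963 convection equations for two starting points that differ by one part in a hundred million; after a short time, the trajectories bear no resemblance to each other.

```python
# Lorenz's 1963 model: dx/dt = s(y - x), dy/dt = x(r - z) - y, dz/dt = xy - bz
def lorenz_step(state, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
    """Advance one small Euler step along the Lorenz equations."""
    x, y, z = state
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

def run(state, steps=8000):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

pa = run((1.0, 1.0, 1.0))
pb = run((1.0, 1.0, 1.0 + 1e-8))  # perturbed by one part in 10^8
sep = sum((u - v) ** 2 for u, v in zip(pa, pb)) ** 0.5
print(f"final separation: {sep:.3f}")  # many orders of magnitude above 1e-8
```

The equations are fully deterministic, yet the outcome is practically unpredictable: this sensitivity to initial conditions is what the Butterfly Effect names.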


 

Responding to AI’s “Complex Five”


The Big Five of safaris (buffalo, leopards, lions, elephants, and rhinos) are dangerous because of their strength and size. In our disruptive world, the Complex Five are the five animals that matter most when hunting for big disruptors: rhinos, jellyfish, swans, elephants, and butterflies.


By filtering the future of AI beyond the extreme dichotomies, we can adapt our responses to the spectrum of uncertainties and develop strategies for staying relevant in the age of AI.


1. Beware of Charging Rhinos

Imagine Wucker’s Gray Rhino: What probable, visible, and high-impact AI outcomes do we ignore, despite the evidence they are already charging at us?


  • Disinformation: AI-powered deepfakes and misinformation are rapidly growing, threatening facts, democracies, and mental health. Brexit, recent US elections, and the Covid pandemic have already demonstrated how social media can dent democracy.


  • Skills and reskilling: Today, routine cognitive tasks are being automated. Artificial narrow intelligence already has incredible capabilities which excel at precisely defined tasks, outperforming humans in specialised areas. To build skills that machines cannot quickly emulate, we must replace mechanical transfers of knowledge with human-centric capabilities like critical thinking and emotional intelligence.


With Gray Rhinos, responses often fall short because decisions come too late. To avoid being trampled, anticipate the impacts, rather than muddling along and panicking when it’s too late.


2. Don’t Get Stung by the Jellyfish

“Black Jellyfish,” the term used by Postnormal Times, indicates hidden, low-probability events with high potential impact. While the initial situation may seem predictable, Black Jellyfish grow into something that defies our imagination.


  • Info-ruption: What will be the cascading effects of information’s disruption? How do weapons of mass disinformation threaten society’s cohesiveness? Info-ruption could be the primary weapon in future wars, determining the future of humanity.


  • Scaling bias: AI’s wholesale amplification of discrimination through bias can reverberate across society.


  • Fusion of AI and BioTech: The intersections of AI, biology, and technology could challenge the status of humans as dominant beings. What defines sustainable humanity?


Info-ruption is an arms race. As technology amplifies inaccurate information and bias, we must develop equally powerful tools (and mindsets) to fight these.


To respond to AI’s Black Jellyfish, we need to consider snowballing effects by asking how these reverberations could cascade further, and even become irreversible. Ask “What if this expanded larger than expected?” and “What else might this impact?”


3. Address the Elephant in the Room

Black Elephants are obvious threats, but few are willing to acknowledge them. They are similar to Gray Rhinos, except that, for now, the elephant is standing still rather than imminently charging. When Black Elephants are discussed, conflicting views translate into confusion and inaction.


  • Reinventing education: Our knowledge-driven education models will produce a massive number of people who won’t keep up in our nonlinear, ever-changing world. The current AI debate neglects the critical need to reimagine education. AI threatens not by its existence, but by our education systems failing to adapt.


  • Deskilling decision-making: As we delegate to AI systems, our decision-making capacities erode, causing us to lose the habit of making decisions ourselves. As algorithms increasingly impose their decisions on us, we lose opportunities to exercise agency.


Black Elephants require mobilising action, aligning stakeholders, and understanding the changes throughout our complex systems. In specific situations, own your response. Don’t let Black Elephants blindside you, or they will morph into Gray Rhinos and charge.


The Era of Techistentialism: Reinstating Agency


Today, humanity faces both technological and existential conditions that can no longer be separated. We define this phenomenon as Techistentialism.


Through AI, technology is challenging us in strategic decision-making, a realm historically specific to humans. Here, technology confronts the existential dimension, as we stand on the edge of our free will.


Jean-Paul Sartre powerfully articulated the human condition: “existence precedes essence,” whereby our agency emerges through choice. But if technology is determining outcomes on our behalf, our agency is curtailed and our choices may be beyond our control.


Techistentialism is our attempt to apply this philosophical perspective to sense-making and decision-making in our contemporary technocratic environment.


Machines don’t need to become superintelligent in order to challenge us. The issue at hand is a question of understanding the nature of our own capabilities in relation to a machine’s computational rationality.


We should not underestimate the severity of deskilling. By delegating our decision-making capabilities to algorithms, reliance may slip into dependence.


The true existential risk is not machines taking over the world, but the opposite: humans starting to operate like idle machines, unable to connect the emerging dots of today’s complex world.



Reinventing education, from the playground to the boardroom, is now an existential priority. We need to form new relationships with inquiry, experimentation, failure, and creativity to help us problem-solve out of the existential risks we face.


4. Build Resilience for Black Swans

Taleb’s Black Swans are unforeseeable but extremely high-impact events. The issue is that we don’t know what we don’t know. Even for AI, the odds of these rare events and their runaway chain reactions aren’t computable.


  • Artificial General Intelligence: What future technological developments are imaginable (or unimaginable)? Is reaching AGI really possible, and what would be the ramifications?


  • Superintelligent AI systems: What happens if our creation surpasses combined human intelligence and outgrows our ability to control it? Could it pursue goals that pose an existential threat to our species?


  • Extreme catastrophic failures: Cross-impacts stemming from interacting AI systems can lead to drastic and irreparable outcomes.


Responses to Black Swans include building resilient foundations and paying attention to rare events with profound impacts. However unpredictable Black Swans are, we can still be anticipatory, while implementing guardrails for the randomness of our world.


Ask not what AI might do to humans but what humanity will choose to do in relation to AI. Look for the nonobvious. Accept randomness. Be aware of cognitive bias as the modern world becomes dominated by very rare events. When Black Swans appear, rise up from the devastation.


5. Expect the Unexpected from our Majestic Butterfly

The butterfly effect is liminal: It can mutate into the other animals. How do the Complex Five snowball as they collide?


Systemic disruption is a breeding ground for Butterfly Effects. Not preparing comes at a high cost. Our global systems, from food to energy, are interdependent. Impacts are not siloed, and neither should be our approach to assessing AI risks and future-preparedness.


AI is developing quickly, and the goalposts to remain relevant are constantly moving. Anything we think we know today in relation to AI will change tomorrow.


The best response to AI’s Butterfly Effect is to build resilience, implement adaptive strategies, and expect the unexpected.



Applying our Complex Five matrix to AI can help us plan possible responses. By recognising the degrees of uncertainty, we can better prepare for a range of unknown futures and surprises ahead.
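The matrix can be summarised as a simple lookup table. The sketch below is our own paraphrase of the article’s categories and responses, not the author’s formal framework:

```python
# A minimal sketch of the Complex Five matrix: each animal maps to its
# degree of uncertainty and the headline response described above.
COMPLEX_FIVE = {
    "Gray Rhino":      ("known known",       "anticipate impacts before the charge"),
    "Black Jellyfish": ("unknown known",     "probe cascading, snowballing effects"),
    "Black Elephant":  ("known unknown",     "mobilise action and align stakeholders"),
    "Black Swan":      ("unknown unknown",   "build resilient foundations and guardrails"),
    "Butterfly":       ("liminal, systemic", "expect the unexpected and keep adapting"),
}

def response_for(animal: str) -> str:
    """Return a one-line summary for one of the Complex Five."""
    kind, response = COMPLEX_FIVE[animal]
    return f"{animal} ({kind}): {response}"

for animal in COMPLEX_FIVE:
    print(response_for(animal))
```

Even this toy structure makes the article’s point visible: each degree of uncertainty calls for a distinct response, rather than a single posture towards “AI risk.”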


 

Author:


Roger Spitz is the bestselling author of Disrupt With Impact: Achieve Business Success in an Unpredictable World and the four-volume collection The Definitive Guide to Thriving on Disruption, from which this article is derived. President of Techistential (a strategic foresight practice) and founder of the Disruptive Futures Institute (a think tank) in San Francisco, Spitz is a leading expert on, and investor in, Artificial Intelligence, and is known for coining the term “Techistentialism”.


He publishes extensively on the future of strategic decision-making and AI. Spitz is also a partner at Vektor Partners (Palo Alto, London), a VC fund investing in the future of mobility. As former Global Head of Technology M&A at BNP Paribas, Spitz advised on over 50 transactions with a total deal value of $25bn.



 

Did you enjoy this post? Then you’ll love our weekly briefings on The Future of Work. Check out some previous editions here, or just cut straight to the chase and subscribe to our newsletters exploring AI, net zero, investing, cinema, and deeptech. 
