
7 OCTOBER | LONDON 2024

SEPTEMBER 12TH - 14TH
The O2, LONDON

Undoing the Tech Coup: A Thrilling Conversation with Marietje Schaake



Technology promises a better world, but it also threatens our privacy, security, and democracy.



Guest Author: Marietje Schaake



Marietje Schaake, a former European Parliament member and Fellow at Stanford University's Cyber Policy Center and the Stanford Institute for Human-Centered Artificial Intelligence, has long been a leading advocate for safeguarding democracy in our digital age. In her new book, “The Tech Coup: How to Save Democracy from Silicon Valley,” she exposes the escalating tension between the unchecked power of tech giants and the alarming erosion of democratic institutions.


In our conversation with Schaake, we explore the urgent need to address this dangerous imbalance of power. From advocating for intentional policies and investments that align technology with human rights to emphasizing the fundamental necessity of transparency and accountability in the digital sphere, our discussion offers a roadmap for reclaiming democratic control in an age increasingly dominated by technology giants.



1) We hear a lot about how big tech companies are invading our privacy and selling our personal data, but your book opens up the debate on a much bigger issue. It made me realise that the way technology is changing things right now could actually threaten the very core of our democracy. Can you explain why this "digital takeover" is such a big deal?


The problem we face is systemic: an ecosystem of tech companies plays a powerful role in shaping society, without sufficient countervailing powers. Tech companies build antidemocratic tools such as spyware, make ever more critical decisions on the line between war and peace, and have acquired massive amounts of capital, data, compute, and talent. They run effective lobbying campaigns and buy away expertise from governments.


Data gathering without rules and oversight causes all kinds of problems. In ‘The Tech Coup’, I describe how in the US, amid a growing number of restrictions on the right to abortion, location data has been used in court cases against women to prove that they visited an abortion center in violation of the law. Beyond direct data gathering by search engines and social media companies, there are data brokers that combine and sell data from various sources to build the most detailed profiles of people. These profiles can then be used for commercial or political targeting, and data is now also the key ingredient for training AI models.


Data, once collected in a specific, or benign context, can become a tool used against a person in a different context. 



2) I've always thought of technology as just a tool, something that can be used for good or bad. However, your book argues that some technologies are not neutral at all - but instead come with built-in values. Can you explain more about what you mean by that? How do these built-in values sneak into tech, and why are they often at odds with what's best for society?


I argue that technology is never neutral. There are explicit decisions, such as the algorithmic settings that curate the information we see on social media. But there are also deep impacts from technologies developed through a Silicon Valley lens yet used in the context of a brewing conflict, as we saw when Facebook became the platform for incitement to genocide in Myanmar. Because there is so little transparency into the inner workings of most companies' business models, the public is prevented from understanding the values woven into these technologies.



3) For years, there has been widespread concern that Silicon Valley companies aren't being held accountable to lawmakers and regulators. What do you think is stopping governments from cracking down on them? Is it just a lack of knowledge, or are there other forces at play that we might not be aware of?


Democratic governments have given companies too much room by placing enormous trust in markets to deliver what is right for society. This is particularly true in the United States, where a combination of a utopian view of technology's impact, growing lobbying power, and deepening political divisions has hindered regulation. In the EU, things are somewhat better, with a new set of laws adopted: the Digital Services Act, the Digital Markets Act, and the AI Act.


One reason tech companies have evaded accountability is that they have lobbied directly against regulatory proposals, or tried to convince lawmakers that companies themselves are best placed to detect and mitigate risk. Framing is key here. Today, in the United States, there are powerful voices convincing politicians that US tech companies are best left unregulated, so that they can stand as a defense against the competition and threat from China.



4) Your book compares China's use of tech to tighten state control with the US's laissez-faire approach to Silicon Valley. Neither of these approaches seems ideal to me. However, in terms of digital governance, do you believe China has outpaced the West? And overall, do you think there could be a balanced approach to regulation that protects both innovation and democracy?


The Chinese model is the opposite of the rule-of-law-based model that I advocate for. What it does remind us of is that the state can be powerful if it chooses to assert itself. Regulation and innovation must go hand in hand, and democratic governments have the task of balancing other interests against innovation and disruption; they need to protect fundamental rights and national security, for example.



5) After reading your book, it's easy to see how our current democracies just can't keep up with the growing power of big tech companies. What can we do to protect democracy from this technological takeover? Does democracy need a “software update” to deal with the challenges of the digital age?


Democracies can do a whole lot more than they have done so far to keep up. Beyond regulation, democracies should use their position as customers of tech companies to set different standards, and use investments to create markets. All of this can be done with public values, openness, and transparency at heart. Democracy needs a governance boost to deal with the challenges of the digital age.

Popular Articles

1. EU's AI Act: A Landmark Regulation Reshaping the Future of Artificial Intelligence
2. Are AI’s energy demands spiralling out of control?
3. Big Tech is prioritising speed over AI safety
4. Who are the AI power users, and how to become one
5. Unmasking the coded gaze: Dr. Joy Buolamwini's fight for fair AI


Related Articles

The first AI chip to enable self-improvement (Issue 38)
Designing computer chips has long been a complex and time-consuming process. Now, Google believes it's found a way to dramatically accelerate this task using AI.

Undoing the Tech Coup: A Thrilling Conversation with Marietje Schaake (Issue 37)
Marietje Schaake, a former European Parliament member and Fellow at Stanford University’s Cyber Policy Center and the Stanford Institute for Human-Centered Artificial Intelligence, discusses the strategies outlined in her new book, 'The Tech Coup: How to Save Democracy from Silicon Valley,' on how to reclaim democratic control in the digital age.

Is Sam Altman right about the future of AI? (Issue 36)
It's not every day that a tech CEO morphs into an AI prophet, but when OpenAI's Sam Altman speaks, you know many people will be listening.

OpenAI's o1 model has been hailed as a breakthrough in AI (Issue 35)
Just days ago, the AI world was buzzing with the announcement of OpenAI's secretive "Strawberry" project. Now known as the o1 model, this AI powerhouse has shattered benchmarks. But does it live up to the hype?

Will AI take over: A conversation with Jaan Tallinn (Issue 34)
AI pioneer Jaan Tallinn, founding engineer of Skype and co-founder of the Future of Life Institute, shares his insights on AI's potential dangers — and how we can mitigate them.

DeepMind's AlphaProteo AI is outpacing years of scientific research (Issue 33)
Designing proteins from scratch has long been a scientific puzzle. Now, Google DeepMind believes it's one step closer to solving this problem.