


Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI, safety and ethics, explained | 22.03.24

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes

Today’s newsletter features guest contributor Dr. Joy Buolamwini — acclaimed computer scientist, author, artist, and founder of the Algorithmic Justice League — in her quest to ‘unmask the coded gaze’.


Dr. Joy reveals what it means to be ‘excoded’, the real-life impact of algorithmic bias, and how poetry informs AI. Find an excerpt of the interview below, or read the full profile on the CogX Blog here.


In other news, YouTube greenlights deepfake cartoons for kids, a recent Microsoft report warns 87% of the UK isn’t ready for AI, and creators have learnt that AI + conspiracy theories + social media = a whole lot of profit (at the cost of misinformation)...

- Charlie and the Research and Intelligence Team

P.S. We’ve just launched The CogX Transatlantic Accelerator: a joint campaign with the UK Government to connect the most innovative UK startups with US markets. If you are, or know, a UK startup, you can apply here for over $20k worth of support to attend, exhibit and network at CogX Festival in LA on 7th May.

The Top Story

Unmasking the coded gaze: Dr. Joy Buolamwini's fight for fair AI


With Dr Joy Buolamwini


“We do not have to accept digital humiliation as the tax for innovation.”


The following is a discussion between ourselves and Dr. Joy:

1. The book begins with an encounter with the coded gaze. Can you explain what that is?

The coded gaze refers to the ways in which the priorities, preferences, and prejudices of those who have the power to shape technology can propagate harm, such as discrimination and erasure. 


It highlights how the machines we build reflect the biases and values of their creators, impacting individuals' opportunities and liberties. While the coded gaze is not always explicit, it is ingrained in the fabric of society, similar to systemic forms of oppression like patriarchy and white supremacy. I encountered what I call the coded gaze when I had to put on a white mask for my dark-skinned face to be detected.

2. What does it mean to be excoded?

To be excoded means to be an individual or part of a community that is harmed by algorithmic systems. This harm can manifest in various ways, such as receiving unfair rankings or treatment from automated systems, being denied opportunities or services based on algorithmic decisions, or facing exclusion and discrimination due to the use of AI systems. 


Being excoded highlights the risks and realities of how AI systems can negatively impact people's lives, especially those who are already marginalized or vulnerable.

3. What are some of the ways algorithmic bias and discrimination are impacting everyday people?

One significant impact is the perpetuation of bias and discrimination in AI systems, leading to harmful outcomes for individuals from marginalized communities. For example, algorithmic audits and evocative audits reveal how systemic discrimination is reflected in algorithmic harms, affecting individuals on a personal level. These biases can result in unfair treatment, misidentification, and targeting of certain groups, such as Black individuals and Brown communities, in areas like criminal justice and facial recognition technologies.


Take, for example, Porcha Woodruff, who was eight months pregnant when police arrested her for robbery and carjacking on the basis of faulty face surveillance tools. While in custody, she started experiencing contractions in her holding cell and had to be rushed to hospital upon her release. AI-powered facial recognition failed her, putting her and her unborn child at risk in a country where Black maternal mortality rates are double to triple those of their counterparts.


Sadly, Porcha is far from the last to be impacted by AI's failures. Last year, Louise Stivers, a graduate student at the University of California, Davis, was accused of using generative AI to cheat even though she hadn't. Despite the algorithm being wrong and her innocence ultimately being proven, the investigation remains on her record, and she will have to self-report it to law schools and state bar associations. The algorithm got it wrong, and she won't be the last victim of its errors.


Deepfakes, AI-generated photorealistic images and videos, are another technology that could have disastrous consequences. Aside from potentially skewing election results through the spread of false information, deepfakes are often used to superimpose the faces of celebrities onto the bodies of individuals performing sexual acts, without any regard for consent. The most recent and prominent examples include Taylor Swift and Bobbi Althoff.


Ultimately, no one is immune from AI harms. The need for biometric rights becomes ever more apparent as we see how easily your likeness can be taken and transformed, or your face surveilled for nefarious uses. It is crucial that we use our voices to speak out against harmful uses of AI. If we remain silent, an AI backlash will result in pushback against beneficial applications of this technology. We do not have to accept digital humiliation as the tax for innovation.


… want to keep reading? Check out the full interview here on the CogX Blog.


Share your expertise! Want to be a guest contributor in our next issue? Drop us a line at:


Ethics and Governance

🇪🇺Here’s what the EU AI Act will, and won’t, change: The Act will introduce regulatory changes aimed at enhancing safety and accountability in AI development while preserving innovation. Key changes include:

  • Enhanced Transparency: Mandatory labelling of AI-generated content.

  • Increased Oversight: An EU AI Office for public complaints and enforcement.

  • Bans on High-Risk AI: Prohibitions on certain AI uses in sensitive sectors like healthcare and policing.

However, the Act won’t change everything:

  • Exemptions for Law Enforcement: Biometric data use and facial recognition in serious crime cases.

  • Open-Source Leniency: Open-source AI projects are exempt from many obligations.

  • Status Quo for General AI Use: Most AI applications, those not considered high-risk, remain largely unaffected.


🛡️Microsoft reports the UK is not ready for AI, with 87% of businesses vulnerable to cybercrime. Incorporating AI into cyber-defence is vital to the UK's AI superpower ambition, potentially boosting the economy by £52 billion through enhanced resilience.


🤝The US proposed a nonbinding UN resolution for global AI regulation to ensure "safe and trustworthy" AI, co-sponsored by over 50 countries. The resolution seeks to bridge the tech divide between developed and developing nations — excluding military applications.


🤖Africa is moving to regulate AI amid rapid growth, proposing AI apps for Tanzanian farmers and South African social projects. The African Union is currently drafting policy, with seven nations crafting strategies: debates on timing, infrastructure and innovation are ongoing.


AI Dilemmas

🎮Google's gamer AI, and why it poses a serious problem: DeepMind is training SIMA — an AI gaming companion that follows verbal commands — sparking concerns about unfair advantages in online gaming. Google emphasises that SIMA is a research tool, in a bid to head off misuse.

🚦YouTube's latest rules greenlight AI cartoons for kids, mandating disclosure for all deepfaked media while exempting children's animations. This policy, intended to promote online safety, opens the door for young viewers to encounter unchecked deepfake content.

🚀Enjoying our content? We’re also on LinkedIn — Follow us to stay in the loop with the week's most impactful AI news and research!


Insights & Research

📝AI researchers are using gen-AI to assist in peer reviewing ML papers, finds a study analysing major AI conferences. The study found that between 6.5% and 16.9% of peer reviews might have been significantly aided by AI, especially close to submission deadlines. 


🔝The unexpected mastermind behind the EU's lead in AI regulation: Dragos Tudorache, a Romanian MEP. His blend of seriousness, strategic action, and leadership in a domain where others are still grappling with basic concepts propelled the EU to the forefront of global AI leadership.


🕵️‍♂️AI is being used to create and spread conspiracy theories on social media for profit, exploiting algorithms that favour engaging content. This trend reinforces calls for platforms to implement stricter regulations and adjust monetisation strategies to prevent the proliferation of misinformation.

🧠This study finds a successful method to curb hallucination in LLMs, enhancing model reliability without additional instructions. By injecting carefully selected, misaligned keywords, it prompts LLMs to ponder "what if" scenarios, aligning responses more closely with reality.

In case you missed it

AGI in 7 months? GPT-5 on the cards? And updates from Q*? This video’s got you covered:

✍️ Enjoying this newsletter?  Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work

🚀 We’ve just launched The CogX Transatlantic Accelerator: a joint campaign with the UK Government to connect the most innovative UK startups with US markets. If you are, or know, a UK startup, you can apply here for over $20k worth of support to attend, exhibit and network at CogX Festival in LA on 7th May.
