
7 OCTOBER | LONDON 2024

SEPTEMBER 12TH - 14TH
The O2, LONDON

Preparing for AI

Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI, safety and ethics, explained | 03.05.24

If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes


On May 7th, we will be joined by EY at CogX LA for a session on The Ecosystem Edge: Scaling AI Startups for the Future to explore the symbiosis between startups and global organisations in the evolving market of AI-driven solutions. The conversation will be led by EY Global Emerging Tech Leader Jay Persaud, in discussion with startup leaders Michael Lawder of ASAPP and Anand Kannappan of Patronus AI.

 

In preparation, we sat down with Jay Persaud to discuss five key questions about EY’s commitment to the global emerging tech ecosystem. From boosting tech-startups to supercharging client AI solutions, here’s everything you need to know about the GETE. Read on for a short excerpt, or cut to the chase and read the full Q&A on our blog.

 

We cover this, PLUS regulating ‘killer robots’, how to spot AI washing, and new legislation that could see deepfakes of musicians banned in the UK…


- Charlie and the Research and Intelligence Team


P.S. You can now sign up here for the super early bird offer (75% off) to the CogX AI Summit in London on October 7th.



The Global Emerging Tech Ecosystem: An exclusive Q&A with EY.


‘When EY, our clients, and our technology companies and investors work together, 1+1+1 = 5.’


Leading EY’s Global Emerging Technology Ecosystem (GETE), Jay Persaud is putting ‘ecosystems’ at the heart of EY’s data and AI strategy. The GETE is a newly formed group created to build relationships with emerging technology firms and their investors, with a focus on harnessing the power of emerging technologies — such as AI, blockchain, data management, and cybersecurity — to drive innovation and enhance client services. 

 

In this exclusive interview, Jay Persaud sheds light on EY’s dynamic role in harnessing startup innovation to propel business strategies forward. Find out what startup founders are really focussed on (both in and beyond AI), plus EY's own AI strategy, and the lessons large organisations can learn from the scale-ups in emerging tech.

 

1. What is EY’s Global Emerging Technology Ecosystem (GETE)? 

 

In our years of transformation at EY, we’ve learned that startups and entrepreneurs are best positioned to recognize innovative market needs, which is what led us to institutionalise our approach into the Global Emerging Tech Ecosystem.

 

This means we have a global team that coordinates with many of the world’s largest companies and with startups invested in today’s emerging technology themes of AI, data management, cybersecurity and Web3, matching those startups with the EY network.

 

Through the GETE, my team is helping our clients become more innovative and productive using the best available technologies from startups, scaleups and small cap tech companies.

 

We do this by identifying and curating a powerful set of technologies along important themes such as AI, data management, cybersecurity and Web3, which we integrate into our rich mix of consulting services.

 

When EY, our clients, and our technology companies and investors work together, 1+1+1 = 5.

 

Together, it’s a winning combination to help clients be innovative, productive and stay ahead of the curve.

 

2. So much of the focus is on big tech companies. Why are EY choosing to work with startup networks?

 

We’re living in a fast-moving time in which startups are playing an extraordinary role in the economy.

 

In our years of transformation at EY, we’ve learned that startups and entrepreneurs are best positioned to recognize innovative market needs. We must go to market with them in a way that’s suitable for how they interact as small, agile companies without the resources and processes large organisations have. It’s this vision that led us to institutionalise our approach into the Emerging Tech Ecosystem.

 

3. What are startup founders focussing on with today’s AI hype? What can larger organisations learn from this?

 

AI is everywhere. Whether they’re looking to invest or deploy their own solutions, every company is focused on it. The demand is there.

 

As a result, we’re seeing a proliferation of startups – individuals and companies developing incredible tools to fill market needs amid this AI hype cycle.

 

There’s a lot on their minds: how do I build and scale this solution effectively, and how do I bring my tool to market? These are gaps that EY’s GETE can help fill.

 

But aside from lifting these tools off the ground, there are ethical, privacy and security concerns that can’t be overlooked. Founders are trying to balance the risks and rewards to roll out these tools responsibly. 

 

… want to keep reading? Check out the full interview here on the CogX Blog.

 

Share your expertise! Want to be a guest contributor in our next issue? Drop us a line at editors@cogx.live.



Ethics and Governance

 

🏛️ There’s an AI lobbying frenzy in Washington, dominated by big tech: While publicly endorsing AI regulation, these firms privately push for minimal oversight. Despite a threefold spike in AI lobbying activity over the past year, Congress has yet to pass specific legislation.

 

🚫 Austria calls for rapid regulation over 'killer robots,' stressing the need for international rules to ensure human control over AI weapons systems. With discussions at the UN yielding little progress, participants emphasise the urgency of action as AI advances rapidly.

 

🛡️ US Department of Homeland Security released new AI security guidelines focused on customising risk assessments and mitigation for specific sectors, plus enhancing supply chain security through secure AI deployment and rigorous vetting of AI model sources.

 

🤖 Russia has deployed new AI-powered anti-drone devices, named "Abzats" and "Gyurza," on the battlefield in Ukraine. These mobile jamming systems use AI to disrupt enemy drones by jamming all active frequency ranges without human intervention. 


💉 UK MHRA announced its AI regulatory strategy following the UK government’s 2023 pro-innovation white paper. The Medicines and Healthcare products Regulatory Agency aims for safe integration, better efficiency and patient safety — particularly for AI as a medical device.

 

AI Dilemmas


⚠️ AI will make online child sexual abuse much worse, warns Stanford, stressing the already burdened CyberTipline and the need for funding, reporting, and legal reforms. In response, OpenAI, Google, and Meta are partnering with organisations like Thorn and All Tech is Human to implement safety-by-design principles.

 

🕵️‍♂️ How to spot AI washing: Companies are exaggerating the functionalities of products marketed as AI-powered, often without substantial technological backing. How to avoid it? Stay sceptical, look for specific tech like neural networks or ML, and seek transparency.


🚀Enjoying our content? We’re also on LinkedIn — Follow us to stay in the loop with the week's most impactful AI news and research!

 

Insights & Research


🛡️ Military is the missing word in AI safety discussions. Despite widespread establishment of AI safety bodies, none specifically govern the military applications of AI. This gap is critical given the escalating military use of AI, such as reports that the IDF’s AI targeting systems inaccurately directed drone strikes.

 

🇺🇸 The U.S. government needs to 'get it right' on AI regulation, according to experts from government, national security, and social justice sectors. Despite bipartisan interest — driven by geopolitical rivalry with China — legislative progress is still too slow.

 

🔍 Everything you need to know about AI text detectors: These tools, which claim to distinguish human from AI-generated content, are laden with issues. As the use of AI expands, so does the use of these tools — enhancing their accuracy and reliability is crucial.


📝 How to measure LLM trustworthiness: Kersting et al. have devised a method to gauge the reliability of LLMs by assessing their harmony, denoted γ, which indicates the stability and reliability of LLM responses. Lower γ values signify higher trustworthiness.
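To make the intuition behind a γ-style score concrete, here is a minimal toy sketch: sample a model's answers to the same prompt several times and measure how much they disagree. This is an illustrative stand-in only, not Kersting et al.'s actual harmony metric; the function name and the Jaccard-based dissimilarity are our own assumptions. Lower scores mean more consistent (and, on this proxy, more trustworthy) answers.

```python
import itertools


def gamma_instability(responses):
    """Toy dispersion score over repeated LLM answers to one prompt.

    0.0 means every answer was word-for-word identical; values near 1.0
    mean the answers share almost no vocabulary. This is an illustrative
    proxy, NOT the harmony metric from the paper.
    """

    def jaccard(a, b):
        # Word-set overlap between two answers (1.0 = identical sets)
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

    pairs = list(itertools.combinations(responses, 2))
    if not pairs:
        return 0.0
    # Mean pairwise dissimilarity across all answer pairs
    return sum(1 - jaccard(a, b) for a, b in pairs) / len(pairs)


stable = ["Paris is the capital of France."] * 3
unstable = ["Paris.", "It might be Lyon.", "The capital is Marseille."]
print(gamma_instability(stable))                                # 0.0
print(gamma_instability(stable) < gamma_instability(unstable))  # True
```

In practice one would compare embeddings or logits rather than word sets, but the ranking idea is the same: a model that keeps changing its answer earns a higher γ and less trust.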


In case you missed it


Could deepfakes of musicians be banned in the UK?



✍️ Enjoying this newsletter?  Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work


🚀 You can now sign up here for the super early bird offer (75% off) to the CogX AI Summit in London on October 7th.
