Is rapid deregulation of Artificial Intelligence (AI) a good thing?
An opinion piece by the Bennett Institute’s Professor Felix Creutzig
President Trump’s rapid deregulation of artificial intelligence (AI) in the US, putting economic growth above safety, presents significant global risks to governance, democracy, and the climate emergency.
As systems with autonomous decision-making abilities and unpredictable agency emerge, they threaten to spiral out of control – with potentially devastating consequences for humanity.
The newly released International AI Safety Report 2025, initially commissioned by the UK government, highlights three primary concerns: malicious use, systemic risks, and unforeseen failures. Despite efforts to regulate AI at the European level, industry concentration among a few dominant firms exacerbates the risks.
The increasing capacity of AI systems to generate misinformation—through deepfake technologies and targeted disinformation campaigns—poses an immediate threat to democratic stability. Without strict oversight, AI-driven manipulation of public opinion could erode trust in institutions, disrupt fair elections, and diminish the capacity of societies to address climate change and other pressing global challenges.
Beyond political destabilization, AI’s unchecked expansion raises pressing environmental and economic concerns.
Advanced models, including China’s highly efficient DeepSeek, demonstrate that AI replication and deployment are becoming increasingly affordable—and cheaper access accelerates usage, amplifying global energy consumption. AI’s reliance on vast computational resources contributes to significant carbon emissions and water use, an aspect often overlooked in mainstream AI discourse.
My research, working alongside Nobel laureate Daron Acemoglu, has found that AI-driven inequalities—whether through labour market disruptions or algorithmic polarization—can hinder governance capacities. We have shown how AI-induced economic divides and targeted misinformation campaigns weaken democratic institutions, making it harder to implement effective climate policies.
To mitigate these escalating threats, AI models should undergo rigorous pre-market security assessments, with independent oversight ensuring algorithmic transparency and bias mitigation. Furthermore, social media platforms must be held accountable for identifying and restricting AI-generated disinformation before it undermines public discourse. The European Union, through policies like the Digital Services Act, has the tools to enforce stricter AI governance. Now is the time for strict enforcement.