A comparison of government responses to the AI revolution.
- Courtenay Crow
- Feb 7
- 11 min read
Updated: Feb 10

What is ‘AI’?
Definitions are not exactly the industry’s strong suit. But let’s outline the basics.
Artificial Intelligence technologies (‘AI’ for short here) earn their title by carrying out the kinds of functions we associate with human cognition. Examples include problem-solving, reasoning, and learning. There are many different kinds of AI, with varying abilities and ways of operating. For instance, ‘narrow’ AI can effectively perform specific tasks, whereas ‘artificial general intelligence’ (AGI), which doesn’t exist yet, could in theory carry out any cognitive task that a human can. Similarly, whilst ‘reactive machines’ are AI technologies with no memory, ‘limited memory’ AI can draw on past experiences to improve its performance. When you create a ChatGPT account, for example, it asks whether you’d like it to remember previous interactions in order to better understand your preferences.
If you’re still a little confused, don’t worry: so are government specialists. That’s why many countries, like Japan, have no legally recognised definition. Even the official definitions of AI that do exist vary between and within countries. For instance, there is no unified definition of AI that applies across the complex patchwork of AI measures in the USA. This lack of clarity is one of the key challenges of regulation.
Are we living through an ‘AI Revolution’?
AI has been around for decades (for extra nerdy detail, see Oxford’s ‘A history of AI’). That said, many of us not living as enclosed Carthusian monks will have noticed a recent boom in generative AI, particularly chatbots. These are often based on large language models (LLMs), which are trained on vast quantities of text to understand and generate language. In November 2022, OpenAI’s ChatGPT was launched to the public, swiftly followed by competitor chatbots. What’s more, the 2024 Nobel Prize in Chemistry was awarded partly to those behind DeepMind’s AI model AlphaFold, which can predict the structures of around 200 million proteins. All this is to say that the development of AI has recently reached a staggering pace. In fact, a 2023 open letter signed by leading figures in the tech world stated that AI technologies are “now becoming human-competitive at general tasks”. Some, like Harvard Professor Mihir Desai, contend that future AI developments “will not be nearly as revolutionary or imminent as promised”. But it’s undeniable that the technological innovations that have already occurred are significant in themselves.
The new ‘Manhattan Project’? Why governments want to lead on AI.
The USA’s Stargate plans, which involve investing $500bn in domestic AI infrastructure, have been memorably dubbed ‘America’s AI Manhattan Project’. This reflects the sense that global leadership in AI will confer the same calibre of geopolitical advantage as being the first to develop an atomic weapon. Some view AGI as the direct parallel to the atom bomb. It’s therefore understandable that so many governments are expressing intentions to become future AI leaders.
The OECD’s excellent policy paper lists 10 potential major benefits from AI. Here we’ll focus on four that seem to matter most to governments. First, embracing AI has a strong economic rationale. Matt Clifford, author of the UK’s AI Opportunities Action Plan (January 2025), predicted that AI could generate £400bn of economic growth for the country by 2030. It’s unclear exactly how this figure was calculated, but few doubt that AI can boost productivity. Naturally, this appeals to governments, as growth = more money without raising taxes. Similarly, it makes sense to tempt leading AI companies to your own soil, or grow your own, as they’ll provide a bountiful harvest of corporation tax income.
The second key reason why governments are keen to promote AI is defence-related. AI is already being used to develop smarter weapons and more sophisticated cyber attacks. Any country that falls behind in that regard, and is unable to defend itself, faces immense security threats. Thirdly, governments are excited by how AI technologies might turbocharge scientific progress, bringing substantial improvements to a broad range of important spheres like health, agriculture and climate. Lastly, many governments are keen to use AI technologies to make public services more effective. This would be a vote-winner and money-saver if orchestrated well.
How to win the battle for AI supremacy…
In recent years there has been a boom in the creation of AI strategies. This is true not just of ‘the West’ and East Asia, but across the board. For instance, the African Union laid out its Continental AI Strategy in July 2024. Many of these strategies involve institutionalising a focus on AI in government, ploughing money into research and development, building infrastructure to boost energy capacity and computing power (or ‘compute’, as the tech-bros call it), creating incentives to lure over AI talent, offering tax breaks and other subsidies to AI companies, positioning the government as a leading customer for AI pilots, providing access to government data assets, and cutting regulation. Some governments also focus on making sure others don’t win. For example, the USA has methodically restricted China’s access to the most advanced semiconductor chips. However, as indicated by the panic in response to the evident sophistication of Chinese start-up DeepSeek’s chatbot (powered by second-tier chips), this doesn’t necessarily work.
Naturally, different government responses to the AI revolution use a different mix of these methods. Thanks to the NHS, the UK is especially well placed to provide high-quality health data for training AI technologies, although concerns about data privacy will undoubtedly be prevalent in the public sphere. Some countries will find it politically easier than others to relax immigration restrictions to attract AI talent; this is already emerging as a point of contention within Trump’s MAGA coalition. Perhaps most strikingly, all countries have access to different levels of private and public sector funding for AI. Stargate’s $100bn of initial funding (and, in theory, the $400bn to come) is derived entirely from the private sector. In comparison, the EU’s AI Factories initiative is mostly publicly funded and will invest roughly two orders of magnitude less, around €2bn. It should be noted that DeepSeek purports to have spent under $6m on the compute used to train V3 (note - this excludes the cost of wages, real estate, and chips). This suggests AI innovation could be cheaper than previously thought, but it does not negate that, in general, more money is helpful.
Despite the intensification of government focus on how to promote AI development, some issues remain relatively neglected. One is ensuring the sustainability of the energy sources powering the required increases in compute. Estimates vary, but Alex de Vries has posited that AI could be gobbling up as much as 134 terawatt-hours of electricity a year by 2027. To put that in context, that would be around 0.5% of all global electricity consumption, and the figure is only likely to grow. Consequently, although it is probable that many governments will initially resort to oil, gas and nuclear, it makes sense to base any long-term AI vision on a renewable energy supply. In a similar vein, as suggested at the Seoul Summit by South Korean Minister Lee Jong-Ho, it would be worth investing in research around “low power AI chips”.
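As a rough sanity check on that percentage, here is a minimal back-of-envelope sketch, assuming global electricity consumption of roughly 27,000 TWh a year (a ballpark assumption, not a figure taken from de Vries’s analysis):

```python
# Back-of-envelope check: the upper AI estimate as a share of global electricity.
ai_electricity_twh = 134            # upper estimate for AI electricity use by 2027
global_electricity_twh = 27_000     # assumed global annual electricity consumption (rough ballpark)

share = ai_electricity_twh / global_electricity_twh
print(f"AI share of global electricity: {share:.2%}")  # prints roughly 0.50%
```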
More attention is also required to enable all segments of society to benefit from AI and have a say in its development. Populations will need to be upskilled in response to how AI is changing the world of work, starting at school level and continuing throughout the life-cycle. Whilst governments have made noises about this, much more detailed and wide-ranging plans are required, and fast. Additionally, few strategies address how to prevent people from viewing AI as purely a threat. Many AI strategies take a rather top-down view of things, but public trust is essential for the successful adoption of these technologies.
“Should we risk loss of control of our civilisation?” Why (some) governments want to regulate AI.
This quote comes from the aforementioned 2023 open letter, which called for a six-month pause to the training of powerful AI systems so that safety researchers and policymakers could catch up. Experts disagree on the exact risks AI poses, but the sentiment that the risks are significant is widespread.
One of the chief dangers is the intentional malicious use of AI, for instance to spread disinformation, design potent biological weapons, and violate human rights and freedoms. Famously, the Chinese government has used facial recognition AI to track Uyghurs. If you’re Xi Jinping, you’re also probably a little worried about AI reducing your control over public opinion. This concern comes through in the recent ‘AI Measures’ (July 2023), which state that AI technologies must uphold “socialist core values”. Other governing bodies have also expressed concern about AI undermining their principles; the African Union’s Continental AI Strategy contains a section on the “risks to African values”.
Risks could also emerge from the social and economic fallout of AI’s thorough integration into the economy. For instance, it’s possible that AI technologies will deepen inequality on an intra- and international level, raise unemployment rates, entrench discrimination through biases in training data, and prop up harmful concentrations of power amongst particular companies or indeed countries. Moreover, there is cause to be concerned about the impact of unforeseen errors in AI technologies used in sectors deemed particularly critical, such as key government services. This is because there are still many gaps in our knowledge around AI interpretability and explainability (how and why it produces a given output), robustness (whether it keeps working reliably when faced with unexpected inputs or attempts to misuse it), trustworthiness (whether it operates securely and does the job it promises to do), and alignment with human intentions.
Some worry about the possibility of AGI, if/when invented, being able to pursue its own agenda in opposition to human aims. There’s also a broader concern that all these risks will be exacerbated by an AI arms race, which disincentivises careful risk management.
Jolly stuff.
"The great public policy question of our generation" (Jamie Susskind). How are governments mitigating the risks of AI?
The early 2020s have seen a burst of efforts to promote AI safety. The inaugural international AI Safety Summit was held at Bletchley Park in 2023, followed by the AI Seoul Summit and France’s AI Action Summit, which starts today (10th February). By the end of 2024, 10 national AI Safety Institutes, dedicated to researching AI risk-mitigation and monitoring the development of the technology, had been established. Corporations, governments, and international bodies are increasingly making agreements, guidance, regulations and laws to promote AI safety. Furthermore, existing laws, such as those around data privacy and intellectual property, are already being applied to AI.
There are some substantial similarities in government responses to the AI revolution from a safety perspective, which could suggest an emerging ideal standard of practice. Governments often ask AI companies to set up AI safety governance structures; implement thorough risk management processes across the lifecycle of the technology (like risk assessments); and be transparent about the technology’s training data and capabilities, how it works, when it’s being used, and the results of safety tests. Many are working out how to enable appeals against decisions taken by AI, clarifying liability when breaches of safety guidelines occur, and professing support for international knowledge-sharing, to reduce the risk of an AI arms race. In some cases, AI applications that are deemed particularly harmful are banned outright. In the EU AI Act (2024), for instance, AI technologies which enable ‘social scoring’ are forbidden.
All these measures acknowledge the need for flexibility to enable swifter adaptation to the changing technological climate. Moreover, they tend not to apply to AI research and development or AI developments for national security in the same way. Nevertheless, many governments do at least propose to hold their own use of AI to similarly stringent safety and ethical standards, fund AI Safety Institutes, and monitor technological developments.
Where these safety standards vary significantly is in who is responsible for policing/kindly encouraging their implementation. Currently, the UK’s existing regulators (e.g. the Financial Conduct Authority) are responsible for promoting adherence to AI safety guidelines in their respective industries. By contrast, the EU’s AI Act requires member states to establish both a notifying authority and a market surveillance authority to enforce the law. The European Commission’s AI Office has also been created largely for this purpose. On the one hand, there are benefits to a sector-specific regulatory approach: it allows guidelines to be infused with existing sectoral expertise and to take more specific form. Indeed, the broad-brush approach of the EU’s AI Act has engendered much criticism. On the other hand, existing regulators will need much more funding to develop the AI expertise and organisational capacity to tackle this ever-growing regulatory task. Moreover, especially as AI changes the economy, there’s a risk that existing regulators won’t cover all the fields in which AI safety needs to be promoted. Perhaps the best solution would be a central AI regulator which sets a minimum safety standard for all AI technologies and coordinates the work of the sector-specific regulators, who could in turn provide more tailored guidelines.
Further differences lie in the coverage of AI safety measures, and in the methods for determining what that coverage should be. Some guidelines apply equally to all uses of AI. Others take an explicitly risk-based approach, calculated primarily with reference to the technology’s intended uses. Colorado’s AI Act is directed towards what it deems “high risk AI systems”. Likewise, the EU’s AI Act applies to all AI but grades the stringency of its safety standards according to the risk level of the technology. Other measures focus more specifically on a particular technology type. For instance, China has separate regulations for recommendation algorithms and for generative AI. That said, these also often involve an element of risk analysis: China’s ‘AI Measures’ state that technologies with “public opinion attributes or the capacity for social mobilization” will receive greater regulatory attention. Some measures are based on user numbers; California’s AI Transparency Act only applies to systems with over a million monthly users. California has also seen a plethora of smaller-scale AI acts, in contrast to the broader scope of the EU’s AI Act. For example, California’s Assembly Bill 2013 requires developers of generative AI to publish a “high-level summary” of the datasets used in training. It is too early to tell which of these approaches is most appropriate. Each has a range of advantages and disadvantages, which governments must reflect upon in this ever-changing regulatory environment.
The most significant disparity between AI safety measures lies in whether these standards are enforceable. Many are expressed as guidelines (like Biden’s blueprint for an AI Bill of Rights) and/or are voluntary (like the Frontier AI Safety Commitments made by 16 companies in Seoul). Some governments have created enforceable regulations, but not laws, such as China’s ‘AI Measures’. Others have made their standards legally binding; there are AI-specific laws in California, Colorado, and the EU. In these cases, punishments for violations carry more weight: contravening the EU’s AI Act can lead to a fine of €35m or 7% of the company’s total annual turnover, whichever is higher. There is a strong argument that, if a government or international body knows what needs to be done to promote responsible AI development and protect citizens from potentially immense harm, then enforceable standards are necessary. This is because proper risk management does slow development, and it’s unlikely that most CEOs will be willing to risk losing a lucrative technological lead out of the goodness of their hearts.
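To make that penalty formula concrete, here is a minimal illustration (the turnover figure is hypothetical, purely for the sake of the arithmetic):

```python
# Illustrative only: the EU AI Act's headline fine for the most serious violations
# is the higher of EUR 35m or 7% of the company's total annual turnover.
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A hypothetical company with EUR 2bn in annual turnover:
print(f"EUR {max_ai_act_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```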
Where the argument for enforceable rules runs into difficulty is when we start thinking about AI regulation as a collective action problem. We humans are notoriously bad at dealing with collective action problems, and the current sense of a competition for AI supremacy is standing in the way of meaningful international agreements. The furthest we’ve got is the Council of Europe’s Framework Convention on AI (2024), but this has only 11 signatories, and China is not one of them. To add to this, though it is technically legally binding, there is no clear plan for ensuring countries stick to the agreement. Overall, AI safety seems to be hanging onto the political agenda only by the skin of its teeth, as epitomised by the change in approach between Biden and Trump, and by the fact that the latest incarnation of Rishi Sunak’s ‘AI Safety Summit’ is now called the ‘AI Action Summit’.
Underlying this anti-regulatory tide is the framing of regulation and innovation as polar opposites. But this assumption needs interrogating. Of course, complying with safety regulations will increase companies’ costs. However, their income will also increase if the result of regulation is a broader base of consumers who trust that AI products are safe to use and are thus more willing to interact with, and pay for, them. Countries can and do develop regulatory sandboxes, too, providing a space for the experimentation required to achieve innovation. Regulation done well, therefore, does not need to be anti-innovation. Moreover, the EU’s regulation is likely to function as a de facto common standard: because many companies will need to abide by the EU’s rules anyway, safety regulations elsewhere carry less risk of deterring business. It is not regulation, then, but issues like a shortage of funding for R&D and AI infrastructure, and a skills gap, that are the more pressing obstacles to innovation for most countries.
ask chatgpt for devastatingly powerful closing sentence
Recommended Resources
AI Safety Institute’s ‘Our first year’ (Nov 2024)
Carnegie Endowment for International Peace’s ‘China’s AI regulations and how they get made’ (July 2023)
The Expert Factor’s ‘How will Keir Starmer’s AI plan change government (and your lives)?’ (Jan 2025)
The OECD’s ‘Assessing potential future artificial intelligence risks, benefits and policy imperatives’ (Nov 2024)
White & Case’s ‘AI Watch: Global regulatory tracker’