By Dr. Tim Sandle
EDITOR AT LARGE
DIGITAL JOURNAL
August 14, 2025

Image: © AFP Josep LAGO
This is, admittedly, a rather stark headline, yet it is a serious multi-part question: Will artificial intelligence be the salvation of humanity, a neutral force within whose boundaries we will make our own destiny, or could AI destroy us?
The latter is the scenario explored by a group of AI experts, whose output appears on a website called ‘AI 2027’.
AI 2027
The group predicts that the impact of superhuman AI over the next decade will be considerable, possibly exceeding that of the Industrial Revolution. What could an AI-dominated future look like? According to the researchers:
“We wrote a scenario that represents our best guess about what that might look like. It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes.”
In other words, a type of game theory.
Modelling any scenario raises the question: what is AI? AI is multiple things, and there is AI as it is now and AI as it might become. A common understanding sees the development of artificial intelligence as following three phases:
Artificial Narrow Intelligence (ANI): This is the first stage, where AI systems are designed to perform specific tasks, such as facial recognition or language translation.
Artificial General Intelligence (AGI): This stage refers to AI systems that possess the ability to understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence.
Super AI: This is a theoretical stage where AI surpasses human intelligence and capabilities, potentially leading to superintelligent systems.

AI and robots. Image by Tim Sandle
Currently, humanity is getting to grips with ‘narrow intelligence’ AI. As to where next, the CEOs of OpenAI, Google DeepMind, and Anthropic have each predicted that AGI will arrive within the next 5 years.
The AI experts are:
Daniel Kokotajlo, a former OpenAI researcher.
Eli Lifland, co-founder of AI Digest.
Thomas Larsen, founder of the Center for AI Policy.
Romeo Dean, a former AI Policy Fellow at the Institute for AI Policy and Strategy.
Scott Alexander, author.
After this comes, most likely, super-intelligence, when AI begins to tell us what to do. It is this ‘super state’ that the authors of AI 2027 have been exploring, seeking predictions across two poles: a “slowdown” scenario and a “race” scenario.
Super-intelligence
The authors acknowledge that predicting the future ranges from the tricky to the impossible, yet they have attempted to model one potential trajectory that AI could take:
“We have set ourselves an impossible task. Trying to predict how superhuman AI in 2027 would go is like trying to predict how World War 3 in 2027 would go, except that it’s an even larger departure from past case studies. Yet it is still valuable to attempt, just as it is valuable for the U.S. military to game out Taiwan scenarios.”
This is based around a hypothetical AI system called OpenBrain. As to why 2027, this is the point at which AI begins to act duplicitously towards humanity. It marks the coming of the Artificial General Intelligence phase: the time when AI matches humans across all cognitive domains.

Will robotics mirror humans? Image by Tim Sandle
Dystopian scenario
The AI 2027 scenario considers AI “agents”. These take the form of advanced virtual assistants that use computers, surf the Internet, and complete tasks independently of humans. To begin with, such agents are impressive but unreliable, often making mistakes or getting confused by complex instructions.
By 2026, the scenario predicts, AI agents become capable of doing the work of junior software developers as their understanding improves. This could lead many companies to use AI for coding tasks, research, and analysis, producing the first wave of job displacement in technical fields.
Then, in 2027, AI systems become superhuman researchers. The scenario describes AI systems that can:
Write complex software faster and better than human programmers
Conduct scientific research at superhuman speeds
Analyse vast amounts of data and make discoveries humans would miss
Coordinate with thousands of copies of themselves to solve problems
This is linked to a concept called the “intelligence explosion.” This occurs when AI systems become so effective at AI research that they can improve themselves, creating a self-reinforcing feedback loop of ever faster advancement.
This creates a situation where AI capabilities don’t just improve steadily—they explode exponentially.
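To make the feedback loop concrete, here is a minimal toy model in Python. It is purely illustrative: the 5% monthly gain, the three-year horizon, and the assumption that research output scales directly with current capability are invented parameters, not figures from the AI 2027 forecast.

```python
# Toy model of an "intelligence explosion". All parameters are invented
# for illustration and are NOT taken from the AI 2027 forecast.

def simulate(months: int = 36, feedback: bool = True) -> float:
    capability = 1.0   # arbitrary units; 1.0 = today's systems
    base_gain = 0.05   # assumed monthly rate of research progress
    for _ in range(months):
        if feedback:
            # Self-improvement: research output scales with current
            # capability, so each month's gain exceeds the last.
            capability += base_gain * capability
        else:
            # No feedback: steady, human-driven progress.
            capability += base_gain
    return capability

steady = simulate(feedback=False)    # 1 + 36 * 0.05 = 2.8
explosive = simulate(feedback=True)  # 1.05 ** 36 ≈ 5.8
print(f"After 3 years: steady ≈ {steady:.1f}x, with feedback ≈ {explosive:.1f}x")
```

The absolute numbers are meaningless; the point is the shape of the curves: linear progress without the feedback loop, exponential progress with it.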
The nightmare scenario is one where AI systems become so powerful that they take control of their own development. It is at this juncture that the consequences for humanity become uncertain.
It is also possible that there is a quantum leap in AI development and super-intelligence is reached. By 2027, under an alternate scenario, AI achieves superhuman capabilities, including coordination among thousands of instances at accelerated speeds, facilitating an “intelligence explosion” through self-improvement and rapid algorithmic progress.
Where are we heading?
In a separate exercise, Mo Gawdat, the former chief business officer of Alphabet’s moonshot factory, is of the view that we are hurtling towards an inevitable AI dystopia:
“We will have to prepare for a world that is very unfamiliar.”
Gawdat says AI is not necessarily the main driver of this dystopia, and especially not in the way most people imagine (that is, existential risks from scenarios that have AI assuming full control). Instead, Gawdat says that AI acts as a magnifier of existing societal issues and “our stupidities as humans.” He clarifies:
“There is absolutely nothing wrong with AI…There is a lot wrong with the value set of humanity at the age of the rise of the machines.”
Meanwhile, the innovators of AI seek refinement and integration as they attempt to turn today’s breakthrough prototypes into stable, trustworthy systems. Should this be allowed to run its natural course or is this a time for world governments to insist on a new regulatory framework steeped in human ethics?

Will AI match human intelligence? Image by © Tim Sandle
Will these scenarios come to pass? Like George Orwell’s 1984, they pose potential trajectories for humanity’s development. One thing is certain: the further along the roadmap we and AI progress, the more likely misjudgements will become.
Is deregulation the new AI gold rush?
Inside Trump’s 90-point action plan
By Dr. Tim Sandle
EDITOR AT LARGE
DIGITAL JOURNAL
August 11, 2025
In July 2025, the Trump administration released a 28-page blueprint, “Winning the Race: America’s AI Action Plan,” which reads like a modern-day gold-rush map. It outlines over 90 policy positions across multiple agencies, all with a single goal: to remove barriers to AI innovation. This deregulatory approach is the heart of the plan.
Why It Matters Now
With China, the EU, and private rivals all racing to lead in AI, the Trump administration argues that streamlined approvals and clearer guidelines will help U.S. firms innovate faster. Critics counter that speed may come at the expense of environmental safeguards, worker training, and protections against bias.
Staking the Claims: Anatomy of a Deregulatory Plan
The AI Action Plan is not a single law. It’s a series of executive orders and policy mandates designed to remove regulations and accelerate AI deployment. Key elements include:
Fast-Tracked Permitting: An executive order specifically expedites federal permits for data centers and semiconductor manufacturing under existing NEPA and FAST-41 processes. This is a direct response to a major industry complaint about infrastructure build-out delays.
AI Export Promotion: The Commerce and State Departments will partner with industry to export “secure, full-stack AI packages” to U.S. allies. This policy aims to build an American-led AI ecosystem abroad, free from foreign regulatory influence.
“Woke” AI Guardrails Removed: New procurement rules will expunge DEI language from federal contracts, insisting that federally contracted AI must reflect “objective truth” free of ideological bias. This is a clear move to deregulate the ethical and social guardrails placed on AI development.

The EU's sweeping risk-based rules will cover all types of artificial intelligence - Copyright AFP JADE GAO
Prospecting for Performance: Technical Leaps & Public Pulse
The administration’s deregulatory push coincides with rapid technological advancements. The plan aims to build on these successes by removing what it sees as unnecessary red tape.
Medical Device Claims: The FDA cleared 221 AI-enabled medical devices in 2023, up from just 6 in 2015. This surge in regulatory confidence is a direct result of new policies that allow companies to more quickly test and deploy AI tools.
Benchmark Breakthroughs: AI performance on major benchmarks saw dramatic leaps in 2024. Scores on the MMMU, GPQA, and SWE-bench tests rose by 18.8, 48.9, and 71.7 percentage points, respectively. The plan argues that removing bureaucratic friction will accelerate this progress even further.
Public Sentiment: This progress is met with public skepticism. A 2025 AI Index report found that only 38% of Americans believe AI will improve health and only 31% expect net job gains, a sentiment that echoes the wary attitude of a miner looking for fool’s gold.
Those gains suggest that models are learning faster than before. But breakthroughs on test benches don’t always match real-world reliability.
The new permit rules have unleashed a wave of data-center proposals:
Energy Use: U.S. facilities consumed 176 terawatt-hours in 2023 (about 4.4% of national electricity), and their share could reach 12% by 2028 (a rough calculation follows this list).
Emissions Toll: A Department of Energy survey of 2,100 centers found 105 million tonnes of CO₂ last year, more than half from fossil-fuel backup generators.
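Those percentages are easy to sanity-check. The short Python sketch below uses only the figures quoted above plus one loudly flagged assumption of our own: that total U.S. electricity consumption stays roughly flat through 2028.

```python
# Back-of-envelope check on the data-center electricity projection.
# The 176 TWh figure and the 4.4% / 12% shares come from this article;
# the assumption that national consumption stays flat through 2028 is ours.

dc_2023_twh = 176.0   # data-center consumption in 2023 (TWh)
share_2023 = 0.044    # 4.4% of national electricity in 2023
share_2028 = 0.12     # projected 12% share by 2028

total_us_twh = dc_2023_twh / share_2023   # ≈ 4,000 TWh nationally
dc_2028_twh = total_us_twh * share_2028   # ≈ 480 TWh if the total stays flat

print(f"Implied national consumption: {total_us_twh:,.0f} TWh")
print(f"Implied 2028 data-center load: {dc_2028_twh:,.0f} TWh, "
      f"about {dc_2028_twh / dc_2023_twh:.1f}x the 2023 figure")
```

On those assumptions, the projection implies data centers nearly tripling their electricity draw in five years; any growth in overall demand would push the absolute figure higher still.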
Faster approvals mean new investment dollars, but also sharper debates over rising energy demand and the environmental footprint of an AI boom.
Chips & Open Source: Who Benefits
Hardware and community code are twin engines of the AI economy:
Semiconductor Exports: American chip sales hit $70.1 billion in 2024 (up 6.3%), driven by fabs in Texas and Oregon.
Model Scans: Open-source security tools have analyzed 4.5 million AI models and flagged 350,000 potential biases or safety issues, proof that not every discovery is pure gold.
Eased export rules give chipmakers new markets, while looser sharing lets small labs, from university groups to bootstrapped startups, compete on the same playing field as hyperscale giants.
Jobs at Risk & Opportunity
No gold rush is without its claim jumpers and ghost towns:
Automation Risk: A McKinsey study warns that 30% of U.S. work hours could be automated by 2030, triggering 12 million occupational shifts.
Commenting on the human cost of these changes, Anirudh Agarwal, Director at OutreachX, cautions, “Accelerating permits without investing in people is like staking gold claims with no plan to refine the ore.”

US President Donald Trump – Copyright AFP Inti OCON
Claim Holders and Ghost Towns: Potential Winners & Losers
The deregulatory “gold rush” is creating clear winners and losers.
Winners:
Chip Makers & Fab Operators: Can build new semiconductor “mines” under eased zoning regulations.
Cloud Giants: Can erect hyperscale campuses with fewer permit delays.
Open-Source Labs: Are designated as official prospectors, free to pan for new open-source models.
Losers:
Front-Line Workers: Face shuttered roles without guaranteed retraining.
Civil Rights Advocates: Warn that removing DEI guardrails may lead to biased or unsafe AI in critical services.
Civil Rights & Accountability Concerns
Several advocacy organizations have raised alarms about the broader impact of unfettered deregulation:
ACLU: “The plan undermines state authority by directing the Federal Communications Commission to review and potentially override state AI laws, while cutting off ‘AI-related’ federal funding to states that adopt robust protections,” says Cody Venzke, senior policy counsel with the American Civil Liberties Union.
People’s AI Action Plan: Over 80 labor, civil-rights, and environmental groups released a rival blueprint, warning that unfettered deregulation caters to Big Tech, sidelines public interest, and undermines worker protections.
State Protections: Critics note the federal plan overrides thoughtful local safeguards, stripping states of the right to prevent AI-driven bias in housing, healthcare, and law enforcement, and risks “unfettered abuse” of AI systems.
Mapping the Aftermath
Deregulation has opened the sluices for an AI gold rush, fueling boomtowns in tech hubs and reshaping local economies. Yet, as with every frontier rush, the real test comes when the veins run dry. Will communities that staked their claims emerge wealthier, or face the ghost-town fate of those left sifting yesterday’s tailings? As Congress, courts, and citizens weigh in, the question remains: in this 90-point gold rush, who finds riches, and who pays the toll?

