Consultants and Artificial Intelligence: The Next Great Confidence Trick
Why trust these gold-seeking buffoons of questionable expertise? Overpaid as they are by gullible clients who really ought to know better, consultancy firms are now getting paid for work done by non-humans, conventionally called “generative artificial intelligence”. Occupying some kind of purgatorial space of amoral pursuit, these vague, private sector entities offer services that could (and should) just as easily be done within government or a firm at a fraction of the cost. Increasingly, the next confidence trick is taking hold: automation using large language models.
First, let’s consider why companies such as McKinsey, Bain & Company, and Boston Consulting Group are the sorts that should be tarred, feathered, and run out of town. Opaque in its operations and hostile to accountability, the consultancy industry, Teflon-coated, secures lucrative contracts with large corporations and governments. Its selling point is external expertise of a singular quality, a promise that discourages the very expertise that should be sharpened by government officials or business employees. The other claimed virtue, and here we have a silly, rosy view from The Economist, is that such companies “make available specialist knowledge that may not exist within some organisations, from deploying cloud computing to assessing climate change’s impact on supply chains. By performing similar work for many clients, consultants spread productivity-enhancing practices.”
Leaving that ghastly, mangled prose aside, the same paper admits that generating such advice can lead to a “self-protection racket.” The CEO of a company wishing to thin the ranks of employees can rely on a favourable assessment to justify the brutal measure; consultants are hardly going to submit anything that would suggest preserving jobs.
The emergence of AI and its effects on the consulting industry yield two views. One insists that the very advent of automated platforms such as ChatGPT will make the consultant vanish into nursing home obsolescence. Travis Kalanick, cofounder of that most mercenary of platforms, Uber, is a strong proponent of this view. “If you’re a traditional consultant and you’re just doing the thing, you’re executing the thing, you’re probably in some trouble,” he suggested to Peter Diamandis during the 2025 Abundance Summit. This, however, came with a qualification rooted in the survival of the fittest. “If you’re the consultant that puts the things together that replaces the consultant, maybe you got some stuff.”
There would be some truth to this, insofar as junior consultants handling the dreary, tilling business of research, modelling, and analysis could find themselves cheapened into redundancy, leaving the dim sharks at the apex dreaming about strategy and coddling their clients with flattering emails automated by software.
The other view is that AI is a herald of efficiency, sharpening the ostensible worth of the consultant. Kearney senior partner Anshuman Sengar brightly extols the virtues of the technology in an interview with the Australian Financial Review. Generative AI tools “save me up to 10 to 20 percent of my time.” As he could not attend every meeting or read every article, the tools had “increased” the relevance of his coverage. Crisp summaries of meetings and webinars could be generated. Accuracy was not a problem here, as “the input data is your own meeting.”
Mindful of any sceptics of the industry keen to identify sloth, Sengar was careful to emphasise the care he took in drafting emails using tools such as Copilot. “I’m very thoughtful. If an email needs a high degree of EQ [emotional intelligence], and if I’m writing to a senior client, I would usually do it myself.” The mention of the word “usually” is most reassuring, and something that hoodwinked clients would do well to heed.
Across the field, we see the use of agentic AI, typically the sort of software agents that complete menial tasks. In 2024, Boston Consulting Group earned a fifth of its revenue from AI-related work. IBM raked in over US$1 billion in sales commitments for consulting work through its Watsonx system. After earning no revenue from such tools in 2023, KPMG International received something in the order of US$650 million in business ventures driven by generative AI.
The others to profit in this cash bonanza of wonkiness are companies in the business of creating generative AI. In May last year, PwC purchased over 100,000 licences of OpenAI’s ChatGPT Enterprise system, making it OpenAI’s largest customer for the product.
Seeking the services of these consultancy-guided platforms is an exercise in cerebral corrosion. Deloitte offers its Zora AI platform, which uses NVIDIA AI. “Simplify enterprise operations, boost productivity and efficiency, and drive more confident decision making that unlocks business value, with the help of an ever-growing portfolio of specialized AI agents,” states the company’s pitch to potential customers. It babbles and stumbles along to suggest that such agents “augment your human workforce with extensive domain-specific intelligence, flexible technical architecture, and built-in transparency to autonomously execute and analyze complex business processes.”
Given such an advertisement, the middle ground of snake oil consultancy looks increasingly irrelevant – not that it should have been relevant to begin with. Why bother with Deloitte’s hack pretences when you can get the raw technology from NVIDIA? But the authors of a September article in the Harvard Business Review insist that consultancy is here to stay. (They would, given their pedigree.) The industry is merely “being fundamentally reshaped.” And hardly for the better.
The rise of AI will exacerbate income inequality throughout the country, and it’s the government’s duty to step up and take care of its citizens when required.

Amazon employees and supporters gather during a walk-out protest against recent layoffs, a return-to-office mandate, and the company’s environmental impact, outside Amazon headquarters in Seattle, Washington, on May 31, 2023.
(Photo by Jason Redmond / AFP via Getty Images)
Stephanie Justice
Nov 07, 2025
In 2019, the New York Times published a series of op-ed columns “from the future,” including one from 2043 urging policymakers to rethink what the American Dream looks like amid an AI revolution.
Well, it’s only 2025, and the American Dream is already in jeopardy of dying because of AI’s impact.

Earlier this year, Anthropic CEO Dario Amodei warned of a “white-collar bloodbath,” which was met with criticism by some of his tech colleagues and competitors. However, we’re already seeing a “bloodbath” come to pass. Amazon is preparing to lay off as many as 30,000 corporate employees, with its senior vice president stating that AI is “enabling companies to innovate much faster.” As it (unsurprisingly) turns out, CEOs across industries share this same sentiment.
We’re seeing the most visible signs of this “bloodbath” at the entry level. Recent graduates are having difficulty finding work in their fields and are taking part-time roles in fast food and retail in order to make ends meet. After being told for years that going to college was the key to being successful, up-and-coming generations are being met with disillusionment.
Despite dire statistics and repeated warnings from researchers and economists alike, people at the decision-making table aren’t listening. White House AI czar David Sacks brushed off fears of mass job displacement this past summer, and adviser Jacob Helberg dismissed the idea that the government has to “hold the hands of every single person getting displaced” by AI.
Unlike the hypothetical 2043, there aren’t people marching in the streets demanding that the government guarantee they’ll still have livelihoods when AI takes their jobs—yet. However, this prediction could easily come true. Life is already unaffordable for the majority of Americans. Add Big Tech’s hoarding of the wealth being created by AI and inconsistent job opportunities, and we could have class warfare on our hands.
OpenAI’s Sam Altman perfectly encapsulated the ignorance of Silicon Valley when he implied that if jobs are replaced by AI, they aren’t “real work.” It’s no surprise that Altman, whose company’s revenues run into the billions, doesn’t understand that jobs aren’t just jobs to middle-class families; they are ways for Americans to build their livelihoods, and ultimately, find purpose. Our country—for better or for worse—was built on the idea that anyone could keep their head down, work hard, and achieve the American Dream. If that’s no longer the case, then we must rethink the American Dream itself.
We can’t close the Pandora’s box of AI, nor should we. Advanced AI will bring about positive, transformative change in society if we utilize it correctly. But our policymakers must start taking AI’s impact on our workforce seriously.
That’s not to say there aren’t influential leaders already speaking out. In fact, concerns about AI’s effects on American workers span party lines. Democratic Sen. Chris Murphy wrote a compelling essay arguing in part that there won’t be enough jobs created by advanced AI to replace the lost jobs. Republican Sen. Josh Hawley is pushing the Republican Party to make AI a priority in order to be “a party of working people.” Independent Sen. Bernie Sanders released a report revealing that as many as 100 million jobs could be displaced by AI and proposed a “robot tax” to mitigate the technology’s effects on the labor force—another version of universal basic income (UBI).
Now, I won’t pretend to know the best policy solution that will allow Americans to continue flourishing in the AI era. However, I do know that the rise of AI will exacerbate income inequality throughout the country and that it’s the government’s duty to step up and take care of its citizens when required.
This starts by looking at how we can rebuild our social safety net in an era where Americans work less or go without work altogether. For millions of Americans, healthcare coverage is tied to their employment, as are Social Security benefits: if Americans aren’t employed, they can’t pay into the benefits they’ll draw when they’re retired. This leads to questions about the concept of retirement. Will it even exist in the future? Will Americans even be able to find happiness in forced “retirement” without an income and without the purpose provided by work?
It’s easy to spiral here, but you get the point. This is a complicated issue with consequences that we’ll be reckoning with for years to come. But we don’t have that kind of time. If Americans can’t reach a decent standard of living now, they’ll be worse off as the AI revolution marches forward.
It’s 2025, and AI is already transforming the world as we know it. In this economy, we must create a new American Dream that allows Americans to pursue life, liberty, and happiness on their own terms.
Stephanie Justice is the press secretary at The Alliance for Secure AI, a nonprofit organization that educates the public about the implications of advanced AI.
“Big Tech is building a mountain of speculative infrastructure,” warned one critic. “Now it wants the US government to prop up the bubble before it bursts.”

Signage of AI (Artificial Intelligence) is seen during the World Audio Visual Entertainment
(Photo: Indranil Aditya/Middle East Images/AFP via Getty Images)
Brad Reed
Nov 06, 2025
COMMON DREAMS
Tech giant OpenAI generated significant backlash this week after one of its top executives floated potential loan guarantees from the US government to help fund its massive infrastructure buildout.
In a Wednesday interview with The Wall Street Journal, OpenAI chief financial officer Sarah Friar suggested that the federal government could get involved in infrastructure development for artificial intelligence by offering a “guarantee,” which she said could “drop the cost of the financing” and increase the amount of debt her firm could take on.

When asked if she was specifically talking about a “federal backstop for chip investment,” she replied, “Exactly.”
Hours after the interview, Friar walked back her remarks and insisted that “OpenAI is not seeking a government backstop for our infrastructure commitments,” while adding that she was “making the point that American strength in technology will come from building real industrial capacity, which requires the private sector and government playing their part.”
Despite Friar’s walk-back, OpenAI CEO Sam Altman said during a podcast interview with economist Tyler Cowen, released on Thursday, that he believed the government ultimately could be a backstop to the artificial intelligence industry.
“When something gets sufficiently huge... the federal government is kind of the insurer of last resort, as we’ve seen in various financial crises,” he said. “Given the magnitude of what I expect AI’s economic impact to look like, I do think the government ends up as the insurer of last resort.”
Friar and Altman’s remarks about government backstops for OpenAI loans drew the immediate ire of Robert Weissman, co-president of consumer advocacy organization Public Citizen, who expressed concerns that the tech industry may have already opened up talks about loan guarantees with President Donald Trump’s administration.
“Given the Trump regime’s eagerness to shower taxpayer subsidies and benefits on favored corporations, it is entirely possible that OpenAI and the White House are concocting a scheme to siphon taxpayer money into OpenAI’s coffers, perhaps with some tribute paid to Trump and his family,” Weissman said. “Perhaps not so coincidentally, OpenAI President Greg Brockman was among the attendees at a dinner for donors to Trump’s White House ballroom, though neither he nor OpenAI have been reported to be actual donors.”
JB Branch, Public Citizen’s Big Tech accountability advocate, said even suggesting government backstops for OpenAI showed that the company and its executives were “completely out of touch with reality,” and he argued it was no coincidence that Friar floated the possibility of federal loan guarantees at a time when many analysts have been questioning whether the AI industry is an unsustainable financial bubble.
“The truth is simple: the AI bubble is swelling, and OpenAI knows it,” he said. “Big Tech is building a mountain of speculative infrastructure without real-world demands or proven productivity-enhancing use cases to justify it. Now it wants the US government to prop up the bubble before it bursts. This is an escape plan for an industry that has overpromised and underdelivered.”
An MIT Media Lab report found in September that while AI use has doubled in workplaces since 2023, 95% of organizations that have invested in the technology have seen “no measurable return on their investment.”
Concerns about an AI bubble intensified earlier this week when investor Michael Burry, who famously made a fortune by short-selling the US housing market ahead of the 2008 financial crisis, revealed that his firm was making bets against Nvidia and Palantir, two of the biggest players in the AI industry.
This has led some AI industry players to complain that markets and governments are undervaluing their products.
During her Wednesday WSJ interview, for instance, Friar complained, “I don’t think there’s enough exuberance about AI, when I think about the actual practical implications and what it can do for individuals.”
Nvidia CEO Jensen Huang, meanwhile, told the Financial Times that China was going to beat the US in the race to develop high-powered artificial intelligence because the Chinese government subsidizes energy for AI and imposes lighter regulation on its development.
Huang also complained that “we need more optimism” about the AI industry in the US.
Investment researcher Ross Hendricks, however, dismissed Huang’s warning about China winning the AI battle, and he accused the Nvidia CEO of seeking special government favors.
“This is nothing more than Jensen Huang foaming the runway for a federal AI bailout in coordination with OpenAI’s latest plea in the WSJ,” he commented in a post on X. “These grifters simply can’t be happy making billions from one of the greatest investment manias of all time. They’ll do everything possible to loot taxpayers to prevent it from popping.”