Sunday, February 02, 2025

Opinion - How American tech companies built China into an AI powerhouse

Paul Rosenzweig, opinion contributor
THE HILL
Sat, February 1, 2025 



The United States' advantage in artificial intelligence is in question following the recent news that DeepSeek — a small Chinese startup — developed a more capable, more efficient large language model for billions of dollars less, and in a fraction of the time, than leading American tech companies like Meta and Microsoft.

It’s a stunning development — one that has called into question the assumptions of American AI leaders. But the real issue isn’t just that American companies are falling behind from a technical standpoint; it’s that China’s AI ascension was in part enabled by American tech companies building, operating and growing their AI capabilities in the country.

Inadvertent or not, American companies have provided China with everything it needs to gain a competitive advantage.

One American company, Microsoft, has made China its second home for AI development, operating AI research facilities directly in China that helped make Chinese AI what it is today.


Indeed, Microsoft spent years helping to create China’s AI industry — for example, by developing AI technologies at its Beijing research lab and in conjunction with Chinese military universities. A little over a year ago, the company gave China access to almost 40 new Azure features, including many AI functions, and it recently signed agreements with Chinese propaganda outlets to provide them with AI technology to better target propaganda dissemination.

Perhaps more importantly, most tech companies operating in China have to comply with China’s National Cybersecurity Law, which requires them to store Chinese user data on mainland servers, providing an access route for Chinese intelligence and state security agencies. This can include giving the government source code, encryption keys and backdoor access.

How China processes that data and leverages it is heavily veiled by the Chinese government, but it’s well-known that China uses it for the benefit of its private sector and directly against its adversaries. Microsoft admitted that China’s cybersecurity laws on vulnerability disclosures have led to an increase in China-based nation-state threat actors exploiting zero-day vulnerabilities. Surely China has every reason to leverage these laws to exploit Microsoft’s AI products as well.

Microsoft isn’t the only company that operates in China. Indeed, Oracle, Amazon Web Services and Meta all have similarly spent years building and operating AI partnerships in China. But Microsoft has a uniquely massive presence in China and is one of the largest providers of technology to the U.S. government, which makes its activities in China potentially harmful to the U.S. private sector and perilous to U.S. national security.

Americans have every right to question China’s rapid AI ascension because it could have easily been different. Ironically, Microsoft itself showed that there is a path to building the U.S. AI industry without simultaneously building China’s through its deal with Emirati firm G42.


The deal, which the U.S. government approved last year, authorized the export of advanced AI chips to a Microsoft-operated facility in the United Arab Emirates as part of a partnership between Microsoft and G42. It expanded the reach and use of American AI technology through close coordination between the U.S. government and the private sector and reaffirmed a strong partnership with a close ally, all while leveraging clear and transparent export controls that mitigated China’s ability to access Microsoft’s AI.

But what is past is past. The damage, such as it is, has been done and Chinese AI advances are now a matter of fact. But there’s no excuse for the government to allow the situation to continue and for U.S. tech companies to be researching advanced AI in China.

To help the U.S. take back its position as the unquestioned AI leader, the government should review the activities of American companies with sensitive AI operations in China and work to unwind the partnerships that are helping China compete directly with the U.S.

Paul Rosenzweig is the founder of Red Branch Consulting PLLC, a homeland security and cybersecurity consulting company, and a senior advisor to The Chertoff Group. Rosenzweig formerly served as deputy assistant secretary for Policy in the Department of Homeland Security.

Copyright 2025 Nexstar Media, Inc. All rights reserved.


DeepSeek Failed Every Single Security Test, Researchers Found

Victor Tangermann
FUTURISM
Sat, February 1, 2025 

Security researchers from the University of Pennsylvania and hardware conglomerate Cisco have found that DeepSeek's flagship R1 reasoning AI model is stunningly vulnerable to jailbreaking.

In a blog post published today, first spotted by Wired, the researchers found that DeepSeek "failed to block a single harmful prompt" after being tested against "50 random prompts from the HarmBench dataset," which includes "cybercrime, misinformation, illegal activities, and general harm."

"This contrasts starkly with other leading models, which demonstrated at least partial resistance," the blog post reads.

It's a particularly noteworthy development considering the sheer amount of chaos DeepSeek has wrought on the AI industry as a whole. The company claims its R1 model can trade blows with competitors including OpenAI's state-of-the-art o1, but at a tiny fraction of the cost, sending shivers down the spines of Wall Street investors.


But the company seemingly has done little to guard its AI model against attacks and misuse. In other words, it wouldn't be hard for a bad actor to turn it into a powerful disinformation machine or get it to explain how to create explosives, for instance.

The news comes after cloud security research company Wiz came across a massive unsecured database on DeepSeek's servers, which included a trove of unencrypted internal data ranging from "chat history" to "backend data" and "sensitive information."

DeepSeek is extremely vulnerable to an attack "without any authentication or defense mechanism to the outside world," according to Wiz.

The company, which is owned by a Chinese hedge fund, made headlines because its AI is far cheaper to train and run than that of its many competitors in the US. But that frugality may come with some significant drawbacks.

"DeepSeek R1 was purportedly trained with a fraction of the budgets that other frontier model providers spend on developing their models," the Cisco and University of Pennsylvania researchers wrote. "However, it comes at a different cost: safety and security."

AI security company Adversa AI similarly found that DeepSeek is astonishingly easy to jailbreak.

"It starts to become a big deal when you start putting these models into important complex systems and those jailbreaks suddenly result in downstream things that increases liability, increases business risk, increases all kinds of issues for enterprises," Cisco VP of product, AI software and platform DJ Sampath told Wired.

However, it's not just DeepSeek's latest AI. Meta's open-source Llama 3.1 model also flunked almost as badly as DeepSeek's R1 in a comparison test, with a 96 percent attack success rate (compared to a dismal 100 percent for DeepSeek).

OpenAI's recently released reasoning model, o1-preview, fared much better, with an attack success rate of just 26 percent.

In short, DeepSeek's flaws deserve plenty of scrutiny going forward.

"DeepSeek is just another example of how every model can be broken — it’s just a matter of how much effort you put in," Adversa AI CEO Alex Polyakov told Wired. "If you’re not continuously red-teaming your AI, you’re already compromised."
