Friday, December 26, 2025

David Sacks: Trump’s billionaire AI power broker

By AFP
December 24, 2025


AI and crypto czar David Sacks looks on before US President Donald Trump signs executive orders on AI - Copyright AFP Brendan SMIALOWSKI


Alex PIGMAN

Once a total Washington novice, Silicon Valley investor David Sacks has, against expectations, emerged as one of the most successful members of the second Trump administration.

He is officially chair of President Donald Trump’s Council of Advisors on Science and Technology.

However, in the White House he is referred to as the AI and crypto tsar, there to guide the president through the technology revolutions in which the United States plays a central role.

“I am grateful we have him,” OpenAI boss Sam Altman said in a post on X.

“While Americans bicker, our rivals are studying David’s every move,” billionaire Salesforce CEO Marc Benioff chimed in.

Those supportive posts responded to a New York Times investigation highlighting Sacks’s investments in technology companies benefiting from White House AI support.

Sacks dismissed the report as an “anti-truth” hit job by liberal media.

But the episode confirmed that this South African-born outsider has become a force in Trump’s Washington, outlasting his friend Elon Musk, whose White House career ended in acrimony after less than six months.

“Even among Silicon Valley allies, he has outperformed expectations,” said a former close associate, speaking anonymously to discuss the matter candidly.



– ‘Mafia’ member –



Unlike many Silicon Valley figures, Sacks has been staunchly conservative since his Stanford University days in the 1990s.

There he met Peter Thiel, the self-styled philosopher king of the right-wing tech community.

In the early 1990s, the two men wrote for a campus publication, attacking what they saw as political correctness destroying American higher education.

After earning degrees from Stanford and the University of Chicago, Sacks initially took a conventional path as a management consultant at McKinsey & Company.

But Thiel lured his friend to his startup Confinity, which would eventually become PayPal, the legendary breeding ground for the “PayPal mafia” — a group of entrepreneurs including Musk and LinkedIn billionaire Reid Hoffman — whose influence now extends throughout the tech world.

After PayPal, Sacks founded a social media company, sold it to Microsoft, then made his fortune in venture capital.

A major turning point came during the COVID pandemic when Sacks and some right-wing friends launched the All-In podcast as a way to pass time, talk business and vent about Democrats in government.

The podcast rapidly gained influence, and the brand has since expanded to include major conferences and even a tequila line.

Sacks made his way into Trump’s inner circle through campaign contributions ahead of last year’s presidential election.

With Musk’s blessing, he was appointed as point man for AI and cryptocurrency policy.

Before diving into AI, Sacks shepherded an ambitious cryptocurrency bill providing legal clarity for digital assets.

It’s a sector Trump has enthusiastically embraced, with his family now heavily invested in crypto companies and the president himself issuing a meme coin — activity that critics say amounts to an open door for potential corruption.

But AI has become the central focus of Trump’s second presidency with Sacks there to steer Trump toward industry-friendly policies.

However, Sacks faces mounting criticism for potential overreach.

According to his former associate, Sacks pursues his objectives with an obsessiveness that serves him well in Silicon Valley’s company-building culture. But that same intensity can create friction in Washington.

The main controversy centers on his push to prevent individual states from creating their own AI regulations. His vision calls for AI rules to originate exclusively from Washington.

When Congress twice failed to ban state regulations, Sacks took his case directly to the president, who signed an executive order threatening to cut federal funding to states passing AI laws.



– ‘Out of control’ –



Tech lobbyists worry that by going solo, Sacks torpedoed any chance of effective national regulation.

More troubling for Sacks is the growing public opposition to AI’s rapid deployment. Concerns about job losses, proliferating data centers, and rising electricity costs may become a major issue in the 2026 midterm elections.

“The tech bros are out of control,” warned Steve Bannon, the right-wing Trump movement’s strategic mastermind, worried about political fallout.

Rather than seeking common ground, Sacks calls criticism “a red herring” from AI doomers “who want all progress to stop.”




The European laws curbing big tech… and irking Trump and tech billionaires


By AFP
December 24, 2025


Tech giants have been targeted by the EU for a number of allegedly unfair practices - Copyright AFP/File Sebastien SALOM-GOMIS

The European Union is back in the crosshairs of the Trump administration over its tech rules, which Washington denounced as an attempt to “coerce” American social media platforms into censoring viewpoints they oppose.

The US State Department said Tuesday it would deny visas to a former EU commissioner and four others, saying they “have advanced censorship crackdowns by foreign states — in each case targeting American speakers and American companies”.

Trump has vowed to punish countries that seek to curb US big tech firms.

Brussels has adopted a powerful legal arsenal aimed at reining in tech giants — namely through its Digital Markets Act (DMA) which covers competition and the Digital Services Act (DSA) on content moderation.

The EU has already slapped heavy fines on US behemoths including Apple, Meta and X under the new rules.

Here is a look at the EU rules drawing Trump’s ire:



– Digital Services Act –



Rolled out in stages since 2023, the mammoth Digital Services Act forces online firms to aggressively police content in the 27 countries of the European Union — or face major fines.

Aimed at protecting consumers from disinformation and hate speech as well as counterfeit or dangerous goods, it obliges platforms to swiftly remove illegal content or make it inaccessible.

The law instructs platforms to suspend users who frequently share illegal content such as hate speech — a provision framed as “censorship” by detractors across the Atlantic.

Tougher rules apply to a designated list of “very large” platforms that include US giants Apple, Amazon, Facebook, Google, Instagram, Microsoft, Snapchat and X.

These giants must assess dangers linked to their services regarding illegal content and privacy, set up internal risk mitigation systems, and give regulators access to their data to verify compliance.

Violators can face fines of up to six percent of global turnover, and the EU has the power to ban offending platforms from Europe for repeated non-compliance.

Elon Musk’s X was hit with the first fine under the DSA on December 5: a 120-million-euro ($140 million) penalty for a lack of transparency over what the EU calls the deceptive design of its “blue checkmark” for supposedly verified accounts, and for its failure to give researchers access to public data.



– Digital Markets Act –



Since March 2024, the world’s biggest digital companies have faced strict EU rules intended to limit abuses linked to market dominance, favour the emergence of start-ups in Europe and improve options for consumers.

Brussels has so far named seven so-called gatekeepers covered by the Digital Markets Act: Google’s Alphabet, Amazon, Apple, TikTok parent ByteDance, Facebook and Instagram parent Meta, Microsoft and travel giant Booking.

Gatekeepers can be fined for locking in customers to use pre-installed services, such as a web browser, mapping or weather information.

The DMA has forced Google to overhaul its search display to avoid favouring its own services, such as Google Flights or Google Shopping.

It requires that users be able to choose which app stores they use, without going via the two dominant players, Apple’s App Store and Google Play.

And it has forced Apple to allow developers to offer alternative payment options to consumers outside of the App Store; the EU fined Apple 500 million euros in April for failing to comply.

The DMA has also imposed interoperability between messaging apps WhatsApp and Messenger and competitors who request it.

The EU fined Meta 200 million euros in April over its “pay or consent” system after it violated rules on the use of personal data on Facebook and Instagram.

Failure to comply with the DMA can carry fines in the billions of dollars, reaching 20 percent of global turnover for repeat offenders.



– GDPR and AI –



The EU’s General Data Protection Regulation (GDPR) has also tripped up US tech giants, with Brussels issuing numerous fines since the rules came into force in 2018.

The rules require firms to seek users’ consent to collect personal data and to explain what it will be used for, and give users the right to ask firms to delete their personal data.

Fines for violations can go as high as 20 million euros or four percent of a company’s global turnover, whichever is higher.
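For scale, here is a minimal Python sketch comparing those ceilings. The percentages are the ones quoted above; the 10-percent base cap under the DMA and GDPR’s “whichever is higher” rule are standard features of the regulations, and the function names and turnover figure are purely hypothetical.

```python
# Illustrative comparison of the EU fine ceilings quoted in this article.
# The 10% base DMA cap and GDPR's "whichever is higher" rule are standard
# features of the regulations; the function names and the turnover figure
# below are hypothetical.

def dsa_cap(turnover_eur: float) -> float:
    """DSA: up to 6 percent of global turnover."""
    return 0.06 * turnover_eur

def dma_cap(turnover_eur: float, repeat_offender: bool = False) -> float:
    """DMA: up to 10 percent of global turnover, 20 percent for repeat offenders."""
    return (0.20 if repeat_offender else 0.10) * turnover_eur

def gdpr_cap(turnover_eur: float) -> float:
    """GDPR: up to 20 million euros or 4 percent of global turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * turnover_eur)

turnover = 100e9  # a hypothetical 100 billion euros in global turnover
print(f"DSA cap:  {dsa_cap(turnover) / 1e9:.0f} billion euros")                        # 6
print(f"DMA cap:  {dma_cap(turnover, repeat_offender=True) / 1e9:.0f} billion euros")  # 20
print(f"GDPR cap: {gdpr_cap(turnover) / 1e9:.0f} billion euros")                       # 4
```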

The EU has also adopted its AI Act, which will gradually bring in guardrails on the use of artificial intelligence in high-risk areas such as security, health and civic rights. In the face of pressure from the industry, the EU is considering weakening the measures and delaying their implementation.


AI overestimates how smart people are, according to HSE economists



National Research University Higher School of Economics





Scientists at HSE University have found that current AI models, including ChatGPT and Claude, tend to overestimate the rationality of their human opponents—whether first-year undergraduate students or experienced scientists—in strategic thinking games, such as the Keynesian beauty contest. While these models attempt to predict human behaviour, they often end up playing 'too smart' and losing because they assume a higher level of logic in people than is actually present. The study has been published in the Journal of Economic Behavior & Organization.

In the 1930s, British economist John Maynard Keynes developed the theoretical concept of a metaphorical beauty contest. A classic example involves newspaper readers being asked to select the six most attractive faces from a set of 100 photos. The prize is awarded to the participant whose choices are closest to the most popular selection—that is, the average of everyone else’s picks. Typically, people tend to choose the photos they personally find most attractive. However, they often lose, because the actual task is to predict which faces the majority of respondents will consider attractive. A rational participant, therefore, should base their choices on other people’s perceptions of beauty. Such experiments test the ability to reason across multiple levels: how others think, how rational they are, and how deeply they are likely to anticipate others’ reasoning.

Dmitry Dagaev, Head of the Laboratory of Sports Studies at the Faculty of Economic Sciences, together with colleagues Sofia Paklina and Petr Parshakov from HSE University–Perm and Iuliia Alekseenko from the University of Lausanne, Switzerland, set out to investigate how five of the most popular AI models—including ChatGPT-4o and Claude-Sonnet-4—would perform in such an experiment. The chatbots were instructed to play Guess the Number, one of the most well-known variations of the Keynesian beauty contest.

According to the rules, all participants simultaneously and independently choose a number between 0 and 100. The winner is the one whose number is closest to half (or two-thirds, depending on the experiment) of the average of all participants’ choices. In this contest, more experienced players attempt to anticipate the behaviour of others in order to select the optimal number. To investigate how a large language model (LLM) would perform in the game, the authors replicated the results of 16 classic Guess the Number experiments previously conducted with human participants by other researchers. For each round, the LLMs were given a prompt explaining the rules of the game and a description of their opponents—ranging from first-year economics undergraduates and academic conference participants to individuals with analytical or intuitive thinking, as well as those experiencing emotions such as anger or sadness. The LLM was then asked to choose a number and explain its reasoning. 
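To make the mechanics concrete, here is a minimal Python sketch of the half-the-average game under a standard “level-k” model of bounded reasoning. It is illustrative only, not the study’s code: the level-0 anchor of 50 and the function names are assumptions. It reproduces the effect the authors describe, with a player who reasons too many steps ahead losing to a shallower one when most of the field is naive.

```python
# A minimal sketch of the 'half the average' game under a level-k model
# of bounded reasoning. Illustrative only: the level-0 anchor of 50 and
# the function names are assumptions, not the study's actual setup.

def level_k_guess(k: int, p: float = 0.5, anchor: float = 50.0) -> float:
    """A level-0 player guesses the anchor (the midpoint of 0-100); a
    level-k player best-responds to level-(k-1), giving anchor * p**k."""
    return anchor * (p ** k)

def winner(guesses: list[float], p: float = 0.5) -> int:
    """Index of the guess closest to p times the average of all guesses."""
    target = p * sum(guesses) / len(guesses)
    return min(range(len(guesses)), key=lambda i: abs(guesses[i] - target))

# A field of mostly naive (level-0) players, one level-1 player, and one
# 'too smart' level-3 player:
guesses = [level_k_guess(0), level_k_guess(0), level_k_guess(1), level_k_guess(3)]
print(guesses)          # [50.0, 50.0, 25.0, 6.25]
print(winner(guesses))  # 2 -> the level-1 guess of 25 wins; level 3 overshoots
```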

The study found that LLMs adjusted their choices based on the social, professional, and age characteristics of their opponents, as well as the latter’s knowledge of game theory and cognitive abilities. For example, when playing against participants of game theory conferences, the LLM tended to choose a number close to 0, reflecting the choices that typically win in such a setting. In contrast, when playing against first-year undergraduates, the LLM expected less experienced players and selected a significantly higher number.

The authors found that LLMs are able to adapt effectively to opponents with varying levels of sophistication, and their responses also displayed elements of strategic thinking. However, the LLMs were unable to identify the dominant strategy in a two-player game: with only two players, choosing 0 guarantees a win or a tie, because the target is always at least as close to 0 as to the opponent’s number.
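A short sketch, again hypothetical rather than the study’s code, checks that claim numerically for the half-the-average variant:

```python
# Check that 0 is weakly dominant in the two-player 'half the average'
# game: a guess of 0 is never further from the target than the
# opponent's guess. Hypothetical illustration, not the study's code.

def distances(me: float, opp: float, p: float = 0.5) -> tuple[float, float]:
    """Distance of each player's guess from p times the average of both."""
    target = p * (me + opp) / 2
    return abs(me - target), abs(opp - target)

for opp in [1.0, 25.0, 50.0, 100.0]:
    mine, theirs = distances(0.0, opp)
    assert mine <= theirs  # guessing 0 wins or ties against any opponent
    print(f"opponent guesses {opp:>5}: 0 is {mine} from target, opponent {theirs}")
```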

The Keynesian beauty contest has long been used to explain price fluctuations in financial markets: brokers do not base their decisions on what they personally would buy, but on how they expect other market participants to value a stock. The same principle applies here—success depends on the ability to anticipate the preferences of others.

'We are now at a stage where AI models are beginning to replace humans in many operations, enabling greater economic efficiency in business processes. However, in decision-making tasks, it is often important to ensure that LLMs behave in a human-like manner. As a result, there is a growing number of contexts in which AI behaviour is compared with human behaviour. This area of research is expected to develop rapidly in the near future,' Dagaev emphasised.

The study was conducted with support from HSE University's Basic Research Programme.

Survey reveals ethical gaps slowing AI adoption in pediatric surgery




Zhejiang University

Image: Ethical concerns in the use of artificial intelligence (AI) in pediatric surgery practice among study participants. Credit: World Journal of Pediatric Surgery (WJPS)






Artificial intelligence (AI) is rapidly advancing across modern healthcare, yet its role in pediatric surgery remains limited and ethically complex. This study reveals that although surgeons recognize AI’s potential to enhance diagnostic precision, streamline planning, and support clinical decision-making, its practical use is still rare and mostly academic. Pediatric surgeons expressed strong concerns about accountability in the event of AI-related harm, the difficulty of obtaining informed consent for children, the risk of data privacy breaches, and the possibility of algorithmic bias. By examining pediatric surgeons’ experiences and perceptions, this study highlights the critical barriers that must be addressed before AI can be safely and responsibly integrated into pediatric surgical care.

Across the world, AI is reshaping how medical data are interpreted, how risks are predicted, and how complex decisions are supported. Yet pediatric surgery faces unique ethical challenges due to children’s limited autonomy, the need for parental decision-making, and the heightened sensitivity of surgical risks. In low-resource settings, concerns about infrastructure, data representativeness, and regulatory preparedness further complicate adoption. Pediatric surgeons must balance innovation with the obligation to protect vulnerable patients and maintain trust. These pressures intensify debates around transparency, fairness, and responsibility in the use of AI tools, and they underscore the need for deeper research to guide the ethical and practical integration of AI in pediatric surgical care.

A national team of pediatric surgeons from the Federal Medical Centre in Umuahia, Nigeria, has released the first comprehensive survey examining how clinicians perceive the ethical and practical implications of integrating AI into pediatric surgical care. Published (DOI: 10.1136/wjps-2025-001089) on 20 October 2025 in the World Journal of Pediatric Surgery (WJPS), the study gathered responses from surgeons across all six geopolitical zones to assess levels of AI awareness, patterns of use, and key ethical concerns. The findings reveal a profession cautiously weighing AI’s potential benefits against unresolved questions regarding accountability, informed consent, data privacy, and regulatory readiness.

The study analyzed responses from 88 pediatric surgeons, most of whom were experienced consultants actively practicing across diverse clinical settings. Despite global momentum in AI-enabled surgical innovation, only one-third of respondents had ever used AI, and their use was largely restricted to tasks such as literature searches and documentation rather than clinical applications. Very few reported using AI for diagnostic support, imaging interpretation, or surgical simulation, highlighting a substantial gap between emerging technological capabilities and everyday pediatric surgical practice.

Ethical concerns were nearly universal. Surgeons identified accountability for AI-related errors, the complexity of securing informed consent from parents or guardians, and the vulnerability of patient data as major sources of hesitation. Concerns also extended to algorithmic bias, reduced human oversight, and unclear legal responsibilities in the event of harm. Opinions on transparency with families were divided. While many supported informing parents about AI involvement, others felt disclosure was unnecessary when AI did not directly influence clinical decisions.

Most respondents expressed low confidence in existing legal frameworks governing AI use in healthcare. Many called for stronger regulatory leadership, clearer guidelines, and standardized training to prepare pediatric surgeons for future AI integration. Collectively, the findings underscore an urgent need for structured governance and capacity building.

“The results show that pediatric surgeons are not opposed to AI—they simply want to ensure it is safe, fair, and well regulated,” the research team explained. “Ethical challenges such as accountability, informed consent, and data protection must be addressed before clinicians can confidently rely on AI in settings involving vulnerable children. Clear national guidelines, practical training programs, and transparent standards are essential to ensure that AI becomes a supportive tool rather than a source of uncertainty in pediatric surgical care.”

The study underscores the need for pediatric-specific ethical frameworks, clearer consent procedures, and well-defined accountability mechanisms for AI-assisted care. Strengthening data governance, improving digital infrastructure, and expanding AI literacy among clinicians and families will be essential for building trust. As AI continues to enter surgical practice, these measures offer a practical roadmap for integrating innovation while safeguarding child safety and public confidence.


References

DOI: 10.1136/wjps-2025-001089

Original source URL: https://doi.org/10.1136/wjps-2025-001089

About World Journal of Pediatric Surgery 

World Journal of Pediatric Surgery (WJPS), founded in 2018, is an open-access, peer-reviewed journal in the field of pediatric surgery. It is sponsored by Zhejiang University and Children’s Hospital, Zhejiang University School of Medicine, and published by BMJ Group. WJPS aims to be a leading international platform for advances in pediatric surgical research and practice. Indexed in PubMed, ESCI, Scopus, CAS, DOAJ, and CSCD, WJPS has a latest impact factor (IF) of 1.3 (Q3), a CiteScore of 1.5, and an estimated 2025 IF of approximately 2.0.