Friday, December 26, 2025

David Sacks: Trump’s billionaire AI power broker

By AFP
December 24, 2025


AI and crypto czar David Sacks looks on before US President Donald Trump signs executive orders on AI - Copyright AFP Brendan SMIALOWSKI


Alex PIGMAN

A total Washington novice, Silicon Valley investor David Sacks has, against expectations, emerged as one of the most successful members of the second Trump administration.

He is officially chair of President Donald Trump’s Council of Advisors on Science and Technology.

However, in the White House he is referred to as the AI and crypto tsar, there to guide the president through the technology revolutions in which the United States plays a central role.

“I am grateful we have him,” OpenAI boss Sam Altman said in a post on X.

“While Americans bicker, our rivals are studying David’s every move,” billionaire Salesforce CEO Marc Benioff chimed in.

Those supportive posts responded to a New York Times investigation highlighting Sacks’s investments in technology companies benefiting from White House AI support.

Sacks dismissed the report as an “anti-truth” hit job by liberal media.

But the episode confirmed that this South African-born outsider has become a force in Trump’s Washington, outlasting his friend Elon Musk, whose White House career ended in acrimony after less than six months.

“Even among Silicon Valley allies, he has outperformed expectations,” said a former close associate, speaking anonymously to discuss the matter candidly.



– ‘Mafia’ member –



Unlike many Silicon Valley figures, Sacks has been staunchly conservative since his Stanford University days in the 1990s.

There he met Peter Thiel, the self-styled philosopher king of the right-wing tech community.

In the early 1990s, the two men wrote for a campus publication, attacking what they saw as political correctness destroying American higher education.

After earning degrees from Stanford and the University of Chicago, Sacks initially took a conventional path as a management consultant at McKinsey & Company.

But Thiel lured his friend to his startup Confinity, which would eventually become PayPal, the legendary breeding ground for the “PayPal mafia” — a group of entrepreneurs including Musk and LinkedIn billionaire Reid Hoffman — whose influence now extends throughout the tech world.

After PayPal, Sacks founded a social media company, sold it to Microsoft, then made his fortune in venture capital.

A major turning point came during the COVID pandemic when Sacks and some right-wing friends launched the All-In podcast as a way to pass time, talk business and vent about Democrats in government.

The podcast rapidly gained influence, and the brand has since expanded to include major conferences and even a tequila line.

Sacks made his way into Trump’s inner circle through campaign contributions ahead of last year’s presidential election.

With Musk’s blessing, he was appointed point man for AI and cryptocurrency policy.

Before diving into AI, Sacks shepherded an ambitious cryptocurrency bill providing legal clarity for digital assets.

It’s a sector Trump has enthusiastically embraced, with his family now heavily invested in crypto companies and the president himself issuing a meme coin — activity that critics say amounts to an open door for potential corruption.

But AI has become the central focus of Trump’s second presidency, with Sacks there to steer the president toward industry-friendly policies.

However, Sacks faces mounting criticism for potential overreach.

According to his former associate, Sacks pursues his objectives with an obsessiveness that serves him well in Silicon Valley’s company-building culture. But that same intensity can create friction in Washington.

The main controversy centers on his push to prevent individual states from creating their own AI regulations. His vision calls for AI rules to originate exclusively from Washington.

When Congress twice failed to ban state regulations, Sacks took his case directly to the president, who signed an executive order threatening to cut federal funding to states passing AI laws.



– ‘Out of control’ –



Tech lobbyists worry that by going solo, Sacks torpedoed any chance of effective national regulation.

More troubling for Sacks is the growing public opposition to AI’s rapid deployment. Concerns about job losses, proliferating data centers, and rising electricity costs may become a major issue in the 2026 midterm elections.

“The tech bros are out of control,” warned Steve Bannon, the right-wing Trump movement’s strategic mastermind, worried about political fallout.

Rather than seeking common ground, Sacks calls criticism “a red herring” from AI doomers “who want all progress to stop.”




The European laws curbing big tech… and irking Trump and tech billionaires


By AFP
December 24, 2025


Tech giants have been targeted by the EU for a number of allegedly unfair practices - Copyright AFP/File Sebastien SALOM-GOMIS

The European Union is back in the crosshairs of the Trump administration over its tech rules, which Washington denounced as an attempt to “coerce” American social media platforms into censoring viewpoints they oppose.

The US State Department said Tuesday it would deny visas to a former EU commissioner and four others, saying they “have advanced censorship crackdowns by foreign states — in each case targeting American speakers and American companies”.

Trump has vowed to punish countries that seek to curb US big tech firms.

Brussels has adopted a powerful legal arsenal aimed at reining in tech giants — namely through its Digital Markets Act (DMA) which covers competition and the Digital Services Act (DSA) on content moderation.

The EU has already slapped heavy fines on US behemoths including Apple, Meta and X under the new rules.

Here is a look at the EU rules drawing Trump’s ire:



– Digital Services Act –



Rolled out in stages since 2023, the mammoth Digital Services Act forces online firms to aggressively police content in the 27 countries of the European Union — or face major fines.

Aimed at protecting consumers from disinformation and hate speech as well as counterfeit or dangerous goods, it obliges platforms to swiftly remove illegal content or make it inaccessible.

The law instructs platforms to suspend users who frequently share illegal content such as hate speech — a provision framed as “censorship” by detractors across the Atlantic.

Tougher rules apply to a designated list of “very large” platforms that include US giants Apple, Amazon, Facebook, Google, Instagram, Microsoft, Snapchat and X.

These giants must assess dangers linked to their services regarding illegal content and privacy, set up internal risk mitigation systems, and give regulators access to their data to verify compliance.

Violators can face fines of up to six percent of global turnover, and the EU has the power to ban offending platforms from Europe for repeated non-compliance.

Elon Musk’s X was hit with the first fine under the DSA on December 5, a 120-million-euro ($140 million) penalty for a lack of transparency over what the EU calls the deceptive design of its “blue checkmark” for supposedly verified accounts, and its failure to provide access to public data for researchers.



– Digital Markets Act –



Since March 2024, the world’s biggest digital companies have faced strict EU rules intended to limit abuses linked to market dominance, favour the emergence of start-ups in Europe and improve options for consumers.

Brussels has so far named seven so-called gatekeepers covered by the Digital Markets Act: Google’s Alphabet, Amazon, Apple, TikTok parent ByteDance, Facebook and Instagram parent Meta, Microsoft and travel giant Booking.

Gatekeepers can be fined for locking in customers to use pre-installed services, such as a web browser, mapping or weather information.

The DMA has forced Google to overhaul its search display to avoid favouring its own services — such as Google Flights or Shopping.

It requires that users be able to choose which app stores they use — without going via the two dominant players, Apple’s App Store and Google Play.

And it has forced Apple to allow developers to offer alternative payment options directly to consumers, outside of the App Store; the EU fined Apple 500 million euros in April for failing to comply.

The DMA has also imposed interoperability between messaging apps WhatsApp and Messenger and competitors who request it.

The EU fined Meta 200 million euros in April over its “pay or consent” system, finding that it violated rules on the use of personal data on Facebook and Instagram.

Failure to comply with the DMA can carry fines in the billions of dollars, reaching 20 percent of global turnover for repeat offenders.



– GDPR and AI –



The EU’s data protection rules (GDPR) have also tripped up US tech giants, with Brussels issuing numerous fines since they came into force in 2018.

The rules require firms to seek the consent of users to collect personal data and to explain what it will be used for, and give users the right to ask firms to delete personal data.

Fines for the most serious violations can go as high as 20 million euros or four percent of a company’s global turnover, whichever is higher.
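
To make that ceiling concrete, here is a minimal sketch (illustrative only, not an official calculator) of how the cap scales with company size, on the reading that the applicable maximum is whichever of the two figures is higher:

    # Python sketch: upper bound of a GDPR fine for the most serious infringements.
    # Assumption: the cap is 20 million euros or 4% of worldwide annual turnover,
    # whichever is higher; actual fines are set case by case by regulators.
    def gdpr_max_fine(global_turnover_eur: float) -> float:
        return max(20_000_000.0, 0.04 * global_turnover_eur)

    # Hypothetical example: a firm with 10 billion euros in global turnover
    print(f"{gdpr_max_fine(10e9):,.0f} euros")  # 400,000,000 euros

For a large multinational, the turnover-based arm of the cap is the binding one, which is why the theoretical maximum can sit far above 20 million euros.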

The EU has also adopted its AI Act, which will gradually bring in guardrails on the use of artificial intelligence in high-risk areas such as security, health and civic rights. In the face of pressure from the industry, the EU is considering weakening the measures and delaying their implementation.


AI overestimates how smart people are, according to HSE economists



National Research University Higher School of Economics





Scientists at HSE University have found that current AI models, including ChatGPT and Claude, tend to overestimate the rationality of their human opponents—whether first-year undergraduate students or experienced scientists—in strategic thinking games, such as the Keynesian beauty contest. While these models attempt to predict human behaviour, they often end up playing 'too smart' and losing because they assume a higher level of logic in people than is actually present. The study has been published in the Journal of Economic Behavior & Organization.

In the 1930s, British economist John Maynard Keynes developed the theoretical concept of a metaphorical beauty contest. A classic example involves newspaper readers being asked to select the six most attractive faces from a set of 100 photos. The prize is awarded to the participant whose choices are closest to the most popular selection—that is, the average of everyone else’s picks. Typically, people tend to choose the photos they personally find most attractive. However, they often lose, because the actual task is to predict which faces the majority of respondents will consider attractive. A rational participant, therefore, should base their choices on other people’s perceptions of beauty. Such experiments test the ability to reason across multiple levels: how others think, how rational they are, and how deeply they are likely to anticipate others’ reasoning.

Dmitry Dagaev, Head of the Laboratory of Sports Studies at the Faculty of Economic Sciences, together with colleagues Sofia Paklina and Petr Parshakov from HSE University–Perm and Iuliia Alekseenko from the University of Lausanne, Switzerland, set out to investigate how five of the most popular AI models—including ChatGPT-4o and Claude-Sonnet-4—would perform in such an experiment. The chatbots were instructed to play Guess the Number, one of the most well-known variations of the Keynesian beauty contest.

According to the rules, all participants simultaneously and independently choose a number between 0 and 100. The winner is the one whose number is closest to half (or two-thirds, depending on the experiment) of the average of all participants’ choices. In this contest, more experienced players attempt to anticipate the behaviour of others in order to select the optimal number. To investigate how a large language model (LLM) would perform in the game, the authors replicated the results of 16 classic Guess the Number experiments previously conducted with human participants by other researchers. For each round, the LLMs were given a prompt explaining the rules of the game and a description of their opponents—ranging from first-year economics undergraduates and academic conference participants to individuals with analytical or intuitive thinking, as well as those experiencing emotions such as anger or sadness. The LLM was then asked to choose a number and explain its reasoning. 
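
The structure of the game is simple enough to sketch in code. The snippet below is a hypothetical illustration (not the authors’ code) of level-k reasoning in the two-thirds-of-the-average variant: a level-0 player anchors at the midpoint of 50, and each higher level best-responds to players one level below. A player who reasons many levels deep, as the study suggests the LLMs tend to, ends up near zero and loses to a pool of less sophisticated opponents.

    # Python sketch of the Guess the Number game with level-k players (illustrative).
    def level_k_choice(k: int, p: float = 2 / 3, anchor: float = 50.0) -> float:
        # A level-0 player anchors at 50; a level-k player picks p**k * 50,
        # assuming everyone else reasons at level k-1.
        return (p ** k) * anchor

    def play_round(choices: list[float], p: float = 2 / 3) -> tuple[float, int]:
        # The target is p times the average of all choices; the closest choice wins.
        target = p * sum(choices) / len(choices)
        winner = min(range(len(choices)), key=lambda i: abs(choices[i] - target))
        return target, winner

    # Hypothetical opponent pool: mostly level-1 and level-2 reasoners, roughly
    # what classic experiments report for student subjects.
    opponents = [level_k_choice(k) for k in (1, 1, 1, 2, 2, 3)]
    # An over-sophisticated player reasoning six levels deep picks close to 0.
    deep_reasoner = level_k_choice(6)
    choices = opponents + [deep_reasoner]
    target, winner = play_round(choices)
    print(round(target, 1), round(choices[winner], 1))  # the level-3 choice wins

In this toy run the target lands around 16, so the moderately sophisticated level-3 choice (about 14.8) beats the near-zero pick, mirroring the 'playing too smart' effect described above.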

The study found that LLMs adjusted their choices based on the social, professional, and age characteristics of their opponents, as well as the latter’s knowledge of game theory and cognitive abilities. For example, when playing against participants of game theory conferences, the LLM tended to choose a number close to 0, reflecting the choices that typically win in such a setting. In contrast, when playing against first-year undergraduates, the LLM expected less experienced players and selected a significantly higher number.

The authors found that LLMs are able to adapt effectively to opponents with varying levels of sophistication, and their responses also displayed elements of strategic thinking. However, the LLMs were unable to identify a dominant strategy in a two-player game. 

The Keynesian beauty contest has long been used to explain price fluctuations in financial markets: brokers do not base their decisions on what they personally would buy, but on how they expect other market participants to value a stock. The same principle applies here—success depends on the ability to anticipate the preferences of others.

'We are now at a stage where AI models are beginning to replace humans in many operations, enabling greater economic efficiency in business processes. However, in decision-making tasks, it is often important to ensure that LLMs behave in a human-like manner. As a result, there is a growing number of contexts in which AI behaviour is compared with human behaviour. This area of research is expected to develop rapidly in the near future,' Dagaev emphasised.

The study was conducted with support from HSE University's Basic Research Programme.

Survey reveals ethical gaps slowing AI adoption in pediatric surgery




Zhejiang University


Image: Ethical concerns in the use of artificial intelligence (AI) in pediatric surgery practice among study participants. Credit: World Journal of Pediatric Surgery (WJPS)






Artificial intelligence (AI) is rapidly advancing across modern healthcare, yet its role in pediatric surgery remains limited and ethically complex. This study reveals that although surgeons recognize AI’s potential to enhance diagnostic precision, streamline planning, and support clinical decision-making, its practical use is still rare and mostly academic. Pediatric surgeons expressed strong concerns about accountability in the event of AI-related harm, the difficulty of obtaining informed consent for children, the risk of data privacy breaches, and the possibility of algorithmic bias. By examining pediatric surgeons’ experiences and perceptions, this study highlights the critical barriers that must be addressed before AI can be safely and responsibly integrated into pediatric surgical care.

Around the world, AI is reshaping how medical data are interpreted, how risks are predicted, and how complex decisions are supported. Yet pediatric surgery faces unique ethical challenges due to children’s limited autonomy, the need for parental decision-making, and the heightened sensitivity of surgical risks. In low-resource settings, concerns about infrastructure, data representativeness, and regulatory preparedness further complicate adoption. Pediatric surgeons must balance innovation with the obligation to protect vulnerable patients and maintain trust. These pressures intensify debates around transparency, fairness, and responsibility in the use of AI tools. It is against this backdrop that deeper research is needed to guide the ethical and practical integration of AI in pediatric surgical care.

A national team of pediatric surgeons from the Federal Medical Centre in Umuahia, Nigeria, has released the first comprehensive survey examining how clinicians perceive the ethical and practical implications of integrating AI into pediatric surgical care. Published (DOI: 10.1136/wjps-2025-001089) on 20 October 2025 in the World Journal of Pediatric Surgery (WJPS), the study gathered responses from surgeons across all six geopolitical zones to assess levels of AI awareness, patterns of use, and key ethical concerns. The findings reveal a profession cautiously weighing AI’s potential benefits against unresolved questions regarding accountability, informed consent, data privacy, and regulatory readiness.

The study analyzed responses from 88 pediatric surgeons, most of whom were experienced consultants actively practicing across diverse clinical settings. Despite global momentum in AI-enabled surgical innovation, only one-third of respondents had ever used AI, and their use was largely restricted to tasks such as literature searches and documentation rather than clinical applications. Very few reported using AI for diagnostic support, imaging interpretation, or surgical simulation, highlighting a substantial gap between emerging technological capabilities and everyday pediatric surgical practice.

Ethical concerns were nearly universal. Surgeons identified accountability for AI-related errors, the complexity of securing informed consent from parents or guardians, and the vulnerability of patient data as major sources of hesitation. Concerns also extended to algorithmic bias, reduced human oversight, and unclear legal responsibilities in the event of harm. Opinions on transparency with families were divided. While many supported informing parents about AI involvement, others felt disclosure was unnecessary when AI did not directly influence clinical decisions.

Most respondents expressed low confidence in existing legal frameworks governing AI use in healthcare. Many called for stronger regulatory leadership, clearer guidelines, and standardized training to prepare pediatric surgeons for future AI integration. Collectively, the findings underscore an urgent need for structured governance and capacity building.

“The results show that pediatric surgeons are not opposed to AI—they simply want to ensure it is safe, fair, and well regulated,” the research team explained. “Ethical challenges such as accountability, informed consent, and data protection must be addressed before clinicians can confidently rely on AI in settings involving vulnerable children. Clear national guidelines, practical training programs, and transparent standards are essential to ensure that AI becomes a supportive tool rather than a source of uncertainty in pediatric surgical care.”

The study underscores the need for pediatric-specific ethical frameworks, clearer consent procedures, and well-defined accountability mechanisms for AI-assisted care. Strengthening data governance, improving digital infrastructure, and expanding AI literacy among clinicians and families will be essential for building trust. As AI continues to enter surgical practice, these measures offer a practical roadmap for integrating innovation while safeguarding child safety and public confidence.


References

DOI: 10.1136/wjps-2025-001089

Original source URL: https://doi.org/10.1136/wjps-2025-001089

About World Journal of Pediatric Surgery 

World Journal of Pediatric Surgery (WJPS), founded in 2018, is an open-access, peer-reviewed journal in the field of pediatric surgery. It is sponsored by Zhejiang University and the Children’s Hospital, Zhejiang University School of Medicine, and published by BMJ Group. WJPS aims to be a leading international platform for advances in pediatric surgical research and practice. Indexed in PubMed, ESCI, Scopus, CAS, DOAJ, and CSCD, WJPS has a current impact factor (IF) of 1.3 (Q3), a CiteScore of 1.5, and an estimated 2025 IF of approximately 2.0.

Inside Chernobyl, Ukraine scrambles to repair radiation shield

By AFP
December 25, 2025


The 1986 meltdown at Chernobyl was the world's worst ever nuclear power plant incident - Copyright AFP Glody MURHABAZI


Sergii VOLSKYI and Tetiana DZHAFAROVA

Inside an abandoned control room at Ukraine’s Chernobyl nuclear power plant, a worker in an orange hardhat gazed at a grey wall of seemingly endless dials, screens and gauges that were supposed to prevent disaster.

The 1986 meltdown at the site was the world’s worst ever nuclear incident. Since Russia invaded in 2022, Kyiv has feared that another disaster could be just a matter of time.

In February, a Russian drone hit and left a large hole in the New Safe Confinement (NSC), the outer of two radiation shells covering the remnants of the nuclear power plant.

It functions as a modern, high-tech replacement for the inner steel-and-concrete structure known as the Sarcophagus, a defensive layer built hastily after the 1986 incident.

Ten months later, repair work is still ongoing, and it could take another three to four years before the outer dome regains its primary safety functions, plant director Sergiy Tarakanov told AFP in an interview from Kyiv.

“It does not perform the function of retaining radioactive substances inside,” Tarakanov said, echoing concerns raised by the International Atomic Energy Agency.

The strike also left it unclear whether the shell will last the 100 years it was designed for.

The gaping crater in the structure, which AFP journalists saw this summer, has been covered over with a protective screen, but 300 smaller holes made by firefighters when battling the blaze still need to be filled in.

Scaffolding engulfs the inside of the giant multi-billion-dollar structure, rising all the way up to the 100-metre-high ceiling.

Charred debris from the drone strike that hit the NSC still lay on the floor of the plant, AFP journalists saw on a visit to the site in December.



– ‘Main threat’ –



Russia’s army captured the plant on the first day of its 2022 invasion, before withdrawing a few weeks later.

Ukraine has repeatedly accused Moscow of targeting Chernobyl and its other nuclear power plants, saying the strikes risk triggering a potentially catastrophic disaster.

Ukraine regularly reduces power at its nuclear plants following Russian strikes on its energy grid.

In October, a Russian strike on a substation near Chernobyl cut power flowing to the confinement structure.

Tarakanov told AFP that radiation levels at the site had remained “stable and within normal limits”.

Inside a modern control room, engineer Ivan Tykhonenko was keeping track of 19 sensors and detection units, constantly monitoring the state of the site.

Part of the 190 tonnes of uranium that were on site in 1986 “melted, sank down into the reactor unit, the sub-reactor room, and still exists,” he told AFP.

Worries over the fate of the site — and what could happen — run high.

Another Russian hit — or even a powerful nearby strike — could see the inner radiation shell collapse, director Tarakanov told AFP.

“If a missile or drone hits it directly, or even falls somewhere nearby … it will cause a mini-earthquake in the area,” he said.

“No one can guarantee that the shelter facility will remain standing after that. That is the main threat,” he added.


Russian strike could collapse Chernobyl shelter: plant director


By AFP
December 23, 2025


Kyiv has accused Russia of repeatedly targeting the Chernobyl site throughout the invasion - Copyright AFP Farooq NAEEM

A Russian strike could collapse the internal radiation shelter at the defunct Chernobyl nuclear power station in Ukraine, the plant’s director has told AFP.

Kyiv has accused Russia of repeatedly targeting the facility, the site of a 1986 meltdown that is still the world’s worst ever nuclear disaster, since Moscow invaded in February 2022.

A hit earlier this year punched a hole in the outer radiation shell, triggering a warning from the International Atomic Energy Agency (IAEA) that it had “lost its primary safety functions.”

In an interview with AFP, plant director Sergiy Tarakanov said fully restoring that shelter could take three to four years, and warned that another Russian hit could see the inner shell collapse.

“If a missile or drone hits it directly, or even falls somewhere nearby, for example, an Iskander, God forbid, it will cause a mini-earthquake in the area,” Tarakanov told AFP in an interview conducted last week.

The Iskander is Russia’s short-range ballistic missile system, which can carry a variety of conventional warheads, including those designed to destroy bunkers.

“No one can guarantee that the shelter facility will remain standing after that. That is the main threat,” he added.

The remnants of the nuclear power plant are covered by an inner steel-and-concrete radiation shell — known as the Sarcophagus and built hastily after the disaster — and a modern, high-tech outer shell, called the New Safe Confinement (NSC) structure.

The roof of the NSC was severely damaged in a Russian drone strike in February, which caused a major fire in the outer cladding of the steel structure.

“Our NSC has lost several of its main functions. And we understand that it will take us at least three or four years to restore these functions,” Tarakanov added.

The IAEA said earlier this month that an inspection mission found the shelter had “lost its primary safety functions, including the confinement capability”, but also found that there was no permanent damage to its load-bearing structures or monitoring systems.

Director Tarakanov told AFP that radiation levels at the site remained “stable and within normal limits.”

The hole caused by the drone hit has been covered with a protective screen, he said, but 300 smaller holes made by firefighters when battling the blaze still need to be filled in.

Russia’s army captured the plant at the start of its 2022 invasion, before withdrawing a few weeks later.