
Thursday, January 15, 2026

A robot learns to lip sync



Columbia Engineers build a robot that learns to lip sync to speech and song.


Columbia University School of Engineering and Applied Science

Image: Hod Lipson and his team have created a robot that, for the first time, is able to learn facial lip motions for tasks such as speech and singing. Credit: Jane Nisselson/Columbia Engineering





New York, NY—Jan. 14, 2026—Almost half of our attention during face-to-face conversation focuses on lip motion. Yet, robots still struggle to move their lips correctly. Even the most advanced humanoids make little more than muppet mouth gestures – if they have a face at all. 

We humans attribute outsized importance to facial gestures in general, and to lip motion in particular. While we may forgive a funny walking gait or an awkward hand motion, we remain unforgiving of even the slightest facial malgesture. This high bar is known as the “Uncanny Valley.” Robots oftentimes look lifeless, even creepy, because their lips don't move. But that is about to change.

A Columbia Engineering team announced today that they have created a robot that, for the first time, is able to learn facial lip motions for tasks such as speech and singing. In a new study published in Science Robotics, the researchers demonstrate how their robot used its abilities to articulate words in a variety of languages, and even sing a song from its AI-generated debut album “hello world_.”

The robot acquired this ability through observational learning rather than via rules. It first learned how to use its 26 facial motors by watching its own reflection in the mirror before learning to imitate human lip motion by watching hours of YouTube videos. 

“The more it interacts with humans, the better it will get,” promised Hod Lipson, James and Sally Scapa Professor of Innovation in the Department of Mechanical Engineering and director of Columbia’s Creative Machines Lab, where the work was done.

Robot watches itself talking 

Achieving realistic robot lip motion is challenging for two reasons: First, it requires specialized hardware containing a flexible facial skin actuated by numerous tiny motors that can work quickly and silently in concert. Second, the specific pattern of lip dynamics is a complex function dictated by sequences of vocal sounds and phonemes. 

Human faces are animated by dozens of muscles that lie just beneath a soft skin and sync naturally to vocal cords and lip motions. By contrast, humanoid faces are mostly rigid, operating with relatively few degrees of motion, and their lip movement is choreographed according to rigid, predefined rules. The resulting motion is stilted, unnatural, and uncanny.

In this study, the researchers overcame these hurdles by developing a richly actuated, flexible face and then allowing the robot to learn how to use its face directly by observing humans. First, they placed a robotic face equipped with 26 motors in front of a mirror so that the robot could learn how its own face moves in response to muscle activity. Like a child making faces in a mirror for the first time, the robot made thousands of random facial expressions and lip gestures. Over time, it learned how to move its motors to achieve particular facial appearances, an approach called a “vision-language-action” (VLA) model.
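How such self-modeling might work is easiest to see in miniature. The sketch below is a minimal, hypothetical illustration (not the authors' code), assuming PyTorch, a 68-point 2-D facial landmark representation, and a simulated "mirror" camera: the robot babbles random motor commands, fits a forward model from motors to observed landmarks, and then inverts that model by gradient descent to find commands that produce a target expression.

```python
# Hypothetical sketch of mirror-based self-modeling (assumed, not the authors' code).
import torch
import torch.nn as nn

N_MOTORS = 26          # the robot's 26 facial motors (from the article)
N_LANDMARKS = 2 * 68   # 68 2-D face landmarks -- an assumed representation

forward_model = nn.Sequential(          # learns motor commands -> facial appearance
    nn.Linear(N_MOTORS, 256), nn.ReLU(),
    nn.Linear(256, N_LANDMARKS),
)

def observe_landmarks(motors: torch.Tensor) -> torch.Tensor:
    """Stand-in for the mirror camera: maps motor commands to landmarks.
    A fixed random linear 'face' keeps the demo self-contained."""
    g = torch.Generator().manual_seed(0)
    W = torch.randn(N_MOTORS, N_LANDMARKS, generator=g)
    return motors @ W

# 1) Motor babbling: thousands of random expressions, observed in the mirror.
cmds = torch.rand(5000, N_MOTORS)
obs = observe_landmarks(cmds)

# 2) Fit the forward self-model by simple regression.
opt = torch.optim.Adam(forward_model.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    nn.functional.mse_loss(forward_model(cmds), obs).backward()
    opt.step()

# 3) Invert the model: search for motor commands matching a target expression.
target = observe_landmarks(torch.rand(1, N_MOTORS))
motors = torch.rand(1, N_MOTORS, requires_grad=True)
inv_opt = torch.optim.Adam([motors], lr=0.05)
for _ in range(500):
    inv_opt.zero_grad()
    nn.functional.mse_loss(forward_model(motors), target).backward()
    inv_opt.step()
    with torch.no_grad():
        motors.clamp_(0.0, 1.0)   # respect motor command limits
```

In the real system, the observation function would be a camera watching the robot's actual face and the model far richer; the babble-fit-invert loop is the core idea.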

Then, the researchers placed the robot in front of recorded videos of humans talking and singing, giving the AI that drives the robot an opportunity to learn exactly how humans’ mouths moved in the context of the various sounds they emitted. With these two models in hand, the robot’s AI could now translate audio directly into lip motor action.
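As a rough sketch of how the two learned pieces could compose at playback time (an assumption-laden illustration, not the study's implementation): an audio-to-landmark network, of the kind one might train on talking-head video, predicts a lip-landmark trajectory from audio features, and a landmark-to-motor network standing in for the inverted self-model converts each frame into motor commands. All module names and dimensions here are hypothetical.

```python
# Hypothetical pipeline sketch: chaining the two learned models so audio drives motors.
import torch
import torch.nn as nn

N_MELS, N_LANDMARKS, N_MOTORS = 80, 2 * 68, 26   # assumed dimensions

class AudioToLips(nn.Module):
    """Audio -> lip landmarks, as might be learned from videos of people
    talking (assumed mel-spectrogram input; a GRU adds temporal context)."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_MELS, 256, batch_first=True)
        self.head = nn.Linear(256, N_LANDMARKS)

    def forward(self, mel):                # mel: (batch, frames, N_MELS)
        h, _ = self.rnn(mel)
        return self.head(h)                # (batch, frames, N_LANDMARKS)

# Landmarks -> motors: stands in for the inverted mirror-stage self-model,
# distilled into one forward pass so it can run per frame at playback time.
lips_to_motors = nn.Sequential(
    nn.Linear(N_LANDMARKS, 256), nn.ReLU(), nn.Linear(256, N_MOTORS)
)

def lip_sync(mel: torch.Tensor) -> torch.Tensor:
    """Audio features in, one motor command vector per video frame out."""
    landmarks = AudioToLips()(mel)                    # untrained here; illustrative only
    return torch.sigmoid(lips_to_motors(landmarks))   # squash to [0, 1] motor range

motor_trajectory = lip_sync(torch.randn(1, 120, N_MELS))   # ~4 s of audio at 30 fps
print(motor_trajectory.shape)                               # torch.Size([1, 120, 26])
```

Because both mappings are learned rather than rule-based, the same pipeline would in principle handle any language or song the audio model has been exposed to.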

The researchers tested this ability using a variety of sounds, languages, and contexts, as well as some songs. Without any specific knowledge of the audio clips' meaning, the robot was then able to move its lips in sync.

The researchers acknowledge that the lip motion is far from perfect. “We had particular difficulties with hard sounds like ‘B’ and with sounds involving lip puckering, such as ‘W’. But these abilities will likely improve with time and practice,” Lipson said. 

More important, however, is seeing lip syncing as part of a more holistic robot communication ability.

“When the lip sync ability is combined with conversational AI such as ChatGPT or Gemini, the effect adds a whole new depth to the connection the robot forms with the human,” explained Yuhang Hu, who led the study for his PhD. “The more the robot watches humans conversing, the better it will get at imitating the nuanced facial gestures we can emotionally connect with.” 

“The longer the context window of the conversation, the more context-sensitive these gestures will become,” he added. 

The missing link of robotic ability

The researchers believe that facial affect is the ‘missing link’ of robotics. 

“Much of humanoid robotics today is focused on leg and hand motion, for activities like walking and grasping,” said Lipson. “But facial affect is equally important for any robotic application involving human interaction.”

Lipson and Hu predict that warm, lifelike faces will become increasingly important as humanoid robots find applications in areas such as entertainment, education, medicine, and even elder care. Some economists predict that over a billion humanoids will be manufactured in the next decade.

“There is no future where all these humanoid robots don’t have a face. And when they finally have a face, they will need to move their eyes and lips properly, or they will forever remain uncanny,” Lipson estimates.

“We humans are just wired that way, and we can’t help it. We are close to crossing the uncanny valley,” added Hu.

Risks and limits

This work is part of Lipson’s decade-long quest to find ways to make robots connect more effectively with humans, through mastering facial gestures such as smiling, gazing, and speaking. He insists that these abilities must be acquired by learning, rather than being programmed using stiff rules. 

“Something magical happens when a robot learns to smile or speak just by watching and listening to humans,” he said. “I’m a jaded roboticist, but I can’t help but smile back at a robot that spontaneously smiles at me.”

Hu explained that human faces are the ultimate interface for communication, and we are beginning to unlock their secrets.

“Robots with this ability will clearly have a much better ability to connect with humans because such a significant portion of our communication involves facial body language, and that entire channel is still untapped,” Hu said. 

The researchers are aware of the risks and controversies surrounding granting robots greater ability to connect with humans. 

“This will be a powerful technology. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks,” Lipson said. 


[Video: Lip Syncing Robot]


Wednesday, January 14, 2026

'Disgrace': Furor as Pete Hegseth's Pentagon partners with Elon Musk

Stephen Prager, Common Dreams
January 13, 2026


Elon Musk and U.S. Defence Secretary Pete Hegseth laugh at the Pentagon in Washington, D.C., U.S., March 21, 2025 in this screengrab obtained from a video. REUTERS/Idrees Ali

Elon Musk, the world’s richest man and the owner of the social media app X, has faced a mountain of outrage in recent weeks as his platform’s artificial intelligence chatbot “Grok” has been used to generate sexualized deepfake images of non-consenting women and children, and Musk himself has embraced open white nationalism.

But none of this seems to be of particular concern to Defense Secretary Pete Hegseth. Despite the swirl of scandal, he announced on Monday that Musk’s chatbot would be given intimate access to reams of military data as part of what the department described as its new “AI acceleration strategy.”

During a speech at the headquarters of SpaceX, another company owned by Musk, Hegseth stood alongside the billionaire and announced that later this month, the department plans to “make all appropriate data” from the military’s IT systems available for “AI exploitation,” including “combat-proven operational data from two decades of military and intelligence operations.”

As the Associated Press noted, it’s a departure from the more cautious approach the Biden administration took toward integrating AI with the military, which included bans on certain uses “such as applications that would violate constitutionally protected civil rights or any system that would automate the deployment of nuclear weapons.”

While it’s unclear if those bans remain in place under President Donald Trump, Hegseth said during the speech he will seek to eschew the use of any AI models “that won’t allow you to fight wars” and will seek to act “without ideological constraints that limit lawful military applications,” before adding that the Pentagon’s AI will not be “woke” or “equitable.”

He added that the department “will unleash experimentation, eliminate bureaucratic barriers, focus our investments, and demonstrate the execution approach needed to ensure we lead in military AI,” and that “we will become an ‘AI-first’ warfighting force across all domains.”

Hegseth’s embrace of Musk hardly comes as a surprise, given Musk’s role in the Trump administration’s dismantling of the administrative state as head of its so-called “Department of Government Efficiency” (DOGE) last year, and his record $290 million in support of the president’s 2024 election campaign.

But it is quite noteworthy given the type of notoriety Grok has received of late after it introduced what it called “spicy mode” for the chatbot late last year, which “allows users to digitally remove clothing from images and has been deployed to produce what amounts to child pornography—along with other disturbing behavior, such as sexualizing the deputy prime minister of Sweden,” according to a report last month from MS NOW (formerly MSNBC).

It’s perhaps the most international attention the bot has gotten, with the United Kingdom’s media regulator launching a formal investigation on Monday to determine whether Grok violated the nation’s Online Safety Act by failing to protect users from illegal content, including child sexual abuse material.

The investigation could result in fines, which, if not paid, could lead to the chatbot being banned, as it was over the weekend in Malaysia and Indonesia. Authorities in the European Union, France, Brazil, and elsewhere are also reviewing the app for its spread of nonconsensual sexual images, according to the New York Times.

It’s only the latest scandal involving Grok, which Musk pitched as an “anti-woke” and “truth-seeking” alternative to applications like ChatGPT and Google’s Gemini.

At several points last year, the chatbot drew attention for its sudden tendency to launch into racist and antisemitic tirades—praising Adolf Hitler, accusing Jewish people of controlling Hollywood and the government, and promoting Holocaust denial.

Before that, users were baffled when the bot began directing unrelated queries about everything from cats to baseball back to discussions about Musk’s factually dubious pet theory of “white genocide” in South Africa, which the chatbot later revealed it was “instructed” to talk about.

Hegseth’s announcement on Monday also comes as Musk has completed his descent into undisguised support for a white nationalist ideology over the past week.

The billionaire’s steady lurch to the far right has been a years-long process—capped off last year with his enthusiastic support for the neofascist Alternative for Germany party and an apparent Nazi salute at Trump’s second inauguration.

But his racist outlook was left impossible to deny last week when he expressed support for a pair of posts on X stating that white people must “reclaim our nations” or “be conquered, enslaved, raped, and genocided” and that “if white men become a minority, we will be slaughtered,” necessitating “white solidarity.”

While details about the expansiveness of Grok’s use by the military remain scarce, Musk’s AI platform, xAI, announced in July that it had inked a deal with the Pentagon worth nearly $200 million (notably just a week after the bot infamously referred to itself as “MechaHitler”).

In September, reportedly following direct pressure from the White House to roll it out “ASAP,” the General Services Administration announced a “OneGov” agreement, making Grok available to every federal agency for just $0.42 apiece.

That same month, Sen. Elizabeth Warren (D-Mass.) sent a letter to Hegseth warning that Musk, who’d also used Grok extensively under DOGE to purge disloyal government employees, was “gaining improper advantages from unique access to DOD data and information.” She added that Grok’s propensity toward “inaccurate outputs and misinformation” could “harm DOD’s strategic decisionmaking.”

Following this week’s announcement, JB Branch, the Big Tech accountability advocate at Public Citizen, said on Tuesday that “allowing an AI system with Grok’s track record of repeatedly generating nonconsensual sexualized images of women and children to access classified military or sensitive government data raises profound national security, civil rights, and public safety concerns.”

“Deploying Grok across other areas of the federal government is worrying enough, but choosing to use it at the Pentagon is a national security disgrace,” he added. “If an AI system cannot meet basic safety and integrity standards, expanding its reach to include classified data puts the American public and our nation’s safety at risk.”



Faking It ‘Til We Break It


“Video showing Maduro allegedly torturing Venezuelan dissidents is going viral, with 15 million views and 81k likes already. The only problem? It is actually a scene from a movie.” This tweet from journalist Alan Macleod captured just one droplet in a flood of disinformation following the U.S. capture of Nicolás Maduro. As The Guardian noted, AI-generated images of the event reaped millions of views, instantly saturating social media with fiction.

This cycle repeated days later when Renee Good was killed by an ICE agent in Minnesota. NPR reported that “AI images and internet rumors spread confusion,” fueling a rush to judgment that bypassed all evidence. Rather than awaiting verified facts, the public retreated into partisan scripts: Democrats immediately condemned the agency, while Republicans vilified the victim. This reflexive tribalism illustrates a nation that has abandoned the patience of critical analysis in favor of viral, evidence-free outrage, and that rush to judgment is now weaponized by a new tool of deception: the deepfake.

The emergence of deepfakes has introduced a volatile new dimension to the challenge of disinformation. This technology preys upon systemic vulnerabilities: a widespread lack of media literacy, a ‘greed is good’ hyper-individualism, and a techno-utopian ethos that prioritizes innovation over decency, truth, and social cohesion. The result is a fractured digital landscape where ethically bankrupt creators and profit-driven platforms engineered for engagement oversee the steady demise of civil society. This marriage of cutting-edge deception and ancient tribalism has created a perfect storm where the most successful lie wins and the truth is buried under a million algorithmically-driven clicks. To survive this era of manufactured outrage, we must move beyond passive consumption and demand systemic accountability for the engines of our deception.

The Mechanics of Deception

So-called AI is just the latest tool in the Big Tech shed that fosters and incentivizes propaganda. Studies have long shown that falsehoods spread more widely than truth on social media, not because of individual behavior, but because Big Tech platforms are engineered to incentivize the spread of false and misleading content.

AI has complicated the problem of disinformation. Seventy years ago, AI was envisioned as the pursuit of human-like cognitive reasoning; today, the label is frequently marketed to describe technologies that bear little resemblance to those original intellectual ambitions. Modern systems of so-called AI rely primarily on massive datasets and statistical pattern recognition. As a result, they are far from intelligent and limited in their capacity. Indeed, AI bots are prone to getting things wrong and fabricating information: one study found that AI bot summaries of news content were inaccurate 45% of the time, while other studies have found AI fabricating information anywhere from 66% to over 80% of the time.

Beyond deploying bots that circulate misinformation, Big Tech has released AI tools that empower average users to produce highly convincing, yet entirely fabricated, content with ease. For example, more than 20 percent of videos shown to new YouTube users are “AI slop”: low-quality, mass-produced, algorithmically generated content designed to maximize clicks and watch time rather than inform. These types of deepfakes shaped audience interpretations of recent conflicts such as Israel-Gaza and Russia-Ukraine.

After Maduro’s extraordinary rendition, the internet was quickly flooded with AI-generated content designed to persuade Americans that his capture was an act of justice welcomed by the Venezuelan people. These posts showed Venezuelans supposedly “crying on their knees” to thank President Donald Trump for their liberation. One such video, flagged by Ben Norton, racked up 5 million views.

Similarly, after Good was shot and killed by ICE on January 7, 2026, deepfakes circulated online falsely claiming to reveal the face of the agent involved.

However, the images did not depict the actual agent, Jonathan Ross, an Iraq War veteran with decades of experience in immigration and border enforcement, who had been wearing a mask at the time of the incident.

Ross was not the only one falsely identified; immediately after the shooting, photos circulated online claiming to be of Renee Good. In reality, the images were a confusing mix of a former WWE wrestler and another woman who had previously participated in a poetry contest with the actual victim. The digital desecration continued as users weaponized AI to undress an old photo of Good and manipulate images of her lifeless body, generating deepfakes that placed the victim in a bikini even as she lay at the scene of the shooting.

The Shield of Immunity: Section 230 and Beyond

At a time when approximately 90% of U.S. citizens have access to smartphones and 62% use so-called AI, the U.S. government has largely allowed Big Tech platforms and devices to remain unregulated. Indeed, U.S. policy has favored the tech industry for decades, prioritizing immense corporate profits over meaningful accountability for the societal impacts of these platforms.

Thanks to Section 230 of the Communications Decency Act (CDA), which protects internet platforms from legal liability for content users post, and a religious devotion to the idea that “what’s good for tech is good for America,” these companies enjoy near-total immunity. Similarly, Trump recently signed an Executive Order shielding the industry from AI regulations at the state level. These actions stem from a shared conviction among Big Tech and its allies in government that regulation is the fundamental enemy of progress and innovation. The few regulations that do exist typically place the burden on users rather than on platforms, such as requiring individuals to show identification, which further contributes to the surveillance mechanisms that define these tools.

Classroom Capture: Big Tech’s Educational Influence

In the absence of a robust regulatory framework, many argue that media literacy education offers the most promise for mitigating the influence of misinformation on the public. In the U.S., media literacy is broadly defined as “the ability to access, analyze, evaluate, create, and act using all forms of communication.” Indeed, media literacy education has been associated with a reduction in users accepting disinformation. However, the lack of a central education authority to establish a national curriculum, combined with opposition from traditionalists resistant to new media in the classroom, has significantly hindered the spread of media literacy education. Most Americans lack access to a formal media literacy education, even though more than three-fourths of the population believe it is a critical skill that everyone should develop.

While efforts to integrate media literacy into the classroom are growing, they are increasingly dominated by the very companies they should be critiquing. Big Tech has leveraged the vast wealth gained from harvesting user data to exert an outsized role in shaping educational standards. By offering content and programs for classroom use, these corporations provide tools of “corporate indoctrination.” Their curricula emphasize the opportunities of technology while framing issues like “fake news” and bullying as individual moral failures, such as a lack of character or excessive screen time, rather than systemic results of the dopamine loops and profit models the industry intentionally built.

In contrast, critical media literacy scholars argue that a robust education must teach students to interrogate power dynamics, ownership, platform design, and profit motives. However, because their work challenges the industry, these scholars receive almost no corporate funding and must rely on nonprofits and volunteer labor.

Despite these concerns, educational institutions are leaning further into corporate partnerships. For example, the California State University system, the nation’s largest public university system with nearly half a million students, recently announced a $17 million deal with OpenAI, the creator of ChatGPT. This massive investment in Big Tech occurred even as the CSU system was simultaneously cutting faculty and staff positions across its 23 campuses.

From Social Capital to Content Creation

While Big Tech algorithms are complicit, they are not solely to blame. In the post-Cold War era, the United States concluded that capitalism had definitively triumphed over all other systems, treating the Cold War as the final, decisive contest of ideas. This ushered in a fundamental shift in the nation’s cultural and political compass, famously epitomized by the mantra from Oliver Stone’s Wall Street: “greed is good.” Researchers like Robert Putnam, in his influential work Bowling Alone, noted that this shift eroded the social capital and communal bonds essential for a functioning democracy. This hyper-individualistic context has profoundly shaped every generation since, leading to what Jean Twenge and W. Keith Campbell refer to as a “narcissism epidemic.” The nation has become so individualistic that even some figures associated with the left, which historically has championed collectivism, such as Matt Taibbi and Cenk Uygur, joined conservatives in outrage when New York Mayor Zohran Mamdani used the word “collectivism” in his inaugural speech this month.

The resulting culture of narcissistic individualism has engendered a generation of ethically hollow influencers and content creators who worship at the altar of Big Tech, sacrificing collective integrity for personal profit. As commentator Krystal Ball noted, the incentives for these creators are so perverse that even when videos, such as those regarding Maduro, are debunked, users continue to share them for engagement.

Perhaps the most egregious example is influencer Nick Shirley, who went viral for ‘exposing’ alleged fraud at daycare centers in Minnesota. Although the New York Times had previously reported on a legitimate multimillion-dollar fraud case in Minnesota, where the state and federal governments under the Biden administration were actively prosecuting individuals for embezzling childcare funds, Shirley fabricated his own distorted narrative. He produced videos alleging a deeper, hidden layer of corruption that the government was supposedly ignoring; however, he relied on a blend of fabricated evidence and baseless accusations to support a “cover-up” narrative that simply did not exist. Nonetheless, Shirley’s video garnered over 100 million views within a week across different platforms and was reposted by Vice President J.D. Vance and FBI Director Kash Patel.

Beyond merely inspiring copycat content, these viral fabrications were weaponized by the Trump administration to justify freezing federal funding in five Democratic-led states. Under the guise of addressing fraud and systemic misuse, the administration withheld billions of dollars earmarked for essential childcare and social services. Simultaneously, the Department of Homeland Security deployed as many as 2,000 federal agents to Minnesota in a massive law enforcement surge that resulted in Good’s death.

Shirley reflects a broader culture where viral lies are rewarded with wealth. In this environment, deception has become a viable business model because fraud no longer carries a social stigma when used for profit; instead, it is often celebrated. Just look at Elon Musk. He is one of the richest men on Earth and was a distributor of some of the fake online content following Maduro’s capture. At the same time, Musk is expanding his wealth in the age of AI with tools that spread baseless racist conspiracies such as the myth of white genocide in South Africa, a new version of Wikipedia that refers to Adolf Hitler simply as “The Führer,” and AI tools that enable users to create deepfake images undressing women and children.

Instead of being treated like a James Bond villain, Musk is worshipped as an aspirational figure. He embodies the ultimate “fake it ’til you make it” con man: a self-brander who convinced the world he was a self-made, intelligent inventor, when in fact he relied heavily on $38 billion in government funding and investments from his father, and piggybacked on the creations of truly brilliant inventors.

The Architecture of Hypocrisy: Why One Standard is No Longer Enough

The toxicity of narcissistic content creators and hyperpartisan figures seeking to expand their brands in the attention economy goes beyond the mere production of falsehoods; it is a symptom of a culture seemingly unable or unwilling to shame even the most glaring contradictions.

For instance, many conservatives backed Trump when he labeled the law enforcement officer who shot a woman during the January 6 Capitol riot a “thug,” yet his allies staunchly defended the agents involved in the Good incident. In fact, even before an official investigation had been launched, let alone concluded, Vice President J.D. Vance argued that Ross possessed “immunity.” Furthermore, while these same circles argue that individuals must take personal responsibility for their actions, rejecting the idea that Trump’s rhetoric created the context for January 6, they paradoxically blame leftists for creating the environment that led to Ross shooting Good.

Yet, even these double standards pale in comparison to the reaction following Charlie Kirk’s death in September 2025. In the ensuing months, conservatives frequently bemoaned a lack of empathy and decorum from the left, which criticized Kirk’s legacy of divisive rhetoric while his wife and loved ones were still grieving. However, those same voices, with notable exceptions such as Tucker Carlson, refused to extend that same grace or “decorum” to the wife and loved ones of Good.

Fox News Channel’s Jesse Watters dismissed Good’s claim that she was a poet and mocked her for listing pronouns in her online bio, a relatively mild attack compared to others. Without evidence, President Trump called Good a “professional agitator.” Secretary of Homeland Security Kristi Noem labeled her actions an “act of domestic terrorism.” Meanwhile, Vance described Good’s death as a “tragedy of her own making.”

They accompanied these baseless claims with rhetoric directly contradicted by witnesses and video. Trump falsely claimed that Good “viciously ran over” Ross, who was recovering in the hospital. In reality, Good’s car did not run over anyone, and Ross walked away from the scene unassisted. Relatedly, a Department of Homeland Security spokesperson, Tricia McLaughlin, falsely claimed that multiple ICE officers were hurt. McLaughlin also falsely accused Good of “stalking agents all day long, impeding our law enforcement.” In reality, video evidence reveals that Good had been on-site for only a few minutes. She had just dropped off her now-orphaned six-year-old child at school and was not blocking the road; in fact, cars can be clearly seen passing her vehicle throughout the footage. A crowd had gathered because an ICE vehicle was immobilized in the snow. Unlike the U.S. Postal Service, which is famously expected to operate in all weather conditions—under the creed, “Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds”—ICE was visibly struggling to function in the elements.

These fabrications and baseless accusations reveal a profound callousness toward Good’s grieving family, most notably her wife and orphaned child. This stands in stark, bitter contrast to the demands for decorum and empathy that conservatives issued following Charlie Kirk’s death. One would expect a civilized nation to extend basic sympathy whenever a citizen dies, whether they are shot during a chaotic political protest or killed and denied medical attention. Faced with such blatant double standards, the nation must finally direct Joseph Welch’s famous rebuke of McCarthyism toward itself: “Have you no sense of decency, sir, at long last? Have you left no sense of decency?”

Rhetorically, most claim to view these incidents as tragedies; in practice, however, they operate with blatant double standards: empathy for one side while displaying callousness, or even cruelty, toward the other. A more sophisticated culture would remember comedian George Carlin’s wisdom: “Let’s not have a double standard. One standard will do just fine.”

Conclusion

To restore the soul of a nation fractured by digital fabrication, there must be a collective refusal to continue this cycle of reflexive tribalism. Engaging in a perpetual war of “us versus them,” where truth is sacrificed for the sake of a partisan win, ensures that everyone loses, and the country remains a casualty of its own division. It is insanity to continue entrusting the national discourse to unregulated algorithms and narcissistic creators, expecting that more of the same will somehow yield a different, more unified result.

The time has come to demand a higher standard: one that prioritizes evidence over engagement and human decency over ideological dominance. By rejecting the lure of the deepfake and the ease of the echo chamber, a path can be cleared toward a more sophisticated culture, one that values critical analysis, insists on corporate accountability, and understands that without a single standard of truth and empathy, the foundations of a functioning democracy cannot hold.

Nolan Higdon is a Project Censored national judge, an author, and university lecturer at Merrill College and the Education Department at University of California, Santa Cruz. Read other articles by Nolan, or visit Nolan's website.