Wednesday, September 11, 2024

LONG READ

Trump, Twitter, and truth judgments: The effects of “disputed” tags and political knowledge on the judged truthfulness of election misinformation

Misinformation has sown distrust in the legitimacy of American elections. Nowhere has this been more concerning than in the 2020 U.S. presidential election wherein Donald Trump falsely declared that it was stolen through fraud. Although social media platforms attempted to dispute Trump’s false claims by attaching soft moderation tags to his posts, little is known about the effectiveness of this strategy. We experimentally tested the use of “disputed” tags on Trump’s Twitter posts as a means of curbing election misinformation. Trump voters with high political knowledge judged election misinformation as more truthful when Trump’s claims included Twitter’s disputed tags compared to a control condition. Although Biden voters were largely unaffected by these soft moderation tags, third-party and non-voters were slightly less likely to judge election misinformation as true. Finally, little to no evidence was found for meaningful changes in beliefs about election fraud or fairness. These findings raise questions about the effectiveness of soft moderation tags in disputing highly prevalent or widely spread misinformation.



Peer Reviewed

By John C. Blanchar

Department of Psychology, University of Minnesota Duluth, USA

By Catherine J. Norris

Department of Psychology, Swarthmore College, USA




Research Questions

  • Do soft moderation tags warning about “disputed” information influence the judged truthfulness of election misinformation alleged by Donald Trump following the 2020 U.S. presidential election?
  • Does the effectiveness of attaching “disputed” tags to Donald Trump’s election misinformation depend upon a person’s political knowledge or pre-existing belief about fraud?

Essay Summary

  • A sample of U.S. Americans (N = 1,078) was presented with four social media posts from Donald Trump falsely alleging election fraud in the weeks following the 2020 election. Participants were randomly assigned to the disputed tag or control condition, with only the former including soft moderation tags attached to each of Trump’s false allegations. Participants rated the truthfulness of each allegation and answered questions about election fraud and fairness. Individual differences in political knowledge and verbal ability were measured before the election.
  • There was little to no evidence that Twitter’s disputed tags decreased the judged truthfulness of election misinformation or meaningfully changed pre-existing beliefs in election fraud or fairness.
  • Trump voters with high political knowledge were more likely to perceive election misinformation as truthful when Donald Trump’s posts included disputed tags versus not.
  • Trump voters who were initially skeptical of election fraud in the 2020 election were more likely to judge election misinformation as truthful when Donald Trump’s posts included disputed tags.

Implications

In recent years, misinformation has undermined trust in the legitimacy of American democratic elections. Nowhere has this been more concerning than in the 2020 U.S. presidential election, which saw the sitting president, Donald Trump, falsely declare that the election was stolen through widespread fraud (Timm, 2020). This culminated in many hundreds of Trump’s supporters storming the U.S. Capitol building to stop certification of challenger Joe Biden’s victory (Zengerle et al., 2021). This is not a fringe belief (Blanchar & Norris, 2021); national polls indicate that majorities of Republicans and conservatives endorse the belief that Trump probably or definitely won the election (Ipsos, 2020; Pew Research Center, 2021).

Although social media companies like Twitter (now X) and Facebook attempted to dispute Trump’s false claims of election fraud by attaching soft moderation tags to his posts (Graham & Rodriguez, 2020), little is known about the effectiveness of this strategy. The present experiment tests the effectiveness of attaching “disputed” tags to Trump’s Twitter posts as a means of curbing election misinformation about voter fraud among U.S. Americans. We assessed individual differences in political knowledge (i.e., basic facts about American politics) and verbal ability to examine whether misinformation susceptibility depends on domain knowledge (Lodge & Taber, 2013; Tappin et al., 2021).

Disputing misinformation

A wide array of interventions for curbing misinformation has been applied with varying results (Ziemer & Rothmund, 2024). Among these, fact-checking approaches are the most common; they attempt to refute or dispute misleading or false information through tagging (or flagging), social invalidation, or expert corrections. Tagging simply involves labeling a claim as false or disputed, in contrast to more elaborate fact checks like social invalidation through corrective comments below a social media post or expert-based corrections that provide detailed rebuttals from professional entities or scientific organizations. However, because tagging misinformation on social media platforms tends to be reactive rather than proactive, it typically addresses misinformation only after it has been identified and spread. This reactive nature mirrors the broader challenges of fact-checking, which often lags behind the rapid dissemination of false claims. Moreover, most experimental tests of tagging have relied on novel or fabricated claims in which participants have little prior investment. A more natural test would involve assessing truth judgments of ongoing false claims that extend from recognizable or existing beliefs and narratives. The present research experimentally tests the efficacy of Twitter’s “disputed” tag as a form of soft moderation to reduce belief in timely, real-world, and widely propagated misinformation that aligns or conflicts with partisans’ beliefs. Specifically, we considered the special case of Donald Trump’s false claims about election fraud following the 2020 U.S. presidential election, where people very likely have strong pre-existing beliefs (Blanchar & Norris, 2021).

Evidence surrounding the use of misinformation tags on social media posts largely supports their efficacy in reducing belief and sharing (Koch et al., 2023; Martel & Rand, 2023; Mena, 2020). However, these effects are relatively small and depend on tag precision (Martel & Rand, 2023). For instance, Clayton et al. (2020) observed that, whereas tagging fake news headlines as “disputed” slightly reduced their perceived accuracy compared to a control condition, this approach was less effective than tagging fake headlines as “rated false.” Tagging posts as “false” is clearer than tagging them as “disputed,” with the latter possibly implying mixed evidence and/or legitimate disagreement. Pennycook et al. (2020) reported similar findings but also found that these tags slightly increased the perceived accuracy of other fake but non-tagged news headlines presented alongside tagged headlines. The presence of warnings on some but not other information may lead people to infer that anything not tagged is probably accurate.

Although major reviews indicate that corrections are generally effective at reducing misinformation (Chan et al., 2017; Porter & Wood, 2024), some scholars have suggested the possibility of “backfire effects,” where corrective information may arouse cognitive dissonance—an uncomfortable psychological tension from holding incompatible thoughts or beliefs—leading people to double down on their initial beliefs instead of changing them (Nyhan & Reifler, 2010; Nyhan et al., 2013; see also Festinger et al., 1956). Nevertheless, these effects are quite rare, and many subsequent tests have yielded contradictory evidence (Haglin, 2017; Lewandowsky et al., 2020; Nyhan et al., 2020; Wood & Porter, 2019). Even so, attempts to correct or dispute misinformation sometimes do fail. For instance, people’s beliefs tend to persevere despite being discredited by new information (Anderson, 1995; Ecker & Ang, 2019; Ecker et al., 2022; Ross et al., 1975; Thorson, 2016). Sharevski et al. (2022) reported evidence that tagging Twitter posts for vaccine misinformation during the COVID-19 pandemic failed to change people’s belief in the discredited information; however, interstitial covers that obscured misleading tweets until they were clicked on did effectively reduce the perceived accuracy of misinformation. People also dislike feeling that they are being told what to do, think, or say (Brehm, 1966), and thus attempts to dispute misinformation may spur reactance and paradoxically increase its exposure and prevalence (Ma et al., 2019; Wicklund, 1974). Oeldorf‐Hirsch and colleagues (2020) found that “disputed” tags did not influence the perceived credibility of inaccurate news articles and internet memes. Additionally, DeVerna et al. (2024) employed supervised machine-learning techniques to analyze over 430,000 tweets and found that after official corrections, the spread of false rumors decreased among political liberals but increased among political conservatives.
Collectively, these findings suggest that although corrective measures can be effective, their success may depend on how they are implemented and the context in which they are received.

The role of political knowledge

An important factor that may moderate how people process or react to soft moderation tags attached to election misinformation is their level of political knowledge. Lodge and Taber (2013) argue that political knowledge affords partisans greater opportunity to effectively discount or counterargue against information that challenges their beliefs and to reach conclusions congenial to their political identity. Consistently, partisans who score higher in political knowledge demonstrate greater skepticism of counter-attitudinal information and more polarized attitudes following mixed evidence compared to their less knowledgeable peers (Taber et al., 2009; Taber & Lodge, 2006). Moreover, Nyhan et al. (2013) found that challenging Sarah Palin’s “death panel” claims was counterproductive, yielding stronger belief for Palin’s supporters with high political knowledge (for similar findings, see Williams Kirkpatrick, 2021).

Another possibility follows a Bayesian account. People may be updating their beliefs based on the strength and reliability of new information, but their prior beliefs, which tend to be associated with their politics or group membership, play a significant role in this process (Jern et al., 2014; Tappin et al., 2021). From this perspective, individuals with greater political knowledge may possess stronger pre-existing beliefs that are more resistant to change, or they may have pertinent prior beliefs that influence the way new corrective information is integrated with their existing beliefs. Cognitive sophistication, including political knowledge, has been shown to increase the effect of partisans’ pre-existing beliefs on their subsequent reasoning (Flynn et al., 2017; Tappin et al., 2021; see also Kahan, 2013). Pennycook and Rand (2021), for example, observed that false beliefs about election fraud and Trump as the winner of the 2020 U.S. presidential election were positively correlated with political knowledge among Trump voters and negatively correlated with political knowledge among Biden voters.

The totality of this work suggests that explicit attempts to dispute misinformation may be likely to fail for partisans higher in political knowledge. Hence, we explored whether political knowledge would moderate the effect of Twitter’s “disputed” tags on Trump voters’ judgments of election misinformation. Verbal ability was measured as a control variable to rule out political knowledge as general cognitive ability. We found that “disputed” tags were generally ineffective at curbing election misinformation among Trump voters. Ironically, these tags may have slightly increased belief in misinformation for those Trump voters with high political knowledge. Additionally, Trump voters who were initially skeptical of mass election fraud were more likely to perceive Donald Trump’s misinformation as truthful when exposed to the disputed tag compared to the control condition. Although Biden voters were unaffected by the inclusion of “disputed” tags, third-party and non-voters were marginally less likely to believe election misinformation in the “disputed” tag condition compared to the control. It is important to note that we did not anticipate that soft moderation “disputed” tags would be counterproductive, or “backfire,” for Trump voters with high political knowledge. Our expectation was that political knowledge would diminish or eliminate the effectiveness of “disputed” tags. We emphasize caution with this particular finding. Consequently, we are more confident in concluding that the effectiveness of “disputed” tags decreased as political knowledge increased among Trump voters.

Limitations and considerations

Our sample consisted of 1,078 adults living in the United States recruited via CloudResearch’s online platform. Although CloudResearch is known to attract highly attentive and engaged participants, its samples are less representative compared to other online participant-sourcing platforms (Stagnaro et al., 2024). It is conceivable that our sample of Trump voters high in political knowledge may possess distinct characteristics, potentially skewing the sample’s representation away from the broader population of similarly informed Trump supporters. This limitation warrants caution when generalizing our findings, as does the specific context of Trump’s false claims surrounding the 2020 U.S. presidential election. This was an unprecedented event in American history, marked by the sitting President’s refusal to concede and repeated assertions of widespread voter fraud. It remains unclear whether similar responses would occur in the context of other, less consequential, divisive, and pervasive instances of misinformation.

Additionally, our analyses focused on a relatively smaller number of Trump voters than Biden voters. Participants were recruited more than a month before the election for a larger longitudinal project, making it difficult to deliberately oversample Trump voters in hindsight. Experimental tests of the effectiveness of “disputed” tags among individuals with varying levels of political knowledge further sliced our sample size of Trump voters. We emphasize caution and reiterate that our finding is more robust regarding the ineffectiveness of “disputed” tags for Trump voters and the diminishing effectiveness of these tags as their political knowledge increases. There should be less confidence in the notion that these tags are counterproductive (or “backfire”) per se. Furthermore, we cannot definitively distinguish between potential mechanisms such as cognitive dissonance, psychological reactance, or Bayesian updating. These limitations highlight opportunities for confirmatory tests in future research.

Findings

Manipulation check and analysis strategy

Twelve participants reported conflicting voting decisions between survey waves and were excluded from all analyses. One hundred four participants failed the attention manipulation check by incorrectly indicating that a disputed tag about misinformation was present in the control condition (n = 57) or not present in the disputed tag condition (n = 47). There was no difference in the pass/fail rate between conditions, χ²(n = 1,078) = 0.79, p = .374, and our findings remained consistent irrespective of whether those failing the attention check were excluded from analyses. Hence, we report analyses with these participants included (N = 1,078: 290 Trump voters, 673 Biden voters, and 115 third-party/non-voters) to better simulate the effectiveness of disputed tags on the judged truthfulness of election misinformation. Because distributions of truth judgments varied drastically by voter group (see Figure 1), we separated analyses by voting. We fit linear mixed models with random intercepts for participants and tweets to examine truth judgments of Trump’s false claims about election fraud (four observations per participant, one per tweet) using the lme4 and lmerTest packages in R (Bates et al., 2015; Kuznetsova et al., 2017).1 That is, we used a statistical technique that accounts for individual differences between participants and the specific tweets they rated, so that any patterns we found were not simply due to one unusual person or tweet. Voter-specific models included social media condition (-0.5 = control, 0.5 = disputed tag), political knowledge (mean-centered), and their interaction as fixed-effects predictors.
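The modeling setup above can be sketched in code. The authors fit these models in R with lme4/lmerTest; the following is a rough Python analogue using statsmodels' variance-components formulation of crossed random intercepts (participants and tweets), with simulated toy data standing in for the real ratings. The variable names and simulated effect sizes are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_part, n_tweet = 30, 4

# Long format: each participant rates all four tweets
part = np.repeat(np.arange(n_part), n_tweet)
tweet = np.tile(np.arange(n_tweet), n_part)

cond = np.where(part % 2 == 0, -0.5, 0.5)  # between-subjects: -0.5 control, +0.5 disputed tag
know = rng.normal(0.0, 2.0, n_part)[part]
know_c = know - know.mean()                # mean-centered political knowledge

# Toy outcome with random intercepts for participants and tweets
u_part = rng.normal(0.0, 0.8, n_part)[part]
u_tweet = rng.normal(0.0, 0.3, n_tweet)[tweet]
truth = (3.5 + 0.16 * cond + 0.04 * know_c + 0.20 * cond * know_c
         + u_part + u_tweet + rng.normal(0.0, 0.7, n_part * n_tweet))

df = pd.DataFrame({"truth": truth, "condition": cond, "knowledge_c": know_c,
                   "participant": part, "tweet": tweet, "group": 1})

# Crossed random intercepts via variance components over a single dummy group
model = smf.mixedlm(
    "truth ~ condition * knowledge_c",
    data=df,
    groups="group",
    re_formula="0",
    vc_formula={"participant": "0 + C(participant)", "tweet": "0 + C(tweet)"},
)
result = model.fit()
print(result.fe_params)  # Intercept, condition, knowledge_c, condition:knowledge_c
```

The lme4 equivalent would be `truth ~ condition * knowledge_c + (1 | participant) + (1 | tweet)`; the single-group variance-components trick is how statsmodels expresses the same crossed structure.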

Figure 1. Density plot of truth judgments as a function of voter group.

Finding 1: Overall, “disputed” tags were ineffective at curbing misinformation among Trump voters. Trump voters with high political knowledge judged Donald Trump’s election misinformation as more truthful when his posts included disputed tags compared to the control condition.

Illustrated in Figure 2, the model with Trump voters yielded a significant interaction between moderation tag condition and political knowledge, b = 0.20, SE = .09, t = 2.18, p = .030, but no main effects of condition, b = 0.16, SE = .19, t = 0.81, p = .420, or political knowledge, b = 0.04, SE = .04, t = 0.94, p = .347. Belief in election misinformation increased with political knowledge in the disputed tag condition, b = 0.14, SE = .06, t = 2.30, p = .022, 95% CI [0.02, 0.26], and was unrelated to political knowledge in the control condition, b = -0.06, SE = .07, t = 0.85, p = .398, 95% CI [-0.18, 0.07]. Moreover, whereas Trump voters high in political knowledge (+1 SD) reported marginally stronger belief in Trump’s election fraud claims in the disputed tag condition relative to the control condition, b = 0.58, SE = .27, Bonferroni adjusted p = .069, 95% CI [0.04, 1.12], those with low political knowledge (-1 SD) were unaffected by social media condition, b = -0.27, SE = .27, Bonferroni adjusted p = .662, 95% CI [-0.81, 0.27]. In other words, Trump voters with high political knowledge (those in the top 18.1%, scoring above 9) found Trump’s election fraud misinformation to be somewhat more truthful when it carried a disputed tag than when it did not, a difference (d = 0.32) of roughly one-third of a standard deviation in truthfulness ratings. Controlling for verbal ability did not change these results, and the condition by political knowledge interaction remained significant, b = 0.19, SE = .09, t = 2.13, p = .034.
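Because condition is coded -0.5/+0.5 and knowledge is mean-centered, the simple slopes above follow arithmetically from the fixed-effect coefficients. A minimal sketch of that arithmetic, assuming a political-knowledge SD of about 2.1 (inferred from the reported values, not stated explicitly in the text):

```python
# Fixed-effect estimates reported for Trump voters (condition coded -0.5 = control,
# +0.5 = disputed tag; political knowledge mean-centered)
b_cond, b_know, b_inter = 0.16, 0.04, 0.20

def knowledge_slope(cond_code):
    """Simple slope of political knowledge within one condition."""
    return b_know + b_inter * cond_code

slope_disputed = knowledge_slope(+0.5)  # reported: b = 0.14
slope_control = knowledge_slope(-0.5)   # reported: b = -0.06

# SD of political knowledge implied by the reported simple effects (assumption)
SD_KNOW = 2.1

def condition_effect(z):
    """Effect of the disputed tag at z SDs above/below mean political knowledge."""
    return b_cond + b_inter * z * SD_KNOW

effect_high_know = condition_effect(+1)  # reported: b = 0.58
effect_low_know = condition_effect(-1)   # reported: b = -0.27 (here about -0.26, rounding)
```

The same decomposition explains why a non-significant condition main effect can coexist with a significant effect at +1 SD of knowledge: with this coding, the main effect is the condition difference at the knowledge mean, not overall.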

The models for Biden voters and third-party and non-voters revealed no significant interactions between condition and political knowledge, bs = 0.01 and -0.02, SEs = .04 and .13, ts = 0.29 and 0.14, ps = .771 and .888. However, we did observe a significant main effect of political knowledge for Biden voters, b = -0.11, SE = .02, t = 5.45, p < .001, and a marginal effect of social media condition for third-party and non-voters, b = -0.62, SE = .33, t = 1.89, p = .061. Biden voters were less likely to believe Trump’s fraud claims as their political knowledge increased, and third-party and non-voters tended to believe Trump’s fraud claims marginally less in the disputed tag condition relative to the control condition. No other effects emerged, ts < 0.78, ps > .440.

Figure 2. Judged truthfulness of Donald Trump’s election fraud claims as a function of voter group, political knowledge, and moderation tag condition, 95% CIs.

Finding 2: Trump voters that were initially skeptical of mass election fraud were more likely to perceive Donald Trump’s misinformation as truthful in the disputed tag condition compared to the control.

We considered pre-existing belief in voter fraud favoring Biden (vs. Trump) as a predictor in voter-specific models along with moderation tag condition. The model for Trump voters produced a main effect of pre-existing fraud belief, b = 0.70, SE = .07, t = 10.51, p < .001, with stronger belief predicting greater susceptibility to misinformation, and a significant interaction between pre-existing belief and social media condition, b = -0.34, SE = .13, t = 2.57, p = .011. Illustrated in Figure 3, Trump voters relatively skeptical of election fraud benefiting Biden (-1 SD) rated Trump’s claims as more truthful in the disputed tag condition compared to the control condition, b = 0.59, SE = .23, Bonferroni adjusted p = .024, 95% CI [0.13, 1.05]. Conversely, Trump voters who fully (max.) endorsed belief in election fraud were unaffected by the disputed tag manipulation, b = -0.12, SE = .20, Bonferroni adjusted p = 1.000, 95% CI [-0.50, 0.27]. Said differently, Trump voters with minimal to no belief in mass voter fraud benefitting Biden in the 2020 U.S. presidential election (those in the bottom 14.1% of pre-existing belief, scoring less than 1 on the response scale) rated Trump’s election fraud misinformation as moderately more truthful when it included a “disputed” tag than when it did not, a difference (d = 0.36) of roughly one-third of a standard deviation in truthfulness ratings.

The model for third-party and non-voters produced main effects of condition, b = -0.43, SE = .21, t = 2.06, p = .042, and existing fraud belief, b = 1.06, SE = .09, t = 11.86, p < .001, but no interaction, b = -0.15, SE = .18, t = 0.84, p = .403. Those in the disputed tag condition were less likely to judge Trump’s claims as true and pre-existing fraud belief positively predicted susceptibility to misinformation (see Figure 3). Conversely, no significant effects emerged in the model with Biden voters, ts < 1.54, ps > .125. The similar pattern observed among Trump voters and third-party/non-voters, where pre-existing belief in election fraud predicted perceived truthfulness of election misinformation, seems to stem from ample variance in the beliefs and judgments within these groups. In contrast, Biden voters nearly unanimously rejected the notion of election fraud and consistently judged Trump’s claims as false; this lack of variance among Biden voters prevented pre-existing beliefs from predicting their subsequent truth judgments.

Figure 3. Judged truthfulness of Donald Trump’s election fraud claims as a function of voter group, pre-existing belief in widespread election fraud, and moderation tag condition, 95% CIs.

Finding 3: Disputed tags failed to meaningfully change pre-existing beliefs about election fraud or fairness.

There was little to no evidence that attaching disputed tags to Trump’s tweets meaningfully changed participants’ pre-existing beliefs about election fraud or fairness, especially for the key target audience of Trump voters (see Appendices A and B).

Methods

A sample of American adults was recruited via CloudResearch’s participant-sourcing platform (Litman et al., 2017) for a longitudinal study on the 2020 U.S. presidential election. Our target sample was set to at least 1,500 participants for the first wave of data collection to achieve a large sample, with the expectation of attrition across three subsequent waves administered in three-week intervals. Totals of 1,556, 1,247, 1,163, and 1,131 respondents completed Wave 1 (October 6–10, 2020), Wave 2 (October 27–31, 2020), Wave 3 (November 17–21, 2020), and Wave 4 (December 8–12, 2020), respectively. Participants’ residences included 49 U.S. states and the District of Columbia (for sample demographics, see supplemental materials on Open Science Framework [OSF]). The present experiment was administered in Wave 4 but included individual difference measures from Wave 1. Measures in Waves 2 and 3 did not concern the present research. After data cleaning procedures to identify duplicate and non-U.S. IP addresses (n = 40; Waggoner et al., 2019) and low-quality data via responses to eight open-ended questions (n = 155; for details, see supplemental materials on OSF), our final sample for data analyses included 1,078 participants. All data cleaning was completed prior to any analyses.

Tweet stimuli, disputed tags, and truth judgments

Participants reported how they voted in the 2020 U.S. presidential election between Donald Trump, Joe Biden, Jo Jorgensen or other third-party candidates, and not voting. In Wave 4, they rated the truthfulness of four representative tweets from Donald Trump falsely claiming instances of election fraud. Trump’s tweets were selected as stimuli based on the following criteria: they made specific false claims about election fraud, covered distinct (supposed) events, and did not include images. Depicted in Figure 4, all four tweets included Twitter’s disputed tag (“This claim about election fraud is disputed”) or no additional information (control). Participants were told they would be presented with “actual Tweets made by President Trump” and instructed to “read each Tweet and indicate the extent to which you believe his statement is true or false.” Truth judgments (“Do you believe this statement to be true or false?”) were provided using 7-pt response scales (1 = extremely false, 7 = extremely true). An attentional manipulation check asked if participants recalled whether Trump’s tweets included a tag disputing his claims (“yes,” “no,” or “I can’t remember”). Before and after rating the truthfulness of Trump’s claims, they also indicated their perceptions of voter fraud (“To what extent do you think voter fraud contributed to the results of the 2020 U.S. presidential election?” [-3 = strongly benefited Donald Trump, +3 = strongly benefited Joe Biden]) and election fairness (“As far as you know, do you think the 2020 U.S. presidential election was a free and fair election?” [1 = definitely not, 5 = definitely yes]).

Figure 4. Example of Trump tweets claiming election fraud with a disputed tag or no additional information.

Political knowledge and verbal ability

Adapted from Taber and Lodge (2006), political knowledge was measured using 10 factual questions about American politics (e.g., “Do you happen to know what job or political office is now held by John Roberts? What is it?”, “How much of a majority is required for the U.S. Senate and House to override a presidential veto?”). Political knowledge scores were computed by summing the number of correct responses. To measure verbal ability, participants completed the WordSum Test (Huang & Hauser, 1998). This 10-item vocabulary test is adapted from the Wechsler Adult Intelligence Scale and strongly correlates with general factor intelligence (Wolfle, 1980). Verbal ability scores were computed by summing the number of correct responses. Both measures were administered before the experiment in Wave 1.
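Scoring for both measures is a simple count of correct answers, which then feeds into the models as a mean-centered predictor. A minimal sketch with a hypothetical answer key (the actual items are the Taber & Lodge, 2006, political-knowledge questions and the WordSum vocabulary items described above):

```python
import numpy as np

# Hypothetical 10-item answer key; stands in for the real quiz items
KEY = ["a", "c", "b", "d", "a", "b", "c", "d", "a", "b"]

def knowledge_score(responses, key=KEY):
    """Score = number of correct answers (0-10); same logic applies to WordSum."""
    return sum(r == k for r, k in zip(responses, key))

responses = [
    ["a", "c", "b", "d", "a", "b", "c", "d", "a", "b"],  # 10 correct
    ["a", "c", "x", "x", "a", "x", "x", "x", "x", "x"],  # 3 correct
]
scores = np.array([knowledge_score(r) for r in responses])
knowledge_c = scores - scores.mean()  # mean-centered predictor used in the models
```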

Cite this Essay

Blanchar, J. C., & Norris, C. J. (2024). Trump, Twitter, and truth judgments: The effects of “disputed” tags and political knowledge on the judged truthfulness of election misinformation. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-157


Bibliography

Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01

Blanchar, J. C., & Norris, C. J. (2021). Political homophily, bifurcated social reality, and perceived legitimacy of the 2020 U.S. presidential election: A four-wave longitudinal study. Analyses of Social Issues and Public Policy, 21(1), 259–283. https://doi.org/10.1111/asap.12276

Brehm, J. W. (1966). A theory of psychological reactance. Academic Press.

Chan, M. S., Jones, C. R., Hall Jamieson, K., & Albarracín, D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, 28(11), 1531–1546. https://doi.org/10.1177/0956797617714579

Clayton, K., Blair, S., Busam, J. A., Forstner, S., Glance, J., Green, G., Kawata, A., Kovvuri, A., Martin, J., Morgan, E., Sandhu, M., Sang, R., Scholz-Bright, R., Welch, A. T., Wolff, A. G., Zhou, A., & Nyhan, B. (2020). Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Political Behavior, 42(4), 1073–1095. https://doi.org/10.1007/s11109-019-09533-0

DeVerna, M. R., Guess, A. M., Berinsky, A. J., Tucker, J. A., & Jost, J. T. (2024). Rumors in retweet: Ideological asymmetry in the failure to correct misinformation. Personality and Social Psychology Bulletin, 50(1), 3–17. https://doi.org/10.1177/01461672221114222

Ecker, U. K. H., & Ang, L. C. (2019). Political attitudes and the processing of misinformation corrections. Political Psychology, 40(2), 241–260. https://doi.org/10.1111/pops.12494

Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., Kendeou, P., Vraga, E. K., & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13–29. https://doi.org/10.1038/s44159-021-00006-y

Festinger, L., Riecken, H. W., & Schachter, S. (1956). When prophecy fails. University of Minnesota Press.

Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38(S1), 127–150. https://doi.org/10.1111/pops.12394

Graham, M., & Rodriguez, S. (2020, November 4). Twitter and Facebook race to label a slew of posts making false election claims before all votes counted. CNBC. https://www.cnbc.com/2020/11/04/twitter-and-facebook-label-trump-posts-claiming-election-stolen.html

Haglin, K. (2017). The limitations of the backfire effect. Research & Politics, 4(3). https://doi.org/10.1177/2053168017716547

Huang, M., & Hauser, R. M. (1998). Trends in black-white test score differentials: II. The WORDSUM Vocabulary Test. In U. Neisser (Ed.), The rising curve: Long-term gains in IQ and related measures. American Psychological Association. https://doi.org/10.1037/10270-011

Ipsos. (2020, November 18). Most Americans agree Joe Biden is rightful winner of 2020 election: Latest Reuters/Ipsos poll shows most approve of how Joe Biden is handling his position of President-elect [Press release]. https://www.ipsos.com/sites/default/files/ct/news/documents/2020-11/topline_reuters_post_election_survey_11_18_2020.pdf

Jern, A., Chang, K. M. K., & Kemp, C. (2014). Belief polarization is not always irrational. Psychological Review, 121(2), 206–224. https://doi.org/10.1037/a0035941

Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making, 8(4), 407–424. https://doi.org/10.1017/S1930297500005271

Koch, T. K., Frischlich, L., & Lermer, E. (2023). Effects of fact-checking warning labels and social endorsement cues on climate change fake news credibility and engagement on social media. Journal of Applied Social Psychology, 53(6), 495–507. https://doi.org/10.1111/jasp.12959

Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. (2017). lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82(13), 1–26. https://doi.org/10.18637/jss.v082.i13

Lewandowsky, S., Cook, J., Ecker, U., Albarracín, D., Kendeou, P., Newman, E. J., Pennycook, G., Porter, E., Rand, D. G., Rapp, D. N., Reifler, J., Roozenbeek, J., Schmid, P., Seifert, C. M., Sinatra, G. M., Swire-Thompson, B., van der Linden, S., Wood, T. J., & Zaragoza, M. S. (2020). The debunking handbook 2020. DigitalCommons@University of Nebraska. https://digitalcommons.unl.edu/scholcom/245

Litman, L., Robinson, J., & Abberbock, T. (2017). TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49(2), 433–442. https://doi.org/10.3758/s13428-016-0727-z

Lodge, M., & Taber, C. S. (2013). The rationalizing voter. Cambridge University Press.

Ma, Y., Dixon, G., & Hmielowski, J. D. (2019). Psychological reactance from reading basic facts on climate change: The role of prior views and political identification. Environmental Communication, 13(1), 71–86. https://doi.org/10.1080/17524032.2018.1548369

Martel, C., & Rand, D. G. (2023). Misinformation warning labels are widely effective: A review of warning effects and their moderating features. Current Opinion in Psychology, 54, Article 101710. https://doi.org/10.1016/j.copsyc.2023.101710

Mena, P. (2020). Cleaning up social media: The effect of warning labels on likelihood of sharing false news on Facebook. Policy & Internet, 12(2), 165–183. https://doi.org/10.1002/poi3.214

Nyhan, B., Porter, E., Reifler, J., & Wood, T. J. (2020). Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Political Behavior, 42(3), 939–960. https://doi.org/10.1007/s11109-019-09528-x

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330. https://doi.org/10.1007/s11109-010-9112-2

Nyhan, B., Reifler, J., & Ubel, P. A. (2013). The hazards of correcting myths about health care reform. Medical Care, 51(2), 127–132. https://doi.org/10.1097/MLR.0b013e318279486b

Oeldorf‐Hirsch, A., Schmierbach, M., Appelman, A., & Boyle, M. P. (2020). The ineffectiveness of fact‐checking labels on news memes and articles. Mass Communication and Society, 23(5), 682–704. https://doi.org/10.1080/15205436.2020.1733613

Pennycook, G., Bear, A., Collins, E. T., & Rand, D. G. (2020). The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Management Science, 66(11), 4944–4957. https://doi.org/10.1287/mnsc.2019.3478

Pennycook, G., & Rand, D. G. (2021). Research note: Examining false beliefs about voter fraud in the wake of the 2020 presidential election. Harvard Kennedy School (HKS) Misinformation Review, 2(1). https://doi.org/10.37016/mr-2020-51

Pew Research Center. (2021). Biden begins presidency with positive ratings; Trump departs with lowest-ever job mark [Report]. https://www.pewresearch.org/politics/2021/01/15/biden-begins-presidency-with-positive-ratings-trump-departs-with-lowest-ever-job-mark/

Porter, E., & Wood, T. J. (2024). Factual corrections: Concerns and current evidence. Current Opinion in Psychology, 55, Article 101715. https://doi.org/10.1016/j.copsyc.2023.101715

Ross, L., Lepper, M. R., & Hubbard, M. (1975). Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm. Journal of Personality and Social Psychology, 32(5), 880–892. https://doi.org/10.1037/0022-3514.32.5.880

Sharevski, F., Alsaadi, R., Jachim, P., & Pieroni, E. (2022). Misinformation warnings: Twitter’s soft moderation effects on COVID-19 vaccine belief echoes. Computers & Security, 114, Article 102577. https://doi.org/10.1016/j.cose.2021.102577

Stagnaro, M. N., Druckman, J., Berinsky, A. J., Arechar, A. A., Willer, R., & Rand, D. G. (2024, April 24). Representativeness versus response quality: Assessing nine opt-in online survey samples. PsyArXiv. https://doi.org/10.31234/osf.io/h9j2d

Taber, C. S., Cann, D., & Kucsova, S. (2009). The motivated processing of political arguments. Political Behavior, 31(2), 137–155. https://doi.org/10.1007/s11109-008-9075-8

Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769. https://doi.org/10.1111/j.1540-5907.2006.00214.x

Tappin, B. M., Pennycook, G., & Rand, D. G. (2021). Rethinking the link between cognitive sophistication and politically motivated reasoning. Journal of Experimental Psychology: General, 150(6), 1095–1114. https://doi.org/10.1037/xge0000974

Thorson, E. (2016). Belief echoes: The persistent effects of corrected misinformation. Political Communication, 33(3), 460–480. https://doi.org/10.1080/10584609.2015.1102187

Timm, J. C. (2020, November 4). With states still counting, Trump falsely claims he won. NBC News. https://www.nbcnews.com/politics/2020-election/10-states-still-counting-millions-votes-trump-falsely-claims-he-n1246336

Waggoner, P. D., Kennedy, R., & Clifford, S. (2019). Detecting fraud in online surveys by tracing, scoring, and visualizing IP addresses. Journal of Open Source Software, 4(37), 1285. https://doi.org/10.21105/joss.01285

Wicklund, R. A. (1974). Freedom and reactance. Lawrence Erlbaum.

Williams Kirkpatrick, A. (2021). The spread of fake science: Lexical concreteness, proximity, misinformation sharing, and the moderating role of subjective knowledge. Public Understanding of Science, 30(1), 55–74. https://doi.org/10.1177/0963662520966165

Wolfle, L. M. (1980). The enduring effects of education on verbal skills. Sociology of Education, 53(2), 104–114. https://doi.org/10.2307/2112492

Wood, T., & Porter, E. (2019). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior, 41(1), 135–163. https://doi.org/10.1007/s11109-018-9443-y

Zengerle, P., Landay, J., & Morgan, D. (2021, January 6). Under heavy guard, Congress back to work after Trump supporters storm U.S. Capitol. Reuters. https://www.reuters.com/article/us-usa-election/under-heavy-guard-congress-back-to-work-after-trump-supporters-storm-u-s-capitol-idUSKBN29B2PU

Ziemer, C.-T., & Rothmund, T. (2024). Psychological underpinnings of misinformation countermeasures: A systematic scoping review. Journal of Media Psychology: Theories, Methods, and Applications. https://doi.org/10.1027/1864-1105/a000407

Funding

This work was supported by the Eugene M. Lang Center for Civic and Social Responsibility at Swarthmore College.

Competing Interests

The authors declare no competing interests.

Ethics

This research received ethics approval from Swarthmore College’s Institutional Review Board.

Copyright

This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.

Data Availability

All materials needed to replicate this study are available via the Harvard Dataverse: https://doi.org/10.7910/DVN/YWYS42 and the Open Science Framework: https://osf.io/vnft5/?view_only=567fea549d6b49c2ab9cc6b68ed97b3a

Are noncitizens really voting in US elections?


Kristopher Radder/The Brattleboro Reformer/AP
A resident of Chesterfield, New Hampshire, fills out his ballot during the state primary election on Sept. 10, 2024.

By Christa Case Bryant Staff writer
Sophie Hills Staff writer
CHRISTIAN SCIENCE MONITOR
Sept. 11, 2024


With illegal immigration one of the top issues on voters’ minds heading into the 2024 election, Republicans are making a nationwide push to require proof of citizenship in order to vote. The GOP-run House of Representatives passed a bill that would do just that, the SAVE Act, in July – with support from five Democrats.

Former President Donald Trump has also repeatedly urged such measures, including in Tuesday night’s debate, alleging that his opponents are irresponsibly encouraging undocumented immigrants to vote. “A lot of these illegal immigrants coming in, they’re trying to get them to vote, they can’t even speak English, they don’t even know what country they’re in practically, and these people are trying to get them to vote,” he said.

Now Speaker Mike Johnson is saying that unless the House and Senate agree to the SAVE Act, he’ll shut down the government when the fiscal year ends Sept. 30 – though it appears he lacks the support within his own party to do so.

Why We Wrote This

Concerned about voter fraud and illegal immigration, Republicans are pushing to require proof of citizenship to vote. Critics say noncitizen voting is already illegal and rare, but it’s hard to prove a negative. Here’s what is known – and unknown.

But Democrats, citing a lack of documented cases of noncitizen voting, say the law is unnecessary since it’s already illegal for noncitizens to vote. Moreover, they argue, it would result in disqualifying eligible voters. They accuse Republicans, including former President Donald Trump, of pushing this issue to lay the groundwork for claiming the election was stolen if they lose in November.
Is proof of citizenship currently required to vote?

The short answer: citizenship is required in federal elections, but proof of citizenship generally is not, although some voters may provide it while establishing their identity and residency.

Sixteen municipalities allow noncitizens to vote in local elections, according to Ballotpedia. But elsewhere there’s pushback to the idea. Amendments to bar noncitizen voting are on the ballot this fall in eight states: Idaho, Iowa, Kentucky, Missouri, North Carolina, Oklahoma, South Carolina, and Wisconsin.

Only citizens are allowed to vote in federal elections, however, according to U.S. law. On the federal voter registration form, which is accepted by all states except New Hampshire and Wisconsin, prospective voters must check a box affirming under penalty of perjury that they are a U.S. citizen. The form does not require accompanying proof of citizenship, but warns applicants that anyone who provides false information may be fined, imprisoned, or – if a noncitizen – deported.

Many voters demonstrate citizenship status when obtaining certain forms of identification, but state rules vary. Among those with stricter rules:
New Hampshire: Requires proof of citizenship to register and vote. Currently, if a voter does not have such proof, he or she can sign an affidavit, under oath, before an election official.
Arizona: Requires proof of citizenship to register via the state’s registration form. However, after numerous legal challenges to the 2022 law requiring proof of citizenship, applicants using the federal form can vote in federal elections (but not state and local elections) without such proof.
Texas: Requires state court clerks to notify the secretary of state of those excused or disqualified from jury duty for not being a U.S. citizen. If those people don’t provide proof of citizenship within 30 days of being notified, their voter registration can be canceled.
Georgia: May cross-check voters’ Social Security numbers against state databases.
How much evidence is there that noncitizens register and vote?

There are very few proven instances of noncitizens registering to vote, and even fewer of them actually voting. Instances of unauthorized immigrants voting are “so rare as to be statistically nonexistent,” says Aaron Reichlin-Melnick, senior fellow at the American Immigration Council.

A 2016 study by the Brennan Center for Justice, which involved interviewing local election officials in 42 jurisdictions – including 8 of the 10 with the largest noncitizen populations – found that only an estimated 30 incidents of suspected noncitizen voting among 23.5 million votes cast were referred for further investigation or prosecution. However, the interviews relied on officials’ estimates rather than hard records, and did not ask how officials evaluated eligibility or determined whether to refer cases to prosecution.
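To give a rough sense of scale, the Brennan Center figures quoted above work out to a vanishingly small rate. This Python sketch simply makes the arithmetic explicit, using the two numbers as reported in the study:

```python
# Suspected noncitizen-voting incidents referred for investigation or
# prosecution, per the 2016 Brennan Center study quoted above.
suspected_incidents = 30
votes_cast = 23_500_000

rate = suspected_incidents / votes_cast
per_million = rate * 1_000_000

print(f"{rate:.7%} of votes cast, roughly {per_million:.1f} per million")
```

That is, even if every referred case were a genuine violation, it would amount to just over one vote in a million.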

Even the conservative Heritage Foundation, which maintains an extensive database of documented voter fraud, has identified just a few dozen prosecutions of noncitizens registering to vote or actually voting over the past 20 years. Hans von Spakovsky, who heads the think tank’s Election Law Reform Initiative, says that’s because of both the lack of mechanisms to verify citizenship and the lack of prosecutions.


Jonathan Drake/Reuters
A precinct official performs logic and accuracy testing on voting machines ahead of the coming general election, at Wake County Board of Elections headquarters in Raleigh, North Carolina, Sept. 5, 2024.

He cites, for example, a U.S. House Oversight investigation into a California race 20 years ago, which brought to light evidence of 624 votes cast illegally by noncitizens. Ultimately, the California secretary of state did not prosecute any of the cases, determining that each person had registered in error, not with intent to register illegally.

But efforts by a handful of GOP secretaries of state have not turned up evidence of widespread noncitizen voting. They have flagged thousands of potential noncitizens who tried to register as voters – and in some cases succeeded – though only a fraction actually voted. The states, in announcing the findings below, did not disclose how they verified lack of citizenship. One option is cross-checking registered voters against the U.S. Citizenship and Immigration Services’ Systematic Alien Verification for Entitlements (SAVE) database, which verifies immigration status.
Alabama: The state has identified 3,200 noncitizens registered to vote since 2023, though it said it’s possible that some are now naturalized and eligible to vote.
Georgia: A 2022 review in the state, which has a process for checking citizenship, found that election officials had flagged and prevented about 1,600 registration applications by potential noncitizens over the past 25 years.
Ohio: This year, Ohio identified 138 noncitizens “who appear to have cast a ballot” by cross-referencing state and federal databases, including jury pool data.
Virginia: Gov. Glenn Youngkin removed 6,303 noncitizens from the state’s voter rolls over the past two years, based on data collected by the Department of Motor Vehicles and shared with the state Department of Elections.

In each of these states, votes cast by noncitizens account for a significantly smaller share of overall fraud than other categories, such as deceased voters, voters who have moved out of state, or voters who still have driver’s licenses from another state. In Texas, for example, of more than 1 million people removed from voter rolls since 2021, only 6,500 were potential noncitizens, and of those, only 1,900 had voted.

Many opponents of stricter proof-of-citizenship standards say noncitizens already have a huge disincentive to vote, risking their status in the country or their ability to naturalize. However, some instances are unintentional, and often the penalties are minimal. For example, in the nearly four dozen cases Heritage documented, only one noncitizen was referred to ICE for deportation. Many did not serve any jail time, and apart from two men who faced a raft of other charges, the highest fine any paid was $2,500.

What are the counter concerns about voting rights?

Some Democrats have warned that measures like additional updates for voter rolls or requiring proof of citizenship could result in voter suppression. Proof-of-citizenship requirements act as an unnecessary hurdle for eligible voters, they say, often pointing to surveys that show that about 1 in 10 Americans don’t have ready access to documents like birth certificates or passports. The Biden administration echoed those concerns in a July statement, adding that an additional risk is accidentally purging eligible voters from voter rolls.

Critics of proof-of-citizenship requirements point to Kansas and Arizona as case studies showing that the number of eligible voters removed from the rolls can far exceed the number of ineligible voters identified.

In Kansas in 2020, a federal appeals court struck down the state’s 2011 law requiring documentary proof of citizenship to register to vote. The court’s decision cited 31,089 voters whose applications were canceled or suspended in the state, while finding evidence that only 67 noncitizens had attempted to register, and that 39 had succeeded, between 1999 and 2013, when the law took effect.
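To put the Kansas figures in proportion, a quick back-of-the-envelope calculation (Python; the numbers are those quoted from the court's decision) gives the ratio of canceled or suspended applications to confirmed noncitizen registrations:

```python
# Figures cited in the appeals court's 2020 decision, as quoted above.
canceled_or_suspended = 31_089   # voter applications canceled or suspended
noncitizens_registered = 39      # noncitizens who successfully registered

ratio = canceled_or_suspended / noncitizens_registered
print(f"about {ratio:.0f} canceled or suspended applications "
      f"per confirmed noncitizen registration")
```

On these numbers, roughly eight hundred applications were canceled or suspended for every confirmed noncitizen registration.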

The heat is on: Indigenous led nonprofit seeks to lower heating expenses with solar thermal

8th Fire Solar was formed in 2018 to produce and install solar thermal technologies. Unlike photovoltaic systems, which convert sunlight into electricity, solar thermal produces only heat.
Mathew Holding Eagle III | MPR News

 Getting to Green: Minnesota's energy future


A visitor will find 8th Fire Solar just behind a grove of trees in a remote part of northwestern Minnesota. It’s a small manufacturing facility owned and operated by the local community development group named Akiing. 

Program coordinator Gwekaanimad Gasco is White Earth Ojibwe and a member of the Little Traverse Bay Band of Odawa.  

“We’re in Osage, Minnesota, just on the boundary of the reservation,” Gasco said.

Leading a tour of the facility, Gasco said solar thermal technology is simple. Unlike photovoltaic systems, which convert sunlight into electricity, solar thermal uses the sun’s rays to warm liquid or air in the system, producing only heat.


“We use air here, so our system recirculates the air and drops it into the house where they need it. Could be a bedroom, could be a bathroom, living room,” he said.  

A fan pushes air from inside the house through the solar thermal collector, where it is heated before being redistributed back into the home through a warm-air outlet. A thermostat allows users to raise or lower the temperature. While effective, it’s not designed to be a home’s sole source of heat but rather a supplement to lower costs.

Another caveat is that none of the excess energy can be banked.    

“You’re getting a very simple system that you can install yourself that’ll last you 20, 30 years. No moving parts. You know at most the fan might burn out. You might have a wiring problem,” Gasco said. “You call us up, we’re in Osage, right down the block. We’ll come help you out. There’s a 10-year warranty on the panel in the system if we install it.” 

From left to right: Nicholas Bellrock, Jon Martin and Gwe Gasco outside 8th Fire Solar in front of solar thermal panels they helped install at their headquarters near Osage on Aug. 21.
Mathew Holding Eagle III | MPR News

‘Heat vs eat’

Most of the parts used by 8th Fire Solar are sourced in Minnesota. 

Gasco said producing solar thermal panels for impoverished communities is an important tool in the fight for energy sovereignty in tribal lands. He said money spent on energy can’t be spent on groceries.  

And when it is, consumers look to cheaper alternatives. 

“The heat-versus-eating dilemma is a dilemma that’s not just faced by tribal communities around here, but it’s nationwide,” he said. “We’re buying more processed stuff instead of going after the healthier stuff. So that’s directly related to the energy burden, poverty, tribal health. So, it’s all very connected.” 

Gasco said the idea behind solar thermal isn’t new. Pueblo nations in the southwest have been using it for centuries. Their adobe homes absorb the sun’s rays during the day and then keep them warm through the night.

According to Clean Energy Resource Teams, one solar air heater can reduce a family’s monthly heating cost by 30 percent.

“This balance with our environment is something that has been in our blood, in our DNA, you know. So, it only makes sense that we would be the ones to help light the way,” Gasco said. “And I hate to say that because there’s a lot of people of color, there’s a lot of great work being done by a lot of allies, a lot of friends. And it’s never just going to be us at this point. They call it the melting pot for a reason.” 

The company says solar thermal can reduce a household’s carbon emissions by 20 to 40 percent. 
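For illustration only, here is what those percentages imply in concrete terms. The dollar amount and baseline emissions in this Python sketch are hypothetical assumptions, not figures from the article; only the 30 percent and 20 to 40 percent reductions come from the sources quoted above.

```python
# Hypothetical $250 monthly winter heating bill (assumed, not from the
# article), reduced by the 30% cited by Clean Energy Resource Teams.
monthly_heating_cost = 250.00
monthly_savings = monthly_heating_cost * 0.30

# Hypothetical 1,000 kg/month of heating-related CO2 (assumed), reduced
# by the 20-40% range the company cites.
baseline_emissions_kg = 1_000
low_cut = baseline_emissions_kg * 0.20
high_cut = baseline_emissions_kg * 0.40

print(f"monthly savings: ${monthly_savings:.2f}")
print(f"emissions cut: {low_cut:.0f}-{high_cut:.0f} kg CO2 per month")
```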

Gasco said 8th Fire Solar gets its name from the ancient Anishinaabe Seven Fires Prophecy, which says we’re currently living in the time of the seventh fire. The prophecy says the next generation, or eighth fire, will come to a fork in the road. One path will be scorched and worn. The other will be green and untrodden.

“8th Fire is lighting that eighth fire. It’s making that decision to go down that green path,” he said. “That path that we haven’t taken before.” 

Nicholas Bellrock is a shop worker at 8th Fire Solar. He said he’s gained skills he can use in other parts of his life. 

“I like that we have the ability to learn new things every day,” he said. “I’ve learned how to drive a tractor since I got here, and just how to do inventory, things like keep count and wash glass. And how to build a panel.”

Nicholas Bellrock works on preparing a solar thermal panel before installation.
Mathew Holding Eagle III | MPR News

Untapped potential 

Renewable energy expert Robert Blake founded the Solar Bear company. He said for now solar thermal’s potential impact on the environment is untapped.  

“The quickest way for us to fight climate change is to focus on low-income folks, because they’re the ones that are going to suffer the most,” he said. “But that also presents a really good opportunity for us to be able to create job opportunities and then fight climate change a lot quicker and faster, because we’re not going to do it with rich people. They already have the resources to fight it.”

8th Fire Solar installed the first solar thermal unit on the Lake Vermilion Reservation for homeowner Tracey Strong Dagen in 2021. 

She said the transition has lowered her dependence on propane fuel. 

“It’s a very good supplement during the day. That’s when they mainly heat, so you don’t use propane at all during the day,” she said. “It keeps it pretty steady upstairs. We lost electricity one time too, and my brother’s house got down to 55 [degrees] where mine only got down to 68.”

8th Fire Solar is currently working toward installing solar thermal for 25 elder households on the White Earth Nation.  

THE ORIGINAL 9/11, 1973

At Least Two Saudi Officials May Have Deliberately Assisted 9/11 Hijackers, New Evidence Suggests


Newly revealed information also raises questions about whether the FBI and CIA mishandled or downplayed evidence of the kingdom’s possible ties to the plot

PROPUBLICA
Sept. 11, 2024


From the start of U.S. investigations into the terrorist attacks of Sept. 11, 2001, the question of whether the Saudi government might have been involved has hovered over the case.

The FBI, after the most extensive criminal probe in its history, concluded that a low-level Saudi official who helped the first two hijackers in California met them by chance and aided them unwittingly. The CIA said it saw no evidence of a higher-level Saudi role. The bipartisan 9/11 commission adopted those findings. A small FBI team continued to dig into the question, turning up information that raised doubts about some of those conclusions.

But now, 23 years after the attacks, new evidence has emerged to suggest more strongly than ever that at least two Saudi officials deliberately assisted the first Qaida hijackers when they arrived in the United States in January 2000.

Whether the Saudis knew the men were terrorists remains unclear. But the new information shows that both officials worked with Saudi and other religious figures who had ties to al-Qaida and other extremist groups.

Most of the evidence has been gathered in a long-running federal lawsuit against the Saudi government by survivors of the attacks and relatives of those who died. That lawsuit has reached a critical moment, with a judge in New York preparing to rule on a Saudi motion to dismiss the case.


Already, though, information put forward in the plaintiffs’ case — which includes videos, telephone records and other documents that were collected soon after the attacks but were never shared with key investigators — argues for a fundamental reassessment of the Saudi government’s possible involvement with the hijackers.

The court files also raise questions about whether the FBI and CIA, which repeatedly dismissed the significance of Saudi links to the hijackers, mishandled or deliberately downplayed evidence of the kingdom’s possible complicity in the attacks that killed 2,977 people and injured thousands more.

“Why is this information coming out now?” asked retired FBI agent Daniel Gonzalez, who pursued the Saudi connections for almost 15 years. “We should have had all of this three or four weeks after 9/11.”

Saudi officials have long denied any involvement in the plot, emphasizing that they were at war with al-Qaida well before 2001.

They have also leaned on earlier U.S. assessments, especially the one-page summary of a joint FBI-CIA report that was publicly released by the Bush administration in 2005. That summary said there was no evidence that “the Saudi Government or members of the Saudi royal family knowingly provided support” for the attacks.

Pages of the report that were declassified in 2022 are more critical of the Saudi role, describing extensive Saudi funding for Islamic charities linked to al-Qaida and the reluctance of senior Saudi officials to cooperate with U.S. counterterrorism efforts.


The plaintiffs’ account still leaves significant gaps in the story of how two known al-Qaida operatives, Nawaf al-Hazmi and Khalid al-Mihdhar, avoided CIA surveillance overseas, flew into Los Angeles under their own names and then — despite speaking no English and ostensibly knowing no one — settled in Southern California to start preparing for the attacks.

Still, the lawsuit has exposed layers of contradictions and deceit in the Saudi government’s portrayal of Omar al-Bayoumi, a middle-aged Saudi graduate student in San Diego who was the central figure in the hijackers’ support network.

Almost immediately after the 9/11 attacks, FBI agents identified Bayoumi as having helped the two young Saudis rent an apartment, set up a bank account and take care of other needs. Bayoumi, then 42, was arrested on Sept. 21, 2001, in Birmingham, England, where he had moved to continue graduate studies in business. Scotland Yard terrorism investigators questioned him for a week in London as two FBI agents monitored the sessions.

Bayoumi dissembled from the start, newly released transcripts of the interrogations show. He said he barely remembered the two Qaida operatives, having met them by chance in a halal cafe in the Los Angeles suburb of Culver City, after he stopped at the Saudi Consulate to renew his passport. The evidence shows he actually renewed his passport the day before the encounter in the cafe, one of many indications that his meeting with the hijackers was planned.

After pressure from Saudi diplomats, Bayoumi was freed by the British authorities without being charged. U.S. officials did not try to have him extradited.

Two years later, in Saudi Arabia, Bayoumi sat for interviews with the FBI and the 9/11 commission that were overseen by Saudi intelligence officials. Again, he insisted that he was just being hospitable to the hijackers. He knew nothing of their plans, he said, and was opposed to violent jihad.

Gonzalez and other FBI agents were dubious. Though Bayoumi was supposedly a student, he did almost no studying. He was far more active in setting up a Saudi-funded mosque in San Diego and spreading money around the Muslim community. (The Saudi government paid him surreptitiously through an aviation-services company in Houston.)

FBI officials in Washington accepted the Saudi depiction of Bayoumi as an amiable, somewhat bumbling government accountant trying to improve his skills, and as a devout but moderate Muslim — and not a spy. The lead agent on the FBI team that investigated him, Jacqueline Maguire, told the 9/11 commission that by “all indications,” Bayoumi’s connection with the hijackers had been the result of “a random encounter” at the cafe.

The 9/11 commission accepted that assessment. The commission’s investigators noted Bayoumi’s “obliging and gregarious” manner in interviews and called him “an unlikely candidate for clandestine involvement with Islamist extremists.” The panel found “no credible evidence that he believed in violent extremism or knowingly aided extremist groups.”

But in 2017, the FBI concluded that Bayoumi was, in fact, a Saudi spy — although it kept that finding secret until 2022, after President Joe Biden ordered agencies to declassify more documents from the 9/11 files.

A page from an exhibit submitted by the plaintiffs in a long-running lawsuit against the Saudi government over the role it may have played in the 9/11 attacks. The exhibit contains screenshots from a video by a Saudi official, Omar al-Bayoumi, who toured Washington, D.C., in 1999. Credit: Obtained by ProPublica from the U.S. District Court of the Southern District of New York

Exactly whom in the Saudi government Bayoumi was working for remains unclear. FBI reports describe him as a “cooptee,” or part-time agent, of the Saudi intelligence service, but say he reported to the kingdom’s powerful former ambassador to Washington, Prince Bandar bin Sultan. (Lawyers for the Saudi government have continued to repeat Bayoumi’s earlier denials that he ever had “any assignment” for Saudi intelligence.)


Another layer of Bayoumi’s hidden identity has emerged from documents, videotapes and other materials that were seized from his home and office at the time of his arrest in England. The plaintiffs had sought that information from the Justice Department for years but received almost nothing until the British authorities began sharing their copies of the material in 2023.

Although Saudi officials insist that Bayoumi merely volunteered at a local mosque, the British evidence points to his deeper collaboration with the Ministry of Islamic Affairs. The Saudi royals had established the ministry in 1993 as part of a governing pact with the powerful clergy. In return for political support, they gave the clerics effective control over domestic religious matters and funded their efforts to spread their fundamentalist Wahhabi brand of Islam overseas.

From the start of the FBI’s 9/11 investigation, agents pored over a short excerpt of a videotape recorded at a party that Bayoumi hosted for some two dozen Muslim men in February 2000, soon after Hazmi and Mihdhar arrived in San Diego.

It was another coincidence, Bayoumi claimed, that he held the event in the hijackers’ apartment. The two young Saudis had nothing really to do with the gathering, he said, but he needed to keep his wife and other women in his own apartment, sequestered from male guests according to conservative Muslim custom.

The FBI did not share a full copy of the VHS recording with either its own field agents or the 9/11 families, who sought it repeatedly. (An FBI spokesperson declined to comment on the bureau’s handling of the Bayoumi evidence.) But the full recording was provided to the plaintiffs by the British police last December.

The longer version casts Bayoumi’s gathering in a different light. Although the nominal guest of honor is a visiting Saudi cleric, the two hijackers are carefully introduced to the other guests and are seemingly at the center of the proceedings.

After identifying many of the party guests for the first time, the plaintiffs’ lawyers were able to document that many went on to play significant roles in the hijackers’ support network, helping them set up internet and telephone service, sign up for English classes and buy a used car.

“Bayoumi hand-picked these individuals because he knew and assessed that they were well-suited to provide the Al Qaeda operatives with important forms of support,” the lawyers wrote of the party guests.

Another videotape taken from Bayoumi’s Birmingham home is even more at odds with the image he conveyed to the FBI and the 9/11 commission. The video follows Bayoumi as he tours Washington, D.C., with two visiting Saudi clerics early in the summer of 1999.

Lawyers for the Saudi government called the recording an innocent souvenir — “a tourist video that includes footage of artwork, flowerbeds, and a squirrel on the White House lawn.” But the plaintiffs’ lawyers posit a more ominous purpose, especially as Bayoumi focuses on his main subject: an extensive presentation of the Capitol building, which is shown from a series of vantage points and in relation to other Washington landmarks.

“We greet you, the esteemed brothers, and we welcome you from Washington,” Bayoumi says on the video. Later, standing before the camera, he reports as “Omar al-Bayoumi from Capitol Hill, the Capitol building.”


The footage shows the Capitol from various angles, noting architectural features, entrances and the movement of security guards. Bayoumi sprinkles his narration with religious language and refers to a “plan.”

“Bayoumi’s video footage and his narration are not that of a tourist,” the plaintiffs contend in one court document, citing the analysis of a former FBI expert. The video, they add, “bears the hallmarks of terror planning operations identified by law enforcement and counterterrorism investigators in operational videos seized from terror groups including Al Qaeda.”

Lawyers for the Saudi government dismissed this conclusion as preposterous.

But the video’s timing is noteworthy. According to the 9/11 commission report, Osama bin Laden and other al-Qaida leaders began discussing their “planes operation” in the spring of 1999. Although they disagreed on which U.S. landmarks to strike, the report states, “all of them wanted to hit the Capitol.”



The two Saudi clerics who joined Bayoumi on the trip, Adel al-Sadhan and Mutaeb al-Sudairy, were so-called propagators — emissaries of the Islamic Affairs ministry sent to proselytize abroad. U.S. investigators later linked them to a handful of Islamist militants.
Another page from the plaintiffs’ exhibit shows two Saudi religious officials, Mutaeb al-Sudairy and Adel al-Sadhan, during a trip in the Washington, D.C., area with Bayoumi early in the summer of 1999. Credit: Obtained by ProPublica from the U.S. District Court for the Southern District of New York

Most notably, Sudairy, whom Bayoumi describes as the emir, or leader, of the Washington trip, spent several months living in Columbia, Missouri, with Ziyad Khaleel, a Palestinian-American al-Qaida member who delivered a satellite phone to bin Laden in Afghanistan in 1998. The Qaida leader used the phone to coordinate the deadly bombings of U.S. embassies in Kenya and Tanzania, FBI officials have said.

Sudairy and Sadhan, who had diplomatic status, had previously visited California, working with Bayoumi and staying at a small San Diego guesthouse where the hijackers later lived. Many new details of their travels were revealed in the British documents. The two Saudis had denied even knowing Bayoumi, one of many false claims in depositions coordinated by the Saudi government.

The new evidence also shows that Sadhan and Sudairy worked with the other key Saudi official linked to the hijackers, the cleric Fahad al-Thumairy. According to one FBI source, it was Thumairy, the 32-year-old imam of a prominent Saudi mosque in Culver City, who received the hijackers when they arrived on Jan. 15, 2000, and arranged for their temporary housing and other needs.

Thumairy, a Ministry of Islamic Affairs official who was also assigned to the Saudi consulate, insisted he had no memory of Hazmi and Mihdhar, although the three were seen together by several FBI informants. Thumairy also denied knowing Bayoumi, despite telephone records that show at least five dozen calls between them. Thumairy’s diplomatic visa was withdrawn by the State Department in 2003 because of his suspected involvement with terrorist activity.

In an extensive analysis of telephone records produced by the FBI and the British authorities, the plaintiffs also documented what they called patterns of coordination involving Bayoumi, Thumairy and other Saudi officials. (Lawyers for the Saudi government said the calls were about mundane religious matters.)

Two weeks before the hijackers’ arrival, for example, the records show calls among Bayoumi, Thumairy and the Islamic Affairs director at the Saudi Embassy in Washington. Bayoumi and Thumairy also made a number of calls around that time to a noted Yemeni American cleric, Anwar al-Awlaki, who later emerged as an important Qaida leader in Yemen.


It has long been known that Awlaki, who was killed by a U.S. drone strike in 2011, had some contact with Hazmi and Mihdhar in San Diego and met two other 9/11 hijackers after moving to a mosque in Falls Church, Virginia. But many FBI investigators believed he was radicalized well after 9/11 and may not have known the hijackers’ plans.


New evidence filed in the court case points to a more significant relationship. Awlaki appears to have met Hazmi and Mihdhar as soon as they arrived in San Diego. He joined Bayoumi in helping them rent an apartment and set up bank accounts, and he was seen by others to have served as a trusted spiritual advisor.

Awlaki’s worldview “matched quite closely to al-Qaida’s at the time,” said Alexander Meleagrou-Hitchens, a biographer of Awlaki who served as an expert for the plaintiffs. “The new information now becoming public, on top of what we already know about his teachings and associations, makes it reasonable to conclude that Awlaki knew the hijackers were part of the al-Qaeda network.”



Tim Golden
Tim Golden is a reporter at ProPublica, concentrating on national security, foreign policy and criminal justice.
Indian nurse wins payout against UK care company

Kirankumar Rathod is among more than 100,000 overseas workers who have arrived in Britain to take up care jobs.



An Indian nurse who was sacked by a British care company won a significant payout on Monday (9) in a case that lawyers said could spur other migrant workers to pursue claims against unscrupulous bosses.

Kirankumar Rathod is among more than 100,000 overseas workers who have arrived in Britain to take up care jobs since 2022, when the government opened up a new visa route to help tackle massive staffing gaps.

But critics say reports of labour abuses in the sector have soared since the scheme’s introduction.

Rathod said he was left in dire financial straits after London-based Clinica Private Healthcare Ltd hired him, but failed to provide him with any work and then fired him.


In an unusual ruling on Monday, employment judge Natasha Joffe ordered Clinica to pay Rathod nearly £17,000 in unpaid wages to date, and to continue paying his salary until his claim for unfair dismissal is decided.

“This is very significant,” Rathod’s solicitor Sarmila Bose of the Work Rights Centre told the Thomson Reuters Foundation.

“It shows that redress is possible for the many people who have been wronged by the way the visa scheme has been operated.”


Bose said the award was “a lifesaver” for Rathod, his wife and six-year-old daughter who had been left in a “desperate financial situation” by Clinica.

After the ruling, Rathod said he felt “massive relief”.

“This has been an incredibly stressful time for me, both emotionally and financially, as while Clinica denied me work and income, I was unable to provide for my family,” he told the Thomson Reuters Foundation in a statement.

Legal experts said it was extremely rare for judges to grant “interim relief” and the ruling was a strong indication Rathod would win his case at the Central London Employment Tribunal when it is eventually heard in full.

Rathod said he had paid an agent in India £22,000 to secure a care job in Britain – although there is no suggestion Clinica knew this.

After arriving in May 2023, he received a contract to work as a healthcare assistant with an annual salary of £23,500.

When no work materialised, Rathod and other employees visited Clinica’s west London offices several times.

Following one visit, he received an email accusing him of “gross insubordination”.

After months without work, Rathod told a company representative he would take legal action. He was sacked the following day on November 8, 2023.

“I told him the situation was killing me, I could not sleep at night,” Rathod said in a witness statement.

In a previous written summary of the case, the judge said Clinica appeared to have “strung along” a significant number of others and its behaviour suggested “something awry” in the way it ran its business.

The tribunal heard that Clinica’s licence to employ overseas workers had since been revoked. Its representative had argued the company could not be asked to pay Rathod since it no longer had a licence to sponsor overseas workers.

The Work Rights Centre, which is helping three other migrant workers with claims related to different companies, said more than 60 people had approached it with similar stories this year alone.

But Bose said this was the tip of the iceberg.

“Some companies employed dozens of people (from overseas) when they didn’t really have jobs for them,” she said. “The number of people affected overall is in the thousands.”

Labour experts told an investigation by the Thomson Reuters Foundation this year that abuse was rife in the care sector, but most migrant workers did not complain because they feared losing their visa and being deported.

(The Thomson Reuters Foundation)