It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way (K.Marx, Letter to F.Engels on the Indian Mutiny)
Wednesday, June 29, 2022
Belgian researchers explain why people with lower economic status don’t trust politicians as much
The ‘anomie’ concept – the perception that society is disintegrating and losing its moral standards – explains why people with low socio-economic status trust politics less than those with a higher one, concludes a new study published in the scientific journal Social Psychological Bulletin.
The study was conducted by two Belgian researchers, Thierry Bornand (ULB and IWEPS) and Olivier Klein (ULB), in 2018, using a representative sample of the population of Wallonia (one of the three regions of Belgium). This region, known for its economic difficulties, is a relevant context for investigating the link between the ‘anomie’ concept and political trust.
But what is ‘anomie’?
‘Anomie’, a concept first proposed by Durkheim, refers to the perception that the social fabric is disintegrating, and that moral standards and trust have disappeared from society.
Interestingly, the present study reveals that people with low socio-economic status perceive more ‘anomie’ in society than people with a higher one, which in turn explains why they also trust politics less.
Why is this important?
Even though it is a major psychological mechanism, the perception of ‘anomie’ had not yet been empirically studied as an explanatory factor of political trust. Thus, what this study tells us is that political trust is not only a matter of evaluating what politicians do or do not do.
Political trust is also shaped by the way individuals perceive society as a whole – by their wider perception of how society works. If people perceive that moral standards or social trust are failing, then political trust will also decline.
Importantly, this study also shows that the perception of ‘anomie’ is higher among individuals with lower socio-economic status. The lower an individual’s status, the more they perceive that the social fabric is breaking down. In other words, differences in socio-economic status reduce political trust at the individual level, regardless of the performance or achievements of the government.
Additionally, the researchers have also shown that perception of ‘anomie’ is associated with lower interpersonal trust. Thus, inequalities between individuals might sustain a vicious circle.
Although the study was not designed to compare different contexts, the authors believe it falls to social policy to break that vicious circle: as inequalities are reduced, the association between socio-economic status and ‘anomie’ should diminish.
Research paper:
Bornand, T., & Klein, O. (2022). Political Trust by Individuals of Low Socioeconomic Status: The Key Role of Anomie. Social Psychological Bulletin, 17, 1–22. https://doi.org/10.32872/spb.6897
Salmon famously travel hundreds of miles upstream to reach their home waters to spawn, but climate change is shrinking their destination. A new study offers high-resolution details on how Chinook salmon habitats are being lost on Bear Valley Creek, a headwater stream of the Salmon River in central Idaho.
The study, published today in the AGU journal Geophysical Research Letters, suggests lower water volumes and warming temperatures are dramatically shrinking spawning beds and nurseries for the culturally and economically important fish. Researchers predict salmon here could lose nearly half their total habitat in this river as soon as 2040 due to an estimated 50% decrease in river discharge.
Daniele Tonina, lead author of the new study and a professor of ecohydraulics at the University of Idaho, and colleagues examined a 14-kilometer stretch of Bear Valley Creek, which is known for hosting a robust population of Chinook salmon. With a wide valley, meandering main river and cozy side-streams, the site is representative of ideal salmon habitats in the Pacific Northwest.
The team mapped the river's channels and floodplain using a kind of remote, 3D laser scanning, or LiDAR, that uses green-wavelength lasers to see into shallow aquatic environments. They then used 60 years of historical stream-flow data, from 1957 to 2016, from stream gauges at eight nearby streams to calculate trends in the annual summer discharge, a critical time for fish survival. Using three different hydrologic models to combine the river features and predicted discharge until 2090, they estimated changes to salmon habitat both in the past and over the coming decades.
Over the historical study period, summer stream flow volume dropped by 19%, and it slowed by 17%. That means less overall area suitable for salmon nests and a loss of off-channel havens for fingerlings as side-streams get cut off from the main channel. About 20% of this critical off-channel habitat was already lost over the 60-year period, the study estimated. The salmon lost 23% of their spawning habitat as well.
"This really allowed us to understand how the environment will change at different discharges, which hasn't really been done before. Now we can say the impact will be that the habitat gets smaller and more fragmented, meaning even the parts that are still good [quality] might be too small to be useful," Tonina said. "Still, this is a glass half-full, half-empty result. At least it's not a total loss of habitat yet."
"A huge limitation has been our ability to study the landscape at a scale that's biologically relevant for salmon," said Lisa Crozier, a research ecologist at NOAA's Northwest Fisheries Science Center who was not involved in the study. "We know the overall pattern is that low flows are bad for fish, but we don't know exactly why or what life stages are most impacted. But here, we can see very specifically they're losing off-channel and spawning habitat. It really helps to know those details."
Saving salmon streams
Salmon have exacting requirements for their nests. Each female can use up to six square meters of riverbed real estate to lay her eggs; the gravel must be just right, the water must be cold and rushing, there must be calm side streams for fingerlings to grow. And of course, there must be enough water flowing in the streams to let the salmon arrive in the first place. Each requirement is threatened.
Smaller salmon habitats of poorer quality could reduce the success of spawning and increase struggles for young salmon, which already face a host of human-caused barriers. More female salmon may be competing for shrinking nest sites, and young salmon will compete for dwindling space and resources as well. Breeding salmon whose home waters are cut off or disappear altogether may expend too much energy searching for a new spot and die of exhaustion before laying their eggs.
Studies like this one help ecologists and conservationists figure out which areas are most likely to remain suitable habitats for salmon and other species, and can therefore be targeted for protection, Tonina said. Other cold-water fish, such as trout and steelhead, would be impacted in similar ways. Chinook salmon serve as useful "indicators of enormous ecosystem change," Crozier added, but "every single species will be affected by these changes. It's uncharted territory."
More information: Daniele Tonina et al., Climate Change Shrinks and Fragments Salmon Habitats in a Snow-Dependent Region, Geophysical Research Letters (2022). DOI: 10.1029/2022GL098552
Even when passed through water treatment plants, some types of viruses can remain infectious for at least 2 days by riding on tiny plastic pellets known as microplastics, The Guardian reports. Researchers compared the survival of two types of viruses—an enveloped, lipid-coated bacteriophage virus that only infects bacteria and a nonenveloped rotavirus (pictured) that causes diarrhea and upset stomachs in humans—in three types of treated water with and without microplastics present. The lipid membrane surrounding the bacteriophage virus made it decay quickly with or without microplastics present, but the membraneless rotavirus stayed stable for the 48-hour test period when surrounded with microplastics, the scientists report this month in Environmental Pollution. The researchers posit that the rotavirus, unburdened by a lipid membrane, survived by “hitchhiking” with microplastics and flowing back into rivers and lakes, where it could be swallowed by unsuspecting people taking a dip.
Melting Glaciers Could Release Deadly Microbes, Scientists Suggest
David Bressan, Contributor
Jun 28, 2022
Artist's impression of viruses trapped in glacier ice. GETTY
Last year, scientists announced the discovery of 33 viruses in ice and snow samples collected from glaciers. Now another study found almost 1,000 species of bacteria in similar samples.
Most of those microorganisms, which survived because they had remained frozen, are unlike any microorganisms that have been cataloged to date.
The researchers analyzed ice cores taken from Tibetan glaciers. The ice cores contain layers of ice that accumulate year after year, trapping whatever was in the atmosphere around them at the time each layer froze - including microbes and viruses.
"These glaciers were formed gradually, and along with dust and gasses, many, many viruses were also deposited in that ice," said microbiologist Zhi-Ping Zhong, lead author of the 2021 study published in the journal Microbiome.
The study of viable microorganisms in glaciers is a relatively new branch of science. In 2015, a study published in the Proceedings of the National Academy of Sciences found that the 30,000-year-old virus Mollivirus sibericum could still infect modern amoeba. In 2020, a preprint study described ancient viruses found in samples from a melting glacier in Tibet.
"We know very little about viruses and microbes in these extreme environments, and what is actually there," explains Lonnie Thompson, a glaciologist involved in the research. "The documentation and understanding of that is extremely important: How do bacteria and viruses respond to climate change?"
As glaciers all over the world are melting at an alarming rate, the released microbes could travel with the meltwater into rivers and streams and reach populated areas, infecting plants, animals and people. The glaciers in Tibet feed several rivers that lead to densely populated regions of China and India. As some of the bacteria and viruses are very old - some are older than 15,000 years - modern organisms could lack immunity to these microorganisms.
In the worst-case scenario, meltwater from glaciers and ice caps could release potentially infectious pathogens into the environment. Researchers have found still-intact smallpox and Spanish flu viruses in 100-year-old frozen tissue samples. An outbreak of anthrax in Siberia five years ago is believed to have resulted from the pathogen being preserved in reindeer carcasses. Frozen for decades, the bodies thawed out of the ground during an exceptional heatwave, releasing the still-infectious anthrax spores.
In Permafrost Thaw, Scientists Seek to Understand Radon Risk
The cancer-causing gas is colorless, odorless and tasteless, making it an invisible threat to homes built on permafrost.
Towns like Narsaq in Greenland may already face unsafe radon levels.
Deep in the frozen ground of the north, a radioactive hazard has lain trapped for millennia. But UK scientist Paul Glover realized some years back that it wouldn’t always be that way: One day it might get out.
Glover had attended a conference where a speaker described the low permeability of permafrost — ground that remains frozen for at least two years or, in some cases, thousands. It is an icy shield, a thick blanket that locks contaminants, microbes and molecules underfoot — and that includes the cancer-causing radioactive gas radon.
“It immediately occurred to me that, well, if there is radon underground, it will be trapped there by a layer of permafrost,” recalls Glover, a petrophysicist at the University of Leeds in England. “What happens if that layer suddenly isn’t there anymore?” Ever since then, Glover has worked on methods to estimate how much radon — which is released as the element radium decays — might be liberated as climate change causes the permafrost to thaw.
Significant areas of Arctic and sub-Arctic ground contain permafrost — but today it is melting, and the rate of that thaw is accelerating. In a report published in January, Glover and coauthor Martin Blouin, now technical director at the mapping software firm Geostack, used modeling techniques to show that homes with basements built on areas of permafrost could be exposed to high levels of radon gas in the future. “As the permafrost melts, this reservoir of active radon can flood to the surface and get into buildings — and by being in buildings, cause a health hazard,” Glover says.
No one knows exactly how quickly radon diffuses through icy ground, but by using the rate of diffusion of carbon dioxide and adjusting for the properties of radon, Glover came up with a figure that he could use in the model. Based on 40 percent permafrost thaw, the calculations reveal that radon emissions could raise radioactivity levels to more than 200 becquerels per cubic meter (Bq/m3) for a period of more than four years in homes with basements at or below ground level. This happens when the 40 percent thaw occurs in 15 years or less.
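The article doesn’t spell out how Glover adjusted the carbon dioxide figure for radon, but a common back-of-envelope way to scale diffusion between gases is Graham’s law (diffusion rate proportional to 1/√molar mass). The sketch below is purely illustrative: the Graham’s-law scaling and the placeholder CO2 value are assumptions, not the paper’s actual method.

```python
import math

# Molar masses in g/mol: carbon dioxide vs. radon-222.
M_CO2 = 44.01
M_RN = 222.0

def scale_diffusion(d_co2: float) -> float:
    """Estimate radon's diffusion coefficient from CO2's via
    Graham's law (D proportional to 1/sqrt(molar mass))."""
    return d_co2 * math.sqrt(M_CO2 / M_RN)

# Placeholder CO2 diffusion coefficient through icy ground (m^2/s),
# chosen only to show the scaling -- not a measured value.
d_co2_ice = 1.0e-10
print(f"radon estimate: {scale_diffusion(d_co2_ice):.2e} m^2/s")
```

Because radon is roughly five times heavier than CO2, this crude scaling would put its diffusion coefficient at a bit under half the CO2 value.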
According to the World Health Organization, the risk of lung cancer increases by about 16 percent with every 100 Bq/m3 of long-term exposure. Some countries, including the UK, set the safe level of average exposure at 200 Bq/m3. But without testing for radon in areas where the geology suggests it’s present, people will not know whether they are at risk — because the gas is odorless, colorless and tasteless.
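Taken at face value, the WHO figure implies a linear dose-response. Under that (assumed) linear model, long-term exposure at the UK’s 200 Bq/m3 level works out to roughly a 32 percent increase in risk:

```python
def relative_risk(bq_per_m3: float, increase_per_100: float = 0.16) -> float:
    """Lung-cancer relative risk under an assumed linear model:
    +16% per 100 Bq/m3 of long-term exposure (the WHO figure above)."""
    return 1.0 + increase_per_100 * (bq_per_m3 / 100.0)

print(relative_risk(100))  # ~16% increase over baseline
print(relative_risk(200))  # ~32% increase at the UK action level
```

The linearity here is an illustration of the cited figure, not a claim about the true dose-response curve.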
Glover stresses that the model in the paper is an early attempt to understand how permafrost thaw could affect people’s exposure to the gas. It doesn’t, for example, account for seasonal variation in the rate of permafrost thaw or the effects of soil compaction when ice within it melts, something which could pump yet more radon to the surface.
Some 3.3 million people live on permafrost that will have completely melted away by 2050, according to estimates in a 2021 study. Not all of these people live in areas prone to radon but many do: For example, in parts of Canada, Alaska, Greenland and Russia. And the link between radon exposure and lung cancer is well-established, as is the fact that smoking further increases one’s risk, says Stacy Stanifer, oncology clinical nurse specialist at the University of Kentucky’s College of Nursing. She points to studies suggesting that radon could be behind up to 1 in 10 lung cancer deaths, of which there are 1 million in total worldwide every year.
“Breathing radon is dangerous for everyone, but it’s even more harmful when you also breathe tobacco smoke,” says Stanifer. Smoking is prevalent in Arctic and sub-Arctic communities; for example, a 2012 study reported that nearly two-thirds of Canadian Inuit age 15 and over who live within the Inuit homeland said they smoke cigarettes daily, compared with 16 percent of Canadians overall.
Scientists don’t know how much radon is actually emanating from areas with melting permafrost today, says Nicholas Hasson, a geoscientist and PhD student at the University of Alaska Fairbanks: “I would call this a blank spot.” He notes that, in real life, permafrost layers are complex and irregular, and agrees with Glover that field measurements are essential to validate the model. Instead of a uniform sheet of ice underground, imagine permafrost as more of a higgledy-piggledy Swiss cheese of ice, with some areas much thicker than others and places where groundwater courses through it, exacerbating the thaw.
Houses with basements built into permafrost are more at risk of radon contamination.
Visual: Knowable Magazine
Hasson and colleagues have studied locations where permafrost is thawing unusually quickly and emitting methane, a greenhouse gas many times more potent than carbon dioxide. Similar “chimneys” could be spewing out elevated amounts of radon gas in some places, he suggests.
For human health, what really matters is the amount of radon that gets into people’s homes. Scientists and even homeowners themselves can use radioactivity detectors to assess this. A study published online in February 2022, which is yet to be peer-reviewed, measured levels of radon over the course of a year in more than 250 homes in three towns in Greenland. Out of 59 homes in Narsaq, for instance, 17 were found to have radiation levels above 200 Bq/m3.
Lead author Violeta Hansen, a radioecologist at Aarhus University in Denmark, stresses that these are early results based on a small number of homes. It would take much more research, she says, before she could evaluate the health risks associated with radon in properties like these across Greenland. She is now leading an international project that will run field experiments and gather radon measurements from homes in various countries, including Canada and Greenland. “We need to come back to the public with low-cost and effective, validated mitigation measures,” Hansen says.
It is important to avoid panicking people without solid data and solutions on hand, says Aaron Goodarzi, a radiobiologist at the University of Calgary in Canada. The good news is that there are tried-and-tested methods of lowering levels of radon inside a house once the homeowner knows it is there. Goodarzi points, for example, to a technique called sub-slab depressurization, in which a sealed pipe is inserted below the house and connected to a fan. This sucks any radon out from below the building before blowing it away into the atmosphere. “Think of it simply like a bypass,” he says.
The type of building matters. Glover’s model found that homes built on piles or stilts, and thus separated from the ground, did not experience a boost in radon levels. Fortunately, many homes in the Arctic and sub-Arctic are constructed in this fashion. But for those that aren’t, the cost of mitigating radon could be prohibitive for low-income communities in these regions. “That’s an equity issue that has to be considered, certainly,” says Goodarzi, who notes that the onus might be on social housing administrators in some areas to ensure that the housing they provide is healthy.
A spokesperson for Health Canada says that the government agency currently recommends that homeowners test radon levels in their properties and use certified suppliers to install mitigation technologies if required.
Many people may not think about radon very much, given that it is invisible. Glover says that getting informed now, before the permafrost thaw worsens, could save lives.
“We know that people die from it,” he says. “But at the same time, there’s so much that we can do to protect ourselves.”
Chris Baraniuk is a freelance science journalist and nature lover who lives in Belfast, Northern Ireland. His work has been published by the BBC, the Guardian, New Scientist, Scientific American, and Hakai Magazine, among other publications.
On June 13, I received an invitation to publish in what was obviously a predatory paleontology journal. (The “Assistant Managing Editor” incorrectly assumed I have a doctorate; I don’t.)
Here is the email, preceded by my curt response:
On June 14, the “Assistant Managing Editor” made this reply:
Dear Dr. Daniel Phelps,
Thank you for your reply, We would like to inform you that ours is not a predatory journal. We have received the ISSN which is provided in our previous email. For your convenience, we are providing the link of our journal where you can find complete information of our journal and Editorial Board members. Link: https://medwinpublishers.com/IJPBP/index.php Kindly revert back if you have any queries. Look forward hear from you soon. Kind Regards, Jackie Crystal Assistant Managing Editor
I clicked on the link and found, sadly, several papers by people who were probably starting their careers. Then there were a couple of absolute crackpot papers. One featured a weird claim about a supposed saber-tooth cat fossil that was, comically, just weathered rock (pareidolia, no actual bones), but the topper was this “Research Article” that claims humans are the descendants of dinosaurs! Here is a screenshot of the start of the “Research Article” in case it is removed:
This “science journal” is taking money from naive and delusional people. They should be ashamed. The “Editorial Board” should be forced to help the unfortunate authors who publish with them and should be publicly exposed and shamed for being associated with this “paleontology journal.”
My apologies for so many scare quotes in this post.
It’s always a risky game to predict what the Supreme Court will do about anything, but we can always discern the general trends, and as a decades-long member of the (spooky, scary!) Federalist Society, I thought I’d take a crack at answering the first three of Joe Felsenstein’s questions about the future of teaching evolution.
What would this court do if asked to decide a case similar to the Dover School Board case?
In the Dover case — which, believe it or not, is going on 20 years ago, now — the district court found that the school board violated the First Amendment by adopting an “Intelligent Design Policy.” This policy required that a statement be read to students in ninth grade biology class which said that biological evolution “is not a fact,” that it contained “gaps ... for which there is no evidence,” and that “Intelligent Design ... differs from Darwin’s view.” It encouraged students to consult Of Pandas and People, a book which advocated ID creationism, and which was made available to students. After a thorough trial on the factual issues, the court found this unconstitutional using two different legal tests: the “Endorsement Test” and the “Lemon test.”
These tests are intellectual devices judges use to answer whether government has crossed the constitutional lines that bar it from either establishing a religion — that is, creating an official church or creed of some kind — or inhibiting the free exercise of religion — that is, punishing or burdening someone for practicing a faith. And these two tests are quite similar. The Lemon test is a three-part analysis dating back to the 1971 case of Lemon v. Kurtzman. The Endorsement Test is somewhat newer.
The Endorsement Test asks “whether an objective observer, acquainted with the text, legislative history, and implementation of the [law being challenged in court], would perceive it as a state endorsement of prayer in public schools,” or, to put it another way, whether it “sends the ancillary message to members of the audience who are non-adherents that they are outsiders, not full members of the political community, and an accompanying message to adherents that they are insiders, favored members of the political community.”
The Lemon test is somewhat more comprehensive. It asks three questions: whether the challenged law has a secular purpose, whether its principal or primary effect is to advance or inhibit religion, and whether it creates an “excessive entanglement” with religion — meaning, whether it draws government too close to a church.
The Lemon test has come under a lot of criticism over the years, primarily by religious conservative justices such as Antonin Scalia and Clarence Thomas, who view it as either too vague to be helpful or as inherently biased against religion. They think courts applying the Lemon test tend to unfairly block the government from engaging in policies that do nothing more than treat religious people or institutions equally. Personally, I’ve never found this argument persuasive — and I’ve never seen anyone offer a really good alternative to Lemon — but it’s been a major issue among conservative lawyers, who think Lemon does not lead to true religious neutrality, but instead tends to exclude religious perspectives from government policy even where those perspectives do not amount to compelling someone to pray or to subsidize a church.
This morning, in Kennedy v. Bremerton School District, the Supreme Court overruled both the Endorsement Test and Lemon, although it did so in language intended to make readers think these tests had been overruled “long ago.” The Court says, for example, that “[over] time, [the Lemon test] came to involve estimations about whether a ‘reasonable observer’ would consider the government’s challenged action an ‘endorsement’ of religion.” The fatal flaw of the Lemon and Endorsement Tests, it claims, is that they used an “abstract, and ahistorical approach to the Establishment Clause” instead of relying on “historical practices and understandings.” Going forward, courts should “focus[] on original meaning and history.”
What precisely does this mean? Unfortunately, one of the fatal flaws in the now-dominant constitutional theory of Originalism is that it substitutes appeals to past generations’ subjective understandings for any conceptual and logical legal argument. Since such a substitution is not literally possible, some of the better Originalist thinkers have fashioned clever ways to smuggle in the latter by dressing up “abstract and ahistorical” arguments in ancient clothing (this is called “the Construction Zone”). Pointing to historical practice is confusing not only because these practices sometimes conflicted and can be interpreted in different ways, but also because we must infer general rules from those practices, and inferring general rules is, like it or not, necessarily an “abstract and ahistorical” undertaking. It’s no surprise, therefore, that the Kennedy decision itself employs an “abstract and ahistorical” principle when it bases its decision on the presence or absence of “coercion” — which is an abstraction; a concept, not a list of specific historical events. Obviously history can inform a proper grasp of the law — just as the views of previous generations of scientists can help us understand a natural phenomenon — but actually understanding what the law is requires an objective analysis, which must rely on abstractions and “ahistorical” appeals to principle rather than, as Alexander Hamilton put it, rummaging through musty parchments. Given this confusion, it’s unsurprising that the Court drops the entire issue at that point, and gives us no guidance as to what exact kinds of “historical practices and understandings” should govern the question of whether something constitutes an “establishment of religion” or the “free exercise thereof.”
Given that government-funded schools are a century and a half older than the Constitution itself, and were in America’s early years quite heavily saturated in religion, it seems unlikely that a court relying on history alone — without any “abstract and ahistorical principle” to reinforce its constitutional understanding — would reach the same conclusion as the district court in the Dover case. Of course, schools — both public and private — were teaching Intelligent Design in 1791 (when the First Amendment was ratified), since it was the state of the art back then. And the Establishment Clause itself expressly allowed states to maintain their state-established churches; it was only over the course of the nineteenth century that they were abolished, and later still that the Fourteenth Amendment was viewed as forbidding states to force people to subsidize churches. Just the other day, Justice Barrett observed that the Court has “not conclusively determine[d] the manner and circumstances in which post-ratification practice may bear on the original meaning of the Constitution,” and therefore that it’s unclear “whether courts should primarily rely on the prevailing understanding of an individual right when the Fourteenth Amendment was ratified in 1868 or when the Bill of Rights was ratified in 1791.” If 1791 were our guide, it seems quite clear that Dover would be decided differently. But if we instead take 1868 as our starting point, and draw from the understanding of that time that the schools were intended to be secular institutions devoted to teaching the natural world rather than religious doctrine, then the question becomes more complicated: our creationist friends will then seek to argue that teaching a sort of broadly theistic “creator” doesn’t cross the line — indeed, many religious conservative lawyers have argued that for decades already.
They contend that the First Amendment only bars sectarianism in government classrooms, not what they call “general” religious views (whatever those are).
In the Dover case, the judge found that “a reasonable, objective observer who knows the policy’s language, origins, and legislative history, as well as the history of the community and the broader social and historical context in which the policy arose,” would understand that the school board’s Intelligent Design policy was “creationism re-labeled,” and was intended to teach students “that the intelligent designer is God.” Since an objective observer would understand that the ID policy was intended to express official government endorsement of religion, it constituted an unconstitutional establishment of religion. But if schools are allowed to teach vaguely defined “Judeo-Christian values,” because that was permitted in 1791 or 1868, then it’s hard to see how a court would decide the Dover case today.
Of course, there’s the abstraction of “coercion.” In the Kennedy decision, the Court says
government may not, consistent with a historically sensitive understanding of the Establishment Clause, “make a religious observance compulsory.” Government “may not coerce anyone to attend church,” nor may it force citizens to engage in “a formal religious exercise.” Members of this Court have sometimes disagreed on what exactly qualifies as impermissible coercion in light of the original meaning of the Establishment Clause. But in this case Mr. Kennedy’s private religious exercise did not come close to crossing any line one might imagine separating protected private expression from impermissible government coercion.
Everyone, of course, agrees that coercion is one of the things the Establishment Clause forbids: the devil’s in the detail of “what exactly qualifies as impermissible coercion.” For students — ninth graders, in the Dover case — to be pressured by authority figures in school with more than a wink and a nudge can certainly be coercive, especially in a mandatory science class, in a school the law requires them to attend. Being compelled to pay taxes that fund a public school system where religion is being taught is certainly coercive. But if those things are not enough, then today’s decision does little to help guide future courts about what “coercion” means, except to undermine the intellectual foundations on which any genuine answer would have to depend.
Would all lower courts rule against such a school board, with the case appealed to the Supreme Court?
It would require a court-by-court analysis to answer this. Some recent judicial appointees are well known for being open advocates of ID creationism. Others would no doubt continue to hold that the First Amendment prohibits religious indoctrination in public schools.
Or would the increasing number of conservative justices who have been vetted by the Federalist Society allow the Supreme Court to dodge such an issue by letting the pro-ID or pro-creationism ruling of an appeals court stand, by refusing to hear the case?
The Supreme Court takes fewer than one percent of the cases it’s asked to take, so it’s overwhelmingly likely that virtually any appellate court ruling will “stand,” regardless of what it’s about or whether it’s right. I suspect that there’s little interest on the Supreme Court in taking a creationism case, however. While I can’t speak for the Court, or even for the whole Federalist Society (having been a Society member for some 25 years now, I know well enough it’s impossible to speak for the whole Society), I can say that I’m not aware of a prominent contingent within it that’s particularly enthusiastic about creationism. Instead, the religious conservative faction of lawyers is more concerned with school prayer in general, and I think it more likely that the Court will be interested in cases involving that than in those involving creationism specifically. This is reading tea-leaves, of course, so take that for what it’s worth.
How soon can we expect the Discovery Institute to lawyer-up and decide that its original position of “Teach The Controversy” is maybe not such a bad approach after all?
Any reasonable observer would assume they were on the phone with their lawyers at about 10:30 am eastern time yesterday morning.
Scientists identify new brain mechanism involved in impulsive cocaine-seeking in rats
Discovery may represent a future target for treating substance use disorders
Date: June 28, 2022
Source: NIH/National Institute on Drug Abuse
Summary:
Researchers have found that blocking certain acetylcholine receptors in the lateral habenula (LHb), an area of the brain that balances reward and aversion, made it harder to resist seeking cocaine in a rat model of impulsive behavior. These findings identify a new role for these receptors that may represent a future target for the development of treatments for cocaine use disorder. There are currently no approved medications to treat cocaine use disorder.
Published in the Journal of Neuroscience, the study was supported by the National Institute on Drug Abuse (NIDA), part of the National Institutes of Health. In 2020, over 41,000 people died from drug overdoses involving stimulants, including cocaine and methamphetamine. Developing safe and effective medications that help treat addictions to cocaine and other stimulants is critical to expand the choices offered to people seeking treatment and to help sustain recovery.
"This discovery gives researchers a new, specific target toward solving a problem that has long been elusive -- developing treatments for cocaine addiction," said NIDA Director Nora Volkow, M.D. "As we have seen with medications to treat opioid use disorder, adding this tool to clinical care could save lives from overdose and drastically improve health and quality of life."
Addiction science researchers are particularly interested in the LHb as a target for future treatment development because of its position as an interface between brain regions involved with reasoning and other higher order thought processes and those mediating emotion and reward -- factors known to be associated with substance use disorders as well as major depressive disorders. For instance, these areas are involved in regulating behaviors like abstaining from a reward when it is determined not to be "beneficial."
Building on previous work that established the importance of the LHb and acetylcholine receptor signaling in impulsive cocaine-seeking, this study further defines the cellular mechanisms through which LHb neurons regulate this behavior. Researchers used a behavioral paradigm called the Go/NoGo model in rats. In this model, rats were trained to self-administer cocaine, where a lever press led to an injection of the drug. This was followed by specific training in the Go/NoGo task where cocaine was available when the lights were on (Go), but not when the lights were off (NoGo). Animals quickly learned to stop responding when cocaine was not available.
The researchers then chemically manipulated the LHb, to assess the impact on the rats' ability to withhold their response to cocaine. They found that response inhibition for cocaine was impaired by blocking a specific type of muscarinic acetylcholine receptor, known as M2Rs, with an experimental drug called AFDX-116, and not with a drug called pirenzepine that blocks other muscarinic acetylcholine receptors known as M1Rs. Thus, when M2Rs were blocked in the LHb the rodents were no longer able to stop responding for cocaine even when it was not available (the "NoGo" condition), despite the training. This indicates that increasing LHb M2R function may represent a potential target for treating impulsive drug seeking and substance use disorders.
The researchers also studied the cellular mechanisms by which M2Rs alter LHb neuronal activity by measuring changes in the electrical activity of these neurons in response to acetylcholine-like drugs. Although these drugs reduced both excitatory and inhibitory inputs onto LHb neurons, there was a net increase in inhibition, which may account for acetylcholine's ability to limit impulsive cocaine seeking.
"The LHb acts like an interface between rational thought in the forebrain and the modulation of neurotransmitters like dopamine and serotonin that originate in the midbrain, which are important in regulating decision processes and emotions," said Carl Lupica, Ph.D., chief of the Electrophysiology Research Section of the Computational and Systems Neuroscience Branch of NIDA. "While the immediate results of this study are related to cocaine seeking, there are also greater implications for impulsivity as it relates to other drugs as well as to psychiatric conditions like obsessive-compulsive disorder. Our future studies will explore the relationship between LHb activity and impulsive behavior related to other drugs such as cannabis, and opioids such as heroin."
Although targeting M2Rs is promising, there are challenges, because the muscarinic acetylcholine system is involved in functions ranging from regulating heart rate and vasodilation to mediating motion sickness. These receptors are also located throughout the body, including in many other regions of the brain. Further research is needed to develop ways to target the M2Rs in the LHb without causing a cascade of side effects, and as a first step these researchers are now trying to identify where in the brain the acetylcholine released in the LHb originates.
The study was funded by the NIDA Intramural Research Program.
Journal Reference: Clara I. C. Wolfe, Eun-Kyung Hwang, Elfrieda C. Ijomor, Agustin Zapata, Alexander F. Hoffman, and Carl R. Lupica. Muscarinic Acetylcholine M2 Receptors Regulate Lateral Habenula Neuron Activity and Control Cocaine Seeking Behavior. Journal of Neuroscience, 2022. DOI: 10.1523/JNEUROSCI.0645-22.2022
In the Near Future, Unprecedented Drought Conditions Are Projected to Be More Frequent and Consecutive in Certain Regions
Newswise — For a successful climate change strategy, it is crucial to understand how the impacts of global warming may evolve over time. A new study led by the National Institute for Environmental Studies (NIES) presents the future periods for which aberrant drought conditions will become more frequent, thereby creating a new normal.
Global warming is expected to increase the intensity and frequency of drought in several global regions, adversely affecting the water-resource, agriculture, and energy sectors. Because current water management practices and existing infrastructure in these sectors are based on historical statistics and experience, they may eventually become insufficient under a changing climate. Therefore, it is critical to better understand when drought conditions severe enough to be called “unprecedented” will become frequent.
“Preceding studies have reported the timing at which the impact of climate change emerges for precipitation and temperature. However, no study had successfully estimated this timing for drought in terms of river discharge at a global scale,” said Tokuta Yokohata, a co-author and Chief Senior Researcher of the Earth System Risk Analysis Section at the Earth System Division, NIES. “A temporal evaluation of future drought conditions relative to our historical experience is essential for adopting appropriate climate change strategies, especially climate adaptations, in the long term and in good time.”
The paper published in Nature Communications estimates the periods when drought conditions will shift to an unprecedented state in a warmer world. The research group evaluated changes in drought day frequency for 59 global subcontinental regions until the end of the 21st century. They estimated the time of first emergence (TFE) of consecutive unprecedented drought, which is the first onset of exceedance beyond the maximum bound of the historical climate variability during the reference period (1865-2005) that occurs consecutively for a certain number of years. For instance, TFE5 indicates that the regional drought frequency remains larger than the maximum value during the reference 141-year period for more than five years. The scientists analyzed their river discharge simulation dataset, which was derived from combinations of five global hydrological models and four climate model projections. The study considered low and high greenhouse gas concentration scenarios to evaluate the consequences of society’s decisions on the climate mitigation pathway.
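The time-of-first-emergence idea described above can be sketched as a small function: scan a series of annual drought-day frequencies and find the first year from which the frequency stays above the historical maximum for N consecutive years. This is an illustrative reconstruction of the definition, not the authors' code; the function name and the year-to-frequency dictionary layout are our own assumptions.

```python
def time_of_first_emergence(freq_by_year, ref_max, n_consecutive=5):
    """Return the first year of the first run in which annual drought
    frequency exceeds ref_max (the maximum over the reference period)
    for at least n_consecutive years in a row, or None if no such
    run occurs in the series."""
    run_start, run_len = None, 0
    for year in sorted(freq_by_year):
        if freq_by_year[year] > ref_max:
            if run_len == 0:
                run_start = year  # a new exceedance run begins here
            run_len += 1
            if run_len >= n_consecutive:
                return run_start  # TFE-N detected
        else:
            run_len = 0  # run broken; reset
    return None

# Hypothetical series: reference-period maximum is 10 drought days/year.
freqs = {2028: 9, 2029: 11, 2030: 12, 2031: 12, 2032: 12, 2033: 12, 2034: 8}
print(time_of_first_emergence(freqs, ref_max=10, n_consecutive=5))
```

With these made-up numbers the run of exceedances starting in 2029 lasts five years, so TFE5 is 2029; a region whose exceedances are interrupted before reaching five consecutive years never registers a TFE5, which mirrors the "consecutive" requirement in the study's definition.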
Large regional disparity in the pace of growing warming impacts
“The projected impacts of warming show significant regional disparities in their intensity and the pace of their growth over time,” said the corresponding lead author Yusuke Satoh, a research associate professor at Korea Advanced Institute of Science & Technology. By the middle of this century, increases in drought frequency are statistically significant in 25% and 28% of global land area under low and high greenhouse gas concentration scenarios, respectively. Specific regions show substantial increases of more than double the current frequency. Under both scenarios, so-called hotspots of drought increases include the Mediterranean regions, southern and central South America, and Australia. “Some regions exhibit steady increases in drought frequency. The projected increases are highly likely by the middle of this century compared to the historical period.”
This new study considers consecutive exceedances of more than five years and detects TFE5 in 18 out of 59 regions by the end of this century under a high greenhouse gas concentration scenario. Even for a low greenhouse gas concentration scenario that assumes stringent mitigation strategies, 11 regions are projected to reach TFE5 within the century. “Under high and low greenhouse gas concentration scenarios, respectively, seven and five regions show TFE5 in approximately 30 years, which is before or around the expected climate stabilization in the case of the low climate change scenario. Importantly, the results imply unavoidable unprecedented states in these regions,” said Hideo Shiogama, a co-author and the head of the Earth System Risk Analysis Section at NIES. In particular, southwestern South America and the Mediterranean regions consistently show early and robust TFE5 in both scenarios. On the other hand, the differences between greenhouse gas concentration scenarios indicate that our choice of mitigation strategies produces a noticeable difference in the timing and robustness of the projection. “Appropriate and feasible climate mitigation and adaptation plans are essential for overcoming the expected extraordinarily severe dry conditions. Particularly regarding adaptation, it is crucial to improve our preparedness in the given time horizon before unprecedented drought conditions emerge,” said Satoh.
What is health-monitoring cat litter, and how does it help detect when your cat is sick?
Chemistry gives the classic adsorbent material a colorimetric twist and could provide information about your pet’s health
by Brianna Barbu June 28, 2022 | A version of this story appeared in Volume 100, Issue 24
Credit: C&EN/Brianna Barbu
If you have a cat, as approximately 25% of US households do according to the American Veterinary Medical Association, the internet may have shown you targeted ads for a health-monitoring cat litter. These products promise to change color in response to certain disease markers in a cat’s urine, to help owners spot early signs of illness before more serious symptoms arise.
“A goal with any disease is to pick up disease early, not late,” says Jody Lulich, who specializes in small-animal internal medicine at the University of Minnesota College of Veterinary Medicine. The earlier a disease can be caught, the better the outcome is likely to be.
Cats are relatively prone to urinary issues, especially as they get older or if they’re overweight, according to the Cornell Feline Health Center website. Naturally, given litter’s role in absorbing pee, health-monitoring cat litters focus on detecting potential problems in a cat’s urinary tract: the kidneys, bladder, and the plumbing that connects them. But how do these litters do that? C&EN decided to try to get the scoop on it.
AN ABSORBING TOPIC
Unlike some other consumer products for monitoring a cat’s urinary health at home, such as urine-testing dipsticks, health-monitoring litters also need to work as an everyday cat litter. That means their main function is to absorb moisture and odors from your cat’s excretory activities, hopefully without kicking up too much dust. Most cat litters—about 92%—are made from clay,
according to Mariangela Imbrenda of the Clorox Company, the parent company of the Fresh Step litter brand. But indicator litters are made of silica, which is about 2% of the overall litter market.
The other 6% covers the myriad organic litter materials, including corn, pine, paper, and even tofu.
Silica litter, often marketed as “crystal” litter, is made from amorphous silica gel, the same material that is often found in packets inside shoe boxes and bags of jerky to keep those products dry. Silica gel’s silicon-and-oxygen framework contains a multitude of tiny pores that adsorb small molecules that can form hydrogen bonds. For cat litter, those molecules are water and the ammonium ions produced when microbes break down urea in urine; the ammonia those ions release is what causes the acrid smell of a soiled litter box.
Silica litter doesn’t clump the way most clay litters do, and it’s more expensive, but it tends to be lighter, less dusty than clay, and more efficient at trapping moisture and odor-causing molecules: it can adsorb about 35% of its weight in water without swelling. Pure silica gel is also naturally white, which can help colors show up if dyes are added during the manufacturing process—for example, color-changing indicators to analyze a cat’s pee.
TRUE COLORS
Health-monitoring cat litters show pH changes and blood in the urine to try to alert owners to early signs of urinary tract issues. Source: PrettyLitter.
COLORIMETRIC BASICS
A response to a chemical interaction is a key component of any chemical test, according to Jessica Beard, a chemistry PhD candidate at the Massachusetts Institute of Technology who is developing colorimetric tests for detecting water pollutants. In a colorimetric test, the response is a color change.
Colorimetric indicators are useful for tests where you don’t want to use sophisticated instrumentation, because they can be made so that the results are visible to the human eye, Beard says. And commercial indicators often rely on chemistries that have been known for a long time, which is a big plus, she adds. “If they could get it to work in the early 1900s, it’s probably tolerant of a lot of interference.”
Using colorimetric tests for a quick urine analysis is nothing new, veterinary expert Lulich says. Vets first test a pet’s urine using a dipstick test strip with colorimetric indicators before following up with more specific tests if necessary. These test strips may include tests for pH and the presence of certain disease markers such as protein, blood, or glucose.
The main feature that health-monitoring cat litters advertise is the ability to detect changes in pH using color. That’s exactly what C&EN saw when we got our paws on a bag of PrettyLitter indicator litter and tested its pH-indicating power. According to the company’s patent, PrettyLitter contains the indicator compound bromothymol blue. This compound is yellow in its protonated form (below pH 6) and blue in its deprotonated form (above pH 7.6). Solutions with a mix of protonated and deprotonated molecules appear as different shades of green.
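The indicator behavior described above (yellow below pH 6, blue above pH 7.6, shades of green in between) amounts to a simple threshold mapping. The sketch below is our own simplification of what is really a continuous color change; the function name and the three discrete color labels are illustrative assumptions, not anything from PrettyLitter's patent.

```python
def bromothymol_blue_color(ph):
    """Approximate color of bromothymol blue at a given urine pH,
    using the transition range cited in the article: the protonated
    form (yellow) dominates below pH 6.0, the deprotonated form
    (blue) above pH 7.6, and mixtures in between read as green."""
    if ph < 6.0:
        return "yellow"
    elif ph > 7.6:
        return "blue"
    else:
        return "green"

# Normal cat urine (pH 6.3-6.6 per the Merck Veterinary Manual)
# falls in the mixed, green-reading zone.
print(bromothymol_blue_color(6.4))
```

On this simplified model, only urine outside roughly the 6.0-7.6 window would push the litter to a clearly yellow or clearly blue reading, which matches the article's point that the litter flags pH values at the extremes rather than diagnosing anything on its own.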
In C&EN’s at-home tests, filtered water (approximately neutral pH) turned the litter a yellowish green, vinegar water (pH of around 3) created an orangey-yellow color, and baking soda in water (pH of about 8) turned the litter bright blue. The orange and yellow colors faded after a few hours, which Beard hypothesizes might be because of a proton transfer between the silica and the indicator. The blue did not fade.
INDICATING A PROBLEM?
According to the Merck Veterinary Manual, the normal pH of cat urine is 6.3–6.6. It also says that urine pH can affect the formation of bladder stones, and some bacterial infections can result in alkaline urine. But according to Lulich, a cat’s urinary tract can handle quite a bit of variation in pH. “If the animal takes in a lot of alkali, the body is going to get rid of it,” he says. The same goes for consuming or making a lot of acid. “As long as the body can get rid of it, then it’s not abnormal.”
PrettyLitter Inc. did not respond to questions about the litter’s contents by C&EN’s deadline, but assuming the indicator is indeed bromothymol blue, PrettyLitter is probably alerting cat owners to urine below pH 6 and above pH 7.6. If the pH is outside that range, it could mean a cat is sick, but pH can also change for perfectly harmless reasons, Lulich says, because “pH is not very useful by itself.” Instead, vets will consider pH values in the context of other symptoms as well as risk factors such as a cat’s age, diet, and medical history.
Many health-monitoring litters are also supposed to detect blood in the urine. That could be a much more valuable test. Lulich says that blood in the urine can be a sign of a number of serious conditions. Almost any amount that’s detectable is concerning enough to follow up on, he says.
In the clinic, medical-grade urine test strips use diisopropylbenzene dihydroperoxide and 3,3′,5,5′-tetramethylbenzidine to detect blood. Blood contains hemoglobin, which contains iron(II). If any hemoglobin is present in the urine tested, the iron(II) will react with the peroxide molecules to form radicals. The radicals then oxidize the tetramethylbenzidine, triggering a color change from yellow to blue green.
But according to three brands, blood-detecting litter will turn red, not blue, if a cat’s pee has blood in it. With the help of American University chemist Matthew Hartings, C&EN tested PrettyLitter with a few different concentrations of hemoglobin in water (Hartings is a member of C&EN’s advisory board). C&EN found that the color of the litter reflected the color of the hemoglobin solution. So the blood detection, at least for PrettyLitter, seems to rely not on a chemical test but on the contrast of bloody urine’s reddish color against a white backdrop.
Although Lulich cautions against reading too much into single data points, he says he considers at-home pet health monitoring to be a generally positive thing because it can help foster communication between pet owners and vets, and that may lead to better health outcomes. “It doesn’t hurt anything but your pocketbook,” he says. “Anytime you can pick up disease early and not go overboard with tests that may hurt, then the answer is it should be good.”