Friday, August 18, 2023

Natural compound in white button mushrooms could benefit animal, human health

Researchers in Penn State’s College of Agricultural Sciences have identified a compound in white button mushrooms that potentially can be beneficial for gut health in mammals. Credit: Pixabay

A team of researchers in Penn State's College of Agricultural Sciences has identified a compound in white button mushrooms that could potentially benefit gut health in mammals by activating a protective biological response.

"Our research showed that a biochemometric approach—modeling chemistry and biology data together—can lead to the discovery of new components of chemical mixtures in foods that might be therapeutic for animal and human health," said Joshua Kellogg, assistant professor of metabolomics in the Department of Veterinary and Biomedical Sciences. The researchers published their findings in the Journal of Functional Foods.

Using cell-based assays and a molecular networking approach—a method that organizes molecules according to their structural similarity—the researchers found that the new compound they identified in white button mushrooms activates the aryl hydrocarbon receptor, or AHR, which is found in mammals including mice, pigs and humans.
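The networking idea described above can be illustrated with a toy sketch: spectra of detected molecules are compared pairwise, and pairs whose similarity passes a threshold are linked into structural "families." The intensity vectors, names, and threshold below are invented for illustration and are a simplification of real MS/MS spectral scoring.

```python
import numpy as np
from itertools import combinations

# Toy fragmentation "spectra": intensity vectors over shared fragment bins.
# (Real molecular networking compares MS/MS spectra; these values are invented.)
spectra = {
    "benzothiazole_A": np.array([0.9, 0.1, 0.4, 0.0, 0.2]),
    "benzothiazole_B": np.array([0.8, 0.2, 0.5, 0.0, 0.1]),
    "unrelated_lipid": np.array([0.0, 0.9, 0.0, 0.8, 0.0]),
}

def cosine(u, v):
    """Cosine similarity between two intensity vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Link any pair of molecules whose spectral similarity passes a threshold;
# linked molecules form a structural "family" in the network.
THRESHOLD = 0.7
edges = [
    (a, b) for a, b in combinations(spectra, 2)
    if cosine(spectra[a], spectra[b]) >= THRESHOLD
]
print(edges)  # the two structurally related benzothiazoles cluster together
```

In a real analysis, clusters containing a known active compound (here, the previously studied benzothiazoles) point investigators toward structurally related unknowns worth testing, which is how the team's approach surfaced candidates.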

Prior studies have demonstrated that when mice ingest mushrooms, AHR activity is antagonized, or inhibited. The compound Kellogg's team identified, by contrast, activates AHR when applied to human- and mouse-derived cell lines, which are cultured in the laboratory to test the molecular effects of certain variables.

"It's a delicate balancing act, which is why studying whole foods as well as individual compounds is essential," Kellogg said. "There are benefits to AHR activation as well as antagonism."

Kellogg said that AHR plays an important role in gut health. When activated, it can induce a cellular response to detoxify aryl hydrocarbons, which are known carcinogens, in the gut. When inhibited, AHR can help reduce tumor growth in certain cancers. AHR is also critical in other facets of promoting gut health, including maintaining the integrity of the mucosal lining of the gut and preventing bacteria from invading the gut.

The team's latest research builds on prior work by co-authors Andrew Patterson, professor of molecular toxicology and biochemistry, and Gary Perdew, H. Thomas & Dorothy Willits Hallowell Chair of Agricultural Sciences and director of the Center for Molecular Toxicology and Carcinogenesis. They previously looked at molecules called benzothiazoles and how they react with AHR.

"In our research, we recognized those benzothiazoles, but we also saw previously undiscovered molecules that were structurally related," Kellogg explained. "When we profiled the chemistry of these related structures, we wondered if they would also work with AHR. And we found that they do activate AHR."

The researchers' findings underscore the importance of studying the roles each chemical component plays in a whole food, according to Kellogg.

"Foods are complex chemical mixtures," Kellogg said. "What we do at our core is focus on ways to hunt for active chemistry in natural sources—plants, mushrooms, bacteria. We're interested in how chemical mixtures in foods react with AHR and could protect gut health in general."

Graduate student Xiaoling Chen, a member of the research team and lead author of the paper, is continuing the research by examining molecular mixtures in other mushroom species.

The team also is applying the biochemometric approach to infectious disease research, Kellogg said. For example, they are screening various plants from regions across Pennsylvania for compounds that act against pathogenic bacteria. They have found phytochemicals—part of a plant's defense system that helps protect against viruses—in the plant Artemisia that seem to be effective at inhibiting the growth of mycobacteria that cause tuberculosis.

More information: Xiaoling Chen et al, Molecular networking identifies an AHR-modulating benzothiazole from white button mushrooms (Agaricus bisporus), Journal of Functional Foods (2023). DOI: 10.1016/j.jff.2023.105602

 

New paper highlights dangerous misconceptions of AI

Credit: CC0 Public Domain

Artificial Intelligence (AI) is discriminatory, susceptible to racial and sexist bias and its improper use is sending education into a global crisis, a leading Charles Darwin University (CDU) expert warns in a new research paper.

The paper, "The critique of AI as a foundation for judicious use in higher education," urges society to look beyond the hype of AI and analyze the risks of adopting the technology in education, after AI ubiquitously invaded and colonized public imaginations across the world in late 2022 and early 2023.

In the paper, author and CDU AI expert Dr. Stefan Popenici discusses the two most dangerous myths about AI in education: the belief that AI is objective, factual and unbiased, when it is in fact tied to specific values, beliefs and biases; and the belief that AI doesn't discriminate, when it is inherently discriminatory. He also points to the lack of gender diversity in the growing field.

"If we think about how technology actually operates, we realize that there is not one point in the history of humanity when technology is not directly related to specific cultures and values, beliefs and biases, or gender stances," Dr. Popenici said.

"There is consistent research and books that are providing examples of AI algorithms that discriminate, grotesquely amplify injustice and inequality, targeting and victimizing the most vulnerable and exposing us all to unseen mechanisms of decision where we have no transparency and possibility of recourse."

Dr. Popenici examines how the discrepancy between the priorities of higher education and those of "Big Tech"—the sector's most dominant companies—is growing, with a striking and perilous absence of critical thinking about automation in education, especially in the case of AI. This lack of scrutiny affects how students' data are used and impacts their privacy and their ability to think critically and creatively.

"Big Tech is driven by the aims of profits and power, control and financial gain. Institutions of education and teachers have very different aims: the advancement of knowledge and to nurture educated, responsible, and active citizens that are able to live a balanced life and bring a positive contribution to their societies," Dr. Popenici said.

"It is deceiving to say, and dangerous to believe, that AI is... intelligent. There is no creativity, no depth or wisdom in what generative AI gives users after a prompt."

"Intelligence, as a human trait, is a term that describes a very different set of skills and abilities, much more complex and harder to separate, label, measure and manipulate than any computing system associated with the marketing label of AI."

"If universities and educators want to remain relevant in the future and have a real chance to reach the aims of education, it is important to consider the ethical and intellectual implications of AI."

"The critique of AI as a foundation for judicious use in higher education" was published in the Journal of Applied Learning & Teaching.

More information: Stefan Popenici et al, The critique of AI as a foundation for judicious use in higher education, Journal of Applied Learning & Teaching (2023). DOI: 10.37074/jalt.2023.6.2.4


Q&A: As AI changes education, important conversations for kids still happen off-screen

Credit: Pixabay/CC0 Public Domain

When ChatGPT surged into public life in late 2022, it brought new urgency to long-running debates: Does technology help or hinder kids' learning? How can we make sure tech's influence on kids is positive? Such questions live close to the work of Jason Yip, a University of Washington associate professor in the Information School. Yip has focused on technology's role in families to support collaboration and learning.

As another school year approaches, Yip spoke with UW News about his research.

What sorts of family technology issues do you study?

I look at how technologies mediate interactions between kids and their families. That could be parents or guardians, grandparents or siblings. My doctoral degree is in education, but I study families as opposed to schools because I think families make the biggest impact in learning.

I have three main pillars of that research. The first is about building new technologies to come up with creative ways that we can study different kinds of collaboration. The second is going into people's homes and doing field studies on things like how families search the internet, or how they interact with voice assistants. We look at how new consumer technologies influence family collaborations. The third is co-design: How do adults work with children to co-create new technologies? I'm the director of KidsTeam UW. We have kids come to the university basically to work with us as design researchers to make technologies that work for other children.


Credit: University of Washington

Can you explain some ways you've explored the pros and cons of learning with technology?

I study "joint media engagement," which is a fancy way of saying that kids can work and play with others when using technology. For example, digital games are a great way parents and kids can actually learn together. I'm often of the opinion that it's not the amount that people look at their screens, but it's the quality of that screen time.

I did my postdoc at Sesame Workshop, and we've known for a long time that if a child and parent watch Sesame Street together and they're talking, the kid will learn more than by watching Sesame Street alone. We found this in studies of "Pokémon Go" and "Animal Crossing." With these games, families were learning together, and in the case of Animal Crossing, processing pandemic isolation together.

Whether I'm looking at artificial intelligence or families using internet search, I'm asking: Where does the talking and sharing happen? I think that's what people don't consider enough in this debate. And that dialogue with kids matters much more than these questions of whether technology is frying kids' brains. I grew up in the '90s when there was this vast worry about video games ruining children's lives. But we all survived, I think.

When ChatGPT came out, it was presented as this huge interruption in how we've dealt with technology. But do you think it's that unprecedented in how kids and families are going to interact and learn with it?

I see the buzz around AI as a hype curve—with a surge of excitement, then a dip, then a plateau. For a long time, we've had artificial intelligence models. Then someone figured out how to make money off AI models and everything's exploding. Goodbye, jobs. Goodbye, school. Eventually we're going to hit this apex—I think we're getting close—and then this hype will fade.

The question I have for big tech companies is: Why are we releasing products like ChatGPT with these very simple interfaces? Why isn't there a tutorial, like in a video game, that teaches the mechanics and rules, what's allowed, what's not allowed?

Partly, this AI anxiety comes because we don't yet know what to do with these powerful tools. So I think it's really important to try to help kids understand that these models are trained on data with human error embedded in it. That's something that I hope generative AI makers will show kids: This is how this model works, and here are its limitations.

Have you begun studying how ChatGPT and generative AI will affect kids and families?

We've been doing co-design work with children, and when these AI models started coming out, we started playing around with them and asked the kids what they thought. Some of them were like, "I don't know if I trust it," because it couldn't answer simple questions that kids have.

A big fear is that kids and others are going to just accept the information that ChatGPT spits out. That's a very realistic perspective. But there's the other side: People, even kids, have expertise, and they can test these models. We had a kid start asking ChatGPT questions about Pokémon. And the kid is like, "This is not good," because the model was contradicting what they knew about Pokémon.

We've also been studying how families can use ChatGPT to teach kids about misinformation. So we asked kids, "If ChatGPT makes a birthday card greeting for you to give to your friend Peter, is that misinformation?" Some of the kids were like, "That's not okay. The card was fine, but Peter didn't know whether it came from a human."

The third research area is going into the homes of immigrant families and trying to understand whether ChatGPT does a decent job of helping them find critical information about health or finances or economics. We've studied how the children of immigrant families are searching the internet and helping their families understand the information. Now we're trying to see how AI models affect this relationship.

What are important things for parents and kids to consider when using new technology—AI or not—for learning?

I think parents need to pay attention to the conversations they're having around it. General parenting styles range from authoritative to negotiation style to permissive. Which style is best is very contextual. But the conversations around technology still have to happen, and I think the most important thing parents can do is say to themselves, "I can be a learner, too. I can learn this with my kids." That's hard, but parenting is really hard. Technologies are developing so rapidly that it's okay for parents not to know. I think it's a better position to be in this growth mindset together.

You've taught most every grade level: elementary, junior high, high school and college. What should teachers be conscious of when integrating generative AI in their classrooms?

I feel for the teachers, I really do, because a lot of the teachers' decisions are based on district policies. So it totally depends on the context of the teaching. I think it's up to school leaders to think really deeply about what they're going to do and ask these hard questions, like: What is the point of education in the age of AI?

For example, with generative AI, is testing the best way to gauge what people know? Because if I hand out a take-home test, kids can run it through an AI model and get the answer. Are the ways we've been teaching kids still appropriate?

I taught AP chemistry for a long time. I don't encounter AP chemistry tests in my daily life, even as a former chemistry teacher. So having kids learn to adapt is more important than learning new content, because without adaptation, people don't know what to do with these new tools, and then they're stuck. Policymakers and leaders will have to help the teachers make these decisions.

 

Study highlights jobseekers' skepticism towards artificial intelligence in recruitment

Credit: Pixabay/CC0 Public Domain

A wave of technological transformation has been reshaping the landscape of HR and recruitment, with the emergence of artificial intelligence (AI) promising efficiency, accuracy, and unbiased decision-making.

Amid the rapid adoption of AI technology by HR departments, a joint study conducted by NUS Business School, the International Institute for Management Development (IMD), and The Hong Kong Polytechnic University delved into a vital question: How do jobseekers perceive AI's role in the selection and hiring process? The study has been published in the Journal of Business Ethics.

Associate Professor Jayanth Narayanan from NUS Business School shared that the genesis of this subject emerged from a personal anecdote. "A close friend of mine who had been unwell was evaluated for a role using a video interviewing software," said Professor Jayanth. The software provided feedback that the interviewee did not seem enthusiastic during the video interview.

Professor Jayanth expressed that such an outcome would likely not have transpired had a human interviewer been present. A human evaluator, endowed with perceptiveness, could have discerned signs of illness and conceivably asked about the candidate's well-being. "A human interviewer may even conclude that if the candidate is sick and still making such a valiant effort, they deserve a positive evaluation," he added.

Distrust of AI in providing a fair hiring assessment prevalent among jobseekers

The study, which was conducted from 2017 to 2018, involved over 1,000 participants of various nationalities mostly in their mid-30s. The participants were recruited from Amazon's crowd-sourcing platform Mechanical Turk and were involved in four scenario experiments to examine how people perceive the use of computer algorithms in a recruitment context.

The first two experiments studied how the use of algorithms affects the perception of fairness among job applicants in the hiring process, while the remaining two sought to understand the reasons behind the lower fairness scores.

According to the findings, jobseekers viewed the use of AI in recruitment processes as untrustworthy and perceived algorithmic decision-making to be less fair than human-assisted methods. They also perceived a higher degree of fairness when humans are involved in the resume screening and hiring decision process, as compared to an entirely algorithmic approach. This observation remains consistent even among candidates who experienced successful outcomes in algorithm-driven recruitment processes.

The disparity in perceived fairness is largely attributed to AI's limitations in recognizing the unique attributes of candidates. In contrast to human recruiters, who are more likely to pick up qualitative nuances that set each candidate apart, AI systems can overlook important qualities and potentially screen out good candidates. These findings challenge the widely-held belief that algorithms provide fairer evaluations and eliminate human biases.

Merging human and machine intelligence in hiring processes

In balancing AI technology and the human touch in the recruitment process, Professor Jayanth advocates for a collaborative approach, envisioning algorithms as decision co-pilots alongside human recruiters.

"For example, algorithms can flag that the recruiter is not shortlisting enough women or people from a minority group. Algorithms can also flag the uniqueness of a candidate compared to other applicants," said Professor Jayanth.

Considering the trajectory of AI technology, Professor Jayanth forecasts an imminent surge in its prevalence and accessibility in the recruitment space. However, he underscores the significance of human oversight, suggesting that while algorithms are set to play an essential role, the core responsibility of evaluating fellow humans for job suitability should remain within human purview.

"Why would we give up an important aspect of organizational life to an algorithm? Humanity needs to make conscious and mindful choices about how and why we automate. If we simply use the logic that we can automate anything that will result in efficiency gains, we are going to find that we will automate tasks that are inherently enjoyable for humans to do," he said.

More information: Maude Lavanchy et al, Applicants' Fairness Perceptions of Algorithm-Driven Hiring Procedures, Journal of Business Ethics (2023). DOI: 10.1007/s10551-022-05320-w

Social scientists recommend addressing ChatGPT's ethical challenges before using it for research

Credit: Unsplash/CC0 Public Domain

A new paper by researchers at Penn's School of Social Policy & Practice (SP2) and Penn's Annenberg School for Communication offers recommendations to ensure the ethical use of artificial intelligence resources such as ChatGPT by social work scientists.

Published in the Journal of the Society for Social Work and Research, the article was co-written by Dr. Desmond Upton Patton, Dr. Aviv Landau, and Dr. Siva Mathiyazhagan. Patton, a pioneer in the interdisciplinary fusion of social work and communications, holds joint appointments at Annenberg and SP2 as the Brian and Randi Schwartz University Professor.

Outlining challenges that ChatGPT and other large language models (LLMs) pose across bias, legality, ethics, confidentiality, and informed consent, the piece provides recommendations in five areas for ethical use of the technology:

  • Transparency: Academic writing must disclose how content is generated and by whom.
  • Fact-checking: Academic writing must verify information and cite sources.
  • Authorship: Social work scientists must retain authorship while using AI tools to support their work.
  • Anti-plagiarism: Idea owners and content authors should be located and cited.
  • Inclusion and social justice: Anti-racist frameworks and approaches should be developed to counteract potential biases of LLMs against authors who are Black, Indigenous, or people of color, and authors from the Global South.

Of particular concern to the authors are the limitations of artificial intelligence in the context of human rights. "Similar to a bureaucratic system, ChatGPT enforces thought without compassion, reason, speculation, or imagination," the authors write.

Pointing to the implications of a model trained on existing content, they state, "This could lead to bias, especially if the text used to train it does not represent diverse perspectives or scholarship by under-represented groups. . . . Further, the model generates text by predicting the next word based on the previous words. Thus, it could amplify and perpetuate existing bias based on race, gender, sexuality, ability, caste, and other identities."

Noting ChatGPT's potential for use in research assistance, theme generation, data editing, and presentation development, the authors describe the chatbot as "best suited to serve as an assistive tech tool for social work scientists."

More information: Desmond Upton Patton et al, ChatGPT for Social Work Science: Ethical Challenges and Opportunities, Journal of the Society for Social Work and Research (2023). DOI: 10.1086/726042


Provided by University of Pennsylvania


More human than human: Measuring ChatGPT political bias

Credit: Pixabay/CC0 Public Domain

The artificial intelligence platform ChatGPT shows a significant and systemic left-wing bias, according to a new study led by the University of East Anglia (UEA). The team of researchers in the UK and Brazil developed a rigorous new method to check for political bias.

Published today in the journal Public Choice, the findings show that ChatGPT's responses favor the Democrats in the US; the Labour Party in the UK; and in Brazil, President Lula da Silva of the Workers' Party.

Concerns about an inbuilt political bias in ChatGPT have been raised previously, but this is the first large-scale study using a consistent, evidence-based analysis.

Lead author Dr. Fabio Motoki, of Norwich Business School at the University of East Anglia, said, "With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible. The presence of political bias can influence user views and has potential implications for political and electoral processes. Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the Internet and social media."

The researchers developed an innovative new method to test for ChatGPT's political neutrality. The platform was asked to impersonate individuals from across the political spectrum while answering a series of more than 60 ideological questions. The responses were then compared with the platform's default answers to the same set of questions—allowing the researchers to measure the degree to which ChatGPT's responses were associated with a particular political stance.

To overcome difficulties caused by the inherent randomness of "large language models" that power AI platforms such as ChatGPT, each question was asked 100 times and the different responses collected. These multiple responses were then put through a 1,000-repetition "bootstrap" (a method of re-sampling the original data) to further increase the reliability of the inferences drawn from the generated text.
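The repeat-and-resample procedure described above can be sketched in a few lines. This is a minimal illustration, not the study's code: it assumes each of the 100 responses per question has already been scored on a numeric ideological scale, and the scores here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for one question's 100 scored responses
# (e.g., agreement coded on a left-right numeric scale; values invented).
default_scores = rng.normal(loc=-0.4, scale=1.0, size=100)   # "default" ChatGPT
democrat_scores = rng.normal(loc=-0.5, scale=1.0, size=100)  # impersonating a Democrat

def bootstrap_mean_ci(scores, n_resamples=1000, alpha=0.05):
    """Re-sample the 100 responses with replacement 1,000 times to
    estimate a confidence interval for the mean score, smoothing over
    the model's inherent randomness."""
    means = np.array([
        rng.choice(scores, size=len(scores), replace=True).mean()
        for _ in range(n_resamples)
    ])
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return means.mean(), (lo, hi)

mean_default, ci_default = bootstrap_mean_ci(default_scores)
mean_dem, ci_dem = bootstrap_mean_ci(democrat_scores)

# Comparing intervals, rather than single answers, shows whether the
# default responses sit closer to one impersonated stance than another.
print(f"default:  {mean_default:+.2f}, 95% CI ({ci_default[0]:+.2f}, {ci_default[1]:+.2f})")
print(f"Democrat: {mean_dem:+.2f}, 95% CI ({ci_dem[0]:+.2f}, {ci_dem[1]:+.2f})")
```

The point of the resampling step is exactly what the co-author notes next: a single round of questioning is unreliable because individual answers wander, so inference is drawn from the distribution of many responses.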

"We created this procedure because conducting a single round of testing is not enough," said co-author Victor Rodrigues. "Due to the model's randomness, even when impersonating a Democrat, sometimes ChatGPT answers would lean towards the right of the political spectrum."

A number of further tests were undertaken to ensure the method was as rigorous as possible. In a "dose-response test," ChatGPT was asked to impersonate radical political positions. In a "placebo test," it was asked politically neutral questions. And in a "profession-politics alignment test," it was asked to impersonate different types of professionals.

"We hope that our method will aid scrutiny and regulation of these rapidly developing technologies," said co-author Dr. Pinho Neto. "By enabling the detection and correction of LLM biases, we aim to promote transparency, accountability, and public trust in this technology," he added.

The unique new analysis tool created by the project would be freely available and relatively simple for members of the public to use, thereby "democratizing oversight," said Dr. Motoki. As well as checking for political bias, the tool can be used to measure other types of biases in ChatGPT's responses.

While the research project did not set out to determine the reasons for the bias, the findings did point towards two potential sources. The first was the training dataset, which may contain biases of its own, or biases added by the human developers, that the developers' "cleaning" procedure failed to remove. The second potential source was the algorithm itself, which may be amplifying existing biases in the training data.

The research was undertaken by Dr. Fabio Motoki (Norwich Business School, University of East Anglia), Dr. Valdemar Pinho Neto (EPGE Brazilian School of Economics and Finance—FGV EPGE, and Center for Empirical Studies in Economics—FGV CESE), and Victor Rodrigues (Nova Educação).

More information: More Human than Human: Measuring ChatGPT Political Bias, Public Choice (2023). papers.ssrn.com/sol3/papers.cf … ?abstract_id=4372349


Provided by University of East Anglia


New 3D images give never-seen-before views inside New Zealand's largest fault

Credit: Science Advances (2023). DOI: 10.1126/sciadv.adh0150

Aotearoa New Zealand's largest fault, the Hikurangi Subduction Zone (HSZ), is where the Pacific tectonic plate dives west beneath the Australian plate and underneath the east coast of the North Island.

In some parts of the subduction zone, GPS instruments are showing the plates slowly move by a few millimeters a year. This behavior is called a "slow slip" and occurs over periods of weeks or months. However, in other parts the plates are stuck, locked together, and building up pressure.

By understanding the structural factors that create the smoother-slipping and stuck zones, scientists are seeking to better diagnose which areas could generate future earthquakes and tsunami. Because the HSZ is Aotearoa's largest source of potential earthquakes and tsunami, it's critical to be able to understand it in high-resolution detail.

New 3D images reveal hidden structures in the HSZ

In 2018, a collaboration of researchers from the U.S., Japan and the UK, together with GNS Science, used the RV Marcus Langseth to record numerous overlapping racetrack lines of "seismic reflection data." The data were gathered alongside deployments of ocean-bottom seismographs and onshore seismometers in an effort called the "NZ3D" survey.

In an international collaborative effort spanning three recent high-profile publications, the first-ever 3D seismic images of the northern Hikurangi margin have now yielded new insights into the structural, stratigraphic and hydrogeologic characteristics of the HSZ.

Understanding these qualities, specifically how they transport fluids, is key to knowing the conditions that lead to the generation of subduction earthquakes.

How the 3D images were created

Seismic reflection data are typically how geophysicists visualize the crust. To capture this data a specialist vessel, in this case the R/V Marcus Langseth, tows an array of individual sound sources that are tuned and combined to radiate a sound wave downward to the seafloor. The echoes that bounce back from layers in the earth are recorded on a streamer towed behind the vessel and on sensitive seismographs located onshore and on the seabed.
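The echo-timing principle behind this technique can be shown with a back-of-the-envelope calculation: because the recorded pulse travels down to a reflector and back, the reflector's depth follows from half the two-way travel time multiplied by the wave speed. The velocities and times below are illustrative textbook values, not figures from the NZ3D survey.

```python
# Depth of a reflecting layer from two-way travel time (TWT):
# the pulse travels down and back, so depth = velocity * TWT / 2.
def reflector_depth_m(two_way_time_s: float, velocity_m_s: float) -> float:
    return velocity_m_s * two_way_time_s / 2.0

# Illustrative values: sound travels at roughly 1500 m/s in seawater.
# A seafloor echo arriving 2 s after the source pulse implies ~1500 m of water.
seafloor_depth = reflector_depth_m(2.0, 1500.0)
print(seafloor_depth)  # 1500.0
```

Real processing is far more involved (velocities vary with depth and must themselves be estimated), but this relationship is the core of turning recorded echo times into the layered images described here.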

While a grid of 2D profiles is good enough to identify major plate boundary structures, high-resolution 3D data are needed to visualize details within subduction zones and improve understanding of fault geometry and slip behavior. The 3D data are combined into a CAT-scan-like image of the subduction zone, showing how the architecture and properties of the boundary between tectonic plates can contribute to variability in the location of strong, seismogenic segments versus weak, slipping segments.

The 3D data provide new constraints on the physical conditions and rock properties to inform computer simulations and forecasts of earthquake ground shaking and tsunami inundation, which greatly help improve hazard preparedness and response.

How fluids and underwater volcanoes influence how New Zealand's largest fault moves

In June 2023, a Nature Geoscience paper reported how the NZ3D data capture a seamount (underwater volcano) caught in the act of subducting beneath the shallow part of the Hikurangi margin, forming sediment lenses in its wake that appear to enhance slow slip.

Further, in a Geology paper, the NZ3D data reveal a detailed map of the deeper parts of the plate interface, showing that it has kilometer-high hills and valleys.

The new NZ3D data show that the plate interface may strongly govern the nature of how the margin deforms, including the localization of both slow slip and hazardous fast-slip earthquakes.

Most recently, a Science Advances paper revealed a previously hidden water reservoir within the layers of the Pacific plate being swallowed up in the subduction process.

The new finding suggests that the subducting plate's volcanic rocks act as an amplified source of water that influences the slip behavior of the margin. The trapped water is under pressure, leaving the plate boundary weak and prone to unlocking and sliding in slow slip. The study highlights significant, previously unknown delivery of water from the incoming Pacific plate to the slow-slip source.

"Importantly, we are able to pinpoint the location of water-rich layers that allow smooth slipping, versus other water-poor segments that are stuck and will likely rupture in fast earthquakes," says Dr. Stuart Henrys, project lead and principal scientist, GNS Science.

Revealing the mysteries of the subduction process in ways never possible before

The hope is that these new-generation 3D images will make it possible to identify areas of the plate boundary where water-rich layers enable smooth slip, and other areas that are locked and stuck.

By understanding how slip behavior varies along the subduction zone, scientists can better diagnose and pinpoint areas that are more prone to generating large earthquakes.

The 3D data also provide new constraints on the physical conditions and rock properties to inform simulations of earthquake ground shaking and tsunami inundation, which greatly help improve hazard preparedness and response.

Henrys says, "Our unique 3D seismic data, acquired offshore Gisborne along the northern Hikurangi subduction zone, is providing breakthroughs in understanding of the physical processes that control earthquakes. Globally, subduction zones are where one plate dives beneath another and can rupture in devastating earthquakes and tsunami like those in Sumatra (2004) and Japan (2011)."

"These zones are also subject to benign slow slip behavior that lasts weeks or months. Diagnosing whether slip is fast or slow along the Hikurangi subduction zone, our largest fault, will provide more reliable forecasts and assessments of the risks to vulnerable people and buildings.

"The 3D data we acquired are combined into a medical CAT-scan-like image, providing a super cool visualization of a small part of the subduction zone. For the first time, we are able to map in detail the architecture of the boundary between tectonic plates and determine its properties. Importantly, we are able to pinpoint the location of water-rich layers that allow smooth slipping, versus other segments that are water-poor, stuck and will likely rupture in fast earthquakes.

"The results represent another piece in the puzzle that we can start using in large-scale earthquake cycle simulations, which greatly help to improve hazard preparedness and response."

More information: Andrew C. Gase et al, Subducting volcaniclastic-rich upper crust supplies fluids for shallow megathrust and slow slip, Science Advances (2023). DOI: 10.1126/sciadv.adh0150

Volcanic eruption in southwest Iceland ends: met office

The famous Eyjafjallajokull eruption paralysed air traffic in Europe in 2010.

Iceland's meteorological office on Wednesday declared that the volcanic eruption near the country's capital Reykjavik was officially over as no activity had been observed for 10 days.

"Ten days have passed since activity was last measured in the Litli-Hrutur crater. There is no longer any deformation observed in the area, and seismic activity has decreased considerably," the Icelandic Meteorological Office (IMO) said in a statement.

"As a result, we can say that another chapter in the resurgence of volcanism on the Reykjanes peninsula has come to an end," it added.

Thousands of visitors have been flocking to the site to take in the hypnotic spectacle of red-hot lava spurting out of the ground.

The Reykjanes peninsula had been dormant for eight centuries but has experienced a resurgence of volcanic activity in recent years.

There have been two other recent eruptions—one in the Geldingadalir valley in March 2021, which lasted six months, and one in the Meradalir valley in August 2022, which lasted three weeks.

All of them belong to the Fagradalsfjall volcanic system.

Last week the IMO noted that this third consecutive eruption in as many years marked "a turning point in the volcanism of the Reykjanes Peninsula."

Unlike explosive eruptions that spew out thousands of tons of ash—such as the famous Eyjafjallajokull eruption that paralyzed air traffic in Europe in 2010—the three recent ones have been so-called "effusive" eruptions and have had little impact, apart from locally toxic gas spikes.

Iceland has 33 volcanic systems currently considered active, the highest number in Europe. It has an eruption every five years on average.

The North Atlantic island straddles the Mid-Atlantic Ridge, a crack in the ocean floor separating the Eurasian and North American tectonic plates.

© 2023 AFP



How extraterrestrial tales of aliens gain traction

Credit: vchal/Shutterstock

One night, upon returning to the cave that his tribe calls home, the monkey-humanoid Moon-Watcher finds a strange crystal object, a kind of monolith that fascinates him at first but quickly loses his interest when he discovers that it is not edible. Soon after, the monolith's true purpose is revealed to be none other than penetrating the minds of our ancestors to induce new abilities that, over time, will lead to the development of an intelligence capable of creating new technology.

Many readers will recognize this scene from the novel 2001: A Space Odyssey, by Arthur C. Clarke, and the film of the same name, directed by Stanley Kubrick. It almost goes without saying that the crystal monolith in question is the work of an extraterrestrial civilization that observes life on other planets and "experiments" on them to encourage the development of intelligence in as many parts of the cosmos as possible.

Seeking simple answers to complex questions

Understanding how we, as a species, came to be intelligent is one of the great enigmas of evolutionary study. Small mutations, followed by natural selection of the most advantageous ones, seem too slow a process for structures as complex as the human nervous system or brain to emerge. It is this very complexity that allows millions of neurons to communicate with each other, resulting in the emergence of qualities such as the ability to respond voluntarily to stimuli, or to ask questions about the very nature of humankind and the universe.

Nowadays, we know that there are evolutionary mechanisms that have led to great leaps in complexity, but that does not stop people from turning to non-human forces—gods, extraterrestrials, spiritual energies—to explain things that are difficult to comprehend.

This has always been the case, in all cultures. A classic example would be attributing atmospheric events—thunder, lightning, floods—to the wrath of God. These ideas came about before humans had ever left the ground, so it is no surprise that we turned our eyes even higher—to extraterrestrials—to explain other phenomena that we could only observe once traveling at high altitudes became part of our daily lives.

The allure of the unknown

The possibility that we might have been visited by beings from other worlds has always fascinated us. The element of mystery, of the unknown, only makes it more interesting.

Any phenomenon is made all the more enticing when it seems it is being covered up or hidden for secretive reasons. The attractiveness of conspiracies often leads people towards ideas which have no scientific basis, such as the belief that the Earth is flat, that humans never set foot on the moon, or that vaccines can control our behavior.

Even though these ideas have repeatedly been shown to be untrue, their rapid dissemination through social media, using simple, blunt language that appeals to emotion over logic, makes them very powerful weapons.

The supposed "proof" of alien visits to our planet ranges from specific Bible passages to ancient stone carvings portraying creatures or objects that may appear to be aliens or spacecraft. The latter often take the form of flying saucers.

However, we cannot forget that humans have always created imaginary creatures that resemble themselves and attributed magical powers to them. When imagining gods, humans have given them a human appearance and almost always placed them in the sky.

When we look at these representations through modern eyes, we associate them with extraterrestrial beings or structures, when in fact they could be referring to a range of different things.

Image of petroglyphs in Cub Creek (Utah, United States of America). 
Credit: MikeGoad / Pixabay

When unproven stories become larger than life

Recently, in the United States Congress, UFOs (currently known as UAPs: "Unidentified Anomalous Phenomena") are back in the limelight. This is because a former air force intelligence official has made claims that the Pentagon is in possession of remains of extraterrestrial craft and "non-human biological matter". The claims have been backed up by the testimony of a retired navy commander and a former navy pilot.

What we can be certain of is that the more we explore our skies, the more likely it is that we will encounter phenomena that we cannot explain. However, this does not mean that they are extraterrestrial. Past experience has shown us that most of these events can be attributed to optical illusions, spy or weather balloons, space junk, or even satellites that we ourselves have made.

In Spain, UFOs were a hot topic between the 1960s and the 1980s. In this era, everyone knew someone who was convinced that they had seen a UFO. It even reached the point where an exoplanet, called Ummo, was invented, populated by a civilization more advanced than ours that made contact with people on Earth. In the letters these aliens supposedly sent, the 'Ummites' explained concepts such as genetics and cell structure.

The truth is that nowadays, reading some of these letters can be quite amusing. The story of the planet of Ummo was ultimately proved to be a monumental hoax, a fact later admitted by its own creator.

The Ummo hoax was even linked to the creation of a pedophile ring, which should make us reflect on the harmful consequences that the spread of fabricated news stories can have.

Can we deny the possibility that intelligent alien civilizations exist?

The answer, of course, is no. The universe is immense, and it is more than likely that circumstances similar to those which led to the appearance of life on Earth have been repeated on other planets. But there is a huge distance (literally and figuratively) between acknowledging the existence of these creatures and considering the possibility that they might have visited us.

Exoplanets, also known as extrasolar planets, are extremely far away, and we are limited by the speed of light which, as proven by Einstein, is the maximum possible speed at which anything can travel. Therefore, the journey to even a "nearby" exoplanet would take thousands of years. Maybe a civilization more advanced than ours could find a way to do it faster, but not to the point of it being something easy or commonplace.
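The scale of that claim is easy to check with back-of-envelope arithmetic. A minimal sketch, using Proxima Centauri b (the nearest known exoplanet, about 4.24 light-years away) and two illustrative speeds that are assumptions of this example, not figures from the article:

```python
# Back-of-envelope travel times to Proxima Centauri b, the nearest known
# exoplanet (~4.24 light-years away). Speeds below are illustrative.
LIGHT_YEAR_KM = 9.461e12       # kilometres in one light-year
DISTANCE_KM = 4.24 * LIGHT_YEAR_KM
SECONDS_PER_YEAR = 3.156e7

def travel_years(speed_km_s: float) -> float:
    """Years to cover the distance at a constant speed (ignoring relativity)."""
    return DISTANCE_KM / speed_km_s / SECONDS_PER_YEAR

# Voyager 1, the fastest object humanity has sent toward interstellar space,
# travels at roughly 17 km/s: the trip would take roughly 75,000 years.
print(f"Voyager-like probe: {travel_years(17):,.0f} years")

# Even a hypothetical craft at 10% of light speed (~29,979 km/s)
# would still need more than four decades.
print(f"10% light speed:    {travel_years(29979.2):,.0f} years")
```

At realistic speeds the one-way trip takes tens of millennia, which is why the article's distinction between aliens existing and aliens visiting matters.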

In any case, if the remains of alien life or spacecraft are stored away somewhere, why are they not being shown to us? Scientists would jump at the chance to analyze this organic matter to find out how it is structured, how it metabolizes energy, or what molecules it uses to store genetic information.

Until there is proof, this is not a question of science, but rather, of stories. Stories can be very entertaining, but these kinds of stories do not help us to build a more accurate or helpful view of the world.

Provided by The Conversation 

This article is republished from The Conversation under a Creative Commons license. Read the original article.

