Friday, September 06, 2024

Better than expected: World's first semi-submersible floating wind farm

By Joe Salas
September 05, 2024

One of three WindFloat Atlantic offshore floating platforms
Windplus


The WindFloat Atlantic project – the world's first semi-submersible floating offshore wind farm, located off the coast of Portugal with only three turbines – has exceeded expectations over the last four years of operation, generating a total of 320 GWh of electricity. That's enough to power about 25,000 homes each year.
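
As a quick sanity check on that figure, the arithmetic below converts the reported output into homes supplied, assuming an average annual household consumption of about 3.2 MWh; that consumption value is an illustrative assumption, not a number published by the project.

```python
# Back-of-the-envelope check of the "25,000 homes" claim.
# The per-household consumption is an assumed illustrative value,
# not a figure published by WindFloat Atlantic.
total_output_gwh = 320          # reported generation over ~4 years of operation
years = 4
household_mwh_per_year = 3.2    # assumed average annual consumption per home

annual_output_mwh = total_output_gwh / years * 1_000   # GWh -> MWh
homes_powered = annual_output_mwh / household_mwh_per_year
print(f"~{homes_powered:,.0f} homes supplied per year")  # prints ~25,000
```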

Fully connected to the grid and commissioned in 2020, the wind farm is made up of three floating platforms with an 8.4-MW Vestas turbine on each. The semi-submerged platforms are anchored to the sea floor – 328 ft (100 m) below the surface – with chains to keep them from floating away and are connected to an electrical substation in Viana do Castelo, Portugal, via a 12.4-mile (20-km) cable.

In 2022, the project produced 78 GWh, while 2023 saw even better figures, with 80 GWh of electricity generated.

Vestas is known for making huge turbines with high power-generation capacities. The turbines used in the WindFloat Atlantic project have a rotor diameter of 538 ft (164 m), with the blade tips whipping around at up to 232 mph (373 km/h), and deliver their power at 66,000 volts. The nacelle alone weighs 375 tons (340 tonnes), and at the time of installation these were the largest turbines ever mounted on a floating offshore platform.
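
For context, the quoted tip speed and rotor diameter together imply a rotor speed of roughly 12 rpm; the short sketch below derives that number from the figures above (the rpm is a computed implication, not a published Vestas specification).

```python
# Derive the rotor speed implied by the diameter and tip speed quoted above.
# Only the diameter and tip speed come from the article; the rpm is computed here.
import math

rotor_diameter_m = 164.0   # quoted rotor diameter
tip_speed_kmh = 373.0      # quoted maximum blade-tip speed

tip_speed_ms = tip_speed_kmh / 3.6
circumference_m = math.pi * rotor_diameter_m
rpm = tip_speed_ms / circumference_m * 60
print(f"Implied rotor speed: {rpm:.1f} rpm")   # roughly 12 rpm
```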


A closer look at the WindFloat Atlantic semi-submersible platform, with people for scale
Windplus

Semi-submersible floating wind farms offer a unique advantage: they can be placed in waters too deep for traditional bottom-fixed turbines, which are generally limited to depths of about 150-200 ft (50-60 m). And because winds tend to be stronger and more consistent further from shore, the turbines can harvest more energy than their land-based counterparts. The partially submerged, three-pronged platforms, with their active ballast systems, also remain more stable in rough seas, further boosting output.

WindFloat Atlantic project semi-submersible platform compared to the Torre dos Clérigos tower, which stands 246 ft (75 m) tall. Minuscule compared to the 679-ft (207-m) turbine

Windplus

Between 2011 and 2016, the WindFloat Atlantic project put a 2-MW prototype out to sea, where it generated electricity for five uninterrupted years and survived extreme weather, including 69-mph (111-km/h) winds and 55-ft (17-m) waves, unscathed, paving the way for the full-scale 25-MW installation.

In 2023, WindFloat Atlantic survived a particularly bad storm with 86-mph (139-km/h) winds and 65-ft (20-m) waves, proving how robust the offshore electricity-generating system is.

Source: WindFloat Atlantic

 Generative AI in Academia: Balancing Innovation with Ethics

Could universities be compromising their ethical standards and academic integrity by adopting AI tools without fully addressing the potential risks and moral dilemmas?

Research Article: Generative Artificial Intelligence in Higher Education: Why the 'Banning Approach' to Student use is Sometimes Morally Justified. Image Credit: MMD Creative / Shutterstock

An article recently published in the journal Philosophy & Technology comprehensively explored the implications of generative artificial intelligence (AI) tools in higher education, highlighting debates on their responsible use in academic settings. The author, Karl de Fine Licht of Chalmers University of Technology, Sweden, examined the benefits and drawbacks of integrating generative AI tools such as ChatGPT, Gemini, and GitHub Copilot into university curricula. He focused on broader ethical implications, such as student privacy and environmental impact, and emphasized the importance of a balanced, philosophically grounded approach to AI adoption.

Background

Generative AI tools have transformed interactions with technology by enabling machines to learn from large amounts of data and generate human-like text, code, images, and other content. These tools have rapidly gained popularity due to their potential to assist with research, writing, programming, and problem-solving.

In higher education, they can potentially improve student learning outcomes and enhance academic productivity. However, their use has raised concerns about academic integrity, bias, cost, digital divides, and overreliance on technology. These concerns underscore the need for research on the impact of generative AI on academic integrity, student learning, and the evolving role of educators.

About the Research

The paper presents a detailed analysis of the ethical considerations and practical challenges of using generative AI tools in higher education. The author used a bottom-up approach to philosophical inquiry, employing reflective equilibrium to balance judgments about specific cases against broader ethical principles. He argued that universities could justifiably ban generative AI tools under certain conditions: (a) collective support from faculty, students, and administration, reached through a fair process, and (b) limited resources. This argument is grounded in the moral responsibility of universities to avoid participating in processes that may be ethically questionable, such as those that harm the environment or compromise student privacy.

The study highlighted the risks and benefits of these tools and advocated for a "banning approach" in cases where universities lack resources and ethical concerns arise. It emphasized that banning these tools is not just about control but about maintaining academic integrity and upholding the values of higher education.

Key Findings

This work identified several key concerns about the unrestricted use of generative AI tools in higher education. One major ethical concern is the potential for these tools to foster dependency: students may come to rely excessively on AI, weakening their critical thinking skills and encouraging superficial engagement with learning materials.

The study also noted the risk of educational inequality, where students with access to advanced AI tools might outperform peers who lack such resources. Additionally, the author highlighted the significant environmental impact of generative AI, particularly the high energy consumption needed to train large language models (LLMs), and argued that universities have a moral obligation to consider these impacts.

Furthermore, the research acknowledged the potential benefits of generative AI, such as improved learning outcomes and increased productivity. However, it argued that these benefits are often overstated and may not outweigh the risks. The paper emphasized the importance of understanding the broader ethical implications, including the risk of contributing to morally adverse processes, such as data exploitation by AI companies.

While AI tools can be helpful for specific tasks, they can also diminish the quality of student work when overused, as students may rely on AI-generated content without fully understanding the underlying concepts. This reliance can hinder the development of critical thinking skills and the ability to analyze information and synthesize complex ideas.

Applications

The research has important implications for policy and guideline development regarding the use of generative AI tools in higher education. The author supports a balanced approach that weighs these technologies' potential benefits against their risks, and argues that universities should engage in ongoing ethical reflection, taking into account the dynamic nature of real-world problems and the evolving role of AI in society. He also suggests that universities focus on creating educational resources and training programs for faculty to ensure the responsible and effective integration of AI tools into the curriculum.

Conclusion

In summary, the study critically examined the implications of generative AI tools in higher education, outlining the potential risks and benefits. While recognizing the potential advantages, the author argued that, under certain conditions, universities are justified in banning students' use of generative AI tools because of significant ethical concerns, including environmental impact and data privacy. The paper emphasized the need for educators to recognize these tools' biases and limitations and to develop strategies that align with the ethical values of higher education.

The findings have significant implications for setting boundaries on AI use in higher education and highlight the need for ongoing research into the impact of these technologies on academic integrity, student learning, and the role of educators. Ultimately, the author calls for a more cautious and ethically informed approach to AI integration, one that prioritizes students' well-being and the moral responsibilities of educational institutions.


Written by

Muhammad Osama

Muhammad Osama is a full-time data analytics consultant and freelance technical writer based in Delhi, India. He specializes in transforming complex technical concepts into accessible content. He has a Bachelor of Technology in Mechanical Engineering with specialization in AI & Robotics from Galgotias University, India, and he has extensive experience in technical content writing, data science and analytics, and artificial intelligence.



The government says more people need to use AI. Here’s why that’s wrong

THE CONVERSATION
Published: September 5, 2024 

The Australian government this week released voluntary artificial intelligence (AI) safety standards, alongside a proposals paper calling for greater regulation of the use of the fast-growing technology in high-risk situations.

The take-home message from federal Minister for Industry and Science, Ed Husic, was:


We need more people to use AI and to do that we need to build trust.

But why exactly do people need to trust this technology? And why exactly do more people need to use it?

AI systems are trained on incomprehensibly large data sets using advanced mathematics most people don’t understand. They produce results we have no way of verifying. Even flagship, state-of-the-art systems produce output riddled with errors.

ChatGPT appears to be growing less accurate over time. Even at its best, it can’t reliably tell you how many times the letter “r” appears in the word “strawberry”. Meanwhile, Google’s Gemini chatbot has recommended putting glue on pizza, among other comical failures.

Given all this, public distrust of AI seems entirely reasonable. The case for using more of it seems quite weak – and also potentially dangerous.


Federal Minister for Industry and Science Ed Husic wants more people to use AI. Mick Tsikas/AAP


AI risks


Much has been made of the “existential threat” of AI, and how it will lead to job losses. The harms AI presents range from the overt – such as autonomous vehicles that hit pedestrians – to the more subtle, such as AI recruitment systems that demonstrate bias against women or AI legal system tools with a bias against people of colour.

Other harms include fraud from deepfakes of coworkers and of loved ones.

Never mind that the federal government’s own recent reporting showed humans are more effective, efficient and productive than AI.

But if all you have is a hammer, everything looks like a nail.

Technology adoption still falls into this familiar trope. AI is not always the best tool for the job. But when faced with an exciting new technology, we often use it without considering whether we should.

Instead of encouraging more people to use AI, we should all learn what is, and is not, a good use of AI.

Is it the technology we need to trust – or the government?

Just what does the Australian government get from more people using AI?

One of the largest risks is the leaking of private data. These tools are collecting our private information, our intellectual property and our thoughts on a scale we have never before seen.

Much of this data, in the case of ChatGPT, Google Gemini, Otter.ai and other AI models, is not processed onshore in Australia.

These companies preach transparency, privacy and security. But it is often hard to uncover whether your data is used to train their newer models, how they secure it, or what other organisations or governments have access to that data.


Recently, federal Minister for Government Services, Bill Shorten, presented the government’s proposed Trust Exchange program, which raised concerns about the collection of even more data about Australian citizens. In his speech to the National Press Club, Shorten openly noted the support from large technology companies, including Google.

If data about Australians were to be collated across different technology platforms, including AI, we could see widespread mass surveillance.

But even more worryingly, we have observed the power of technology to influence politics and behaviour.

Automation bias is the term we use for the tendency of users to believe the technology is “smarter” than they are. Too much trust in AI poses even more risk to Australians – by encouraging more use of technology without adequate education, we could be subjecting our population to a comprehensive system of automated surveillance and control.

And even if you could escape this system yourself, it would still undermine social trust and cohesion and influence people without their knowledge.

These factors are even more reason to regulate the use of AI, as the Australian government is now looking to do. But doing so does not have to be accompanied by a forceful encouragement to also use it.

Let’s dial down the blind hype

The topic of AI regulation is important.

The International Organisation for Standardisation has established a standard on the use and management of AI systems. Its implementation in Australia would lead to better, more well-reasoned and regulated use of AI.

This standard, and others like it, form the foundation of the government’s proposed Voluntary AI Safety Standard.

What was problematic in this week’s announcement from the federal government was not the call for greater regulation, but the blind hyping of AI use.

Let’s focus on protecting Australians – not on mandating their need to use, and trust, AI.

Author
Erica Mealy
Lecturer in Computer Science, University of the Sunshine Coast

Disclosure statement
Erica Mealy is a member of the Board of Directors of the not-for-profit digital rights organisation Electronic Frontiers Australia. She is also a member of the Australian Computer Society, the Australian Information Security Association, and the International Association for Public Participation (IAP2).

New dementia research highlights deprivation as major reason Māori and Pacific people more at risk

By Melissa Nightingale
Senior Reporter, NZ Herald - Wellington
5 Sep, 2024 


People in the most deprived areas are 60% more at risk of developing dementia
Māori and Pacific people have a higher prevalence of dementia, which researchers say is not due to ethnicity, but disadvantage
Only half of Māori and Pacific dementia sufferers receive diagnoses, often at late disease stages

People living in New Zealand’s most deprived areas are 60% more at risk of developing dementia than those in the least deprived areas - and Māori and Pacific people are bearing the brunt of that risk.

New research from the Department of Psychological Medicine at the University of Auckland shows social disadvantage is a factor fuelling the country’s soaring rates of dementia.

The findings indicate Māori and Pacific people are not at more risk of dementia due to their ethnicity as such, but because they are overrepresented in areas of deprivation.

Risk factors for dementia 'cluster around social deprivation'. Photo / 123RF

These figures are exacerbated by poor rates of diagnosis for these groups, and a lack of tailored support for patients and their whānau.

“I think, at every step of the health system, Māori and Pacific people are drawing the short straw,” said Dr Etuini Ma’u, who co-authored the study with Sarah Cullum and Gary Cheung.

He said this new research shifted the blame away from ethnicity alone, but the way Kiwis and the Government look at dementia needed to change.

“One of the issues is that you look at some of the risk factors for dementia and you say ‘this is personal responsibility, these are behaviours and lifestyles that people have chosen to engage in’,” he said.

“I think we need to shift the entire focus and say ‘these are risk factors that cluster around social deprivation’.”


The Lancet Commission Report 2024 found there are 14 risk factors which, if eliminated completely, would prevent or delay 45% of dementia cases worldwide. The risk factors are:

Early life
Less education

Mid life
Hearing loss
High LDL cholesterol
Depression
Traumatic brain injury
Physical inactivity
Diabetes
Smoking
Hypertension
Obesity
Excessive alcohol

Later life
Social isolation
Air pollution
Visual loss

Ma’u said most of these factors could be connected to poverty, noting a recent study showed, for example, that there were about seven times more vape shops in the most socioeconomically deprived areas of New Zealand. He said that in Auckland, socially deprived areas were closer to main roads and motorways, which contributed to poor air quality.

“We are being let down by a system that continues to foster inequities,” he said.

Despite having a higher prevalence of dementia, only half of Māori and Pacific people with the condition receive a diagnosis, and more than half of these people only get diagnosed once they are at a moderate to severe stage of their disease.

This meant there were families and communities caring for people with dementia without any access to funded supports.

“Even those who get a diagnosis, we know that many of them don’t even use the supports that are on offer.”

This was because the supports were not tailored to the communities that needed them.

Focusing on a “whole of Government” approach to dementia, rather than putting so much of the onus on an individual’s behaviours, meant policies could be put in place to tackle the food, tobacco and alcohol industries that more strongly target people in deprived areas.

Forty per cent of Māori and Pacific people live in the lowest two deciles in New Zealand, and these groups are also between 45% and 80% more at risk of developing dementia than Pākehā.

“We know the number of people living with dementia in NZ is expected to double in the next 20 years and triple in the same period for Māori and Pacific peoples,” Ma’u said.

“We know the 14 risk factors for the disease as outlined in the recently-released Lancet Commission Report and we know good policy can change that trajectory.”

Recent estimates indicate reducing 12 of the 14 risk factors in NZ by just 10% could result in 3000 fewer people with dementia, Ma’u said.

“Most risk factors build up across a lifetime. It is their incremental and cumulative damage to the brain that eventually leads to dementia. This shows the importance of promoting brain health in early life and midlife, even when the immediate dementia risk is deemed to be low.”

Other possible changes to areas of social deprivation included more green spaces and cycleways, creating facilities that people are encouraged to use and that help build healthy habits.

“I think we can do a lot to prevent dementia. I think we really need to relook at the way we’re targeting it and I think we really need to be looking at these much more broad population level legislations and policies that increase our ability to live a healthy lifestyle.”

Melissa Nightingale is a Wellington-based reporter who covers crime, justice and news in the capital. She joined the Herald in 2016 and has worked as a journalist for 10 years.

    Hitchhiking cane toads travelling to Southern Australia pose threat to biosecurity

    by Elsie Adamo and Selina Green
    ABC Rural




    Hitchhiking cane toads have been spotted in regional areas as well as major cities. (ABC News: Mitchell Denman Woolnough)

    In short:

    A cane toad recently hitchhiked into Adelaide and was spotted in a city park.

    While not traditionally suited to a southern climate, climate change could contribute to populations establishing in new areas.

    What's next?

    Work is being done to help prepare for any future outbreaks.

    Southerners are being asked to keep an eye out for hitchhiking cane toads before they get a chance to establish new populations.

    The call comes after a cane toad was captured last week near a university campus in the centre of Adelaide.

    The invasive species is unfortunately a common sight further north in Queensland but not so common in southern Australia.

    So how do cane toads end up in Adelaide? They hitch a ride.
    Spotting a toad down south

    South Australian Department of Primary Industries and Regions senior biosecurity officer Kate Fielder said about 40 cane toad sightings were reported in the state each year.



    Cane toads have been transported across the country in trucks, cars and even luggage. (Supplied: Cathy Zwick)

    Of those, only two or three turn out to be actual cane toads.

    "Most of the time they are cane toads that have hitched a ride from interstate or overseas," Ms Fielder said.

    "Thankfully we do have a lot of vigilant members of the public here and they are always keeping an eye out.

    "We know cane toads are one of the worst pests in Australia and we certainly don't want them to get here."
    Preparing for an outbreak

    Even though there are no established cane toad populations in South Australia, work is being done to prepare for any potential incursions or outbreaks which may occur in the future.

    "We've got traps which draw in cane toads using vocalisation," Ms Fielder said.


    "And we are looking to get into the use of AI [artificial intelligence] to detect cane toads vocalisation in the wild.

    "So a little monitor will sit out in the wild and alert us if it hears a cane toad."


    Cane toads were introduced into Australia in the 1930s. (ABC Kimberley: Ted O'Connor)

    Invasive Species Council conservation and biosecurity analyst, Lyall Grieve, said he was not surprised cane toads were found in South Australia.

    "They're extremely good at hitchhiking," he said.

    "It only takes a few of them to start a population."

    While the species did not naturally do well in the South Australian climate, he said they were still a threat.

    "With climate change we're getting some warmer winters, we're getting warmer temperatures, we're getting extreme weather events like flooding," Mr Grieve said.

    "All of those things will lead to a situation where cane toads could actually establish further south."

    He said while the risk of established populations in South Australia was not immediate, one female could lay two clutches of up to 35,000 eggs a year, so no chances could be taken.

    "We don't want people to be to be worried or to be afraid that they're suddenly going to be overwhelmed by cane toads," Mr Grieve said.

    "They're very good survivors, they're very good at rapidly filling an environment and over competing all the native frogs and reptiles.


    "It really wouldn't take long, maybe a couple of years to establish a reproducing population."

    Identifying cane toads

    Anyone who believes they have spotted a cane toad should contact the Australian Pest Alert Hotline and not attempt to kill the animal themselves.



    Cane toads were introduced to control pest beetles in Queensland's sugar cane crops. (ABC News: Alex Hyman)

    "In other parts of Australia there's been a lot of sad cases of misidentification," Mr Grieve said.

    "Well-meaning people think they're found a terrible pest … unfortunately there's a lot of native frogs that do look a bit similar."

     Harnessing bacteriophages as targeted treatments for drug-resistant bacteria

    As antibiotic resistance becomes an increasingly serious threat to our health, the scientific and medical communities are searching for new medicines to fight infections. Researchers at Gladstone Institutes have just moved closer to that goal with a novel technique for harnessing the power of bacteriophages.

    Bacteriophages, or phages for short, are viruses that naturally take over and kill bacteria. Thousands of phages exist, but using them as treatments to fight specific bacteria has so far proven to be challenging. To optimize phage therapy and make it scalable to human disease, scientists need ways to engineer phages into efficient bacteria-killing machines. This would also offer an alternative way to treat bacterial infections that are resistant to standard antibiotics.

    Now, Gladstone scientists have developed a technology that lets them edit the genomes of phages in a streamlined and highly effective way, giving them the ability to engineer new phages and study how the viruses can be used to target specific bacteria.

    "Ultimately, if we want to use phages to save the lives of people with infections that are resistant to multiple drugs, we need a way to make and test lots of phage variants to find the best ones," says Gladstone Associate Investigator Seth Shipman, PhD, the lead author of a study published in Nature Biotechnology. "This new technique lets us successfully and rapidly introduce different edits to the phage genome so we can create numerous variants."

    The new approach relies on molecules called retrons, which originate from bacterial immune systems and act like DNA-production factories inside bacterial cells. Shipman's team has found ways to program retrons so they make copies of a desired DNA sequence. Using the technique described in the team's new study, when phages infect a bacterial colony containing retrons, they integrate the retron-produced DNA sequences into their own genomes.

    The enemy of your enemy

    Unlike antibiotics, which broadly kill many types of bacteria at once, phages are highly specific for individual strains of bacteria. As rates of antibiotic-resistant bacterial infections rise – with an estimated 2.8 million such infections in the United States each year – researchers are increasingly looking at the potential of phage therapy as an alternative to combat these infections.

    "They say that the enemy of your enemy is your friend," says Shipman, who is also an associate professor in the Department of Bioengineering and Therapeutic Sciences at UCSF, as well as a Chan Zuckerberg Biohub Investigator. "Our enemies are these pathogenic bacteria, and their enemies are phages."

    Already, phages have been successfully used in the clinic to treat a small number of patients with life-threatening antibiotic-resistant infections, but developing the therapies has been complex, time-consuming, and difficult to replicate at scale. Doctors must screen collections of naturally occurring phages to test whether any could work against the specific bacteria isolated from an individual patient.

    Shipman's group wanted to find a way to modify phage genomes to create larger collections of phages that can be screened for therapeutic use, as well as to collect data on what makes some phages more effective or what makes them more or less specific to bacterial targets.

    "As the natural predators of bacteria, phages play an important role in shaping microbial communities," says Chloe Fishman, a former research associate at Gladstone and co-first author of the new study, now pursuing her graduate degree at Rockefeller University. "It's important to have tools to modify their genomes in order to better study them. It's also important if we want to engineer them so that we can shape microbial communities to our benefit-;to kill antibiotic-resistant bacteria, for example."

    Continuous phage editing

    To precisely engineer phage genomes, the scientists turned to retrons. In recent years, Shipman and his group pioneered the development and use of retrons to edit the DNA of human cells, yeast, and other organisms.

    Shipman and his colleagues began by creating retrons that produce DNA sequences specifically designed to edit invading phages – a system the team dubbed "recombitrons." Then, they put those retrons into colonies of bacteria. Finally, they let phages infect the bacterial colonies. As the phages infected bacterium after bacterium, they continuously acquired and integrated the new DNA from the recombitrons, editing their own genomes as they went along.

    The research team showed that the longer they let phages infect a recombitron-containing bacterial colony, the more phage genomes were edited. Moreover, the researchers could program different bacteria within the colony with different recombitrons, and the phages would acquire multiple edits as they infected the colony.

    "As a phage is bouncing from bacterium to bacterium, it picks up different edits. Making multiple edits in phages is something that was previously incredibly hard to do; so much so that, most of the time, scientists simply didn't do it. Now, you basically throw some phages into these cultures, wait a while, and get your multiple-edited phages."

    Seth Shipman, PhD, lead author

    A platform to screen phages

    If scientists already knew exactly what edits they wanted to make to a given phage to optimize its therapeutic potential, the new platform would let them easily and effectively carry out those edits. However, before researchers can predict the consequences of a genetic change, they first need to better understand what makes phages work and how variations to their genomes impact their effectiveness. The recombitron system helps make progress here, too.

    If multiple recombitrons are put into a bacterial colony, and phages are allowed to infect the colony for only a short time, different phages will acquire different combinations of edits. Such diverse collections of phages could then be compared.

    "Scientists now have a way to edit multiple genes at once if they want to study how these genes interact or introduce modifications that could make the phage a more potent bacterial killer," says Kate Crawford, a graduate student in the Shipman lab and co-first author of the new study.

    Shipman's team is working on increasing the number of different recombitrons that can be put into a single bacterial colony – and then passed along to phages. They expect that eventually, millions of combinations of edits could be introduced to phages to make huge screening libraries.
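
    The scale of such libraries follows from simple combinatorics: if each recombitron edit can be independently present or absent in a given phage, k independent edits allow 2^k distinct variants, so around 20 recombitrons already put a million-variant library within reach. The toy calculation below is purely illustrative and does not use numbers from the study itself.

```python
# Toy combinatorics: number of distinct phage variants if each of k recombitron
# edits can independently be present or absent. Illustrative values only.
for k in (5, 10, 20, 30):
    print(f"{k} independent edits -> {2 ** k:,} possible variants")
# 20 edits -> 1,048,576 variants: roughly the "millions" scale the team is aiming for.
```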

    "We want to scale this high enough, with enough phage variants, that we can start to predict which phage variants will work against what bacterial infections," says Shipman.

    Journal reference:

    Fishman, C. B., et al. (2024). Continuous multiplexed phage genome editing using recombitrons. Nature Biotechnology. doi.org/10.1038/s41587-024-02370-5.

    A better understanding of climate change: Researchers study cloud movement in the Arctic

    Precise measurement of the warming and cooling of transported air masses for the first time

    Date: September 4, 2024

    Source: Universität Leipzig


    Summary:

    Special features of the Arctic climate, such as the strong reflection of the sun's rays off the light snow or the low position of the sun, amplify global warming in the Arctic. However, researchers are often faced with the challenge of modelling the underlying climatic processes in order to be able to provide reliable weather forecasts. Scientists succeeded in precisely measuring the movement of air masses from and to the Arctic. This will contribute to a better understanding of the processes accelerating climate change in the region.


    FULL STORY


    Special features of the Arctic climate, such as the strong reflection of the sun's rays off the light snow or the low position of the sun, amplify global warming in the Arctic. However, researchers are often faced with the challenge of modelling the underlying climatic processes in order to be able to provide reliable weather forecasts. Scientists from the HALO (AC)³ aircraft campaign have succeeded in precisely measuring the movement of air masses to and from the Arctic. This will contribute to a better understanding of the processes accelerating climate change in the region. Their research has been published in Atmospheric Chemistry and Physics, a journal of the European Geosciences Union.


    "We want to make fundamental and ground-breaking progress in our understanding of Arctic amplification and improve the reliability of models for predicting the dramatic warming in the Arctic," says Professor Manfred Wendisch, Director of the Institute for Meteorology at Leipzig University and lead author of the study.

    In mid-March 2022, the large-scale international research campaign HALO (AC)³ began its investigation of changes in air masses in the Arctic.

    Researchers from Leipzig University and several other research institutions are involved.

    During the campaign, they used special aircraft to study the movement of air masses to and from the Arctic via northward moist- and warm-air intrusions (WAIs) and southward marine cold-air outbreaks (CAOs). Two low-flying aircraft and one long-range, high-altitude research aircraft were flown in close formation whenever possible.

    "We observed air mass transformations over areas of open ocean, the marginal sea ice zone and the central Arctic sea ice," says Wendisch.

    The HALO (AC)³ aircraft campaign was conducted over the Norwegian and Greenland Seas, the Fram Strait, and the central Arctic Ocean in March and April 2022. A new observation strategy was used to track the changes in the air masses. This enabled the researchers to measure the moving air parcels twice along their transport pathway. The meteorologist explains: "This allowed us to quantify the warming and cooling of the transported air masses for the first time. For example, we have shown that cold air that breaks out of the Arctic and heads south warms by up to three degrees Celsius per hour on its way from the sea ice to the open sea. In addition, the humidity of the air increases as it moves south." The scientists also studied changes in cloud properties as air masses are transported. This unprecedented data is currently being compared with calculations from the German weather forecast model.


    Story Source:

    Materials provided by Universität Leipzig. Note: Content may be edited for style and length.


    Journal Reference:Manfred Wendisch, Susanne Crewell, André Ehrlich, Andreas Herber, Benjamin Kirbus, Christof Lüpkes, Mario Mech, Steven J. Abel, Elisa F. Akansu, Felix Ament, Clémantyne Aubry, Sebastian Becker, Stephan Borrmann, Heiko Bozem, Marlen Brückner, Hans-Christian Clemen, Sandro Dahlke, Georgios Dekoutsidis, Julien Delanoë, Elena De La Torre Castro, Henning Dorff, Regis Dupuy, Oliver Eppers, Florian Ewald, Geet George, Irina V. Gorodetskaya, Sarah Grawe, Silke Groß, Jörg Hartmann, Silvia Henning, Lutz Hirsch, Evelyn Jäkel, Philipp Joppe, Olivier Jourdan, Zsofia Jurányi, Michail Karalis, Mona Kellermann, Marcus Klingebiel, Michael Lonardi, Johannes Lucke, Anna E. Luebke, Maximilian Maahn, Nina Maherndl, Marion Maturilli, Bernhard Mayer, Johanna Mayer, Stephan Mertes, Janosch Michaelis, Michel Michalkov, Guillaume Mioche, Manuel Moser, Hanno Müller, Roel Neggers, Davide Ori, Daria Paul, Fiona M. Paulus, Christian Pilz, Felix Pithan, Mira Pöhlker, Veronika Pörtge, Maximilian Ringel, Nils Risse, Gregory C. Roberts, Sophie Rosenburg, Johannes Röttenbacher, Janna Rückert, Michael Schäfer, Jonas Schaefer, Vera Schemann, Imke Schirmacher, Jörg Schmidt, Sebastian Schmidt, Johannes Schneider, Sabrina Schnitt, Anja Schwarz, Holger Siebert, Harald Sodemann, Tim Sperzel, Gunnar Spreen, Bjorn Stevens, Frank Stratmann, Gunilla Svensson, Christian Tatzelt, Thomas Tuch, Timo Vihma, Christiane Voigt, Lea Volkmer, Andreas Walbröl, Anna Weber, Birgit Wehner, Bruno Wetzel, Martin Wirth, Tobias Zinner. Overview: quasi-Lagrangian observations of Arctic air mass transformations – introduction and initial results of the HALO-(AC)³ aircraft campaign. Atmospheric Chemistry and Physics, 2024; 24 (15): 8865 DOI: 10.5194/acp-24-8865-2024


    Witness 1.8 billion years of tectonic plates dance across Earth’s surface in a new animation

    Two tectonic plates meet in Thingvellir National Park, Iceland. VisualProduction/Shutterstock


    THE CONVERSATION
    Published: September 5, 2024

    Using information from inside the rocks on Earth’s surface, we have reconstructed the plate tectonics of the planet over the last 1.8 billion years.

    It is the first time Earth’s geological record has been used like this, looking so far back in time. This has enabled us to make an attempt at mapping the planet over the last 40% of its history, which you can see in the animation below.

    The work, led by Xianzhi Cao from the Ocean University of China, is now published in the open-access journal Geoscience Frontiers.

    Plate tectonics over the last 1.8 billion years of Earth history.



    A beautiful dance

    Mapping our planet through its long history creates a beautiful continental dance — mesmerising in itself and a work of natural art.

    It starts with the map of the world familiar to everyone. Then India rapidly moves south, followed by parts of Southeast Asia as the past continent of Gondwana forms in the Southern Hemisphere.

    Around 200 million years ago (Ma or mega-annum in the reconstruction), when the dinosaurs walked the earth, Gondwana linked with North America, Europe and northern Asia to form a large supercontinent called Pangaea.

    Then, the reconstruction carries on back through time. Pangaea and Gondwana were themselves formed from older plate collisions. As time rolls back, an earlier supercontinent called Rodinia appears. It doesn’t stop here. Rodinia, in turn, is formed by the break-up of an even older supercontinent called Nuna about 1.35 billion years ago.
    Why map Earth’s past?

    Among the planets in the Solar System, Earth is unique for having plate tectonics. Its rocky surface is split into fragments (plates) that grind into each other and create mountains, or split away and form chasms that are then filled with oceans.

    Apart from causing earthquakes and volcanoes, plate tectonics also pushes up rocks from the deep earth into the heights of mountain ranges. This way, elements which were far underground can erode from the rocks and end up washing into rivers and oceans. From there, living things can make use of these elements.

    Among these essential elements are phosphorus, which forms the framework of DNA molecules, and molybdenum, which is used by organisms to strip nitrogen out of the atmosphere and make proteins and amino acids – building blocks of life.

    Plate tectonics also exposes rocks that react with carbon dioxide in the atmosphere. Rocks locking up carbon dioxide is the main control on Earth’s climate over long time scales – much, much longer than the tumultuous climate change we are responsible for today.

    Iceland is on a plate boundary, which makes for frequent volcanic activity. Thorir Ingvarsson/Shutterstock
    A tool for understanding deep time

    Mapping the past plate tectonics of the planet is the first stage in being able to build a complete digital model of Earth through its history.

    Such a model will allow us to test hypotheses about Earth’s past. For example, why Earth’s climate has gone through extreme “Snowball Earth” fluctuations, or why oxygen built up in the atmosphere when it did.

    Indeed, it will allow us to much better understand the feedback between the deep planet and the surface systems of Earth that support life as we know it.
    So much more to learn

    Modelling our planet’s past is essential if we’re to understand how nutrients became available to power evolution. The first evidence for complex cells with nuclei — like all animal and plant cells — dates to 1.65 billion years ago.

    This is near the start of this reconstruction and close to the time the supercontinent Nuna formed. We aim to test whether the mountains that grew at the time of Nuna formation may have provided the elements to power complex cell evolution.

    Much of Earth’s life photosynthesises and liberates oxygen. This links plate tectonics with the chemistry of the atmosphere, and some of that oxygen dissolves into the oceans. In turn, a number of critical metals – like copper and cobalt – are more soluble in oxygen-rich water. In certain conditions, these metals are then precipitated out of the solution: in short, they form ore deposits.

    Many metals form in the roots of volcanoes that occur along plate margins. By reconstructing where ancient plate boundaries lay through time, we can better understand the tectonic geography of the world and assist mineral explorers in finding ancient metal-rich rocks now buried under much younger mountains.

    In this time of exploration of other worlds in the Solar System and beyond, it is worth remembering there’s so much about our own planet we are only just beginning to get a glimpse of.

    There are 4.6 billion years of it to investigate, and the rocks we walk over contain the evidence for how Earth has changed over this time.

    This first attempt at mapping the last 1.8 billion years of Earth’s history is a leap forward in the scientific grand challenge to map our world. But it is just that – a first attempt. The next years will see considerable improvement from the starting point we have now made.

    The author would like to acknowledge this research was largely done by Xianzhi Cao, Sergei Pisarevsky, Nicolas Flament, Derrick Hasterok, Dietmar Muller and Sanzhong Li; as a co-author, he is just one cog in the research network. The author also acknowledges the many students and researchers from the Tectonics and Earth Systems Group at The University of Adelaide and national and international colleagues who did the fundamental geological work this research is based on.

    Author
    Alan Collins
    Professor of Geology, University of Adelaide
    Disclosure statement
    Alan Collins receives funding from The Australian Research Council (he is an ARC Laureate Fellow), AuScope and the MinEx CRC. He also has funding from a number of State and Federal Government bodies as well as BHP, Santos, Empire Energy, Teck Australia and the CSIRO.



    Newly discovered viruses in parasitic nematodes could change our understanding of how they cause disease

    Date: September 4, 2024

    Source: Liverpool School of Tropical Medicine 

    Summary:

    New research shows that parasitic nematodes, responsible for infecting more than a billion people globally, carry viruses that may solve the puzzle of why some cause serious diseases.


    FULL STORY

    New research shows that parasitic nematodes, responsible for infecting more than a billion people globally, carry viruses that may solve the puzzle of why some cause serious diseases.

    A study led by Liverpool School of Tropical Medicine (LSTM) used cutting-edge bioinformatic data mining techniques to identify 91 RNA viruses in 28 species of parasitic nematodes, representing 70% of those that infect people and animals. These infections are often symptomless or mild, but some can lead to severe, life-changing disease.

    Nematode worms are the most abundant animals on the planet, prevalent in all continents worldwide, with several species infecting humans as well as agriculturally and economically important animals and crops. And yet in several cases, scientists do not know how some nematodes cause certain diseases.

    The new research, published in Nature Microbiology, opens the door to further study of whether these newly discovered viruses -- only five of which were previously known to science -- could contribute to many chronic, debilitating conditions. If a connection can be proven, it could pave the way for more effective treatments in the future.

    Mark Taylor, Professor of Parasitology at LSTM, said: "This is a truly exciting discovery and could change our understanding of the millions of infections caused by parasitic nematodes. Finding an RNA virus in any organism is significant, because these types of viruses are well-known agents of disease. When these worms that live inside of us release these viruses, they spread throughout the blood and tissues and provoke an immune response.

    "This raises the question of whether any of the diseases that these parasites are responsible for could be driven by the virus rather than directly by the parasitic nematode."

    Parasitic nematodes including hookworms and whipworms can cause severe abdominal problems and bloody diarrhoea, stunted development and anaemia. Infection with filarial worms can lead to disfiguring conditions such as lymphoedema or 'elephantiasis', and onchocerciasis, or 'river blindness', that leads to blindness and skin disease.


    The study authors propose that these newly identified viruses may play a role in some of these conditions. For example, onchocerciasis-associated epilepsy (OAE), which occurs in children and adolescents in Sub-Saharan Africa, has recently been linked to onchocerciasis, but it is not known why the infection causes neurological symptoms such as uncontrollable repeated head nodding, as well as severe stunting, delayed puberty and impaired mental health.

    One of the viruses in the parasites that cause onchocerciasis identified in the new study is a rhabdovirus -- the type that causes rabies. The authors of the study suggest that if this virus is infecting or damaging human nerve or brain tissue, that could explain the symptoms of OAE.

    The full extent and diversity of the viruses living in parasitic nematodes, how they impact nematode biology and whether they act as drivers of disease in people and animals now requires further study.

    The illuminating discovery of these widespread yet previously hidden viruses was first made by Dr Shannon Quek, a Postdoctoral Research Associate at LSTM and lead author of the new study, who had initially been using the same data mining method to screen for viruses within mosquitoes that spread disease, before deciding to investigate nematodes.

    Dr Quek, who is from Indonesia, a country burdened by many parasitic nematodes, said: "As a child, I saw a lot of people infected with these diseases and I suffered from the dengue virus on three occasions. That got me interested in tropical diseases. Diseases caused by parasitic nematodes are very long-term, life-long illnesses that persistently affect people. It has a significant impact on people's quality of life, their economic outputs and mental health.

    "There are a lot of studies about the microbiomes of mosquitoes, and how the bacteria that lives inside can block the spread of viruses, which might stop vector-borne diseases like dengue. This interplay between organisms in the same host led me to think -- what else might be inside parasitic nematodes as well? Which after my discovery will now be the focus of our research."

    Story Source:

    Materials provided by Liverpool School of Tropical Medicine. Note: Content may be edited for style and length.


    Journal Reference:Shannon Quek, Amber Hadermann, Yang Wu, Lander De Coninck, Shrilakshmi Hegde, Jordan R. Boucher, Jessica Cresswell, Ella Foreman, Andrew Steven, E. James LaCourse, Stephen A. Ward, Samuel Wanji, Grant L. Hughes, Edward I. Patterson, Simon C. Wagstaff, Joseph D. Turner, Rhys H. Parry, Alain Kohl, Eva Heinz, Kenneth Bentum Otabil, Jelle Matthijnssens, Robert Colebunders, Mark J. Taylor. Diverse RNA viruses of parasitic nematodes can elicit antibody responses in vertebrate hosts. Nature Microbiology, 2024; DOI: 10.1038/s41564-024-01796-6


    Traditional infrastructure design often makes extreme flooding events worse

    Massive 2014 flooding event in southeast Michigan showed why systems thinking beats local thinking in flood protection


    Date: September 4, 2024

    Source: University of Michigan

    Summary:

    Much of the nation's stormwater infrastructure, designed decades to a century ago to prevent floods, can exacerbate flooding during the severe weather events that are increasing around the globe.


    FULL STORY



    Much of the nation's stormwater infrastructure, designed decades to a century ago to prevent floods, can exacerbate flooding during the severe weather events that are increasing around the globe, new research led by the University of Michigan demonstrates.


    The problem lies in traditional planning's failure to recognize flood connectivity: how surface runoff from driveways, lawns and streets -- and the flows in river channels and pipes -- are all interlinked. The result is interactions, often unanticipated, between different stormwater systems that can make flooding worse.

    "When we design, we typically focus on localized solutions," said Valeriy Ivanov, U-M professor of civil and environmental engineering and co-first author of the study published in Nature Cities. "We have an area of concern, sometimes it's a single plot of land, or a set of parcels that need to be connected by stormwater infrastructure, and we design specifically for those areas.

    "But those areas are impacted by flooding that occurs around them, and that means designed stormwater infrastructure may have unintended consequences."

    The study is based on record-breaking rainfall that hit Metro Detroit on Aug. 11, 2014, resulting in flooding across the region. The disaster closed highways, stranded drivers, and caused power outages and property damage to over 100,000 homes, with a cost of $1.8 billion. Researchers analyzed data from that event, particularly from the city of Warren, and placed their findings in the context of current U.S. stormwater design standards and flood warning practices to develop policy recommendations.

    Those include:

    Stormwater system designs should take a holistic, systemwide approach to flood mitigation, rather than the conventional approach focused on local solutions.
    Design guidelines for stormwater systems should be revised to consider connectivity in urban landscapes, including flows in subsurface infrastructure such as pipes and sewers, open channel flows such as rivers and streams, and overland flows over natural and built surfaces.
    Advanced computer models that represent the full spectrum of stormwater elements and the behavior of water in them should be mandated.
    Design scenarios should represent the diverse spectrum of factors that control water flow in urban areas, such as complex rainfall patterns, antecedent soil water conditions, and the operation of existing stormwater drainage systems.
    Flood hazard mapping approaches should expand their focus beyond river-adjacent floodplains to include risks in urban areas that may be far from permanent bodies of water.

    "Current flood mapping practices are indicative of outdated thinking that needs to change," said Vinh Tran, U-M assistant research scientist in civil and environmental engineering and co-first author. "Whether it's the Federal Emergency Management Agency or someone else producing it, they only provide estimates for floodplains that are near rivers. But here's the problem: In cities, flooding can happen far from any river or stream.


    "Take Warren, Michigan, for example. The official flood maps didn't show flood risks in parts of the city that were miles from any major waterway. And it's not just Warren -- this is typical all over the country."

    According to FEMA, flooding is "the most common and costly disaster in the U.S." That risk is increasing due to climate change.

    Financially, it's a problem. FEMA notes that between 1980 and 2000, its National Flood Insurance Program paid out $9.4 billion in insurance claims. Over the following 20-year period, the program paid out $62.2 billion -- more than six and a half times as much.
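
    A quick check of those two figures (reported payout totals only, with no adjustment for inflation or growth in policies):

```python
# Growth in National Flood Insurance Program payouts, using the two figures above.
# No inflation adjustment is applied; this is just the raw ratio.
payouts_1980_2000 = 9.4    # USD billions, 1980-2000
payouts_2000_2020 = 62.2   # USD billions, 2000-2020

ratio = payouts_2000_2020 / payouts_1980_2000
pct_increase = (ratio - 1) * 100
print(f"{ratio:.1f}x the earlier total (a {pct_increase:.0f}% increase)")  # ~6.6x, ~562%
```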

    "Without updated designs, the economic impact of flooding will only grow, placing a heavier burden on governments and taxpayers," said Jeff Bednar, environmental resources manager for Macomb County and a research contributor on the project. "By investing in resilient infrastructure now, we not only protect our environment but also strengthen the foundation for economic growth."

    Story Source:

    Materials provided by University of Michigan. Original written by Jim Lynch. Note: Content may be edited for style and length.


    Journal Reference:Vinh Ngoc Tran, Valeriy Y. Ivanov, Weichen Huang, Kevin Murphy, Fariborz Daneshvar, Jeff H. Bednar, G. Aaron Alexander, Jongho Kim, Daniel B. Wright. Connectivity in urbanscapes can cause unintended flood impacts from stormwater systems. Nature Cities, 2024; DOI: 10.1038/s44284-024-00116-7

