It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way (K.Marx, Letter to F.Engels on the Indian Mutiny)
Tuesday, August 22, 2023
Spanish astronomer discovers new active galaxy
by Tomasz Nowakowski , Phys.org
Images of the newfound galaxy. Credit: Elio Quiroga Rodriguez (2023).
By analyzing images of the Sombrero Galaxy obtained with the Hubble Space Telescope (HST), Elio Quiroga Rodriguez of the Mid Atlantic University in Spain has identified a peculiar object, which turned out to be a galaxy hosting an active galactic nucleus (AGN). The finding was reported in a paper published August 11 on the pre-print server arXiv.
An AGN is a compact region at the center of a galaxy, more luminous than the surrounding galaxy light. Studies show that AGNs are very energetic due either to the presence of a black hole or star formation activity at the core of the galaxy.
Astronomers generally divide AGNs into two groups based on emission line features. Type 1 AGNs show broad and narrow emission lines, while only narrow emission lines are present in Type 2 AGNs. However, observations revealed that some AGNs transition between different spectral types; therefore, they were dubbed changing-look (CL) AGNs.
The Sombrero Galaxy (also known as Messier 104 or NGC 4594) is an unbarred spiral galaxy located on the border between the constellations Virgo and Corvus, some 31 million light years away. With a mass of about 800 billion solar masses, it is one of the most massive objects in the Virgo galaxy cluster. It also hosts a rich system of globular clusters.
Rodriguez has recently investigated HST images of the Sombrero Galaxy, focusing on one particular object in its halo. He found that this object, previously classified as a globular cluster candidate, may be a barred spiral galaxy of the SBc type, with an AGN at its center.
"While studying HST images available on the HST Legacy website of the halo of M104 (HST proposal 9714, PI: Keith Noll), the author observed at 12:40:07.829-11:36:47.38 (in j2000) an object about four arcseconds in diameter. A study with VO tools suggests that the object is a SBc galaxy with AGN (Seyfert)," the paper reads.
The object is cataloged in the Pan-STARRS1 data archive as PSO J190.0326-11.6132. By analyzing RGB data from the Aladin Sky Atlas, Rodriguez found that PSO J190.0326-11.6132 is a galaxy with a dominant central arm, a nucleus, and possibly two spiral arms with hot young stars and dust. The astronomer proposes that the newfound galaxy should be named the "Iris Galaxy."
The study found that PSO J190.0326-11.6132 has a radial velocity of about 1,359 km/s. Rodriguez notes that the object, if gravitationally bound to the Sombrero Galaxy, could be a satellite of it, with a physical extent of around 1,000 light years.
However, the author of the paper noted that if the Iris Galaxy is not associated with the Sombrero Galaxy, its distance may be some 65 million light years. In this scenario, the newly detected galaxy's physical extent would be about 71,000 light years.
The X-ray luminosity of the Iris Galaxy was measured to be approximately 18 tredecillion (1.8 × 10^43) erg/s, assuming a distance of 65 million light years. Such luminosity indicates the presence of an active galactic nucleus; however, further observations are required to determine whether it is a Type 1 or Type 2 AGN.
Cosmological redshift depends upon a galaxy's distance. Credit: NASA/JPL-Caltech/R. Hurt (Caltech-IPAC)
In 1929 Edwin Hubble published the first solid evidence that the universe is expanding. Drawing upon data from Vesto Slipher and Henrietta Leavitt, Hubble demonstrated a correlation between galactic distance and redshift. The more distant a galaxy was, the more its light appeared shifted to the red end of the spectrum.
We now know this is due to cosmic expansion. Space itself is expanding, which makes distant galaxies appear to recede away from us. The rate of this expansion is known as the Hubble parameter, and while we have a good idea of its value, there is still a bit of tension between different results.
One of the difficulties in resolving this tension is that thus far we can only measure cosmic expansion as it appears right now. This also means we can't determine whether cosmic expansion is due to general relativity or a more subtle extension of Einstein's model. But as powerful new telescopes are built, we might be able to observe the evolution of cosmic expansion thanks to what is known as the redshift drift effect.
The Hubble parameter has a value of about 70 km/s per megaparsec. This means if a galaxy is about 1 megaparsec away (about 3 million light-years), then the galaxy appears to be moving away from us at about 70 km/s. If a galaxy is 2 megaparsecs away, it will appear to recede at about 140 km/s. The greater a galaxy's distance, the greater its apparent speed.
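To make that arithmetic concrete, here is a minimal sketch of the Hubble law, v = H0 × d, using the round value of 70 km/s per megaparsec quoted above. The code and function names are illustrative, not taken from any of the papers discussed here.

```python
# Hedged sketch: apparent recession speed from the Hubble law, v = H0 * d.
H0 = 70.0  # km/s per megaparsec, the round value used in this article

def recession_speed(distance_mpc: float) -> float:
    """Apparent recession speed (km/s) of a galaxy at the given distance."""
    return H0 * distance_mpc

for d in (1, 2, 10):
    print(f"{d} Mpc -> {recession_speed(d):.0f} km/s")
# 1 Mpc -> 70 km/s, 2 Mpc -> 140 km/s, matching the figures above.
```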
Since the universe is still expanding, with each passing year a galaxy is a bit more distant, and that means its redshift should become slightly larger. In other words, cosmic expansion means that the redshifts of galaxies should drift more to the red over time.
Theoretical redshift drift based on the standard model. Credit: ESO / ELT Science Case
This drift is extremely small. For a galaxy 12 billion light-years away, its apparent speed would be about 95% of the speed of light, while its drift would be just 15 cm/s each year. That's much too small for current telescopes to observe. But when the Extremely Large Telescope (ELT) starts gathering data in 2027, it should be able to observe this drift in time. Estimates are that after 5–10 years of precise observations, ELT should be able to see redshift drifts on the order of 5 cm/s.
While this will become a powerful tool in our understanding of the universe, it will take a lot of data and a lot of time. So a new paper, published on the preprint server arXiv, proposes a different method using gravitational lensing.
The authors call this effect redshift difference. Rather than observing the redshift of a single galaxy over decades, the team proposes looking for distant galaxies that are gravitationally lensed by a closer galaxy between us and them. Plenty of such systems exist, but most lensed galaxies appear as a single distorted arc to one side of the foreground galaxy.
How gravitational lensing can create multiple galaxy images. Credit: NASA/CXC/M.Weiss
But sometimes gravitational lensing can create multiple images of a distant galaxy. Since each image of the distant galaxy takes a slightly different path to reach us, the distance of each path is also slightly different. So instead of waiting decades for a galaxy to move farther away from us, we can get snapshots of the galaxy separated by years or decades. Each image would have a slightly different redshift, and by comparing these we could measure the redshift drift.
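For readers who want the idea in symbols: in standard cosmology the drift of a source at redshift z is ż = (1 + z)H0 − H(z), and a pair of lensed images whose light paths differ by a time delay Δt should differ in redshift by roughly ż × Δt. The sketch below assumes a flat ΛCDM cosmology with illustrative parameters (H0 = 70 km/s/Mpc, Ωm = 0.3); it is a back-of-envelope illustration, not the calculation from the Wang et al. paper.

```python
import math

# Hedged sketch of the redshift-drift idea for flat LCDM; parameter values
# are illustrative assumptions, not taken from the cited papers.
H0_SI = 70.0 * 1000 / 3.086e22   # 70 (km/s)/Mpc converted to 1/s
OMEGA_M, OMEGA_L = 0.3, 0.7
SEC_PER_YEAR = 3.156e7

def hubble_rate(z: float) -> float:
    """H(z) in 1/s for a flat LCDM cosmology."""
    return H0_SI * math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def redshift_drift_per_year(z: float) -> float:
    """dz/dt = (1+z)*H0 - H(z), expressed per year of observer time."""
    return ((1 + z) * H0_SI - hubble_rate(z)) * SEC_PER_YEAR

def redshift_difference(z: float, delay_years: float) -> float:
    """Approximate redshift difference between two lensed images whose
    light paths differ by delay_years (the lensing time delay)."""
    return redshift_drift_per_year(z) * delay_years

# Two images of a z = 2 source separated by a 20-year time delay:
print(redshift_difference(2.0, 20.0))  # a tiny number -- hence "beyond
                                       # our current ability to detect"
```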
This is still beyond our current ability to detect. But while we are waiting for telescopes such as the ELT to come online, we can search for distant lensed galaxies with multiple images. That way when we do have the ability to detect redshift drift, we won't have to wait decades for the result.
More information: Chengyi Wang et al, The Redshift Difference in Gravitational Lensed Systems: A Novel Probe of Cosmology, arXiv (2023). DOI: 10.48550/arxiv.2308.07529
Fulvio Melia, Definitive test of the R_h = ct universe using redshift drift, Monthly Notices of the Royal Astronomical Society: Letters (2016). DOI: 10.1093/mnrasl/slw157
New images from NASA’s James Webb Space Telescope of the well-known Ring Nebula provide unprecedented spatial resolution and spectral sensitivity. In the NIRCam (Near-Infrared Camera) image on the left, the intricate details of the filament structure of the inner ring are particularly visible in this dataset. On the right, the MIRI (Mid-InfraRed Instrument) image reveals particular details in the concentric features in the outer regions of the nebula’s ring. Credit: ESA/Webb, NASA, CSA, M. Barlow (University College London), N. Cox (ACRI-ST), R. Wesson (Cardiff University).
NASA's James Webb Space Telescope obtained images of the Ring Nebula, one of the best-known examples of a planetary nebula. Much like the Southern Ring Nebula, one of Webb's first images, the Ring Nebula displays intricate structures of the final stages of a dying star. Roger Wesson from Cardiff University tells us more about this phase of a sun-like star's stellar lifecycle and how Webb observations have given him and his colleagues valuable insights into the formation and evolution of these objects, hinting at a key role for binary companions.
"Planetary nebulae were once thought to be simple, round objects with a single dying star at the center. They were named for their fuzzy, planet-like appearance through small telescopes. Only a few thousand years ago, that star was still a red giant that was shedding most of its mass."
"As a last farewell, the hot core now ionizes, or heats up, this expelled gas, and the nebula responds with colorful emission of light. Modern observations, though, show that most planetary nebulae display breathtaking complexity. It begs the question: how does a spherical star create such intricate and delicate non-spherical structures?"
"The Ring Nebula is an ideal target to unravel some of the mysteries of planetary nebulae. It is nearby, approximately 2,200 light-years away, and bright—visible with binoculars on a clear summer evening from the northern hemisphere and much of the southern. Our team, named the ESSENcE (Evolved StarS and their Nebulae in the JWST Era) team, is an international group of experts on planetary nebulae and related objects."
"We realized that Webb observations would provide us with invaluable insights, since the Ring Nebula fits nicely in the field of view of Webb's NIRCam (Near-Infrared Camera) and MIRI (Mid-Infrared Instrument) instruments, allowing us to study it in unprecedented spatial detail. Our proposal to observe it was accepted (General Observers program 1558), and Webb captured images of the Ring Nebula just a few weeks after science operations started on July 12, 2022."
"When we first saw the images, we were stunned by the amount of detail in them. The bright ring that gives the nebula its name is composed of about 20,000 individual clumps of dense molecular hydrogen gas, each of them about as massive as the Earth. Within the ring, there is a narrow band of emission from polycyclic aromatic hydrocarbons, or PAHs—complex carbon-bearing molecules that we would not expect to form in the Ring Nebula."
"Outside the bright ring, we see curious 'spikes' pointing directly away from the central star, which are prominent in the infrared but were only very faintly visible in Hubble Space Telescope images. We think these could be due to molecules that can form in the shadows of the densest parts of the ring, where they are shielded from the direct, intense radiation from the hot central star."
"Our MIRI images provided us with the sharpest and clearest view yet of the faint molecular halo outside the bright ring. A surprising revelation was the presence of up to ten regularly-spaced, concentric features within this faint halo. These arcs must have formed about every 280 years as the central star was shedding its outer layers. When a single star evolves into a planetary nebula, there is no process that we know of that has that kind of time period."
"Instead, these rings suggest that there must be a companion star in the system, orbiting about as far away from the central star as Pluto does from our sun. As the dying star was throwing off its atmosphere, the companion star shaped the outflow and sculpted it. No previous telescope had the sensitivity and the spatial resolution to uncover this subtle effect."
"So how did a spherical star form such a structured and complicated nebulae as the Ring Nebula? A little help from a binary companion may well be part of the answer."
Throughout the 21st century, teamwork has come to define the modern work environment. Driven by advances in communication technology, working collaboratively is, as management experts will tell you, how you harness the "collective intelligence."
Collective intelligence is often seen as greater than the sum of its parts: superior to the cumulative individual intelligence of the group's members. Capitalizing on it is said to improve task accuracy (finding better and more correct answers) and enhance task efficiency (finding good answers faster). This in turn leads to quicker, higher-quality task completion. In other words, when we work together, our performance improves. This has been one of the major factors shaping our modern societies.
At the same time, though, both research and popular idiom underline the limits inherent to the concept. If "two heads are better than one" suggests the benefits of collaboration, "too many cooks spoil the broth" suggests the opposite.
I led a recent study looking at whether training and team composition might affect how efficient people are when working together. We found that the benefits of collective intelligence can be outweighed by the cost of having to coordinate between team members.
The dynamics of teamwork
We designed an experimental study using an existing online citizen science project, Wildcam Gorongosa. Participants analyze webcam photos taken in Gorongosa National Park, Mozambique, to find and identify animal species and behavior.
We invited 195 members of the public to our lab in Oxford to participate. The experiment comprised two stages: training, then testing, which they did first on their own and then in teams of two. They had five subtasks to complete: detecting the presence of animals; counting how many there were; identifying what they were doing (standing, resting, moving, eating or interacting); specifying whether any young were present; and identifying the animals from 52 possible species (the option of "nothing here" was included, but not "I don't know").
We split the participants into two groups. One received targeted training with images similar to the test set. The other received general training with a diverse range of images.
We found the type of training did indeed affect their performance. For those with general training—the "generalists"—efficiency initially improved but then declined once they were tested on the specific set of test images. By contrast, those with targeted training—the "experts"—consistently maintained or improved their performance.
How performance changed during the training and testing stages:
The average change in efficiency tracks the number of correct classifications per minute. Credit: Taha Yasseri, CC BY-NC-ND
To investigate the impact team dynamics would have, we then formed three types of group: two experts, two generalists, or a mixed pair.
Surprisingly, we found that neither two generalists nor a mixed group performed better than a single generalist working alone. Even two experts working together did not do better than a single expert.
How the groups' composition affected their efficiency:
Efficiency varied over time depending on whether the work was carried out by mixed groups, groups of experts, or single experts. Credit: Taha Yasseri, CC BY-NC-ND
We also found that while having an expert in a group improved accuracy for the more complex tasks, it did not improve the group's efficiency. In other words, the team got more correct answers but took considerably longer to do so. And for simple tasks, there was no improvement in accuracy from having an expert. Ultimately, the time that team members lost in coordinating with each other outweighed the benefit of adding an expert to the group.
What can we say about the future of work?
Research has long shown that underperformance in a group is often due to what social psychologists term "process losses". The collective intelligence of a team can, for example, be adversely affected by social biases and what cognitive scientists call "herding" effects, because these can lead to collective decisions being disproportionately influenced by a few members of the group who are less competent yet more confident.
Further, psychologists speak about "social loafing" to describe a person performing poorly because they are part of a group—they have the impression that others will do the job without them needing to contribute. When a large number of team members follow this strategy, it can result in the combined efforts of the team being even lower than the sum of individual efforts.
Research also shows the importance of social learning in the context of effective collaborative working, which our study highlights. Our experimental method involved individual training sessions followed immediately by team testing; this precluded opportunities for people to learn by observing their coworkers' performance, eliminating one of the advantages of being part of a group during the learning process.
The context in which teamwork and collaboration take place matters, as do the tools available for coordination between team members. As internet-based communication technologies are used not only for large-scale voluntary collaborative endeavors, such as citizen science projects, but also for remote working, it is important to recognize the potential effects of different training approaches and team dynamics.
When team members don't have the chance to observe other workers and reap the advantages of social learning, and when communication is less efficient than face-to-face interactions, the costs and benefits in the teamwork equation can shift. Our research shows that this is even more pronounced when you're dealing with simpler tasks that don't require extensive creative problem-solving. Opting to work individually could indeed be a more viable approach.
The dynamics of teamwork—whether in the workplace or in the context of collective action—are complex. While collaboration offers benefits in specific contexts, it is essential to consider the trade-offs between time, accuracy and efficiency. Coordination comes at a cost.
A study in the International Journal of Computational Systems Engineering has investigated the e-commerce landscape and how it is affected by financial crises. The insights from the study offer a financial accounting crisis early warning system that companies might use to predict and pre-empt economic turmoil.
The global pandemic underscored the vulnerability of businesses and economies, making the need for astute financial foresight more crucial than ever. Xiaoyang Meng of the Accounting Institute at Jiaozuo University in Jiaozuo, China, has looked specifically at the impact on China and has devised a novel system that melds adaptability and prediction.
The approach uses partial least squares (PLS) analysis, a sophisticated data analysis technique, and integrates it with a backpropagation (BP) neural network. The model can then discern the indicators of impending financial distress within the e-commerce sector. Meng demonstrated the model's proficiency on historical data for 11 financially sound enterprises and nine that were teetering on the brink of financial crisis, showing that it could reveal early signs of financial distress with an accuracy surpassing 90%, and in some tests 98%.
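As a rough illustration of the architecture described—PLS compressing correlated financial indicators into a few latent components, followed by a backpropagation-trained network that classifies firms—here is a minimal sketch using scikit-learn. The data, component count, and network size are invented stand-ins; Meng's actual indicators and training details are in the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy data standing in for financial-ratio indicators: 20 firms x 12 ratios,
# labeled 1 (financially sound) or 0 (distressed). The 11/9 split mirrors
# the study's sample size; the numbers themselves are random.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 12))
y = np.array([1] * 11 + [0] * 9)

# PLS extracts latent components that covary with the label; the MLP
# (a neural network trained by backpropagation) classifies in that space.
model = make_pipeline(
    PLSRegression(n_components=3),
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:3]))  # early-warning labels for the first 3 firms
```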
The implications of this research may well be far-reaching. In an era where economic turbulence threatens the stability of even the most robust business, Meng's PLS-BP model offers a grounded means to identify an imminent crisis and so put in place strategies that might avert it.
Meng acknowledges that the model as it stands has some limitations. While the early detection methodology offers good levels of precision, it is essentially a static approach. To better navigate real-world financial ecosystems, she proposes the integration of the model with system dynamics theory. This could potentially then offer a dynamic early warning system capable of adapting to the ever-evolving intricacies of e-commerce.
More information: Xiaoyang Meng, Research on e-commerce neural network financial accounting crisis early warning model combined with partial least squares, International Journal of Computational Systems Engineering (2023). DOI: 10.1504/IJCSYSE.2023.132913
Patents were meant to reward inventions. It's time to talk about how they might not
by Rebecca Giblin, Anders Furze and Kimberlee Weatherall, The Conversation
Credit: Shutterstock
For hundreds of years, we've been told patents help deliver big new inventions, such as life-saving drugs.
They are meant to be a bargain between the inventor and the public: tell us how your invention works, and we'll give you a fixed time—a patent protection period—in which you're the only person who can make use of it.
Such exclusive rights make it easier for inventors to profit from their investments in research and development, and in theory encourage innovation we wouldn't get otherwise, which benefits us all.
We've long had to accept this bargain on faith. But those core assumptions about patents are increasingly being subject to empirical testing, and—as we detail in a new podcast starting this week—often coming up short.
Many claimed inventions likely don't work
Consider the most basic assumption—that the public will benefit from patented technologies—both as products and services and as building blocks for more innovation. That's meant to be achieved by inventors coming up with inventions that work, then telling the patent office how they work.
But research by Janet Freilich from Fordham University in the United States suggests there is a "replicability crisis" in patent claims that rivals those in other fields.
Freilich graded the experiments said to back up 500 life sciences patents against the requirements of the journal Nature—and found as many as 90% didn't stack up and probably couldn't be reproduced.
She says, "patent law relies on the assumption that, when a patent is filed, it has been "reduced to practice"—meaning that the invention works. The reality is that most inventions likely do not work, casting serious doubt on this assumption."
One of the reasons is the way the patent system works.
Under the "first-to-file" system, when two inventors are developing similar technologies, the inventor who gets to the patent office first gets the patent. Freilich argues this means that any experiments they do conduct will inevitably be quick and preliminary.
Worse still, only 45% of the patents she examined were backed up by any sort of experiment. The remaining 55% were supported only by speculative and hypothetical evidence. This is allowed under patent law at least in some countries, but it does raise questions about what exactly the public gets out of the system.
Research sometimes accelerates when patents expire
We're also told we grant patents to "incentivize" (encourage and reward) the kind of work needed to get expensive products, like new drugs, to market.
But again, this theory doesn't always match the practice.
Research led by John Liddicoat of King's College London finds that in the development of many drugs, the most expensive trials (Phase II and Phase III) actually accelerate once patent protection expires, when universities and hospitals feel free to step in.
This raises a number of serious questions:
why aren't patents providing an incentive for patent holders to do these trials?
should we shorten the length of patents to bring forward trials?
are commercial organizations best suited for trials?
Generative AI could also lead to more patents: in the words of the government agency IP Australia, it is likely to reduce "the barrier to creating novelty." This could potentially overwhelm patent offices with even lower quality patents.
It is also likely to mean patent examiners can no longer rely on the default assumption that the claimed invention is solely the result of human exertion, raising the possibility of needing to rethink the patent bargain.
Invention matters more than ever
More and more, new research and new developments are telling us we can no longer take the claims made for the patent system on faith.
That makes this an ideal time to talk about whether our patent system is best equipped to deliver the innovations we need, exploring a range of options for finding and applying them—and bringing in voices and perspectives that are too often marginalized in intellectual property debates.
These ideas are discussed in the first episode of IP Provocations, a new podcast asking challenging and sometimes controversial questions around IP and data. You can listen here, or via your favorite podcast platform.
Just watched a rom-com on Netflix? Well, now there are "top picks" just like it in your queue, thanks to the streaming service's matching system.
Every time you engage with Amazon, Facebook, Instagram, Netflix and other online sites, algorithms are busy behind the scenes chronicling your activities and queuing up recommendations tailored to what they know about you. The invisible work of algorithms and recommendation systems spares people from a deluge of information and ensures they receive relevant responses to searches.
But Sachin Banker says a new study shows that subtle gender biases shape the information served up to consumers. The study, co-authored by Shelly Rathee, Arul Mishra and Himanshu Mishra, has been published in the Journal of Consumer Psychology.
"Everything you're consuming online is filtered through some kind of recommendation system," said Banker, an assistant professor of marketing in the David Eccles School of Business, "and what we're interested in understanding is whether there are subtle biases in the types of information that are presented to different people and how this affects behavior."
Banker, who researches how people interact with technology, said gender bias is relatively easy to study because Facebook provides information about that social characteristic. And it is not necessarily surprising that algorithms, which make word associations based on all the text on the internet, pick up biases, since those biases exist in human language. The bigger questions are to what extent this is happening and what the consequences are.
In their multi-step study, the researchers first demonstrated that gender biases embedded in language are incorporated in algorithms—associating women with negative psychographic attributes such as impulsivity, financial irresponsibility and irrationality.
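To give a flavor of how such embedded associations are typically measured—a WEAT-style test of relative cosine similarity between gender words and attribute words—here is a minimal sketch. The vectors and word lists are made-up stand-ins, not the study's materials.

```python
import numpy as np

# Hedged sketch of an embedding-association test; random vectors stand in
# for real pretrained word embeddings, so the output here is meaningless.
rng = np.random.default_rng(1)
embeddings = {w: rng.normal(size=50) for w in
              ["she", "woman", "he", "man", "impulsive", "responsible"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, female_terms, male_terms):
    """Mean cosine similarity to female terms minus male terms; positive
    values mean the word sits closer to the 'female' direction."""
    f = np.mean([cosine(embeddings[word], embeddings[t]) for t in female_terms])
    m = np.mean([cosine(embeddings[word], embeddings[t]) for t in male_terms])
    return f - m

# With embeddings actually trained on web text, studies report attributes
# like "impulsive" scoring closer to female terms than "responsible" does.
print(association("impulsive", ["she", "woman"], ["he", "man"]))
```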
The team then tweaked a single word in an ad—"responsible" versus "irresponsible"—to see who subsequently received it; they found ads with negative psychographic attributes were more likely to be delivered to women even though there was no basis for such differentiation.
It's a self-perpetuating loop, the researchers found, because undiscerning consumers reinforce the algorithmic gender bias by often clicking on the ads and accepting the recommendations they receive.
"There are actual consequences of this bias in the marketplace," Banker said. "We've shown that people are split into different kinds of consumption bubbles and that influences your thoughts and behaviors and reinforces historical biases."
For online technology companies, the study indicates a greater need for proactive work to minimize gender bias in algorithms used to serve up consumer ads and recommendations, Banker said. People advertising products may want to test an ad before launch to detect any subtle bias that might affect delivery. And consumers should be aware of the biases at play as they scroll through their feeds and visit online sites and engage in healthy skepticism about ads and recommendations.
Most people, he said, don't totally understand how these things work because the online giants don't disclose much about their algorithms, though Amazon appears to be providing more information to consumers about the recommendations they receive.
And while this study focused on gender bias, Banker said biases likely exist for other social characteristics, such as age, sexual orientation, religious affiliation, etc.
More information: Shelly Rathee et al, Algorithms propagate gender bias in the marketplace—with consumers' cooperation, Journal of Consumer Psychology (2023). DOI: 10.1002/jcpy.1351
The Big Wasp Survey, a citizen science project involving thousands of volunteers throughout the UK, has yielded important genetic insights into the common wasp, reports a study led by UCL researchers.
Using data and samples of Vespula vulgaris (a species of yellowjacket wasp known as the common wasp) collected by amateur "citizen scientists," the researchers conducted the first large-scale genetic analysis of the insect across its native range.
The insights, published in Insect Molecular Biology, revealed a single population of the wasp across Britain, while the insect's genetics were more differentiated across the Irish Sea in Northern Ireland. The researchers say this demonstrates that the wasp is effective at dispersing itself widely, which may be one reason for its success in human-modified environments, both in its native range in Europe and as an invasive species in Asia and elsewhere.
Lead author Iona Cunningham-Eurich (UCL Center for Biodiversity & Environment Research, UCL Biosciences, and the Natural History Museum), who started the research as an MSci student before beginning a Ph.D. at UCL, said, "Vespula vulgaris is one of the most familiar wasps to most of us in the UK, as we very commonly see it in late summer.
"Despite the wasp being ubiquitous in Britain, a lot of research has been conducted outside of its native range, so this study is important in establishing a baseline of information about the common wasp's ecology and dispersal behaviors at home.
"By finding a single, intermixing population across Britain, our findings add to evidence that the common wasp is very good at spreading across the landscape, which may be because the queens are able to fly great distances, either on their own steam, aided by the wind, or accidentally transported by people."
The Big Wasp Survey, sponsored by the Royal Entomological Society, has been running annually since 2017. Anyone can take part by making a homemade trap with an old plastic bottle with a little bit of beer to entice wasps. For the first few years, citizen scientists were asked to send in the wasps they had trapped, but since the COVID-19 pandemic, participants have been taught how to identify their wasps at home using online videos.
For the present study, the UCL-led research team analyzed 393 wasp samples collected in the first two years of the survey. By comparing samples collected all across the country, the researchers were able to find evidence of high rates of gene flow, contributing to little genetic differentiation across Britain.
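As a rough illustration of what "little genetic differentiation" means quantitatively, here is a minimal sketch of Wright's F_ST for a single biallelic locus, the standard statistic for comparing variation within and between sampling sites. The allele frequencies are invented, and this is not the study's actual analysis pipeline.

```python
import numpy as np

# Hedged illustration: F_ST compares expected heterozygosity within sites
# to that of the pooled population. Values near 0 indicate high gene flow,
# i.e. one well-mixed population.
def fst(allele_freqs):
    """F_ST for one biallelic locus given per-site allele frequencies."""
    p = np.asarray(allele_freqs, dtype=float)
    hs = np.mean(2 * p * (1 - p))   # mean within-site heterozygosity
    pbar = p.mean()
    ht = 2 * pbar * (1 - pbar)      # heterozygosity of the pooled population
    return (ht - hs) / ht

# Nearly identical frequencies across sites -> F_ST close to 0:
print(fst([0.48, 0.50, 0.52, 0.49]))  # ~0.001, little differentiation
print(fst([0.20, 0.80]))              # ~0.36, strong differentiation
```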
Co-author and co-founder of the Big Wasp Survey, Professor Adam Hart (University of Gloucestershire) said, "Our study showcases the potentially immense value of citizen science projects. Even though the samples were simply and inexpertly preserved, we were still able to conduct advanced genetic analyses and yield very useful findings. We are very grateful to our citizen scientists, as this could not have been achieved without people willing to volunteer their time to contribute to scientific research."
In its first five years, 3,389 people took part in the Big Wasp Survey, collecting over 62,000 wasps. The data have produced reliable species distribution maps that are comparable in quality to those generated from four decades' worth of data collected by experts, and the researchers are continuing to gain new insights into the diversity and distribution of social wasp species across the UK. The survey may also help to detect the yellow-legged Asian hornet (Vespa velutina), which is an invasive species across Europe and has occasionally been sighted in the UK.
Senior author and co-founder of the Big Wasp Survey, Professor Seirian Sumner (UCL Center for Biodiversity & Environment research, UCL Biosciences) said, "Wasps are incredibly important as natural pest controllers and pollinators, so it's very exciting that we're able to improve our understanding of this common and fascinating insect with the support of citizen scientists, while also giving them the opportunity to get better acquainted with wasps, and see this much maligned insect in a different light."
The Big Wasp Survey is seeking citizen scientists for its end-of-summer sampling week, commencing 26 August; visit www.bigwaspsurvey.org to register and find out more.
New study uses video to show honey bees switch feeding mechanisms as resource conditions vary
Credit: Pixabay/CC0 Public Domain
Within nature, the compatibility of animals' feeding mechanisms with their food sources determines the breadth of available resources and how successfully the animals will feed. Animals that feed on the nectar of flowers, such as honey bees (Apis mellifera), encounter a range of corolla depths and sugar concentrations. The nectar of flowers comprises the prime source of energy and water for honey bees, which are dominant pollinators throughout the world.
Regional climate conditions contribute to plants producing nectar in various volumes and concentrations, and evaporation and pollinator feeding frequently leave the nectar reservoirs of flowers below capacity. Thus, honey bees' ability to feed "profitably" under naturally varying resource conditions is advantageous.
An international research team has studied the feeding mechanisms of honey bees and has reported on how these bees switch between using suction and lapping to derive maximum benefit from flowers of varied sizes and concentrations of sugar. The team's study, titled "Honey bees switch mechanisms to drink deep nectar efficiently," is published in Proceedings of the National Academy of Sciences (PNAS).
Prior research has studied suction and lapping feeding behaviors in honey bees, but this paper notes that earlier studies have included an "unnatural condition of virtually unlimited nectar supplies. Such large nectar pools are rare in the flowers they visit in the wild."
In this study, the team shows that during feeding, the distance between the honey bees' mouthparts and the nectar, as well as the concentration of sugar within the nectar, are determining factors in whether the bees procure it via suction or lapping.
Microparticles showing how a honey bee sucks deep nectar. Credit: Proceedings of the National Academy of Sciences (2023). DOI: 10.1073/pnas.2305436120
The feeding mechanism of honey bees consists of a long, thin proboscis that includes a pair of labial palpi inside a pair of elongated galea (lobes). This structure serves as a feeding tube, and the bee's hairy glossa (tongue) is situated inside.
For this study, the researchers pre-starved honey bees, fed them sucrose solutions of 10%, 30%, and 50% w/w contained in capillary tubes, and used high-speed videography to record the bees' feeding behavior with each. Blue dye, which had no nutritional effect, was added to each solution for visual contrast, and the bees tolerated it well.
At the 10% w/w concentration, bees inserted their proboscides deep into the solution and extended their tongues beyond the proboscis tubes to suction the liquid until they could no longer reach the meniscus.
At 30% w/w—an approximate concentration commonly found in nature, according to the research—the bees began by quickly lapping the solution, slowing down as the liquid level receded, and gradually switched to suction until the liquid receded beyond their reach.
Microparticles showing how a honey bee sucks deep nectar. Credit: Proceedings of the National Academy of Sciences (2023). DOI: 10.1073/pnas.2305436120
At 50% w/w, the bees lapped the solution, beginning rapidly and slowing as the liquid receded, and did not transition to suction at all. Notably, the bees showed a smaller decrease in lapping frequency at 50% w/w than during their transitions to suction at 30% w/w.
The researchers conclude that short-distance lapping lets honey bees fill their tongues to maximum collection capacity most efficiently, while lapping at longer distances would be less efficient than suction because capillary filling of the tongue takes more time. The decreased lapping frequency observed with the thickest of the tested nectars allows for the capillary rise needed to saturate the tongue fully.
In summary, regardless of nectar depth, lapping is a better strategy for honey bees collecting nectars of high sugar concentrations, and suction is faster for those with lower concentrations of sugar.
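One way to see why such a switch makes sense is a back-of-envelope comparison of the two intake rates: suction through a narrow tube slows in proportion to the nectar's viscosity (Hagen–Poiseuille flow), while lapping delivers a roughly fixed volume per lick. The sketch below uses entirely hypothetical parameters chosen only to show the crossover; it is not the paper's model, nor measured honey bee data.

```python
import math

# Hedged, back-of-envelope sketch of the suction-vs-lapping trade-off;
# every parameter value below is a hypothetical stand-in.
def suction_rate(viscosity_pa_s, radius_m=3e-5, length_m=2e-3, dp_pa=1.5e3):
    """Hagen-Poiseuille volumetric flow (m^3/s) through a proboscis-like
    tube: suction slows sharply as nectar thickens (rate ~ 1/viscosity)."""
    return math.pi * radius_m**4 * dp_pa / (8 * viscosity_pa_s * length_m)

def lapping_rate(freq_hz=5.0, tongue_load_m3=1e-11):
    """Lapping intake = volume carried per lick x licks per second; roughly
    independent of viscosity as long as the tongue loads fully."""
    return freq_hz * tongue_load_m3

# Dilute nectar (~1.3 mPa*s) favors suction; thick nectar (~15 mPa*s)
# drops the suction rate below the lapping rate:
for label, mu in [("10% sugar (thin)", 1.3e-3), ("50% sugar (thick)", 1.5e-2)]:
    print(label, "suction:", suction_rate(mu), "lapping:", lapping_rate())
```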
The team also believes that this mechanism-switching behavior may be unique to honey bees. Noting a previous study published in Soft Matter in which bumble bees (Bombus terrestris) did not switch between feeding behaviors with nectars of varying viscosities, the team also tested bumble bees with a 10% w/w solution to see whether their behavior would change with distance from the liquid; it did not—the bumble bees only lapped.
Furthermore, previous research with orchid bees (Euglossini) has shown that they mainly use their long proboscides to procure nectar via suction, but that they have exhibited both suction and lapping with small amounts (films) of nectar. However, there is currently no evidence to show that orchid bees make this switch based on corolla depth or nectar properties.
More information: Jiangkun Wei et al, Honey bees switch mechanisms to drink deep nectar efficiently, Proceedings of the National Academy of Sciences (2023). DOI: 10.1073/pnas.2305436120
Illustration of exfiltration and infiltration processes. Illustration explains how exfiltration or infiltration of groundwater occurs due to unloading or loading of ice sheets over saturated subglacial sediment half-space. At the ice-sediment interface, z = 0 and z increases down into sediment. Credit: Science Advances (2023). DOI: 10.1126/sciadv.adh3693
Two Georgia Tech researchers, Alex Robel and Shi Joyce Sim, have collaborated on a new model for how water moves under glaciers. The new theory shows that up to twice the amount of subglacial water that was originally predicted might be draining into the ocean—potentially increasing glacial melt, sea level rise, and biological disturbances.
The paper, published in Science Advances, "Contemporary Ice Sheet Thinning Drives Subglacial Groundwater Exfiltration with Potential Feedbacks on Glacier Flow," is co-authored by Colin Meyer (Dartmouth), Matthew Siegfried (Colorado School of Mines), and Chloe Gustafson (USGS).
While there are pre-existing methods to understand subglacial flow, these techniques involve time-consuming computations. In contrast, Robel and Sim developed a simple equation that can predict how fast exfiltration—the discharge of groundwater from aquifers under ice sheets—occurs, using satellite measurements of Antarctica from the last two decades.
"In mathematical parlance, you would say we have a closed form solution," explains Robel, an assistant professor in the School of Earth and Atmospheric Sciences. "Previously, people would run a hydromechanical model, which would have to be applied at every point under Antarctica, and then run forward over a long time period." Since the researchers' new theory is a mathematically simple equation, rather than a model, "the entirety of our prediction can be done in a fraction of a second on a laptop," Robel says.
Robel adds that while there is precedent for developing these kinds of theories for similar kinds of models, this theory is specific to the particular boundary conditions and other conditions that exist underneath ice sheets. "This is, to our knowledge, the first mathematically simple theory which describes the exfiltration and infiltration underneath ice sheets."
"It's really nice whenever you can get a very simple model to describe a process—and then be able to predict what might happen, especially using the rich data that we have today. It's incredible," adds Sim, a research scientist in the School of Earth and Atmospheric Sciences. "Seeing the results was pretty surprising."
One of the main arguments in the paper underscores the potentially large source of subglacial water—possibly up to double the amount previously thought—that could be affecting how quickly glacial ice flows and how quickly the ice melts at its base. Robel and Sim hope that the predictions made possible by this theory can be incorporated into ice sheet models that scientists use to predict future ice sheet change and sea level rise.
A dangerous feedback cycle
Aquifers are underground areas of porous rock or sediment rich in groundwater. "If you take weight off aquifers like there are under large parts of Antarctica, water will start flowing out of the sediment," Robel explains, referencing a diagram Sim created. While this process, known as exfiltration, has been studied previously, focus has been on the long time scales of interglacial cycles, which cover tens of thousands of years.
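As a loose illustration of the mechanism Robel describes—not the closed-form solution derived in the paper—the sketch below treats exfiltration as simple Darcy flow driven by the pore pressure left in excess when thinning ice removes overburden. Every number in it is an assumption chosen for illustration.

```python
# Hedged sketch of the physical mechanism only: generic Darcy flow under
# ice unloading, NOT Robel and Sim's closed-form solution. All parameter
# values are illustrative assumptions.
RHO_ICE = 917.0      # kg/m^3, density of glacial ice
G = 9.81             # m/s^2
MU_WATER = 1.8e-3    # Pa*s, viscosity of near-freezing water
SEC_PER_YEAR = 3.156e7

def exfiltration_flux(thinning_m, permeability_m2, depth_m):
    """Upward Darcy flux (m/yr) after the ice thins by thinning_m:
    unloading leaves pore pressure at depth in excess by rho_ice*g*dh,
    and q = (k/mu) * (excess pressure / drainage depth)."""
    excess_pressure = RHO_ICE * G * thinning_m                     # Pa
    q = (permeability_m2 / MU_WATER) * excess_pressure / depth_m   # m/s
    return q * SEC_PER_YEAR

# 1 m of thinning over sediment with k = 1e-15 m^2, drained over 100 m:
print(exfiltration_flux(1.0, 1e-15, 100.0), "m/yr")  # ~1.6 mm per year
```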
There has been less work on modern ice sheets, especially on how quickly exfiltration might be occurring under the thinning parts of the current-day Antarctic ice sheet. However, using recent satellite data and their new theory, the team has been able to predict what exfiltration might look like under those modern ice sheets.
"There's a wide range of possible predictions," Robel explains. "But within that range of predictions there is the very real possibility that groundwater may be flowing out of the aquifer at a speed that would make it a majority, or close to a majority of the water that is underneath the ice sheet."
If those parameters are correct, that would mean there's twice as much water coming into the subglacial interface as previous estimates assumed.
Ice sheets act like a blanket, sitting over the warm earth and trapping heat on the bottom, away from Antarctica's cold atmosphere—which means the warmest place in the Antarctic ice sheet is at its bottom, not its surface. As an ice sheet thins, the warmer underground water can exfiltrate more readily, and this heat can accelerate the melting that an ice sheet experiences.
"When the atmosphere warms up, it takes tens of thousands of years for that signal to diffuse through an ice sheet of the size of the thickness of the Antarctic ice sheet," Robel explains. "But this process of exfiltration is a response to the already-ongoing thinning of the ice sheet, and it's an immediate response right now."
Broad implications
Beyond sea level rise, this additional exfiltration and melt has other implications. Some of the places of richest marine productivity in the world occur off the coast of Antarctica, and being able to better predict exfiltration and melt could help marine biologists better understand where marine productivity is occurring, and how it might change in the future.
Robel also hopes this work will open the doorway to more collaborations with groundwater hydrologists who may be able to apply their expertise to ice sheet dynamics, while Sim underscores the need for more fieldwork.
"Getting the experimentalists and observationalists interested in trying to help us better constrain some of the properties of these water-laden sediments—that would be very helpful," Sim says. "That's our largest unknown at this point, and it heavily influences the results."
"It's really interesting how there's a potential to draw heat from deeper in the system," she adds. "There's quite a lot of water that could be drawing more heat out, and I think that there's a heat budget there that could be interesting to look at."
Moving forward, collaboration will continue to be key. "I really enjoyed talking to Joyce (Sim) about these problems," Robel says, "because Joyce is an expert on heat flow and porous flow in the Earth's interior, and those are problems that I had not worked on before. That was kind of a nice aspect of this collaboration. We were able to bridge these two areas that she works on and that I work on."
More information: Alexander A. Robel et al, Contemporary ice sheet thinning drives subglacial groundwater exfiltration with potential feedbacks on glacier flow, Science Advances (2023). DOI: 10.1126/sciadv.adh3693