Wednesday, January 20, 2021

Constructing termite turrets without a blueprint

Researchers develop a mathematical model to explain the complex architecture of termite mounds

HARVARD JOHN A. PAULSON SCHOOL OF ENGINEERING AND APPLIED SCIENCES

Research News

IMAGE: The interior of a termite nest shows complex, interconnecting floors and ramps.

CREDIT: Guy Theraulaz/Harvard SEAS

Following a series of studies on termite mound physiology and morphogenesis over the past decade, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences have now developed a mathematical model to help explain how termites construct their intricate mounds.

The research is published in the Proceedings of the National Academy of Sciences.

"Termite mounds are amongst the greatest examples of animal architecture on our planet," said L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and of Physics and lead author of the study. "What are they for? How do they work? How are they built? These are the questions that have puzzled many scientists for a long time."

In previous research, Mahadevan and his team showed that day-to-night temperature variations drive convective flow in the mound that not only ventilates the colony but also moves pheromone-like cues around, which trigger building behavior in termites.

Here, the team zoomed in further to understand how termites build the intricately connected floors in individual mounds without a plan or a planner. With experimentalists from the University of Toulouse, France, led by Guy Theraulaz, the researchers mapped the interior structures of two nests using CT scans and quantified the spacing and arrangement of floors and ramps. Adding to the complexity of the nests is the fact that termites build not only simple ramps connecting adjacent floors but also spiral ramps, like those in parking garages, connecting multiple floors.

Using these visualizations and incorporating the previous findings on how factors such as daily temperature shifts and pheromone flows drive building, OEB graduate student Alexander Heyde and Mahadevan constructed a mathematical framework to explain the layout of the mound.

Heyde and Mahadevan thought of each component of the mound -- the air, the mud and the termites -- as intermixed fluids that vary in space and time.

"We can think of the collection of hundreds of thousands of termites as a fluid that can sense its environment and act upon it," said Heyde. "Then you have a real fluid, air, transporting pheromones through that environment, which drives new behaviors. Finally, you have mud, which is moved around by the termites, changing the way in which the pheromones flow. Our mathematical framework provided us with clear predictions for the spacing between the layers, and showed the spontaneous formation of linear and helical ramps."

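The model itself is not reproduced in this release, but a toy sketch in Python conveys the flavor of treating termites, pheromone and mud as coupled fields: termite density drifts up the pheromone gradient, pheromone is emitted by termites and decays, and mud accumulates wherever activity concentrates. Every equation and parameter below is an invented stand-in, not the authors' model; the point is only that a uniform state can spontaneously break into regularly spaced deposits.

```python
# Toy illustration (NOT the model from the PNAS paper): three coupled fields
# on a periodic 1D grid, mimicking the idea of termites, pheromone, and mud
# as interacting continua. All parameters are invented.
import numpy as np

n, dx, dt = 200, 1.0, 0.01
D_c, D_rho = 1.0, 0.1        # pheromone / termite diffusivities (assumed)
emit, decay = 1.0, 0.5       # pheromone emission and decay rates (assumed)
chi, rho_max = 4.0, 5.0      # chemotactic strength, crowding cap (assumed)

rng = np.random.default_rng(1)
rho = 1.0 + 0.01 * rng.standard_normal(n)   # termite "fluid", near-uniform
c = np.zeros(n)                             # pheromone concentration
m = np.zeros(n)                             # deposited mud

def lap(f):   # periodic 1D Laplacian
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

def grad(f):  # periodic central difference
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

for _ in range(8000):
    c += dt * (D_c * lap(c) + emit * rho - decay * c)
    # termites drift up the pheromone gradient; the crowding cap keeps
    # the density finite (a volume-filling chemotaxis term)
    flux = chi * rho * (1 - rho / rho_max) * grad(c)
    rho += dt * (D_rho * lap(rho) - grad(flux))
    m += dt * rho * c                        # mud piles up under activity

# peaks in the mud field play the role of regularly spaced floors
peaks = np.where((m > np.roll(m, 1)) & (m > np.roll(m, -1)))[0]
print("deposit spacing (grid units):", np.diff(peaks))
```
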
"Here is an example where we see that the usual division between the study of nonliving matter and living matter breaks down," said Mahadevan. "The insects create a micro-environment, a niche, in response to pheromone concentrations. This change in the physical environment changes the flow of pheromones, which then changes the termite behaviors, linking physics and biology through a dynamic architecture that modulates and is modulated by behavior. "

In addition to partially solving the mystery of how these mounds work and are built, the research may well have implications for swarm intelligence in a range of other systems, and even for understanding aspects of tissue morphogenesis.

###

The research was co-authored by Lijie Guo and Christian Jost. It was supported in part by the US and French National Science Foundations under grant numbers DGE-1144152, ANR-06-BYOS-0008, and PHY1606895.

Mystery of Martian glaciers revealed

Research shows Mars had six to 20 ice ages during the past 300-800 million years

COLGATE UNIVERSITY

Research News

IMAGE: This image of a glacier on Mars shows the abundance of boulders within the ice. High-resolution imaging of the surface of Mars suggests that debris-covered glacier deposits formed during multiple...

CREDIT: Joe Levy/Colgate University

In a new paper published today in the Proceedings of the National Academy of Sciences (PNAS), planetary geologist Joe Levy, assistant professor of geology at Colgate University, reveals a groundbreaking new analysis of the mysterious glaciers of Mars.

On Earth, glaciers covered wide swaths of the planet during the last Ice Age, which reached its peak about 20,000 years ago, before receding to the poles and leaving behind the rocks they had pushed along. On Mars, however, the glaciers never left, remaining frozen on the Red Planet's cold surface for more than 300 million years, covered in debris. "All the rocks and sand carried on that ice have remained on the surface," says Levy. "It's like putting the ice in a cooler under all those sediments."

Geologists, however, haven't been able to tell whether all of those glaciers formed during one massive Martian Ice Age, or in multiple separate events over millions of years. Since ice ages result from a shift in the tilt of a planet's axis (known as obliquity), answering that question could tell scientists how Mars' orbit and climate have changed over time -- as well as what kind of rocks, gases, or even microbes might be trapped inside the ice.

"There are really good models for Mars' orbital parameters for the last 20 million years," says Levy. "After that the models tend to get chaotic."

Levy concocted a plan to examine the rocks on the surface of the glaciers as a natural experiment. Since they presumably erode over time, a steady progression of larger to smaller rocks proceeding downhill would point to a single, long ice age event.

Choosing 45 glaciers to examine, Levy acquired high-resolution images collected by the Mars Reconnaissance Orbiter and set out to measure the size and number of rocks on each. With a resolution of 25 centimeters per pixel, "you can see things the size of a dinner table," Levy says.

Even at that magnification, however, artificial intelligence can't accurately determine what is or isn't a rock on rough glacier surfaces, so Levy enlisted the help of 10 Colgate students during two summers to count and measure some 60,000 big rocks. "We did a kind of virtual field work, walking up and down these glaciers and mapping the boulders," Levy says.

Levy initially panicked when, far from a tidy progression of boulders by size, the rock sizes seemed to be distributed at random. "In fact, the boulders were telling us a different story," Levy says. "It wasn't their size that mattered; it was how they were grouped or clustered."

Since the rocks were traveling inside the glaciers, they were not eroding, he realized. At the same time, they were distributed in clear bands of debris across the glaciers' surfaces, marking the limit of separate and distinct flows of ice, formed as Mars wobbled on its axis.
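
A hypothetical sketch of that clustering logic (synthetic numbers, not the team's actual pipeline): project boulder positions onto a glacier flowline, estimate their density, and count the distinct debris bands, each of which would mark a separate flow of ice.

```python
# Hypothetical illustration of band-finding along a glacier flowline.
# The boulder coordinates are synthetic stand-ins for the mapped data.
import numpy as np

rng = np.random.default_rng(42)
# pretend three distinct ice flows left debris bands at 200 m, 500 m, 800 m
centers = [200.0, 500.0, 800.0]
positions = np.concatenate(
    [rng.normal(c, 25.0, size=400) for c in centers]   # banded boulders
    + [rng.uniform(0, 1000, size=300)]                 # background scatter
)

# smoothed boulder density along the flowline (Gaussian kernel estimate)
grid = np.linspace(0, 1000, 500)
bandwidth = 20.0  # metres of smoothing (assumed)
density = np.exp(
    -0.5 * ((grid[:, None] - positions[None, :]) / bandwidth) ** 2
).sum(axis=1)

# count well-separated local maxima that rise above the background level
is_peak = (density > np.roll(density, 1)) & (density > np.roll(density, -1))
bands = grid[is_peak & (density > 1.5 * np.median(density))]
print("debris bands near:", bands.round(), "metres")  # ~[200. 500. 800.]
```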

Based on that data, Levy has concluded that Mars has undergone somewhere between six and 20 separate ice ages during the past 300-800 million years. Those findings appear in PNAS, written along with six current or former Colgate students; Colgate mathematics professor Will Cipolli; and colleagues from NASA, the University of Arizona, Fitchburg State University, and the University of Texas-Austin.

"This paper is the first geological evidence of what Martian orbit and obliquity might have been doing for hundreds of millions of years," Levy says. The finding that glaciers formed over time holds implications for planetary geology and even space exploration, he explains. "These glaciers are little time capsules, capturing snapshots of what was blowing around in the Martian atmosphere," he says. "Now we know that we have access to hundreds of millions of years of Martian history without having to drill down deep through the crust -- we can just take a hike along the surface."

That history includes any signs of life potentially present from Mars' distant past. "If there are any biomarkers blowing around, those are going to be trapped in the ice too." At the same time, eventual explorers on Mars who might need to depend on extracting fresh water from glaciers to survive will need to know that there may be bands of rocks inside them that will make drilling hazardous. Levy and his colleagues are now in the process of mapping the rest of the glaciers on Mars' surface, hoping that with the data they have, artificial intelligence can be trained to take over the hard work of identifying and counting boulders.

That will bring us one step closer to a complete planetary history of the Red Planet -- including the age-old question of whether Mars could ever have supported life.

"There's a lot of work to be done figuring out the details of Martian climate history," says Levy, "including when and where it was warm enough and wet enough for there to be brines and liquid water."

###

Astronomers dissect the anatomy of planetary nebulae using Hubble Space Telescope images

Researchers from RIT and Green Bank Observatory shed new light on nebula formation processes

ROCHESTER INSTITUTE OF TECHNOLOGY

Research News

IMAGE: On the left is an image of the Jewel Bug Nebula (NGC 7027) captured by the Hubble Space Telescope in 2019 and released in 2020. Further analysis by researchers produced...

CREDIT: STScI, Alyssa Pagan

Images of two iconic planetary nebulae taken by the Hubble Space Telescope are revealing new information about how they develop their dramatic features. Researchers from Rochester Institute of Technology and Green Bank Observatory presented new findings about the Butterfly Nebula (NGC 6302) and the Jewel Bug Nebula (NGC 7027) at the 237th meeting of the American Astronomical Society on Friday, Jan. 15.

Hubble's Wide Field Camera 3 observed the nebulae in 2019 and early 2020 using its full, panchromatic capabilities, and the astronomers involved in the project have been using emission line images from near-ultraviolet to near-infrared light to learn more about their properties. The studies were first-of-their-kind panchromatic imaging surveys designed to understand the formation process and test models of binary-star-driven planetary nebula shaping.

"We're dissecting them," said Joel Kastner, a professor in RIT's Chester F. Carlson Center for Imaging Science and School of Physics and Astronomy. "We're able to see the effect of the dying central star in how it's shedding and shredding its ejected material. We're able to see where the material that the central star has tossed away is dominated by ionized gas, where it's dominated by cooler dust, and even how the hot gas is being ionized, whether by the star's UV or by collisions caused by its present, fast winds."

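As a rough illustration of what such dissection means in practice: given two calibrated narrow-band images, a per-pixel ratio map shows where one emission process dominates another. The arrays and threshold below are invented stand-ins, not the team's data or methods.

```python
# Illustrative sketch only (not the RIT team's pipeline): compare a tracer of
# ionized gas with a tracer of dust-scattered light, pixel by pixel.
import numpy as np

rng = np.random.default_rng(7)
shape = (256, 256)
ionized = rng.random(shape)        # stand-in for an ionized-gas line image
dust = rng.random(shape) + 0.1     # stand-in for a dust-dominated image

eps = 1e-6                         # guard against empty (zero-flux) pixels
ratio = ionized / (dust + eps)

ion_dominated = ratio > 1.0        # arbitrary threshold for illustration
print(f"{ion_dominated.mean():.0%} of pixels are ionized-gas dominated")
```
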
Kastner said analysis of the new HST images of the Butterfly Nebula is confirming that the nebula was ejected only about 2,000 years ago -- an eyeblink by the standards of astronomy -- and that the S-shaped iron emission that helps give it its "wings" of gas may be even younger. Surprisingly, the team found that the star astronomers previously believed to be the nebula's central star is actually not associated with the nebula at all, but is a foreground star much closer to Earth. Kastner said he hopes that future studies with the James Webb Space Telescope could help locate the actual central star.

The team's ongoing analysis of the Jewel Bug Nebula is built on a 25-year baseline of measurements dating back to early Hubble imaging. Paula Moraga Baez, an astrophysical sciences and technology Ph.D. student from DeKalb, Ill., called the nebula "remarkable for its unusual juxtaposition of circularly symmetric, axisymmetric, and point-symmetric (bipolar) structures." Moraga noted, "The nebula also retains large masses of molecular gas and dust despite harboring a hot central star and displaying high excitation states."

Jesse Bublitz '20 Ph.D. (astrophysical sciences and technology), now a postdoctoral researcher at Green Bank Observatory, has continued analysis of NGC 7027 with radio images from the Northern Extended Millimeter Array (NOEMA) Telescope, where he identified molecular tracers of ultraviolet and X-ray light that continue to shape the nebula. The combined observations from telescopes at other wavelengths, like Hubble, and bright molecules CO+ and HCO+ from NOEMA indicate how different regions of NGC 7027 are affected by the irradiation of its central star.

"We're very excited about these findings," said Bublitz. "We had hoped to find structure that clearly showed CO+ and HCO+ spatially coincident or entirely in distinctive regions, which we did. This is the first map of NGC 7027, or any planetary nebula, in the molecule CO+, and only the second CO+ map of any astronomical source."

###

In addition to Kastner, Moraga, and Bublitz, the research team involved in the HST imaging work includes Rodolfo Montez Jr. '10 Ph.D. (astrophysical sciences and technology) from Harvard-Smithsonian CfA; Bruce Balick from University of Washington; as well as Adam Frank and Eric Blackman from University of Rochester. Bublitz's international team of collaborators on radio molecular line imaging of NGC 7027 includes Kastner, Montez Jr., and astrophysicists from Spain, France, and Brazil.

For more information, contact Luke Auburn at 585-490-3198, luke.auburn@rit.edu, or on Twitter: @lukeauburn.

Blockchain technology to optimize P2P energy trading

Mitsubishi Electric and Tokyo Tech develop technology for more flexible trading environments and expanded use of surplus electricity from renewable energy

TOKYO INSTITUTE OF TECHNOLOGY

Research News

IMAGE: A research team of Tokyo Institute of Technology, in collaboration with Mitsubishi Electric Corporation, has developed an original blockchain technology that can optimize peer-to-peer (P2P) energy...

CREDIT: Symposium on Cryptography and Information Security

A Tokyo Tech research team led by Specially Appointed Professor Takuya Oda of the Institute of Innovative Research and Professor Keisuke Tanaka of the School of Computing, in collaboration with Mitsubishi Electric Corporation, has developed an original blockchain[1] technology that can optimize peer-to-peer (P2P) energy trading[2]. The technology is expected to contribute to more effective use of surplus electricity from renewable energy by creating trading environments that flexibly respond to shared trading needs, particularly to maximize the amount of surplus electricity available in the market at any given time. Beginning in April, the P2P energy trading system's performance will be evaluated to further optimize the algorithm as required, with the aim of commercializing it as early as possible.

Flexible P2P energy trading using blockchain technology to optimally match trades

General blockchain technologies, such as those used for trading cryptocurrencies, require a large number of high-performance computers to perform massive calculations, or mining[3], when determining the creator of a new block to record trading information. To clear buy and sell orders, a conventional method[4] can be used to match conditions and process transactions, but this does not optimize matches.

Mitsubishi Electric and Tokyo Tech have developed an original blockchain technology to optimize P2P energy trading. Their newly devised distributed-optimization algorithm enables customer computers to share trading goals and data and then optimally match buy and sell orders using minimal computations. Also, the new mining method is executable on a micro-computing server.

Conventional blockchain: mining to determine block generators; runs on high-performance computers.

New system's blockchain: mining to optimize trading; runs on small computers.

As shown in Fig. 2, using the new technology involves four steps. In the first step, information on buy and sell orders with a common trading goal (market surplus, profit, etc.) is shared by computing servers during a predetermined timeframe. Second, each server searches for buy and sell orders matched to the common goal from the first step. Third, each server shares its search results. In the fourth and final step, each server receives the shared results and generates a new block by selecting the trades that best meet the common goal, which it then adds to its own copy of the blockchain.
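
A minimal sketch of those four steps might look like the following Python program. It is illustrative only -- the companies' actual message formats, search algorithm and block structure are not detailed in this release -- and it takes "maximize total traded energy" as the shared goal:

```python
# Illustrative sketch of the four-step flow (not Mitsubishi Electric/Tokyo
# Tech's actual protocol). Each "server" runs a randomized greedy search;
# equivalent choices are tie-broken at random, as the text describes.
import random
from dataclasses import dataclass, replace

@dataclass
class Order:
    owner: str
    kind: str     # "buy" or "sell"
    price: float  # e.g. yen per kWh
    qty: float    # kWh

def search_matching(orders, seed):
    """Step 2: one server's search for trades serving the shared goal."""
    rng = random.Random(seed)
    sells = sorted((replace(o) for o in orders if o.kind == "sell"),
                   key=lambda o: (o.price, rng.random()))
    buys = sorted((replace(o) for o in orders if o.kind == "buy"),
                  key=lambda o: (-o.price, rng.random()))
    trades = []
    for b in buys:
        for s in sells:
            if b.qty > 0 and s.qty > 0 and s.price <= b.price:
                q = min(b.qty, s.qty)
                trades.append((b.owner, s.owner, q))
                b.qty -= q
                s.qty -= q
    return trades

def traded_volume(trades):                 # the shared goal to maximize
    return sum(q for _, _, q in trades)

# Step 1: orders and the common goal are shared among all servers.
orders = [Order("A", "sell", 20, 5), Order("B", "sell", 24, 3),
          Order("C", "buy", 25, 4), Order("D", "buy", 21, 6)]

# Steps 2-3: each server searches independently and shares its result.
results = [search_matching(orders, seed) for seed in range(4)]

# Step 4: every server picks the best shared result and appends the block.
best = max(results, key=traded_volume)
ledger = [{"trades": best, "volume_kwh": traded_volume(best)}]
print(ledger)
```

Because every server receives the same shared results and applies the same selection rule, all copies of the ledger stay consistent without the massive proof-of-work computation of a conventional blockchain.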

P2P energy trading benefits consumers by enabling them to engage in direct trading as buyers and sellers, sometimes making trades even above offer prices or below bid prices if the right match is found. In addition, since new offers and bids can be issued, a customer who fails to make a trade can improve the chances of making one by changing the price or quantity in the next offer or bid after referring to the previous offer/bid conditions.

The trading goal shared among consumers can be changed to meet specific needs. For example, if the goal is to maximize the amount of surplus electricity available in the market, the system searches for the offer-bid matches that best serve this common goal. If a surplus exists, the trading price would likely decrease according to market principles, and demand for recharging electric vehicles would likely increase due to cheaper prices. As a result, surplus power usage would be maximized without retail power companies having to respond to market fluctuations.

If the priority is the profits of prosumers and consumers, the shared goal can be adjusted to increase total profits, or raise the lowest profits, of all prosumers and consumers. By allowing the shared goal to be adjusted, the system enables flexible P2P energy trading.

To ensure fair trading, the decentralized search for a solution occurs in parallel on multiple computers, and equivalent[5] matches are selected at random.

Background

As a measure against global warming, Japan introduced a feed-in tariff (FIT) system to enable electric utilities to purchase electricity generated from solar and other renewable energy sources at fixed prices. The law for this system, however, has been gradually retired since November 2019. With buyer-seller contracts under the FIT system now expiring, prosumers need to find alternative retail-power suppliers to sell their surplus electricity at acceptable prices.

As a new method of trading surplus power, P2P energy trading is attracting attention by allowing prosumers to directly trade surplus energy at optimal prices. Since the current Electricity Utilities Industry Law allows only retail power companies to sell electricity to consumers, demonstrations are now being conducted to test the use of blockchain technology for P2P power transactions under the management of retail electric operators, aiming to verify the effectiveness of such transactions while identifying any possible issues.

###

Footnotes

    [1] A distributed ledger technology that connects blocks of transaction information in a chain along a timeline

    [2] Direct trading of electricity between consumers (users) and prosumers (producers who are also users)

    [3] The use of extensive iterative calculations to search for a value that satisfies an uncommon condition. In a typical blockchain, the first person to succeed in a search receives a processing fee.

    [4] A contract method used in the stock and wholesale electricity markets, known as the continuous double-auction method.

    [5] Orders where all information, except order name, is the same, including order time, price, quantity and type of buy or sell.

About Tokyo Institute of Technology

Tokyo Tech stands at the forefront of research and higher education as the leading university for science and technology in Japan. Tokyo Tech researchers excel in fields ranging from materials science to biology, computer science, and physics. Founded in 1881, Tokyo Tech hosts over 10,000 undergraduate and graduate students per year, who develop into scientific leaders and some of the most sought-after engineers in industry. Embodying the Japanese philosophy of "monotsukuri," meaning "technical ingenuity and innovation," the Tokyo Tech community strives to contribute to society through high-impact research. https://www.titech.ac.jp/english/

How to train a robot (using AI and supercomputers)

UT Arlington computer scientists use TACC systems to generate synthetic objects for robot training

UNIVERSITY OF TEXAS AT AUSTIN, TEXAS ADVANCED COMPUTING CENTER

Research News

IMAGE: Examples of 3D point clouds synthesized by the progressive conditional generative adversarial network (PCGAN) for an assortment of object classes. PCGAN generates both geometry and color for point clouds, without...

CREDIT: William Beksi, UT Arlington

Before he joined the University of Texas at Arlington as an Assistant Professor in the Department of Computer Science and Engineering and founded the Robotic Vision Laboratory there, William Beksi interned at iRobot, the world's largest producer of consumer robots (mainly through its Roomba robotic vacuum).

To navigate built environments, robots must be able to sense and make decisions about how to interact with their locale. Researchers at the company were interested in using machine and deep learning to train their robots to learn about objects, but doing so requires a large dataset of images. While there are millions of photos and videos of rooms, none were shot from the vantage point of a robotic vacuum. Efforts to train using images with human-centric perspectives failed.

Beksi's research focuses on robotics, computer vision, and cyber-physical systems. "In particular, I'm interested in developing algorithms that enable machines to learn from their interactions with the physical world and autonomously acquire skills necessary to execute high-level tasks," he said.

Years later, now with a research group including six PhD computer science students, Beksi recalled the Roomba training problem and began exploring solutions. A manual approach, used by some, involves capturing environments (including rented Airbnb houses) with an expensive 360-degree camera and stitching the images back into a whole with custom software. But Beksi believed the manual capture method would be too slow to succeed.

Instead, he looked to a form of deep learning known as generative adversarial networks, or GANs, where two neural networks contest with each other in a game until the 'generator' of new data can fool a 'discriminator.' Once trained, such a network would enable the creation of an infinite number of possible rooms or outdoor environments, with different kinds of chairs or tables or vehicles with slightly different forms, but still -- to a person and a robot -- identifiable objects with recognizable dimensions and characteristics.
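
In code, the adversarial game is compact. The sketch below is a minimal GAN fitted to a toy 2-D distribution -- not Beksi's point-cloud PCGAN -- with both networks and all hyperparameters invented for illustration:

```python
# Minimal GAN sketch in PyTorch: a generator learns to fool a discriminator
# into accepting its samples as draws from the real distribution (a noisy
# ring of 2-D points here). Illustrative only; not the PCGAN architecture.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=128):
    """Stand-in 'real data': points on a noisy unit circle."""
    angle = torch.rand(n, 1) * 6.2832
    return torch.cat([angle.cos(), angle.sin()], dim=1) + 0.05 * torch.randn(n, 2)

G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))   # generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # discriminator update: label real points 1, generated points 0
    real, fake = real_batch(), G(torch.randn(128, 8)).detach()
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # generator update: try to make the discriminator call its samples real
    fake = G(torch.randn(128, 8))
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(5, 8)))   # trained samples should hug the unit circle
```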

"You can perturb these objects, move them into new positions, use different lights, color and texture, and then render them into a training image that could be used in a dataset," he explained. "This approach would potentially provide limitless data to train a robot on."

"Manually designing these objects would take a huge amount of resources and hours of human labor while, if trained properly, the generative networks can make them in seconds," said Mohammad Samiul Arshad, a graduate student in Beksi's group involved in the research.

CAPTION

Examples of 3D point clouds synthesized by a progressive conditional generative adversarial network (PCGAN).

CREDIT

William Beksi, Mohammad Samiul Arshad, UT Arlington

GENERATING OBJECTS FOR SYNTHETIC SCENES

After some initial attempts, Beksi realized his dream of creating photorealistic full scenes was presently out of reach. "We took a step back and looked at current research to determine how to start at a smaller scale - generating simple objects in environments."

Beksi and Arshad presented PCGAN, the first conditional generative adversarial network to generate dense colored point clouds in an unsupervised mode, at the International Conference on 3D Vision (3DV) in Nov. 2020. Their paper, "A Progressive Conditional Generative Adversarial Network for Generating Dense and Colored 3D Point Clouds," shows their network is capable of learning from a training set (derived from ShapeNetCore, a CAD model database) and mimicking a 3D data distribution to produce colored point clouds with fine details at multiple resolutions.

"There was some work that could generate synthetic objects from these CAD model datasets," he said. "But no one could yet handle color."

In order to test their method on a diversity of shapes, Beksi's team chose chairs, tables, sofas, airplanes, and motorcycles for their experiment. The tool allows the researchers to access the near-infinite number of possible versions of the set of objects the deep learning system generates.

"Our model first learns the basic structure of an object at low resolutions and gradually builds up towards high-level details," he explained. "The relationship between the object parts and their colors -- for example, the legs of the chair/table are the same color while the seat/top are contrasting -- is also learned by the network. We're starting small, working with objects, and building to a hierarchy to do full synthetic scene generation that would be extremely useful for robotics."

They generated 5,000 random samples for each class and performed an evaluation using a number of different methods. They evaluated both point cloud geometry and color using a variety of common metrics in the field. Their results showed that PCGAN is capable of synthesizing high-quality point clouds for a disparate array of object classes.
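
One common geometry metric in this field is the Chamfer distance, which averages, for each point in one cloud, the squared distance to its nearest neighbor in the other cloud, in both directions. The sketch below illustrates the idea; it is not the paper's actual evaluation code, and the clouds are random stand-ins.

```python
# Symmetric Chamfer distance between two point clouds, in plain NumPy.
# An illustrative example of a common point-cloud metric (assumed here,
# not taken from the paper's evaluation suite).
import numpy as np

def chamfer(a, b):
    """a: (N, 3) array, b: (M, 3) array of XYZ points."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)  # (N, M) squared dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(0)
generated = rng.random((1024, 3))   # stand-in for a synthesized point cloud
reference = rng.random((1024, 3))   # stand-in for a held-out real cloud
print("Chamfer distance:", chamfer(generated, reference))
```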

SIM2REAL

Another issue that Beksi is working on is known colloquially as 'sim2real.' "You have real training data, and synthetic training data, and there can be subtle differences in how an AI system or robot learns from them," he said. "'Sim2real' looks at how to quantify those differences and make simulations more realistic by capturing the physics of that scene - friction, collisions, gravity -- and by using ray or photon tracing."

The next step for Beksi's team is to deploy the software on a robot, and see how it works in relationship to the sim-to-real domain gap.

The training of the PCGAN model was made possible by TACC's Maverick 2 deep learning resource, which Beksi and his students were able to access through the University of Texas Cyberinfrastructure Research (UTRC) program, which provides computing resources to researchers at any of the UT System's 14 institutions.

"If you want to increase resolution to include more points and more detail, that increase comes with an increase in computational cost," he noted. "We don't have those hardware resources in my lab, so it was essential to make use of TACC to do that."

In addition to computation needs, Beksi required extensive storage for the research. "These datasets are huge, especially the 3D point clouds," he said. "We generate hundreds of megabytes of data per second; each point cloud is around 1 million points. You need an enormous amount of storage for that."

While Beksi says the field is still a long way from having robust robots that can operate autonomously for long periods of time, achieving that would benefit multiple domains, including health care, manufacturing, and agriculture.

"The publication is just one small step toward the ultimate goal of generating synthetic scenes of indoor environments for advancing robotic perception capabilities," he said.

Appearance, social norms keep students off Zoom cameras

CORNELL UNIVERSITY

Research News

ITHACA, N.Y. - When the semester shifted online amid the COVID-19 pandemic last spring, Cornell University instructor Mark Sarvary and his teaching staff decided to encourage - but not require - students to switch on their cameras.

It didn't turn out as they'd hoped.

"Most of our students had their cameras off," said Sarvary, director of the Investigative Biology Teaching Laboratories in the College of Agriculture and Life Sciences (CALS).

"Students enjoy seeing each other when they work in groups. And instructors like seeing students, because it's a way to assess whether or not they understand the material," Sarvary said. "When we switched to online learning, that component got lost. We wanted to investigate the reasons for that."

Sarvary and co-instructor Frank Castelli, a CALS Active Learning Initiative education postdoctoral researcher, surveyed the 312 students in the class at the end of the semester to figure out why they weren't using their cameras - and to try to come up with ways to turn that trend around.

They found that while some students had concerns about the lack of privacy or their home environment, 41% of the 276 respondents cited their appearance, and more than half of those who selected "other" as their reason for keeping their camera off explained that it was the norm. This suggested that explicitly encouraging camera use could boost participation without adverse effects, the researchers said.

"We felt it would create an undue burden and add stress in an already stressful time to require the cameras to be on, and we found this could disproportionately affect certain groups of students, such as underrepresented minorities," said Castelli, first author of "Why Students Do Not Turn on Their Video Cameras During Online Classes and an Equitable and Inclusive Plan to Encourage Them to Do So," which published Jan. 10 in Ecology and Evolution.

In the survey, Castelli and Sarvary found that among underrepresented minorities, 38% said they were concerned about other people being seen behind them, and 26% were concerned about their physical location being visible; while among non-underrepresented minorities, 24% were worried about people behind them and 13% about their physical locations.

"It's a more inclusive and equitable strategy to not require the cameras but to instead encourage them, such as through active learning exercises," Castelli said. "This has to be done carefully so it doesn't create an environment where you're making those without cameras on feel excluded. But at the same time, if you don't explicitly ask for the cameras and explain why, that can lead to a social norm where the camera is always off. And it becomes a spiral of everyone keeping it off, even though many students want it on."

Establishing camera use as the norm, explaining the reasons that cameras improve the class and employing active learning techniques and icebreakers, such as beginning each class with a show-and-tell, are techniques that could boost participation, the authors suggested in the study.

"Active learning plays an important role in online learning environments," Sarvary said. "Students may feel more comfortable turning on their cameras in breakout rooms. Polling software or Zoom chats are alternatives that can help the instructor assess student learning, even without seeing nodding or smiling or confused expressions."

The authors also suggested instructors address potential distractions, give breaks to help maintain attention, and poll their students to learn about other potential barriers to camera use or participation.

Though they have not yet formally studied the effect, the instructors in the 24 sections of the laboratory class all observed improved camera participation when they used some of these strategies last fall.

"We wanted to develop an engaging and inclusive virtual learning environment, using the best pedagogical methods," Sarvary said. "That's why we wanted to know why the students are not turning their cameras on, rather than just assuming or, as some instructors do, requiring them to turn their cameras on. We wanted to take an education research approach and figure out the best practices."

###

The research was supported by the CALS Active Learning Initiative Grants Program.

Making microwaves safer for children

New journal article recounts Rush pediatricians' campaign to protect children from severe burns

RUSH UNIVERSITY MEDICAL CENTER

Research News

A 15-year research and advocacy effort to make microwave ovens safer has led to a change in national manufacturing standards that will make microwaves more difficult for young children to open, protecting them from the severe microwave-related burns that scar hundreds of kids under 5 years old in the United States each year. Researchers at Rush University Medical Center and other leaders of the campaign, who worked diligently to document the frequency and severity of these injuries and young children's vulnerability to them, published the results of their efforts in The Journal of Pediatrics on Jan. 20.

"The efforts we have made over the last 15 years have recently led to a change in how microwaves are made. In 2023, manufacturers will be required to incorporate two distinct actions into opening the door of a microwave oven," said Kyran Quinlan MD, MPH, a professor of pediatrics and the director of the Division of General Pediatrics at Rush University Medical Center. Quinlan and fellow Rush pediatrician Gina Lowell, MD, worked closely with colleagues from the University of Chicago Medical Center Burn Unit to achieve the goal of child resistant microwave doors.

"We believed that if young children were less able to open microwave oven doors, they would be less likely to be able to remove the heated items, such as liquids and noodle soups, and they would therefore be less at risk for severe burns," Quinlan added. "These burns are terribly painful, require burn unit care, and can be life-changing because of the permanent scarring that often results."

An abstract presenting the results from the advocacy effort was chosen as the Program Description Abstract of the Year during the annual conference of the Injury Free Coalition for Kids in December 2019.

According to the American Burn Association, 22.5% of the patients in burn units across the United States are children, and most of the children are admitted for scalds rather than burns suffered in fires.

The researchers' 15-year journey began with documenting the issue, and learning how microwave ovens are regulated and how to effectively advocate for child-resistant microwave oven doors. In 2008, the researchers published an analysis of three years of burn unit data that found that a significant percentage of burns involved young children opening a microwave themselves and removing and spilling the oven's heated contents.

In that report, the youngest child burned was 18 months old, and nearly half of the cases required skin grafts -- a process of removing skin from one part of the body to cover the section of skin that was burned.

The researchers then evaluated and documented the abilities of healthy young children ages 15 months to 4 years to use microwave ovens. They recorded whether the children could open the microwave oven door with either a push mechanism or a pull mechanism, turn on the microwave and remove its contents.

The study revealed that children as young as 17 months could open both a push- and a pull-open microwave oven door, remove what was in the microwave, and even turn it on. Nearly all children were able to perform each of these tasks by the age of 2.

The researchers then worked to submit a proposed change in the design requirements of microwave oven doors to make them more difficult for young children to open. Quinlan and Lowell worked with the child product safety advocacy group Kids in Danger to enlist college engineering students to design child-resistant microwave doors to demonstrate their feasibility. The team also presented their research at national meetings and created a short video to put a face with the statistics about microwave-related childhood burns.

Eventually, Quinlan and Marla Robinson MSc, OTR/L, assistant director of therapy services at University of Chicago Medical Center and a partner in the advocacy campaign, became voting members of the Underwriters Laboratories Standards Technical Panel for microwave ovens, which sets the standards required for any microwave sold in the United States.

After previous failed attempts to change the standard for microwave oven doors, the panel passed the change by a single vote in September 2018. The new microwave ovens also will be required to include labels warning families of the risk of severe scalds to young children.

"Early on, we felt that making microwave oven doors harder to open would truly protect young children from these burns, which were frequently severe," Quinlan said. "This took longer than we expected, but it was so worth it.

"We knew we had work to do to share this idea and to provide enough information to convince others who could be a part of the change we sought, but ultimately, our idea was a good one, and with persistence we realized our goal."

###

 

Breakthrough in understanding 'tummy bug' bacteria

UNIVERSITY OF EXETER

Research News

IMAGE: Scanning electron microscope (SEM) image of healthy, growing Vibrio parahaemolyticus.

CREDIT: University of Exeter Bioimaging Unit

Scientists have discovered how bacteria commonly responsible for seafood-related stomach upsets can go dormant and then "wake up".

Vibrio parahaemolyticus is a marine bacterium that can cause gastroenteritis in humans when eaten in raw or undercooked shellfish such as oysters and mussels.

Some of these bacteria are able to turn dormant in poor growth conditions such as cold temperatures - and can remain in that state of hibernation for long periods before resuscitating.

University of Exeter scientists have identified a population of these dormant cells that is better at waking up, and have discovered an enzyme involved in that revival process.

"Most of these bacteria die when they encounter poor growth conditions, but we identified sub-populations of bacteria that are able to stay dormant for long periods of time," said lead author Dr Sariqa Wagley, of the University of Exeter.

"We found that this population has a better ability to revive when conditions improve.

"Our tests show that when these dormant bacteria are revived they are just as virulent and able to cause disease."

The findings could have implications for seafood safety, as dormant cells are not detectable using routine microbiological screening tests and the true bacterial load (amount of bacteria) could be underestimated.

"When they go dormant, these bacteria change shape, reduce respiration activities and they don't grow like healthy bacteria on agar plates used in standard laboratory tests, so they are much harder to detect," Dr Wagley explained.

"Using a range of tools, we were able to find dormant bacteria in seafood samples and laboratory cultures and examine their genetic content for clues about how they might survive for long periods.

"It is important to note that thorough cooking kills bacteria in seafood.

"Our results may also help us predict the conditions that dormant bacteria need in order to revive."

Working with the seafood industry, the Exeter team identified a lactate dehydrogenase enzyme that breaks down lactic acid into pyruvate, a key component of several metabolic pathways (chemical reactions in a cell).

CAPTION

Scanning Electron Microscope (SEM) image of dormant Vibrio parahaemolyticus.

CREDIT

University of Exeter Bioimaging Unit

The findings suggest that lactate dehydrogenase is essential both for maintaining bacterial dormancy and resuscitation back to an active form.

Vibrio parahaemolyticus usually grows in warm and tropical marine environments, although Dr Wagley said that due to rising sea temperatures in recent years it is now prevalent in UK waters during the summer months.

During the winter, it is not detected in the marine environment around the UK and it is thought to die due to the cold winter temperatures.

This study could explain how Vibrio parahaemolyticus is able to re-emerge in the environment during the summer.

###

The study was partly funded by the Biotechnology and Biological Sciences Research Council (BBSRC), with additional funding and support from Lyons Seafoods.

The paper, published in the journal PLOS Pathogens, is entitled: "Bacterial dormancy: a subpopulation of viable but non-culturable cells demonstrates better fitness for revival."

Exploration of toxic Tiger Rattlesnake venom advances use of genetic science techniques

In deciphering a simple but particularly deadly venom, the research opens avenues for exploring how genes produce traits

UNIVERSITY OF SOUTH FLORIDA (USF INNOVATION)

Research News

VIDEO: A research team led by the University of South Florida studied the genome of the Tiger Rattlesnake.

CREDIT: Michael P. Hogan, Florida State University

The Tiger Rattlesnake possesses the simplest yet most toxic venom of any rattlesnake species, and now new research from a team led by a University of South Florida biologist can explain the genetics behind the predator's fearsome bite.

In a study published in the new edition of "Proceedings of the National Academy of Sciences," USF Department of Integrative Biology Assistant Professor Mark Margres and colleagues across the southeastern United States have sequenced the genome of the Tiger Rattlesnake to understand the genotype of the venom trait. Despite the simplicity of the Tiger Rattlesnake's venom, Margres says it is roughly 40 times more toxic than the venom of the Eastern Diamondback Rattlesnake found here in Florida.

Their work is the most complete characterization of the venom gene-regulatory network to date, and its identification of key mechanisms behind the particularly toxic venom will help scientists answer a wide array of genetic questions.

"Simple genotypes can produce complex traits," Margres said. "Here, we have shown the opposite is also true - a complex genotype can produce simple traits."

CAPTION

Mark Margres photographs an Eastern Diamondback Rattlesnake at Caladesi Island, Florida.

CREDIT

University of South Florida


Margres collaborated with colleagues at Clemson University, Florida State University and the University of South Alabama on the project, which sought to explain whether trait differences derive from differences in the number of genes, in their sequence, or in how they are regulated. Their work marks only the second time a rattlesnake genome has been decoded.

An organism's genotype is the set of genes it carries, and its phenotype is all of its observable characteristics, which can be influenced by its genes, the environment in which it lives, and other factors. Evolutionary biologists work to understand how genes influence the variation in phenotype among otherwise similar organisms. In this case, they looked at why different species of rattlesnakes differ in venom composition and toxicity.

Tiger Rattlesnakes are native to the Sonoran Desert of southern Arizona and northern Mexico, where the relatively small pit viper preys on lizards and rodents. While some species of rattlesnakes have complex venoms that are the result of scores of genes, Margres said the Tiger Rattlesnake's venom is quite simple: as few as 15 of its 51 toxin-producing genes actively drive the production of the proteins and peptides that attack its prey's nervous system, force blood pressure to drop, and cause blood clotting to cease.

The team found that the number of venom genes greatly exceeds the number of proteins produced in the simple phenotype, indicating that a complex process lies at the heart of the toxic venom and that the Tiger Rattlesnake even has toxin genes to spare.

CAPTION

The Tiger Rattlesnake used in the study was found in the Santa Rita Mountains in Arizona.

"Only about half of the venom genes in the genotype were expressed," Margres said. "To me, the interesting part is why are the non-expressed genes still present? These genes can make functional toxins, they just don't. That needs to be explored further."

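In expression terms, that observation amounts to counting toxin genes whose transcripts rise above a detection threshold. A brief sketch with invented numbers (hypothetical data, not the study's):

```python
# Hypothetical illustration: count toxin genes expressed above a cutoff.
# Values and the threshold are invented; the study's data are not used here.
import numpy as np

rng = np.random.default_rng(3)
# pretend expression levels (e.g., TPM) for 51 toxin genes: some strongly
# expressed, the rest effectively silent
tpm = np.concatenate([rng.uniform(50, 500, 15), rng.uniform(0, 0.5, 36)])
rng.shuffle(tpm)

threshold = 1.0                      # detection cutoff (assumed)
expressed = int((tpm > threshold).sum())
print(f"{expressed} of {tpm.size} toxin genes expressed")   # -> 15 of 51
```
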
Beyond understanding this one species of venomous snake, Margres said the research will help advance genetic science by showing that techniques commonly used in genetic research on mice and fruit flies can also work when applied to less-studied organisms like snakes. The team used genetic sequencing techniques that are common in human genetics research and, in doing so, opened the door for scientists to understand the genotype-phenotype relationship in many other organisms.

Another potential side benefit of the research, Margres said, is that snake venom is used in human medicine to combat stroke and high blood pressure. The more scientists understand about venom, the better medical researchers can apply that knowledge in drug discovery and development.

###

The research was funded by the National Science Foundation and Clemson University.