Monday, May 11, 2026

New book ‘AI TO EYE’ brings together 40+ voices from science, art, and media to ask: how do we really want to live with AI?




Journal Center of Harbin Institute of Technology

AI TO EYE

AI TO EYE: Between Code and Conscience offers a concise and vivid portrait of how artificial intelligence is reshaping contemporary society.

Credit: publisher/author/designer of AI TO EYE: Between Code and Conscience






As artificial intelligence reshapes education, healthcare, work, and creativity, public debate too often swings between hype and fear. A new book from SmartBot editorial board member Prof. Robert Riener cuts through the noise — not with technical jargon or a single expert opinion, but with a chorus of human voices.

 

Artificial intelligence (AI) is transforming the world at a breathtaking pace. Few technologies have penetrated our lives so deeply in such a short time, affecting education, work, health, art, and communication. This rapid transformation both fascinates and unsettles. Public discourse swings between visions of unbounded optimism and apocalyptic warnings. Media and films amplify fears of losing control or employment, while inflated promises can strain trust in the technology itself. Amid all the hype and alarm, what we truly need is in danger of being overlooked: a thoughtful, grounded conversation about how we want to coexist with this new form of intelligence.

 

AI TO EYE seeks to open that space. Rather than offering a technical manual or a single interpretive voice, the book captures the AI moment as it is currently unfolding, through a carefully curated chorus of perspectives. It brings together contributions from science, business, art, journalism, and media. More than 40 individuals from California’s Silicon Valley and Silicon Beach, the symbolic epicenters of the digital world, share their views. Its contributors include international leaders and visionaries as well as renowned scientists, journalists, artists, composers, film producers, actors, an astronaut, and a Disney executive. Others come from the German-speaking world but maintain close personal or professional ties to California. This mosaic took shape during Riener’s research stay in the summer of 2025, when he was a Thomas Mann Fellow in California.

 

"I aim to examine the relationship between humans and machines not only from a technical standpoint but also from a cultural and societal one. AI TO EYE is neither a textbook nor a collection of scientific reports; it brings together essays and concise statements in a deliberately polyphonic exploration of artificial intelligence’s role in a society in flux. These voices do not advance a single argument. Instead, they challenge and complement one another, revealing tensions and contradictions, and allowing society itself to speak back to the technology that is increasingly shaping it. Together, they paint a vivid, often surprising, picture of how AI is reshaping our self-understanding and what it discloses about us."

———Robert Riener

 

The cultural engagement with artificial intelligence is by no means new. As early as 1968, in 2001: A Space Odyssey, Stanley Kubrick presented one of the most precise and simultaneously poetic visions of machine intelligence. The onboard computer H.A.L. 9000 initially appears as the ideal rational system, devoted to its purpose: “I am putting myself to the fullest possible use, which is all, I think, that any conscious entity can ever hope to do.” Yet this apparent perfection begins to shift. H.A.L. no longer merely serves the human crew but starts to assert control over them, captured in one of the most iconic lines of technological defiance: “I am sorry, Dave. I’m afraid I can’t do that.” In the end, the system itself seems to unravel, losing not only its function but gaining something eerily human in the process: “Dave, my mind is going. I can feel it. I can feel it. My mind is going …” Since then, this theme has echoed through film history, from Colossus: The Forbin Project to Blade Runner and The Terminator, to Her and Ex Machina. The central question remains the same: Where does the machine end, and where does the human begin?

 

Today, as AI is becoming an inseparable part of daily life, this question gains renewed urgency. AI TO EYE gathers voices that do not seek to instruct but to explore, that do not judge but observe, and that speak not merely about technology, but about the society that produces, deploys, and negotiates it.

 

The book invites readers to see AI as a mirror of our times, as an expression of our creativity, our fears, and our desire for understanding and control. It addresses anyone curious about what AI reveals about us and willing to meet it, unflinchingly, eye to eye.

 

About the content:

The book contains 14 essays, each followed by roughly 15–20 quotations on the corresponding theme. The quotes come from influential, in some cases famous, figures whom Riener interviewed during his stay as a Thomas Mann Fellow in California. Here is the list of essays:

Essay 1   From Myths to Machines: How AI Learned to Think. By Robert Riener (Zurich, L.A.)

Essay 2   Chances of AI for Healthcare and Beyond. By Julia Vogt (Zurich)

Essay 3   AI: A Tool for Inclusion? By Robert Riener (Zurich, L.A.)

Essay 4   AI and Education: A Student Perspective. By Luke Reinkensmeyer (Irvine)

Essay 5   Is AI Disrupting the Path from Campus to Career? By Ursula Renold (Zurich)

Essay 6   AI and the Arts: Risks, Possibilities and Human Responsibility. By Kelli Sharp (Irvine)

Essay 7   Aura Farming: Can AI Generate Rizz? By Renée Reizman (L.A.)

Essay 8   The Infinite Rehearsal: Music and AI. By Steven Walter (Bonn)

Essay 9   Reflections on AI Privacy and Security. By Verena Zimmermann (Zurich)

Essay 10 Outgrowing the Paperclip Obsession: There Is Hope That AI Will Become Ethical. By Haewon Jeong (Santa Barbara)

Essay 11 AI and Intellectual Property: Evolution, Disruption, or Both? By Markus Hauschild (Pasadena)

Essay 12 When Algorithms Meet Accountability: AI and the Future of Journalism. By Lukas Görög (Zurich)

Essay 13 Could One Steer Humans and Societies with Generative AI? By Dirk Helbing (Zurich)

Essay 14 After Intelligence: On What Remains Human. By Robert Riener (Zurich, L.A.)
 

Product Details

Title: AI TO EYE: Between Code and Conscience

Author: Robert Riener

Publisher: vdf Hochschulverlag AG (ETH Zurich)

ISBN-10: 372814228X

ISBN-13: 978-3728142283

Print Length: 144 pages

Language: English

Retail Price: CHF 39.00 / EUR 42.00 

Formats: Paperback, eBook

Availability: Direct from publisher at https://vdf.ch/ai-to-eye.html or via Amazon


AI-generated images of depression depict more stereotypes and arouse greater stigmatization

These are the findings of a UPF study that analyzed the views of associations of people with depression, young people, and science and health communication professionals




Universitat Pompeu Fabra - Barcelona






Images generated using artificial intelligence (AI) depict more stereotypes and stigmas around depression than images used by the media to illustrate the disease. This is the main conclusion of a study on the perception held by different groups –including associations of patients, young people and communication professionals– of the images used by the media when talking about depression. “The images generated by AI depict more concepts related to stigma such as marginalization or social exclusion”, warns Núria Saladié, first author of the study and a member of the Science, Communication and Society Studies Centre (CCS) at Pompeu Fabra University (UPF). According to the authors, in order to convey news about mental health responsibly, avoiding reproducing stereotypes, there is a need to understand that technology is not neutral and to take into account the recommendations issued by patient associations.  

AI-generated images tend to depict people alone, in the shadows or against a backlight, with their faces hidden and not taking part in any activity. This accentuates stereotypes and stigma and has a negative effect on people suffering from depression. These are the conclusions of a study published in the journal JMIR Human Factors, which examined how different population groups, including patient associations, young people and communication professionals, perceive the images used by the media to depict the disease.

“Many AI-generated images do not reflect the diversity of experiences associated with the disease”, explains Carolina Llorente, also an author of the study and a researcher at the CCS-UPF. Llorente highlights that “being able to take into account the vision of people who have experienced the disease up close has been one of the most valuable aspects to avoid perpetuating stereotypes”.

The study also reveals that when people know that the image has been generated by AI they are more critical than when they do not, which suggests that transparency around the use of AI can influence the way these representations are interpreted. “AI is already being used –and will be increasingly used– in mental health communication”, Saladié explains. And she adds, “If we want this communication to be responsible, we require a more careful and critical approach to the use of AI”. 

To be able to communicate news about mental health responsibly, avoiding stereotypes, it must be understood that “AI tools do not generate images neutrally: they respond to the instructions they receive. Therefore, it is important to think carefully about the prompts and review the results critically”, points out Gema Revuelta, director of the CCS-UPF and leader of the study, which concludes that “improving the quality of visual representations related to depression depends on teamwork when pooling the vision and knowledge of patient organizations, mental health experts, science journalists, AI developers and researchers”. 


AI-embodied surgical robots can revolutionize surgery—if regulatory questions addressed


New USF study tests whether AI can reliably predict immune responses


Uncovering new ways to break down tight football defenses through AI


Physician-reported safety outcomes of AI-generated hospital course summaries

JAMA Network Open



About The Study: In this study, a large language model-based agentic workflow produced hospital course summaries that were frequently used, with minimal risk of harm identified. The intervention was associated with a reduction in physician burnout, supporting the viability of AI summarization to mitigate documentation burden.


Corresponding Author: To contact the corresponding author, Francois Grolleau, MD, PhD, email grolleau@stanford.edu.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamanetworkopen.2026.16556)

Editor’s Note: Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

#  #  #

Media advisory: This study is being presented at the 2026 Society of General Internal Medicine Annual Meeting.

Embed this link to provide your readers free access to the full-text article 

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/10.1001/jamanetworkopen.2026.16556?guestAccessKey=1b34668e-afe8-4888-aa3d-dd05b3b83eff&utm_source=for_the_media&utm_medium=referral&utm_campaign=ftm_links&utm_content=tfl&utm_term=050826

About JAMA Network Open: JAMA Network Open is an online-only open access general medical journal from the JAMA Network. On weekdays, the journal publishes peer-reviewed clinical research and commentary in more than 40 medical and health subject areas. Every article is free online from the day of publication. 

Frontiers in Science Deep Dive webinar series: AI-embodied surgical robots can revolutionize surgery—if regulatory questions addressed

A complimentary virtual symposium from Frontiers



Frontiers





Embodying surgical robots with next-gen AI can safely augment practice if ethical and regulatory questions are addressed.  

This is according to a new Frontiers in Science lead article in which researchers Prof Prokar Dasgupta, Dr Alejandro Granados, and Dr Nicholas Raison explore how sensor-rich operating rooms and AI surgical co-pilots could enable more precise, data-driven, personalized surgery. Their article outlines how advances in multimodal data integration, machine learning, and robotic systems could enhance situational awareness, intraoperative decision-making, and team performance.

It highlights how these technologies may enable anticipatory behaviors, adaptive learning, and improved coordination across surgical teams.    

Join the authors at our Frontiers in Science Deep Dive webinar on 11 June 2026, 16:00–17:30 CEST, as they explore how surgical roles may evolve and how predictive AI and robotics could improve patient outcomes while maintaining safe and effective clinical practice.

Evolving surgical teams in the age of artificial intelligence and robotics | 11 June 2026 | Register 

Frontiers in Science Deep Dive sessions bring researchers, policy experts, and innovators together from around the world to discuss a specific area of transformational science published in Frontiers' flagship, multidisciplinary journal, Frontiers in Science, and explore next steps for the field. 



Inspired by the brain, researchers build smarter, more efficient computer hardware



A University of Missouri study shows that small material changes can boost brain-like computing, which could one day help make artificial intelligence more energy-efficient.





University of Missouri-Columbia




As traditional computer chips reach their physical limits and artificial intelligence demands more energy than ever, University of Missouri researchers are rethinking how computers work by taking cues from the human brain.

The timing is critical. Energy use from AI data centers is projected to double by the end of the decade, raising urgent questions about sustainability.

The solution may lie in neuromorphic computing, an approach that reimagines computer hardware to process information more like biological neural networks rather than conventional chips.

“One of the brain’s greatest advantages is its efficiency,” Suchi Guha, a professor of physics in Mizzou’s College of Arts and Science, said. “It performs incredibly complex tasks using about 20 watts of power — roughly the same as an old light bulb. By comparison, today’s computer architecture is extremely energy-intensive.”

Making neuromorphic computing a reality starts at the hardware level. Guha and her team are developing electronic components designed to function like the connections between neurons that allow the brain to learn, adapt and store information — laying the groundwork for computers that are not only more powerful, but dramatically more efficient.

Rethinking the computer chip

For decades, computers have relied on transistors — tiny electronic switches that let machines process information. In most modern chips, however, thinking and memory happen in separate places. Every time a computer runs a task, data must shuttle back and forth between those two areas, which slows performance and burns energy.

The brain takes a different approach. Instead of separating memory and processing, individual connections between neurons — called synapses — do both at the same time. That setup allows the brain to learn and adapt while using surprisingly little energy.

Guha’s team is borrowing that idea for electronics. They are developing organic transistors that can both store and process information in the same place, much like synapses do in the brain.

“We’re not just trying to make faster transistors,” Guha, who is also a core faculty member with the MU Materials Science and Engineering Institute, said. “We’re trying to make devices that behave more like the brain itself.”

To see how well the approach works, the researchers tested several organic materials that looked almost identical on the surface. But once those materials were built into synaptic transistors, their performance differed dramatically.

The key factor turned out to be the interface — the thin boundary where the semiconductor meets an insulating layer inside the device.

“This shows us that performance isn’t just about what a material is made of,” Guha said. “It’s also about how it interacts with everything around it. Even small structural differences can have a big impact.”

Moving toward energy‑efficient, brain‑like AI

By clarifying how molecular design and interface quality influence synaptic behavior, Mizzou’s work provides other researchers with guiding principles for building more effective neuromorphic hardware. Such systems could eventually lead to brain-like AI that learns more efficiently, consumes far less power and excels at tasks such as pattern recognition and decision-making.

While brain-inspired computing is still in its early stages, Guha said advances such as hers are narrowing the gap between biology and machines.

“The brain remains the gold standard for efficient computation,” she said. “If we want truly intelligent machines, we have to start building hardware that learns the way biology does.”

The study, “Structure–Function Coupling in Pyridyl Triazole Copolymers for Neuromorphic Synaptic Transistors,” was published in ACS Applied Electronic Materials. Co-authors are Arash Ghobadi, Abhijeet Abhi, Thomas Kallos, Dillan Gamachchi, Indeewari Karunarathne, Andrew Meng, Joseph Mathai, Shubhra Gangopadhyay and Steven Kelley at Mizzou; and Salahuddin Attar and Mohammed Al-Hashimi at Hamad Bin Khalifa University.


Revolutionizing clinical trials with machine learning

Simulation results: Controlled Type I Error Rates Under the Nominal Level.


Prof. Dafna Kariv of Reichman University’s Adelson School of Entrepreneurship Wins European Union Innovation Award for AI System Simulating Entrepreneur–Investor Interactions

SPACE/COSMOS

 

New study finds no significant joint damage in astronauts after short-duration spaceflight, highlighting promise of ultrasound monitoring





National Jewish Health





DENVER - Researchers at National Jewish Health have published new findings demonstrating that short-duration spaceflight may not significantly impact lower extremity joint structures, while also identifying a promising, non-invasive tool to monitor astronaut musculoskeletal health on future long-duration missions.

The study, led by Richard Meehan, MD, and Smarika Sapkota, MD, evaluated three astronauts before and after Axiom Mission 4 (Ax-4), an 18-day mission onboard the International Space Station (ISS). Using advanced musculoskeletal ultrasound imaging, researchers assessed cartilage thickness, synovial fluid levels, and tendon and ligament integrity in the hips, knees and ankles. The results, published in the International Journal of Clinical Rheumatology, showed no statistically significant changes in joint structures or evidence of inflammation following the mission. Dr. Sapkota will present the results at the May 2026 Annual Scientific Meeting of the Aerospace Medical Association in Denver.

“This study provides encouraging early evidence that short-duration spaceflight, combined with exercise and medical countermeasures, may help preserve joint health,” said Dr. Meehan, senior author and rheumatologist at National Jewish Health. “Equally important, it demonstrates that ultrasound can serve as a powerful, real-time tool to monitor joint health in space.”

Astronauts in the study engaged in cycling exercise during the mission and used anti-inflammatory medications, both of which may have contributed to maintaining joint health. Researchers observed no significant differences in cartilage thickness across the hips, knees or ankles, no meaningful overall change in knee synovial fluid levels, and no evidence of inflammation using power Doppler imaging. Tendon and ligament thickness also remained stable before and after spaceflight.

While the findings are reassuring, researchers caution that the study’s short duration and small sample size limit broader conclusions, particularly for longer missions to the Moon or Mars, where astronauts may face extended exposure to microgravity.

“Although we did not observe measurable changes after 18 days, longer missions could present very different risks to cartilage and joint structures,” said Dr. Sapkota, co-author and rheumatologist at National Jewish Health. “Our findings highlight the importance of continued research and the potential of ultrasound to guide personalized countermeasures for astronaut health.”

The study is among the first to use quantitative ultrasound immediately following spaceflight to assess multiple joint structures in humans, capturing changes within hours of return to Earth. Researchers believe this approach could play a critical role in future missions by enabling real-time monitoring of joint health, informing personalized exercise protocols, and reducing the risk of injury during and after spaceflight. The implications may extend beyond space exploration, offering potential benefits for patients on Earth, including those recovering from prolonged immobility or facing the risk of joint degeneration.

“This technology has the potential to transform how we monitor and protect joint health, not only for astronauts, but for patients here on Earth,” Dr. Meehan added.

The observational pilot study analyzed pre- and post-flight ultrasound measurements from three astronauts participating in the Ax-4 mission. Imaging was conducted within hours of return to Earth, and the research was supported by National Jewish Health in collaboration with Axiom Space and other partners.

“Leveraging the unique environment of space provides a vital laboratory for developing the next generation of biomedical technologies and medicine for terrestrial use,” explained Emmanuel Hilaire, PhD, director of Technology Transfer at National Jewish Health. Dr. Hilaire oversees the commercialization of innovations developed at National Jewish Health and is spearheading a space research initiative to accelerate further biomedical advancements.
 

National Jewish Health is the leading respiratory hospital in the nation delivering excellence in multispecialty care and world class research. Founded in 1899 as a nonprofit hospital, National Jewish Health today is the only facility in the world dedicated exclusively to groundbreaking medical research and treatment of children and adults with respiratory, cardiac, immune and related disorders. Patients and families come to National Jewish Health from around the world to receive cutting-edge, comprehensive, coordinated care. To learn more, visit njhealth.org or the media resources page.

Venus’ atmosphere jumps and waves



Vast atmospheric waves on Venus are caused by largest known “hydraulic jump”




University of Tokyo

Making waves

These images taken on Aug. 18 (left) and Aug. 27 (right), 2016, by the near-infrared camera on Japan’s Akatsuki Venus probe, show the clear line of denser (darker) clouds moving across the planet.

Credit: T. Imamura, Y. Maejima, K. Sugiyama et al., 2026






The mysterious origin of an impressive cloud disturbance on Venus has now been revealed by a team including the University of Tokyo. Researchers used numerical models to show that an enormous 6,000-kilometer-wide atmospheric wave front, which circumnavigates the planet for days at a time, is caused by a large “hydraulic jump.” This is when a fluid abruptly slows down, changing from shallow and fast to deep and slow. On Venus, a sudden change in airflow in the lower cloud region is coupled with the creation of a strong updraft, forcing sulfuric acid vapor higher into the atmosphere where it condenses into a massive line of cloud. Future planetary studies can consider the potential impacts of this process, and what it might mean for any exploratory missions.   

 

A grim, gray day may spoil weekend plans now and then, but on Venus, it’s cloudy all day every day with a chance of sulfuric acid showers. On the bright side, Venus’ constant thick cloud cover provides an excellent opportunity for us to study patterns and processes that would be difficult to spot on planets where clouds are more sparse or intermittent, like here on Earth.

 

A key feature of Venusian clouds is that they “superrotate,” moving about 60 times faster than the planet turns. We now know that superrotation also occurs elsewhere, including on Mars, our sun, and even Earth’s upper atmosphere. In 2016, images from Japan’s Akatsuki Venus orbiter also revealed that an enormous atmospheric wave — sometimes 6,000 km wide — repeatedly sweeps around the planet’s equator.

 

“We identified the phenomenon, but for years we couldn’t understand it,” said Professor Takeshi Imamura from the Graduate School of Frontier Sciences at the University of Tokyo. “However, thanks to this research, we’re now able to show that this cloud disruption is caused by the largest known hydraulic jump in the solar system.”

 

We can see a hydraulic jump in action in the humble kitchen sink. As water from the tap hits the basin, it appears fast and shallow at first, but suddenly slows and becomes deeper as it spreads.
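For readers who want to put numbers on the sink analogy, the transition can be sketched with the standard shallow-water relations: a hydraulic jump occurs where the Froude number (flow speed over wave speed) drops through 1, and the classical Bélanger equation gives the depth ratio across the jump. This is a generic fluid-dynamics sketch, not the Venus model from the paper, and the tap-flow values are illustrative assumptions.

```python
import math

def froude(v, h, g=9.81):
    """Froude number: flow speed over shallow-water wave speed sqrt(g*h)."""
    return v / math.sqrt(g * h)

def conjugate_depth_ratio(fr1):
    """Belanger equation: downstream/upstream depth ratio across a hydraulic jump."""
    return 0.5 * (math.sqrt(1 + 8 * fr1**2) - 1)

# Illustrative supercritical tap flow: fast and shallow (Fr > 1)
v1, h1 = 1.0, 0.002              # m/s, m -- assumed kitchen-sink scales
fr1 = froude(v1, h1)             # supercritical upstream of the jump
h2 = conjugate_depth_ratio(fr1) * h1
v2 = v1 * h1 / h2                # mass conservation per unit width
fr2 = froude(v2, h2)             # subcritical: deeper, slower flow beyond the jump
```

With these assumed values the upstream flow is strongly supercritical and the jump leaves a deeper, slower, subcritical layer, which is exactly the fast-shallow to deep-slow transition described above.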

 

The hydraulic jump on Venus occurs when an eastward-moving atmospheric wave (called a Kelvin wave) in the lower to middle cloud region suddenly becomes unstable. Wind speed as seen from the atmospheric wave abruptly slows down and a strong localized updraft is created, which carries sulfuric acid vapor higher into the atmosphere. The droplets condense into clouds which trail behind, causing the massive wave front which can be seen sweeping around the planet.

 

“Venus has three distinct cloud layers, and the dynamics of the lower and middle layers are not so well understood,” said Imamura. “Our discovery of a hydraulic jump on Venus connecting a very large-scale horizontal process with a strong localized vertical wave is unexpected, as in fluid dynamics these are usually disconnected.”

 

The hydraulic jump was simulated using a fluid dynamic model (a numerical analysis which simulates how gas or liquids flow), and the cloud formation studied using a microphysical box model (which follows the behavior of an example section of air as it moves through the atmosphere). As well as simulating the same cloud disturbance, the team also found that this process helps to maintain the superrotation of Venus’ atmosphere.

 

“Up until now, we used a global circulation model (GCM) for Venus that is similar to Earth’s, but this model doesn’t include the hydraulic jump which we have now identified,” explained Imamura. “Our next step will be to test this discovery within a more inclusive climate model that includes other atmospheric processes. We will face some challenges due to the huge amount of processing power required to run such simulations. Even with modern supercomputers, it isn’t easy.”

 

Although this is the first observation of a hydraulic jump of this scale on another planet, the physics behind it may also occur on other celestial bodies. “Under some circumstances, Mars’ atmosphere may also have the right conditions for a hydraulic jump,” mentioned Imamura. Creating more accurate models of atmospheric conditions will aid in the success of future missions to Mars, as well as wider space exploration.  

 

--- ---- --- Paper and Contact Details --- --- ---

Journal:

Takeshi Imamura, Yasumitsu Maejima, Ko-ichiro Sugiyama, Takehiko Satoh, Javier Peralta, Kevin McGouldrick, Takeshi Horinouchi, Kohei Ikeda, “A planetary-scale hydraulic jump driving Venus' cloud front”. Journal of Geophysical Research: Planets. April 24, 2026. DOI: 10.1029/2026JE009672

 

Funding:

This work was funded by JSPS KAKENHI Grant Numbers 24H00021, 24K21565, and 23H01236. J.P. acknowledges project EMERGIA20_00414 funded by Junta de Andalucía in Spain, and project PID2023-149055NB-C33 funded by the Spanish MCIN.

 

Conflicts of Interest:

The authors declare there are no conflicts of interest for this manuscript.

 

Useful links:

Graduate School of Frontier Sciences: https://www.k.u-tokyo.ac.jp/en/

JAXA's Akatsuki website: https://akatsuki.isas.jaxa.jp/en/

About The University of Tokyo:

The University of Tokyo is Japan's leading university and one of the world's top research universities. The vast research output of some 6,000 researchers is published in the world's top journals across the arts and sciences. Our vibrant student body of around 15,000 undergraduate and 15,000 graduate students includes over 5,000 international students. Find out more at www.u-tokyo.ac.jp/en/ or follow us on X (formerly Twitter) at @UTokyo_News_en.

Hydraulic jump in a kitchen sink. 

In this image, the clearly defined hydraulic jump can be seen in the difference between the smooth inner circle of shallow and fast water, and the ripples of deeper, slower water beyond.

Credit: Takeshi Imamura 2026


Lonely Jupiter-like planet tells us more about gas giants



Ohio astrophysicist is helping unlock secrets of exoplanets with James Webb Space Telescope



University of Cincinnati

Paul Smith

University of Cincinnati astrophysics graduate and current geosciences student Paul Smith visits the Cincinnati Observatory's historic telescope in Mount Lookout. Smith spent a 20-year career at P&G and another 10 as a writer and speaker on business leadership before returning to UC to study physics and geosciences. He also is pursuing a master's degree in planetary science from the University of Aberdeen in Scotland.

Credit: Connor Boyle






One evening last fall, University of Cincinnati astrophysics graduate Paul Smith waited anxiously for data to start rolling across his computer screen from the James Webb Space Telescope a million miles from Earth.

The telescope was directed at an object even farther away — much farther away. Smith is studying a planet 901 light years away. That means light from its star takes 901 years to reach Earth.

The planet is named after this star, TOI-2031A, in accordance with NASA’s unpoetic, numbered naming conventions. The TOI stands for Transiting Exoplanet Survey Satellite Object of Interest.

Even though it was a clear night, the star was too faint to see with the naked eye. Its starlight captured in the space telescope was generated in the Middle Ages.

Smith and his research partners beat out other scientists for precious telescope time. Roughly 90% of research applications don’t make the cut each year in the competitive peer-review process.

Now they were hoping their calculations were correct and the planet would cross in front of its star during their allotted observation time.

Using the telescope’s powerful near-infrared spectrographic sensors, researchers would be able to learn more about the planet and its atmosphere as it transited its star’s face. As leader of the data analysis for the project’s first planet, Smith got to retrieve the data, what astrophysicists call the first look.

“It was a lifelong dream of mine coming true. I was up all night to get the first look at the data,” he said.

Smith and his research colleagues presented their findings on TOI-2031Ab at the American Astronomical Society meeting in Denver in April.

Physicists call planets outside our solar system exoplanets. To date, astrophysicists have identified about 6,400 of them.

Smith and his international collaborators from 19 other institutions are studying gas giants like Jupiter to learn more about their atmospheres and why so many of them orbit so close to their stars. The exoplanet is a quarter bigger in circumference than Jupiter, the biggest planet in our solar system, although it has 20% less mass.

Smith regularly travels to Ohio State University to meet with some of his project co-authors, grad student Everett McArthur and Professor Ji Wang. And he talks regularly with Peter Gao from the Carnegie Science Institute.

“We’re trying to figure out how these big gas giants got there. We’re studying the formation and migration pathways of big planets,” Smith said. “Where do they form in their solar systems and how do they get so close to their stars?”

TOI-2031Ab was discovered just last year, the only known planet in its solar system. The exoplanet orbits its star closer than Mercury orbits the sun.

Its year lasts just six Earth days as it hurtles around its star roughly four times faster than Earth orbits the sun.
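As a rough cross-check of those figures, Kepler's third law converts the six-day period into an orbital distance. The sun-like stellar mass below is an assumption for illustration (the article does not give the star's mass), so the result is only an order-of-magnitude sketch.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_star = 1.989e30      # hypothetical sun-like stellar mass, kg (assumed, not from the article)
T = 6 * 86400          # orbital period: six Earth days, in seconds

# Kepler's third law: a^3 = G * M * T^2 / (4 * pi^2)
a = (G * M_star * T**2 / (4 * math.pi**2)) ** (1 / 3)
a_au = a / 1.496e11    # semi-major axis in astronomical units

v = 2 * math.pi * a / T   # circular orbital speed, m/s
```

Under this assumption the orbit comes out well inside Mercury's 0.39 AU, and the orbital speed is around four times Earth's roughly 30 km/s, consistent with the description above.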

Researchers can study its atmosphere using the portion of its star’s light that slices through its atmosphere on its way to the James Webb Space Telescope.
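Transmission spectroscopy starts from the transit depth, the fraction of starlight the planet blocks as it crosses the stellar disk. A minimal sketch of that geometry, using the article's "quarter bigger than Jupiter" radius and a hypothetical stellar radius that the article does not supply:

```python
# Transit depth: fraction of starlight blocked during a transit, (R_planet / R_star)^2
R_jup = 7.1492e7       # Jupiter equatorial radius, m
R_sun = 6.957e8        # solar radius, m

r_planet = 1.25 * R_jup   # "a quarter bigger ... than Jupiter" (from the article)
r_star = 1.2 * R_sun      # hypothetical stellar radius -- assumed for illustration

depth = (r_planet / r_star) ** 2   # dimming fraction, of order one percent
```

Wavelengths at which the atmosphere absorbs make the planet look slightly larger, so the transit appears marginally deeper there; those tiny wavelength-dependent differences are what the telescope's near-infrared spectrographs measure.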

“The atmosphere is very similar to Jupiter’s — mostly hydrogen and helium, water and carbon dioxide,” Smith said.

Cincinnati Observatory astronomer Wes Ryle, who was not part of the study, said planets outside our solar system are helping us understand our own.

“Exoplanets are one of the hottest topics in astrophysics right now, with the ultimate goal of learning how our solar system compares to others and the likelihood of finding other habitable worlds,” Ryle said. “Studies like this help evaluate the role of gas giant planets and their migration in creating a planetary system.”

The historic 1845 telescope at the Cincinnati Observatory. 

Credit: Andrew Higley