It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way (K.Marx, Letter to F.Engels on the Indian Mutiny)
Sunday, July 23, 2023
The Canadian Press
Sat, July 22, 2023
As firefighters and other first responders battle an unprecedented summer of fires, floods, tornadoes and heat waves around the country, a group of Canadian scientists are asking why they're happening in the first place.
"May and June were record hot months in Canada and we've got the record wildfire season as well," said Nathan Gillett of Environment and Climate Change Canada. "Yes, it has been busy."
Gillett heads the Rapid Extreme Event Attribution Project, a new federal program that uses the growing field of attribution science to promptly establish to what extent — if any — a specific flood in British Columbia or wildfire in Quebec is due to climate change.
"The idea is to be able to make rapid extreme event attribution days or weeks after the extreme events occur," he said.
Twenty years ago, if you'd asked a scientist if climate change was linked to days of torrential rain or months of desiccating drought, you'd probably get an answer along the lines of "We can't say for sure but this event is consistent with the modelling."
But in 2003, a paper was published suggesting science could do better. Myles Allen of Oxford University borrowed a concept from epidemiology.
"You can say that smoking increases your risk of lung cancer by a certain amount," Gillett said. "In the same way, you can say human-induced climate change increased the risk of a certain event by a certain amount."
Since then, hundreds of attribution papers have been peer-reviewed and published. Besides Canada, governments including those of the United Kingdom, Australia, the Netherlands, South Korea, Japan and the United States are using attribution science.
Attribution science works by comparing climate models. One set of models will use data drawn from actual records while another, otherwise identical, set will be constructed with the influence of greenhouse gases removed.
Simulations will be run using those two sets and the difference in the results reveals the impact of climate change. It allows scientists to say to what extent the presence of greenhouse gases increased the likelihood of the event in question.
"It's probabilistic," Gillett said.
The process is now established enough, with peer-reviewed protocols and standards, that the calculations can be done quickly.
"Once you've got the method in place and it's validated, you really just have to get the observations from that event and you can provide a result," said Gillett.
Some events are easier to study than others. Gillett said his group hopes to be able to come to conclusions on heat waves in about a week, but wildfires, which involve more variables, will take longer.
Speed matters, said Clair Barnes, a researcher with the World Weather Attribution group in the U.K., which since 2015 has studied the role of climate change in more than 50 events around the world. Among its findings: the heat wave preceding the fire that levelled Lytton, B.C., was made 150 times more likely by climate change.
"Our aim is to look at high-impact events that are in the news," she said. "There was an appetite in the public and the media for more information about what's really happening now."
Promptly assessing the role of climate change after extreme events brings actual insight and information to the discussion, Barnes said.
"If you spend three years thinking about it, the media has already decided it was climate change or it wasn't climate change and has moved on. If you want to be involved in that discussion and bring some science to that discussion, you've got to move quickly."
But attribution science has more uses than just shaping public debate. Governments are using it to inform their adaptation strategies. Financial institutions are using it to assess risk. It's come up in hundreds of court cases around the world attempting to attribute climate liability.
It does have its limitations.
Attribution science can only work where there's enough historical weather data to build an accurate climate model. That leaves out much of the global south, where some of the worst human impacts are occurring. As well, extremely local events are often beyond its resolving power.
"You do have to be careful to communicate the uncertainties," said Gillett. "We shouldn't be overconfident."
There's certainly no shortage of work. Barnes said her group has had to establish a strict protocol that weighs the magnitude of the event, the amount of damage it inflicts and its effect on human lives to decide which events merit study.
"There are so many events that we just don't have the time to look at them all."
But World Weather Attribution has found the time to consider Canada's wildfires. It's a complex case, so results aren't expected for another month or so.
By then, chances are there will be a new extreme event to consider. When Barnes joined World Weather Attribution, she assumed winter and summer, the times of peak temperature highs and lows, would be the busiest. Not so.
"We've had temperature records set for the last few months and it's not even the peak of boreal summer," she said. "It's just been non-stop."
This report by The Canadian Press was first published July 22, 2023.
Bob Weber, The Canadian Press
The Canadian Press
Sat, July 22, 2023
REVELSTOKE, B.C. — Family, friends and fellow firefighters paid tribute today to the 19-year-old woman killed while battling wildfires in British Columbia earlier this month.
Devyn Gale died on July 13 after being struck by a falling tree while fighting a wildfire near Revelstoke, B.C.
Gale’s brother and sister, Nolan and Kayln, who are also firefighters, gave emotional speeches about their sister at a public memorial in Revelstoke, calling her compassionate, wise and nurturing.
Casey Robinson of the B.C. Wildfire Service, who interviewed and trained Gale, said he was impressed by her "smarts, her energy and her ability to work hard."
He says Gale was an "excellent firefighter" and encouraged all those in the same field to continue her legacy of "being welcoming, conscientious and open hearted to anyone who joins" their crews.
The service followed a memorial procession that included Gale's BC Wildfire Service colleagues, a Colour Party, Honour Guard and representatives from various first-responder agencies. Community members lined city streets in Revelstoke to watch the march.
Gale is one of three Canadian firefighters who have died battling the hundreds of blazes that are burning across the country.
Adam Yeadon, 25, died last Saturday while fighting a wildfire near his home in Fort Liard, N.W.T.
A 41-year-old helicopter pilot from Whitecourt, Alta., died after his aircraft crashed Wednesday during firefighting operations in that province's northwest.
This report by The Canadian Press was first published July 22, 2023.
The Canadian Press
Lesley M.M. Blume
Fri, July 21, 2023
An undated photo provided by the National Archives and Records Administration of contaminated film scans that were sent from Rochester, N.Y. to Lt. Gen. Leslie Groves, the leader of the Manhattan Project, an early indicator that the fallout from the Trinity nuclear test was spreading nationwide. (National Archives and Records Administration via The New York Times)
In July 1945, as J. Robert Oppenheimer and the other researchers of the Manhattan Project prepared to test their brand-new atomic bomb in a New Mexico desert, they knew relatively little about how that mega-weapon would behave.
On July 16, when the plutonium-implosion device was set off atop a 100-foot metal tower in a test code-named “Trinity,” the resultant blast was much stronger than anticipated. The irradiated mushroom cloud also went many times higher into the atmosphere than expected: some 50,000 to 70,000 feet. Where it would ultimately go was anyone’s guess.
A new study, released Thursday before submission to a scientific journal for peer review, shows that the cloud and its fallout went farther than anyone in the Manhattan Project had imagined in 1945. Using state-of-the-art modeling software and recently uncovered historical weather data, the study’s authors say that radioactive fallout from the Trinity test reached 46 states, Canada and Mexico within 10 days of detonation.
“It’s a huge finding and, at the same time, it shouldn’t surprise anyone,” said the study’s lead author, Sébastien Philippe, a researcher and scientist at Princeton University’s Program on Science and Global Security.
The study also reanalyzed fallout from all 93 aboveground U.S. atomic tests in Nevada and created a map depicting composite deposition of radioactive material across the contiguous U.S. (The team also hopes to study U.S. tests over the Pacific Ocean in the future.)
How much of Trinity’s fallout still remains at original deposition sites across the country is difficult to calculate, said Susan Alzner, an author of the study and the co-founder of shift7, an organization that coordinated the study’s research. The study documents deposition as it originally hit the ground in 1945.
“It’s a frozen-in-time image,” she said.
The findings could be cited by advocates aiming to increase the number of people eligible for compensation by the federal government for potential exposure to radiation from atmospheric nuclear explosions.
The drift of the Trinity cloud was monitored by Manhattan Project physicists and doctors, but they underestimated its reach.
“They were aware that there were radioactive hazards, but they were thinking about acute risk in the areas around the immediate detonation site,” Alex Wellerstein, a nuclear historian at the Stevens Institute of Technology in New Jersey, said. They had little understanding, he said, about how the radioactive materials could embed in ecosystems, near and far. “They were not really thinking about effects of low doses on large populations, which is exactly what the fallout problem is.”
At the time, Dr. Stafford L. Warren, a Manhattan Project physician specializing in nuclear medicine, reported to Lt. Gen. Leslie Groves, leader of the Manhattan Project, that the Trinity cloud “remained towering over the northeast corner of the site for several hours.” Soon, he added, “various levels were seen to move in different directions.” Warren assured Groves that an assessment of the fallout’s reach could be undertaken later on horseback.
In the decades that followed, a lack of crucial data bedeviled assessments and attempted studies of the Trinity test’s fallout. The U.S. had no national monitoring stations in place in 1945 to track the fallout, Philippe said. Plus, essential historical weather and atmospheric data was available only from 1948 onward. Remodeling fallout from tests in Nevada — starting in 1951 — was easier, but Trinity remained frustratingly difficult to reanalyze.
“The data sets for the Nevada tests and the available data that we could possibly find for Trinity were not comparable,” Alzner said. “You couldn’t put them on the same map. We decided to keep pushing.”
Determined to fill in the gaps, the team started the study about 18 months ago. Philippe has extensive background in modeling fallout and was an author of a similar project in 2021 that documented the effects from French nuclear tests.
A breakthrough came in March, when Alzner and Megan Smith, another co-founder of shift7 and a former U.S. chief technology officer in the Obama administration, contacted the National Oceanic and Atmospheric Administration. There, Gilbert P. Compo, a senior research scientist at the University of Colorado and the NOAA Physical Sciences Laboratory, told the team the European Centre for Medium-Range Weather Forecasts had only a week earlier released historical data that charted weather patterns extending 30,000 feet or higher above Earth’s surface.
“For the first time, we had the most accurate hourly reconstruction of the weather back to 1940, around the world,” said Compo, who became a co-author on the study. “Every single event that puts something in the air, no matter what it is, can now be tracked, by the hour.”
Using the new data and software built by NOAA, Philippe then reanalyzed Trinity’s fallout. And while the study’s authors acknowledge limitations and uncertainties within their calculations, they maintain that “our estimates likely remain conservatively low.”
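As a rough illustration of what hourly weather reconstructions make possible, the sketch below advects a single parcel released near the Trinity site through an invented series of hourly wind vectors. Real fallout modelling uses three-dimensional reanalysis fields, plume rise and deposition physics; the wind values here are assumptions for demonstration only.

```python
import numpy as np

# Toy forward trajectory: advect one parcel through hourly wind vectors.
# The winds are invented; a real study would pull hourly u/v fields from a
# weather reanalysis at the parcel's altitude and position.
hours = 240  # ten days
rng = np.random.default_rng(1)
u = 5.0 + rng.normal(0.0, 3.0, hours)  # eastward wind, m/s (made up)
v = 1.0 + rng.normal(0.0, 3.0, hours)  # northward wind, m/s (made up)

lat, lon = 33.68, -106.48  # approximate Trinity site coordinates
m_per_deg_lat = 111_000.0  # metres per degree of latitude

track = [(lat, lon)]  # hourly positions; a real study would map deposition
for h in range(hours):
    lat += v[h] * 3600.0 / m_per_deg_lat
    lon += u[h] * 3600.0 / (m_per_deg_lat * np.cos(np.radians(lat)))
    track.append((lat, lon))

print(f"Parcel position after 10 days: {lat:.1f} N, {abs(lon):.1f} W")
```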
“It’s a very comprehensive, well-executed study,” said M.V. Ramana, professor and Simons chair in disarmament, global and human security at the University of British Columbia, who was not involved in the study. Ramana was unsurprised by the study’s findings about Trinity. “I expected that the old estimates were understating what was actually deposited,” he said.
The results show that New Mexico was heavily affected by Trinity’s fallout. Computations by Philippe and his colleagues show the cloud’s trajectory primarily spreading up over northeast New Mexico and a part of the cloud circling to the south and west of ground zero over the next few days. The researchers wrote that there are “locations in New Mexico where radionuclide deposition reached levels on par with Nevada.”
Trinity’s fallout, Philippe says, accounts for 87% of total deposition found across New Mexico, which also received deposition from Nevada’s aboveground tests. The study also found that Socorro County, where the Trinity test took place, received the fifth-highest deposition of any county in the United States.
Trinity test “downwinders” — a term describing people who have lived near nuclear test sites and may have been exposed to deadly radioactive fallout — have never been eligible for compensation under the 1990 Radiation Exposure Compensation Act (RECA). It has provided over $2.5 billion in payments to nuclear workers in much of the Western U.S. and to downwinders who were located near the Nevada test site and may have developed cancer or other diseases as a result of radiation exposure.
“Despite the Trinity test taking place in New Mexico, many New Mexicans were left out of the original RECA legislation and nobody has ever been able to explain why,” said Sen. Ben Ray Luján, D-N.M. He has helped lead efforts in Congress to expand and extend the legislation, due to sunset in 2024.
Census data from 1940 shows that as many as 500,000 people were living within a 150-mile radius of the test site. Some families lived as close as 12 miles away, according to the Tularosa Basin Downwinders Consortium. Yet no civilians were warned about the test ahead of time, and they weren’t evacuated before or after the test.
“This new information about the Trinity bomb is monumental and a long time coming,” Tina Cordova, a co-founder of the consortium, said. “We’ve been waiting for an affirmation of the histories told by generations of people from Tularosa who witnessed the Trinity bomb and talked about how the ash fell from the sky for days afterward.”
The study also documents significant deposition in Nevada, Utah, Wyoming, Colorado, Arizona and Idaho, as well as dozens of federally recognized tribal lands, potentially strengthening the case for people seeking expanded compensation in those areas.
Wellerstein said he approaches such reanalyses of historical fallout with a degree of uncertainty, partly because of the age of the data, but added that such studies have value in keeping nuclear history and its legacy in the public discourse.
“The extent to which America nuked itself is not completely appreciated still, to this day, by most Americans, especially younger Americans,” he said.
c.2023 The New York Times Company
SPACE NEWS
In new space race, scientists propose geoarchaeology can aid in preserving space heritage
UNIVERSITY OF KANSAS
LAWRENCE, KANSAS — As a new space race heats up, two researchers from the Kansas Geological Survey at the University of Kansas and their colleagues have proposed a new scientific subfield: planetary geoarchaeology, the study of how cultural and natural processes on Earth’s moon, on Mars and across the solar system may be altering, preserving or destroying the material record of space exploration.
“Until recently, we might consider the material left behind during the space race of the mid-20th century as relatively safe,” said Justin Holcomb, postdoctoral researcher at the Kansas Geological Survey, based at the University of Kansas, and lead author on a new paper introducing the concept of planetary geoarchaeology in the journal Geoarchaeology. “However, the material record that currently exists on the moon is rapidly becoming at risk of being destroyed if proper attention isn’t paid during the new space era.”
Since the advent of space exploration, humans have launched more than 6,700 satellites and spacecraft from countries around the globe, according to the Union of Concerned Scientists. The United States alone accounts for more than 4,500 civil, commercial, governmental and military satellites.
“We’re trying to draw attention to the preservation, study and documentation of space heritage because I do think there’s a risk to this heritage on the moon,” Holcomb said. “The United States is trying to get boots on the moon again, and China is as well. We’ve already had at least four countries accidentally crash into the moon recently. There are a lot of accidental crashes and not a lot of protections right now.”
Holcomb began considering the idea of planetary geoarchaeology during the COVID-19 lockdown. Applying geoarchaeological tools and methods to the movement of people into space and the solar system is a natural extension of the study of human migration on Earth, the focus of the ODYSSEY Archaeological Research Program housed at KGS and directed by Holcomb’s co-author, Rolfe Mandel, KGS senior scientist and University Distinguished Professor in the Department of Anthropology.
“Human migration out of Africa may have occurred as early as 150,000 years ago, and space travel represents the latest stage of that journey,” Mandel said. “Although the ODYSSEY program is focused on documenting the earliest evidence of people in the Americas, the next frontier for similar research will be in space.”
How planetary geoarchaeologists will determine whether an item is worth preserving is an open question.
“We feel that all material currently existing on extraterrestrial surfaces is space heritage and worthy of protection,” Holcomb said. “However, some sites, such as the very first footprints on the moon at Tranquility Base or the first lander on Mars, Viking 1, represent the material footprint of a long history of migration.”
Beyond those “firsts,” sifting through the hundreds of thousands of bits of material currently in orbit or strewn across the surfaces of the moon and Mars — what many call “trash” but Holcomb and his colleagues regard as heritage — will require case-by-case decision making.
“We have to make those decisions all the time with archaeological sites today,” Holcomb said. “The moon has such a limited record now that it’s totally possible to protect all of it. Certainly, we need to protect space heritage related to the Apollo missions, but other countries, too, deserve to have their records protected.”
With resources for protecting space heritage limited, Holcomb and his colleagues advocate for developing systems to track materials left in space.
“We should begin tracking our material record as it continues to expand, both to preserve the earliest record but also to keep a check on our impact on extraterrestrial environments,” he said. “It’s our job as anthropologists and archaeologists to bring issues of heritage to the forefront.”
Beyond the moon, Holcomb wants to see planetary geoarchaeology extend to issues related to exploration and migration to Mars. He points to NASA’s Spirit rover as an example. The rover became stuck in Martian sand in 2009 and now risks being completely covered by encroaching sand dunes.
“As planetary geoarchaeologists, we can predict when the rover will be buried, talk about what will happen when it’s buried and make sure it’s well documented before it’s lost,” he said. “Planetary scientists are rightfully interested in successful missions, but they seldom think about the material left behind. That’s the way we can work with them.”
Holcomb believes geoarchaeologists should be included in future NASA missions to ensure the protection and safety of space heritage. Meanwhile, geoarchaeologists on Earth can lay the foundation for that work, including advocating for laws to protect and preserve space heritage, studying the effects extraterrestrial ecosystems have on items space missions leave behind and conducting international discussions regarding space heritage preservation and protection issues.
As for being part of a space mission himself?
“I’ll leave that to other geoarchaeologists,” Holcomb said. “There’s plenty to do down here, but I do hope to see an archaeologist in space before it’s all over.”
JOURNAL
Geoarchaeology
ARTICLE TITLE
Planetary geoarchaeology as a new frontier in archaeological science: Evaluating site formation processes on Earth's Moon
Two planets sharing same orbit around their star? Astronomers find strongest evidence yet
Wed, July 19, 2023
CAPE CANAVERAL, Fla. (AP) — Astronomers reported Wednesday the discovery of what could be two planets sharing the same orbit around their star.
They said it’s the strongest evidence yet of this bizarre cosmic pairing, long suspected but never proven.
Using a telescope in Chile, the Spanish-led team spotted a cloud of debris in the same orbit as an already confirmed planet circling this star, 370 light-years away in the constellation Centaurus. They suspect it’s either a planet in formation or remnants of a planet that once was.
Asteroids are known to accompany planets around their star — for example, Jupiter and its so-called Trojan asteroids. But planets in the same orbit “have so far been like unicorns,” noted study co-author Jorge Lillo-Box of Madrid’s Center for Astrobiology.
“They are allowed to exist by theory, but no one has ever detected them,” he said in a statement.
The scientists said they will need to wait until 2026 in order to properly track the two objects around the star known as PDS 70.
The confirmed planet with the suspected tagalong takes 119 years to complete a lap. A gas giant, it’s three times the size of Jupiter. Another gas giant is known to circle this star, albeit from a much greater distance.
Lead author Olga Balsalobre-Ruza of the Center for Astrobiology in Madrid said the findings, published in the journal Astronomy and Astrophysics, are “the first evidence” that such double worlds might exist.
“We can imagine that a planet can share its orbit with thousands of asteroids as in the case of Jupiter, but it is mind-blowing to me that planets could share the same orbit," she said in a statement.
___
The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Science and Educational Media Group. The AP is solely responsible for all content.
Marcia Dunn, The Associated Press
To stick or to bounce: Size determines the stickiness of cosmic dust aggregates
TOHOKU UNIVERSITY
Microparticle dust aggregates, which are thought to play a role in the formation of new planets, are less likely to stick together after a collision when the aggregates are larger.
Current evidence suggests that microparticles of cosmic dust collide and stick together to form larger dust aggregates that may eventually combine and develop into planets. Numerical models that accurately characterize the conditions required for colliding microparticle aggregates to stick together, rather than bounce apart, are therefore paramount to understanding the evolution of planets. Recent modeling suggests that dust aggregates are less likely to stick together after a collision as the size of the aggregates increases.
A team of astrophysicists performed numerical simulations of dust aggregate collisions, with equal-mass aggregates varying between 10,000 and 140,000 microns (one to 14 cm) in size, using soft-sphere discrete element methods. The discrete modeling system accounted for each particle within the aggregate rather than treating the aggregate as a single entity, and soft-sphere simulation assumed the rigidity of each particle of the aggregate but allowed for deformations that may occur during collision. Their modeling indicated that increasing the radius of microparticle dust aggregates decreased the sticking probability, or likelihood that two aggregates would stick together and form a larger aggregate after collision.
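The toy sketch below, which is not the study's code, shows the basic idea behind a soft-sphere contact model: while two bodies overlap they feel a damped spring plus a short-range adhesive pull, and whether they bounce or stick depends on how much kinetic energy the damping dissipates. All masses, stiffnesses and adhesion values are arbitrary illustrative choices.

```python
# Toy 1-D soft-sphere contact between two equal spheres (not the study's code).
# During overlap the pair feels a damped linear spring plus a constant
# adhesive pull; just outside contact only the adhesion acts. All parameter
# values below are arbitrary and purely illustrative.
def collide(v_impact, k=1.0e4, c=2.0, f_adh=0.05, m=1.0e-3,
            adh_range=1.0e-6, dt=1.0e-5, max_steps=500_000):
    m_red = 0.5 * m           # reduced mass of two equal spheres
    x, v = 1.0e-4, -v_impact  # gap (m); negative gap means overlap
    for _ in range(max_steps):
        if x < 0.0:                      # in contact
            f = -k * x - c * v - f_adh   # repulsion + damping + adhesion
        elif x < adh_range:              # adhesive zone just outside contact
            f = -f_adh
        elif v > 0.0:                    # separated and moving apart
            return "bounce"
        else:                            # still approaching, force-free
            f = 0.0
        v += (f / m_red) * dt            # semi-implicit Euler step
        x += v * dt
    return "stick"                       # never escaped the adhesive well

for speed in (0.01, 0.1, 1.0):
    print(f"impact speed {speed:5.2f} m/s -> {collide(speed)}")
```

Scaled up to aggregates of many thousands of particles, with one such contact law applied to every touching pair, this is essentially what a discrete element simulation integrates.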
The team published the results of their study in The Astrophysical Journal Letters.
"The formation process of kilometer-sized bodies, planetesimals, from cosmic dust, which is the initial stage of planet formation, has been one of the biggest problems in the theory of planet formation," said Hidekazu Tanaka, one of the authors of the study and professor at the Astronomical Institute in the Graduate School of Science at Tohoku University in Sendai, Japan. "The present study showed that the dust clumps that are the material for planets stop growing when they grow to a certain size, as large clumps are difficult to adhere to each other. Our results made the problem of planetesimal formation even more difficult. The adhesive growth of dust clumps is a key process in the planet-formation process."
The simulations suggest that collisional bouncing between large microparticle aggregates would decrease the formation of planetesimals, or the building blocks of planets. Kilometer-scale planetesimals form planets through collisional merging via mutual gravity.
Earlier modeling simulations and laboratory experiments characterizing the threshold for the sticking/bouncing barrier of dust aggregate collisions often produced conflicting results, which the research team and others hypothesized was due to varying sizes of aggregates. The results of the current study support this hypothesis.
It is currently unclear why the size of aggregates affects the sticking probability during a collision. Future studies aimed at dissecting the packing structure of aggregates over time may help scientists understand how aggregates can approach the scale of planetesimals. Studies of the contact sites between aggregates, where most energy is dissipated, after a collision may also unveil how larger aggregates eventually stick together.
Additionally, the simulations performed by the research team suggest that the sticking probability of particle aggregates may also be affected by the size of the individual particles that make up the aggregate and not just the radius of the entire aggregate.
The team acknowledges that the simulations performed in this study are far from comprehensive. Future simulations will include aggregates prepared by more realistic procedures and will account for acceleration, and laboratory experiments to fine-tune the model are also planned.
Beyond these simulations, the team has its sights set on larger aggregates, which may fundamentally change current theories of planet development. "We will use a supercomputer to perform large-scale numerical simulations of collisions between even larger dust clumps in order to investigate how difficult it is for large dust clumps to attach to each other. This will help to settle the question of whether the formation of planetesimals is possible through the adhesion of dust clumps or not," said Tanaka.
JOURNAL
The Astrophysical Journal Letters
Hubble images a starstruck galaxy
Reports and Proceedings
NASA/GODDARD SPACE FLIGHT CENTER
IMAGE: The irregular galaxy Arp 263 lurks in the background of this image from the NASA/ESA Hubble Space Telescope, but the view is dominated by a stellar photobomber, the bright star BD+17 2217. Arp 263, also known as NGC 3239, is a patchy, irregular galaxy studded with regions of recent star formation, and astronomers believe that its ragged appearance is due to its having formed from the merger of two galaxies. It lies around 25 million light-years away in the constellation Leo. Two different Hubble investigations into Arp 263, using two of Hubble's instruments, contributed data to this image. The first investigation was part of an effort to observe the sites of recent supernovae, such as the supernova SN 2012A that was detected just over a decade ago in Arp 263. Astronomers used Hubble's powerful Wide Field Camera 3 to search for lingering remnants of the colossal stellar explosion. The second investigation is part of a campaign using Hubble's Advanced Camera for Surveys to image all the previously unobserved peculiar galaxies in the Arp catalog, including Arp 263, in order to find promising subjects for further study using the NASA/ESA/CSA James Webb Space Telescope. The interloping foreground star, BD+17 2217, is adorned with two sets of crisscrossing diffraction spikes. The interaction of light with Hubble's internal structure means that concentrated bright objects, such as stars, are surrounded by four prominent spikes. Since this image of BD+17 2217 was created using two sets of Hubble data, the spikes from both images surround this stellar photobomber. The spikes are at different angles because Hubble was at different orientations when it collected the two datasets.
CREDIT: Text: European Space Agency (ESA). Image: ESA/Hubble & NASA, J. Dalcanton, A. Filippenko
Space telescope detects supernova blast so bright instruments couldn’t keep up
Rob Waugh
Contributor
Wed, July 19, 2023
The explosion was so bright instruments struggled to keep up (University of Alabama in Huntsville)
A space telescope has detected a gamma-ray burst (GRB) so bright that its instruments couldn't keep up. The burst was triggered by the collapse of a massive, distant star.
Gamma ray bursts are hugely energetic explosions – and this is believed to be the brightest ever observed.
It was accompanied by a huge supernova explosion, leaving behind a black hole.
Dr Peter Veres, an assistant professor with CSPAR, the University of Alabama in Huntsville's Center for Space Plasma and Aeronomic Research, said: "During a GRB, we see the death of a massive star, approximately 30 times more massive than the sun, and the formation of a black hole."
"The black hole launches a very fast jet close to the speed of light, and the jet will produce the gamma-ray burst. At later times, GRBs are visible at other wavelengths as well, from radio, or optical through very high-energy gamma-rays, which is called the afterglow of the GRB.
"This GRB was so bright, the afterglow showed up in the gamma ray burst monitor, which is very uncommon, and we could follow it for almost three hours.”
The explosion came from 2.4 billion light-years away in the constellation Sagitta.
GRB 221009A is also one of the nearest and possibly most energetic GRBs ever found, as detailed in a paper on the arXiv preprint server, which has been accepted for publication in The Astrophysical Journal Letters.
The GBM, or Gamma-ray Burst Monitor, is an instrument in low-Earth orbit aboard the Fermi Gamma-ray Space Telescope that can see the entire gamma-ray sky not blocked by the Earth and hunts for GRBs as part of its main programme.
"This gamma-ray burst was extremely bright. We expect to see one like this only every 10,000 years or so," says Dr Veres.
"We routinely detect GRBs at a rate of about five per week and keep an eye out if any of the GRBs are special in some way.
“This one was so bright, the instrument couldn't keep up with the large number of incoming photons. Most of the work, led by Stephen Lesage, was to figure out how to reconstruct the lost counts."
When the gamma rays enter these detectors, they interact with crystals in the instrument. The more energetic the gamma ray, the more light is produced.
By seeing which crystals light up, the GBM can tell the direction of the bursts. In all, the Fermi instrument has discovered over 3,500 GRBs, and 221009A is by far the brightest ever detected.
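The localization principle can be illustrated with a toy calculation; this is not the actual GBM algorithm. If each detector's count rate is assumed to scale with the cosine of the angle between its axis and the source, comparing observed rates across detectors picks out a direction on the sky. The detector axes, signal level and background below are all hypothetical.

```python
import numpy as np

# Toy illustration of localizing a burst from relative detector rates
# (not the actual GBM algorithm). Each detector's response is assumed to
# scale with the cosine of the angle between its axis and the source.
rng = np.random.default_rng(2)

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Hypothetical detector pointing axes (unit vectors)
axes = np.array([unit(v) for v in
                 [(1, 0, 0), (0, 1, 0), (0, 0, 1),
                  (-1, 0, 0), (0, -1, 0), (1, 1, 1)]])

true_dir = unit((0.3, 0.8, 0.5))
expected = 1000.0 * np.clip(axes @ true_dir, 0, None)  # signal counts
observed = rng.poisson(expected + 50.0)                # plus flat background

# Grid search over candidate sky directions, minimizing chi-square.
# The burst brightness (1000) is treated as known, purely to keep the toy simple.
best, best_chi2 = None, np.inf
for theta in np.linspace(0, np.pi, 90):
    for phi in np.linspace(0, 2 * np.pi, 180):
        d = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
        model = 1000.0 * np.clip(axes @ d, 0, None) + 50.0
        chi2 = np.sum((observed - model) ** 2 / model)
        if chi2 < best_chi2:
            best, best_chi2 = d, chi2

err = np.degrees(np.arccos(np.clip(best @ true_dir, -1.0, 1.0)))
print(f"Recovered direction within {err:.1f} degrees of the true one")
```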
By Catherine Thorbecke, CNN
Updated Sat July 22, 2023
Maria Korneeva/Moment RF/Getty Images
CNN —
A new crop of artificial intelligence tools carries the promise of streamlining tasks, improving efficiency and boosting productivity in the workplace. But that hasn’t been Neil Clarke’s experience so far.
Clarke, an editor and publisher, said he recently had to temporarily shutter the online submission form for his science fiction and fantasy magazine, Clarkesworld, after his team was inundated with a deluge of “consistently bad” AI-generated submissions.
“They’re some of the worst stories we’ve seen, actually,” Clarke said of the hundreds of pieces of AI-produced content he and his team of humans now must manually parse through. “But it’s more of the problem of volume, not quality. The quantity is burying us.”
“It almost doubled our workload,” he added, describing the latest AI tools as “a thorn in our side for the last few months.” Clarke said that he anticipates his team is going to have to close submissions again. “It’s going to reach a point where we can’t handle it.”
Since ChatGPT launched late last year, many of the tech world’s most prominent figures have waxed poetic about how AI has the potential to boost productivity, help us all work less and create new and better jobs in the future. “In the next few years, the main impact of AI on work will be to help people do their jobs more efficiently,” Microsoft co-founder Bill Gates said in a blog post recently.
But as is often the case with tech, the long-term impact isn’t always clear or the same across industries and markets. Moreover, the road to a techno-utopia is often bumpy and plagued with unintended consequences, whether it’s lawyers fined for submitting fake court citations from ChatGPT or a small publication buried under an avalanche of computer-generated submissions.
Big Tech companies are now rushing to jump on the AI bandwagon, pledging significant investments into new AI-powered tools that promise to streamline work. These tools can help people quickly draft emails, make presentations and summarize large datasets or texts.
In a recent study, researchers at the Massachusetts Institute of Technology found that access to ChatGPT increased productivity for workers who were assigned tasks like writing cover letters, “delicate” emails and cost-benefit analyses. “I think what our study shows is that this kind of technology has important applications in white collar work. It’s a useful technology. But it’s still too early to tell if it will be good or bad, or how exactly it’s going to cause society to adjust,” Shakked Noy, a PhD student in MIT’s Department of Economics, who co-authored the paper, said in a statement.
Neil Clarke, Editor of Clarkesworld Magazine. (Lisa R. Clarke)
Mathias Cormann, the secretary-general of the Organization for Economic Co-operation and Development, recently said the intergovernmental organization has found that AI can improve some aspects of job quality, but there are tradeoffs.
“Workers do report, though, that the intensity of their work has increased after the adoption of AI in their workplaces,” Cormann said in public remarks, pointing to the findings of a report released by the organization. The report also found that for non-AI specialists and non-managers, the use of AI had only a “minimal impact on wages so far” – meaning that for the average employee, the work is scaling up, but the pay isn’t.
Some workers feel like ‘guinea pigs’
Ivana Saula, the research director for the International Association of Machinists and Aerospace Workers, said that workers in her union have said they feel like “guinea pigs” as employers rush to roll out AI-powered tools on the job.
And it hasn’t always gone smoothly, Saula said. The implementation of these new tech tools has often led to more “residual tasks that a human still needs to do.” This can include picking up additional logistics tasks that a machine simply can’t do, Saula said, adding more time and pressure to a daily work flow.
The union represents a broad range of workers, including in air transportation, health care, public service, manufacturing and the nuclear industry, Saula said.
“It’s never just clean cut, where the machine can entirely replace the human,” Saula told CNN. “It can replace certain aspects of what a worker does, but there’s some tasks that are outstanding that get placed on whoever remains.”
Workers are also “saying that my workload is heavier” after the implementation of new AI tools, Saula said, and “the intensity at which I work is much faster because now it’s being set by the machine.” She added that the feedback they are getting from workers shows how important it is to “actually involve workers in the process of implementation.”
“Because there’s knowledge on the ground, on the frontlines, that employers need to be aware of,” she said. “And oftentimes, I think there’s disconnects between frontline workers and what happens on shop floors, and upper management, and not to mention CEOs.”
Perhaps nowhere are the pros and cons of AI for businesses as apparent as in the media industry. These tools offer the promise of accelerating if not automating copywriting, advertising and certain editorial work, but there have already been some notable blunders.
News outlet CNET had to issue “substantial” corrections earlier this year after experimenting with using an AI tool to write stories. And what was supposed to be a simple AI-written story on Star Wars published by Gizmodo earlier this month similarly required a correction and resulted in employee turmoil. But both outlets have signaled they will still move forward with using the technology to assist in newsrooms.
Others like Clarke, the publisher, have tried to combat the fallout from the rise of AI by relying on more AI. Clarke said he and his team turned to AI-powered detectors of AI-generated work to deal with the deluge of submissions, but found these tools weren't helpful because of how often they produce “false positives and false negatives,” especially for writers whose second language is English.
“You listen to these AI experts, they go on about how these things are going to do amazing breakthroughs in different fields,” Clarke said. “But those aren’t the fields they’re currently working in.”
iEarth: An interdisciplinary framework in the era of big data and AI for sustainable development
SCIENCE CHINA PRESS
The United Nations Sustainable Development Goals (SDGs) hold the key to humanity's future existence and growth. In a bid to optimize the implementation of these SDGs, Professor Peng Gong's team from the University of Hong Kong and Professor Huadong Guo's team from the Chinese Academy of Sciences have collaboratively introduced an innovative "iEarth" framework. This interdisciplinary framework is powered by Big Earth Data science and seeks to amalgamate various interdisciplinary methodologies and expertise. It aims to quantify the processes of Earth systems and human civilization, uncover the intricate interplay between natural ecosystems and human society, foster cross-disciplinary ideologies and solutions, and furnish explicit evidence and valuable scientific knowledge for sustainable development.
The inception of the iEarth concept springs from intelligent Mapping (iMap), and its further development is influenced by a spectrum of disciplinary and interdisciplinary studies. The team distinguishes four primary themes within the iEarth framework: iEarth data, iEarth science, iEarth analytics, and iEarth decision.
iEarth data comprises all data related to Earth systems, encapsulating natural systems and human societies. iEarth science delves into a multidisciplinary exploration of the natural system, human society, and their mutual interaction and feedback, focusing on the diverse traits of objects when interconnected. iEarth analytics presents a methodology inclusive of detection, prediction, assessment, and optimization for achieving SDGs by leveraging the "iEarth+" model, which is dedicated to transcending disciplinary boundaries and actively connecting Earth observations with other disciplines. iEarth decision supports the implementation of SDGs by monitoring progress, pinpointing drivers, simulating pathways, and performing cost-benefit evaluations. The holistic iEarth framework thus consolidates multi-source data, interdisciplinary knowledge, and advanced technology to establish a comprehensive data-science-analytics-decision support system for fostering sustainable environmental, social, and economic prosperity.
The 'intelligence' in the iEarth framework is characterized by its potential for active learning and knowledge synthesis through Big Earth Data models powered by Artificial Intelligence (AI). Consequently, the iEarth framework can also be seen as an AI model anchored on Big Earth Data. According to the team, the successful implementation of the iEarth framework necessitates significant investment in both hard and soft infrastructures.
With an aim to reinforce the vision and boost the capability of iEarth for sustainable development, the team has outlined key research directions, practical implications, and educational curricula. The ultimate objective is to shape and build an interdisciplinary and synergistic framework for research, practice, and education that helps in preserving our living planet.
See the article:
iEarth: an interdisciplinary framework in the era of big data and AI for sustainable development
https://doi.org/10.1093/nsr/nwad178
JOURNAL
National Science Review
Alistair Barr
Wed, July 19, 2023
Stefani Reynolds/Getty Images
GPT-4 users have complained that the OpenAI model is getting 'dumber.'
AI researchers studied the model to find out if this was true.
Their findings, published on Tuesday, challenge the assumption that AI models automatically improve.
One of the bedrock assumptions of the current artificial intelligence boom is that AI models "learn" and improve over time. What if that doesn't actually happen?
This is what users of OpenAI's GPT-4, the world's most powerful AI model, have been experiencing lately. They have gone on Twitter and OpenAI's developer forum to complain about a host of performance issues.
After I reported on this, OpenAI responded that it hasn't "made GPT-4 dumber."
AI researchers decided to settle this debate once and for all by conducting a study. The results were published on Tuesday, and I can't wait any longer to tell you the conclusion: I was right.
"We find that the performance and behavior of both GPT-3.5 and GPT-4 vary significantly across these two releases and that their performance on some tasks have gotten substantially worse over time," the authors of the study wrote.
These are serious AI researchers. The most prominent is Matei Zaharia, the CTO of Databricks, one of the top AI data companies, most recently valued at $38 billion.
You can read the rest of their findings here. What I'm most interested in is the new questions that these findings raise. Here's the most fascinating one.
"It is also an interesting question whether an LLM service like GPT4 is consistently getting 'better' over time," Zaharia and his research colleagues wrote in their paper.
Another common term for this kind of AI is machine learning. The magic of this technology is that it can ingest new data and use that to get better over time, without human software engineers manually updating code. Again, this is the core idea that is driving today's AI frenzy and accompanying stock market surges.
If GPT-4 is getting worse, not better, this premise begins to feel shaky.
The Microsoft factor
Microsoft has invested heavily in OpenAI, the creator of GPT-4. Microsoft is also baking this technology into its software, and charging users a lot for the new capabilities.
On Tuesday, the same day Zaharia & Co. published their paper, Microsoft unveiled pricing for Microsoft 365 Copilot, a new AI-powered assistant for its popular cloud software such as Office 365. It costs an extra $30 per user a month, on top of what customers are already paying.
Microsoft's market value jumped more than $150 billion after this announcement, showing that Wall Street is betting on AI, and the impact the technology will have on the company's products.
This recent GPT-4 research paper provides a healthy dose of skepticism to the assumptions that are driving these wild swings in value.
Scientist Gary Marcus read Zaharia's study and highlighted how unstable LLMs are. So unstable that relying on them for high-end business products might not be a good idea.
"Who in their right mind would rely on a system that could be 97.6% correct on a task in March and 2.4% correct on same task in June?," he tweeted, citing one of the findings in the research paper. "Important results. Anyone planning to rely on LLMs, take note."
"Prediction: this instability will be LLMs' undoing," Marcus added. "They will never be as commercially successful as the VC community is imagining, and some architectural innovation that allows for greater stability will largely displace LLMs within the next 10 years."
Spokespeople from OpenAI and Microsoft didn't respond to a request for comment on Wednesday.