Saturday, June 03, 2023

US poet laureate dedicates ode to Europa for NASA mission to Jupiter's icy moon

Story by Colette Luke and Steve Gorman • Yesterday 

U.S. Poet Laureate Ada Limón sits for interview with Reuters in Washington
© Thomson Reuters

WASHINGTON (Reuters) - When U.S. Poet Laureate Ada Limon was asked to write a poem for inscription on a NASA spacecraft headed to Jupiter's icy moon Europa, she felt a rush of excitement at the honor, followed by bewilderment at the seeming enormity of the task.



"Where do you start a poem like that?" she recalled thinking just after receiving the invitation in a call at the Library of Congress, where the 47-year-old poet is serving a two-year second term as the nation's top bard.

On Thursday night, exactly one year later, in a ceremony at the library across the street from the U.S. Capitol, Limon's 21-line creation, "In Praise of Mystery: A Poem for Europa," was unveiled and read aloud to a public audience for the first time, receiving a standing ovation.

The entire poem, a free-verse ode consisting of seven three-line stanzas, or tercets, will be engraved in Limon's handwriting on the exterior of the Europa Clipper, due for launch from NASA's Kennedy Space Center in Florida in October 2024.



Now being assembled at NASA's Jet Propulsion Laboratory near Los Angeles, the spacecraft - larger than any other flown by NASA on an interplanetary mission - should reach Jovian orbit in 2030 after a 1.6 billion-mile (2.6 billion km) journey.



The solar-powered Clipper will have an array of instruments designed to study the vast ocean of water that scientists strongly believe lies beneath Europa's icy crust, potentially harboring conditions suitable for life.

During its mission, the spacecraft is expected to make nearly 50 fly-bys of Europa, rather than continuously orbit the moon, because doing so would bring it too close for too long to Jupiter's powerfully harsh radiation belts.

UNITING TWO WATER WORLDS

Limon's "Poem for Europa" is less a meditation on science - though its first line seems to allude to a rocket launch - as it is an ode to nature and the awe it can inspire in humankind.

Except for its title, it does not mention Europa explicitly but refers to its place among Jupiter's natural satellites, and to the commonality of water that it shares with Earth: "O second moon, we too are made of water, of vast and beckoning seas."

It concludes: "We, too, are made of wonders, of great / and ordinary loves, of small invisible worlds, / of a need to call out through the dark."

"I wanted to point back to the Earth, and I think the biggest part of the poem is that it unites those two things," she told Reuters in an interview in the Library of Congress poetry room hours before the piece was unveiled. "It unites both space and this incredible planet that we live on."

Limon, who won the National Book Critics Circle Award for her poetry collection "The Carrying," recounted having great difficulty when she first tried composing the Europa poem at a writers' retreat in Hawaii.

Her breakthrough came on a suggestion from her husband, who Limon said encouraged her to "stop writing a NASA poem" and to create "a poem that you would write" instead. "That changed everything," she remembered.

The only firm parameters NASA gave her were to relate something about the mission, to make it understandable to readers as young as 9, and to write no more than 200 words.

At the Library of Congress on Thursday night, Limon said she considers the Europa commission "the greatest honor and privilege of my life."

Reflecting earlier on what the assignment meant, Limon said she wonders at "all of the human eyes and human ears and human hearts that will receive this poem and ... it's the audience that really overwhelms me."

A writer of Mexican ancestry, Limon became the first Latina U.S. poet laureate and the 24th individual to hold the title when she was first appointed in September 2022.

(Reporting by Steve Gorman in Los Angeles and Colette Luke in Washington; Additional reporting by Kimberley Vinnell in Washington. Editing by Gerry Doyle)

Why a federal government agency is warning people not to keep a lot of money sitting in Venmo, PayPal, or Cash App

Story by amcdade@insider.com (Aaron McDade) • Yesterday 

The Consumer Financial Protection Bureau has released a new report advising Americans against storing too much money in payment apps like Venmo and PayPal
 Pavlo Gonchar/SOPA Images/LightRocket via Getty

The Consumer Financial Protection Bureau advised Americans against storing money in payment apps.

Surveys have found that about 76% of all Americans have used an app like Venmo at least once.

Money stored in apps is often not insured, unlike deposits at larger banks insured by the government.

You should probably stop leaving money in your Venmo and PayPal accounts for days, weeks, or even months at a time.

That's the official view of a federal government agency, which is warning users of popular payment apps like Venmo, PayPal, and Cash App that they should avoid keeping large amounts of money in the apps because those funds could be at risk.

Because the services are apps rather than federally regulated banks, deposits held in payment apps may not be federally insured the way they are at a normal bank, the Consumer Financial Protection Bureau said in a report and a consumer advisory published Thursday.




While the apps are massively popular - a 2022 survey found that about 76% of Americans, and a staggering 85% of people aged 18 to 29, have used a payment app at least once - the CFPB warned that too many Americans are stashing money in them.

The apps saw a collective $893 billion in transactions last year, per the CFPB's report. That figure is expected to nearly double to $1.6 trillion by 2027. Given the massive amounts of money passing through the apps, the CFPB said it is concerning that they lack the federal regulation and oversight banks face.
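
For a sense of how fast that growth would be, going from $893 billion in 2022 to $1.6 trillion in 2027 is roughly a 1.8x increase, or about 12% compounded per year. Here is a minimal back-of-the-envelope sketch of that arithmetic; the dollar figures come from the CFPB report, while the growth-rate calculation is just an illustration:

```python
# Back-of-the-envelope check of the CFPB projection: $893 billion in
# payment-app transactions in 2022, projected to reach $1.6 trillion by 2027.
transactions_2022 = 893e9   # dollars
transactions_2027 = 1.6e12  # dollars (projected)
years = 2027 - 2022

growth_multiple = transactions_2027 / transactions_2022
implied_cagr = growth_multiple ** (1 / years) - 1

print(f"Growth multiple over {years} years: {growth_multiple:.2f}x")  # ~1.79x
print(f"Implied compound annual growth rate: {implied_cagr:.1%}")     # ~12.4%
```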

Without deposit insurance, if money is no longer accessible because of something like a bankruptcy filing, it could be gone forever with little to no chance of the user being reimbursed, the CFPB said. Deposit insurance has been a popular topic across the finance industry this year, with the collapse of three regional banks sparking discussion and questions about which bank deposits are insured by the federal government.


"As tech companies expand into banking and payments, the CFPB is sharpening its focus on those that sidestep the safeguards that local banks and credit unions have long adhered to," CFPB Director Rohit Chopra said in a statement.

The CFPB also found that user agreements for the payment apps often contain little to no information about whether a user's money can be insured in a given app, or whether their money may be used for other investments or purposes within the company while it is held in the app.

The $500 million robot pizza startup you never heard of has shut down, report says

Story by nrennolds@insider.com (Nathan Rennolds) 

A Zume truck. Getty Images © Provided by Business Insider

A robot pizza startup that raised almost $500 million has shut down, The Information reported.

Zume aimed to automate the pizza-making process and raised funds from the likes of SoftBank.

The company struggled with technological problems before changing its business model.



Zume, the robot pizza delivery startup that raised close to $500 million, has shut down, The Information reported.

The company was founded in 2015 and aimed to automate the pizza-making process, but suffered a series of technological difficulties. It then changed its business model and tried to become a sustainable-packaging manufacturer.

The failure comes despite Zume raising hundreds of millions from investors including SoftBank and AME Cloud Ventures, per Crunchbase.

According to The Information, Zume was "insolvent," and Sherwood Partners, a restructuring firm, had been instructed to sell the company's assets. It ceased trading in May, according to a person with knowledge of the matter, per the report.

Zume had struggled with problems like stopping melted cheese from sliding off its pizzas while they cooked in moving trucks, per Bloomberg. Its difficulties led to a string of high-profile departures and financial problems.

It made a series of layoffs in 2020, cutting headcount by more than 500 employees — including all of its robotics and food-delivery truck business, Insider previously reported.

In a leaked email seen by Insider at the time, cofounder and CEO Alex Garden blamed the job cuts on a series of funding deals that had fallen through, as well as the economic impact of the pandemic.

Zume did not immediately respond to a request for comment from Insider, made outside normal working hours.

Three industries ripe for automation, according to a robotics guru

Story by Michael Wayland • CNBC

The automotive and logistics industries are among the most heavily invested in automation in the U.S. economy.

But there's still room to run for robotics in a host of other industries.

Jeff Burnstein, an automation industry guru and president of the Association for Advancing Automation, outlines how automation could be applied in agriculture, food processing and health care.


A software and robotics machine called mGripAI from Massachusetts-based Soft Robotics sorts artificial pieces of chicken into trays for packaging at an automation conference held by the Association for Advancing Automation in Detroit. © Provided by CNBC

DETROIT — The automotive and logistics industries are no strangers to robots.

They're among the most heavily invested businesses in automation in the U.S. economy, using robots to sort packages, transport goods and assist in building vehicles.

But other industries where robotics haven't yet taken hold may be potential investment opportunities and expansion areas for automation companies in the coming years.

Those emerging areas intrigue Jeff Burnstein, an automation-industry guru and president of the Association for Advancing Automation. His trade group represents more than 1,000 global companies involved in robotics, machine vision, motion control, and motors and related technologies.

Burnstein, who recently received a prestigious award for his more than 40 years in the industry, believes automation and robotics could greatly assist in doing the "dull, dirty, dangerous jobs" that people don't necessarily want to do.



Jeff Burnstein (right center), president of the Association for Advancing Automation, after receiving a Joseph F. Engelberger Robotics Award for his more than 40-year career in the industry. © Provided by CNBC

"If you look at what's driving a lot of the automation in many industries it's shortage of people," he said on the sidelines of an automation convention last week in Detroit.

Labor shortages, led by the manufacturing industry, are the key driver in the growth of automation, he said.

Here are three industries Burnstein predicts are next for automation:

Agriculture

The agriculture industry is already testing or using various automated, if not autonomous, technologies to make operations more efficient and safer. The technology also helps cut costs.

Tractor maker Deere & Co., for example, offers a suite of automated-assistance features such as turning and guidance for crop row lines. Deere is working on an autonomous tractor that can "see, think, and work on its own, freeing up time for farmers to complete other tasks simultaneously," according to its website.

Other automated technologies for agriculture include drones that can spray pesticides over crops, remote-controlled tractors, automated harvesting systems, and other data and logistics farming apps.


Deere's autonomous 8R tractor © Provided by CNBC

Food processing

Harvesting and sorting chicken parts is exactly the kind of dull, dirty, dangerous job that automation could assist in doing, Burnstein says.

At the automation convention, at least two companies were showcasing food-sorting robots whose abilities included identifying what types of cuts fit into a tray for packaging.

Beyond efficiency advantages, there are health and safety benefits, too, advocates point out.

"The machine can't sneeze. It can't rub its face. It can't have hair fall into anything. So, it's really safe. And less hands touching it, the less introduction for any disease," said Anthony Romeo, a representative of Massachusetts-based companies Cognex Corp. and Soft Robotics, one of the companies working on sorting food and chicken parts, who also attended the convention.


Employees of Tyson Foods © Provided by CNBC

In 2021, Tyson Foods said it would invest over $1.3 billion in new automation capabilities through 2024 to increase yields and reduce both labor costs and associated risks — and ultimately deliver savings for the meat processor.

Tyson CEO Donnie King last month told investors the company is continuing to "invest in automation and digital capabilities with opportunities to improve our yield."

He said the company has 50 lines for deboning chickens that are fully automated.

Pilgrim's Pride, one of the world's largest chicken producers, has also announced substantial investments in automation, including more than $100 million announced in 2021.
Health care

Automation in health care could be viable in a variety of cases — from transportation of goods and personal medications to someone's bedside, to cleaning and disinfecting tools.

"You can do that robotically," Burnstein said. "If you're having trouble finding people that could be a good solution. There's all kinds of those things and then drug discovery, of course, and other applications."

One notable company currently in the space is Aethon, a Pittsburgh-based robotics company that's made strides in the health-care sector with an autonomous mobile robot called the TUG. The robots are capable of navigating around a hospital independently, according to the company's website.

The TUG can be programmed to avoid obstacles and even operate elevators, according to the company.

It's one example of an AMR, or autonomous mobile robot: a type of vehicle that can perform several different delivery tasks, which Burnstein called "hot in automation" at the moment.



Hackers use flaw in popular file transfer tool to steal data, researchers say

Story by Zeba Siddiqui • Yesterday 

 A computer keyboard lit by a displayed cyber code is seen in this illustration picture
© Thomson Reuters


SAN FRANCISCO (Reuters) - Hackers have stolen data from the systems of a number of users of the popular file transfer tool MOVEit Transfer, U.S. security researchers said on Thursday, one day after the maker of the software disclosed that a security flaw had been discovered.

Software maker Progress Software Corp, after disclosing the vulnerability on Wednesday, said it could lead to potential unauthorized access into users' systems.

The managed file transfer software made by the Burlington, Massachusetts-based company allows organizations to transfer files and data between business partners and customers.

It was not immediately clear which or how many organizations use the software or were impacted by potential breaches. Chief Information Officer Ian Pitt declined to share those details, but said Progress Software had made fixes available since it discovered the vulnerability late on May 28.

The software's eponymous cloud-based service had also been affected by the flaw, he told Reuters.

"As of now we see no exploit of the cloud platform," he said.

Cybersecurity firm Rapid7 Inc and Mandiant Consulting - owned by Alphabet Inc's Google - said they had found a number of cases in which the flaw had been exploited to steal data.

"Mass exploitation and broad data theft has occurred over the past few days," Charles Carmakal, chief technology officer of Mandiant Consulting, said in a statement.

Such "zero-day," or previously unknown, vulnerabilities in managed file transfer solutions have led to data theft, leaks, extortion and victim-shaming in the past, Mandiant said.

"Although Mandiant does not yet know the motivation of the threat actor, organizations should prepare for potential extortion and publication of the stolen data," Carmakal said.

Rapid7 said it had noticed an uptick in cases of compromise linked to the flaw since it was disclosed.

Progress Software has outlined steps users at risk can take to mitigate the impact of the security vulnerability.

Pitt did not have a comment on who might have been trying to steal data by exploiting the flaw.

"We have no evidence of it being used to spread malware," he said.

MOVEit Transfer was used by a relatively "small" number of customers compared with those of the company's other software products, which number more than 20, he said.

"We have forensics partners on board and we are working with them to make sure that we have an ever-evolving grasp of the situation."

(Reporting by Zeba Siddiqui in San Francisco; Editing by Christopher Cushing)


This Play Store malware was downloaded over 420 million times

Story by MobileSyrup • Thursday, June 1, 2023

New Android spyware has been discovered in Play Store apps that have collectively been downloaded over 420 million times.

The spyware, dubbed SpinOK by cybersecurity researchers Doctor Web (via Bleeping Computer), collects data from your device and sends it to remote servers. It also displays ads and manipulates your clipboard.

As shared by Doctor Web, SpinOK is a malicious SDK (software development kit) that developers can use to add mini-games, tasks and prizes to their apps. These features are meant to “spark user interest” and keep users in the app while the module quietly collects information in the background.

The malicious SDK’s spying and information collection capabilities include:

Sending information about your device, such as its model, OS version, screen size, battery level, etc., to remote servers.

Using your gyroscope and magnetometer sensors to detect whether you are using a real device or a virtual one; this is done to evade security analysis and detection.

Displaying ads on your screen.

Scanning your device for files and directories and sending their names and locations to the remote server.

Stealing specific files from your device if instructed by the server.

Copying or replacing the contents of your clipboard with malicious data.

Doctor Web has identified 101 apps on the Play Store that contain the SpinOK module. These apps have been downloaded more than 420 million times in total, posing a huge security risk for Android users worldwide.

The most popular apps among them are:

Noizz: video editor with music – At least 100 million downloads
Zapya – File Transfer, Share – At least 100 million downloads
VFly: video editor&video maker – At least 50 million downloads
MVBit – MV video status maker – At least 50 million downloads
Biugo – video maker&video editor – At least 50 million downloads
Crazy Drop – At least 10 million downloads
Cashzine – Earn money reward – At least 10 million downloads
Fizzo Novel – Reading Offline – At least 10 million downloads
CashEM: Get Rewards – At least 5 million downloads
Tick: watch to earn – At least 5 million downloads

A full list of infected apps can be found on Doctor Web's website.

Bleeping Computer suggests that Google has removed most of these apps from the Play Store, except for Zapya, which has been updated to remove the SpinOK module. However, if you have already installed any of these apps on your device, you should take action immediately.

You should uninstall the app from your device, even if it has been removed from the Play Store, and then run an antivirus scan to make sure no traces of the malware are left.
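
For readers comfortable with Android's developer tools, one way to double-check a device is to dump the installed package list over adb and look for suspicious entries. The sketch below is only an illustration, not an official detection tool: it assumes adb is installed and USB debugging is enabled, and the keyword list is hypothetical, since the article names the apps by their store titles rather than their package IDs, so treat any match purely as a prompt for manual review.

```python
# Minimal sketch: list installed Android packages via adb and flag names that
# loosely resemble apps cited in the SpinOK report. Assumes adb is on the PATH,
# USB debugging is enabled, and a device is connected. The keyword fragments
# below are illustrative guesses, not confirmed package IDs.
import subprocess

SUSPECT_KEYWORDS = ["noizz", "zapya", "vfly", "mvbit", "biugo",
                    "cashzine", "fizzo", "cashem"]

def installed_packages():
    # `adb shell pm list packages` prints lines like "package:com.example.app"
    out = subprocess.run(
        ["adb", "shell", "pm", "list", "packages"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split(":", 1)[1].strip() for line in out.splitlines() if ":" in line]

def flag_suspects(packages):
    return [p for p in packages if any(k in p.lower() for k in SUSPECT_KEYWORDS)]

if __name__ == "__main__":
    suspects = flag_suspects(installed_packages())
    if suspects:
        print("Review these packages manually:")
        for pkg in suspects:
            print("  " + pkg)
    else:
        print("No package names matched the keyword list.")
```

Uninstalling from the device's settings and running a reputable antivirus scan, as suggested above, remains the simpler route; a script like this only helps spot candidates for review.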

Source: Doctor Web Via: Bleeping Computer

AI hypocrisy: OpenAI, Google and Anthropic won't let their data be used to train other AI models, but they use everyone else's content

Story by insider@insider.com (Alistair Barr) • Yesterday

Samuel Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law May 16, 2023 in Washington, DC. 
Win McNamee/Getty Images © Provided by Business Insider

Microsoft-backed OpenAI, Google and Anthropic ban the use of their content to train other AI models.

However, these companies have been using other online content for their own model training. 

Can Big Tech have it both ways? Reddit and others are trying to stop this.

In the new age of generative AI, big tech companies are following a "do as I say, not as I do" strategy when it comes to the use of online content.

Microsoft-backed OpenAI, Google, and Google-backed Anthropic have for years been using online content created by other companies to train their generative AI models. This was done without asking for specific permission, and it's part of a brewing legal battle that will decide the future of the web and how copyright laws are applied in this new world.

The tech industry will likely argue that their approach is fair use. That has yet to be decided. However, these big tech companies won't let their own content be used to train other AI models. So why should they be allowed to do this to everyone else?

Take a look at the terms of service for Claude, Anthropic's AI assistant:

"You may not access or use the Services in the following ways, and if any of these restrictions are inconsistent with or ambiguous in relation to the Acceptable Use Policy, the Acceptable Use Policy controls: To develop any products or services that compete with our Services, including to develop or train any artificial intelligence or machine learning algorithms or models."

Here's an excerpt from the top of Google's generative AI terms of use:

"You may not use the Services to develop machine learning models or related technology."

And here's the relevant section from OpenAI's terms of use. This is the company behind ChatGPT.

"You may not... use output from the Services to develop models that compete with OpenAI."

These companies are not dumb, but they are hypocritical

These companies are not dumb. They know that quality content is vital for training new AI models. So it makes sense that they won't allow their output to be used this way.

But why would any other website or company let their content be freely used by these giant tech companies to train their models?

Insider asked OpenAI, Google and Anthropic for comment on Friday. At the time of publication, they had not responded.

Reddit and other companies say enough is enough

Other companies are just beginning to realize what's been happening, and they are not happy. Reddit, which has been used for years in AI model training, plans to start charging for access to its data.

"The Reddit corpus of data is really valuable. But we don't need to give all of that value to some of the largest companies in the world for free," said Steve Huffman, CEO of Reddit.

In April, Elon Musk accused Microsoft, the main backer of OpenAI, of illegally using Twitter's data to train AI models. "Lawsuit time," he tweeted.

"There is so much wrong w/ this premise I don't even know where to start," a Microsoft spokesman wrote in an email to Insider when asked for comment.

OpenAI's CEO Sam Altman is trying to be more thoughtful on this issue, by working on new AI models that respect copyright. "We're trying to work on new models where if an AI system is using your content, or if it's using your style, you get paid for that," he said recently, according to Axios.

Publishers, including Insider, which produced this story, have a vested interest here. Some publishers, including News Corp., are already pushing tech companies to pay to use their content for training AI models.

The current way AI models are trained 'breaks' the web

One former Microsoft executive believes something is wrong here. Steven Sinofsky recently said the current way AI models are trained "breaks" the web.

"Crawling used to be allowed in exchange for clicks. But now the crawling simply trains a model and no value is ever delivered to the creator(s) / copyright holders," he tweeted. Insider asked him for comment, but he was traveling on Friday and couldn't respond.


Japan privacy watchdog warns ChatGPT-maker OpenAI on user data

Story by Kantaro Komiya and Sam Nussey • Yesterday

Illustration shows ChatGPT © Thomson Reuters

By Kantaro Komiya and Sam Nussey

TOKYO (Reuters) - Japan's privacy watchdog said on Friday it has warned OpenAI, the Microsoft-backed startup behind the ChatGPT chatbot, not to collect sensitive data without people's permission.

OpenAI should minimise the sensitive data it collects for machine learning, the Personal Information Protection Commission said in a statement, adding it may take further action if it has more concerns.

Regulators around the world are scrambling to draw up rules governing the use of generative artificial intelligence (AI), which can create text and images, the impact of which proponents compare to the arrival of the internet.

While Japan has been on the backfoot with some recent technology trends, it is seen as having greater incentive to keep pace with advances in AI and robotics to maintain productivity as its population shrinks.



The watchdog noted the need to balance privacy concerns with the potential benefits of generative AI including in accelerating innovation and dealing with problems such as climate change.


Japan is the third-largest source of traffic to OpenAI's website, according to analytics firm Similarweb.

OpenAI CEO Sam Altman in April met Prime Minister Fumio Kishida with an eye to expansion in Japan, ahead of the Group of Seven (G7) leaders summit where Kishida led a discussion on regulating AI.

The EU, a global trendsetter on tech regulation, set up a taskforce on ChatGPT and is working on what could be the first set of rules to govern AI.

In the meantime, the rapid spread of such chatbots has meant regulators have had to rely on existing rules to bridge the gap.

Italian regulator Garante had ChatGPT taken offline before the company agreed to install age verification features and let European users block their information from being used to train the system.

Altman last week said OpenAI had no plans to leave Europe after earlier suggesting the startup might do so if EU regulations were too difficult to comply with.

(Reporting by Kantaro Komiya and Sam Nussey; Editing by Jacqueline Wong, Christopher Cushing and Sharon Singleton)

Factbox-Governments race to regulate AI tools

Story by Reuters • Yesterday


(Reuters) - Rapid advances in artificial intelligence (AI) such as Microsoft-backed OpenAI's ChatGPT are complicating governments' efforts to agree laws governing the use of the technology.

Here are the latest steps national and international governing bodies are taking to regulate AI tools:

AUSTRALIA

* Seeking input on regulations

The government is consulting Australia's main science advisory body and is considering next steps, a spokesperson for the industry and science minister said in April.

BRITAIN

* Planning regulations

The Financial Conduct Authority, one of several state regulators that has been tasked with drawing up new guidelines covering AI, is consulting with the Alan Turing Institute and other legal and academic institutions to improve its understanding of the technology, a spokesperson told Reuters.

Britain's competition regulator said on May 4 it would start examining the impact of AI on consumers, businesses and the economy and whether new controls were needed.

Britain said in March it planned to split responsibility for governing AI between its regulators for human rights, health and safety, and competition, rather than creating a new body.

CHINA

* Planning regulations

China's cyberspace regulator in April unveiled draft measures to manage generative AI services, saying it wanted firms to submit security assessments to authorities before they launch offerings to the public.

Beijing will support leading enterprises in building AI models that can challenge ChatGPT, its economy and information technology bureau said in February.

EUROPEAN UNION

* Planning regulations

The U.S. and EU should push the AI industry to adopt a voluntary code of conduct within months to provide safeguards while new laws are developed, EU tech chief Margrethe Vestager said on May 31. Vestager said she believed a draft could be drawn up "within the next weeks", with a final proposal for industry to sign up "very, very soon".

Key EU lawmakers on May 11 agreed on tougher draft rules to rein in generative AI and proposed a ban on facial surveillance. The European Parliament will vote on the draft of the EU's AI Act in June.

EU lawmakers had reached a preliminary deal in April on the draft that could pave the way for the world's first comprehensive laws governing the technology. Copyright protection is central to the bloc's effort to keep AI in check.

The European Data Protection Board, which unites Europe's national privacy watchdogs, set up a task force on ChatGPT in April.


The European Consumer Organisation (BEUC) has joined in the concern about ChatGPT and other AI chatbots, calling on EU consumer protection agencies to investigate the technology and the potential harm to individuals.

FRANCE

* Investigating possible breaches

France's privacy watchdog CNIL said in April it was investigating several complaints about ChatGPT after the chatbot was temporarily banned in Italy over a suspected breach of privacy rules.

France's National Assembly approved in March the use of AI video surveillance during the 2024 Paris Olympics, overlooking warnings from civil rights groups.

G7

* Seeking input on regulations

Group of Seven leaders meeting in Hiroshima, Japan, acknowledged on May 20 the need for governance of AI and immersive technologies and agreed to have ministers discuss the technology as the "Hiroshima AI process" and report results by the end of 2023.

G7 nations should adopt "risk-based" regulation on AI, G7 digital ministers said after a meeting in April in Japan.

IRELAND

* Seeking input on regulations

Generative AI needs to be regulated, but governing bodies must work out how to do so properly before rushing into prohibitions that "really aren't going to stand up", Ireland's data protection chief said in April.

ITALY

* Investigating possible breaches

Italy's data protection authority Garante plans to review other artificial intelligence platforms and hire AI experts, a top official said on May 22.

ChatGPT became available again to users in Italy in April after being temporarily banned over concerns by the national data protection authority in March.

JAPAN

* Investigating possible breaches

Japan's privacy watchdog said on June 2 it has warned OpenAI not to collect sensitive data without people's permission and to minimise the sensitive data it collects, adding it may take further action if it has more concerns.

SPAIN

* Investigating possible breaches

Spain's data protection agency said in April it was launching a preliminary investigation into potential data breaches by ChatGPT. It has also asked the EU's privacy watchdog to evaluate privacy concerns surrounding ChatGPT, the agency told Reuters in April.

U.S.

* Seeking input on regulations

The U.S. Federal Trade Commission's chief said on May 3 the agency was committed to using existing laws to keep in check some of the dangers of AI, such as enhancing the power of dominant firms and "turbocharging" fraud.

Senator Michael Bennet introduced a bill in April that would create a task force to look at U.S. policies on AI, and identify how best to reduce threats to privacy, civil liberties and due process.

The Biden administration had earlier in April said it was seeking public comments on potential accountability measures for AI systems.

President Joe Biden has also told science and technology advisers that AI could help to address disease and climate change, but it was also important to address potential risks to society, national security and the economy.

(Compiled by Amir Orusov and Alessandro Parodi in Gdansk; editing by Jason Neely, Kirsten Donovan and Milla Nissi)

Maryland school district sues Meta, Google, and TikTok over ‘mental health crisis’

Story by Emma Roth • Yesterday



 Illustration by Amelia Holowaty Krales / The Verge

A Maryland school district is suing Meta, Google, Snap, and TikTok owner ByteDance for allegedly contributing to a “mental health crisis” among students. A lawsuit filed by the Howard County Public School System on Thursday claims the social networks operated by these companies are “addictive and dangerous” products that have “rewired” the way kids “think, feel, and behave.”

The lawsuit cites a laundry list of issues on Instagram, Facebook, YouTube, Snapchat, and TikTok that it accuses of harming kids. That includes the (allegedly) addictive “dopamine-triggering rewards” on each app, such as TikTok’s For You page, which leverages data about user activity to provide an endless stream of suggested content. It also mentions Facebook and Instagram’s recommendation algorithms and “features that are designed to create harmful loops of repetitive and excessive product usage.”

Additionally, the school district accuses each platform of encouraging “unhealthy, negative social comparisons, which in turn cause body image issues and related mental and physical disorders” in kids. Other parts of the lawsuit address “defective” parental controls in each app, along with safety gaps it alleges promote child sexual exploitation.


“Over the past decade, Defendants have relentlessly pursued a strategy of growth-at-all costs, recklessly ignoring the impact of their products on children’s mental and physical health,” the lawsuit states. “In a race to corner the ‘valuable but untapped’ market of tween and teen users, each Defendant designed product features to promote repetitive, uncontrollable use by kids.”

The Howard County Public School System is far from the only school district that has decided to take legal action against social media companies as of late. In addition to two other school districts in Maryland, school systems in Washington state, Florida, California, Pennsylvania, New Jersey, Alabama, Tennessee, and others have filed similar lawsuits over the negative effects that social media has had on the mental health of kids.

“We’ve invested in technology that finds and removes content related to suicide, self-injury or eating disorders before anyone reports it to us,” Antigone Davis, Meta’s head of safety, says in an emailed statement to The Verge. “These are complex issues, but we will continue working with parents, experts and regulators such as the state attorneys general to develop new tools, features and policies that meet the needs of teens and their families.”

Google denies the allegations outlined in the lawsuit, with company spokesperson José Castañeda saying in a statement to The Verge, “In collaboration with child development specialists, we have built age-appropriate experiences for kids and families on YouTube, and provide parents with robust controls.” Meanwhile, Snap spokesperson Pete Boogaard says that the company “vet[s] all content before it can reach a large audience, which helps protect against the promotion and discovery of potentially harmful material.” ByteDance didn’t immediately respond to The Verge’s request for comment.

Critics have drawn attention to social media’s potential impact on children and teenagers, particularly after Facebook whistleblower Frances Haugen came forward with a trove of internal documents that indicated Meta knew about the potential harm Instagram had on some young users. Last week, US Surgeon General Dr. Vivek Murthy issued a public advisory that calls social media a “profound risk of harm to the mental health and well-being of children and adolescents.”

Some states have responded to the safety issues posed by social media by enacting laws that prevent kids from signing up for social media sites. While Utah will bar children under the age of 18 from using social media without parental consent starting next year, Arkansas has passed similar legislation preventing underage kids from signing up for social networks. At the same time, a flurry of national online safety laws, some of which could implement some sort of online age verification system, has made their way to Congress despite warnings from civil liberties and privacy advocates.
Twitter's head of brand safety and ad quality to leave -source
Story by By Sheila Dang • Yesterday


(Reuters) - Twitter's head of brand safety and ad quality, A.J. Brown, has decided to leave the company, according to a source familiar with the matter on Friday, the second safety leader to depart in a matter of days.

The latest departure adds to a growing challenge for new Twitter CEO Linda Yaccarino, even before she steps into the role.

On Thursday, Ella Irwin told Reuters that she resigned from her role as vice president of product for trust and safety at the social media company, where she oversaw content moderation efforts and often responded to users with questions about suspended accounts.


Brown worked on efforts to prevent ads from appearing next to unsuitable content.

Platformer and the Wall Street Journal earlier reported Brown's departure.

Since Tesla CEO Elon Musk acquired Twitter in October, the platform has struggled to retain advertisers, who were wary about the placement of their ads after the company laid off thousands of employees.

Musk's hiring of Yaccarino, former ad chief at Comcast's NBCUniversal, signaled that ad sales remained a priority for Twitter even as it works to grow subscription revenue.

Twitter and Brown did not immediately respond to Reuters' requests for comment.

(Reporting by Tiyashi Datta in Bengaluru and Sheila Dang in Dallas; Editing by Maju Samuel and Marguerita Choy)


Buzzworthy: Honeybee health blooming at federal facilities across the country




CONCORD, N.H. (AP) — While judges, lawyers and support staff at the federal courthouse in Concord, New Hampshire, keep the American justice system buzzing, thousands of humble honeybees on the building’s roof are playing their part in a more important task — feeding the world.

The Warren B. Rudman courthouse is one of several federal facilities around the country participating in the General Services Administration’s Pollinator Initiative, a government program aimed at assessing and promoting the health of bees and other pollinators, which are critical to life on Earth.

“Anybody who eats food, needs bees," said Noah Wilson-Rich, co-founder, CEO and chief scientific officer of the Boston-based Best Bees company, which contracts with the government to take care of the honeybee hives at the New Hampshire courthouse and at some other federal buildings.

Bees help pollinate the fruits and vegetables that sustain humans, he said. They pollinate hay and alfalfa, which feed cattle that provide the meat we eat. And they promote the health of plants that, through photosynthesis, give us clean air to breathe.

Yet the busy insects that contribute an estimated $25 billion to the U.S. economy annually are under threat from diseases, agricultural chemicals and habitat loss that kill about half of all honeybee hives annually. Without human intervention, including beekeepers creating new hives, the world could experience a bee extinction that would lead to global hunger and economic collapse, Wilson-Rich said.

The pollinator program is part of the federal government’s commitment to promoting sustainability, which includes reducing greenhouse gas emissions and promoting climate resilient infrastructure, said David Johnson, the General Services Administration’s sustainability program manager for New England.

The GSA's program started last year with hives at 11 sites.

Some of those sites are no longer in the program. Hives placed at the National Archives building in Waltham, Massachusetts, last year did not survive the winter.


Since then, other sites were added. Two hives, each home to thousands of bees, were placed on the roof of the Rudman building in March.

The program is collecting data to find out whether the honeybees, which can fly 3 to 5 miles from the roof in their quest for pollen, can help the health of not just the plants on the roof, but also of the flora in the entire area, Johnson said.

“Honeybees are actually very opportunistic,” he said. “They will feed on a lot of different types of plants.”

The program can help identify the plants and landscapes beneficial to pollinators and help the government make more informed decisions about what trees and flowers to plant on building grounds.

Best Bees tests the plant DNA in the honey to get an idea of the plant diversity and health in the area, Wilson-Rich said, and they have found that bees that forage on a more diverse diet seem to have better survival and productivity outcomes.

Other federal facilities with hives include the Centers for Medicare and Medicaid Services headquarters in Baltimore; the federal courthouse in Hammond, Indiana; the Federal Archives Records Center in Chicago; and the Denver Federal Center.

The federal government isn't alone in its efforts to save the bees. The hives placed at federal sites are part of a wider network of about 1,000 hives at home gardens, businesses and institutions nationwide that combined can help determine what's helping the bees, what's hurting them and why.

The GSA’s Pollinator Initiative is also looking to identify ways to keep the bee population healthy and vibrant and model those lessons at other properties — both government and private sector — said Amber Levofsky, the senior program advisor for the GSA's Center for Urban Development.

“The goal of this initiative was really aimed at gathering location-based data at facilities to help update directives and policies to help facilities managers to really target pollinator protection and habitat management regionally,” she said.

And there is one other benefit to the government honeybee program that's already come to fruition: the excess honey that's produced is donated to area food banks.

Mark Pratt, The Associated Press

ICYMI

Canada commits to backing C$3 billion in new Trans Mountain oil pipeline loans

Story by Ismail Shakil and Nia Williams • Yesterday 

A pipe yard servicing government-owned oil pipeline operator Trans Mountain is seen in Kamloops
© Thomson Reuters

OTTAWA (Reuters) -The Canadian government is backing up to C$3 billion ($2.24 billion) in loans for Trans Mountain Corp (TMC), the crown corporation building an over-budget and long-delayed oil pipeline expansion to Canada's Pacific Coast.

The information was disclosed on Export Development Canada's (EDC) website this week, and shows two new loan guarantees signed in late March and early May.

Last year Liberal Prime Minister Justin Trudeau's government, which bought the Trans Mountain pipeline in 2018 to ensure the expansion project got built, provided a C$10 billion loan guarantee to TMC.

The Trans Mountain Expansion will nearly triple the flow of crude from Alberta's oil sands to Burnaby, British Columbia, to 890,000 barrels per day and is intended to boost access to Asian refining markets.

But the project has been beset by regulatory hurdles, environmental opposition and construction delays and is now expected to cost C$30.9 billion, quadrupling from the C$7.4 billion budgeted in 2017.

Finance Ministry spokeswoman Marie-France Faucher said the loan guarantee was "common practice" and did not reflect any new public spending. TMC is paying a fee to the government for the loan guarantee, she said.

"The federal government does not intend to be the long-term owner of the project and will launch a divestment process in due course," Faucher said in a statement, adding the project remains commercially viable.

Many analysts say TMC will not be able to recoup the full cost of construction when it sells the pipeline. Morningstar analyst Stephen Ellis estimated the pipeline will be worth around C$15 billion once it is completed.

"With the project so over-budget, the feds are going to have to sell it at a loss and taxpayers will be on the hook to repay the loans," said Keith Stewart, a senior energy strategist with Greenpeace Canada.

The expanded pipeline is expected to start shipping oil in the first quarter of 2024.

($1 = 1.3372 Canadian dollars)

(Reporting by Ismail Shakil and Nia Williams; Editing by Daniel Wallis and Richard Chang)

WAITING FOR 70 YEARS, WHAT'S 5 MORE

A Sam Altman-backed startup is betting it can crack the code on fusion energy. Here's how it's trying to bring that to the masses by 2028.

Story by bnguyen@insider.com (Britney Nguyen) • June 3, 2023

Helion’s co-founders: Chris Pihl, chief technology officer (left), David Kirtley, chief executive officer (middle), and George Votroubek, director of research (right).
© Helion Energy

Helion Energy wants to produce large amounts of electricity through fusion by 2028.

Microsoft has agreed to buy 50 megawatts of electricity from Helion, which can power 40,000 homes.

Helion's chief business officer talked to Insider about how the company plans to make fusion energy.

The race is on for clean energy company Helion Energy to build a fusion power plant capable of producing enough electricity for tech behemoth Microsoft in five years.

Microsoft in May agreed to buy 50 megawatts of electricity from Helion by 2028, which is enough to power around 40,000 homes.
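
As a rough sanity check on that figure, 50 megawatts spread across 40,000 homes works out to about 1.25 kilowatts of average draw per home, broadly in line with typical U.S. household electricity use. A minimal sketch of the arithmetic follows; the per-home breakdown is an illustration, not a number from Helion or Microsoft.

```python
# Back-of-the-envelope check: 50 MW of contracted capacity shared by ~40,000 homes.
contracted_power_mw = 50    # megawatts Microsoft agreed to buy from Helion
homes_powered = 40_000      # homes the deal is said to be able to supply

kw_per_home = contracted_power_mw * 1_000 / homes_powered
annual_kwh_per_home = kw_per_home * 24 * 365

print(f"Average draw per home: {kw_per_home:.2f} kW")           # 1.25 kW
print(f"Implied annual usage:  {annual_kwh_per_home:,.0f} kWh")  # ~10,950 kWh
```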

Even OpenAI CEO Sam Altman is enthusiastic about the potential of fusion energy and Helion, previously saying he's "super excited about what's going to happen there." Altman invested $375 million into the company in November 2021, leading its Series E round.

Fusion energy could be used to power data centers, which are large consumers of electricity, Scott Krisiloff, Helion Energy's chief business officer, told Insider.

But despite the enthusiasm and promises of fusion energy, it's incredibly hard to produce. In particular, it's difficult to reach the temperature needed to produce electricity from fusion. Helion claims it's the first private fusion company to have built technology capable of reaching that temperature.

"As our population has grown and required more information, and more connectedness to the internet, the energy needs of our population grow as well," Krisiloff said.



Electromagnetic coils that will be used in Helion's seventh fusion energy prototype, Polaris. 

Krisiloff said Helion is currently working on its seventh prototype, Polaris, which is expected to be completed in 2024 and would be the first to produce electricity from fusion.

"Fusion is something that we utilize every day; all of our energy traces back to fusion in some way," Krisiloff said. "But we've never been able to harness it on Earth in a way that we can produce electricity from it."

Fusion energy is created in a 40-foot long tube

Fusion happens when two atoms come together to form a single atom, and it is how the sun and stars make energy.

Helium-3 is produced by fusing deuterium in Helion's plasma accelerator.

For its reaction, Helion takes two materials - deuterium, a form of hydrogen found in water, and helium-3 - and puts them in a 40-foot-long tube. Inside, the materials are compressed until they reach 100 million degrees Celsius.

That's when the conditions are right to produce electricity, Krisiloff said. Helion's sixth and latest prototype, called Trenta, has been able to exceed 100 million degrees Celsius, according to Krisiloff.



A graphic explaining how Helion's fusion energy technology works. 

"That is one cycle of the machine, and then you pulse it over and over in order to get more energy out," Krisiloff said.


A gif depicting two plasmas merging inside Trenta, Helion’s sixth fusion prototype. 

Fusion could be a better source of clean energy than wind and solar

Krisiloff said what makes fusion energy promising is that there are abundant fuel sources for it.

"The fuel comes from water in the form of deuterium, which is found abundantly on Earth," Krisiloff said.

Another benefit of fusion is that it's safer, Krisiloff said, compared with other forms of nuclear energy such as fission, which relies on a chain reaction. Because fusion is not a chain reaction, if something happens to the machine producing it, the reaction simply shuts down immediately.



An electrical engineer preparing for Helion’s next pulsed power test. 

Krisiloff said fusion also produces limited amounts of waste during the process compared to traditional fission, which is when atoms are split apart, creating unstable nuclei, some of which can be radioactive for millions of years, according to the International Atomic Energy Agency.

Fusion can produce energy without emitting carbon, Krisiloff said. It also requires the lowest amount of demand on a power grid over time.



Polaris, a prototype fusion reactor from Helion, which recently announced a deal with Microsoft to provide the tech giant with electricity produced from fusion. 

Compared with other sources of clean energy, fusion is the most energy dense, Krisiloff said, meaning it can happen in a confined space and doesn't need the large amounts of land required by solar and wind power. It's also more reliable than wind and solar power because it wouldn't be as affected by extreme weather events, Krisiloff said.