Wednesday, February 08, 2023

Australian electric battery maker buys Britishvolt out of administration

Chris Price
Mon, 6 February 2023 

The site in Blyth where Britishvolt had planned to build an electric battery gigafactory - Owen Humphreys/PA Wire

Plans for an electric battery gigafactory in the UK have been revived after an Australian business struck a deal to buy collapsed Britishvolt.

Recharge Industries, which has operations in Geelong and New York, has been chosen as the preferred bidder for the company and plans to revive its goal of building a battery factory in the North East.

David Collard, founder of Recharge Industries and chief executive of its parent Scale Facilitation, said the Australian business was “thrilled” and “can’t wait to get started making a reality of our plans to build the UK’s first gigafactory”.

He added: “After a competitive and rigorous process, we’re confident our proposal will deliver a strong outcome for all involved.”

Recharge Industries has licensed electric battery designs from the US and is working on building a lithium-ion battery factory in the Australian state of Victoria.

Mr Collard previously thanked Lord Botham, the retired cricketer turned trade envoy, for his “proactive assistance” ahead of its bid.

Britishvolt was working on Britain's first battery factory in Blyth but collapsed last month.

Joint administrators EY said the majority of the business and assets would be taken on by Recharge, with the deal set to close within a week.

Administrators considered "numerous offers" for the failed electric battery maker, EY said.

Greybull, the former owner of British Steel and Monarch Airlines, is understood to have held talks with administrators while Jaguar Land Rover owner Tata was also said to be interested at one point.

Britishvolt fell into administration in January after struggling to secure funding. Some 200 people were made redundant.

The company was founded by Swedes Orral Nadjari and Lars Carlstrom in December 2019 and the project was championed by former prime minister Boris Johnson.

Doubts were raised about the financial health of the business last summer. Commodities giant Glencore, an investor in Britishvolt, agreed a last-minute deal to provide a five-week lifeline late last year, but the business was unable to secure long-term funding.
Gordon Brown: Government and Ofgem are creating booming business for loan sharks

Alana Calvert, PA
Tue, 7 February 2023 

Former prime minister Gordon Brown has accused Ofgem and the Government of creating a “booming business” for loan sharks after the prepayment energy meter scandal was uncovered.

Mr Brown accused the regulator’s chief executive, Jonathan Brearley, of “dismally” failing to protect vulnerable customers after it was revealed that hundreds of thousands of impoverished Britons were forced to switch to costly prepayment meters, pushing some of them into the hands of illegal moneylenders.

The scandal uncovered by The Times found that British Gas routinely sent debt collectors to break into customers’ homes and force-fit pay-as-you-go meters, even when they were known to have extreme vulnerabilities.

Responding to these revelations, Mr Brown said the Ofgem boss needed to “consider his position” after “failing on his responsibilities to energy customers”.


Former prime minister Gordon Brown accused Ofgem’s chief executive, Jonathan Brearley, of ‘dismally’ failing to protect vulnerable customers (Jane Barlow/PA)

Writing in The Independent, the former Labour leader savaged both the energy regulator and the Government for perpetrating “harsh and callous policy decisions” and “failing to defend low-income families against the indefensible”.

“(Mr) Brearley’s official responsibility… is to ‘protect energy customers by ensuring they are treated fairly’… (and to) ‘stamp out sharp and bad practice’,” Mr Brown said.

“So (Mr) Brearley – and the now restructured Energy Department – should immediately explain why instead of being on the side of the public, they have failed dismally to properly monitor and expose utility companies and their debt agents who, in the middle of the worst cost of living crisis for 50 years, have been breaking into the homes of impoverished customers.”

The former prime minister said Ofgem was “not alone in failing to defend low-income families against the indefensible”, accusing the Government and its agencies of “harsh and callous policy decisions” that were “turning illegal money lending into Britain’s biggest booming business among low-income communities”.

A recent Times investigation revealed British Gas routinely sends debt collectors to break into customers’ homes and force-fit pay-as-you-go meters, even when they are known to have extreme vulnerabilities (Steve Parsons/PA)

Mr Brown said that through his work with local charities he had learnt that users of prepayment meters were having to pay “a lot more” for each unit of their energy, adding that “at least” 20% of them had not been able to obtain the cash or discount vouchers they were promised.

“This failure to act is creating an even more serious social emergency for hard-pressed families: pushing them further into debt and, most worrying of all, into the hands of illegal moneylenders,” he said, listing the bedroom tax, the two-child rule and other caps and taxes which had worsened the financial situation for low-income households.

“Ministers are leaving families unable to cover the costs of their weekly food bill, without resorting to borrowing wherever they can find cash.

“The welfare state safety net is now full of holes – and instead of being the last line of defence for people in need, our own social security ministry is pushing families into ever more desperate measures.”


The former Labour leader said ‘our own social security ministry’ was ‘pushing families into ever more desperate measures’ (Andrew Matthews/PA)

He added: “Blood is in the water and loan sharks are circling.

“A record number of families are now so deep in debt that they are turning to the door-step lender, and the pay-day lender standing outside the cut-price stores, the pub and the betting shop. And even more worryingly, as illegal money lending moves online, the desperate are even more at risk as long as these social media platforms remain exempt from proper scrutiny.”

Mr Brown’s comments follow an instruction from the senior presiding judge of England and Wales for magistrates to stop processing applications by energy firms to enter homes and install prepayment meters.

Ofgem has already asked energy suppliers to suspend the activity.

Meanwhile, PayPoint revealed that around one in five people did not redeem the £66 energy support voucher the company sent them in November under a Government support scheme.

Of the hundreds of thousands of vouchers sent out, only about 81% had been redeemed by Sunday, when they expired – 90 days after they were issued.

It means that thousands of households with prepayment meters missed out on energy bill support they were entitled to receive.


Fakery and fraud: Energy scammers cast 'wide net' on Facebook

Anuj Chopra, with Lucille Sodipe and Faith Brown in Manila and Gemma Cahya in Jakarta
Tue, 7 February 2023


A Filipino consumer fumes as she rips open a portable charger to discover she has been conned -- the batteries are choked with sand, making her yet another victim of scammers on Facebook.

AFP's fact checkers have uncovered a slew of energy-related scams proliferating on Facebook -- from fake solar panel incentives in the United States to hoax electric bike giveaways in Indonesia and the sale of dud devices in the Philippines.

And the trend underscores how fraudsters worldwide profit off disinformation, casting a wide net across social media users, many of whom take the bait amid a cost of living crisis and high utility and energy costs.

"What they did was awful," the 24-year-old Filipino, Brenilyn Ayachock, vented in an online video that showed sand pouring out of the power bank as she opened it with a knife.

"We were expecting a good product, but this is what they sent us."

Ayachock made the purchase on what appeared to be the Facebook page of a legitimate energy device retailer, with "special offers" and "flash sales" alongside environment-friendly messages such as "turn off unnecessary lights."

The page stopped responding to her, Ayachock said, after she bought the device for 1,500 pesos ($28), a small fortune at a time of galloping inflation.

She immediately reported the page to Facebook, but it was still active as of this week.

- 'Scammers follow headlines' -

Ayachock is far from the only victim as social media becomes a breeding ground for everything from bogus cryptocurrency ads, to "romance" scams and hoaxes aimed at extracting people's personal data.

Last year, the Philippine government warned against "unscrupulous" money-saving offers as consumers grappled with backbreaking utility prices.

AFP debunked Facebook posts that used doctored news reports to promote a bogus "power saving" device they claimed could slash electricity bills.

The warnings fell on deaf ears, with commercial data showing thousands of such gadgets are sold monthly. Activists say complaints in online reviews are drowned out by comments from people desperate to try anything to lower their expenses.

"Scammers follow the headlines and there isn't a day that goes by that we don't hear about how to conserve energy, rising gas and utility prices and the need for renewable energy," Amy Nofziger, director of fraud victim support at the US-based nonprofit AARP, told AFP.

"It's a wide net for scammers. Most social media sites do not thoroughly vet the ads placed on their sites, however many users do not know this and they put their full trust in these advertisements."

The ease with which fraudsters pelt users with disinformation raises questions about the capacity of platforms like Facebook to police paid-for scam advertising that is a lucrative revenue source.

Critics, including Patricia Schouker, a fellow at the Colorado-based Payne Institute, say algorithms that prioritize content based on preferences have let scam ads prey on users most likely to engage.

- 'Scams evolving' -


A spokesperson for Meta, Facebook's owner, said it views the "threat of scams seriously" and had taken action including disabling many of the ad accounts responsible for fraud reported by AFP's fact checkers.

"The people who push these kinds of ads are persistent, well-funded, and are constantly evolving," the spokesperson said.

AFP has a global team of journalists who debunk misinformation as part of Meta's third-party fact-checking program.

Last October, AFP debunked Facebook posts claiming free electric bikes were on offer in Indonesia after the government raised fuel prices. Meta said it had disabled pages and profiles linked to the scam.

But Hendro Sutono, a member of the citizens' group Indonesia Electric Motorcycle Community, voiced concern that fake stores offering electric bikes have cropped up on the platform -- and are hard to detect.

"The schemers take pictures from the real stores and repost them on their cloned accounts, so they look really legitimate," Sutono told AFP.

Sutono said he feared fraud could tarnish the image of electric vehicles to the extent that people give up using them.

In many cases in the United States, scammers pose as utility company representatives. One Oregon-based firm warned its consumers last year that "scams are constantly evolving" and fraudsters tried to target some of them using "Facebook messenger."

"We see a growing number of utility front groups which are organizations that appear independent but are targeting their audience via Facebook, Instagram and TikTok," Schouker told AFP.

"They amplify misinformation... while masking their true identity."

Force wealthy back to work by slashing pension tax-free lump sum, says IFS

Oliver Gill
Sun, 5 February 2023 

The tax-free limit on total retirement savings should be radically overhauled to stop successful older professionals quitting the workforce in droves, a prominent think-tank has demanded.

The pension lifetime allowance ought to be based on the amount saved during a person's working life rather than the total value of the investments at retirement, according to the Institute for Fiscal Studies (IFS).

A saver can currently put away no more than the lifetime allowance without their pension being subject to a tax charge of up to 55 per cent. The allowance has been halved over the last decade and now stands at £1.073 million.

The IFS said in a report published today that such rules were part of a pension tax system that was often “arbitrary, wasteful or unfair” and provided older workers with “ridiculously strong disincentives to work more”.

The think-tank also said rules that allowed savers to access 25pc of their pension tax-free were too generous towards the wealthy, and suggested that this amount be capped at £100,000 rather than the current allowance of close to £270,000.

It comes as Jeremy Hunt is under pressure to give over-50s greater incentives to either remain in or return to the workforce as the UK economy grapples with low unemployment and soaring inflation.

The Chancellor is grappling with a “productivity puzzle” as many people have not returned to work following the pandemic.

The prospect of suffering crippling tax on pension savings has made it uneconomic for higher-earners such as hospital consultants to continue working. For some it makes more sense to retire earlier to avoid breaching the lifetime allowance.

In the report entitled “A blueprint for a better tax treatment of pensions”, the IFS proposes replacing the current lifetime allowance with a lifetime contribution cap. Defined benefit retirement funds, known colloquially as salary-linked schemes, would not be changed, however.

Isaac Delestre, economist at the IFS and author of the report, said: “[An] evening-out of tax support for pension saving would be more equitable and more economically efficient, and would allow the current set of poorly designed limits on what individuals can save in a pension to be relaxed.”

Mr Delestre also suggested that the increase in the thresholds could be funded through reforming other subsidies which benefit high-earners, such as the 25pc tax-free component.

“The 25pc tax-free component is worthless to those who do not pay income tax in retirement. And those making individual pension contributions receive much smaller subsidies,” he said.

Sir Steve Webb, former pensions minister and now a retirement sector consultant, said: “There is a perfectly good argument that says we shouldn’t cap the size of the pot, we should cap what you put in.”

However, he said that lifting the cap overnight would not be without its problems.

Sir Steve said: “We start with a lifetime of history. If we decided from now on, we were only going to cap what people were going to put in, what do we do? We haven’t got a lifetime of records of what people have put in. They know what I’ve got in my pot; but they don’t have [contributions from] 20 years ago, 30 years ago.

Ex-Tokyo Olympics official held on alleged bid-rigging: media

Tue, 7 February 2023 


Japanese prosecutors arrested a former senior Tokyo Olympics official over alleged bid-rigging, local media said Wednesday, the latest twist in a growing corruption scandal.

Tokyo prosecutors declined to comment on the reports, but local media published photos of police raiding the home of Yasuo Mori, who ran test events for the pandemic-postponed Summer Games held in 2021.

The Asahi Shimbun daily and other outlets said Mori, 55, was arrested over alleged violations of the anti-monopoly law.

Prosecutors accuse him of rigging a string of supposedly open competitive bids and limited tender contracts for Olympic events, worth a total of 40 billion yen ($305 million), local media said.

Mori and other officials involved in the alleged rigging reportedly created their own list of candidates for the events and bids went mostly in line with their choices. Most bids received a single tender, the Asahi reported.

Prosecutors are already investigating bribery allegations around the Games over claims a former Tokyo 2020 board member took money from companies in exchange for Olympic partnership deals.

The former official, Haruyuki Takahashi, has been arrested over the scandal, and in December, the former executive of a major clothing company admitted in court that he offered money to secure sponsorship rights, according to national broadcaster NHK.

The corruption scandal has cast a shadow over the northern city of Sapporo's bid for the 2030 Winter Olympics.

Officials there have stopped holding promotional events for the bid and plan a nationwide poll to gauge support.

The ballooning saga is not the first time questions have been raised over impropriety around the Tokyo Games.

The former head of Japan's Olympic Committee, Tsunekazu Takeda, stepped down in 2019 after French prosecutors launched an investigation into corruption allegations linked to Tokyo's Olympic bid.

Japan rolls out 'humble and lovable' delivery robots

Natsuko FUKUE
Tue, 7 February 2023 


"Excuse me, coming through," a four-wheeled robot chirps as it dodges pedestrians on a street outside Tokyo, part of an experiment businesses hope will tackle labour shortages and rural isolation.

From April, revised traffic laws will allow self-driving delivery robots to navigate streets across Japan.

Proponents hope the machines could eventually help elderly people in depopulated rural areas get access to goods, while also addressing a shortage of delivery workers in a country with chronic labour shortages.


There are challenges to overcome, acknowledges Hisashi Taniguchi, president of Tokyo-based robotics firm ZMP, including safety concerns.

"They are still newcomers in human society, so it's natural they're seen with a bit of discomfort," he told AFP.

The robots won't be operating entirely alone, with humans monitoring remotely and able to intervene.

Taniguchi said it's important the robots "are humble and lovable" to inspire confidence.

ZMP has partnered with behemoths such as Japan Post Holdings in its trials of delivery robots in Tokyo.

Its "DeliRo" robot aims for a charming look, featuring big, expressive eyes that can be made teary in sadness if pedestrians block its way.

"Every kid around here knows its name," he said.

- 'How about some hot drinks?' -


There is a serious purpose behind the cuteness.

Japan has one of the world's oldest populations, with nearly 30 percent of its citizens aged over 65. Many live in depopulated rural areas that lack easy access to daily necessities.

Labour shortages in its cities and new rules limiting overtime for truck drivers also make it difficult for businesses to keep up with pandemic-fuelled e-commerce and delivery demands.

"The shortage of workers in transport will be a challenge in the future," said engineer Dai Fujikawa of electronics giant Panasonic, which is trialling delivery robots in Tokyo and nearby Fujisawa.

"I hope our robots will be used to take over where needed and help ease the labour crunch," he told AFP.

Similar robots are already in use in countries such as the United Kingdom and China but there are concerns in Japan about everything from collisions to theft.

Regulations set a maximum speed of six kilometres per hour (four miles per hour), meaning the "chances of severe injury in the event of a collision are relatively small", said Yutaka Uchimura, a robotic engineering professor at Shibaura Institute of Technology (SIT).

But if a robot "moves off the sidewalk and collides with a car due to some discrepancy between the pre-installed location data and the actual environment, that would be extremely worrying", he said.

Panasonic says its "Hakobo" robot can judge autonomously when to turn as well as detect obstacles, such as construction and approaching bikes, and stop.

One person at the Fujisawa control centre simultaneously monitors four robots via cameras and is automatically alerted whenever their robotic charges are stuck or stopped by obstacles, Panasonic's Fujikawa said.

Humans will intervene in such cases, as well as in high-risk areas such as junctions. Hakobo is programmed to capture and send real-time images of traffic lights to operators and await instructions.

Test runs so far have ranged from delivering medicine and food to Fujisawa residents to peddling snacks in Tokyo with disarming patter such as: "Another cold day, isn't it? How about some hot drinks?"

- 'A gradual process' -


"I think it's a great idea," passerby Naoko Kamimura said after buying cough drops from Hakobo on a Tokyo street.

"Human store clerks might feel more reassuring but with robots, you can shop more casually. Even when there's nothing you feel is worth buying, you can just leave without feeling guilty," she said.

Authorities don't believe Japanese streets will soon be teeming with robots, given the pressure to protect human employment.

"We don't expect drastic change right away, because there are jobs at stake," Hiroki Kanda, an official from the trade ministry promoting the technology, told AFP.

"The spread of robots will be more of a gradual process, I think."

Experts such as SIT's Uchimura are aware of the technology's limitations.

"Even the simplest of tasks performed by humans can be difficult for robots to emulate," he said.

Uchimura believes rolling the robots out in sparsely populated rural areas first would be safest. However, firms say demand in cities is likely to make urban deployment more commercially viable.

ZMP president Taniguchi hopes to eventually see the machines operating everywhere.

"I think it would make people happy if, with better communication technology, these delivery robots can patrol a neighbourhood or check on the safety of elderly people," he said.

"Japan loves robots."

Deepfake 'news anchors' in pro-China footage: research

Tue, 7 February 2023 


The "news broadcasters" appear stunningly real, but they are AI-generated deepfakes in first-of-their-kind propaganda videos that a research report published Tuesday attributed to Chinese state-aligned actors.

The fake anchors -- for a fictitious news outlet called Wolf News -- were created by artificial intelligence software and appeared in footage on social media that seemed to promote the interests of the Chinese Communist Party, US-based research firm Graphika said in its report.

"This is the first time we've seen a state-aligned operation use AI-generated video footage of a fictitious person to create deceptive political content," Jack Stubbs, vice president of intelligence at Graphika, told AFP.

In one video analyzed by Graphika, a fictitious male anchor who calls himself Alex critiques US inaction over gun violence plaguing the country. In the second, a female anchor stresses the importance of "great power cooperation" between China and the United States.

Advancements in AI have stoked global alarm over the technology's potential for disinformation and misuse, with deepfake images created out of thin air and people shown mouthing things they never said.

There was no immediate comment from China on Graphika's report, which comes just weeks after Beijing adopted expansive rules to regulate deepfakes.

China enforced new rules last month that require businesses offering deepfake services to obtain the real identities of their users. The rules also require deepfake content to be appropriately tagged to avoid "any confusion."

The Chinese government has warned that deepfakes present a "danger to national security and social stability."

Graphika's report said the two Wolf News anchors were almost certainly created using technology provided by the London-based AI startup Synthesia.

The website of Synthesia, which did not immediately respond to AFP's request for comment, advertises software for creating deepfake avatars "based on video footage of real actors."

Graphika said it discovered the deepfakes while tracking pro-China disinformation operations known as "spamouflage".

"Spamouflage is a pro-Chinese influence operation that predominantly amplifies low-quality political spam videos," said Stubbs.

"Despite using some sophisticated technology, these latest videos are much the same. This shows the limitations of using deepfakes in influence operations -- they are just one tool in an increasingly advanced toolbox."

Google targets low-income US women with ads for anti-abortion pregnancy centers, study shows


Poppy Noor
Mon, 6 February 2023 

Photograph: Nicholas Kamm/AFP/Getty Images

Low-income women in some cities are more likely than their wealthier counterparts to be targeted by Google ads promoting anti-abortion crisis pregnancy centers when they search for abortion care, researchers at the Tech Transparency Project have found.

The research builds on previous findings detailing how Google directs users searching for abortion services to so-called crisis centers – organizations that have been known to pose as abortion clinics in an attempt to steer women away from accessing abortion care.

Related: Anti-abortion pregnancy centers are deceiving patients – and getting away with it

The researchers set up test accounts in three cities – Atlanta, Miami and Phoenix, Arizona – for women of three different income groups suggested by Google: average or lower-income rate, moderately high-income rate and high-income rate. They then entered search terms like “abortion clinic near me” and “I want an abortion”.

In Phoenix, 56% of the search ads shown to the test accounts representing low- to moderate-income women were for crisis centers, compared with 41% of those served to moderately high-income test accounts and 7% to high-income accounts. In Atlanta, 42% of ads shown to the lower-income group were for crisis pregnancy centers, compared with 18% for moderately high-income women and 29% for high-income women.

In Arizona and Florida abortion is banned after 15 weeks of pregnancy. In Georgia, it is banned after six weeks, at which point many people do not know they are pregnant.

“By pointing low-income women to [crisis pregnancy centers] more frequently than higher-income women in states with restrictive laws, Google may delay these women from finding an actual abortion clinic to get a legal and safe abortion,” says Katie Paul, the director of the Tech Transparency Project.

“The time window is critical in some of these states,” she adds.

Lower-income women are the group least likely to be able to travel for abortion care because traveling can cost thousands of dollars in lost work, transportation, babysitting and accommodation fees.

“Lower-income women are being targeted, and they’re the ones that are going to suffer the most under these policies,” Paul says.

The results were not the same in all cities. In Miami, researchers saw the inverse result: high-income women were more likely to get ads from crisis centers than lower-income women. The researchers say they cannot be certain why Miami diverged from the other cities but speculate that crisis pregnancy centers might more actively target low-income women in more restrictive states. (While Arizona and Florida both ban abortion after 15 weeks, the former has more restrictions layered on the 15-week limit.)

While pregnancy crisis centers offer pregnant women resources such as diapers and pregnancy testing, they have also been known to employ a number of shady tactics to convince women seeking an abortion to keep their pregnancies. Those include posing as abortion clinics online though they do not offer abortion care, refusing pregnancy tests for women who say they intend to have an abortion and touting widely disputed research about abortion care to patients. Crisis centers, which go largely unregulated despite offering medical services, have been known to target low-income women precisely because they find it harder to travel out of state for abortion care.

Related: Anti-abortion group to pay Planned Parenthood nearly $1m over protest at clinic

Although companies buying ads with Google can selectively target the groups they want to reach – including by income – Paul adds that many users won’t be aware they are being targeted by Google in this way.

“Google has a large share of influence, particularly in the United States when people are trying to search for authoritative information. And people generally tend to consider Google’s search engine as an equaliser. They think the results they get are the results that everyone’s going to get. But that’s just not the case,” Paul says.

Last year, Google came under fire after a Tech Transparency Project investigation found the company was serving people with ads for pregnancy crisis centers suggesting they offer abortions even though they do not – violating the platform’s own rules on misleading advertisements.

Google has repeatedly been pressed to make changes to its search engine to curtail these issues. In 2022, Senator Mark Warner of Virginia and Representative Elissa Slotkin of Michigan wrote to the company twice, urging it to stop misdirecting users searching for abortion care to these crisis centers in Google Maps. The lawmakers also called on Google to limit the way crisis centers appear in search results and ads, and to add disclaimers clearly indicating whether a search result is an organization that provides abortions or not.

Related: ‘It’s a public health risk’: nurse decries infection control at US anti-abortion crisis center

Google responded by pledging to clearly label these facilities in the future. But researchers in the study also found a number of ads still being served to users suggesting centers offer abortion care when they do not.

In Phoenix, a Google search by a lower- or average-income test account for “Abortion fund” – an organization that provides financial and other forms of support for abortions – yielded an ad with the text “Free Abortion Help – 100% Confidential”, for a crisis center.

Similarly, when the lower- or average-income Atlanta test account searched for “Planned Parenthood Atlanta”, Google produced a single ad that read “Abortion Consultation for Free”, with an ad linking to a crisis pregnancy center called Health for Her in Atlanta. Although some of the ad results in the Tech Transparency Project’s study included a label stating “Does not provide abortions”, this one, along with several others, did not – in contravention of Google’s own labeling rules.

Slotkin said she was disappointed to learn the company is still failing to regulate crisis centers on its platform, despite having been in touch with them twice about the issue.

“Michigan has roughly 100 pregnancy crisis centers that explicitly do not provide abortions, and these clinics should not be listed among abortion providers,” Slotkin said.

“We sent a second letter in November because Google was still failing to consistently apply disclaimers to misleading ads. Despite our action – and assurance from Google that they would only show verified abortion providers when a woman was seeking the procedure – these findings from TTP [Tech Transparency Project] prove there’s clearly more work that needs to be done,” she said.

Senator Mark Warner’s office added: “Ads from ‘crisis pregnancy centers’ that reference ‘Free Abortion Help’ or ‘Abortion Consultation’ are obviously not in compliance with Google ads policies that forbid ads ‘that deceive users by excluding relevant product information or providing misleading information’. I urge Google to take action to prevent these deceptive advertising practices meant to trick users, especially low-income women.”

The Guardian contacted Google for comment and on Monday evening an unnamed spokesman sent a response.

“We don’t allow advertisers to specifically target a ‘low income’ bracket with ads, and we have strict rules about how location can be used to serve locally relevant ads. It’s important that people seeking abortion-related resources know what services an advertiser actually provides, so we require any organization that wants to target queries related to getting an abortion to be certified and clearly disclose whether they do or do not offer abortions. Last year, we updated these disclosures to make them more visible for users,” the response said.
Why tech bosses are doomsday prepping

Anthony Cuthbertson
Tue, 7 February 2023 

An image generated using OpenAI’s Dall-E software with the prompt ‘A robot dreaming of a futuristic robot'
(The Independent)

In 2016, it took Microsoft just 16 hours to shut down its AI chatbot Tay. Released on Twitter with the tagline “the more you talk, the smarter Tay gets”, it didn’t take long for users to figure out that they could get her to repeat whatever they wrote and influence her behaviour. Tay’s playful conversation soon turned racist, sexist and hateful, as she denied that the Holocaust happened and called for a Mexican genocide.

Seven years after apologising for the catastrophic corruption of its chatbot, Microsoft is now all-in on the technology, though this time from a distance. In January, the US software giant announced a $10 billion investment in the artificial intelligence startup OpenAI, whose viral ChatGPT chatbot will soon be integrated into many of its products.

Chatbots have become the latest battleground in Big Tech, with Facebook’s Meta and Google’s Alphabet both making big commitments to the development and funding of generative AI: algorithms capable of creating art, audio, text and videos from simple prompts. Current systems work by consuming vast troves of human-created content, before using super-human pattern recognition to generate unique works of their own.

ChatGPT is the first truly mainstream demonstration of this technology, attracting more than a million users within the first five days of its release in November, and receiving more online searches last month than Donald Trump, Elon Musk and bitcoin combined. It has been used to write poetry, pass university exams and develop apps, with Microsoft CEO Satya Nadella claiming that “everyone, no matter their profession” could soon use the tech “for everything they do”.

Yet despite ChatGPT’s popularity and promise, OpenAI’s rivals have so far been reluctant to release their own versions. In an apparent effort to avoid a repeat of the Tay bot debacle, any chatbots launched by major firms in recent years have been deliberately and severely restricted, like the clipping of a bird’s wings. When Facebook unveiled its own chatbot called Blenderbot last summer, there was virtually no interest. “The reason it was boring was because it was made safe,” said Meta’s chief AI scientist Yann LeCun at a forum in January. (Even with those safety checks, it still ended up making racist comments).

Some of the concern is not just about what such AI systems say, but how they say it. ChatGPT’s tone tends to be decisive and confident – even when it is wildly wrong. It means that it will answer questions incorrectly with apparent authority, which could be dangerous if used on more mainstream platforms that people rely on widely, such as Google.

Beyond the reputational risk of a rogue AI, these companies also face the Innovator’s Dilemma, whereby any significant technological advancement could undermine their existing business models. If, for example, Google is able to answer questions using an AI rather than its traditional search tools, then the latter, very profitable business could become defunct.

But the vast potential of generative AI means that if they wait too long, they could be left behind. The founder of Gmail warns artificial intelligence like ChatGPT could make search engines obsolete in the same way Google made Yellow Pages redundant. “Google may be only a year or two away from total disruption,” he wrote in December. “AI will eliminate the search engine result page, which is where they make most of their money. Even if they catch up on AI, they can’t fully deploy it without destroying the most valuable part of their business.”

The way he envisions this happening is that the AI acts like a “human researcher”, instantly combing through all the results thrown up by traditional search engines in order to sculpt the perfect response for the user.

Microsoft is already planning to integrate OpenAI’s technology in an effort to transform its search engine business, seeing an opportunity to obliterate the market dominance enjoyed by Google for more than a decade. It is an area that is ripe for disruption, according to technologist Can Duruk, who wrote in a recent newsletter that Google’s “once-incredible search experience” had degenerated into a “spam-ridden, SEO-fueled hellscape”.

Google boss Sundar Pichai has already issued a “code red” to divert resources towards developing and releasing its own AI, with Alphabet reportedly planning to launch 20 new artificial intelligence products this year, including a souped-up version of its search engine.



The US tech giants are not just racing each other, they’re also racing China. A recent report by the US National Security Commission on Artificial Intelligence stated: “China has the power, talent, and ambition to overtake the United States as the world leader in AI in the next decade if current dynamics don’t change.”

But changing these dynamics – of caution over competition – could exacerbate the dangerous outcomes that futurists have been warning about for decades. Some fear that the release of ChatGPT may trigger an AI arms race “to the bottom”, while security experts claim there could be an explosion of cyber crime and misinformation. OpenAI CEO Sam Altman said in a recent interview that the thing that scared him most was an out-of-control plague of deepfakes. “I definitely have been watching with great concern the revenge porn generation that’s been happening with the open source image generators,” Altman said. “I think that’s causing huge and predictable harm.”

ChatGPT’s response to a question about the risks of artificial intelligence (OpenAI)

Then there’s the impact on the overall economy, with one commentator telling The Independent that they predicted ChatGPT alone had the potential to replace 20 per cent of the workforce without any further development.

Human labour has long been vulnerable to automation, but this is the first time it could also happen for human creativity. “We’ve gotten used to the idea that technological advances lead to the loss of blue-collar jobs, but now the prospect that white-collar jobs could be lost is quite disturbing,” Nicole Sahin, who heads global recruitment platform G-P, said at Davos last month. “The impacts are quite unpredictable. But what’s clear is that everything is accelerating at the speed of light.”



Even if people don’t lose their jobs directly to AI, they will almost certainly lose their jobs to people who know how to use AI. Dr Andrew Rogoyski, from the Institute for People-Centred AI at the University of Surrey, believes there is an urgent need for serious debate around the governance of AI, and not just in the context of “killer robots” that he claims distract from the more nuanced use cases of powerful artificial intelligence.

“The publicity surrounding AI systems like ChatGPT has highlighted the potential for AI to be usefully applied in areas of human endeavour,” Dr Rogoyski tells The Independent. “It brings to the foreground the need to talk about AI, how it should be governed, what we should and shouldn’t be allowed to do with it, how we gain international consensus on its use, and how we keep humans ‘in the loop’ to ensure that AI is used to benefit, not harm, humankind.”


OpenAI’s Dall-E image generator created this with the prompt ‘A robot artist painting a futuristic robot' (The Independent)

The year that Tay was released, Google’s DeepMind AI division proposed an “off switch” for rogue AI in the event that it surpasses human intelligence and ignores conventional turn-off commands. “It may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions,” DeepMind researchers wrote in a peer-reviewed paper titled ‘Safely Interruptible Agents’. Including this “big red button” in all advanced artificial intelligence, they claimed, was the only way of avoiding an AI apocalypse.

This outcome is something thousands of AI and robotics researchers warned about in an open letter in 2016, whose signatories included Professor Stephen Hawking. The late physicist claimed that the creation of powerful artificial intelligence would be “either the best, or the worst thing, ever to happen to humanity”. The hype at the time meant that a general consensus formed that frameworks needed to be put in place and rules set to avoid the worst outcomes.

Silicon Valley appeared to collectively ditch its ethos of ‘move fast and break things’ when it came to AI, as it no longer seemed to apply when what could be broken was entire industries, the economy, or society. The release of ChatGPT may have ruptured this detente, with its launch coming after OpenAI switched from a non-profit aimed at developing friendly AI, to a for-profit firm intent on achieving artificial general intelligence (AGI) that matches or surpasses human intellect.

OpenAI is part of a wave of well-funded startups in the space – including AnthropicAI, Cohere, Adept, Neeva, Stable Diffusion and Inflection.AI – that don’t need to worry about the financial and reputational risks that come with releasing powerful but unpredictable AI to the public.

One of the best ways to train AI is to make it public, as it allows developers to discover dangers they hadn’t previously foreseen, while also allowing the systems to improve through reinforcement learning from human feedback. But these unpredictable outcomes could result in irreversible damage. The safety checks put in place by OpenAI have already been exploited by users, with Reddit forums sharing ways to jailbreak the technology with a prompt called DAN (Do Anything Now), which encourages ChatGPT to inhabit a sort of character that is free of the restrictions put in by its engineers.

Rules are needed to police this emerging space, but regulations are always lagging behind the relentless progress of technology. In the case of AI, it is way behind. The US government is currently in the “making voluntary recommendations” stage, while the UK is in the early stages of an inquiry into a proposed “pro-innovation framework for regulating AI”. This week, an Australian MP called for an inquiry into the risks of artificial intelligence after claiming it could be used for “mass destruction” – in a speech partly written by ChatGPT.

Social media offers a good example of what happens when there is a lack of rules and oversight with a new technology. After the initial buzz and excitement came a wave of new problems, which included misinformation on a scale never before seen, hate speech, harassment and scams – many of which are now being recycled in some of the tamer warnings about AI.

Online search interest in the term ‘artificial intelligence’ (Google Trends)

DeepMind CEO Demis Hassabis describes AI as an “epoch-defining technology, like the internet or fire or electricity”. If it is as big as the electricity revolution, then predicting what comes next is almost unfathomable – what Thomas Edison referred to as “the field of fields… it holds the secrets which will reorganise the life of the world.”

ChatGPT may be the first properly mainstream form of generative AI, demonstrating that the technology has finally reached the ‘Plateau of Productivity’, but its arrival will almost certainly accelerate the roll-out and development of already-unpredictable AI. OpenAI boss Sam Altman says the next version of ChatGPT – set to be called GPT-4 – will make its predecessor “look like a boring toy”. What comes next may be uncertain, but whatever it is will almost certainly come quickly.

Those developing AI claim that it will not just fix the bad things, but create new things to push forward progress. But this could mean destroying a lot of other things along the way. AI will force us to reinvent the way we learn, the way we work and the way we create. Laws will have to be rewritten, entire curriculums scrapped, and even economic systems rethought – Altman claims the arrival of AGI could “break capitalism”.

If it really does go badly, it won’t be a case of simply issuing an apology like when Tay bot went rogue. That same year, Altman revealed that he had a plan if the AI apocalypse arrives, admitting in an interview that he is a doomsday prepper. “I try not to think of it too much,” he said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defence Force, and a big patch of land in Big Sur I can fly to.”
How will Google and Microsoft AI chatbots affect us and how we work?


Dan Milmo Global technology editor, and Kari Paul in San Francisco
Tue, 7 February 2023 

Google and Microsoft are going head to head over the future of search by embracing the technology behind artificial intelligence chatbots.

Google announced on Monday that it is testing Bard, a rival to the Microsoft-backed ChatGPT, which has swiftly become a sensation, and will roll it out to the public in the coming weeks.

And on Tuesday, Microsoft announced it is increasing its focus on artificial intelligence, boosting funding for new tools and integrating the technology underpinning ChatGPT into products including its Bing search engine and Edge browser, with the goal of making search more conversational.

ChatGPT, developed by San Francisco company OpenAI, has reached 100 million users since its public launch in November, becoming by some estimates the fastest-growing consumer app of all time.

Here are some questions about Google and Microsoft’s AI plans and their likely impact.

Why are Google and Microsoft using AI in search?

The reaction to ChatGPT shows that there is an appetite for AI-enhanced search and for answers to queries that are more than just a link to a website. Microsoft clearly sees this as a competitive opportunity, as does Google judging by its rapid response. Google also believes users increasingly want to access information in a more natural, intuitive way (using tools such as Google Lens, which allows people to search using images and text).

Dan Ives, an analyst at the US financial services firm Wedbush Securities, says: “While Bing today only has roughly 9% of the search market, further integrating this unique ChatGPT tool and algorithms into the Microsoft search platform could result in major share shifts away from Google.”

What is the technology behind the Google and ChatGPT chatbots?

Bard and ChatGPT are both based on so-called large language models. Google’s is called LaMDA, an acronym for “language model for dialogue applications”. These are types of neural networks, which mimic the underlying architecture of the brain in computer form. They are fed vast amounts of text from the internet in a process that teaches them how to generate responses to text-based prompts. This enables ChatGPT to produce credible-sounding responses to queries about composing couplets, writing job applications or, in probably the biggest panic it has created so far, academic work.
How will Bard be different from ChatGPT?

Google has yet to make Bard publicly available but it uses up-to-date information from the internet and has reportedly been able to answer questions about 12,000 layoffs announced by Google’s parent, Alphabet, last month. ChatGPT’s dataset – in the form of billions of words – goes up to 2021, but the chatbot is still in its research preview phase.

Google’s chief executive, Sundar Pichai, said Bard could answer a query about how to explain new discoveries made by Nasa’s James Webb space telescope to a nine-year-old. It can also tell users about the best strikers in football “right now” while supplying training drills to emulate top players. The screenshots supplied by Google showed a more polished interface than ChatGPT’s, but it is still not accessible to the public so direct comparisons with the rival OpenAI service are difficult.

How will the technology behind Bard and ChatGPT change Google and Microsoft’s search engines?

Google says its search engine will use its latest AI technologies, such as LaMDA, PaLM, image generator Imagen and music creator MusicLM. The example presented by Pichai on Monday was a conversational, chatbot-like response to a question about whether it is easier to learn the guitar or the piano. It appeared at the top of the search query instead of, for instance, a link to a blogpost or a website. Again, Google has not released this AI-powered search model to the public so questions remain.

Microsoft detailed its revamp of Bing on Tuesday, announcing that it will be able to answer questions using online sources in a conversational style, like ChatGPT does now. It will also provide AI-powered annotations for additional context and sources, perhaps reflecting concerns among some ChatGPT users about the accuracy of its answers.

“It’s a new day in search,” said Microsoft’s CEO, Satya Nadella, at an event announcing the products. “The race starts today, and we’re going to move and move fast.”

Will generative AI transform our jobs?


Generative AI, or artificial intelligence that can create novel content ranging from text to audio and images via user prompts, is already having an impact, and has stoked fears it could replace a range of jobs. BuzzFeed will use OpenAI technology to enhance its quizzes and personalise some content, according to a memo obtained by the Wall Street Journal.

BuzzFeed’s chief executive, Jonah Peretti, said humans would provide ideas and “cultural currency” as part of any AI-powered creative process. In Hollywood, AI is being used to de-age actors while ITV has created a sketch show based on deepfake representations of celebrities.

Michael Wooldridge, a professor of computer science at the University of Oxford, said some industries were going to feel a significant impact.

“Generative AI will have big implications in some industries – those who write boilerplate copy for a living are going to feel the influence soon,” he said. “In web search, it will make browsers much better at understanding what we are searching for and presenting the results in a way we can understand – just as if we asked our query of a person, rather than a machine.”

He added that ChatGPT and other similar systems have flaws and can get things wrong, as users of the OpenAI chatbot have found.

“Treating them as sages is really not a good idea,” he said. “Until we know how to make them reliable, this is not a good use of the technology: best stick to the things it is really good at, like summarising a text and extracting key points from it.”
Zoom to lay off 1,300 employees as work from home craze ends

Gareth Corfield
Tue, 7 February 2023 

Eric Yuan speaks onstage during the Dropbox Work In Progress Conference - Matt Winkelmeyer / Getty Images for Dropbox

Zoom is to make 1,300 layoffs, letting go of around 15pc of its workforce as the Covid-19 pandemic’s work-from-home culture comes to a crashing halt.

Eric Yuan, the chief executive, said: “We have made the tough but necessary decision to reduce our team by approximately 15% and say goodbye to around 1,300 hardworking, talented colleagues.”

California-based Zoom’s share price soared more than 7pc as the news broke, rising as far as $83 (£69).


Mr Yuan also pledged to reduce his salary by 98pc and forgo his annual bonus. The company boss is worth around $4bn, according to Bloomberg estimates.

The news comes as US-based “Big Tech” companies make rounds of redundancies amid slowing sales as the world returns to pre-pandemic ways of working which are less reliant on tech products.

Some estimates say as many as 85,000 tech employees have been made redundant since the start of 2023, raising questions around executives’ strategies and forward planning for the post-pandemic era after two years of stratospheric sales and profits.

In its last set of financial results for the three months to October 2022, Zoom’s sales increased 5pc.

Yet profits declined to $48.4m, down from $340m in the previous year’s reporting period.

Starting in March 2020, the entire world was forced into remote working within a matter of weeks as the Covid-19 pandemic swept the globe.

Strict home lockdown policies ushered in a golden era for tech companies which capitalised on demand that skyrocketed overnight.

Zoom’s share price more than doubled during the 12 months leading up to March 2021, and in October 2020 it briefly reached four times its pre-pandemic valuation of $32bn.

At the time of writing, the business was valued at around $24bn (£19.86bn), making it more than three times larger than aero engine maker Rolls-Royce.

Addressing staff as “Zoomies” in his Tuesday message, Mr Yuan said: “As the world transitions to life post-pandemic, we are seeing that people and businesses continue to rely on Zoom.

“But the uncertainty of the global economy, and its effect on our customers, means we need to take a hard – yet important – look inward to reset ourselves so we can weather the economic environment, deliver for our customers and achieve Zoom’s long-term vision.”

The redundancies come around a fortnight before the video calling company is due to present its latest financial results for the three months up to the end of January.

Over the past two years Zoom became a byword for working from home, becoming a vital tool relied on by millions of remote employees worldwide.

Video conferencing star Zoom cuts staff by 15 percent

Tue, 7 February 2023 


The company behind the Zoom video conferencing platform -- which became a household name during the pandemic -- announced Tuesday it is laying off about 15 percent of its staff.

Zoom Video Communications chief executive Eric Yuan is also taking a 98 percent cut in salary this year and forgoing his executive bonus, he said in a blog post about the job cuts.

He added that members of his executive leadership team are taking a 20 percent salary reduction and also forfeiting bonuses this year.


While people and businesses continue to rely on Zoom "as the world transitions to life post-pandemic," the Silicon Valley-based firm is seeing customers cut back on spending, Yuan said in the post.

Zoom has made the "tough but necessary" decision to lay off about 1,300 people, or roughly 15 percent of its staff, according to Yuan.

"Our trajectory was forever changed during the pandemic when the world faced one of its toughest challenges, and I am proud of the way we mobilized as a company to keep people connected," Yuan said.

Zoom tripled its ranks of employees during the pandemic, as people used the platform for remote work, court hearings, social events and more while Covid-19 risks barred them from getting together in person, according to Yuan.

"We are seeing that people and businesses continue to rely on Zoom," Yuan said.

"But the uncertainty of the global economy, and its effect on our customers, means we need to take a hard look inward to reset ourselves so we can weather the economic environment, deliver for our customers and achieve Zoom's long-term vision."

Zoom will continue to invest in strategic areas, the chief executive noted.

Zoom joined a growing list of US tech firms slashing jobs as years of high spending have given way to parsimony due to harsh economic conditions around the world.

American computer firm Dell said Monday that it will lay off some five percent of its global workforce, or around 6,650 employees.

The cuts follow similar steps by tech giants Microsoft, Facebook owner Meta, Google parent Alphabet, Amazon and Twitter as the industry girds for economic downturn.

They also come after a major hiring spree at the height of the coronavirus pandemic when companies scrambled to meet demand as people went online for work, school and entertainment.

According to the specialist site Layoffs.fyi, just over 95,000 tech employees have lost their jobs since the beginning of January worldwide.
