Friday, December 23, 2022

Google is ‘all hands on deck’ to develop AI products to take on ChatGPT

Story by MobileSyrup • Thursday

OpenAI’s ChatGPT is a machine-learning-based AI chatbot that generates human-like responses based on the input it receives. The chatbot has taken the world by storm, having crossed one million users earlier this month.



The ChatGPT storm has been noticed by Google, and it is reportedly taking an ‘all hands on deck’ approach to respond.

As reported by The New York Times, Google has declared a “code red,” and has tasked several departments to “respond to the threat that ChatGPT poses.”

“From now until a major conference expected to be hosted by Google in May, teams within Google’s research, Trust and Safety, and other departments have been reassigned to help develop and release new A.I. prototypes and products.”


Google will most likely detail its AI advances at its annual I/O conference, where it shows off progress on LaMDA, its own AI chatbot.

Alphabet CEO Sundar Pichai hinted the company has “a lot” planned in the space in 2023 but added that “This is an area where we need to be bold and responsible, so we have to balance that,” according to a recent CNBC report.

Earlier this year, Google suspended one of its engineers, Blake Lemoine, after he claimed the company’s ‘LaMDA’ chatbot system had achieved sentience.

Image credit: Google

Google's management has reportedly issued a 'code red' amid the rising popularity of the ChatGPT AI

Aaron Mok
Wed, December 21, 2022 

Google CEO Sundar Pichai told some teams to switch gears and work on developing artificial-intelligence products, The New York Times reported.
Brandon Wade/Reuters

Google has issued a "code red" over the rise of the AI bot ChatGPT, The New York Times reported.

CEO Sundar Pichai redirected some teams to focus on building out AI products, the report said.

The move comes amid growing debate over whether ChatGPT could one day replace Google's search engine.


Google's management has issued a "code red" amid the launch of ChatGPT — a buzzy conversational-artificial-intelligence chatbot created by OpenAI — as it's sparked concerns over the future of Google's search engine, The New York Times reported Wednesday.

Sundar Pichai, the CEO of Google and its parent company, Alphabet, has participated in several meetings around Google's AI strategy and directed numerous groups in the company to refocus their efforts on addressing the threat that ChatGPT poses to its search-engine business, according to an internal memo and audio recording reviewed by The Times.

In particular, teams in Google's research, trust, and safety division, among other departments, have been directed to switch gears to assist in the development and launch of AI prototypes and products, The Times reported. Some employees have been tasked with building AI products that generate art and graphics, similar to OpenAI's DALL-E, which is used by millions of people, according to The Times.

A Google spokesperson did not immediately respond to a request for comment.

Google's move to build out its AI-product portfolio comes as Google employees and experts alike debate whether ChatGPT — run by Sam Altman, a former Y Combinator president — has the potential to replace the search engine and, in turn, hurt Google's ad-revenue business model.

Sridhar Ramaswamy, who oversaw Google's ad team between 2013 and 2018, said ChatGPT could prevent users from clicking on Google links with ads, which generated $208 billion — 81% of Alphabet's overall revenue — in 2021, Insider reported.

ChatGPT, which amassed over 1 million users five days after its public launch in November, can generate singular answers to queries in a conversational, humanlike way by collecting information from millions of websites. Users have asked the chatbot to write a college essay, provide coding advice, and even serve as a therapist.

But some have been quick to say the bot is often riddled with errors. ChatGPT is unable to fact-check what it says and can't distinguish between a verified fact and misinformation, AI experts told Insider. It can also make up answers, a phenomenon that AI researchers call "hallucinations."

The bot is also capable of generating racist and sexist responses, Bloomberg reported.

Its high margin of error and vulnerability to toxicity are some of the reasons Google is hesitant to release its AI chatbot LaMDA — short for Language Model for Dialogue Applications — to the public, The Times reported. A recent CNBC report said Google execs were reluctant to release it widely in its current state, citing concerns about "reputational risk."

Chatbots are "not something that people can use reliably on a daily basis," Zoubin Ghahramani, who leads Google's AI lab, Google Brain, told The Times before ChatGPT was released.

Instead, Google may focus on improving its search engine over time rather than taking it down, experts told The Times.

As Google reportedly works full steam ahead on new AI products, we might get an early look at them at Google's annual developer conference, I/O, which is expected to take place in May.


AI can now write like a human. Some teachers are worried.

Mike Bebernes
·Senior Editor
Wed, December 21, 2022 
“The 360” shows you diverse perspectives on the day’s top stories and debates

What’s happening

Artificial intelligence has advanced at an extraordinary pace over the past few years. Today, these incredibly complex algorithms are capable of creating award-winning art, penning scripts that can be turned into real films and — in the latest step that has dazzled people in the tech and media industries — mimicking writing at a level so convincing that it’s impossible to tell whether the words were put together by a human or a machine.

A few weeks ago, the research company OpenAI released ChatGPT, a language model that can construct remarkably well-structured arguments based on simple prompts provided by a user. The system — which uses a massive repository of online text to predict what words should come next — is able to create new stories in the style of famous writers, write news articles about itself and produce essays that could easily receive a passing grade in most English classes.

That last use has raised concern among academics, who worry about the implications of an easily accessible platform that, in a matter of seconds, can put together prose on par with — if not better than — the writing of a typical student.

Cheating in school is not new, but ChatGPT and other language models are categorically different from the hacks students have used to cut corners in the past. The writing these language models produce is completely original, meaning that it can’t be detected by even the most sophisticated plagiarism software. The AI also goes beyond just providing students with information they should be finding themselves. It organizes that information into a complete narrative.
Why there’s debate

Some educators see ChatGPT as a sign that AI will soon lead to the demise of the academic essay, a crucial tool used in schools at every level. They argue that it will simply be impossible to root out cheating, since there will be no tools to determine whether writing is authentic or machine-made. But beyond potential academic integrity issues, some teachers worry that the true value of learning to write — like analysis, critical thinking, creativity and the ability to structure an argument — will be lost when AI can do all those complex things in a matter of seconds.

Others say these concerns are overblown. They make the case that, as impressive as AI writing is, its prose is too rigid and formulaic to pass as original work from most students — especially those in lower grades. ChatGPT also has no ability to tell truth from fiction and often fabricates information to fill in blanks in its writing, which could make it easy to spot during grading.

Some even celebrate advances in AI, viewing them as an opportunity to improve the way we teach children to write and make language more accessible. They believe AI text generators could be a major tool to help students who struggle with writing, either due to disabilities or because English isn’t their first language, to be judged on the same terms as their peers. Others say AI will force schools to think more creatively about how they teach writing and may inspire them to abandon a curriculum that emphasizes structure over process and creativity.
What’s next

When asked whether AI will kill the academic essay, ChatGPT expressed no concern. It wrote: “While AI technology has made great strides in natural language processing and can assist with tasks such as proofreading and grammar checking, it is not currently capable of fully replicating the critical thinking and analysis that is a key part of academic writing.”

With the technology just emerging, it may be several years before it becomes clear whether that contention will prove correct.

Perspectives

AI could kill the academic essay for good


“The majority of students do not see writing as a worthwhile skill to cultivate. … They have no interest in exploring nuance in tone and rhythm. … Which is why I wonder if this may be the end of using writing as a benchmark for aptitude and intelligence.” — Daniel Herman, Atlantic

AI can’t replace the most important parts of writing education


“Contrary to popular belief, we writing teachers believe more in the process of writing than the product. If we have done our jobs well and students have learned, reading that final draft during this time of year is often a formality. The process tells us the product will be amazing.” — Matthew Boedy, Atlanta Journal-Constitution

AI will create a cheating crisis

“An unexpected insidious academic threat is on the scene: a revolution in artificial intelligence has created powerful new automatic writing tools. These are machines optimised for cheating on school and university papers, a potential siren song for students that is difficult, if not outright impossible, to catch.” — Rob Reich, Guardian

Any competent teacher can easily spot AI-generated writing


“Many students would be hard-pressed to read with comprehension AI-generated essays, let alone pass them off as their own work.” — Robert Pondiscio, American Enterprise Institute

AI can make writing more accessible to everyone


“I think there's a lot of potential for helping people express themselves in ways that they hadn't necessarily thought about. This could be particularly useful for students who speak English as a second language, or for students who aren't used to the academic writing style.” — Leah Henrickson, digital media researcher, to Business Insider

Something incredibly important is lost when people don’t learn to write the hard way

“We lose the journey of learning. We might know more things but we never learned how we got there. We’ve said forever that the process is the best part and we know that. The satisfaction is the best part. That might be the thing that’s nixed from all of this. … I don’t know what a person is like if they’ve never had to struggle through learning. I don’t know the behavioral implications of that.” — Peter Laffin, writing instructor, to Vice

AI can enhance creativity by helping students sort through the routine parts of writing


“Keep in mind, language models are just math and massive processing power, without any real cognition or meaning behind their text generation. Human creativity is far more powerful, and who knows what can be unlocked if such creativity is augmented with AI?” — Marc Watkins, Inside Higher Ed

Educators may not be able to rely on essays to evaluate students much longer

“AI is here to stay whether we like it or not. Provide unscrupulous students the ability to use these shortcuts without much capacity for the educator to detect them, combined with other crutches like outright plagiarism, and companies that sell papers, homework, and test answers, and it’s a recipe for—well, not disaster, but the further degradation of a type of assignment that has been around for centuries.” — Aki Peritz, Slate

AI won’t kill anything we’ll miss

“By privileging surface-level correctness and allowing that to stand in for writing proficiency, we've denied a generation (or two) of students the chance to develop their writing and critical thinking skills. … Now we have GPT3, which, in seconds, can generate surface-level correct prose on just about any prompt. That this seems like it could substitute for what students produce in school is mainly a comment on what we value when we assign and assess writing in school contexts.” — John Warner, author of Why They Can’t Write

Educators shouldn’t overreact, but they need to have a plan

“Whenever there’s a new technology, there’s a panic around it. It’s the responsibility of academics to have a healthy amount of distrust — but I don’t feel like this is an insurmountable challenge.” — Sandra Wachter, technology researcher, to Nature

Quora launches Poe, a way to talk to AI chatbots like ChatGPT



Kyle Wiggers
Wed, December 21, 2022 

Signaling its interest in text-generating AI systems like ChatGPT, Quora this week launched a platform called Poe that lets people ask questions, get instant answers and have a back-and-forth dialogue with AI chatbots.

Short for "Platform for Open Exploration," Poe -- which is invite-only and currently only available on iOS -- is "designed to be a place where people can easily interact with a number of different AI agents," a Quora spokesperson told TechCrunch via text message.

"We have learned a lot about building consumer internet products over the last 12 years building and operating Quora. And we are specifically experienced in serving people who are looking for knowledge," the spokesperson said. "We believe much of what we’ve learned can be applied to this new domain where people are interfacing with large language models."

Poe, then, isn't an attempt to build a ChatGPT-like AI model from scratch. ChatGPT -- which has an aptitude for answering questions on topics ranging from poetry to coding -- has been the subject of controversy for its ability to sometimes give answers that sound convincing but aren't factually true. Earlier this month, Q&A coding site Stack Overflow temporarily banned users from sharing content generated by ChatGPT, saying the AI made it too easy for users to generate responses and flood the site with dubious answers.

Quora might've found itself in hot water if, for instance, it trained a chatbot on its platform's vast collection of crowdsourced questions and answers. Users might've taken issue with their content being used that way -- particularly given that some AI systems have been shown to regurgitate parts of the data on which they were trained (e.g. code). Some parties have protested against generative art systems like Stable Diffusion and DALL-E 2 and code-generating systems such as GitHub's Copilot, which they see as stealing and profiting from their work.

To wit, Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by allowing Copilot to regurgitate sections of licensed code without providing credit. And on the art community portal ArtStation, which earlier this year began allowing AI-generated art on its platform, members began widely protesting by placing "No AI Art" images in their portfolios.


Quora Poe, a way to talk to chatbots like ChatGPT

Image Credits: Quora

At launch, Poe provides access to several text-generating AI models, including ChatGPT. (OpenAI doesn't presently offer a public API for ChatGPT; the Quora spokesperson refused to say whether Quora has a partnership with OpenAI for Poe or another form of early access.) Poe's like a text messaging app, but for AI models -- users can chat with the models separately. Within the chat interface, Poe provides a range of different suggestions for conversation topics and use cases, like "writing help," "cooking," "problem solving" and "nature."

Poe ships with only a handful of models at launch, but Quora plans to provide a way for model providers -- e.g. companies -- to submit their models for inclusion in the near future.

"We think this will be a fun way for people to interact with and explore different language models. Poe is designed to be the best way for someone to get an instant answer to any question they have, using natural conversation," the spokesperson said. "There is an incredible amount of research and development going into advancing the capabilities of these models, but in order to bring all that value to people around the world, there is a need for good interfaces that are easy to use. We hope we can provide that interface so that as all of this development happens over the years ahead, everyone around the world can share as much as possible in the benefits."

It's pretty well-established that AI chatbots, including ChatGPT, can generate biased, racist and otherwise toxic content -- not to mention malicious code. Quora's not taking steps itself to combat this, instead relying on the providers of the models in Poe to moderate and filter the content themselves.

"The model providers have put in a lot of effort to prevent the bots from generating unsafe responses," the spokesperson said.

The spokesperson was quite clear that Poe isn't a part of Quora for now -- nor will it necessarily be in the future. Quora sees it as a separate, independent project, much like Google's AI Test Kitchen, that it plans to iterate on and refine over time.

When asked about the business motivations behind Poe, the spokesperson demurred, saying that it's early days. But it isn't tough to conceive how Quora, which makes most of its money through paywalls and advertising, might build premium features into Poe if it grows.

For now, though, Quora says it's focused on working out scalability, getting feedback from beta testers and addressing issues that come up.

"The whole field is moving very rapidly now and we’re more interested in figuring out what problems we can solve for people with Poe," the spokesperson said.


CRIMINAL CAPITALI$M ONLINE
The FBI Says You Need to Use an Ad Blocker

Thomas Germain
Thu, December 22, 2022 



I want to believe.

The Federal Bureau of Investigation took a break from hunting serial killers this week to post a public service announcement: if you’re not using an ad blocker, what are you doing?

According to the Internet Crime Complaint Center, criminals are using ads in search results on engines like Google and Bing to impersonate brands. These ads send unsuspecting users off to phony websites that look identical to the pages people are actually searching for, where they are then subjected to ransomware or phishing attacks. The Bureau says an ad blocker can help.


The government doesn’t recommend any particular ad blocker, but I just tested uBlock Origin with a few of my favorite Google searches and didn’t see a single ad in the results. An ad blocker is also a great solution if you find yourself in the comments section of this article with a bizarre impulse to complain about the ads on Gizmodo, which is a beautiful, perfect website.

When I write about online scams, people often tell me they’re too smart to fall for them. This, unfortunately, is the kind of attitude that gets you scammed. Protecting yourself takes constant vigilance, which is undermined if you think you aren’t vulnerable. All it takes is ten seconds of oversight or even a single click to get in trouble.

To that end, the FBI recommends fail-safes like checking the URL of the page you’re visiting, or typing in the web address directly instead of searching for it. The FBI says these scams are especially prevalent in the areas of finance and cryptocurrency. When you’re doing anything involving your money online, that’s a time to be extra careful.

It’s also worth paying attention to when you’re clicking on an ad versus an actual search result. This is an increasingly annoying prospect, as services like Google work hard to make their ads look like regular search results, and often fill up the entire first screen of results with ads, making you scroll down to find what’s actually relevant.

If you do fall victim to a scam, the FBI wants to hear about it. Anyone who wants to talk to the feds can report the crime to your local FBI field office, or directly to the Internet Crime Complaint Center.
Police said a member of Elon Musk's security team is a suspect — not a victim — in what Musk alleged was a 'crazy stalker' incident

Erin Snodgrass
Tue, December 20, 2022 

Elon Musk looks down during a speech. Jim Watson/AFP via Getty Images

Police are looking to question a member of Elon Musk's security team, according to a statement.

Musk last week claimed a "crazy stalker" had jumped on a car carrying his son in Los Angeles.

But authorities said this week that a member of Musk's security team was a suspect in the incident.


A member of Elon Musk's security team is a suspect, and not a victim, police say, in an incident last week that the Twitter CEO characterized as a "crazy stalker" encounter.

On Tuesday, the South Pasadena Police Department issued a statement detailing the episode that sparked a wild week of Twitter tension, confirming that an incident involving two vehicles was reported to authorities on Tuesday, December 13.

Musk tweeted last week that a "crazy stalker" had followed a car carrying his son in Los Angeles, "thinking it was me." Musk alleged that the "stalker" had climbed onto the car in an attempt to stop it from moving. The billionaire tweeted an accompanying video that showed one of his security guards filming a man inside a vehicle and the car's license plate.



But a South Pasadena Police spokesperson said Tuesday that it was the 29-year-old Connecticut man in the other vehicle who called police to report an assault with a deadly weapon involving a car. When an officer arrived on the scene minutes later, "the victim" said he had just exited the freeway and stopped to use his phone in a parking lot when another vehicle pulled directly in front of him, blocking his path.

The driver of the second vehicle is believed to be a member of Musk's security team, authorities said. The man who called police said the Musk staffer approached him and accused him of following him on the freeway, the statement said. Both parties proceeded to film each other.

As Musk's security guard was leaving the parking lot, he "struck" the victim with his vehicle; he was gone by the time police arrived on the scene, according to authorities.

"On Thursday, December 15, 2022, South Pasadena Police learned the suspect involved in this case is believed to be a member of Elon Musk's security team," the police statement said. "Detectives do not believe Mr. Musk was present during the confrontation."

South Pasadena Police did not immediately respond to Insider's request for comment. Insider reached out to Musk for more information.

The police statement sheds new light on the incident that Musk quickly blamed on 20-year-old Jack Sweeney, who created a tool that automatically posted updates about Musk's private jet's location, prompting the Tesla CEO to threaten legal action against the college student.

Following the incident, Twitter abruptly changed its rules to forbid posting a person's live location. Several journalists who tweeted about the jet tracker were suspended from the social media platform, with Musk alleging that they had "doxxed him" for posting tweets related to his flights and comparing the posts to "assassination coordinates."

While the incident’s timing and location cast doubt on the billionaire’s narrative — the encounter occurred 23 hours after @ElonJet had last shared Musk’s location and 26 miles away from LAX — police may still be looking into the 29-year-old man involved, Marc Madero, an LAPD detective, told The Washington Post this past weekend.

The outlet identified the man involved in the incident as Brando Collado, an Uber Eats driver, who made strange claims about the musician Grimes, Musk's former girlfriend and mother of two of his kids, whose real name is Claire Elise Boucher. Collado said he knew that Boucher lived near where the incident occurred and suggested that she was communicating with him via discreet Instagram posts.

The Tuesday police statement said Collado never indicated that the altercation with Musk's security guard was "anything more than coincidental."
CRIMINAL ONLINE CAPITALI$M
The Half-a-Billion Fortnite Fine Kicks Off a New Era of Regulating User Interfaces


Thomas Germain
Wed, December 21, 2022 

Fortnite on a Nintendo Switch

In a sweeping settlement announced Monday, the Federal Trade Commission fined Epic Games a whopping $520 million after accusing the Fortnite-maker of a variety of unsavory business practices. The complaint touches on a range of issues from alleged violations of children’s privacy to tricking users into unintentional purchases, but there’s an overarching theme: deceptive design.

Epic agreed to make a number of changes to its interfaces as part of the settlement, such as adding friction to the purchase process to avoid accidental payments, introducing an instant purchase cancellation system, and turning voice chats off for minors.

“Epic used privacy-invasive default settings and deceptive interfaces that tricked Fortnite users, including teenagers and children,” said FTC Chair Lina Khan in a statement. “Protecting the public, and especially children, from online privacy invasions and dark patterns is a top priority for the Commission, and these enforcement actions make clear to businesses that the FTC is cracking down on these unlawful practices.”

After years of discussion, regulators are zeroing in on the manipulative powers of digital interfaces, and the government appears ready to act against them.

“The FTC has been doing work on deceptive design practices for years, but this is the biggest step up in terms of enforcement we’ve ever seen,” said John Davisson, director of litigation and senior counsel at the Electronic Privacy Information Center, better known as EPIC (unrelated to Epic Games).

Lawmakers have a newfound eye for the flaws of digital design. They’re paying increased attention to layout and composition on the web. An update to the California Consumer Privacy Act last year banned dark patterns, a term for deceptive design. California passed the Age Appropriate Design Code in September, which obligates companies to prioritize kids’ safety and well-being in the design of online services. A similar UK law with the same name went into effect last year—netting a $30 million fine for TikTok—and New York state is considering an even more aggressive children’s design bill of its own. U.S. federal regulators are taking up the mantle, too: the FTC held a dark patterns workshop in 2021.

“There’s definitely been a shift towards regulating design,” said Justin Brookman, director of technology policy for Consumer Reports, and former director of technology research at the FTC. “There’s recognition that choices about platform architecture are within the scope of what regulators can go after, and there’s more thinking about requiring companies to consider other values in designing products.” (Disclosure: this reporter formerly worked at Consumer Reports’ journalism division, which is separate from its advocacy wing, where Brookman works.)

Regulating design is complicated. You can influence user behavior by making one button blue and the other one red, but no one wants the government dictating the colors on websites. However, in cases like Fortnite’s, the problems are a little more clear.

Epic’s “counterintuitive, inconsistent, and confusing button configuration” tricked players into making hundreds of millions of dollars in unwanted purchases, the FTC said. Players could accidentally buy things when attempting to wake the game from sleep mode, or by tapping the instant purchase button, located right next to the preview item toggle, for example. When over a million users complained about the problem, Epic allegedly ignored them. “Using internal testing, Epic purposefully obscured cancel and refund features to make them more difficult to find,” the FTC said. Epic froze users’ accounts if they tried to dispute charges with their credit card companies.

Epic issued a statement about the settlement and its plans to address the problems raised by the FTC. “No developer creates a game with the intention of ending up here,” Epic said. “The laws have not changed, but their application has evolved and long-standing industry practices are no longer enough. We accepted this agreement because we want Epic to be at the forefront of consumer protection and provide the best experience for our players.”

“This settlement is going to wake companies up, they’re going to be taking a close look at what the FTC sees as manipulative design to make sure they’re not committing the same practices,” said EPIC’s Davisson.

Perhaps the most surprising part of the settlement has to do with Fortnite’s voice chat feature. Chats were turned on by default, even for children, which exposed kids to the risk of harassment or even sexual abuse. According to the FTC, this violated laws against unfair business practices. But what sets that argument apart is that it treats voice chats as intrinsically dangerous and therefore subject to regulatory scrutiny.

“To say turning voice chat on by default is per se harmful is a brand new principle for the FTC. I can’t think of any analogous cases where they said that sort of design choice was inherently harmful,” Brookman said.

That logic could have broader implications considering other tech features and services that may have built-in risks. Think of criticisms that TikTok’s algorithm is too addictive, for example, or Instagram’s links to suicidal thoughts and eating disorders among teen girls.

“In a sense Fortnite is a social media platform, to the extent that it has chat features, and the FTC is saying companies have more of an obligation to design their systems to repudiate harms,” Brookman said.

According to Davisson, Fortnite’s shift is an encouraging one, especially when you think of dark patterns in the context of privacy problems. “There’s an evolving understanding and acceptance that the design of platforms and websites is a major contributing factor to extractive commercial surveillance,” Davisson said. “That’s something that needs to be addressed as part of a broader data protection push.”

Update: 12/21/2022 5:00 p.m. ET: This story has been updated with a statement from Epic.
CRIMINAL CRYPTO CAPITALI$M TOO
A global drug cartel used Binance to launder millions, the DEA says. Here's how the world's largest crypto exchange is reportedly working with investigators to track them down.

Morgan Chittum
Wed, December 21, 2022

Vladimir Kazakov/Getty Images

A global drug cartel allegedly used Binance to launder tens of millions, an ongoing DEA investigation alleges.


Roughly $15 million to $40 million in illicit profits could have been funneled through Binance, according to Forbes, which obtained a search warrant.


Here's how the largest crypto exchange in the world is reportedly working with investigators.

A global drug cartel used Binance to funnel millions of the gang's illicit profits, an ongoing US Drug Enforcement Administration investigation alleges.

Between $15 million and $40 million has been laundered through the largest cryptocurrency exchange in the world, according to Forbes, citing a search warrant it obtained.


Binance, which announced plans earlier this year to buy a minority stake in Forbes, is working with investigators to help track down suspects.

The investigation into the cartel's use of Binance began in 2020, when DEA informants on a different crypto trading platform found a user offering cryptocurrency in exchange for fiat currency, Forbes reported.

The DEA found one culprit, Carlos Fong Echavarria, who later pleaded guilty to charges that included money laundering and drug dealing. Binance assisted the agency by tracking Echavarria's trading activity on the blockchain, which totaled $4.7 million. By following on-chain activity, investigators were able to identify an additional account receiving funds from Echavarria, per the warrant.

Another account holder, who hasn't been formally charged, allegedly bought nearly $42 million worth of cryptocurrencies, with at least $16 million being from drug money.

"This is actually an example of where the transparency of blockchain transactions works against criminal actors," Matthew Price, the senior director of investigations at Binance, told Forbes. "The bad guys are leaving a permanent record of what they're doing."

Earlier this year, Binance also helped the DEA in the agency's effort to seize over 100 accounts connected to laundering in Mexico.

Meanwhile, Binance is reportedly facing a probe from the US Department of Justice of its own, according to Reuters, citing four sources familiar with the matter. The report described the company's books as akin to a "black box," where not even Binance's former chief financial officer had full access to accounts during his almost three-year tenure.

In a statement to Reuters, Binance's chief strategy officer said the report's analysis and depictions of its business units were "categorically false."


Binance Responds to ‘FUD’: ‘A Healthy Company Will Not be Destroyed By a Tweet’


Stephen Graves,Andrew Throuvalas
Thu, December 22, 2022

Binance has published a lengthy statement in response to “recent media and community questions” regarding the company’s financial health in the wake of the collapse of rival crypto exchange FTX.

“FTX fell because it misappropriated user assets, and a healthy company will not be destroyed by a tweet,” read a translated version of the article posted to Binance’s Chinese blog.

In the article, titled “Facing FUD,” Binance hit back at allegations that its finances are a “black box,” raised in a recent Reuters article.

The firm wrote that it “does not need” to disclose detailed information on its financial status since it isn’t a publicly traded company. Binance added that it is self-sufficient and “financially healthy,” with “no external financing needs and external investors, and no intention to go public at this stage.”

The canceled Mazars audit


Binance’s most recent attempt to reassure customers about the state of its finances backfired when auditing firm Mazars pulled its proof-of-reserves assessment of the exchange from its website, and—according to Binance—dropped crypto firms as clients.

“The company stopped working with [all crypto companies] including Binance, not just Binance,” contested the exchange in its most recent blog post. It noted that traditional accounting firms, including the ‘Big Four,’ find it “very difficult to verify the overall on-chain reserve assets of crypto exchanges,” adding that “on-chain verification of the overall reserves of crypto companies is a very new field.”

BNB Plummets as Binance Auditor Mazars Halts Work With All Crypto Firms

Earlier this month, users withdrew their funds from Binance en masse; much of that fear stemmed from Binance’s delay in satisfying USDC withdrawals at the time, which have since resumed and are processing normally.

“All users' assets in Binance are supported 1:1, and users also have the right to withdraw coins at any time,” said Binance. It explained that the delay on USDC withdrawals was due to Binance’s need to convert its BUSD holdings back into USDC.

In the blog post, Binance noted it has a debt-free capital structure, funds its daily operations through user transaction fees, and does not misappropriate user assets. It also hit back at allegations that it sought to “destroy” FTX, levied by the likes of former FTX CEO Sam Bankman-Fried and former FTX spokesperson Kevin O’Leary.

“Binance will not regard other exchanges as ‘competitors,’” the exchange wrote, adding that it’s focused on “promoting and expanding industry adoption” and hopes to see more exchanges coexisting in the crypto ecosystem.
PRISON NATION U$A
The FCC can finally hammer predatory prison phone call companies, thanks to just-passed bill




Devin Coldewey
Thu, December 22, 2022 

A brand-new law (awaiting only the president's signature) will let the Federal Communications Commission directly regulate rates in the notoriously predatory prison calling industry. Under the threat of having to provide a solid product for a reasonable price, companies may opt to call it a day and open up the market to a more compassionate and forward-thinking generation of providers.

Prison calling systems depend on the state and the prison system, and generally have run the gamut from good enough to shockingly bad. With a literally captive customer base, companies had no real reason to innovate, and financial models involving kickbacks to the prisons and states incentivized income at all costs.

Inmates are routinely charged extortionate rates for simple services like phone calls and video calls (an upsell), and have even had visitation rights rescinded, leaving paid calls the only option. Needless to say, this particular financial burden falls disproportionately on people of color and those with low incomes, and it's a billion-dollar industry.

It's been this way for a long time, and former FCC commissioner Mignon Clyburn spent years trying to change it. When I talked with her in 2017, before she left the agency, she called inmate calling "the clearest, most glaring type of market failure I’ve ever seen as a regulator." It was an issue she spent years working on, but she gave a lot of credit to Martha Wright-Reed, a grandmother who had organized and represented the fight to bring reform to the system right up until she died.

FCC Commissioner Mignon Clyburn talks privacy, compromise and connecting communities

And it is after Martha Wright-Reed that the bill today is named. It's a simple bill, imbuing the FCC with the power "to ensure just and reasonable charges for telephone and advanced communications services in correctional and detention facilities." It does this with some minor but significant changes to the Communications Act of 1934, which (among other things) established the FCC and is regularly updated for this purpose. (The bill passed the House and Senate and will almost certainly be signed by President Biden soon, when the festivities relating to the spending bill, Volodymyr Zelenskyy's visit, and the holiday address pass.)

"The FCC has for years moved aggressively to address this terrible problem, but we have been limited in the extent to which we can address rates for calls made within a state’s borders," said FCC chairwoman Jessica Rosenworcel in a statement. "Today, thanks to the leadership of Senators Duckworth, Portman and their bipartisan coalition, the FCC will be granted the authority to close this glaring, painful, and detrimental loophole in our phone rate rules for incarcerated people." (She also thanked Wright-Reed, as well as Clyburn.)

Free Press has collected a number of other comments from interested parties, all lauding the legislation for curbing "carceral profiteering" and generally benefiting inmates rather than continuing to treat them like a source of labor or easy cash.

While it's great that costs will go down as soon as the FCC can put together and pass a rule on the matter, the effect will probably be greater than just savings.

Most companies in place today will all but certainly face vastly reduced revenues along with increased scrutiny as the FCC requires reports and takes any other measures it decides are necessary to enforce the new rules. It would not be surprising at all if plenty of these companies just get out while the gettin's good.

The introduction of regulation into a space like this, dominated for years by legacy providers, may well cause a changing of the guard — something we've seen advance notice of with some states embracing new models like Ameelio's. The startup began as a way to mail postcards to inmates for free, but soon they had built a modern digital video calling infrastructure that's far cheaper and easier to operate than the legacy ones.

Ameelio’s free video calling service for inmates goes live at first facilities

Now operating in three states, Ameelio's service can also serve as the basis for activities like education and legal advocacy in prison facilities, since the cost is so much lower and access is easier. (As indeed the founders discovered, and went on to found Emerge Career.)

A bunch of shady companies in a hurry to leave means a market opportunity as states scramble to find providers — no doubt Ameelio will be looking to fill some of those gaps, but the next few years will probably see other companies stepping in to take part as well.

The prison system we have is in dire need of reformation in general, but that will happen piece by piece, as we see happening here.
THEY CREATED THE SLA
California university apologizes for prisoner experiments


A wheelchair-bound inmate wheels himself through a checkpoint at the California Medical Facility in Vacaville, Calif., on April 9, 2008. A prominent California medical school has apologized for conducting unethical experimental medical treatments on 2,600 incarcerated men in the 1960s and 1970s. (AP Photo/Rich Pedroncelli, File)

Thu, December 22, 2022 

SAN FRANCISCO (AP) — A prominent California medical school has apologized for conducting dozens of unethical medical experiments on at least 2,600 incarcerated men in the 1960s and 1970s, including putting pesticides and herbicides on the men's skin and injecting them into their veins.

Two dermatologists at the University of California, San Francisco — one of whom remains at the university — conducted the experiments on men at the California Medical Facility, a prison hospital in Vacaville that's about 50 miles (80 kilometers) northeast of San Francisco. The practice was halted in 1977.

The university's Program for Historical Reconciliation issued a report about the experiments earlier this month, writing that the doctors engaged in “questionable informed consent practices” and performed procedures on men who did not have any of the diseases or conditions that the research aimed to treat. The San Francisco Chronicle first reported the program's findings Wednesday.

“UCSF apologizes for its explicit role in the harm caused to the subjects, their families and our community by facilitating this research, and acknowledges the institution’s implicit role in perpetuating unethical treatment of vulnerable and underserved populations — regardless of the legal or perceptual standards of the time,” Executive Vice Chancellor and Provost Dan Lowenstein said in a statement.

The report said further analysis is needed to determine the extent of harms caused to the prisoners as a result of the experiments and what the university should do in response.

“We are still in the process of considering the recommendations and determining appropriate next steps,” the university said in a statement Thursday. "As we do so, it will be with humility and an ongoing commitment to a more just, equitable and ethical future.”

A spokesperson for the California Department of Corrections and Rehabilitation, Dana Simas, said officials had not yet read the report. However, the agency and California Correctional Health Care Services “strive to ensure the incarcerated population receive appropriate health care that meets the community standard of care and ethics,” Simas wrote.

The report focused on research by Dr. Howard Maibach and Dr. William Epstein. Maibach continues to work at the university, and Epstein died in 2006. It was not immediately clear whether Maibach would face any discipline in light of the report.

The experiments involved administering doses of pesticides and herbicides to the incarcerated men, who volunteered for the studies and were paid $30 a month for their participation — among the highest-paid roles at the prison and in high demand, according to a 1977 article in the university's student newspaper, The Synapse.

Other experiments included placing small cages with mosquitos close to the participants' arms or directly on their skin to determine “host attractiveness of humans to mosquitos,” the report stated.

The research ended in 1977 when California prohibited human subject research in state prisons, a year after the federal government halted the practice.

But Epstein in 1977 testified in state hearings in support of biomedical experimentation at prisons, the report found, and investigators could not find any evidence that he changed his opinion before he died.

In a letter to the university's dermatology department, Maibach wrote that he regrets having participated in research that does not meet current standards, but said he believed the experiments had offered benefits to some of the patients.

“What I believed to be ethical as a matter of course forty or fifty years ago is not considered ethical today,” he wrote. “I do not recall in any way in which the studies caused medical harm to the participants.”

The university says there is no evidence that the doctors' research was directed specifically at Black men, although they were trained by a now-deceased Philadelphia doctor whose research at a Pennsylvania prison was unethical and disrespectful toward the subjects, many of whom were incarcerated Black men.

The report also found that many of Maibach’s publications during his career perpetuate the biologization of race — which he addressed in his letter by saying he has now “come to the understanding that race has always been a social and not a biological construct, something not appreciated by so many of us in a prior era.”

“While one of his (Maibach's) recent articles hints at a possible reconsideration of the biology of race, we believe the long history of his research of skin differences along racial lines, with race as a possible biological factor, perpetuated the continuance of racial science in dermatology and has yet to be publicly addressed,” the report stated.

Maibach's son, Edward Maibach, wrote in an email Thursday to The Associated Press that his father had suffered a stroke last week and was unable to respond to press inquiries.

The younger Maibach said his father had not been allowed to meet with the report's authors or access their documents. The report and a press release from the university, he wrote, treated his father “as a ‘lone ranger’ who seemingly acted without knowledge or approval of others at UCSF. This, too, is incorrect.”

“Dr. Maibach’s activities at Vacaville were known to, and endorsed by, UCSF administrators, including the UCSF ethicist,” Edward Maibach wrote.
Silicon Valley's job cuts are everybody else's gain as tech workers find exciting new opportunities outside Big Tech

Diamond Naga Siu,Rebecca Knight
Wed, December 21, 2022 



The headlines out of Silicon Valley are enough to send shivers down the spine of any tech worker.


Layoffs and hiring freezes have happened at companies of all sizes but hit Big Tech especially hard.


Yet experts are calm, and studies have found that tech jobs are abundant in other industries.

The bad news: The pandemic-fueled hiring spree across Silicon Valley — which allowed the tech giants to grow beyond their means — has backfired, now prompting many companies to lay off troves of workers.


Last month alone, Amazon cut 10,000 workers, Meta slashed 11,000 roles, and Twitter let go of 3,750 people. Amid high inflation and recession fears, Big Tech companies, including the historically stable Apple, have instituted indefinite hiring freezes, leaving their employees feeling uncertain about their futures.

But well-paying tech jobs are abundant if workers look at other industries, new research suggests. Recruiters and other experts tell Insider that tech workers are in especially high demand in sectors including insurance, healthcare, retail, government, and banking.

"That's the thing about being laid off in a tight labor market: You discover your worth," said Julia Pollak, the chief economist of ZipRecruiter.

Analysis from the tech-careers site Dice found that job postings for tech-focused roles were up 25% from January to October compared with the same period last year. About 60% of the top 100 employers of tech talent during that period were from nontech sectors, like healthcare, consulting, defense, and banking.

For many job seekers the interest in those kinds of companies is mutual, said Allison Hemming, the CEO of The Hired Guns, a tech-recruiting firm in New York City. "As a recruiter, I ask every candidate: What are your dream companies?" she said. "It was always the same for years — Google, Amazon, and Meta — and it was hard to peel people out of that."

So much of working in tech is about finding fixes to vexing problems, inventing new ways of doing things, and bringing fresh creativity to bear. Those skills and expertise — a crucial part of the job for engineers, data scientists, and product managers — are needed across a variety of industries, says Art Zeile, the CEO of Dice.

Though eye-popping compensation packages inflated with stock options and cushy perks like on-site laundry may not be as common in industries outside tech, the market is shifting and with it the priorities of workers.

"But now with this economic wobble that we're in," Hemming said, "a little job safety and security can go a long way and candidates are open to new opportunities."
Nontech companies need tech talent too

The Washington State Capitol in Olympia.


A common refrain has emerged in the world of business over the past decade or so: Every company is now a tech company. Online shopping, the rise of cloud computing, and the demand for mobile apps have led just about every industry to embrace tech "because that's the way we live," said Zeile, the Dice CEO.

"You order coffee online," he said. "You do your banking online. You do telemedicine for your healthcare. All those things are creating opportunities for technologists to shine."

As you may expect, the traditional tech industry remains the largest employer of tech workers, the experts say. But when you factor in local, state, and federal government workers, the public sector isn't far behind, says Tim Herbert, the chief research officer at the tech industry association CompTIA. He also pointed to finance, insurance, manufacturing, and information as hot industries for tech workers.

ZipRecruiter's Pollak said the turmoil in Big Tech was pushing "some tech workers to explore opportunities outside" the usual suspects for the first time. And because competition for employees is so stiff, they're discovering they have plenty of options.
Tech workers are finding a new job sooner rather than later

A study from the workforce-data provider Revelio Labs further identified a hidden upside of layoffs. In its analysis, it found that nearly three-fourths of workers laid off this year found a new job within three months, and more than half found a job that paid more than what they were earning before.

And discovering worth is a two-way street, said Ryan Sutton, a technology talent leader at Robert Half. He said an economic slowdown was the worst time for a hiring freeze, since it provided the perfect conditions for companies to level up the quality of talent — possibly shedding lackluster workers and giving the chance to "upgrade that seat."

"When you're in an upturn, you are typically not always able to get access to the best talent because it's so fast-paced," Sutton said. "A smart company is really, in this current economic cycle, trying to strengthen their bench."


A hiring manager at a table with a candidate holding a resume.


Tech job creation and hiring numbers remain strong, but layoffs keep coming, underscoring a tech labor market in flux.

Sutton of Robert Half urged expediency to anyone who lost their job recently, since he noticed that people laid off near the end of the year tend to delay finding a new role until after the holidays.

"There's no value in waiting," Sutton said. "If you are in the job market today, there's likely going to be less competition for that opportunity than there will be in 30, 45, 60 days."

As for how laid-off tech workers ought to position themselves for these jobs, the Hired Guns recruiter Hemming has some advice. "Think about what playbooks you have to offer and how they apply to Fortune 2000 companies," she said.

In other words, reflect on your ideas and experience and think about how you'd translate that knowledge to a new sector. Perhaps you helped create a data strategy, or you designed a cybersecurity system, or you wrote the code for a digital software product.

"If you can show how you'll transfer your expertise and know-how," she said, "that's a compelling offering."

Tesla, Mercedes, and GM are being probed by US Senate on whether they use forced Uyghur labor

Senator Ron Wyden (D-OR), chair of the U.S. Senate Finance Committee, is probing major car makers on their links to forced Uyghur labor.Jemal Countess/Getty Images for SEIU
  • The Senate Finance Committee sent letters to car makers about their links to forced Uyghur labor.

  • The letters ask the automakers to check their supply chains for connections to the Xinjiang region.

  • The Uyghur Forced Labor Prevention Act bans most imports from the Xinjiang region.

The US Senate Finance Committee is looking into whether major car makers are sourcing parts and metals linked to forced labor by Uyghurs, a Muslim minority group based in Xinjiang, China.

Ron Wyden, a Senator from Oregon and the chair of the Senate Finance Committee, sent letters to Honda, Ford, General Motors, Mercedes-Benz, Stellantis, Tesla, Toyota, and Volkswagen on Thursday requesting specific information related to their supply chains.

The committee requested that the car makers conduct their own supply chain mapping and analysis to identify links to Xinjiang. The committee also asked if they have ever ended or threatened to end relationships with suppliers — including sub-suppliers — over possible connections to Xinjiang.

"Automotive supply chains are vast and complex, but it is vital that automakers scrutinize their relationships with all suppliers linked to Xinjiang," the letters said.

The letters come just weeks after Sheffield Hallam University released new research with what they said was evidence that the car makers in question may be importing materials produced by forced Uyghur labor.

The researchers said they found that at least thousands of Uyghurs have been forced to work in steel and aluminum metal-processing factories in accordance with Chinese government mandates. These metals are used to make car frames, wheels, brakes, and bodies.

Kendyl Salcito, one of the researchers involved in the Sheffield study, alleged to Insider that the factory conditions are "utterly appalling."

The letters also come a year after President Joe Biden signed the Uyghur Forced Labor Prevention Act, which seeks to ban most imports from the Xinjiang region.

"The United States considers the Chinese government's brutal oppression of Uyghurs in Xinjiang an 'ongoing genocide and crimes against humanity,'" the letters said.

The Chinese government called US claims of oppression and genocide false, the Wall Street Journal reported.

A Honda spokesperson told Insider that it expects its suppliers to comply with its global sustainability guidelines and "will work with policymakers on these important issues."

Stellantis, the brand behind Chrysler and Jeep, among others, is "taking these matters extremely seriously" and currently reviewing chairman Wyden's letter and claims made in the research, a spokesperson told Insider. Stellantis referred Insider to its code of conduct that its suppliers are expected to meet.

General Motors told the Journal that its policy prohibits any form of forced or involuntary labor, abusive treatment of employees, or corrupt business practices in its supply chain, while a Volkswagen spokesman told the Journal that the company investigates any alleged violation of its policy, saying "serious violations such as forced labor could result in termination of the contract with the supplier."

The other automakers did not immediately reply to requests for comment from Insider or the Journal.

Volkswagen, Honda, General Motors, and Stellantis previously told Insider that they reject forced labor in their supply chains and take accusations of abuse seriously.

The committee said that increased transparency will help the government investigate how effective trade laws are in addressing labor and other human rights abuses in China, according to the letters.

"I recognize automobiles contain numerous parts sourced across the world and are subject to complex supply chains," the letters said. "However, this recognition cannot cause the United States to compromise its fundamental commitment to upholding human rights and US law."

Somehow, Twitter Finds More Workers to Lay Off

Lauren Leffer
Thu, December 22, 2022 

Twitter’s San Francisco offices have been emptied out, with prophecies of 75% total staff cuts now fulfilled, according to estimates.


As tumbleweeds blow through the empty halls and vacant office spaces of Twitter headquarters, somehow, somewhere, Elon Musk found even more people to fire.

The flailing social media company laid off half of its remaining public policy team this week, according to LinkedIn and Twitter posts from former department employee Theodora Skeadas, as first reported by TechCrunch. Last Friday, the platform also cut additional engineering staff responsible for site infrastructure, according to a report from The Information.

The public policy team is/was responsible for managing legal and civil interactions regarding topics like speech rights, privacy, and safety. The team field(ed) requests from governments and other organizations to moderate content and set rules, according to a report from Reuters.

Gizmodo reached out to both Skeadas and Twitter with questions about the layoffs, but did not immediately receive a response, and the total number of people fired is unclear. However, the public policy team’s leader, Sinéad McSweeney also left the company this week, multiple unnamed sources reportedly told Reuters.

Since Musk’s hostile and chaotic takeover of the social media platform, the billionaire has cut thousands of employees from the payroll. Then, he had to rehire some. And then, Musk endeavored to make working at Twitter so unpleasant (read: “hardcore”), that yet more quit.

When Musk’s corporate acquisition finally went through at the end of October, reports surfaced that the Tesla CEO planned to scrap three quarters of the site’s staff. Though he denied those rumors, they’ve now come to fruition. Between quasi-voluntary departures and forced exits, an estimated 75% of all of Twitter’s formerly ~7,000 employees no longer work at the company, according to TechCrunch. Next up on the chopping block? Probably the bluebird itself.

The platform’s engineering, ethical AI, content moderation, and now public policy teams have all been hollowed out or outright dissolved. Last week, Twitter disbanded its Trust and Safety Council, which Skeadas was a leader of. Even George Hotz, the notorious hacker who offered his services to Twitter at low cost for 12 weeks, quit the site.

“The work still matters!” Skeadas wrote in her extended LinkedIn post regarding the layoffs. “I wish good fortune and strength to those who remain at Twitter.” But based on how things are going so far, the few, the proud, the enduring Twitter staff are going to need more than luck and resolve. They’re going to need a Christmas miracle.
