Wednesday, June 14, 2023

Will AI enslave us?

BY SEBASTIAN THRUN, 
OPINION CONTRIBUTOR 
THE HILL - 06/14/23 


Artificial intelligence is a tool to discover patterns in very large datasets. ChatGPT, for example, uses a form of AI that combs through hundreds of billions of documents and images to find plausible ways to respond to a given question.

Other forms of AI discover patterns in videos and even sound recordings. Recent results have been astonishing.

But to answer the underlying question, no: AI will not enslave us.

ChatGPT derives its wisdom from documents written by people. It is merely a mirror of how we communicate, not a malevolent force that can reduce our civilization to ruin.

Like any tool, AI can be used as a weapon, and this is something I worry about. Bad actors already use AI to generate fake news. With the most recent advancements, they can now create fake voice recordings, fake images and fake videos that are indistinguishable from reality. Such acts will lead to new forms of cybercrime and new threats to our democracy.

AI also allows authoritarian governments to spy on people at levels never experienced. And AI will lead to more potent cyber-attacks on our infrastructure, our corporations and our democracy. These are all threats I take seriously, and about which we should all worry.

But in all this, we should not forget why we are pursuing development of artificial intelligence in the first place.

AI has already saved countless lives by helping doctors to diagnose deadly diseases such as cancer.

As I write this, a plethora of driverless cars are operating in my neighborhood in San Francisco, bringing an unprecedented level of safety and access to transportation to us all.

AI has also become an indispensable tool for creative professionals who generate content for marketing, education and entertainment, and even for software engineers.

Udacity now provides personalized AI mentors to more than 3 million students in the Arabic-speaking world and Uzbekistan. Cresta provides AI coaches to call center agents; a recent study by researchers from MIT and Stanford found that such assistance improved agent productivity by 14 percent. And AI has long been used by companies such as Google to find the information you are seeking.

In the present debate, we are missing the voice of reason. Some of our leaders link AI to nuclear and pandemic apocalypses. This is not the future I see. I see a technology that will make all of us better people.

Think how much of your daily work is mind-numbing, repetitive and unenjoyable. You will soon have your personal AI assistant, to whom you can hand over your menial tasks, freeing up your mind and your time. The assistant will be entirely under your control. And all children will have personalized AI tutors.

We should all welcome a broad debate about the pros and cons of AI. But let’s not forget that AI is a tool used by people, which derives all of its signs of intelligence from things that other people have written. AI is not a living being that has evolved to survive, but a tool developed by us.

Like a kitchen knife, AI can be used as a tool or as a weapon. Let's make AI work for everyone's benefit, and let's work hard to prevent abuses.

Sebastian Thrun is an adjunct professor at Stanford and a pioneer in the field of AI. He co-founded Google X, Waymo and Udacity.


Bipartisan bill seeks to deny AI companies liability protections

THE HILL - 06/14/23


A bipartisan bill introduced Wednesday seeks to clarify that artificial intelligence (AI) companies are not eligible for protections that keep tech companies from being held legally responsible for content posted by third parties.

The bill introduced by Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.) aims to amend Section 230 of the Communications Decency Act with a clause that strips the immunity given to tech companies in cases involving the use of generative AI.

Dubbed the No Section 230 Immunity for AI Act, the legislation would also empower Americans harmed by generative AI models to sue AI companies in federal or state court.

The bill comes as senators weigh proposals to regulate the booming AI industry.


The Senate Judiciary Committee’s privacy, technology and law subcommittee, which Blumenthal and Hawley lead, held a hearing last month with the CEO of OpenAI — the maker of ChatGPT — about the risks and potential of AI.

The Judiciary panel has held two further hearings this month on AI: one on intellectual property last week and another on human rights concerns Tuesday.

During the hearings, lawmakers on both sides of the aisle raised concerns around how the controversial Section 230 provision would apply to AI technology.

The proposal is being introduced as the tech industry argues the provision could apply to generative AI content, while some experts and advocacy groups say it likely will not.

Without clarification by Congress, the decision will likely be left to how courts interpret the provision in various cases.

“AI platform accountability is a key principle of a framework for regulation that targets risk and protects the public,” Blumenthal said in a statement.

He said the proposal introduced Wednesday is the “first step in our effort to write the rules of AI and establish safeguards as we enter a new era.”

Both Blumenthal and Hawley are critics of the overarching Section 230 provision that provides legal protection for tech companies, yet a proposal to amend it has not moved forward in Congress amid broader debate over content moderation.

“We can’t make the same mistakes with generative AI as we did with Big Tech on Section 230,” Hawley said in a statement.

“When these new technologies harm innocent people, the companies must be held accountable. Victims deserve their day in court and this bipartisan proposal will make that a reality,” he added.

AI must not become a driver of human rights abuses

It is the responsibility of AI companies to ensure their products do not facilitate violations of human rights.


Eliza Campbell
Technology and inequality researcher with Amnesty International

Michael Kleinman
Director of Amnesty International’s Silicon Valley Initiative

Published On 13 Jun 2023
More than 350 scientists and AI professionals have signed a letter warning of AI's risks for humanity

On May 30, the Center for AI Safety released a public warning of the risk artificial intelligence poses to humanity. The one-sentence statement signed by more than 350 scientists, business executives and public figures asserts: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal scale risks such as pandemics and nuclear war.”

It is hard not to sense the brutal double irony in this declaration.

First, some of the signatories warning about the end of civilisation, including the CEOs of Google DeepMind and OpenAI, represent the very companies that are responsible for creating this technology in the first place. Second, it is exactly these same companies that have the power to ensure that AI actually benefits humanity, or at the very least does not do harm.

They should heed the advice of the human rights community and adopt immediately a due diligence framework that helps them identify, prevent, and mitigate the potential negative impacts of their products.

While scientists have long warned of the dangers that AI holds, it was not until the recent release of new Generative AI tools that a larger part of the general public realised the negative consequences it can have.

Generative AI is a broad term, describing “creative” algorithms that can themselves generate new content, including images, text, audio, video and even computer code. These algorithms are trained on massive datasets, and then use that training to create outputs that are often indistinguishable from “real” data – rendering it difficult, if not impossible, to tell if the content was generated by a person, or by an algorithm.

To date, Generative AI products have taken three main forms: tools like ChatGPT which generate text, tools like Dall-E, Midjourney and Stable Diffusion which generate images, and tools like Codex and Copilot which generate computer code.
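To make that mechanism concrete, here is a minimal sketch of text generation with a small open model, using the Hugging Face transformers library. The model, prompt and settings are illustrative assumptions, not something the authors describe; systems like ChatGPT are far larger but rest on the same principle of continuing text from patterns learned in training.

```python
# A minimal sketch of text generation with a small open model (illustration only).
# The model name, prompt and settings below are assumptions, not the authors' setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small open model as a stand-in

prompt = "Generative AI describes algorithms that"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The continuation is sampled from statistical patterns learned during training,
# which is why machine-written text can be hard to tell apart from human writing.
print(result[0]["generated_text"])
```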

The sudden rise of new Generative AI tools has been unprecedented. The ChatGPT chatbot developed by OpenAI took less than two months to reach 100 million users. This far outpaces the initial growth of popular platforms like TikTok, which took nine months to reach as many people.

Throughout history, technology has helped advance human rights but also created harm, often in unpredictable ways. When internet search tools, social media, and mobile technology were first released, and as they grew in widespread adoption and accessibility, it was nearly impossible to predict many of the distressing ways that these transformative technologies became drivers and multipliers of human rights abuses around the world.

Meta’s role in the 2017 ethnic cleansing of the Rohingya in Myanmar, for example, or the use of almost undetectable spyware deployed to turn mobile phones into 24-hour surveillance machines used against journalists and human rights defenders, are both consequences of the introduction of disruptive technologies whose social and political implications had not been given serious consideration.

Learning from these developments, the human rights community is calling on companies developing Generative AI products to act immediately to stave off any negative consequences for human rights that their products may have.

So what might a human rights-based approach to Generative AI look like? There are three steps, based on evidence and examples from the recent past, that we suggest.

First, in order to fulfil their responsibility to respect human rights, these companies must immediately implement a rigorous human rights due diligence framework, as laid out in the UN Guiding Principles on Business and Human Rights. This includes proactive and ongoing due diligence to identify actual and potential harms, transparency regarding these harms, and mitigation and remediation where appropriate.

Second, companies developing these technologies must proactively engage with academics, civil society actors, and community organisations, especially those representing traditionally marginalised communities.

Although we cannot predict all the ways in which this new technology can cause or contribute to harm, we have extensive evidence that marginalised communities are the most likely to suffer the consequences. The initial versions of ChatGPT exhibited racial and gender bias, suggesting, for instance, that Indigenous women are "worth" less than people of other races and genders.

Active engagement with marginalised communities must be part of the product design and policy development processes, to better understand the potential impact of these new tools. This cannot be done after companies have already caused or contributed to harm.

Third, the human rights community itself needs to step up. In the absence of regulation to prevent and mitigate the potentially dangerous effects of Generative AI, human rights organisations should take the lead in identifying actual and potential harm. This means that human rights organisations should themselves help to build a body of deep understanding around these tools and develop research, advocacy, and engagement that anticipate the transformative power of Generative AI.

Complacency in the face of this revolutionary moment is not an option – but neither, for that matter, is cynicism. We all have a stake in ensuring that this powerful new technology is used to benefit humanity. Implementing a human rights-based approach to identifying and responding to harm is a critical first step in this process.

The views expressed in this article are the authors' own and do not necessarily reflect Al Jazeera's editorial stance.


Eliza Campbell
Technology and inequality researcher with Amnesty International
Eliza Campbell is a technology and inequality researcher with Amnesty International, focusing on the human rights implications of emerging technologies.

Michael Kleinman
Director of Amnesty International’s Silicon Valley Initiative
Michael Kleinman is the Director of Amnesty International’s Silicon Valley Initiative, helping lead the organisation’s work on the human rights implications of new and emerging technologies.


The case for bottom-up AI

With an open source approach, AI can help us build a more inclusive, innovative, and democratic society.



OPINION
Jovan Kurbalija
Published On 12 Jun 2023


ChatGPT and other generative artificial intelligence tools are rising in popularity. If you have ever used these tools, you might have realised that you are revealing your thoughts (and possibly emotions) through your questions and interactions with the AI platforms. You can therefore imagine the huge amount of data these AI tools are gathering and the patterns that they are able to extract from the way we think.

The impact of these business practices is crystal clear: a new AI economy is emerging through collecting, codifying, and monetising the patterns derived from our thoughts and feelings. Intrusions into our intimacy and cognition will be much greater than with existing social media and tech platforms.

We, therefore, risk becoming victims of "knowledge slavery", in which corporate and/or government AI monopolies control access to our own knowledge.

Let us not permit this. We have "owned" our thinking patterns since time immemorial; we should also own those derived automatically via AI. And we can do it!

One way to ensure that we remain in control is through the development of bottom-up AI, which is both technically possible and ethically desirable. Bottom-up AI can emerge through an open source approach, with a focus on high-quality data.

Open source approach: The technical basis for bottom-up AI


Bottom-up AI challenges the dominant view that powerful AI platforms can be developed only by using big data, as is the case with ChatGPT, Bard, and other large language models (LLMs).

According to a leaked document from Google titled "We Have No Moat, and Neither Does OpenAI", open source AI could outcompete giant models such as ChatGPT.

As a matter of fact, it is already happening. Open source platforms such as Vicuna, Alpaca, and LLaMA are getting closer in quality to ChatGPT and Bard, the leading proprietary AI platforms.

Open source solutions are also more cost-effective. According to Google’s leaked document: “They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.”

Open source solutions are also faster, more modular, and greener in the sense that they demand less energy for data processing.
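The extreme cost difference the leaked memo describes is typically achieved with parameter-efficient fine-tuning methods such as low-rank adaptation (LoRA), in which only small adapter matrices are trained on top of a frozen open model. The sketch below, using the Hugging Face peft library, illustrates the idea; the base model and hyperparameters are assumptions for illustration, not details taken from the memo or from any of the projects named above.

```python
# Illustrative sketch of parameter-efficient fine-tuning (LoRA) on an open model.
# The base model, target modules and hyperparameters are assumptions, not the
# configuration used by any project mentioned in the article.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "openlm-research/open_llama_3b"  # hypothetical choice of open base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Only small low-rank adapter matrices are trained; the base weights stay frozen,
# which is what makes fine-tuning feasible on a very small budget.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights
```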

As algorithms for bottom-up AI become increasingly available, the focus is shifting to ensuring higher-quality data. Currently, the algorithms are fine-tuned largely by hand, through data labelling performed mainly in low-cost English-speaking countries such as India and Kenya; ChatGPT's datasets, for example, are annotated in Kenya. This practice is not sustainable, as it raises many questions related to labour law and data protection. It also cannot provide the in-depth expertise that is critical for the development of new AI systems.

At Diplo, the organisation I lead, we have been successfully experimenting with an approach that integrates data labelling into our daily operations, from research to training and management. Analogous to yellow markers and post-its, we annotate text digitally as we run courses, conduct research or develop projects. Through interactions around text, we gradually build bottom-up AI.
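As one purely hypothetical illustration of the idea (not a description of Diplo's actual tooling), such annotations can be captured as simple structured records that accumulate into a dataset for later model training:

```python
# Hypothetical sketch of capturing in-context text annotations as structured
# records for later model training. Field names are illustrative assumptions,
# not a description of Diplo's actual system.
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class Annotation:
    document_id: str   # which course text, research note or project document
    span: str          # the highlighted passage (the "yellow marker")
    comment: str       # the annotator's note (the "post-it")
    label: str         # a topic or argument tag
    annotator: str
    created: str

record = Annotation(
    document_id="course-internet-governance-week3",
    span="Data flows across borders raise jurisdictional questions.",
    comment="Good example for the e-commerce module.",
    label="jurisdiction",
    annotator="lecturer-01",
    created=date.today().isoformat(),
)

# Append each annotation to a JSONL file that can later feed fine-tuning.
with open("annotations.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```

Accumulated through everyday work, records like these become the kind of high-quality, expert-labelled data that outsourced labelling struggles to provide.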

The main barrier in this bottom-up process is not technology but cognitive habits that often favour control over knowledge and information sharing. Based on our experience at Diplo, by sharing thoughts and opinions on the same texts and issues, we gradually increase cognitive proximity, not only among colleagues but also between humans and AI algorithms. In this way, while building bottom-up AI, we have also nurtured a new type of organisation, one that not only accommodates the use of AI but also changes the way we work together.

How will bottom-up AI affect AI governance?

ChatGPT triggered major governance fears, including a call by Elon Musk, Yuval Harari and thousands of leading scientists to pause AI development on the grounds that big AI models pose major risks to society, including high concentrations of market, cognitive, and societal power. Most of these fears and concerns could be addressed by bottom-up AI, which returns AI to citizens and communities.

By fostering bottom-up AI, many governance problems triggered by ChatGPT might be resolved through the mere prevention of data and knowledge monopolies. We will be developing our AI based on our data, which will ensure privacy and data protection. As we have control over our AI systems, we will also have control over intellectual property. In a bottom-up manner, we can decide when to contribute our AI patterns to wider organisations, from communities to countries and the whole of humanity.

Thus, many AI-related fears, including those raised in relation to the very survival of humanity (leaving aside whether they are realistic or not), will become less prominent once we own our AI and knowledge patterns.

Bottom-up AI will be essential for developing an inclusive, innovative, and democratic society. It can mitigate the risks of power centralisation that are inherent in today's generative AI. Current legal, policy, and market mechanisms cannot deal with the risk of knowledge monopolies created by generative AI. Thus, bottom-up AI is a practical way to foster a new societal "operating system" built around the centrality of human beings, their dignity, free will, and creative potential, as Diplo has proposed through our humAInism approach, which we began developing back in 2019.

Will bottom-up AI take off?

Technological solutions for bottom-up AI are feasible today. Will we use them as an alternative to top-down AI? For the time being, it remains anyone’s guess. Some individuals and communities may have more incentives and abilities to experiment with bottom-up AI than others. Some may continue to rely on top-down AI out of sheer inertia. And the two approaches may even co-exist. But we owe it to ourselves and to humanity to question what is being served to us, and to both explore and encourage alternatives. And, ultimately, to make informed decisions.

The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial stance.    


Jovan Kurbalija
Founding Director of the DiploFoundation and Head of the Geneva Internet Platform
Jovan Kurbalija is the Founding Director of the DiploFoundation and Head of the Geneva Internet Platform. He previously served as Executive Director of the UN High-Level Panel on Digital Cooperation (2018-2019). Kurbalija has been a leading expert on the impact of AI and digitalisation on diplomacy and modern society. His book ‘Introduction to Internet Governance’, translated into 11 languages, is a textbook at many universities worldwide.
