Saturday, October 25, 2025

Global South's Data-Colonialism Paradox



Vivek Parat


Data is not the new oil; it is the new soil. If developing nations can’t grow their own crops—algorithms, services, taxes—someone else will harvest the field.

Last October, Blessing Adebayo, a small cosmetics seller in Lagos, received an e-mail from Amazon Web Services: her customers’ data would henceforth be stored in Ireland. “I thought my files lived in my own shop,” she told me. “Suddenly they’re in a cold room 6,000 kilometres away, and I have to pay dollars to reach them.”

Blessing’s complaint is a miniature of a much larger shift. Across Africa, Asia and Latin America, governments are discovering that the data their citizens produce—location histories, health records, shopping lists—are quietly shipped to server farms in Silicon Valley, Dublin or Shanghai, where they are refined into the algorithms that now shape what people watch, buy and even whom they vote for. The profits stay north; the raw material is mined south. A new colonial pipeline has been built, only this time the cargo is digital.

How Much is Leaving?

The UNCTAD Digital Economy Report 2024 calculates that developing countries attract less than 30% of global foreign investment in digital sectors, while 80% of projects are crowded into just 10 economies. In other words, the value created from Nigerian clicks or Indonesian swipes is booked as GDP (gross domestic product) in California.

Nigeria has started to push back. In March 2024, the communications regulator gave Google, Microsoft and Amazon six months to build local data centres or face service restrictions. “We told them no more waivers—we need a road map,” says Kashifu Inuwa Abdullahi, the country’s top digital official. The ultimatum is less about cables and more about sovereignty: if Lagos cannot tax or audit the data, it cannot claim a share of the wealth that data generates.

Chinese firms have laid 70% of Africa’s 4G backbone; Amazon controls roughly half of Latin American cloud contracts. These cables and server halls look like development, but they lock countries into long-term leases. Seventy per cent of Nigerian government agencies still keep their records on overseas clouds, African Development Bank figures show. Moving them home will cost an estimated $1 billion—money that could build 10,000 kilometres of urban water pipes.

India’s 2023 Digital Personal Data Protection Act and Vietnam’s 24-month local-storage rule are attempts to claw back control, yet they tackle only geography, not intelligence: the chips, models and engineers that turn raw data into high-value services remain in the Global North. Scholars call the result “sovereignty simulacrum”—flags on a map, but power elsewhere.

Environmental Bill

Southern countries also pay the hidden ecological cost. They export cobalt, lithium and copper at low prices, import expensive phones, and later receive container-loads of e-waste. The circular flow mirrors the old plantation economy: ship out cheap, bring back dear.

Three experiments point a way out. 

1. Regional cloud: The African Union’s draft Data Policy Framework treats member-state data as a pooled strategic asset, big enough to bargain with Big Tech. 

2. Public fibre: Uruguay’s state-owned ANTEL has achieved 94% broadband coverage while keeping traffic—and profits—inside national borders. 

3. Code with capital: Chile is mapping data-centre heat and water use so that every new server farm must serve national development goals, not just foreign balance sheets.

Bottom Line

Data is not the new oil; it is the new soil. If developing countries cannot grow their own crops—algorithms, services, taxes—someone else will harvest the field. Blessing Adebayo’s tiny shop is a reminder that sovereignty begins with the simplest question: where is my information, and who is making money from it?

The writer, a technology-policy analyst and author, is Additional Personal Assistant to the Speaker of the Kerala Legislative Assembly. The views are personal.

How Technology Shapes How We Move, Speak, Think



Vanessa Chang 



From hands to feet, voice to vision, our digital tools extend, transform, and sometimes erase the human body.


The influential computer scientist Mark Weiser once wrote that “a good tool is an invisible tool. By invisible, I mean that the tool does not intrude on your consciousness; you focus on the task, not the tool.” By this definition, many of our digital tools seem to have succeeded completely; they liberate our bodies by becoming invisible to users. By closing the gap between our bodies and our virtual selves, touchless technologies, such as gesture control, voice recognition, and eye tracking, aspire to channel our pure, natural expressions.

Such an interface has long been the holy grail for designers. From the Wii motion console to Leap Motion to the gadgets we all now carry in our pockets, these devices aim to erase the boundary between our bodies and our information. These devices promise a future in which our tools are so intuitive, they vanish. Now, it seems that future has arrived.

Though invisible to our conscious minds, our tools indelibly shape us. Technologies are not simply objects but architectures that organise our bodies in space and time, and give form to what I call the digital body: how we feel, move, and become through and alongside digital technologies. And the digital body is not an abstraction—it is us, becoming, again and again, in the technologies we build and the worlds we inhabit.

Living in the era of smartphones and AI, it’s easy to think that we’re in uncharted waters without a map. Our tools have become so frictionless, so invisible, that we forget their historical origins. Long before algorithms and touchscreens, technologies like writing, musical instruments, and even roads reshaped human life. These transformative tools and systems heralded profound changes in how we interact with one another, how we engage with the world around us, and ultimately, how we live.

As increasingly personalised technologies permeate our lives, such urgent questions arise as: How did we get here? What kinds of bodies do our technologies assume, require, or erase? What’s at stake when flesh becomes interface? And how might we redesign our path?

Our interactions with technology are dramas of skin, bone, information, rhythm, and power. Technologies refine, track, translate, and choreograph our behaviours; in doing so, they introduce new ways and languages of being, feeling, moving, and knowing.

Hands

As organs that extend consciousness into our surroundings, hands might be understood as the original interface—or as cartoonist Lynda Barry calls them, “the original digital device”—between human and world.

Paleoanthropologists, neuroscientists, and philosophers have stressed the evolutionary symbiosis of hand and mind. The hand mediates the most complex interactions of the human brain and the realm of technology. At the same time, our gestures have been shaped by an ongoing dialogue with our tools and our environments. As our earliest principal technology for information storage and retrieval, writing embodies this interplay.

Hands are smart. Hands are curious. Hands learn. Hands know things.

Despite the crucial role hands have played in the development of new technologies—and our bodies with them—there have been numerous attempts to automate the human hand out of the equation.

Automata, proto-robots built to act as if working under their own power but actually following a predetermined sequence of operations, have existed for over a millennium. Many of them are dedicated to mimicking the unique human performances of the hand, although they haven’t reproduced its intelligence.

How do bodies become information? In 1804, a French weaver patented a different kind of automaton that mimics and would eventually replace the intelligent hand. Named for its inventor, Joseph-Marie Jacquard, the Jacquard machine is an oft-cited ancestor in the history of modern computing. Fitted to a handloom, it is a mechanical surrogate for the weaver’s hand, a physical addendum to the weaving apparatus that automates the production of elaborately patterned fabric. By transforming the competence and creativity of the weaver’s hand into programmable code—ultimately supplanting that human expertise—the Jacquard loom became the first numerical control machine.

A 1951 advertisement for IBM’s Type 604 Electronic Calculating Punch featured a glowing human hand overlaid with its mechanical surrogate: vacuum tube modules arranged like fingers. The tagline reads, “Fingers You Can Count On.” More than just a sales pitch, the image dramatized a broader shift: the intelligent hand, once a symbol of craftsmanship, reimagined as a modular, electronic appendage—human labour abstracted into interchangeable, replaceable parts.

History, however, reminds us of the hand’s abiding creativity. By designing interfaces that serve human needs, rather than corporate metrics, we can reclaim the hand’s role as a living bridge between mind, body, and world.

Voice

Until the dawn of sound recording, the human voice was tethered to the human body. Speech and song were ephemeral, dissipating in almost the same instant that they sprang into being. At the end of the 19th century, sound recording severed the voice from the body and gave it a new and separate existence, extending what the technology of writing had long begun to do. Human voices could now endure beyond death, transcending the limits of the human body.

Like the hand, the voice is a threshold between body and world. Once only borne aloft in the air, its vibrations now travel wires, waves, and code. If writing extended the hand’s reach, sound recording gave the voice a second existence. Translated by machines, abstracted into data, and refigured into new forms, the voice has lived a thousand new lives—pressed into vinyl, remixed by DJs, morphed by Auto-Tune, parsed by speech recognition, and now reanimated by AI-generated vocal clones. These technologies have not only transformed how the human voice sounds, but how it is made, perceived, and preserved.

Ear

If the voice is how we reach outward, the ear is how we are reached. Our ears, once tuned by acoustic communities, are now calibrated by machines. From choirs to cochlear implants, music boxes to algorithmic playlists, listening has become a mediated act—private, curated, and data-driven.

The music box marked a turning point in the modern objectification of sound. Music boxes began to divorce ears from other speaking and singing bodies, restructuring listening from a communal act into an insular exchange between individual and machine. In so doing, music boxes began to create the channels for a new kind of hearing that would lead to our digital ears.

Since then, numerous mass-produced sound technologies have nourished and evolved the intimacy between ears and listening machines. Phonographs, gramophones, transistor radios, and later, magnetic tape, made it possible for people to listen to music in the absence of a performer. Several sound recording and storage technologies emerged in the wake of the phonograph’s invention. Whereas the music box, as an automated instrument, generated sound on its own, later technologies recorded and reproduced human performances. Each has spawned new auditory cultures, and with them, consonant reimaginations of the ear. As they transformed the voice from its pure alignment with the human soul to a more machinic object, they transformed listening cultures—and the ear itself.

Eye

Contemporary cameras, as we know them, unfix the eye from the body. Though now ubiquitous—embedded in nearly every phone and capable of high-resolution, high-focus capture—this was not always the case. Photography’s chief ancestor, the camera obscura, relies on the proximity of eye and image. Essentially a pinhole device, the camera obscura projects light through a small aperture into a darkened room or box, casting a live, inverted replica of the world outside—a shadow play of reality.

By the 16th century, the camera obscura had become a metaphor for human vision. This analogy defines the relationship between the eye and the seen world by immediacy: just as the outside world is projected onto a darkened room, so too is reality believed to be projected onto the eye through rays of light—an image cast upon the body. Photography descends from the camera obscura, turning projection into permanence. Whereas the camera obscura was ephemeral, the photograph imprints projected reality onto a surface, making it durable, portable, and endlessly reproducible. In so doing, it initiated the detachment of seeing from the physical act of looking. Vision, once anchored in the immediacy of the body, became something that could be captured, stored, and transmitted.

Yet, even as both vision and photography evolved into increasingly complex systems, no longer limited to the eye or lens, the metaphor of the camera as the eye endures. The persistence of this metaphor illustrates a deeper paradox at the heart of digital embodiment: we trust what we see, even though we are aware that sight can be deceiving. When machines inherit the work of the senses, we transfer that trust to them—forgetting, once again, that the eye has always been fallible. And as developments in imaging technologies have evolved, so too has the digital eye. Today’s digital eyes—those of smartphone cameras, Photoshop algorithms, and computer vision systems—do not see as the eye sees, nor do they operate by the same principles of immediacy that the camera obscura once did. They reconstruct, enhance, filter, and infer. As we increasingly outsource seeing to machines, the very nature of sight itself is transformed. Yet, cameras and the images they produce remain important referents for our reality, even as that reality becomes ever more fluid, manipulated, and abstracted. The digital gaze does not simply record the world; it remakes the very relationship between our bodies and the realities they claim to represent, between what is seen and what is believed.

Foot

The human foot is a marvel of evolutionary engineering, distinguishing us from other animals. The first hominins, the earliest members of our lineage, didn’t have large brains like modern humans, didn’t use sophisticated technology, and didn’t talk. They did, however, walk on two legs. Our feet are the very foundation of modern humanity as we know it. Bipedalism is the most ancient human adaptation, setting the stage for many characteristics that distinguish us as humans, including our reliance on tools and technology, language, and dietary flexibility. It freed human hands for tools and communication, and breath for speech. Walking—upright, that is—is as central to our humanity as writing and singing to one another. Our feet embody this extraordinary legacy and history. As Leonardo da Vinci is said to have remarked, “The human foot is a masterpiece of engineering and a work of art.”

As vehicles for our bodies, our feet serve as a primary interface between ourselves and the world. With the advent of self-tracking technologies that turn our footsteps into information, they, too, have become fodder for systems that flatten the nuance of lived experience. The notion that one must walk ten thousand steps daily for health has become almost as much of a maxim as that ancient adage, “A journey of a thousand miles begins with a single step.” It might be truer to say instead that a journey of ten thousand steps begins with a single pedometer.

While walking may be the most natural thing in the world (for those who are ambulatory), it is increasingly being integrated into technological systems. Impregnated with information-gathering sensors, smart cities are the inexorable conclusion of this logic. Cities are becoming algorithmic labs for human movement.

Body

Technology desires disappearance. When a tool is working as intended, you don’t think about it—until it breaks. This kind of disappearance doesn’t just require a good tool; it demands skill and practice of the human using it. Like a surgeon with a scalpel or a carpenter with a chisel using their intelligent hands, disappearance is a collaboration between well-made tools and disciplined bodies. Digital technologies push this further still: the ideal tool is one that will completely dissolve, making the human body itself the interface.

We’re already living in mixed reality. Our bodies are entangled in a dance with data: computers track our keystrokes, footsteps, and heartbeats; they reproduce and organize our movements; intelligent systems choreograph our journeys, large and small; we socialize through electronic sound and through avatars in virtual spaces. Extended reality technologies don’t simply show us other worlds; they clarify the one we’re already in and reveal how deeply our lives are intertwined with computation.

We burnish our digital images (I’ll admit that mine is lightly airbrushed by the Touch Up My Appearance option in my Zoom preferences). We feed ourselves to the technologies we use, seeking to transcend the limits of our bodies and minds. We are spit out as ghosts of the platforms that puppet us. The term “ghost in the machine” has been used as a crude and derogatory jab at Descartes’s mind-body dualism—the idea that our minds animate our bodies like spirits inhabiting a shell. One version of the body digital inverts this: the mind floats free, divorced from our bodies and assimilated by platforms. But we are not disembodied minds. We are deeply rooted in flesh, blood, and bone. Any future worth building must remember that.

Mind

In their landmark 1998 paper “The Extended Mind,” philosophers of mind Andy Clark and David Chalmers asked, “Where does the mind stop and the rest of the world begin?” Their answer has become one of the most influential articulations of the extended mind thesis, which rejects the conventional view that the mind resides solely within the brain, stopping at the skull and skin. Rather, they proposed that cognition arises from the dynamic interplay of brain, body, and tool. A pencil, a notebook, or a computer screen can become so integrated into our mental processes that they contribute to our cognitive abilities as much as our brains do. The mind, in this view, is porous: it reaches into the world, and the world reaches back. Cognition, then, is not contained but distributed—emerging from an ecology of brain, body, and environment.

From grocery lists to encyclopedias, writing extends the human mind by offloading the burdens of memory, storing and retrieving information outside the body. Writing is a technology that allows us to outsource individual and collective memory. By sustaining the creation of informational archives that can be referenced, literacy made possible new forms of interaction with language. New techniques of information storage afforded the structured accumulation of knowledge. Once formulated, information can be reformulated with increasing precision. In this way, literacy laid the groundwork for the disciplines of logic, philosophy, and science in general—the knowledge infrastructures that would, centuries later, give rise to AI.

Writing has never been a solo act. Facilitated by AI, our writing should connect us with our past as much as with our future, with one another as much as ourselves. The best human writing challenges us to open our minds, not close them. We owe it to ourselves to tell stories with this new technology that does the same. If we must write with machines, let it not be to replicate, but to reimagine ourselves.

Rather than reflexively embracing or rejecting new technologies, we must ask: Do they expand or contract our horizons? Do they sustain care, curiosity, and complexity—or reduce us to what can be measured and predicted? How do they shape how we see, move, feel, speak, and connect? The history of our digital bodies shows that the ecologies we create are never neutral. They reflect how we choose to know one another, and how we allow ourselves to be known.

Vanessa Chang is the director of programs at Leonardo, the International Society for the Arts, Sciences, and Technology.

This adapted excerpt is from Vanessa Chang’s The Body Digital: A History of Humans and Machines from Cuckoo Clocks to ChatGPT (2025, Melville House). It is licensed under the Creative Commons Attribution-Non Commercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0) with permission from Melville House.

Courtesy: Independent Media Institute.

MEDIA: THE AI STORM DROWNING PAKISTAN

Millions of Pakistanis affected by floods are turning to social media for help. Instead, they’re finding AI-generated lies.
 Published October 19, 2025
DAWN / EOS

The climate crisis in Pakistan has worsened over the years to the point that it has become the new norm, with floods impacting the cities, livelihoods and homes of millions of people in the country. In 2025 alone, Pakistan experienced devastating floods, with an estimated 4.2 million people affected in Punjab, according to the United Nations Office for the Coordination of Humanitarian Affairs (UNOCHA).

As the floodwaters rose, however, a parallel crisis emerged online. Many Pakistanis turned to digital technologies for weather updates, relief efforts and communication with affected communities.

In response to this reliance on digital platforms, there has also been a rapid surge in climate change disinformation, intended to sensationalise an already tragic situation and garner more views by spreading confusion.

More alarmingly, the surge of generative artificial intelligence (AI) on platforms has largely gone unchecked. This has allowed manufactured information to be widely believed, promoting dangerous narratives online.


THE RISE OF MISINFORMATION

A recent study by the Lahore-based Digital Rights Foundation (DRF), of which I was a co-author, Combatting Flood Misinformation in Pakistan: Generative AI and Platform Accountability in the Age of Climate Crisis, investigated some of these trends and the kinds of misinformation being fuelled on platforms, leading to further polarisation and the promotion of dangerous narratives.

The report points out how AI is being weaponised in the climate crisis, with viral TikTok visuals portraying India’s release of dam water into rivers that flow downstream into Pakistani Punjab as a deliberate act of hostility. AI-generated footage across various social media platforms dramatises this narrative, depicting Indian workers maliciously releasing water as a calculated tactic of war.

This framing of a natural catastrophe within a singular geopolitical narrative fuels existing India-Pakistan tensions. More critically, it serves to divert attention from domestic shortcomings in governance, flood management and relief efforts.

Such narratives are often pushed by bad-faith actors, seeking to profit from engagement or to advance divisive political agendas, exploiting the climate crisis for their own ends.

Today, when 40 per cent of Pakistan’s population remains illiterate and generative AI is becoming more advanced and sophisticated, it is increasingly difficult to distinguish disinformation from authentic news online, and many are falling into the trap of climate disinformation. The combination of low digital literacy, the inherent realism of AI-generated content and a high-stakes crisis creates a perfect storm for misinformation to thrive.

WOMEN AS CLICKBAIT

Beyond geopolitical narratives, AI-generated content has also enabled a more insidious form of exploitation: technology-facilitated gender-based violence, including the rise of AI-generated “woman-in-crisis” content. The trend has been seen across multiple platforms, where AI-generated depictions of women are used both to elicit sympathy and to exploit a natural disaster as a vehicle for sexualising women’s bodies.

TikTok and Instagram are rife with such videos, showing rural women navigating flooded landscapes, often carrying babies. These clips attract thousands of views, yet only a small fraction carry TikTok’s “AI content” label.

Following this theme, users have been seen posting hyper-sexualised depictions of women in flood settings, shifting the focus from the floods to women’s bodies. This sets a dangerous precedent in a context where women already face heightened risks of gender-based violence during disasters in relief camps.

Pakistani TikTok and Instagram accounts have also been producing “AI village fetish” content, including videos portraying unconscious women being touched inappropriately under the guise of “rescue”. Bad actors have exploited the floods as yet another opportunity for engagement.

The DRF report uncovered several AI-generated videos depicting women drenched in floodwaters with men groping them under the pretext of saving them.

This sexualisation also appears in village-vlog-style videos, where an AI-generated woman ostensibly documents the destruction of her flooded village. Yet the focus of these clips remains on her physical appearance and body rather than the devastation itself. While less explicit, these videos still perpetuate objectification to maximise engagement.

Alarmingly, comments on such content show that many older Pakistani users interpret these videos as genuine depictions of women affected by floods, which shows the urgent need for clear and prominent AI labelling.

Without such safeguards, misinformation not only distorts reality but also normalises the exploitation of women’s bodies during humanitarian crises.

EXPLOITING TRAGEDIES

AI has also been used to exploit real tragedies, such as an Instagram reel recreating the disaster that unfolded in Swat in June this year. At least 17 people were stranded for hours in the middle of the river after the water suddenly surged, before being swept away and drowning.

The entirely AI-generated clip fabricates visuals and overlays screaming voices of the victims. This type of synthetic content is deeply insensitive and dangerous. It has the ability to distort the memory of a real disaster, trivialise the suffering of survivors and spread misinformation among audiences already grappling with fear and grief.

This fictionalisation of human tragedy for clicks and engagement not only undermines trust but also risks retraumatising affected communities.

PLATFORM ACCOUNTABILITY

At this time, while social media platforms are flooded with climate-related misinformation, TikTok remains the only platform to provide cautionary warnings and links to flood-related information for Pakistani users.

Although its flood safety guide marked a positive step, the initiative has been limited in scope. TikTok, like many other social media platforms, has failed to adequately address the scale of misinformation, particularly from generative AI content. These platforms do not cater to regional languages or reflect Pakistan’s diverse media landscape. As a result, outreach has always been restricted to a relatively small segment of users who can read and comprehend English and Urdu.

In moments of crisis, social media platforms carry a responsibility to uphold global standards. The UN Guiding Principles on Business and Human Rights (UNGPs) provide a clear benchmark for companies to respect users’ rights and mitigate harm, which includes curbing disaster-related misinformation as part of their duty to protect human rights in digital spaces.

Misinformation during emergencies is not a trivial issue. It can compromise humanitarian aid and disaster response by changing perceptions, damaging credibility, and disrupting coordination.

As climate catastrophes become a recurring norm, striking Pakistan almost every year, platforms have a responsibility to support users during crises, rather than compounding the challenges of an already unfolding disaster.

The writer is Research and Grants Lead at the Digital Rights Foundation, Lahore. She can be contacted via Info@digitalrightsfoundation.pk