
Thursday, October 30, 2025

Trump: Critiques From The Right – OpEd


October 30, 2025 
By Allen Gindler


The Trump Presidency is not boring, that is for sure. His public appearances, pressers, interviews and, most importantly, the actions of his administration have given ample cause for ongoing and heated debates.

Such exchanges of opinions happen not only on podcasts, TV programs, or newspaper columns but also among family members and close friends. As a rule, two camps take part in the discourse: Trump supporters and his opponents. My personal experience, however, has added a more nuanced position, which I call a critique from the right, that is, from the libertarian, and more precisely classical liberal, point of view.

My friends utterly reject the leftist policies of progressivism, wokeism, DEI (“Don’t Earn It”), cancel culture, mandatory redistribution of wealth, open borders and illegal immigration, and any form of collectivization. They are proponents of freedom and against all forms of terrorism or aggression (whether Hamas or Russia). And yet we find ourselves at odds in the understanding and explanation of Trump’s policies. I have chosen a politically neutral, independent stance, as libertarianism has no viable political organization within the US duopoly. But libertarianism rests on a rich philosophical tradition that forms a fairly coherent worldview. My friends jumped onto the MAGA bandwagon, and their worldview shrank to the slogan “Trump is always right.”

So, what is my critique from the right of Trump’s policies? His program started with a slew of presidential orders, and some of them caused genuine amazement and made me wary. One of the first was the “renaming” of the Gulf of Mexico. I put renaming in quotes because the body of water designated in the presidential order does not encompass the entire gulf; it covers only the U.S. continental shelf portion, which does not match the definition of the gulf. Thus, the actual renaming never happened. Trump’s assistants fooled the President, and he in turn continues to mislead the public, firmly believing that his vision has been fulfilled.

The renaming business continues. Now we have the Department of War and the Secretary of War. Do we? Not really. The order stated, “The Secretary of Defense is authorized the use of this additional secondary title — the Secretary of War — and may be recognized by that title in official correspondence, public communications, ceremonial contexts, and non-statutory documents within the executive branch.” And further, “Within 60 days of the date of this order, the Secretary of War shall submit to the President, through the Assistant to the President for National Security Affairs, a recommendation on the actions required to permanently change the name of the Department of Defense to the Department of War.” (emphasis mine). Statutory references to the Department of Defense remain controlling until changed by law. Thus, for all branches of government and for governments abroad, the main and official names are still the Department of Defense and the Secretary of Defense. Some can use the secondary titles (aliases) — the Department and the Secretary of War — to appear tougher, I guess.

Second, the elimination of DEI (Diversity, Equity, Inclusion). These are good words in themselves; what matters is how the policy is implemented. I dreamed that the Trump administration would be smart enough to turn the tables on the Democrats by using DEI to promote its own programs and actions, thus carrying the policy to reductio ad absurdum and letting it sink naturally into oblivion. Instead, the administration simply banned it, which formally casts them as negators of so-called Diversity, Equity, and Inclusion, handing Democrats trump cards for future attacks. And our big businesses showed their spinelessness and lack of principle: they readily implemented DEI under administrative pressure from Democrats, and with the same ease changed course, following new directives from the White House.

Third is the most serious matter—the economy. My friends embraced his tariff-driven trade war, citing the usual fallacies: that tariffs will bring production back to the US, that inflation will not rise because tariffs will be paid by foreign countries, and so on. But one cannot simultaneously oppose forced wealth redistribution and embrace additional taxes on imported goods. No one can say how or where the government will spend the additional revenue, but ordinary consumers certainly do not benefit from higher prices or a shrunken assortment of goods. The trade war with China during Trump’s first term failed to reduce the trade deficit or bring manufacturing back onshore. The overall merchandise trade deficit grew to $911 billion in 2020, and careful estimates show that even eliminating the deficit would only modestly raise manufacturing’s share of jobs. Independent modeling found the tariff regime reduced U.S. GDP and raised consumer costs. What empirical data would make anyone believe that the outcome will be much different this time around? What will happen is that we will bail out farmers again, as we did in Trump’s first term, when the USDA paid about $23 billion under the Market Facilitation Program in 2018–2019.

The Trump administration’s major sin is that it tries to steer the economy by interfering in the affairs of private enterprises. The Intel deal, in which the government took a 10% equity stake funded via remaining CHIPS grants and Secure Enclave awards, is utterly outrageous. The previous administration decided to give the company a grant, and the current one decided to take partial ownership of it. Both policies are wrong, and the latter is even worse than the former. It opens a door for democratic socialists to do the same on an even bigger scale: the nationalization of enterprises or entire industries on the grounds that they are vital to national defense, the food supply, or any other pretext. Again, one cannot be against collectivization and agree with Trump’s policies on this issue.

The latest economic decree adjusts imports of timber, lumber, and their derivatives. The administration found that such products are used in military applications and became convinced that current import volumes weaken our economy. So, for the sake of national security, it found a solution: tariff and regulate, with possible additional duties pending an October 1, 2026 review. One does not need a degree in economics to predict an immediate increase in construction prices.

Both President Trump and Vice-President J.D. Vance are on record saying that the government can manipulate the prices of products produced by private firms. In particular, this concerns drug prices. President Trump has said on many occasions that drug prices in the US are 1,000% or so higher than abroad. He is determined to reduce drug prices by 1,000 percent or more, causing incredible delight among the MAGA crowd. But this makes me sad. First of all, he has not outlined the exact mechanism of price reduction. If he means eliminating the government’s contribution to price formation, I am for it. But most likely, it would be yet another presidential decree pompously signed in the Oval Office. Second, for God’s sake, is there anyone in the administration capable of explaining to Mr. Trump that his percentage math is wrong? A price cannot fall by more than 100 percent; if a drug costs ten times more in the US than abroad, the accurate claim is a roughly 90 percent reduction. It is OK for a public figure to make a slip of the tongue, but repeating the same nonsense over and over again is simply embarrassing.

Another disturbing habit of the administration is announcing on social media all kinds of deals with foreign countries and companies, which the White House touts as totaling “$17 trillion.” But a social media post is not a binding treaty, even if the President posted it. Where are the real binding agreements, and where is the money? It took more than two years for the USMCA to be negotiated and ratified. I refuse to believe that all the announced deals are real until I see the actual text on the Commerce Department website. Such announcements send the wrong signal to the market, and when the truth comes out, many will be disappointed.

Open borders and illegal immigration. This issue resonates hugely with American voters not because of bigotry but because of common sense. We have the right to regulate the influx of population, as any other country does; we have the right to know who is coming and to decide whom we want to welcome into the country. The Trump administration tightened the borders, which is a positive result. But afterward we observed mistake after mistake. It is one thing to promise mass deportation before an election, quite another to demand the execution of such a policy and hand down a deportation plan from above. Carrying out such a plan leads to many unforgivable mistakes. The administration should not target people who are in the hands of the judicial branch, that is, whose cases are in the system and await judicial resolution. As of August 2025, about 3.43 million cases are pending in Immigration Court. And yet ICE targeted such individuals, and instead of admitting the error and moving on, it made things worse, including misinterpreting a 9-0 Supreme Court order and losing a string of cases in the lower courts.

Targeting criminals here illegally, gang members first, is a logical and good idea. Targeting ethnic crime would make communities of both legal and illegal immigrants safer and would be politically preferable for the administration and the Republican Party. However, the administration lacks patience. Local ICE offices tried to fulfill and exceed the plan, earning extra bonuses in the process. They target people who should probably be addressed last. Such a policy has very bad optics and might have a generational effect—people will simply not vote Republican, on an emotional level. I suspect Democrats set a trap that Republicans foolishly stepped into.

When I raise my critique from the right, my friends deploy what they consider a powerful argument: “Do you think Kamala would be better?” Well, strictly speaking, we cannot answer that definitively, as it is a hypothetical question. We did not give her the opportunity to become president. We inferred from her speeches and actions that she would continue Biden’s course, and we did not want that. (Though some historical figures have changed course dramatically, for example Lech Wałęsa or Mikhail Gorbachev.) My friends, of course, conflate the issue. I am analyzing Trump’s actions and how they square with the classical liberal worldview and, really, with common sense; I am not comparing election promises. But one note is worth making: had Kamala won, we would know exactly what to expect from a left-wing politician, and we would fight the political battle without compromising our ideals of economic and individual freedom.

Trump’s policies have blurred the line between Democrats and Republicans on many important issues. He has opened a Pandora’s box that the left will exploit on an even bigger scale. Just watch. He has normalized statism and aggressive intervention in the economy under the banner of national emergency or national security, which had been taboo, at least rhetorically, for Republicans. The political duopoly is blending and shifting to the left as never before in the recent history of the USA. Which policy matters will be up for debate in the near future? Are we really destined to choose not by principle, but by the dosage of proposed collectivization?

Allen Gindler

Allen Gindler is an independent scholar specializing in the Austrian School of Economics and Political Economy. He has taught Economic Cybernetics, Standard Data Systems, and Computer-Aided Work Design in Ukraine. His academic articles have been published in the Journal of Libertarian Studies and The Independent Review. He has also contributed opinion pieces to Mises Wire, Independent Institute, American Thinker, the Foundation for Economic Education, Eurasia Review, and Biblical Archaeology Review.

Saturday, October 25, 2025

Cybernation And The Transformation Of The Nation-State: A Postmodern Inquiry – Essay




October 24, 2025
By Dr. Azly Rahman

Introduction: The Cybernating Nation in a Globalized World

In the contemporary landscape of globalization and post-industrialism, the concept of a “cybernating nation” emerges as a critical lens for understanding how developing societies integrate advanced information and communication technologies (ICTs), particularly the Internet and telematics, into their socio-political and economic fabrics. Cybernation refers not merely to technological adoption but to a profound cybernetic reconfiguration of societal structures, where feedback loops between human agency, institutional power, and digital networks redefine national trajectories.

This essay expands upon a series of interconnected theses to explore the multifaceted implications of cybernation. Drawing from center-periphery dynamics, complexity theory, structuralism, and resistance paradigms, it argues that cybernation accelerates both integration into global systems and internal contestations of power, ultimately eroding traditional notions of sovereignty while fostering new forms of enculturalized discourse. These transformations, best illuminated through postmodern lenses, reveal the tensions between hegemony and subaltern agency in an increasingly wired world.

“The Enduring Grip of Center-Periphery Dynamics in Cybernation”

At the heart of cybernation lies the persistent center-periphery pattern of development, a framework originating from dependency theory that posits global economic and cultural flows as radiating from core (developed) nations to peripheral (developing) ones. In a globalized post-industrialist world, the development of a cybernating nation will continue to follow, to one degree or another, this center-periphery pattern.

Peripheral nations, eager to harness ICTs for economic leapfrogging, often replicate the infrastructural and ideological blueprints of the center—adopting Western-modeled digital platforms, data protocols, and innovation hubs—while reaping asymmetric benefits. For instance, investments in fiber-optic networks or the 5G rollout in nations like India or Kenya mirror Silicon Valley’s ecosystems but serve primarily to funnel data and labor to global corporations, perpetuating unequal exchange.

This pattern extends to the macro-level contestations of power, where hegemony between cybernating and fully cybernated nations defines global hierarchies. Fully cybernated centers, such as the United States or China, exert a gravitational pull through proprietary algorithms and standards, compelling peripherals to align or risk obsolescence. At the micro-level, however, power fractures along domestic lines, with contending political parties or groups vying for control over cybernetic resources—be it spectrum allocation or digital surveillance tools. Thus, cybernation does not dismantle center-periphery asymmetries but amplifies them, channeling peripheral creativity toward emulative models of success.

Complementing this, globalization theory underscores how creative consciousness in cybernating nations becomes centralized in business and the arts, patterned after triumphant global corporations. Entrepreneurial ecosystems in peripheral hubs, from Bangalore’s tech parks to Nairobi’s Silicon Savannah, cultivate a cosmopolitan ethos that prizes innovation and branding, often at the expense of indigenous epistemologies. This centralization fosters a hybrid cultural economy where local artisans collaborate with multinational firms, yet the fruits of such creativity—intellectual property and market access—flow disproportionately outward, reinforcing peripheral dependence.

“Complexity, Nationalism, and the Semantic Reconfiguration of Cybernetics”

Traditional historical materialism, with its linear dialectics of class struggle and productive forces, falters in explicating cybernation’s nonlinear trajectories. A purely historical materialist conception of change cannot fully explain why nations cybernate; the more a nation gets “wired,” the more complex the interplay between nationalism and internationalism becomes. Cybernation introduces emergent properties—unpredictable feedback loops where digital connectivity amplifies both centrifugal (globalizing) and centripetal (nationalist) forces. In complex systems, small inputs, such as viral social media campaigns, can cascade into regime-shifting upheavals, as seen in the Arab Spring, where Twitter’s algorithms intertwined local grievances with transnational solidarity.

This complexity manifests semantically and structurally, where the enculturalization of “cybernetics” itself becomes a battleground. The more a nation transforms itself cybernetically, the more extensive the enculturalization and transformation of the term “cybernetics” will be. Borrowed from Norbert Wiener’s foundational work on control and communication, “cybernetics” evolves from a technical term into a culturally laden signifier—recast in peripheral contexts as “digital sovereignty” in Russia or “jugaad tech” in India, blending foreign precision with local improvisation.

Structuralist semiotics reveals how these shifts in signifiers alter signified realities, embedding cybernetic logic into everyday discourses of governance, education, and identity. The political economy of this linguistic transformation is pivotal: the extent of the enculturalization of the concept of “cybernetics” will determine the speed at which a nation will be fully integrated into the global production-house of the telematics industry. Nations that swiftly domesticate cybernetic jargon—through policy glossaries, educational curricula, or media narratives—accelerate value-chain insertion, attracting foreign direct investment in data centers and AI hubs. Conversely, linguistic resistance, such as vernacular tech lexicons in non-English-dominant peripherals, can delay integration, preserving pockets of autonomous innovation but risking isolation from global standards.

“Authoritarianism, Resistance, and the Erosion of State Power”

Cybernation intersects with authoritarianism in profound ways, where regime strength dictates the scope and velocity of digital transformation. The stronger the authority of the regime, the greater the control and magnitude of the cybernating process. In a cybernating nation, authority can reside in the political will of a single individual or in a strong political entity, consequently producing the author’s “regime of truth,” to borrow Foucault’s phrase. Charismatic leaders in nations like Turkey under Erdoğan or the Philippines under Duterte have weaponized cybernetic tools—state-controlled firewalls and algorithmic propaganda—to consolidate power, crafting digital panopticons that monitor and mold public consent. This “regime of truth” naturalizes cybernation as an extension of sovereign will, masking its extractive undercurrents.

Yet, this centralization begets resistance, particularly as the Internet undermines state monopolies on narrative production. The advent of the Internet in a developing nation signifies the genesis of the erosion of the power of government-controlled print media. Universal access to the Internet will determine the total erosion of government-produced print media.

Subaltern voices will replace Grand Narratives. In cybernating peripherals, where state broadcasters once disseminated monolithic ideologies, platforms like WhatsApp and Telegram democratize discourse, amplifying marginalized groups—from indigenous activists in Bolivia to urban youth in Nigeria. This withering of the nation-state’s communicative hegemony fosters polyphonic publics, where Grand Narratives of progress yield to fragmented, user-generated counter-stories.

Resistance centralizes critical consciousness in arenas of political mobilization and personal expression, modeled after successful Internet-based groups. Emulating tactics from global movements like #MeToo or Black Lives Matter, cybernating citizens repurpose social media for hashtag activism, doxxing corrupt officials, or coordinating flash protests. The more the government suppresses voices of political dissent, the more the Internet is used to effect political transformations. Suppression—via shutdowns or troll farms—paradoxically catalyzes circumvention, with VPNs and dark web forums becoming tools of subversion, turning digital repression into a feedback loop of escalating defiance.

“Imperialism, Deep-Structuring, and the Threat to Sovereignty”

Modern imperialism permeates cybernation, where external ideologies steer internal mutations. The fundamental character of a nation will be significantly altered with the institutionalization of the Internet as a tool of cybernating change. The source of change will, however, be ideologically governed by external influences, which will ultimately threaten the sovereignty of the nation-state. Platforms engineered in the Global North—Google, Meta, Tencent—impose neoliberal logics of surveillance capitalism, reshaping peripheral subjectivities from communal to consumerist. This neo-colonialism manifests in data sovereignty disputes, where peripheral governments enact laws like India’s Data Protection Bill, only to negotiate concessions with imperial tech giants.

At deeper levels, discourse embeds these shifts in language, eroding indigenous cores. The discourse of change, as evident in the phenomena of cybernation, is embedded in language. The more a foreign concept is introduced, adopted, assimilated, and enculturalized, the more the nation will lose its indigenous character built via schooling and other means of citizenship enculturalization processes. School curricula infused with STEM jargon supplant traditional cosmologies, while algorithmic biases in education apps perpetuate Anglocentric worldviews. This deep-structuring—akin to Gramscian hegemony—subtly supplants national mythologies with globalized cybernetic myths, hollowing out cultural sovereignty.

Conclusion: “Embracing Postmodern Paradigms for Cybernetic Inquiry”

Ultimately, comprehending cybernation demands paradigms attuned to flux and multiplicity. Postmodernist perspectives of social change—discourse theory, semiotics, and chaos/complexity theory—rather than those of structural-functionalists, Marxists, or neo-Marxists, can best explain the structure and consequences of cybernetic changes. Where structural-functionalism views society as equilibrated systems and Marxism as deterministic base-superstructure dialectics, postmodernism captures the rhizomatic, non-linear sprawl of cybernetic networks: discourses that fractalize power, signs that mutate meanings, and chaotic attractors that birth emergent resistances. In cybernating nations, these lenses reveal not inevitable decline but creative potentials—hybrids of center and periphery, authority and dissent—that could redefine global orders. As peripherals wire deeper into the digital mesh, the challenge lies in harnessing cybernation for endogenous futures, lest it consummate the very imperialisms it ostensibly disrupts.

Dr. Azly Rahman grew up in Johor Bahru, Malaysia, and holds a doctorate in International Education Development from Columbia University (New York City) and master’s degrees in six fields of study: Education, International Affairs, Peace Studies, Communication, Creative Non-Fiction, and Fiction Writing. He has written 10 books and more than 500 analyses/essays on Malaysia. His 35 years of teaching experience in Malaysia and the United States span a wide range of subjects, from elementary to graduate education. He is a frequent contributor to scholarly online forums in Malaysia, the USA, Greece, and Montenegro. He also writes in Across Genres: https://azlyrahman.substack.com/about

Tuesday, September 23, 2025

FAU engineers develop smarter AI to redefine control in complex systems



Florida Atlantic University
Image caption: A new AI framework improves management of complex systems with unequal decision-makers, like smart grids, traffic networks, and autonomous vehicles. (Credit: Florida Atlantic University)





A new artificial intelligence breakthrough developed by researchers in the College of Engineering and Computer Science at Florida Atlantic University offers a smarter, more efficient way to manage complex systems that rely on multiple decision-makers operating at different levels of authority.

This novel framework, recently published in IEEE Transactions on Systems, Man and Cybernetics: Systems, could significantly impact the future of smart energy grids, traffic networks and autonomous vehicle systems – technologies that are becoming increasingly central to daily life.

In many real-world systems, decisions don’t happen simultaneously or equally. A utility company might decide when to cut power during peak hours, and households must adjust their energy use in response. In traffic systems, central controllers dictate signals while vehicles adapt accordingly.

“These types of systems operate under a power hierarchy, where one player makes the first move and others must follow, and they’re more complicated than typical AI models assume,” said Zhen Ni, Ph.D., senior author, IEEE senior member and an associate professor in the Department of Electrical Engineering and Computer Science. “Traditional AI methods often treat every decision-maker as equal, operating at the same time with the same level of influence. While this makes for clean simulations, it doesn’t reflect how decisions are actually made in real-world scenarios – especially in environments full of uncertainty, limited bandwidth and uneven access to information.”

To address this, Ni and Xiangnan Zhong, Ph.D., first author, IEEE member and an associate professor in the Department of Electrical Engineering and Computer Science, designed a new AI framework based on reinforcement learning, a technique that allows intelligent agents to learn from interacting with their environment over time.

Their approach adds two key innovations. First, it structures the decision-making process using a game theory model called the Stackelberg-Nash game, where a “leader” agent acts first and “follower” agents respond in an optimal way. This hierarchy better mirrors systems like energy management, connected transportation and autonomous driving. Second, the researchers introduced an event-triggered mechanism that reduces the computational burden.

“Instead of constantly updating decisions at every time step, which is typical of many AI systems, our method updates decisions only when necessary, saving energy and processing power while maintaining performance and stability,” said Zhong.

The result is a system that not only handles the power asymmetry between decision-makers but also deals with mismatched uncertainties – cases where different players operate with varying levels of information and predictability. This is especially critical in environments like smart grids or traffic control systems, where conditions change rapidly and resources are often limited. The framework allows for a more robust, adaptive and scalable form of AI control that can make better use of limited bandwidth and computing resources.
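To make those two ideas concrete, here is a minimal toy sketch in Python. The scalar dynamics, cost weights, action grid, and triggering threshold are all invented for illustration; this is not the FAU authors' model or code, which learns leader and follower policies with reinforcement learning and comes with formal stability guarantees. The sketch shows only the Stackelberg hierarchy (a leader choosing its action while anticipating the follower's best response) and the event-triggered rule that recomputes actions only when the state has drifted past a threshold.

import numpy as np

A, B_L, B_F = 0.9, 0.5, 0.3           # toy scalar dynamics: x_next = A*x + B_L*u_L + B_F*u_F
Q, R_L, R_F = 1.0, 0.1, 0.2           # assumed cost weights on state and actions
LEADER_GRID = np.linspace(-2, 2, 41)  # leader searches a coarse action grid
TRIGGER = 0.15                        # event-trigger threshold on state drift

def follower_best_response(x, u_L):
    # Follower minimizes its one-step cost Q*x_next^2 + R_F*u_F^2 given the leader's action.
    return -Q * B_F * (A * x + B_L * u_L) / (Q * B_F**2 + R_F)

def leader_action(x):
    # Leader picks its action anticipating the follower's best response (Stackelberg structure).
    best_u, best_cost = 0.0, np.inf
    for u_L in LEADER_GRID:
        u_F = follower_best_response(x, u_L)
        x_next = A * x + B_L * u_L + B_F * u_F
        cost = Q * x_next**2 + R_L * u_L**2
        if cost < best_cost:
            best_u, best_cost = u_L, cost
    return best_u

x, x_at_update = 2.0, 2.0
u_L = leader_action(x)
u_F = follower_best_response(x, u_L)
updates = 0
for t in range(50):
    # Event-triggered update: recompute both actions only when the state has
    # drifted enough since the last update, saving computation between events.
    if abs(x - x_at_update) > TRIGGER:
        u_L = leader_action(x)
        u_F = follower_best_response(x, u_L)
        x_at_update = x
        updates += 1
    x = A * x + B_L * u_L + B_F * u_F
print(f"final state {x:.3f} after {updates} event-triggered updates over 50 steps")

Holding the previous actions between triggering events is what saves processing power and bandwidth; the practical trade-off, as the article notes, is choosing the trigger so that performance and stability are preserved.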

“This work fills a crucial gap in the current AI landscape. By developing a method that reflects real-world decision hierarchies and adapts to imperfect information, Professors Zhong and Ni are helping us move closer to practical, intelligent systems that can handle the complexity of our modern infrastructure,” said Stella Batalama, Ph.D., dean of the College of Engineering and Computer Science. “The implications of this research are far-reaching. Whether it’s optimizing power consumption across cities or making autonomous systems more reliable, this kind of innovation is foundational to the future of intelligent technology. It represents a step forward not just for AI research, but for the everyday systems we depend on.”

Backed by rigorous theoretical analysis and validated through simulation studies, Zhong and Ni demonstrated that their event-triggered reinforcement learning method maintains system stability, ensures optimal strategy outcomes and effectively reduces unnecessary computation. The approach combines deep control theory with practical machine learning, offering a compelling path forward for intelligent control in asymmetric, uncertain environments. Two related journal articles have recently been published in IEEE Transactions on Artificial Intelligence as well. The research is supported mainly by the National Science Foundation and the United States Department of Transportation.

The research team is now working on expanding their model for larger-scale testing in real-world scenarios. Their long-term vision is to integrate this AI framework into operational systems that power cities, manage traffic and coordinate fleets of autonomous machines – bringing the promise of smarter infrastructure one step closer to reality.

- FAU -

About FAU’s College of Engineering and Computer Science:

The FAU College of Engineering and Computer Science is internationally recognized for innovative research and education in the areas of computer science and artificial intelligence (AI), computer engineering, electrical engineering, biomedical engineering, civil, environmental, and geomatics engineering, mechanical engineering, and ocean engineering. Research conducted by the faculty and their teams exposes students to technology innovations that push the current state-of-the-art of the disciplines. The College's research efforts are supported by the National Science Foundation (NSF), the National Institutes of Health (NIH), the Department of Defense (DOD), the Department of Transportation (DOT), the Department of Education (DOE), the State of Florida, and industry. The FAU College of Engineering and Computer Science offers degrees with a modern twist that bear specializations in areas of national priority such as AI, cybersecurity, internet-of-things, transportation and supply chain management, and data science. New degree programs include Master of Science in AI (first in Florida), Master of Science and Bachelor in Data Science and Analytics, and the new Professional Master of Science and Ph.D. in computer science for working professionals. For more information about the College, please visit eng.fau.edu

 

About Florida Atlantic University:

Florida Atlantic University serves more than 32,000 undergraduate and graduate students across six campuses located along the Southeast Florida coast. It is one of only 21 institutions in the country designated by the Carnegie Classification of Institutions of Higher Education as an “R1: Very High Research Spending and Doctorate Production” university and an “Opportunity College and University” for providing greater access to higher education as well as higher earnings for students after graduation. In 2025, Florida Atlantic was nationally recognized as a Top 25 Best-In-Class College and as “one of the country’s most effective engines of upward mobility” by Washington Monthly magazine. Increasingly a first-choice university for students in both Florida and across the nation, Florida Atlantic welcomed its most academically competitive incoming class in the university’s history in Fall 2025. For more information, visit www.fau.edu.

 



Sunday, September 21, 2025

 

Japan Joins Denmark in Pioneering Osmotic Energy Revolution

  • Japan inaugurated its first commercial osmotic power plant in Fukuoka, producing 880,000 kWh annually by leveraging desalination brine.

  • Osmotic energy, recently named a top emerging technology by the World Economic Forum, offers constant, carbon-free baseload power.

  • With the potential to meet nearly 20% of global electricity demand, osmotic energy is gaining traction despite efficiency challenges.

Japan just became the second country in the world to launch a commercial-scale osmotic energy plant, a big win for the little-known form of clean energy generation that first broke ground in Denmark. While osmotic energy is nascent and its testing grounds remain limited, it has big potential – The World Economic Forum recently named osmotic power systems as one of the top 10 emerging technologies to watch in 2025.

This form of carbon-free energy generation uses osmosis between freshwater and saltwater to create power. In other words, it works by moving water from a less concentrated solution to a more concentrated one across a semipermeable membrane. “When freshwater and seawater meet, a natural gradient in salinity is created, prompting ions to migrate from the saltier side to the less salty side in pursuit of equilibrium,” an Earth.org article explains in layman’s terms. “The movement of water and ions generates a pressure differential that can be harnessed to produce electricity.”
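For a rough sense of the scale involved, the van 't Hoff relation pi = i * c * R * T gives a back-of-the-envelope estimate of the osmotic pressure between freshwater and typical seawater. The figures below are generic textbook values, not data from the Fukuoka plant:

R = 8.314    # J/(mol*K), universal gas constant
T = 298.0    # K, about 25 degrees C
c = 600.0    # mol/m^3, roughly 0.6 M NaCl, typical open-ocean salinity
i = 2        # van 't Hoff factor: NaCl dissociates into two ions

pi = i * c * R * T                                                    # osmotic pressure in pascals
print(f"osmotic pressure: about {pi / 1e5:.0f} bar")                  # roughly 30 bar
print(f"equivalent water column: about {pi / (1000 * 9.81):.0f} m")   # roughly 300 m of head

Real plants capture only part of that pressure across their membranes, which is why pairing osmotic generation with a desalination plant, whose leftover brine offers a much steeper salinity gradient than ordinary seawater, is such a natural fit.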

The result is a baseload form of totally clean and carbon-free energy production that is available 24 hours a day, seven days a week, 365 days a year. This is critical for energy security, as the majority of clean energy capacity, namely wind and solar, is variable. This means that osmotic energy could be an excellent alternative clean power from an energy security perspective. 

Denmark brought the world’s first commercial-scale osmotic power plant online in 2023. This month, Japan followed suit with a brand new plant in Fukuoka. The plant began operations on August 5 and will produce 880,000 kilowatt-hours a year. It was developed in tandem with a local desalination plant. The use of the extra-salty water left over from the desalination process lends itself perfectly to the osmosis model, increasing efficiency while also reducing waste. “Those stronger gradients boost efficiency and grounds osmotic generation in existing systems rather than the lab,” reports New Atlas.

“I feel overwhelmed that we have been able to put this into practical use. I hope it spreads not just in Japan, but across the world,” Akihiko Tanioka, professor emeritus at the Institute of Science Tokyo, told Kyodo News.

Pilot-scale osmotic energy models have already been developed in other nations around the world including Norway, France, and South Korea. Other coastal nations will likely soon follow suit as Denmark and Japan demonstrate the utility of their own plants. Proponents believe that the benefits of the nascent sector will speak for themselves. 

“Osmotic power is clean, completely natural, available 24 hours a day in all coastal areas, can be turned on almost instantly and modulated very easily,” Nicolas Heuzé, co-founder of osmotic energy firm Sweetch Energy, told the World Economic Forum.

If and when osmotic energy takes off, its productive potential would be enormous. Almost 30,000 TWh of osmotic energy is naturally released by deltas and estuaries each and every year – it just needs to be harnessed. The Dubai Future Foundation calculates that osmotic systems could eventually produce approximately 5,177 terawatt-hours (TWh) annually – that’s almost a fifth of global electricity needs. 

However, scaling the technology can be difficult due to low energy efficiency. While the Japanese plant gets a relatively high energy output thanks to the concentrated brine it sources from its associated desalination plant, models elsewhere can’t necessarily expect the same level of performance. But in places where the technology makes sense, the potential is significant. 

“Globally, and particularly in salt-rich areas like Australia and the Middle East, where access to brackish or seawater exceeds access to freshwater, these power systems hold huge potential for baseload energy and clean water production,” Dr. Katherine Daniell, Director of the Australian National University’s School of Cybernetics, was quoted as saying by the World Economic Forum.

By Haley Zaremba for Oilprice.com

Thursday, September 11, 2025

Preventing recidivism after imprisonment



Recidivism does not occur in a vacuum. It happens in the encounter between people and a fragmented system




Norwegian University of Science and Technology

Image caption: Today, previous convictions, substance abuse and behaviour in prison are the main factors considered when predicting the risk of recidivism. A study from the Norwegian University of Science and Technology has revealed a number of other factors that are often ignored: mental health, relationships, motivation, support in the transition phase, system failure and resource flow. (Credit: Illustrative photo: Christian Wangberg / Norwegian Prison and Probation Service)






Why do so many people return to crime after serving their sentence – even in Norway, with one of the world’s most humane prison systems?

That is the question Olea Linnea Andersson recently explored in her master’s thesis in cybernetics at the Norwegian University of Science and Technology (NTNU). She has looked not only at prison sentences, but at the entire journey: from before birth, through schooling, substance abuse, conviction and incarceration, to life after prison.

Through a combination of interviews, surveys and data analysis from the Norwegian Correctional Service, she has identified a pattern: Recidivism does not occur in a vacuum. It happens in the encounter between people and a fragmented system.

Wrong focus and lack of data

Currently, the factors assessed when predicting the risk of reoffending are primarily previous convictions, substance abuse and behaviour in prison.

However, Andersson has uncovered a number of other factors that are often overlooked: mental health, relationships, motivation, support during the transition phase, system failures and resource flow.

These ‘soft’ factors – such as inner drive, life skills and social support – prove to be at least as important as the ‘hard’ variables. At the same time, there is a lack of effective data collection across services. As a result, we are flying blind, unaware of what works – or why things fail.

The missing key

An important finding in the thesis is the concept of augmented grit – an expanded understanding of the ability to succeed after serving a prison sentence.

It is about more than just willpower. It is about self-regulation, social support, hope, and systems that provide genuine opportunities to start afresh.

The research shows that if prison inmates have a high level of augmented grit, they are less likely to reoffend, but only if the surrounding system provides support.

Without support after release, structure in daily life, and trust when dealing with the Norwegian Labour and Welfare Administration, the healthcare system and housing services, motivation alone makes little difference.

Lack of coordination

One of the clearest patterns in the analysis is that the various measures all work, but that the systems rarely communicate with each other. The Norwegian Labour and Welfare Administration does one job, the healthcare system another, and the Norwegian Correctional Service a third. The result is missed opportunities, a lack of continuity, and ‘arbitrary’ reintegration.

A better model requires coordinated efforts, common source data, coherent planning, and individual risk assessments that include both soft and hard factors.

A holistic model

Andersson has developed a model that predicts the risk of recidivism using artificial intelligence. The model also shows how the entire support system is connected – and where it is disconnected.

The model highlights where the system is currently failing and where efforts will be most effective. This is the first time such a holistic method has been used to study recidivism in Norway, and as far as we know, internationally. The method combines technology, system analysis and practical insights from prisoners and prison staff.
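As a loose illustration of what a risk model combining "hard" and "soft" factors could look like in code, the sketch below fits a plain logistic regression on synthetic data. The feature names, weights and outcomes are invented for illustration only; this is not Andersson's model, which also incorporates system analysis and practical input from prisoners and prison staff.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# "Hard" factors (counts): prior convictions, substance abuse episodes, in-prison incidents.
hard = rng.poisson(lam=[2.0, 0.5, 1.0], size=(n, 3)).astype(float)
# "Soft" factors (0-1 scales): motivation, social support, support in the transition phase.
soft = rng.uniform(0, 1, size=(n, 3))
X = np.hstack([hard, soft])
# Synthetic outcome: reoffending more likely with hard factors, less likely with soft ones.
score = 0.4 * hard.sum(axis=1) - 2.5 * soft.sum(axis=1) + rng.normal(0, 0.5, n)
y = (score > np.median(score)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
names = ["priors", "substance", "incidents", "motivation", "support", "transition"]
for name, coef in zip(names, model.coef_[0]):
    print(f"{name:>11}: {coef:+.2f}")  # positive weights push risk up, negative pull it down

The point of such a toy is only that soft factors can carry as much weight as hard ones once they are measured and fed into the same assessment, which is precisely the gap the thesis identifies in current practice.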

From cycles to spirals

We often talk about the cycle of recidivism, but with better data, clearer understanding and coordinated efforts, we can create spirals: processes where former prison inmates are given the prerequisites to succeed, and where support systems work together rather than separately.

What if we used artificial intelligence and systems thinking – not for control, but to give people a new chance?

Reference: Olea Linnea Andersson. A societal cybernetic analysis of recidivism and systemic barriers in the Norwegian correctional system.


Overcrowding and violence in Belgium's prisons: 'I was the victim of four assassination attempts'

Front view of Forest Prison, Brussels.
Copyright SALLY VETTERS/

By Pilar Montero Lopez

The repurposed Forest prison in Brussels has opened its doors to the public to highlight the difficulties facing the prison system in Belgium, as overcrowding threatens inmates’ safety and hinders their reintegration.

A cell of just nine square metres with no toilet can hold up to three prisoners. Sometimes, one of them has to sleep on the floor due to a lack of space. This is common in Belgium, a country that has been struggling with serious prison overcrowding for years, as reported by the non-profit association 9m².

The Forest prison in Brussels, which closed three years ago, is an example of these conditions. The voices of those who had to survive here can still be heard. One of them is Jean-Luc Mahy, a former prisoner who earned a degree during more than 18 years behind bars and who thought several times about taking his own life because of the harsh conditions.

"Of course there are tensions in prison. I was the victim of four murder attempts, one of them at the age of 18, when a guy came into my cell, thinking I had killed his girlfriend, and beat me up. I remember the warders saved my life," he explained to Euronews.

"They took me to the shower. I was there completely naked and the water was running and I was defecating between my buttocks and bleeding a lot. You don't forget moments like that."

A prison museum

The 9m² association was created to show society the problems prisoners face and make people think about them.

Its members have turned the empty Forest prison into a "multiperspective meeting space" where researchers, students, civil servants and former prisoners can share experiences to help find solutions to a problem that is getting worse, according to the association’s director, Manuel Lambert.

"We see that overcrowding in prisons continues to increase. There is no improvement. That's what worries us. Government after government, we seem to be stuck in the same pattern of imprisonment," Lambert says.

He also explains that "overcrowding means that inmates with very different needs are forced to share small spaces, which increases tensions."

There is also a "lack of resources" when social areas are used for accommodation, leaving no space for learning or activities. In the words of the 9m² director, "prison will not solve anything in these conditions because those who enter illiterate will leave illiterate and the stay in prison will have been a waste of time."

Staff shortages make it harder to supervise prisoners and give them personal support. "All this creates a climate more favourable to violence inside the walls, so that the integrity of both inmates and staff is at risk," Lambert explains.

Without enough psychological support, prisoners have fewer chances to reintegrate, 9m² says.

"There are not enough social workers, doctors and psychiatrists looking after prisoners to allow these people to leave in better conditions than they entered," stresses Lambert, who also underlines that "the recidivism rate of people who start again is very high in Belgium, so we see that prison is a failure to protect society."

A widespread problem in Europe

Belgium is one of the countries at the forefront of prison overcrowding in Europe, with over 13,000 people in a prison system designed to hold 11,000. Overcrowding is also common in France, Italy and Cyprus.

The European Committee for the Prevention of Torture (CPT) regularly visits European prisons to check that they are functioning properly and that human rights are not being violated. According to its latest survey, carried out in 2024, the European countries which experience the most severe overcrowding in prisons, determined by more than 105 prisoners per 100 places, are Slovenia (134), Cyprus (132), France (124), Italy (118), Romania (116) and Belgium (113).

Countries with moderate overcrowding, 105 or less but still above capacity, include Croatia (110), Ireland (105) and Sweden (105). Situations close to saturation were also observed in Scotland (100), England and Wales (98) and Serbia (98).

The situation is worsening as, according to Eurostat data, the number of prisoners could increase by up to 200% between 2023 and 2027 in European prisons.

Political context

Prison overcrowding is often linked to a country's socio-political situation and the belief that long sentences are the most effective form of justice.

Hugh Chetwynd, Executive Secretary of the European Committee for the Prevention of Torture, gives the example of Italy, France or the United Kingdom as countries with "overcrowding problems" and which have "chosen to tighten criminal legislation," including for drug offences.

"The issue is that there is a lack of confidence in alternatives to imprisonment for drug offences, for example, where people could be prevented from going to prison by the imposition of electronic bracelets and community service," Hugh Chetwynd told Euronews.

At the same time, he says there is an increase in organised crime in Europe and these groups "can continue with their work and their business while they are in prison because the staff can't control them properly, because there is so much overcrowding."

Added to this is the fact that "in most countries, if a court sends a person to a prison with a valid warrant to detain them, the prison cannot expel them and will accept them even if it means that they have to go and sleep on a mattress on the floor of a cell," Chetwynd said.

Chetwynd believes there is still a long way to go before European societies widely recognise that prisons should reflect and contribute to the betterment of society.

Tuesday, August 19, 2025

AI Hype Is the Product and Everyone’s Buying It


AI’s flaws and dangers are glaring, yet the industry keeps growing, fueled by fantasies and fears of missing out (FOMO)
Truthout/Harper
August 16, 2025

A man works on the electronics of Jules, a humanoid robot from Hanson Robotics that uses artificial intelligence, at a stand during the International Telecommunication Union (ITU) AI for Good Global Summit in Geneva, Switzerland, on July 8, 2025.
VALENTIN FLAURAUD / AFP via Getty Images



This article is an excerpt adapted from the book The AI Con: How To Fight Big Tech’s Hype and Create the Future We Want by Emily M. Bender and Alex Hanna (Copyright © 2025 by Emily M. Bender and Alex Hanna). Reprinted courtesy of Harper, an imprint of HarperCollins Publishers.

As long as there’s been research on AI, there’s been AI hype. In the most commonly told narrative about the research field’s development, mathematician John McCarthy and computer scientist Marvin Minsky organized a summer-long workshop in 1956 at Dartmouth College in Hanover, New Hampshire, to discuss a set of methods around “thinking machines”. The term “artificial intelligence” is attributed to McCarthy, who was trying to find a name suitable for a workshop that concerned a diverse set of existing knowledge communities. He was also trying to find a way to exclude Norbert Wiener — the pioneer of a proximate field, cybernetics, a field that has to do with communication and control of machines — due to personal differences.

The way the origin story is told, Minsky and McCarthy convened the two-month working group at Dartmouth, consisting of a group of ten mathematicians, physicists, and engineers, which would make “a significant advance” in this area of research. Just as it is today, the term “artificial intelligence” did not have much coherence. It did include something similar to today’s “neural networks” (also called “neuron nets” or “nerve nets” in those early documents), but also covered topics that included “automatic computers” and human-computer language interfaces (what we would today consider to be “programming languages”).

Fundamentally, the forerunners of this new field were concerned with translating dynamics of power and control into machine-readable formulations. McCarthy, Minsky, Herbert Simon (political scientist, economist, computer scientist, and eventual Nobel laureate), and Frank Rosenblatt (one of the originators of the “neural network” metaphor) were concerned with developing tools that could be used for the guidance of administrative — and ultimately — military systems. In an environment where the battle for American supremacy in the Cold War was being fought on all fronts — military, technological, engineering, and ideological — these men sought to gain favor and funding in the eyes of a defense apparatus trying to edge out the Soviets. They relied on huge claims with little to no empirical support, bad citation practices, and moving goalposts to justify their projects, which found purchase in Cold War America. These are the same set of practices that we see from today’s AI boosters, although they are now primarily chasing market valuations, in addition to government defense contracts.

The first move in the original AI hype playbook was foregrounding the fight with the Soviets. The second was to argue that computers were likely to match human capabilities by arguing that humans weren’t really all that complex. In 1956, Minsky claimed in an influential paper that “[h]uman beings are instances of certain kinds of very complicated machines.” If that were indeed the case, we could use more controllable electronic circuits in place of people in military and industrial contexts.

In the late 1960s, Joseph Weizenbaum, a German émigré, professor at the Massachusetts Institute of Technology, and contemporary of Minsky, was alarmed by how quickly people attributed agency to automated systems. Weizenbaum developed a chatbot called ELIZA, named for the working-class character in George Bernard Shaw’s Pygmalion who learns to mimic upperclass speech. ELIZA was designed to carry on a conversation in the style of a Rogerian psychotherapist; that is, the program primarily repeated what its users said, reframing their thoughts into questions. Weizenbaum used this form for ELIZA, not because he thought it would be useful as a therapist, but rather because it was a convenient setup for the chatbot: this kind of psychotherapy is one of the few conversational situations where it wouldn’t matter if the machine didn’t have access to other data about the world.


Despite its grave limitations, computer scientists used ELIZA to celebrate how thoroughly computers could replace human labor and heralded the entry into the artificial intelligence age. A shocked Weizenbaum spent the rest of his life as a critic of AI, noting that humans were not meat machines, while Minsky went on to found MIT’s AI laboratory and rake in funding from the Pentagon unhindered.

Cover of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want
Harper

The murky, unethical funding networks — through unfettered weapons manufacturing then, and with the addition of ballooning speculative venture capital investments now — around AI continue to this day. So does the drawing of false equivalences between the human brain and the calculating capabilities of machines. Claiming such false equivalences inspires awe, which, it turns out, can be used to reel in boatloads of money from investors whipped into a FOMO frenzy.

When we say boatloads, think megayachts: in January 2023, Microsoft announced that it intended to invest $10 billion in OpenAI. This is after Mustafa Suleyman (former CEO of DeepMind, made CEO of Microsoft AI in March 2024) and LinkedIn cofounder Reid Hoffman received a cool $1.3 billion from Microsoft and chipmaker Nvidia in a funding round to their young startup, Inflection.AI. OpenAI alums cofounded Anthropic, a company solely focused on creating generative AI tools, and received $580 million in an investment round led by crypto-scammer Sam Bankman-Fried. These startups, and a slew of others, have been chasing a gold mine of investment from venture capitalists and Big Tech companies, frequently without any clear path to robust monetization. By the second quarter of 2024, venture capital was dedicating $27.1 billion, or nearly half of their quarterly investments, to AI and machine learning companies.

The incentives to ride the AI hype train are clear and widespread — dress something up as AI and investments flow. But both the technologies and the hype around them are causing harm in the here and now.
Of Hype and Harm

There are applications of machine learning that are well scoped, well tested, and involve appropriate training data such that they deserve their place among the tools we use on a regular basis. These include such everyday things as spell-checkers (no longer simple dictionary look-ups, but able to flag real words used incorrectly) and other more specialized technologies like image processing used by radiologists to determine which parts of a scan or X-ray require the most scrutiny. But in the cacophony of marketing and startup pitches, these sensible use cases are swamped by promises of machines that can effectively do magic, leading users to rely on them for information, decision-making, or cost savings — often to their detriment or to the detriment of others.

As investor interest pushes AI hype to new heights, tech boosters have been promoting AI “solutions” in nearly every domain of human activity. We’re told that AI can shore up threadbare spots in social services, providing medical care and therapy to those who aren’t fortunate enough to have good access to health care, education to those who don’t live in a wealthy school district, and legal services for people who can’t afford a licensed attorney. We’re told that AI will provide individualized versions of all of these things, flexibly meeting user needs. We’re told that AI will “democratize” creative activity by allowing anyone to become an artist. We’re told that AI is on the verge of doing science for us, finally providing us with answers to urgent problems from medical breakthroughs (discovering a cure for cancer!) to the climate crisis (discovering a solution for global warming!). And self-driving cars are perpetually just around the corner (watch out: that means they’re about to run into you). But as you may have surmised from our snarky tone, these solutions are, by and large, AI hype. There are myriad cases in which AI solutions have been posed but fall short of their stated goals.

In 2017, a Palestinian man was arrested by Israeli authorities over a Facebook post in which he posed next to a bulldozer with the caption (in Arabic) of “good morning.” Facebook’s machine translation software rendered that as “hurt them” in English and “attack them” in Hebrew — and the Israeli authorities just took that at face value, never checking with any Arabic speakers to see if it was correct. Machine translation has also become a weak stopgap in other critical situations, such as in handling asylum cases. Here, the problem to solve is one of communication, between people fleeing violence in their home countries and immigration officials. Machine translation systems, which can work well in cases like translating newspapers written in standard varieties of a handful of dominant languages, can fail drastically in translating asylum claims written or spoken in minority languages or dialects.

In August 2020, thousands of British students, unable to take their A-level exams due to the COVID-19 pandemic, received grades calculated based on an algorithm that took as input, among other things, the grades that other students at their schools received in previous years. After massive public outcry, in which hundreds of students gathered outside the prime minister’s residence at 10 Downing Street in London, chanting “Fuck the algorithm!” the grades were retracted and replaced with grades based on teachers’ assessment of student work. In May 2023, Jared Mumm, a professor at Texas A&M University, suspected his students of cheating by using ChatGPT to write their final essays — so he input the essays into ChatGPT and asked it whether it wrote them. After reading ChatGPT’s affirmative output, he assigned the whole class incomplete grades, and some seniors were (temporarily) denied their diplomas.

On our roads, promises of self-driving cars have led to death and destruction. A Tesla employee died after engaging the so-called “Full Self-Driving” mode in his Tesla Model 3, which ran the car off the road. (We know this partially because his passenger survived the crash.) A few months later, on Thanksgiving Day 2022, Tesla CEO Elon Musk announced the availability of Tesla’s “Full Self-Driving” mode. Hours later, it was involved in an eight-car pileup on the San Francisco–Oakland Bay Bridge.

In 2023, lawyer Steven A. Schwartz, representing a plaintiff in a lawsuit against an airline, submitted a legal brief citing legal precedents that he found by querying ChatGPT. When the lawyers defending the airline said they couldn’t find some of the cases cited and the judge asked Schwartz to submit them, he submitted excerpts, rather than the traditional full opinions. Ultimately, Schwartz had to own up to having trusted the output of ChatGPT to be accurate, and he and his cocounsel were sanctioned and fined by the court.

In November 2022, Meta released Galactica, a large language model trained on scientific text, and promoted it as able to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” The demo stayed up for all of three days, while the worldwide science community traded examples of how it output pure fabrications, including fake citations, and could easily be prompted into outputting toxic content relayed in academic-looking prose.

What all of these stories have in common is that someone oversold an automated system, people used it based on what they were told it could do, and then they or others got hurt. Not all stories of AI hype fit this mold, but for those that don’t, it’s largely the case that the harm is either diffuse or undocumented. Sometimes, people are able to resist AI hype, think through the possible harms, and choose a different path. And that brings us to our goal in writing this book: preventing the harm from AI hype. When people can spot AI hype, they make better decisions about how and when to use automation, and they are in a better position to advocate for policies that constrain the use of automation by others.

Copyright © 2025 by Emily M. Bender and Alex Hanna



Emily M. Bender

Dr. Emily M. Bender is a professor of linguistics at the University of Washington, where she is also the faculty director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School. In 2023, she was included in the inaugural TIME 100 list of the most influential people in AI. She is frequently consulted by policy makers, from municipal officials to the federal government to the United Nations, for insight into how to understand so-called AI technologies.


Alex Hanna

Dr. Alex Hanna is director of research at the Distributed AI Research Institute (DAIR) and a lecturer in the School of Information at the University of California Berkeley. She is an outspoken critic of the tech industry, a proponent of community-based uses of technology, and a highly sought-after speaker and expert who has been featured across the media, including articles in The Washington Post, Financial Times, The Atlantic, and TIME.