Saturday, February 28, 2026

How to Train Your AI

An outsider's introduction to developing a Large Language Model


Artificial intelligence is no longer the stuff of science fiction. It answers our questions, writes our emails, and holds conversations that feel startlingly human. But how does it actually work? How is an AI built, taught, and kept from going off the rails? The answer is more fascinating, and more human, than most people realize.

Part One: Building the Brain

Every AI starts with a goal. Do you want it to recognize faces? Translate languages? Answer questions? That goal determines everything that follows. Once the goal is clear, the real work begins, and the first ingredient needed is data. Enormous amounts of it.

For a Large Language Model, which is the kind of AI behind chatbots and writing assistants, that data is text. Trillions of words drawn from books, websites, academic papers, and more. The goal is to expose the model to as much of human language and knowledge as possible, because AI learns from examples the same way humans do: through exposure and repetition.

At the heart of the AI is something called a neural network, a mathematical structure loosely inspired by the human brain, made up of layers of connected nodes that pass information to one another. The network’s behavior is determined by billions of tiny numerical values called “weights,” which represent the strength of connections between those nodes. Training the AI is essentially the process of finding the right weights.

Training works through a beautifully simple idea: prediction. The model is shown a sentence with the last word removed, and it tries to guess what that word is. It gets scored on how wrong it was. Then a process called backpropagation works out how much each weight contributed to the error and adjusts them all slightly. Do this billions of times across trillions of words, and something remarkable happens: the model does not just learn grammar. It absorbs facts, reasoning patterns, and context. It begins to understand language, or something that functions very much like understanding.
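The predict-score-adjust cycle can be shown in miniature. The sketch below trains a toy one-layer "next word" predictor on a ten-word corpus using exactly that loop; everything here is illustrative (a real model has billions of weights and far richer architecture), but the gradient step is the same idea as backpropagation at full scale.

```python
import numpy as np

# Toy corpus: the model will learn which word tends to follow which.
corpus = "the cat sat on the mat the cat ate the fish".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# W[i, j] = score that word j follows word i (the model's "weights").
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(V, V))

def softmax(z):
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Training loop: predict the next word, score the error, nudge the weights.
lr = 0.5
for epoch in range(200):
    for cur, nxt in zip(corpus[:-1], corpus[1:]):
        i, j = idx[cur], idx[nxt]
        probs = softmax(W[i])   # the model's guess for what follows `cur`
        grad = probs.copy()
        grad[j] -= 1.0          # cross-entropy gradient: push the truth up
        W[i] -= lr * grad       # backpropagation, in miniature

def predict(word):
    """Return the word the model thinks is most likely to come next."""
    return vocab[int(np.argmax(W[idx[word]]))]
```

After training, `predict("the")` returns "cat", because "cat" follows "the" more often than "mat" or "fish" do in the corpus: the weights have absorbed the statistics of the text.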

This phase, called pre-training, is staggeringly expensive. It requires thousands of specialized computer chips running for weeks or months, consuming vast amounts of electricity. The result is a “base model” that is extraordinarily good at generating fluent text, but also unpredictable and sometimes problematic. It has learned from all of human writing, which includes the full spectrum of human expression: the inspiring and the offensive, the truthful and the false.

Part Two: Teaching It to Behave

A raw, pre-trained model is a bit like someone who has read everything ever written but has never been taught manners, ethics, or professional conduct. The next phase of development is about instilling those qualities, and it involves several overlapping techniques.

Fine-Tuning

After pre-training, the model is trained again, this time on a much smaller, carefully curated set of high-quality conversations and responses. This teaches it to behave like a helpful, professional assistant rather than a raw text predictor. The model’s weights shift gradually toward producing the kinds of responses a thoughtful person would give.

Reinforcement Learning from Human Feedback (RLHF)

One of the most powerful techniques used today is called Reinforcement Learning from Human Feedback, or RLHF. The AI generates several different responses to the same prompt, and human reviewers rank them from best to worst. A separate “reward model” is trained to predict what humans prefer. Then the main AI is trained to maximize that reward, essentially learning to produce responses that real people find helpful, accurate, and appropriate.
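The reward-model step can be illustrated with the pairwise ranking loss commonly used for it (a Bradley-Terry style objective). This is a schematic sketch: the scalar rewards below stand in for the outputs of a trained reward network.

```python
import math

def preference_loss(r_chosen, r_rejected):
    """Pairwise ranking loss for a reward model.

    Low when the human-preferred response already scores higher than
    the rejected one; training widens the margin between them.
    """
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With no margin the loss is log 2 (about 0.69); as the preferred response pulls ahead, the loss falls toward zero, which is what pushes the reward model to agree with the human rankings.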

Through this process, guardrails, or more formally safety mitigations and alignment measures, get woven directly into the model’s weights. It is not that a rulebook gets programmed in. It is that the model’s deeply ingrained tendencies are shaped, through thousands of examples and feedback cycles, to steer away from harmful outputs. Think of the difference between giving a child a printed list of rules versus raising them with consistent guidance, feedback, and example. The AI’s values, such as they are, develop through the latter approach.

Constitutional AI

Some companies go a step further, training the AI to critique its own responses against a set of core principles that function essentially as a constitution for the model’s behavior. The AI learns to ask itself whether a response is honest and whether it could cause harm, then revise accordingly before settling on a final answer.

System Prompts and Hard Filters

Layered on top of the trained behavior are more traditional software tools. System prompts are invisible sets of instructions given to the AI before each conversation begins, telling it how to behave in a specific context. Hard filters are conventional code sitting outside the model that scan inputs and outputs for prohibited content and block them before they reach the user. These act like a bouncer at the door, while the trained behavior acts like the internalized conscience of the person inside.

System prompts can even include tiered access, essentially passwords or keys that allow different users to unlock different levels of AI capability. An administrator with the right key might access features unavailable to a general user. However, this approach has real limitations: because the AI processes system prompts and user messages through the same mechanism, a clever user may be able to extract or circumvent them. For high-stakes applications, true security is better handled by the surrounding software rather than by trusting the AI to enforce it.
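A hard filter of the kind described above is ordinary code, not learned behavior. The sketch below uses a toy regex blocklist; real deployments typically rely on trained classifiers and far more sophisticated matching, so treat these patterns as placeholders.

```python
import re

# Illustrative patterns only; a production filter would use trained
# classifiers, not a short regex list.
BLOCKED_PATTERNS = [r"\bsocial security number\b", r"\bcvv\b"]

def hard_filter(text):
    """Runs outside the model, scanning text before it reaches the user."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return None   # blocked at the door
    return text           # allowed through unchanged
```

Because this code sits outside the model, it cannot be talked around the way a system prompt can, which is why high-stakes restrictions belong here rather than in instructions the model is merely asked to follow.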

Part Three: Testing in the Sandbox

Before any AI is released to the public, it goes through a critical phase of testing in what is called a sandbox, which is a controlled, isolated environment where the model can be probed and stressed without any risk to real users or real systems. Think of it as a flight simulator for AI: trainee pilots can crash the plane a hundred times without anyone getting hurt.

In the sandbox, engineers can safely test dangerous scenarios, observe unfiltered behavior, and experiment with new safety measures before deploying them. The AI might be cut off from the internet or sensitive systems, so even if it misbehaves, the damage is fully contained. When AI is given tools such as the ability to browse the web, run code, or interact with other software, those capabilities are sandboxed first to understand what could go wrong.

A key part of sandbox testing is something called red-teaming. Researchers, sometimes humans and sometimes other AI systems, try their hardest to make the model misbehave: to get it to say something harmful, reveal restricted information, or bypass its guidelines through clever phrasing, roleplay scenarios, or encoding tricks. This is ethical hacking for AI. The vulnerabilities discovered through red-teaming are patched before the model goes live.

Part Four: The Ongoing Challenge of Jailbreaking

One of the most sobering truths about AI safety is that it is never finished. Because guardrails are embedded in the model’s weights rather than in explicit, readable code, they cannot be mathematically verified the way traditional software can. You cannot read the weights and confirm they are safe. You have to probe the model through testing and observe how it behaves.

This creates what the industry calls a jailbreaking problem. Users who are determined to get an AI to misbehave can sometimes succeed by finding gaps in its training, asking questions in roundabout ways, using fictional framing, switching languages, or employing other creative techniques to make the model’s safety instincts fail to activate. It is an ongoing arms race: researchers find exploits, developers patch them, and new exploits emerge.

There is also a fundamental tension that every AI developer grapples with: guardrails that are too tight make the AI useless, refusing to discuss anything remotely sensitive even for entirely legitimate reasons. Guardrails that are too loose allow harm. Finding and maintaining the right balance requires constant human judgment, ongoing monitoring of real-world conversations, and regular retraining as new problems are discovered.

Part Five: The Hallucination Problem

Of all the challenges in AI development, hallucinations may be the most insidious. Unlike a jailbreak, where a bad actor has to work deliberately to extract harmful content, hallucinations happen on their own, uninvited, in the middle of otherwise helpful conversations. And they do so with complete confidence.

An AI hallucination is when the model confidently states something that is factually wrong, inventing people, citations, events, statistics, or details that simply do not exist. The term is apt: the AI is not lying intentionally. It is generating text that sounds plausible based on patterns in its training data, even when no factual basis exists. It is the dark side of the same fluency that makes these models so impressive.

The root cause goes back to how LLMs work. They are trained to predict the most statistically likely next word. They do not know facts the way a database does; they have learned patterns associated with facts. When asked something outside their confident knowledge, they do not naturally say they do not know. They do what they were trained to do: generate plausible-sounding text. The result can be a well-written, confidently delivered, completely fabricated answer.

Retrieval-Augmented Generation (RAG)

One of the most effective practical solutions is called Retrieval-Augmented Generation, or RAG. Rather than relying solely on what the model memorized during training, RAG connects the AI to an external knowledge source, such as a database, a document library, or the internet, at the moment a question is asked. The model retrieves relevant, current, verified information first, then generates its answer based on that retrieved content rather than pure memory. Think of the difference between answering a question from memory versus being allowed to look it up first. RAG dramatically reduces hallucinations on factual questions because the model is working from real source material it can reference.
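In code, the retrieve-then-generate pattern looks something like the sketch below. The keyword-overlap retriever and two-document store are stand-ins: a real RAG system would use vector embeddings for retrieval and send the assembled prompt to an actual model.

```python
import re

# A toy document store standing in for a real knowledge base.
DOCUMENTS = [
    "Refunds are accepted within 30 days of purchase.",
    "The store is open from 9am to 5pm on weekdays.",
]

def words(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question):
    """Pick the document sharing the most words with the question."""
    return max(DOCUMENTS, key=lambda doc: len(words(question) & words(doc)))

def build_prompt(question):
    """Ground the model: answer from retrieved text, not from memory."""
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The key move is the last line: the model is told to answer from the retrieved passage, so its fluency is spent paraphrasing real source material rather than inventing plausible-sounding facts.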

Teaching the Model to Say It Does Not Know

One of the most powerful behavioral interventions is teaching the model to express uncertainty. Through fine-tuning and RLHF, models can be specifically rewarded for acknowledging when they are not certain and penalized for confidently stating things that turn out to be wrong. This does not prevent the model from being wrong, but it stops it from being wrong with confidence, which is arguably the more dangerous form of hallucination. A hedged wrong answer invites the user to verify. A confident wrong answer does not.

Chain-of-Thought Reasoning

Instead of jumping straight to an answer, models can be trained or prompted to reason step by step, showing their work so to speak. This approach, called chain-of-thought reasoning, tends to reduce hallucinations because each reasoning step can catch errors in the previous one. It also makes the model’s thinking visible, so users can spot where the logic went wrong rather than simply receiving a confident wrong conclusion.
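At the prompting level, the technique can be as simple as the sketch below: the same question with and without an instruction to show intermediate steps. The exact wording is illustrative; models trained for reasoning internalize this behavior rather than depending on any magic phrase.

```python
def direct_prompt(question):
    """Ask for an answer straight away."""
    return f"{question}\nAnswer:"

def chain_of_thought_prompt(question):
    """Ask the model to show its work, so each step can catch an error
    in the previous one and users can inspect where the logic went."""
    return f"{question}\nLet's think step by step, then state the final answer."
```

The second prompt costs more tokens but buys visibility: a wrong conclusion arrives with its reasoning attached, which is far easier to audit than a bare confident answer.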

Grounding, Citations, and Fact-Checking Layers

Models can be designed to cite their sources, pointing to specific documents or passages that support their claims. This forces the model to anchor its answers in retrievable evidence rather than relying on statistical intuition alone. If it cannot cite a source, it should say so. Many enterprise AI systems build this in as a hard requirement.

Some systems go further, adding a second AI on top of the first, one whose sole job is to verify the claims made in the first model’s response against a trusted knowledge base. If a claim cannot be verified, it gets flagged or removed. A related technique called self-consistency checking has the model generate multiple independent answers to the same question and compare them. If all versions agree, confidence is higher. If they contradict each other, the model flags uncertainty. Hallucinations tend to be inconsistent across attempts, while true knowledge tends to be stable.
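The self-consistency check described above is easy to sketch. Here `ask_model` is a stand-in for a real LLM call at nonzero temperature; the logic of sampling several answers and voting is the point.

```python
from collections import Counter

def self_consistency(ask_model, n=7):
    """Ask the same question n times and compare the answers.

    Agreement across samples raises confidence; contradiction flags a
    possible hallucination, since fabrications tend to vary between runs.
    """
    answers = [ask_model() for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    confident = count > n / 2          # simple majority vote
    return best, confident

# A deterministic stub standing in for a sampled model.
_canned = iter(["Paris", "Paris", "Lyon", "Paris", "Paris", "Lyon", "Paris"])
answer, confident = self_consistency(lambda: next(_canned))
```

With five of seven samples agreeing, the majority answer is returned with confidence; had the samples split evenly, the function would have flagged uncertainty instead.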

Specialized Models and Controlled Creativity

Counterintuitively, trying to make a model know everything can increase hallucinations. A model trained specifically on medical literature, for example, hallucinates far less on medical questions than a general-purpose model trying to cover all of human knowledge. Specialized models have a narrower but more reliable knowledge base.

There is also a technical setting inside the model called “temperature” that controls how creative or random its outputs are. High temperature produces more varied, imaginative responses, but also more hallucinations. Lower temperature makes the model more conservative, sticking closer to patterns it has seen before. For factual applications, dialing down the temperature reduces the risk of the model wandering into invented territory.
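Temperature is just a rescaling of the model's raw scores (logits) before they are turned into probabilities. This sketch shows the mechanism with made-up logits for three candidate words: lower temperature concentrates probability on the top choice, higher temperature spreads it out.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities, scaled by temperature."""
    scaled = [z / temperature for z in logits]
    top = max(scaled)                  # subtract max for numerical stability
    exps = [math.exp(z - top) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.0]               # made-up scores for three candidate words
cautious = softmax_with_temperature(logits, 0.5)   # low T: conservative
creative = softmax_with_temperature(logits, 2.0)   # high T: adventurous
```

At temperature 0.5 the top word takes most of the probability mass; at 2.0 the distribution flattens, so less likely words get sampled more often, which is exactly where invented details creep in.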

The Human in the Loop

For high-stakes applications in medicine, law, and finance, the most reliable safeguard remains a human expert reviewing the AI’s output before it is acted upon. AI handles the heavy lifting; a human catches the errors. No current technique eliminates hallucinations entirely. They are, to some extent, a fundamental consequence of how LLMs work. The goal of current research is not perfection; it is making hallucinations rarer, less confident, more detectable, and less consequential.

Part Six: Can a Large Language Model Think?

This is one of the most debated questions in all of artificial intelligence, and depending on who you ask, the answer ranges from an emphatic yes to an equally emphatic no. Can a Large Language Model actually think? The honest answer is that it depends entirely on what you mean by the word.

On the surface, the case against thinking seems straightforward. An LLM does not reason the way a human does. It has no experiences, no curiosity, no inner life. It does not sit quietly and ponder a problem. What it does, at a mechanical level, is predict the next most likely word based on patterns absorbed from vast amounts of human text. It is, in that sense, an extraordinarily sophisticated pattern-matching engine. Critics who hold this view often say that LLMs do not think at all; they merely simulate thinking with enough skill to be convincing.

But that view, while valid, leaves some important things unexplained. When an LLM solves a novel logic puzzle it has never encountered before, is it just matching patterns? When it catches an error in a legal argument, translates irony between languages, or generates a metaphor that genuinely illuminates an idea, what exactly is happening? The outputs sometimes go well beyond what simple pattern retrieval would predict. Something is being processed, recombined, and applied in ways that at least resemble reasoning.

What the Research Suggests

Researchers have found that large language models, particularly those trained at scale, develop internal representations of concepts, relationships, and even something resembling logical structure. They can perform multi-step reasoning, draw inferences, and generalize from principles to new situations. These are behaviors that, in humans, we would not hesitate to call thinking.

At the same time, LLMs fail in ways that human thinkers rarely do. They can be confidently wrong about simple arithmetic. They can contradict themselves within the same conversation. They can be fooled by a slight rephrasing of a question, even when the underlying logic remains identical. These failures suggest that whatever is happening inside the model is not the same as human reasoning, even when the outputs look similar.

The Chinese Room Problem

The philosopher John Searle famously illustrated this tension with a thought experiment called the Chinese Room. Imagine a person locked in a room with a large rulebook for responding to Chinese characters. Messages in Chinese are passed under the door; the person looks up the appropriate responses in the rulebook and passes them back out; to anyone on the outside, the exchange looks like a fluent conversation with a Chinese speaker. But the person inside understands nothing. They are just following the rules.

The thrust of Searle’s argument, applied to modern AI, is that an LLM is essentially that person in the room: producing outputs that appear to reflect understanding without any actual comprehension behind them. The counterargument, made by many AI researchers, is that the human brain itself might be described as a very complex version of the same process, and that understanding may simply be what sophisticated information processing looks like from the inside.

Neither side has definitively won that argument. It remains one of the genuinely open questions at the intersection of philosophy, neuroscience, and computer science.

A More Useful Way to Frame the Question

Rather than asking whether LLMs can think, it may be more useful to ask what kinds of thinking they can do and what kinds they cannot. They are remarkably capable at synthesizing information, identifying patterns, generating creative connections, and producing well-structured arguments. They are considerably weaker at sustained logical chains that require holding many variables in precise relationship, at grounding their knowledge in real-world experience, and at knowing the limits of their own knowledge.

In practical terms, LLMs think differently from humans, rather than not at all. They process language with a kind of breadth and fluency that no human could match, drawing on connections across billions of words. But they lack the embodied experience, the emotional grounding, and the genuine self-awareness that shape human thought in ways that go far beyond language.

Perhaps the most honest answer is this: a Large Language Model does something that is genuinely impressive, genuinely useful, and genuinely worth taking seriously. Whether it rises to the level of thinking in the fullest sense of that word is a question that says as much about how we define thinking as it does about what the model is actually doing. And that question, for now, remains beautifully unsettled.

Conclusion: More Art Than Science

Building and training an AI, especially one that is helpful, honest, and safe, is as much an art as it is a science. The data, the architecture, the training techniques, the safety measures, the sandboxing, the resistance to jailbreaking, the ongoing battle against hallucinations, and the still-unresolved question of whether any of this constitutes genuine thinking all play a role in how we understand and develop these systems. But underneath all the technical sophistication is something surprisingly human: the attempt to pass on values, instill judgment, and build something that tells the truth even when making something up would be easier.

We cannot write a comprehensive rulebook for every situation an AI might encounter, any more than we could write one for a child. Instead, we shape its instincts through experience, feedback, example, and correction, and we test it rigorously before trusting it with real responsibilities. The goal is not a perfect machine. It is a reliable, well-intentioned one that keeps getting better.

In that sense, training an AI is not so different from training a child, or a dragon.

Robert W Malone MD, MS is president of the Malone Institute, whose mission is to bring back integrity to the biological sciences and medicine. The Malone Institute supports and conducts research, education, and informational activities. Contact: info@maloneinstitute.org
Your cloud contract just got political


By David Potter
DIGITAL JOURNAL
February 26, 2026



A U.S. diplomatic cable does not usually land on a Canadian board agenda. This one might.

On Feb. 25, Reuters reported that the U.S. instructed its diplomats to push back against foreign “data sovereignty” measures it views as barriers to American technology companies.

On the same day, CNN reported that OpenAI had detailed how a Chinese law enforcement agency used ChatGPT as part of an intimidation operation targeting dissidents and foreign officials.

In other words, a U.S.-based AI company shared a behind-the-curtain look at how its platform is being used by a foreign government agency.

I’m not defending the Chinese use, but if you were trying to script a case study in geopolitical irony, you would struggle to time it better.

If nothing else, it’s a reminder to Canadian organizations making infrastructure decisions (and those developing the government’s AI strategy) that these decisions can have real consequences.
Residency is not sovereignty

To understand what this means for your organization, start with a basic distinction. Data residency and data sovereignty are not the same thing.

Residency answers where the server sits. Sovereignty answers who can access the data and under what law.

Under the U.S. CLOUD Act, American companies can be required to produce data in their possession, custody, or control, even if that data is stored abroad.

If your provider falls under that law, the questions shift.

Who controls the encryption keys? Who can be compelled to hand over information? Would your organization know if they were?

At the federal level, Bill C-27 was intended to modernize Canada’s private-sector privacy law and introduce the Artificial Intelligence and Data Act.

The bill died when Parliament was prorogued in January, leaving Canada without an updated national privacy and AI framework. Companies have been setting their own standards in its absence.

Quebec’s Law 25 has emerged as the most rigorous privacy standard in North America.

The legislation aligns closely with the European Union’s GDPR and has extraterritorial scope. If you handle the personal data of Quebec residents, you are subject to it, regardless of where your company is based. It requires privacy impact assessments for cross-border transfers and carries penalties that can reach into the tens of millions of dollars.
Trade enters the picture

The Canada-United States-Mexico Agreement, better known as CUSMA (or USMCA as the U.S. likes to call it), already governs part of this debate.

Its digital trade chapter limits a country’s ability to require companies to localize computing facilities as a condition of doing business, subject to narrow public policy exceptions. It also protects cross-border data transfers for business purposes.

The agreement comes up for mandatory review on July 1, 2026. If Canada, the U.S., and Mexico do not agree in writing to extend it, the pact moves into annual review and could expire in 2036.

That places digital infrastructure decisions inside a trade framework that is being actively reviewed.

In February, Bell announced a partnership with Toronto-based AI firm Cohere to deliver Canadian-built AI infrastructure for business and government. That partnership will be operating while the CUSMA review shapes the next phase of North American trade policy.

The offering combines Bell’s national network and data centre footprint with Cohere’s enterprise models, allowing organizations to run generative AI workloads on infrastructure located in Canada and operated under Canadian law.

For some organizations, that structure answers practical questions about legal exposure and regulatory compliance. For others, integration with global platforms remains the priority.

Those approaches reflect different risk calculations.

Making decisions while the ground shifts

Canadian leaders are being asked to accelerate AI adoption while trade rules are under review, federal legislation remains unfinished, and the government’s AI strategy has yet to be released.

The context in which companies are operating is nothing if not complex.

Some firms will prioritize global scale and integration, while others will prioritize tighter jurisdictional control. Each path carries implications for audit readiness, vendor relationships, and long-term exposure.

However this plays out, it seems safe to say that leaders should avoid journaling about their decision-making on ChatGPT, unless they want to read about it on CNN (or in Digital Journal).
Final shots

Data governance now sits inside trade law, privacy enforcement, and AI deployment at the same time.

CUSMA’s digital trade rules, Quebec’s Law 25, and the absence of a federal AI framework are shaping infrastructure decisions being made today.

AI adoption is accelerating while the jurisdictional framework around it is still evolving.





Written by David Potter
David Potter is Senior Contributing Editor at Digital Journal. He brings years of experience in tech marketing, where he’s honed the ability to make complex digital ideas easy to understand and actionable. At Digital Journal, David combines his interest in innovation and storytelling with a focus on building strong client relationships and ensuring smooth operations behind the scenes. David is a member of Digital Journal's Insight Forum.

Of Monks and Oligarchs






One of the things I have learned in my more than seven decades of life is that everything has its opposite. For instance, you wouldn’t know up if there was not also a down. You wouldn’t know warmth without cold. Darkness reveals the light. For every peak there is a corresponding valley. In the same way, good and evil reveal one another.

Not long ago, a group of Buddhist monks and a dog named Aloka completed a peace walk of more than two thousand miles from Texas to Washington, DC, in the dead of winter. Their long walk was a continuation of a trek that began in India.

Coming from India, the monks were not acclimated to North American winters. They were not ideally clothed for the journey, and they carried very little with them. Deep cold and snow had set in over most of the route. Without complaint, they endured pain and suffering. Illness befell some of them that required medical intervention. But the monks were focused on two things: mindfulness and peace. Nothing could dissuade them from completing the task they had set for themselves.

I had heard about the event, but I did not immediately give it the attention it deserved. I occasionally checked on the monks’ progress. But as the weeks passed, I began to pay closer attention to the crowds of people that had gathered to bear witness, often in severe weather.

People from all walks of life, young and old alike, came out to witness the spectacle, to offer words of encouragement, and to provide clothing, food and drink, lip balm, flowers and medicine and moral support to the monks. Some kind soul even supplied boots for Aloka. It seemed that with each town the monks passed, the crowds grew, and there was an obvious spiritual bond between them. The monks brought out the best in people.

On the final stages of the peace walk, I witnessed events that are not commonplace on this continent. The monks were humble, respectful and reverent. Their demeanor, their grace, their dignity, so rare these days in the midst of hatred, war, drug abuse, alcoholism, hubris and violence, were not something I have witnessed here before on that scale. It felt surreal.

An aura of spirituality enveloped the participants. The mutual respect and reverence, the spiritual connection between the peace walkers and their supporters was palpable. You could feel the sanctity, the reverence for life and the love that radiated outward from the monks and was reciprocated in kind by the observers.

You could feel the authenticity in every gesture of compassion and empathy that passed between the monks and the onlookers. As they approached the nation’s capital, the monks and their supporters were melding into a single, integrated entity for peace, a literal peace movement.

I saw an elderly ex-marine break down in tears in the presence of the monks. I saw young children, flowers in hand and a wondrous glow of innocence in their eyes, give each passing monk a flower, a gesture of compassion and love; and I also saw the monks give flowers to the children and elderly men and women who braved the elements to share the mystical experience unfolding before them. No money changed hands, but many profited. A wealth of experience accumulated like snowflakes in a winter storm.

The event and all who participated in it showed that another world is possible. It demonstrated that human beings could choose to walk humbly in a sacred manner, rather than take up arms against their brothers and sisters on other continents. We can consciously choose a path of enlightenment and spirituality over the coerced march to death and destruction that our so-called leaders are foisting upon us. The choice is ours to make.

The monks and Aloka didn’t tell us anything. Rather, they showed us the path to enlightenment through their long walk and their willingness to endure suffering. Every footstep was a prayer for peace and justice writ large in the language of motion, the act of being and doing. To walk in a sacred manner is not a symbolic gesture. It demonstrated that harmony is possible, but it requires intentionality, mindfulness, compassion and empathy for all life.

When existential stress is removed from our lives, calmness and peace of mind fills the vacuum, and peace can come to full flower. Ruthless competition yields to mutual aid and cooperation, shared prosperity, and the recognition that all is one. We have but one earth and we need to share it with every living thing. The very presence of the monks evoked peace; it awakened the slumbering hope that once animated our lives and gave us purpose. It reminded us that we can and must do better.

In contrast to the Buddhist monks, a few weeks prior, I heard Scott Bessent, the Secretary of Treasury, his pride-swollen chest puffed out, gleefully boasting about deliberately imposing suffering and misery and death on the Iranian people, including women and children, through sanctions and tariffs, frozen assets and blockades of critical resources. But this is nothing new. Our bread crumb trail of sins leads us far into the past and to inescapable conclusions about who we are and what we truly believe as individuals and as a nation.

We are not at peace with ourselves or the world. We are a people divided by socioeconomic class. We measure worth by income and social status and by material possessions and dominance. The almighty dollar owns us. We think that we can buy happiness and rule the world. Our imaginary visions of grandeur are in reality a dystopian nightmare that devours hope and human decency and leaves a trail of corpses in its wake.

Bessent’s economic statecraft is being imposed on Iran, Cuba and Venezuela and other nations, especially in the global south, that pose no threat to us. As a matter of policy, people are starving to death and being denied access to medicine and a decent life. These are the wretched of the earth, and they are our brothers and sisters. They are us. That is not statecraft. It is sadism, a crime against humanity.

Iran poses no material threat whatsoever to the US, and neither does Cuba or Venezuela, but the US seeks to humiliate them and destroy their sovereignty. It plans to turn Cuba and Gaza into another fantasy island for the Epstein class.

In a similar vein, Marco Rubio, the US Secretary of State, recently gave a sickening speech at the Munich Security Conference in which he proposed a rededication to US imperialism: using its economic and military might to steal the resources of other nations, to enslave their populations to corporate interests, and to sow chaos, misery and other forms of debauchery.

To Rubio, that is how strong people treat the weak and powerless; they dominate them and plunder their sovereign nations without regard for their people’s needs. That is the mentality of a plantation owner and a Christian fascist.

Rubio’s intentions are clear: to impose US global dominance, to reassert its powers and to turn back the hands of time to the good ole days of slavery, child labor, colonial occupation, and the subjugation of non-whites. In a shameful display of sycophancy, the European capitalists gave Rubio a standing ovation.

As if turning a knife in the back of the resistance, Rubio also skewered “godless communists” for getting in the way of US imperialism around the planet. But if communism is godless, as Rubio asserts, it would follow that capitalism is a religion of godliness, which would also accord Rubio himself the status of one of its high priests. Although I am not a Christian or a member of any organized religion, I am quite certain that the prosperity gospel does not appear anywhere in the King James Version of the Holy Bible.

What Rubio and his minions propose reeks of Manifest Destiny and American exceptionalism. It is a violent and oppressive ideology that fosters the assumed superiority of global oligarchs over working people. It treats the rest of the world as subjects to be ruled and punished by the rich and powerful, as if being poor were a sin punishable by death.

By now it should be abundantly clear to anyone with a conscience and an ethical code of conduct that the Buddhist monks’ peace walk was spiritually enlightened and life-affirming, whereas Marco Rubio’s speech on behalf of empire was death-affirming and dark. We Homo sapiens are enigmatic creatures. We often have difficulty connecting the dots and seeing the clear picture resolving before our eyes. Good and evil make a well-defined contrast to one another, as do the enlightenment and darkness of the human soul.

The effect those monks had on the people they met on their peace walk will stay with me for the rest of my life.

On the other hand, I hope that I can soon forget the vitriolic garbage spewed forth by the likes of Scott Bessent, Hillary Clinton and Marco Rubio. The thought of them and their psychologically deformed ideology literally makes me ill. We can and must do better. We needn’t pursue another trail of tears or create more reservations and American colonies. There are too many of them already.

Charles Sullivan is a writer/philosopher who resides in the Ridge and Valley Province of Turtle Island (North America). Email: charlessullivan7@comcast.net

 

Remembering Aaron Bushnell



In front of the Israeli Consulate in downtown San Francisco, a memorial to Aaron Bushnell took place on Wednesday February 25. This date marks the second anniversary of what Aaron called his “extreme protest” against Israel’s genocide in Gaza.

The demonstration was organized by Veterans for Peace and Noise Against Genocide. Leaders of local Veterans for Peace chapters spoke about the significance of Aaron’s action in 2024. At the time, VFP published an eloquent statement titled “Madmen Arsonists Strike Again: They as Much as Lit Aaron Bushnell’s Match for Him.”

Others spoke about how Aaron has been honored around the world, including in the West Bank city of Jericho, where a street has been named after him. A Palestinian woman on the Jericho City Council said, “I felt that he was family, someone so close who shares our deep pain. I cried when I saw the video and I cry every time I do. This is the ultimate sacrifice at a time when no one seems to see us. We feel so alone.” The Jericho Mayor said, “Palestinians in Jericho owe this serviceman. Jericho is a tourist city and we wanted his street to link to the main street so that people would know that Palestinians are strongly connected to those who share their love for freedom, independence and human rights.”

A young woman Air Force veteran described how Aaron’s action deeply moved her. “What Aaron did, I just related with him so much for that. It hit me really hard….There’s a lot of people that feel like me in the military but they are afraid to even talk about it. It’s kind of like a hush-hush thing that would be seen as going against your country or being an infiltrator.” She learned about the San Francisco protest action the day before and drove up 20 miles to be there on Wednesday.

This demonstration commemorating Aaron was the day after Trump’s State of the Union address. Another speaker talked about Trump’s lies and threats against Iran while continuing to supply Israel’s genocide in Gaza and apartheid in the West Bank. An Iranian flag flew alongside Palestinian flags.

There were flowers, memorial cards and powerful posters memorializing Aaron. Some cars passing in front of the demonstration honked their approval. Chalk messages on the sidewalk remained after the demonstration.



Sign for Aaron Bushnell in Jericho. Photo Emma Graham-Harrison/The Observer


Gathering in Jericho for the unveiling of the sign. Photo Emma Graham-Harrison/The Observer


Rick Sterling is an investigative journalist in the SF Bay Area. He can be reached at rsterling1@protonmail.com

The Fatal Costs of the Philippines-Taiwan War Scenario



The Philippines is preparing for a Taiwan war. That’s known. But the costs of that “inevitable war” are not. It's time to address them.


by Dan Steinbock | Feb 27, 2026

In August 2025, President Marcos Jr. said that “a war over Taiwan will drag the Philippines, kicking and screaming into the conflict.”

Pushing for preparation, the government has boosted US-Philippine military cooperation, military modernization, joint military exercises and the installation of US mid-range missile systems and anti-ship missile launchers.

These measures are said to foster Philippine prosperity and security.

But how would a major military conflict in the Taiwan Straits, with the active support of Manila’s military logistics, affect the economic futures of the Philippines?

How PH becomes a “co-belligerent”   

In the past decade, globalization has given way to geoeconomic fragmentation, replacing economic efficiency with alignment along geopolitical lines. Recently, this has morphed into “Cold War II”: a prolonged, systemic rivalry risking substantial, long-term global GDP losses.

While an ASEAN-centered focus would be more beneficial for its economic futures, Manila has opted for a US-alignment to counter China.

A Taiwan contingency is precisely the kind of shock under which fragmentation becomes nonlinear, as the World Bank and the International Monetary Fund (IMF) have warned – implying that economic and human costs would soar.

With that backdrop, let’s integrate a major Taiwan conflict with Manila’s logistics support (bases, ports, airspace, supply, etc.) into a real-GDP scenario.

In this status quo, the EDCA sites – US military bases and facilities in the Philippines – are used for logistics, refueling, repair, transit. They may have no direct combat role, but they represent de facto alignment.

Faced with the US missiles’ offensive capabilities, China defends itself and treats the Philippines as a “non-neutral,” hostile rear-area state.

By providing significant military logistical support to the belligerents, Manila becomes a de facto co-belligerent in the conflict, according to international law (Hague Conventions of 1907, V and XIII) and customary law.

Short-term Taiwan shock            

So, let’s assume two Taiwan contingencies. In one case, there is a major level shock, a one-time GDP hit in the first 2-3 years. This is the scenario that geopolitical analysts favor, perhaps because its costs are lower.

In this case, a short war ensues and lasts 3–6 months. The lethal conflict is followed by a ceasefire.

Here’s what happens with the major economic transmission channels. First, there will be a trade disruption as shipping insurance spikes in the South China Sea (which Manila calls the West Philippine Sea). Port risk premiums soar for Manila, Subic, Batangas, and Cebu. Semiconductor and electronics global value chains (GVCs) are disrupted. A regional growth shock ensues.

ASEAN trade volume contracts sharply.

As investors start treating the Philippines as a riskier place to keep money, wealthy Filipinos and foreign investors move part of their money abroad (to Singapore, the US, etc.), a trend that’s already nascent; or they demand higher returns to keep it in the country.

Markets charge the Philippine government more to insure its debt against default, which increases its risk premium. Repricing raises borrowing costs, discourages long-term investment, and drains domestic capital toward safer jurisdictions, reinforcing inequality. Peso depreciation pressure will soar.

Targeted retaliation ensues, with Chinese import bans, the collapse of tourism and the freeze of foreign investments. China rejects Philippine products and services. Tourists shun risky regions (Chinese tourism collapsed already in 2023-24). Foreign investors avoid potential military targets.

Some of these measures are already heatedly debated in Manila.

Medium-term Taiwan shock      

The first scenario represents a severe but time-bounded disruption. But what if the conflict lasts longer?

In the second scenario, a prolonged blockade can last up to 3–5 years. It doesn’t necessarily mean a formal war, but sustained interdiction: a continuous, long-term, and coordinated effort to divert, disrupt, delay, or destroy Philippine military supplies and logistics.

This scenario would translate to greater and longer-lasting “kinetic” damage. Now there is a permanent growth penalty, due to higher risk and decoupling.

Don’t blame the messenger. This is the way war shocks are modeled in the IMF Article IV stress tests.

The semiconductor and electronics GVCs are broken. Global insurers stop covering ships, cargo, or investments linked to the Philippines, due to geopolitical risk.

Permanent rerouting ensues, as trade and logistics flows are redesigned to bypass the country altogether. This is not temporary, but built into long-term supply chains—so business simply goes elsewhere.

The Taiwan shock becomes the Philippines’ structural nightmare.

Revised real GDP scenarios      

A short war generates a level shock. By contrast, a long blockade virtually ensures a level shock and a permanent growth downgrade.

In the immediate shock, covering roughly the first three years after the conflict, trade volume will plunge by 15%, tourism by 50% and foreign investment inflows by 60%. The investment ratio will decrease by 3–4 percentage points of GDP. In this case, the one-time real GDP loss would amount to 6–10%.

These shock assumptions are conservative, however.

In the second scenario, the long-run effects will feature greater cost of capital and a partial exclusion of Philippines from China-centered Asia trade. As militarization is likely to crowd out civilian investment, inequality soars and poverty spreads. The best and brightest leave, followed by recurrent streams of blue-collar OFWs.

In the case of the longer shock, the Philippine real GDP index would fall 43% below the baseline by 2050, lower than in many Latin American countries. The country would look like Ukraine after 2022.

In the process, the country gets stuck in the middle-income trap, which becomes structural. 
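The distinction the two scenarios turn on – a one-time level shock versus a level shock combined with a permanent growth downgrade – compounds very differently over a 25-year horizon. The following minimal sketch illustrates the mechanics; all growth rates and shock sizes in it are illustrative assumptions for demonstration only, not figures taken from the IMF stress tests or from this commentary.

```python
def gdp_index(years, growth, shock_year=None, level_shock=0.0):
    """Compound a real-GDP index (base 100) at a constant growth rate,
    optionally applying a one-time level shock in a given year."""
    idx = 100.0
    path = []
    for t in range(years):
        idx *= (1 + growth)
        if shock_year is not None and t == shock_year:
            idx *= (1 - level_shock)  # one-time hit to the level of output
        path.append(idx)
    return path

YEARS = 25  # roughly the 2026-2050 horizon

# Assumed 5.5% trend growth for the no-conflict baseline.
baseline = gdp_index(YEARS, 0.055)
# Short war: same trend growth, but a one-time 8% level hit early on.
short_war = gdp_index(YEARS, 0.055, shock_year=1, level_shock=0.08)
# Long blockade: the same level hit plus a permanently lower trend rate.
long_blockade = gdp_index(YEARS, 0.032, shock_year=1, level_shock=0.08)

gap_short = 1 - short_war[-1] / baseline[-1]
gap_long = 1 - long_blockade[-1] / baseline[-1]
print(f"Short-war gap vs baseline after {YEARS} years: {gap_short:.0%}")   # 8%
print(f"Long-blockade gap vs baseline after {YEARS} years: {gap_long:.0%}")  # 47%
```

Under these made-up parameters, the pure level shock leaves a fixed 8% gap forever, while even a modest cut to trend growth widens the gap every year, ending up in the same 40%-plus range by 2050 that the commentary describes for the blockade scenario.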

The end of the dream  

By 2035, the Philippines loses roughly a quarter of its potential income per person. This is equivalent to 10–15 years of development delay.

In the second scenario, the devastation wipes out 20–40% of long-run real output potential by 2050. Instead of a lost decade, a generation is wasted.

By then, the Philippines is no longer regarded as a potential ASEAN catch-up promise. It is seen as a textbook case of a growth story that lost its way.

This shorter version of the original commentary was published by The Manila Times on February 23, 2026.

Dr. Dan Steinbock is an internationally recognized visionary of the multipolar world and the founder of Difference Group. He has served at the India, China and America Institute (US), Shanghai Institutes for International Studies (China) and the EU Center (Singapore). For more, see https://www.differencegroup.net