Tuesday, December 05, 2023

Unlocking the secrets of the brain’s dopaminergic system


Breakthrough organoid model replicates essential neural network


Reports and Proceedings

IMBA - INSTITUTE OF MOLECULAR BIOTECHNOLOGY OF THE AUSTRIAN ACADEMY OF SCIENCES

Dopaminergic neurons in brain organoids.

IMAGE: Dopaminergic neurons in the ventral midbrain (red) and ventral midbrain projections into striatal and cortical tissue (green). Credit: © Daniel Reumann/IMBA




A new organoid model of the dopaminergic system sheds light on its intricate functionality and potential implications for Parkinson’s disease. The model, developed by the group of Jürgen Knoblich at the Institute of Molecular Biotechnology (IMBA) of the Austrian Academy of Sciences, replicates the dopaminergic system’s structure, connectivity, and functionality. The study, published on December 5 in Nature Methods, also uncovers the enduring effects of chronic cocaine exposure on the dopaminergic circuit, even after withdrawal.

A completed run, the early-morning hit of caffeine, the smell of cookies in the oven – these rewarding moments are all due to a surge of the neurotransmitter dopamine, released by neurons in a neural network in our brain called the “dopaminergic reward pathway”. Apart from mediating the feeling of “reward”, dopaminergic neurons also play a crucial role in fine motor control, which is lost in diseases such as Parkinson’s disease. Despite dopamine’s importance, key features of the system are not yet understood, and no cure for Parkinson’s disease exists. In their new study, the group of Jürgen Knoblich at IMBA developed an organoid model of the dopaminergic system, which recapitulates not only the system’s morphology and nerve projections, but also its functionality.

A model of Parkinson’s disease 

Tremor and a loss of motor control are characteristic symptoms of Parkinson’s disease. They are caused by the death of neurons that release the neurotransmitter dopamine, called dopaminergic neurons: when these neurons die, fine motor control is lost and patients develop tremors and uncontrollable movements. Although the loss of dopaminergic neurons is central to the development of Parkinson’s disease, the mechanisms behind this loss, and how we might prevent – or even repair – the damage, are not yet understood.

Animal models have provided some insight into Parkinson’s disease; however, as rodents do not naturally develop the disease, animal studies have proved unsatisfactory in recapitulating its hallmark features. In addition, the human brain contains many more dopaminergic neurons, which also wire up differently within the human brain, sending projections to the striatum and the cortex. “We sought to develop an in vitro model that recapitulates these human features in so-called brain organoids”, explains Daniel Reumann, previously a PhD student in the lab of Jürgen Knoblich at IMBA and first author of the paper. “Brain organoids are human stem-cell-derived three-dimensional structures, which can be used to understand both human brain development and function”, he explains further.

The team first developed organoid models of the so-called ventral midbrain, striatum and cortex – the regions linked by neurons in the dopaminergic system – and then developed a method for fusing these organoids together. As happens in the human brain, the dopaminergic neurons of the midbrain organoid send out projections to the striatum and the cortex organoids. “Somewhat surprisingly, we observed a high level of dopaminergic innervation, as well as synapses forming between dopaminergic neurons and neurons in striatum and cortex”, Reumann recalls.  

To assess whether these neurons and synapses are functional, the team collaborated with Cedric Bardy’s group at SAHMRI and Flinders University, Australia, to investigate whether neurons in this system would start to form functional neural networks. And indeed, when the researchers stimulated the midbrain organoid, which contains the dopaminergic neurons, neurons in the striatum and cortex responded to the stimulation. “We successfully modelled the dopaminergic circuit in vitro, as the cells not only wire correctly, but also function together”, Reumann sums up.

The organoid model of the dopaminergic system could be used to improve cell therapies for Parkinson’s disease. In early clinical studies, researchers have injected precursors of dopaminergic neurons into the striatum to try to make up for the lost natural innervation. However, these studies have had mixed success. In collaboration with the lab of Malin Parmar at Lund University, Sweden, the team demonstrated that dopaminergic progenitor cells injected into the dopaminergic organoid model mature into neurons and extend neuronal projections within the organoid. “Our organoid system could serve as a platform to test conditions for cell therapies, allowing us to observe how precursor cells behave in a three-dimensional human environment”, Jürgen Knoblich, the study’s corresponding author, explains. “This allows researchers to study how progenitors can be differentiated more efficiently, and provides a platform for studying how to recruit dopaminergic axons to target regions, all in a high-throughput manner.”

Insights into the reward system 

Dopaminergic neurons also fire whenever we feel rewarded, thus forming the basis of the “reward pathway” in our brains. But what happens when dopaminergic signaling is perturbed, such as in addiction? To investigate this question, the researchers made use of a well-known dopamine reuptake inhibitor: cocaine. When the organoids were exposed to cocaine chronically, over 80 days, the dopaminergic circuit changed functionally, morphologically and transcriptionally. These changes persisted even when cocaine exposure was stopped 25 days before the end of the experiment, simulating withdrawal. “Even almost a month after stopping cocaine exposure, the effects of cocaine on the dopaminergic circuit were still visible, which means that we can now investigate the long-term effects of dopaminergic overstimulation in a human-specific in vitro system”, Reumann summarizes.

About IMBA 

IMBA - Institute of Molecular Biotechnology - is one of the leading biomedical research institutes in Europe. IMBA is located at the Vienna BioCenter, a vibrant cluster of research institutes, universities and biotech companies in Austria. IMBA is an institute of the Austrian Academy of Sciences, the leading national sponsor of non-university academic research. Research topics pursued at IMBA include organoid and developmental biology, neuroscience, RNA biology and chromosome biology. For further information, please visit imba.oeaw.ac.at or follow us on social media. 

Research in the lab of Jürgen Knoblich for this publication was funded by the ERC under the European Union’s Horizon 2020 program, the Austrian Federal Ministry of Education, Science, and Research, the Austrian Academy of Sciences, the City of Vienna, the Austrian Science Fund and the Austrian Lotteries. 

IT'S QUANTUM REALITY

Diamonds and rust help unveil ‘impossible’ quasi-particles


Peer-Reviewed Publication

UNIVERSITY OF CAMBRIDGE




Researchers have discovered magnetic monopoles – isolated magnetic charges – in a material closely related to rust, a result that could be used to power greener and faster computing technologies.

Researchers led by the University of Cambridge used a technique known as diamond quantum sensing to observe swirling textures and faint magnetic signals on the surface of hematite, a type of iron oxide.

The researchers observed that magnetic monopoles in hematite emerge through the collective behaviour of many spins (the angular momentum of a particle). These monopoles glide across the swirling textures on the surface of the hematite, like tiny hockey pucks of magnetic charge. This is the first time that naturally occurring emergent monopoles have been observed experimentally.

The research has also shown the direct connection between the previously hidden swirling textures and the magnetic charges of materials like hematite, as if there is a secret code linking them together. The results, which could be useful in enabling next-generation logic and memory applications, are reported in the journal Nature Materials.

According to the equations of James Clerk Maxwell, a giant of Cambridge physics, magnetic objects, whether a fridge magnet or the Earth itself, must always exist as a pair of magnetic poles that cannot be isolated.

“The magnets we use every day have two poles: north and south,” said Professor Mete Atatüre, who led the research. “In the 19th century, it was hypothesised that monopoles could exist. But in one of his foundational equations for the study of electromagnetism, James Clerk Maxwell disagreed.”
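The foundational equation referred to here is Gauss’s law for magnetism, one of Maxwell’s four equations. It states that the magnetic field has no isolated sources or sinks:

\[ \nabla \cdot \mathbf{B} = 0 \]

Because the divergence of the magnetic field vanishes everywhere, field lines never begin or end, so every north pole comes paired with a south pole. A genuine monopole carrying magnetic charge density \(\rho_m\) would instead require \(\nabla \cdot \mathbf{B} = \mu_0 \rho_m\).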

Atatüre is Head of Cambridge’s Cavendish Laboratory, a position once held by Maxwell himself. “If monopoles did exist, and we were able to isolate them, it would be like finding a missing puzzle piece that was assumed to be lost,” he said.

About 15 years ago, scientists suggested how monopoles could exist in a magnetic material. This theoretical result relied on the extreme separation of north and south poles so that locally each pole appeared isolated in an exotic material called spin ice.

However, there is an alternative strategy for finding monopoles, involving the concept of emergence: the idea that the combination of many physical entities can give rise to properties that are either more than, or different from, the sum of their parts.

Working with colleagues from the University of Oxford and the National University of Singapore, the Cambridge researchers used emergence to uncover monopoles spread over two-dimensional space, gliding across the swirling textures on the surface of a magnetic material.

The swirling topological textures are found in two main types of materials: ferromagnets and antiferromagnets. Of the two, antiferromagnets are more stable than ferromagnets, but they are more difficult to study, as they don’t have a strong magnetic signature.

To study the behaviour of antiferromagnets, Atatüre and his colleagues use an imaging technique known as diamond quantum magnetometry. This technique uses a single spin – the inherent angular momentum of an electron – in a diamond needle to precisely measure the magnetic field on the surface of a material, without affecting its behaviour.
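The specific readout used in the study goes beyond this press release, but the general principle of such spin-based magnetometry is the Zeeman effect: the sensing spin’s resonance frequency shifts in proportion to the magnetic field component along its axis,

\[ \Delta\nu = \frac{\gamma_e}{2\pi}\, B_{\parallel}, \qquad \frac{\gamma_e}{2\pi} \approx 28\ \mathrm{GHz/T}, \]

where \(\gamma_e\) is the electron gyromagnetic ratio. Even a microtesla-scale stray field, typical of the faint signals above an antiferromagnet, shifts the resonance by roughly 28 kHz, which such sensors can resolve while barely perturbing the sample.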

For the current study, the researchers used the technique to look at hematite, an antiferromagnetic iron oxide material. To their surprise, they found hidden patterns of magnetic charges within hematite, including monopoles, dipoles and quadrupoles.

“Monopoles had been predicted theoretically, but this is the first time we’ve actually seen a two-dimensional monopole in a naturally occurring magnet,” said co-author Professor Paolo Radaelli, from the University of Oxford.

“These monopoles are a collective state of many spins that twirl around a singularity rather than a single fixed particle, so they emerge through many-body interactions. The result is a tiny, localised stable particle with diverging magnetic field coming out of it,” said co-first author Dr Hariom Jani, from the University of Oxford.

“We’ve shown how diamond quantum magnetometry could be used to unravel the mysterious behaviour of magnetism in two-dimensional quantum materials, which could open up new fields of study in this area,” said co-first author Dr Anthony Tan, from the Cavendish Laboratory. “The challenge has always been direct imaging of these textures in antiferromagnets due to their weaker magnetic pull, but now we’re able to do so, with a nice combination of diamonds and rust.”

The study not only highlights the potential of diamond quantum magnetometry but also underscores its capacity to uncover and investigate hidden magnetic phenomena in quantum materials. If controlled, these swirling textures dressed in magnetic charges could power super-fast and energy-efficient computer memory logic.

The research was supported in part by the Royal Society, the Sir Henry Royce Institute, the European Union, and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

New theory unites Einstein’s gravity with quantum mechanics


A radical theory that consistently unifies gravity and quantum mechanics while preserving Einstein’s classical concept of spacetime is announced today in two papers published simultaneously by UCL physicists

Peer-Reviewed Publication

UNIVERSITY COLLEGE LONDON

Illustration from PRX depicting an artistic version of Figure 1, from “A postquantum theory of classical gravity”

IMAGE: An experiment in which heavy particles (illustrated as the moon) cause an interference pattern (a quantum effect) while also bending spacetime. The hanging pendulums depict the measurement of spacetime. The actual experiment is typically performed using carbon-60, one of the largest known molecules. The UCL calculation indicates that the experiment should also be performed using higher-density atoms such as gold. The other two images represent the two experiments proposed by the UCL group, both of which constrain any theory where spacetime is treated classically. One is the weighing of a mass, the other is an interference experiment. Credit: Isaac Young





A radical theory that consistently unifies gravity and quantum mechanics while preserving Einstein’s classical concept of spacetime is announced today in two papers published simultaneously by UCL (University College London) physicists.

Modern physics is founded upon two pillars: quantum theory on the one hand, which governs the smallest particles in the universe, and Einstein’s theory of general relativity on the other, which explains gravity through the bending of spacetime. But these two theories are in contradiction with each other and a reconciliation has remained elusive for over a century.

The prevailing assumption has been that Einstein’s theory of gravity must be modified, or “quantised”, in order to fit within quantum theory. This is the approach of two leading candidates for a quantum theory of gravity, string theory and loop quantum gravity.

But a new theory, developed by Professor Jonathan Oppenheim (UCL Physics & Astronomy) and laid out in a new paper in Physical Review X (PRX), challenges that consensus and takes an alternative approach by suggesting that spacetime may be classical – that is, not governed by quantum theory at all. 

Instead of modifying spacetime, the theory - dubbed a “postquantum theory of classical gravity” - modifies quantum theory and predicts an intrinsic breakdown in predictability that is mediated by spacetime itself. This results in random and violent fluctuations in spacetime that are larger than envisaged under quantum theory, rendering the apparent weight of objects unpredictable if measured precisely enough.  

A second paper, published simultaneously in Nature Communications and led by Professor Oppenheim’s former PhD students, looks at some of the consequences of the theory, and proposes an experiment to test it: to measure a mass very precisely to see if its weight appears to fluctuate over time. 

For example, the International Bureau of Weights and Measures in France routinely weighs a 1kg mass which used to be the 1kg standard. If the fluctuations in measurements of this 1kg mass are smaller than required for mathematical consistency, the theory can be ruled out.

The outcome of the experiment, or other evidence emerging that would confirm the quantum vs classical nature of spacetime, is the subject of a 5000:1 odds bet between Professor Oppenheim on one side and Professor Carlo Rovelli and Dr Geoff Penington – leading proponents of loop quantum gravity and string theory respectively – on the other.

For the past five years, the UCL research group has been stress-testing the theory, and exploring its consequences. 

Professor Oppenheim said: "Quantum theory and Einstein's theory of general relativity are mathematically incompatible with each other, so it's important to understand how this contradiction is resolved. Should spacetime be quantised, or should we modify quantum theory, or is it something else entirely? Now that we have a consistent fundamental theory in which spacetime does not get quantised, it’s anybody’s guess.”

Co-author Zach Weller-Davies, who as a PhD student at UCL helped develop the experimental proposal and made key contributions to the theory itself, said: "This discovery challenges our understanding of the fundamental nature of gravity but also offers avenues to probe its potential quantum nature.

“We have shown that if spacetime doesn’t have a quantum nature, then there must be random fluctuations in the curvature of spacetime which have a particular signature that can be verified experimentally.

“In both quantum gravity and classical gravity, spacetime must be undergoing violent and random fluctuations all around us, but on a scale which we haven’t yet been able to detect. But if spacetime is classical, the fluctuations have to be larger than a certain scale, and this scale can be determined by another experiment where we test how long we can put a heavy atom in superposition* of being in two different locations."

Co-authors Dr Carlo Sparaciari and Dr Barbara Šoda, whose analytical and numerical calculations helped guide the project, expressed hope that these experiments could determine whether the pursuit of a quantum theory of gravity is the right approach. 

Dr Šoda (formerly UCL Physics & Astronomy, now at the Perimeter Institute of Theoretical Physics, Canada) said: “Because gravity is made manifest through the bending of space and time, we can think of the question in terms of whether the rate at which time flows has a quantum nature, or classical nature. 

“And testing this is almost as simple as testing whether the weight of a mass is constant, or appears to fluctuate in a particular way.”

Dr Sparaciari (UCL Physics & Astronomy) said: “While the experimental concept is simple, the weighing of the object needs to be carried out with extreme precision. 

“But what I find exciting is that starting from very general assumptions, we can prove a clear relationship between two measurable quantities – the scale of the spacetime fluctuations, and how long objects like atoms or apples can be put in quantum superposition of two different locations. We can then determine these two quantities experimentally.”  

Weller-Davies added: “A delicate interplay must exist if quantum particles such as atoms are able to bend classical spacetime. There must be a fundamental trade-off between the wave nature of atoms, and how large the random fluctuations in spacetime need to be.”

The proposal to test whether spacetime is classical by looking for random fluctuations in mass is complementary to another experimental proposal which aims to verify the quantum nature of spacetime by looking for something called “gravitationally mediated entanglement.” 

Professor Sougato Bose (UCL Physics & Astronomy), who was not involved with the announcement today, but was among those to first propose the entanglement experiment, said: “Experiments to test the nature of spacetime will take a large-scale effort, but they're of huge importance from the perspective of understanding the fundamental laws of nature. I believe these experiments are within reach – these things are difficult to predict, but perhaps we'll know the answer within the next 20 years.”

The postquantum theory has implications beyond gravity. The infamous and problematic “measurement postulate” of quantum theory is not needed, since quantum superpositions necessarily localise through their interaction with classical spacetime.  

The theory was motivated by Professor Oppenheim’s attempt to resolve the black hole information problem. According to standard quantum theory, an object going into a black hole should be radiated back out in some way as information cannot be destroyed, but this violates general relativity, which says you can never know about objects that cross the black hole’s event horizon. The new theory allows for information to be destroyed, due to a fundamental breakdown in predictability.

* Background information

Quantum mechanics background: All the matter in the universe obeys the laws of quantum theory, but we only really observe quantum behaviour at the scale of atoms and molecules. Quantum theory tells us that particles obey Heisenberg’s uncertainty principle: we can never know their exact position and velocity at the same time. In fact, they don’t even have a definite position or velocity until we measure them. Particles like electrons can behave more like waves and act almost as if they can be in many places at once (more precisely, physicists describe particles as being in a “superposition” of different locations).
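Quantitatively, Heisenberg’s principle bounds the product of the uncertainties in a particle’s position \(x\) and momentum \(p\) by Planck’s reduced constant:

\[ \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}, \qquad \hbar \approx 1.055 \times 10^{-34}\ \mathrm{J\,s}. \]

Because \(\hbar\) is so tiny, the bound is invisible for everyday objects but unavoidable at the scale of atoms and electrons.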

Quantum theory governs everything from semiconductors, which are ubiquitous in computer chips, to lasers, superconductivity and radioactive decay. In contrast, we say that a system behaves classically if it has definite underlying properties. A cat appears to behave classically – it is either dead or alive, not both, nor in a superposition of being dead and alive. Why do cats behave classically, and small particles quantumly? We don’t know, but the postquantum theory doesn’t require the measurement postulate, because the classicality of spacetime infects quantum systems and causes them to localise.

Gravity background: Newton’s theory of gravity gave way to Einstein’s theory of general relativity (GR), which holds that gravity is not a force in the usual sense. Instead, heavy objects such as the sun bend the fabric of spacetime in such a way that causes the Earth to revolve around it. Spacetime is just a mathematical object consisting of the three dimensions of space, and time considered as a fourth dimension. General relativity predicted the formation of black holes and the Big Bang. It holds that time flows at different rates at different points in space, and the GPS in your smartphone needs to account for this in order to properly determine your location.
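As a rough worked example using standard textbook figures (not numbers from the UCL papers): a clock at gravitational potential \(\Phi\) ticks at a rate \(d\tau/dt \approx 1 + \Phi/c^2\) relative to a distant one. For a GPS satellite, which sits higher in Earth’s gravitational well but moves fast in orbit, the two effects combine as

\[ \underbrace{+45\ \mu\mathrm{s/day}}_{\text{weaker gravity (GR)}} \;\; \underbrace{-\ 7\ \mu\mathrm{s/day}}_{\text{orbital speed (SR)}} \;\approx\; +38\ \mu\mathrm{s/day}. \]

Left uncorrected, a 38 \(\mu\)s daily clock offset would accumulate a ranging error of about \(c \times 38\ \mu\mathrm{s} \approx 11\) km per day.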

Historical context: The framework presented by Oppenheim in PRX, and in a companion paper with Sparaciari, Šoda and Weller-Davies, derives the most general consistent form of dynamics in which a quantum system interacts with a classical system. It then applies this framework to the case of general relativity coupled to quantum field theory. It builds on earlier work by a community of physicists. An experiment to test the quantum nature of gravity via gravitationally mediated entanglement was proposed by Bose et al. and by C. Marletto and V. Vedral. Two examples of consistent classical-quantum dynamics were discovered in the 1990s by Ph. Blanchard and A. Jadczyk, and by Lajos Diosi, and again by David Poulin around 2017. From a different perspective, a model of Newtonian gravity coupled to quantum systems via a “measurement-and-feedback” approach was presented by D. Kafri, J. Taylor and G. Milburn in 2014, and by Diosi and Antoine Tilloy in 2016. The idea that gravity might be somehow related to the collapse of the wavefunction dates back to F. Karolyhazy (1966), L. Diosi (1987) and R. Penrose (1996). That classical-quantum couplings might explain localisation of the wavefunction has been suggested by others, including M. Hall and M. Reginatto, Diosi and Tilloy, and David Poulin. The idea that spacetime might be classical dates back to I. Sato (1950) and C. Møller (1962), but no consistent theory was found until now.

IMAGE: Precision mass measurement - artistic steampunk concept

How Israel's AI use is resulting in indiscriminate civilian deaths in Gaza

Israel's growing reliance on AI in its conflict with Hamas has resulted in a drastic surge in civilian casualties, sparking concerns about ethical implications and the opacity of AI-driven target selection.



Aftermath of an Israeli strike on a house after a temporary truce between Hamas and Israel expired in Rafah, southern Gaza

The staggering number of children and women killed in besieged Gaza has put the spotlight on Israel's increasing use of artificial intelligence (AI), which does not differentiate between combatants and civilians in picking targets.

As Israel resumes its offensive in devastated Gaza after a seven-day ceasefire, the number of casualties has risen sharply – with close to 16,000 people killed since Israel launched what is being described as a "collective punishment" of Gaza residents following the October 7 Hamas attacks.

In its bid to target Hamas hideouts, the IDF's strikes have been far from surgical, and reports suggest little scrutiny of how targets are selected in Gaza. The integration of AI into lethal operations has played a pivotal role in recent Israel-Hamas wars.

In May 2021, officials said Israel had fought its 'first AI war' during an 11-day bombardment of Gaza, using machine learning and advanced computing.

And months ahead of the Hamas attacks in Israel on October 7, the IDF revealed its integration of AI into lethal operations.

According to Bloomberg, as of July 15, the IDF had begun using AI to select targets for air strikes and to organise logistical planning during wartime.

Israeli officials also disclosed the implementation of an AI system for target selection in aerial bombardments, alongside another model named 'Fire Factory'.

The Fire Factory system utilises data on military-approved targets to calculate munition loads, prioritise and allocate thousands of targets to aircraft and drones, and propose a schedule for subsequent raids.



And in the latest Israel-Hamas war, the use of an AI platform called "The Gospel" is said to have been a notable aspect of Israel's operations in Gaza.

Current and former members of Israel's intelligence community, Palestinian testimonies, data from Gaza, and official statements suggest that authorisation for bombing non-military targets, a relaxation of constraints regarding expected civilian casualties, and the use of AI to generate an unprecedented number of potential targets have all contributed to one of the deadliest military campaigns against Palestinians since 1948.

Hence, Israel's "Operation Iron Swords" has seen a significant increase in the bombing of non-military targets, including private residences, public buildings, infrastructure, and high-rise blocks, categorised by the army as "power targets".

'Expected casualties'

Several sources, speaking to +972 Magazine and Local Call, confirmed that the Israeli army possesses files on the majority of potential targets in Gaza, including residences.

These files specify the number of civilians likely to be killed in an attack on a particular target. The army's intelligence units calculate and know the expected civilian casualties before executing an attack.

Specifying a case, one source said that the military command knowingly approved an operation to assassinate a single top Hamas military commander – resulting in the deaths of hundreds of Palestinian civilians.

Another source said that the decision-making extends to civilian casualties, emphasising that nothing happens by accident.

The investigation by the +972 Magazine suggests that the widespread use of the "Habsora" ("The Gospel") system, primarily built on AI, also contributed to the high number of targets and extensive harm to civilian life in Gaza.

'The Gospel' and its role in Gaza


Described as a "mass assassination factory," this AI system can generate targets almost automatically at a rate exceeding previous capabilities.

It also enables the army to carry out extensive strikes on residential homes, even where the target is a junior Hamas operative.

Palestinian testimonies suggest that since October 7, the army has also attacked private residences without known or apparent Hamas members. These strikes, as confirmed by sources, knowingly result in the death of entire families.



A senior intelligence officer reportedly emphasised the goal of "killing as many Hamas operatives as possible" after October 7, relaxing criteria around civilian harm. This has led to shelling based on broad cellular pinpointing – resulting in civilian casualties – to save time rather than investing in more accurate targeting.

The outcome of these policies is a staggering loss of human life in Gaza since October 7. Over 300 families have lost ten or more members in Israeli bombings in the past two months — a figure 15 times higher than in Israel's previous deadliest war on Gaza in 2014.

"All of this is happening contrary to the protocol used by the IDF in the past," one source quoted by +972 Magazine.

"There is a feeling that senior officials in the army are aware of their failure on October 7 and are busy with the question of how to provide the Israeli public with an image [of victory] that will salvage their reputation."

What forms of data are fed into the Gospel is not yet known. Still, it typically involves analysing extensive information from various channels, including drone footage, intercepted communications, surveillance data, and details obtained from monitoring individual and group movements and behaviour patterns.

An official involved in previous Gaza operations' targeting decisions said that the IDF hadn't previously bombed the homes of junior Hamas members. However, they believe this approach has changed in the current conflict, with suspected Hamas operatives' houses targeted irrespective of rank.

"Hamas members who don't really mean anything live in homes across Gaza. So they mark the home and bomb the house and kill everyone there," the official told +972/Local Call.

Type of targets

According to sources speaking to +972 and Local Call, Israeli aircraft have targeted Gaza in roughly four categories. The first includes "tactical targets," such as standard military objectives like armed militant cells, weapon warehouses, rocket launchers, anti-tank missile launchers, launch pits, mortar bombs, military headquarters, and observation posts.

The second category is "underground targets," mainly tunnels dug by Hamas under Gaza's neighbourhoods, including beneath civilian homes. Aerial strikes on these targets can lead to the collapse of homes above or near the tunnels.

The third category, "power targets," involves high-rises, residential towers, and public buildings like universities, banks, and government offices in city centres.

Three intelligence sources suggest that targeting these structures aims to exert "civil pressure" on Hamas by deliberately attacking Palestinian society.

The final category comprises "family homes" or "operatives' homes." The stated purpose is to destroy private residences in order to assassinate a single resident suspected of being an operative of Hamas or the Islamic Jihad group.

"Hamas is everywhere in Gaza; there is no building that does not have something of Hamas in it, so if you want to find a way to turn a high-rise into a target, you will be able to do so," said one former intelligence official.

"They will never just hit a high-rise that does not have something we can define as a military target," said another intelligence source, who carried out previous strikes against power targets.

"There will always be a floor in the high-rise [associated with Hamas]. But for the most part, when it comes to power targets, it is clear that the target doesn't have a military value that justifies an attack that would bring down the entire empty building in the middle of a city with the help of six planes and bombs weighing several tons."


Destruction from Israeli aerial bombardment is seen in Gaza.

However, Palestinian testimonies in the current war indicate that some families killed did not include any operatives from these organisations.

As of November 10, the IDF Spokesperson reported that Israel had attacked a total of 15,000 targets in Gaza during the first 35 days of the current conflict.

This figure is notably higher than in the four previous major operations in the besieged enclave: Guardian of the Walls in 2021 (1,500 targets in 11 days), Protective Edge in 2014 (between 5,266 and 6,231 targets in 51 days), Pillar of Defense in 2012 (about 1,500 targets in eight days), and Cast Lead in 2008 (3,400 targets in 22 days).
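Expressed as average daily rates, dividing the figures above (indicative only, since the operations differed in scope and duration):

\[ \tfrac{15000}{35} \approx 429 \quad \text{vs.} \quad \tfrac{1500}{11} \approx 136, \quad \tfrac{5266\text{--}6231}{51} \approx 103\text{--}122, \quad \tfrac{1500}{8} \approx 188, \quad \tfrac{3400}{22} \approx 155 \ \text{targets per day}. \]

That is roughly two to four times the daily pace of any of the four earlier operations.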

As Israeli commanders receive lists of targets generated by AI tools like the Gospel, the opacity of the method raises concerns: increasing dependence on AI may turn humans into mere components in a mechanised process, jeopardising their ability to assess the impact on civilian lives effectively.

SOURCE: TRT WORLD