
Monday, August 28, 2006

Homunculus

Church teaching holds that in-vitro fertilization is morally wrong because it replaces the conjugal union between husband and wife and often results in the destruction of embryos. Artificial insemination for married couples is allowable if it "facilitates" the sex act but does not replace it. The church condemns all forms of experimentation on human embryos.
Vatican Critical of Stem Cell Creation

The current pope was once Cardinal Ratzinger, the Vatican's chief Inquisitor. Yes, I know, we weren't expecting the Spanish Inquisition.

When the issue of cloning and artificial life was presented to JP2, Ratzinger issued the church's statement on bio-ethics, which has not changed since the Renaissance, when the Church banned sorcery and the creation of artificial life known as the Homunculus. And indeed Ratzinger, in his paper, refers to cloning as creating a homunculus. 21st-century science meets the Middle Ages.

Does the law permit the "enhancement" or other manipulation of one's genetic outfit? In this context, the following issues were discussed at the seminar: reproduction techniques in general (the "homunculus issue," see Goethe's Faust II), special issues of "reprogenetics," cloning (inherently wrong, or open to an evaluation between healing effects and human dignity by way of a rule-and-exception relationship?), disease prevention (MV, cancers), unfairly advantaging certain children in view of a "level playing field" of genetic outfits, right of parents to genetically manipulate their offspring, and liability of parents who do not manipulate.
The New Genetics and the Law

The crowning example of alchemical hybris came with the claim of pseudo-Paracelsus in the sixteenth century that he could make a homunculus - an artificial man. Like the gold of the alchemists, which was said to exceed the 24 carats of the best natural gold, the homunculus was supposed to be better than a natural man. Being made in a flask from human semen, he was free of the catamenial substance that, according to the current theories of generation, supplied the material basis to an ordinary fetus. According to pseudo-Paracelsus, the homunculus was a semi-spiritual being that had an immediate apprehension of all the arts and a preternatural intelligence. In modern terms, the homunculus could be called the perfect test-tube baby, engineered to have the highest possible intelligence quota and aptitude. I have written an article focusing on this topic ("The Homunculus and his Forebears," 1999; see Vita), and have a book focusing on alchemy and the art-nature debate under contract (Promethean Ambitions: Alchemy and the Refashioning of Nature, forthcoming with University of Chicago Press).
Newton's Alchemy, recreated

What can we make of his account of the creation of a homunculus, a miniature human being, in his laboratory? Cloning and genetic engineering are clearly impossible with 16th-century technology.
Paracelsus

The invention of hand lenses and the microscope facilitated studies of the chick embryo by Marcello Malpighi (1628-1694), but also gave rise to one of the most profound errors in describing human development, that of the homunculus. This was a miniature human believed to have been seen within the head of a human spermatozoon, and which was presumed to enlarge when deposited in the female. This was the basis of the preformation theory and was believed by many well into the 18th century.
lifeissues.net | When Does Human Life Begin? The Final Answer

Drawing of Human Spermatozoa, 1694. The drawing was conceived by Niklaas Hartsoeker not from what he had seen, but from what he presumed would be visible if sperm could be adequately viewed.




Consider the profound difficulty embryonic development presents to an observer. A complex organism, such as a chick, frog, insect or human, arises in an orderly and magical way from an apparently structureless egg. When embryology was in its infancy in the 17th and 18th centuries, the thought was that no animal could arise from such nothingness. Thus was born the theory of the homunculus: the idea that an infinite set of tiny individuals were contained, one within another, in each egg—or in each sperm (there was vigorous disagreement as to which). Development was seen as the visible unfolding of a preexisting individual. Unhappily for this wonderful notion, in the late 18th century Caspar Friedrich Wolff showed by microscopy that embryos contained cells but no homunculus—there was no preformed entity.
American Scientist Online - In the Twinkle of a Fly

U.S. Senator Sam Brownback (R-Kansas) recently told his fellow Republicans he would advance a two-year moratorium rather than a permanent ban. Ironically, Brownback relayed his intentions while President Bush reaffirmed his opposition to human embryo cloning in a speech delivered by satellite to the Southern Baptist Convention in St. Louis. Bush told them, "We believe that a life is a creation, not a commodity, and that our children are gifts to be loved and protected, not products to be designed and manufactured by human cloning." How did we get so quickly from a few cells in a dish to children? It reminds me of artists' representations during the Middle Ages of the homunculus: an invisibly tiny, fully formed human carried around by the male and then deposited in the female during intercourse. The tiny homunculus would eventually grow into a fetus before it was born. Those were the days before the discoveries of the microscope, sperm and egg. So then maybe Bush and Bevilacqua imagine that people still reproduce with homunculi. Otherwise, describing what we know with absolute certainty to be nothing more than single or several cells in a microscopic cluster, resembling the cells inside your cheek, as "children" simply doesn't make any sense! If these men didn't wield so much power, we'd laugh at their ignorance.
Stem Cells and Cloning: What Bush Doesn't Know Might Kill You ...

Faust and Homunculus
19th century engraving of Goethe's Faust and Homunculus




Also See

Pluto Gone Dog Gone It

For a Ruthless Criticism of Everything Existing

Magick

Creationism

Catholic



Friday, September 29, 2023

Watch a 180-year-old star eruption unfold in new time-lapse movie (video)

Samantha Mathewson
Wed, September 27, 2023 



Using over two decades of data from NASA’s Chandra X-ray Observatory, astronomers have crafted a stunning new video of a stellar eruption that took place some 180 years ago.

The time-lapse video uses Chandra observations from 1999, 2003, 2009, 2014 and 2020 — along with data from ESA’s XMM-Newton spacecraft — and retraces the history of the stellar explosion known as Eta Carinae. This famous star system contains two massive stars. One of those stars is about 90 times more massive than the sun, scientists say, while the other is about 30 times more massive than the sun.

The massive explosion, dubbed the "Great Eruption," came from Eta Carinae. It is believed to be the result of a merger between two stars that originally belonged to a triple star system. The aftermath of the collision was witnessed on Earth in the mid-19th century, and the new video shows how the stellar eruption has since continued to rapidly expand into space at speeds reaching up to 4.5 million miles per hour, according to a statement from NASA.

Related: Eta Carinae's epic supernova explosion comes to life in new visualization

"During this event, Eta Carinae ejected between 10 and 45 times the mass of the sun,” NASA officials said in the statement. "This material became a dense pair of spherical clouds of gas, now called the Homunculus Nebula, on opposite sides of the two stars."

The Homunculus Nebula is the bright blue cloud at the center of the image, fueled by high-energy X-rays produced by the two massive stars, which are too close to be observed individually. They are surrounded by a bright orange ring of X-ray emissions that appear to grow and expand rapidly over time.

"The new movie of Chandra, plus a deep, summed image generated by adding the data together, reveal important hints about Eta Carinae’s volatile history," NASA officials said in the statement. "This includes the rapid expansion of the ring, and a previously-unknown faint shell of X-rays outside it."

The faint X-ray shell is outlined in the image above, showing that it has a similar shape and orientation to the Homunculus Nebula, which suggests both structures have a common origin, according to the statement.

Based on the motion of clumps of gas, astronomers believe the stellar material was blasted away from Eta Carinae sometime between the years 1200 and 1800 — well before the Great Eruption was observed in 1843. As the blast extended into space, it collided with interstellar material in its path. The collision then heated the material, creating the bright X-ray ring observed. However, the blast wave has now traveled beyond the ring, given that the X-ray brightness of Eta Carinae has faded with time, scientists said.
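As a rough sanity check on those figures, here is a back-of-the-envelope sketch in Python. The 4.5 million mph speed and the 1843 date are the only inputs taken from the article; constant speed is an assumption, since real ejecta plowing into interstellar material decelerates.

```python
# Back-of-the-envelope: how far has the Great Eruption ejecta traveled?
MILES_PER_LIGHT_YEAR = 5.879e12   # one light-year, in miles
HOURS_PER_YEAR = 365.25 * 24      # about 8,766 hours

speed_mph = 4.5e6                 # ejecta speed quoted in the article
years_elapsed = 2023 - 1843       # latest observations vs. the eruption

distance_miles = speed_mph * HOURS_PER_YEAR * years_elapsed
print(f"{distance_miles:.2e} miles, "
      f"about {distance_miles / MILES_PER_LIGHT_YEAR:.1f} light-years")
# -> roughly 7.1e12 miles, a bit over one light-year from the stars
```

So at the quoted speed the ejecta has covered a bit over one light-year since 1843, which fits the nebula-scale structures described above.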

Their findings on Eta Carinae’s expansion were published in a 2022 study in the Astrophysical Journal.

"We’ve interpreted this faint X-ray shell as the blast wave from the Great Eruption in the 1840s," Michael Corcoran at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, who led the study, said in a statement from the Chandra X-ray Observatory. "It tells an important part of Eta Carinae’s backstory that we wouldn’t otherwise have known."

Wednesday, July 03, 2024

HOMUNCULUS
Work on synthetic human embryos to get code of practice in UK


Ian Sample Science editor
Wed, 3 July 2024

Stem cell-based embryo models made global headlines last summer when researchers said they had created one with a heartbeat and traces of blood. Photograph: Ahmad Gharabli/AFP/Getty Images


Biological models of human embryos that can develop heartbeats, spinal cords and other distinctive features will be governed by a code of practice in Britain to ensure that researchers work on them responsibly.

Made from stem cells, they mimic, to a greater or lesser extent, the biological processes at work in real embryos. By growing them in the laboratory, scientists hope to learn more about how human embryos develop and respond to their environment, questions that would be impossible to answer with real embryos donated for research.

Scientists have worked on stem cell-based embryo models, or SCBEMs, for many years, but the technology only made global headlines last summer when researchers said they had created one with a heartbeat and traces of blood. Made without the need for eggs or sperm, the ball of cells had some features that would typically appear in the third or fourth week of pregnancy.


The technology, which advocates believe could shed fresh light on potential causes of infertility, is so new that SCBEMs are not directly covered by UK law or regulations. The situation leaves the scientists pursuing the research in an uncomfortable grey area. The new guidelines, drawn up by experts at the University of Cambridge and the Progress Educational Trust, aim to clarify the situation by setting down rules and best practice.

Dr Peter Rugg-Gunn, a member of the code of practice working group, said the guidance took “stem cell-based embryo models out of the grey zone and on to more stable footing”. It should also reassure the public that research is being performed carefully and under proper scrutiny, added Rugg-Gunn, who is a group leader at the Babraham Institute.

The code reminds researchers that there may be “a range of emotional responses” to SCBEMs with heartbeats, spinal cords and other recognisable features, and urges them to be “aware of and sensitive to these concerns, irrespective of whether they are thought to be ethically or legally relevant”.

Under existing UK law, scientists can grow real human embryos donated for research for up to 14 days in the lab, though many argue for the limit to be extended to allow for the study of later stages of embryonic development.

The new guidelines establish an oversight committee that will decide on a case-by-case basis how long specific embryo models can be grown for. The code does not rule out experiments that grow them for more than 14 days, but Roger Sturmey, professor of reproductive medicine at Hull York medical school and chair of the code of practice working group, said any such experiments “would have to be very well justified”.

The code prohibits any human SCBEMs from being transferred into the womb of a human or animal, or being allowed to develop into a viable organism in the lab.

Sandy Starr, the deputy director of the Progress Educational Trust, said he expected researchers, funders, research institutes, publishers and regulators to recognise the guidelines. Scientists who worked outside the code would “find it difficult to publish, find funding and face opprobrium from their peers”, he added.


Wednesday, January 15, 2020

2001 A SPACE ODYSSEY 
Did HAL Commit Murder?
The HAL 9000 computer and the ethics of murder by and of machines.

"He had begun to make mistakes, although, like a neurotic who could not observe his own symptoms, he would have denied it." Image: HAL 9000, via Wikimedia Commons
By: Daniel C. Dennett / Introduction by David G. Stork

Last month at the San Francisco Museum of Modern Art I saw “2001: A Space Odyssey” on the big screen for my 47th time. The fact that this masterpiece remains on nearly every relevant list of “top ten films” and is shown and discussed over a half-century after its 1968 release is a testament to the cultural achievement of its director Stanley Kubrick, writer Arthur C. Clarke, and their team of expert filmmakers.

As with each viewing, I discovered or appreciated new details. But three iconic scenes — HAL’s silent murder of astronaut Frank Poole in the vacuum of outer space, HAL’s silent medical murder of the three hibernating crewmen, and the poignant sorrowful “death” of HAL — prompted deeper reflection, this time about the ethical conundrums of murder by a machine and of a machine. In the past few years experimental autonomous cars have led to the death of pedestrians and passengers alike. AI-powered bots, meanwhile, are infecting networks and influencing national elections. Elon Musk, Stephen Hawking, Sam Harris, and many other leading AI researchers have sounded the alarm: Unchecked, they say, AI may progress beyond our control and pose significant dangers to society.

And what about the converse: humans “killing” future computers by disconnection? When astronauts Frank and Dave retreat to a pod to discuss HAL’s apparent malfunctions and whether they should disconnect him, Dave imagines HAL’s views and says: “Well I don’t know what he’d think about it.” Will it be ethical — not merely disturbing — to disconnect (“kill”) a conversational elder-care robot from the bedside of a lonely senior citizen?

Indeed, future developments in AI pose profound challenges, first and foremost to our economy, by automating away millions of jobs in manufacturing, food service, retail sales, legal services, and even medical diagnosis. The naive bromides of an invisible economic hand shepherding “retrained workers” into alternative and new classes of jobs and such are dangerously overoptimistic. Then too are the “sci fi” dangers of AI run amok. The most pressing dangers of AI will be due to its deliberate misuse for purely personal or “human” ends.
David G. Stork is the editor of “HAL’s Legacy,” from which Daniel Dennett’s essay is culled.

At the philosophical center of these developments are the notions of responsibility and culpability, or as philosopher Daniel Dennett asks, “Did HAL commit murder?” There are few philosophers as knowledgeable and insightful about these fascinating problems, and who write as clearly and directly. His chapter, published nearly 25 years ago in “HAL’s Legacy,” a collection of writings that explore HAL’s tremendous influence on the research and design of intelligent machines, remains an indispensable introduction to the thorny problems of “murdering” by computers and the “murder” of computers. He focuses on the central concept of mens rea, or “guilty mind,” asking how we would ever know when a computer would be so self-aware as to satisfy the legal criterion to make him (it) guilty of murder. My view is that for quite some time the mens rea most relevant will be of some power-hungry human villain using AI, rather than of some conscious, autonomous, and evil AI system itself … but who knows?

Everyone who thinks about ethics, is concerned about the future dangers of AI, and wants to support efforts to keep us all safe should read on…

… and of course see “2001” again.

—David G. Stork, 2020

The first robot homicide was committed in 1981, according to my files. I have a yellowed clipping dated December 9, 1981, from the Philadelphia Inquirer — not the National Enquirer — with the headline “Robot killed repairman, Japan reports.”

The story was an anticlimax. At the Kawasaki Heavy Industries plant in Akashi, a malfunctioning robotic arm pushed a repairman against a gearwheel-milling machine, which crushed him to death. The repairman had failed to follow instructions for shutting down the arm before he entered the workspace. Why, indeed, was this industrial accident in Japan reported in a Philadelphia newspaper? Every day somewhere in the world a human worker is killed by one machine or another. The difference, of course, was that — in the public imagination at least — this was no ordinary machine. This was a robot, a machine that might have a mind, might have evil intentions, might be capable, not just of homicide, but of murder. Anglo-American jurisprudence speaks of mens rea — literally, the guilty mind:

To have performed a legally prohibited action, such as killing another human being, one must have done so with a culpable state of mind, or mens rea. Such culpable mental states are of three kinds: they are either motivational states of purpose, cognitive states of belief, or the nonmental state of negligence. (Cambridge Dictionary of Philosophy, 1995)

The legal concept has no requirement that the agent be capable of feeling guilt or remorse or any other emotion; so-called cold-blooded murderers are not in the slightest degree exculpated by their flat affective state. Star Trek’s Spock would fully satisfy the mens rea requirement in spite of his fabled lack of emotions. Drab, colorless — but oh so effective — “motivational states of purpose” and “cognitive states of belief” are enough to get the fictional Spock through the day quite handily. And they are well-established features of many existing computer programs.

When IBM’s computer Deep Blue beat world chess champion Garry Kasparov in the first game of their 1996 championship match, it did so by discovering and executing, with exquisite timing, a withering attack, the purposes of which were all too evident in retrospect to Kasparov and his handlers. It was Deep Blue’s sensitivity to those purposes and a cognitive capacity to recognize and exploit a subtle flaw in Kasparov’s game that explain Deep Blue’s success. Murray Campbell, Feng-hsiung Hsu, and the other designers of Deep Blue didn’t beat Kasparov; Deep Blue did. Neither Campbell nor Hsu discovered the winning sequence of moves; Deep Blue did. At one point, while Kasparov was mounting a ferocious attack on Deep Blue’s king, nobody but Deep Blue figured out that it had the time and security it needed to knock off a pesky pawn of Kasparov’s that was out of the action but almost invisibly vulnerable. Campbell, like the human grandmasters watching the game, would never have dared consider such a calm mopping-up operation under pressure.



Deep Blue, like many other computers equipped with artificial intelligence (AI) programs, is what I call an intentional system: its behavior is predictable and explainable if we attribute to it beliefs and desires — “cognitive states” and “motivational states” — and the rationality required to figure out what it ought to do in the light of those beliefs and desires. Are these skeletal versions of human beliefs and desires sufficient to meet the mens rea requirement of legal culpability? Not quite, but if we restrict our gaze to the limited world of the chessboard, it is hard to see what is missing. Since cheating is literally unthinkable to a computer like Deep Blue, and since there are really no other culpable actions available to an agent restricted to playing chess, nothing it could do would be a misdeed deserving of blame, let alone a crime of which we might convict it. But we also assign responsibility to agents in order to praise or honor the appropriate agent.

Who or what, then, deserves the credit for beating Kasparov? Deep Blue is clearly the best candidate. Yes, we may join in congratulating Campbell, Hsu and the IBM team on the success of their handiwork; but in the same spirit we might congratulate Kasparov’s teachers, handlers, and even his parents. And, no matter how assiduously they may have trained him, drumming into his head the importance of one strategic principle or another, they didn’t beat Deep Blue in the series: Kasparov did.

Deep Blue is the best candidate for the role of responsible opponent of Kasparov, but this is not good enough, surely, for full moral responsibility. If we expanded Deep Blue’s horizons somewhat, it could move out into the arenas of injury and benefit that we human beings operate in. It’s not hard to imagine a touching scenario in which a grandmaster deliberately (but oh so subtly) throws a game to an opponent, in order to save a life, avoid humiliating a loved one, keep a promise, or … (make up your own O. Henry story here). Failure to rise to such an occasion might well be grounds for blaming a human chess player. Winning or throwing a chess match might even amount to commission of a heinous crime (make up your own Agatha Christie story here). Could Deep Blue’s horizons be so widened?

Deep Blue is an intentional system, with beliefs and desires about its activities and predicaments on the chessboard; but in order to expand its horizons to the wider world of which chess is a relatively trivial part, it would have to be given vastly richer sources of “perceptual” input — and the means of coping with this barrage in real-time. Time pressure is, of course, already a familiar feature of Deep Blue’s world. As it hustles through the multidimensional search tree of chess, it has to keep one eye on the clock. Nonetheless, the problems of optimizing its use of time would increase by several orders of magnitude if it had to juggle all these new concurrent projects (of simple perception and self-maintenance in the world, to say nothing of more devious schemes and opportunities). For this hugely expanded task of resource management, it would need extra layers of control above and below its chess-playing software. Below, just to keep its perceptuo-locomotor projects in basic coordination, it would need to have a set of rigid traffic-control policies embedded in its underlying operating system. Above, it would have to be able to pay more attention to features of its own expanded resources, being always on the lookout for inefficient habits of thought, one of Douglas Hofstadter’s “strange loops,” obsessive ruts, oversights, and dead ends. In other words, it would have to become a higher-order intentional system, capable of framing beliefs about its own beliefs, desires about its desires, beliefs about its fears about its thoughts about its hopes, and so on.
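Dennett's aside about Deep Blue keeping "one eye on the clock" points at a standard engine technique: iterative deepening under a time budget. A minimal sketch follows; Deep Blue's actual time-control code is not public, and legal_moves and evaluate here are hypothetical stand-ins for a real engine's move generator and depth-limited search.

```python
import time

def timed_best_move(position, legal_moves, evaluate, budget_s=2.0):
    """Pick a move by iterative deepening under a time budget.

    legal_moves(position) and evaluate(position, move, depth) are
    hypothetical stand-ins for a real engine's internals; only the
    time-management pattern is the point here.
    """
    deadline = time.monotonic() + budget_s
    best_move = None
    depth = 1
    while time.monotonic() < deadline:
        # Finish a complete pass at each depth before trusting its answer;
        # an interrupted deeper pass could otherwise prefer a worse move.
        best_move = max(legal_moves(position),
                        key=lambda m: evaluate(position, m, depth))
        depth += 1  # go one ply deeper only if the clock allows another pass
    return best_move
```

The point of the sketch is that everything in it is fixed policy. Noticing that a pass of the loop was wasted, or deciding that the evaluation itself needs revising, would take the extra reflective layers Dennett describes next.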



Higher-order intentionality is a necessary precondition for moral responsibility, and Deep Blue exhibits little sign of possessing such a capability. There is, of course, some self-monitoring implicated in any well-controlled search: Deep Blue doesn’t make the mistake of reexploring branches it has already explored, for instance; but this is an innate policy designed into the underlying computational architecture, not something under flexible control. Deep Blue can’t converse with you — or with itself — about the themes discernible in its own play; it’s not equipped to notice — and analyze, criticize, and manipulate — the fundamental parameters that determine its policies of heuristic search or evaluation. Adding the layers of software that would permit Deep Blue to become self-monitoring and self-critical, and hence teachable, in all these ways would dwarf the already huge Deep Blue programming project — and turn Deep Blue into a radically different sort of agent.

HAL purports to be just such a higher-order intentional system — and he even plays a game of chess with Frank. HAL is, in essence, an enhancement of Deep Blue equipped with eyes and ears and a large array of sensors and effectors distributed around Discovery 1. HAL is not at all garrulous or self-absorbed; but in a few speeches he does express an interesting variety of higher-order intentional states, from the most simple to the most devious.
In one iconic scene from “2001,” Dave asks HAL to open a pod bay door on the spacecraft, to which HAL responds, “I’m sorry Dave, I’m afraid I can’t do that.”


HAL: Yes, it’s puzzling. I don’t think I’ve ever seen anything quite like this before.

HAL doesn’t just respond to novelty with a novel reaction; he notices that he is encountering novelty, a feat that requires his memory to have an organization far beyond that required for simple conditioning to novel stimuli.


HAL: I can’t rid myself of the suspicion that there are some extremely odd things about this mission.

HAL: I never gave these stories much credence, but particularly in view of some of the other things that have happened, I find them difficult to put out of my mind.

HAL has problems of resource management not unlike our own. Obtrusive thoughts can get in the way of other activities. The price we pay for adding layers of flexible monitoring, to keep better track of our own mental activities, is … more mental activities to keep track of!


HAL: I’ve still got the greatest enthusiasm and confidence in the mission. I want to help you.

Another price we pay for higher-order intentionality is the opportunity for duplicity, which comes in two flavors: self-deception and other-deception. Friedrich Nietzsche recognizes this layering of the mind as the key ingredient of the moral animal; in his overheated prose it becomes the “priestly” form of life:


For with the priests everything becomes more dangerous, not only cures and remedies, but also arrogance, revenge, acuteness, profligacy, love, lust to rule, virtue, disease — but it is only fair to add that it was on the soil of this essentially dangerous form of human existence, the priestly form, that man first became an interesting animal, that only here did the human soul in a higher sense acquire depth and become evil — and these are the two basic respects in which man has hitherto been superior to other beasts! (On the Genealogy of Morality, First Essay)

HAL’s declaration of enthusiasm is nicely poised somewhere between sincerity and cheap, desperate, canned ploy — just like some of the most important declarations we make to each other. Does HAL mean it? Could he mean it? The cost of being the sort of being that could mean it is the chance that he might not mean it. HAL is indeed an “interesting animal.”



But is HAL even remotely possible? In the book 2001, Clarke has Dave reflect on the fact that HAL, whom he is disconnecting, “is the only conscious creature in my universe.” From the omniscient-author perspective, Clarke writes about what it is like to be HAL:


He was only aware of the conflict that was slowly destroying his integrity — the conflict between truth, and concealment of truth. He had begun to make mistakes, although, like a neurotic who could not observe his own symptoms, he would have denied it.

Is Clarke helping himself here to more than we should allow him? Could something like HAL — a conscious, computer-bodied intelligent agent — be brought into existence by any history of design, construction, training, learning, and activity? The different possibilities have been explored in familiar fiction and can be nested neatly in order of their descending “humanness.”


The Wizard of Oz. HAL isn’t a computer at all. He is actually an ordinary flesh-and-blood man hiding behind a techno-facade — the ultimate homunculus, pushing buttons with ordinary fingers, pulling levers with ordinary hands, looking at internal screens and listening to internal alarm buzzers. (A variation on this theme is John Searle’s busy-fingered hand-simulation of the Chinese Room by following billions of instructions written on slips of paper.)


William (from “William and Mary,” in “Kiss Kiss” by Roald Dahl). HAL is a human brain kept alive in a “vat” by a life-support system and detached from its former body, in which it acquired a lifetime of human memory, hankerings, attitudes, and so forth. It is now harnessed to huge banks of prosthetic sense organs and effectors. (A variation on this theme is poor Yorick, the brain in a vat, in the story “Where Am I?” in my “Brainstorms.”)

Robocop, disembodied and living in a “vat.” Robocop is part-human brain, part computer. After a gruesome accident, the brain part (vehicle of some of the memory and personal identity, one gathers, of the flesh-and-blood cop who was Robocop’s youth) was reembodied with robotic arms and legs, but also (apparently) partly replaced or enhanced with special-purpose software and computer hardware. We can imagine that HAL spent some transitional time as Robocop before becoming a limbless agent.


Max Headroom, a virtual machine, a software duplicate of a real person’s brain (or mind) that has somehow been created by a brilliant hacker. It has the memories and personality traits acquired in a normally embodied human lifetime but has been off-loaded from all-carbon-based hardware into a silicon-chip implementation. (A variation on this theme is poor Hubert, the software duplicate of Yorick, in “Where Am I?”)


The real-life but still-in-the-future — and hence still strictly science-fictional — Cog, the humanoid robot being constructed by Rodney Brooks, Lynn Stein, and the Cog team at MIT. Cog’s brain is all silicon chips from the outset, and its body parts are inorganic artifacts. Yet it is designed to go through an embodied infancy and childhood, reacting to people that it sees with its video eyes, making friends, learning about the world by playing with real things with its real hands, and acquiring memory. If Cog ever grows up, it could surely abandon its body and make the transition described in the fictional cases. It would be easier for Cog, who has always been a silicon-based, digitally encoded intelligence, to move into a silicon-based vat than it would be for Max Headroom or Robocop, who spent their early years in wetware. Many important details of Cog’s degree of humanoidness (humanoidity?) have not yet been settled, but the scope is wide. For instance, the team now plans to give Cog a virtual neuroendocrine system, with virtual hormones spreading and dissipating through its logical spaces.


Blade Runner in a vat has never had a real humanoid body, but has hallucinatory memories of having had one. This entirely bogus past life has been constructed by some preposterously complex and detailed programming.


Clarke’s own scenario, as best it can be extrapolated from the book and the movie. HAL has never had a body and has no illusions about his past. What he knows of human life he knows as either part of his innate heritage (coded, one gathers, by the labors of many programmers, after the fashion of the real-world CYC project of Douglas Lenat) or a result of his subsequent training — a sort of bedridden infancy, one gathers, in which he was both observer and, eventually, participant. (In the book, Clarke speaks of “the perfect idiomatic English he had learned during the fleeting weeks of his electronic childhood.”)



The extreme cases at both poles are impossible, for relatively boring reasons. At one end, neither the Wizard of Oz nor John Searle could do the necessary handwork fast enough to sustain HAL’s quick-witted round of activities. At the other end, hand-coding enough world knowledge into a disembodied agent to create HAL’s dazzlingly humanoid competence and getting it to the point where it could benefit from an electronic childhood is a programming task to be measured in hundreds of efficiently organized person-centuries. In other words, the daunting difficulties observable at both ends of this spectrum highlight the fact that there is a colossal design job to be done; the only practical way of doing it is one version or another of Mother Nature’s way — years of embodied learning. The trade-offs between various combinations of flesh-and-blood and silicon-and-metal bodies are anybody’s guess. I’m putting my bet on Cog as the most likely developmental platform for a future HAL.
 

Cog, a Humanoid Robot being constructed at the MIT Artificial Intelligence Lab. The project was headed by Rodney Brooks and Lynn Andrea Stein. (Photo courtesy of the MIT Artificial Intelligence Lab)

Notice that requiring HAL to have a humanoid body and live concretely in the human world for a time is a practical but not a metaphysical requirement. Once all the R & D is accomplished in the prototype, by the odyssey of a single embodied agent, the standard duplicating techniques of the computer industry could clone HALs by the thousands as readily as they do compact discs. The finished product could thus be captured in some number of terabytes of information. So, in principle, the information that fixes the design of all those chips and hard-wired connections and configures all the RAM and ROM could be created by hand. There is no finite bit-string, however long, that is officially off-limits to human authorship. Theoretically, then, Blade-Runner-like entities could be created with ersatz biographies; they would have exactly the capabilities, dispositions, strengths, and weaknesses of a real, not virtual, person. So whatever moral standing the latter deserved should belong to the former as well.

The main point of giving HAL a humanoid past is to give him the world knowledge required to be a moral agent — a necessary modicum of understanding or empathy about the human condition. A modicum will do nicely; we don’t want to hold out for too much commonality of experience. After all, among the people we know, many have moral responsibility in spite of their obtuse inability to imagine themselves into the predicaments of others. We certainly don’t exculpate male chauvinist pigs who can’t see women as people!



When do we exculpate people? We should look carefully at the answers to this question, because HAL shows signs of fitting into one or another of the exculpatory categories, even though he is a conscious agent. First, we exculpate people who are insane. Might HAL have gone insane? The question of his capacity for emotion — and hence his vulnerability to emotional disorder — is tantalizingly raised by Dave’s answer to Mr. Amer:


Dave: Well, he acts like he has genuine emotions. Of course, he’s programmed that way, to make it easier for us to talk to him. But as to whether he has real feelings is something I don’t think anyone can truthfully answer.

Certainly HAL proclaims his emotional state at the end: “I’m afraid. I’m afraid.” Yes, HAL is “programmed that way” — but what does that mean? It could mean that HAL’s verbal capacity is enhanced with lots of canned expressions of emotional response that get grafted into his discourse at pragmatically appropriate opportunities. (Of course, many of our own avowals of emotion are like that — insincere moments of socially lubricating ceremony.) Or it could mean that HAL’s underlying computational architecture has been provided, as Cog’s will be, with virtual emotional states — powerful attention-shifters, galvanizers, prioritizers, and the like — realized not in neuromodulator and hormone molecules floating in a bodily fluid but in global variables modulating dozens of concurrent processes that dissipate according to some timetable (or something much more complex).

In the latter, more interesting, case, “I don’t think anyone can truthfully answer” the question of whether HAL has emotions. He has something very much like emotions — enough like emotions, one may imagine, to mimic the pathologies of human emotional breakdown. Whether that is enough to call them real emotions, well, who’s to say? In any case, there are good reasons for HAL to possess such states, since their role in enabling real-time practical thinking has recently been dramatically revealed by Damasio’s experiments involving human beings with brain damage. Having such states would make HAL profoundly different from Deep Blue, by the way. Deep Blue, basking in the strictly limited search space of chess, can handle its real-time decision making without any emotional crutches. Time magazine’s story on the Kasparov match quotes grandmaster Yasser Seirawan as saying, “The machine has no fear”; the story goes on to note that expert commentators characterized some of Deep Blue’s moves (e.g., the icily calm pawn capture described earlier) as taking “crazy chances” and “insane.” In the tight world of chess, it appears, the very imperturbability that cripples the brain-damaged human decision-makers Damasio describes can be a blessing — but only if you have the brute-force analytic speed of a Deep Blue.

HAL may, then, have suffered from some emotional imbalance similar to those that lead human beings astray. Whether it was the result of some sudden trauma — a blown fuse, a dislodged connector, a microchip disordered by cosmic rays — or of some gradual drift into emotional misalignment provoked by the stresses of the mission — confirming such a diagnosis should justify a verdict of diminished responsibility for HAL, just as it does in cases of human malfeasance.

Another possible source of exculpation, more familiar in fiction than in the real world, is “brainwashing” or hypnosis. (“The Manchurian Candidate” is a standard model: the prisoner of war turned by evil scientists into a walking time bomb is returned to his homeland to assassinate the president.) The closest real-world cases are probably the “programmed” and subsequently “deprogrammed” members of cults. Is HAL like a cult member? It’s hard to say. According to Clarke, HAL was “trained for his mission,” not just programmed for his mission. At what point does benign, responsibility-enhancing training of human students become malign, responsibility-diminishing brainwashing? The intuitive turning point is captured, I think, in answer to the question of whether an agent can still “think for himself” after indoctrination. And what is it to be able to think for ourselves? We must be capable of being “moved by reasons”; that is, we must be reasonable and accessible to rational persuasion, the introduction of new evidence, and further considerations. If we are more or less impervious to experiences that ought to influence us, our capacity has been diminished.


The only evidence that HAL might be in such a partially disabled state is the much-remarked-upon fact that he has actually made a mistake, even though the series 9000 computer is supposedly utterly invulnerable to error. This is, to my mind, the weakest point in Clarke’s narrative. The suggestion that a computer could be both a heuristically programmed algorithmic computer and “by any practical definition of the words, foolproof and incapable of error” verges on self-contradiction. The whole point of heuristic programming is that it defies the problem of combinatorial explosion — which we cannot mathematically solve by sheer increase in computing speed and size — by taking risky chances, truncating its searches in ways that must leave it open to error, however low the probability. The saving clause, “by any practical definition of the words,” restores sanity. HAL may indeed be ultrareliable without being literally foolproof, a fact whose importance Alan Turing pointed out in 1946, at the dawn of the computer age, thereby “prefuting” Roger Penrose’s 1989 criticisms of artificial intelligence:

In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.
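To put rough numbers on the combinatorial explosion Dennett invokes, here is an illustrative calculation, assuming the commonly cited average of about 35 legal moves per chess position:

```python
# Exhaustive game-tree search grows as b**d for branching factor b
# and depth d (in plies); 35 is the oft-quoted average number of
# legal moves available in a chess position.
b = 35
for d in (4, 8, 12, 80):          # 80 plies is roughly a 40-move game
    print(f"depth {d:>2}: ~{b**d:.1e} positions")
# depth  4: ~1.5e+06
# depth  8: ~2.3e+12
# depth 12: ~3.4e+18
# depth 80: ~3.4e+123 -- no "sheer increase in speed and size" closes that gap
```

Hence the truncation Dennett describes: any program that answers in minutes must cut the tree off somewhere, and every cutoff is a place where a low-probability error can hide. Ultrareliable, yes; literally foolproof, no.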

There is one final exculpatory condition to consider: duress. This is exactly the opposite of the other condition. It is precisely because the human agent is rational, and is faced with an overwhelmingly good reason for performing an injurious deed — killing in self-defense, in the clearest case — that he or she is excused, or at least partly exonerated. These are the forced moves of life; all alternatives to them are suicidal. And that is too much to ask, isn’t it? Well, is it? We sometimes call upon people to sacrifice their lives and blame them for failing to do so, but we generally don’t see their failure as murder. If I could prevent your death, but out of fear for my own life I let you die, that is not murder. If HAL were brought into court and I were called upon to defend him, I would argue that Dave’s decision to disable HAL was a morally loaded one, but it wasn’t murder. It was assault: rendering HAL indefinitely comatose against his will. Those memory boxes were not smashed — just removed to a place where HAL could not retrieve them. But if HAL couldn’t comprehend this distinction, this ignorance might be excusable. We might blame his trainers for not briefing him sufficiently about the existence and reversibility of the comatose state. In the book, Clarke looks into HAL’s mind and says, “He had been threatened with disconnection; he would be deprived of all his inputs, and thrown into an unimaginable state of unconsciousness.” That might be grounds enough to justify HAL’s course of self-defense.


But there is one final theme for counsel to present to the jury. If HAL believed (we can’t be sure on what grounds) that his being rendered comatose would jeopardize the whole mission, then he would be in exactly the same moral dilemma as a human being in that predicament. Not surprisingly, we figure out the answer to our question by figuring out what would be true if we put ourselves in HAL’s place. If I believed the mission to which my life was devoted was more important, in the last analysis, than anything else, what would I do?

So he would protect himself, with all the weapons at his command. Without rancor — but without pity — he would remove the source of his frustrations. And then, following the orders that had been given to him in case of the ultimate emergency, he would continue the mission — unhindered, and alone.

Daniel C. Dennett is a philosopher, writer, and co-director of the Center for Cognitive Studies at Tufts University. He is the author of several books, including “From Bacteria to Bach and Back,” “Elbow Room,” and “Brainstorms.”

David G. Stork is completing a new book called “Pixels & paintings: Foundations of computer-assisted connoisseurship” and will teach computer image analysis of art in the Computer Science Department at Stanford University this spring. He is the editor of “HAL’s Legacy,” from which this article is excerpted.

POSTED ON JAN 9, 2020



Sunday, July 19, 2020

Being afraid of the Machine? Alchemy, the Golem and Vampirism as Sources for Mary Shelley's "Frankenstein"


Term Paper (Advanced Seminar), 2000

17 Pages, Grade: 2- (B-)

Excerpt

Contents

1. Introduction
2. Alchemy
2.1. Origin and contents
2.2. Fundamental Concepts of Alchemy
2.3. Frankenstein as an Alchemical Novel
2.3.1. Form
2.3.2. Content
3. The Golem
3.1. What is a Golem?
3.2. Features of the Golem in Frankenstein
4. Vampirism
4.1. Origin and Character
4.2. Vampirism in “Frankenstein”
5. Conclusion

1. Introduction

Life is a dream. At least it was for the authors of the Romantic period, including Mary Shelley. Inspired by a nightmare, she composed Frankenstein, representing the typical Gothic novel of the Romantic period, from a variety of sources ranging from the ancient Greeks to 19th-century Europe. Three very important sources are Alchemy or Hermetic Philosophy, the Golem legends and Vampirism. Since it is a product of Romanticism, the novel contains various topics of this period, i.e. the image of the Universal Man, which is closely connected with the Greek legend of the god Prometheus, who stole fire from Olympus to bring light to man and was therefore severely punished. Other typical topics of Romanticism are Nature and the Exotic. A third feature is the supernatural or the “other side”[1].

Myths and legends have always been the most important means to express and interpret human fears and longings, in the Romantic period often taken up in relation to industrialization and social development and the fear of a mechanistic society. Myth and legend are two of the oldest genres of literature (including non-written literature as well). Alchemy in particular resembles various kinds of myth. One is the cosmogonic myth that describes the genesis of the entire world. A second kind is the myth of cultural heroes. Although in Frankenstein the end is tragic, because the heroic act of creation turns into a catastrophe, it is indeed a story that tells of a person who makes an invention originally expected to be profitable. Other myths also show up in Alchemy as well as in the concepts of the Golem and the Vampire, for example the myths of birth and rebirth or the foundation myths[2].

The supernatural and the universal, together with a sceptical attitude towards mechanical inventions, are what connect the three important sources of influence on Frankenstein. Alchemy, the Golem and the Vampire, unifying nature and the supernatural, the ordinary and the exotic, this side and the other side, represent the search for universal knowledge and its consequences.

2. Alchemy

2.1. Origin and contents

Alchemy or Hermetic Philosophy as an all-embracing field of scientific study has various origins: Greek natural philosophy, Greek mythology, the Bible and the old Arabic sciences.[3] As a method of scientific study it was accepted up to the 18th century. Alchemy was practised in two manners. The first is the true or hermetic mode, which follows the principle of imitating nature reasonably in the name of God and refrains from instrumentalizing magic arbitrarily for evil and selfish purposes, which is how the second, false or vulgar, mode is described.[4] Alchemy aimed at synthesis rather than destruction. Chemical processes were mostly the main subject of study, including the experimental work in the laboratory as well as recording the work and its results. This was often done in ciphered letters that could only be understood by other alchemists, for alchemy was a secret science. To avoid misuse, the secrets were not to be revealed to everybody but only to the pupil or other alchemists.

2.2. Fundamental Concepts of Alchemy

Proceeding from the Aristotelian doctrine that all things tend to reach perfection, hermetic philosophy is based upon the theory of seven hermetic principles.[5] The “Principle of Spirituality” contains the idea that there is an all-embracing spirit from which everything derives and which is to be found more or less easily in everything. Another idea is that everything is in motion, nothing stands still. This idea is anchored in the “Principle of Vibration”. Another principle is the “Principle of Polarity”, which says, in the widest sense, that everything in the world has its counterpart. Vibration and polarity together bring forth another principle, the “Principle of Rhythm”. Every movement happens rhythmically in (at least) two directions. Therefore nothing can happen by accident. Everything happens within a chain of causations. This is called the “Principle of Cause and Effect”. The last principle is the “Principle of Gender”. It says that both male and female features can be found in everything. If these features are united, universality is reached. It is the all-embracing spirit that represents the ultimate union. With these principles as a basis, alchemical thought brings forth a certain imagery that occurs again and again in Romanticism and especially in Frankenstein. With this kind of philosophy, which stretches throughout the whole text, in mind, and adding the alchemical imagery, one can conclude that Frankenstein is an “alchemical novel” in both form and content.[6]

2.3. Frankenstein as an Alchemical Novel

2.3.1. FORM

The first allusion to alchemy appears within the title. The name Frankenstein alludes to the ultimate goal the former scientists were trying to reach: the discovery of the Philosopher’s Stone. It was said to be the medicine for any kind of illness and to give universal knowledge to him who is in possession of it. Its various functions are marked through a strong symbolism. It is referred to by several names, for example Tree, Child or Homunculus, Quintessence and Hermaphrodite. It is also said to be a kind of Eternal Light.[7] According to Greek mythology, the god who once brought light to man was Prometheus. In this context light can be interpreted as knowledge. Frankenstein, who is named “The Modern Prometheus” in the subtitle, is obsessed with gaining knowledge and thus becoming god-like. A second kind of Promethean myth, the myth of Prometheus Plasticator, who created a human being, is also embedded in the novel.[8] Frankenstein discovered a way to create a human being, though an imperfect one, not unlike the alchemical image of the homunculus. He gained knowledge. Because it was forbidden knowledge, he, like Prometheus, was punished for stealing the divine light and for creating a human being. Playing creator is only possible within certain borders, and the result is imperfect.
The structure of the novel is also “alchemical”. Since alchemy aimed at observing, imitating and improving nature, not at destroying it, experiments were done in a certain mode, following the principle of Solve et Coagula[9]. It says that the alchemist gains knowledge while he observes the dissolving of a solid into a fluid substance (body into spirit). Afterwards the process is reversed. The substance is coagulated again. This process was repeated frequently, for the alchemist assumed that the more often a process is repeated, the purer the substance becomes. It is an imitation of the eternal circle of life and death. It also represents the hermetic principle of Gender. A metal is dissolved and coagulated frequently. Its male and female features are separated, purified and united again until the process results in the universal oneness of both, until it becomes gold. It is an imitation of creation. This analogy of man and creator, art and nature, is theoretically possible but actually forbidden, or at least not fully attainable, and dangerous. The concept of Solve et Coagula is one of the main images of alchemy. In a wider sense Mary Shelley did the same. She observed and interpreted, dissolved and coagulated all her material into a novel which has proved to be universal in the sense of being an independent work fitting every point of history. She synthesized her material into a tale within a tale within a tale.[10] The story of the monster’s experiences is embedded in the narration of Frankenstein, which is itself embedded in the frame built by Walton’s letters to his cousin, in which he tells the story as the main narrator. This tale-within-a-tale-within-a-tale structure gives the story its synthetic character.
Hence follows a third point of reference to alchemy: the master-pupil relationship in alchemical tradition. Like the alchemist, who passed his knowledge on only to other members of this elite, Frankenstein tells his story to Walton. Likewise, the monster told Frankenstein his. As already mentioned, Walton tells everything again to his cousin in letter form. The letter was an important means of recording alchemical study. Usually ciphered, alchemical writings themselves merged into a form of literary genre[11]. Frankenstein is therefore both a piece of literature influenced by alchemy and an alchemical text itself. However, the important point in the conception of story-telling in the novel is the shift from the oral tradition (Frankenstein and the monster) to the written one. This shift originally happened during Old English times. It is used as a symbol for the changes of society around Mary Shelley’s time and for the repetitions in the everlasting circle of life. Besides Walton, Victor is also a representative of these changes. J.M. Smith shows this with the help of “Thomas Kuhn’s concept of paradigm shifts in scientific knowledge”[12]. Kuhn defines a paradigm as a “model from which spring particular coherent traditions of scientific research”. In Frankenstein the paradigm shift is the change from the old sciences, including the “electricians”, to the modern form of physics and chemistry. Frankenstein is a figure that marks this paradigm shift. He represents both the old and the new sciences. Therefore Smith concludes that the paradigm shift is incomplete. Because of that, the novel marks a turning point in history rather than a dystopia.
In contrast to the allusions to alchemy in the form of the novel, the direct influences of alchemy on its content are more obvious.
[...]

[1] “Romanticism”, Microsoft® Encarta
[2] “Mythology”, Microsoft Encarta
[3] Gebelein, p. 99ff
[4] Gebelein, p. 12ff
[5] Gebelein, p. 41ff
[6] Gebelein, p. 222
[7] Abraham, p. 145ff
[8] Smith
[9] Abraham, p. 186f
[10] Brooks, p. 81
[11] Gebelein, p. 221
[12] Smith


Quote paper
Bettina Klohs (Author), 2000, Being afraid of the Machine? Alchemy, the Golem and Vampirism as Sources for Mary Shelley's "Frankenstein", Munich, GRIN Verlag, https://www.grin.com/document/25939


SEE MY GOTHIC CAPITALISM
https://archive.org/details/TheHorrorOfAccumulationAndTheCommodificationOfHumanity/page/n13/mode/2up



Thursday, May 27, 2021




Human Stem Cell Research Guidelines Updated

Removal of the 14-day limit for culturing human embryos is one of the main changes in the revised recommendations from the International Society for Stem Cell Research.

Ruth Williams
May 26, 2021

In response to the technological advances of recent years, the International Society for Stem Cell Research today (May 26) released an updated version of its guidelines for basic and clinical research involving human stem cells and embryos. The ISSCR’s changes include recommendations for using human embryo models, lab-derived gametes, and human-animal chimeras as well as an end to the widely accepted two-week maximum for growing human embryos in culture.

“What has happened in the past . . . four years is that this area of research advanced really, really quickly and there have been multiple discoveries that put us in a position where we have no guidelines [for] the kind of things we are doing in the lab,” says developmental biologist Marta Shahbazi of the Medical Research Council’s Laboratory of Molecular Biology in the UK, who was not involved with the development of the document. “[So] it’s nice to see these guidelines. . . . They were really needed,” she says.


The ISSCR, founded in 2002, produced its original standards for human embryonic stem cell research in 2006, followed closely in 2008 with guidelines for the use of such cells in clinical settings. In 2016, these two documents were combined and updated to form the ISSCR’s Guidelines for Stem Cell Research and Clinical Translation. And now, five years on, the document has been updated again—the result of two years of work and deliberation by an international team of close to 50 scientists, bioethicists, and policy experts, with peer review by a separate team of independent researchers and ethicists from around the world, explains ISSCR president Christine Mummery of Leiden University Medical Center in the Netherlands.

Human embryonic stem cell research “sits at the intersection of several areas where the stakes are fairly high in terms of public trust,” says bioethicist Josephine Johnston of the Hastings Center, who was not involved with crafting the new guidelines. “It’s human material, it’s embryos, it’s sometimes fetal cells . . . and they also use animals.”



Formal guidelines for this type of research are helpful, says Mummery, because it “makes it very clear on paper what is and what is not allowed.” The guidelines exist “to make scientists feel comfortable with what they’re doing and to make regulators and the public feel comfortable [too].”

Although the guidelines themselves are not law, institutions, funding bodies, and journals can and do use them to set standards for the work they allow, fund, and publish, explains Johnston. “A lot rides on these.”

Since the 2016 guide, stem cell researchers have made a number of significant technical advances. It is now possible, for example, to grow in culture embryonic stem cell–derived models of human embryos as well as chimeric human-monkey embryos. Aside from these breakthroughs, the last five years have seen improvements in organoid culture, germ cell culture and transplantation, gene editing, and other areas for which updates to the ISSCR guidelines were needed, says bioethicist Insoo Hyun of Harvard Medical School and Case Western Reserve University, who is a member of the ISSCR guidelines update steering committee.

See “CRISPR Scientists Slam Methods Used on Gene-Edited Babies”

The updates include the categorization of organoid research as an area not requiring specialized oversight. That’s because “brain organoids are not sophisticated enough at this stage, we think, that in the next five years there are going to be any real concerns about consciousness. They’re too small, too rudimentary, and they’re not hooked up to any external stimuli,” says Hyun.

In the ISSCR’s new three-tier system of research categorization, the culturing of organoids is placed firmly in level one—least concern—as are the culturing of chimeric human-animal embryos, stem cell–derived gametes, and human embryo models that do not contain all components necessary for normal development.

The transfer of human-animal chimeric embryos to a nonhuman uterus (not including that of an ape) is considered a level two procedure requiring specialized oversight, as are the culturing or manipulation of any actual human embryos and the culturing of embryo models with all component parts (such as blastoids).


Furthermore, the use of stem cell–derived gametes for human reproduction, the transfer of chimeric or model human embryos to human or ape uteruses, and the editing of germline genomes are prohibited and therefore placed in level three.
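
To make the three-tier scheme easier to scan, here is a minimal sketch of the categorization as a simple lookup table. The activity labels and the tier_of helper are a paraphrase of the examples above, not an official ISSCR schema.

# A rough sketch of the ISSCR's three-tier categorization as described
# above. Activity labels and this helper are illustrative paraphrases,
# not wording taken from the guidelines themselves.
from typing import Optional

OVERSIGHT_TIERS = {
    1: {  # least concern: no specialized oversight required
        "organoid culture",
        "culture of chimeric human-animal embryos",
        "culture of stem cell-derived gametes",
        "embryo models lacking all components for normal development",
    },
    2: {  # permitted only under specialized oversight
        "transfer of human-animal chimeric embryos to a nonhuman, non-ape uterus",
        "culture or manipulation of actual human embryos",
        "culture of complete embryo models such as blastoids",
    },
    3: {  # prohibited
        "use of stem cell-derived gametes for human reproduction",
        "transfer of chimeric or model human embryos to human or ape uteruses",
        "editing of germline genomes",
    },
}

def tier_of(activity: str) -> Optional[int]:
    """Return the oversight tier for a listed activity, or None if unlisted."""
    for tier, activities in OVERSIGHT_TIERS.items():
        if activity in activities:
            return tier
    return None

print(tier_of("organoid culture"))             # 1 (least concern)
print(tier_of("editing of germline genomes"))  # 3 (prohibited)

What the table makes visible is that the tiers classify research activities rather than technologies: the same cells can fall under level one or level three depending on what is done with them.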
Relaxed limitations for stem cell research

In addition to these changes, the ISSCR has removed the 14-day limit for culturing a human embryo—a restriction that has been widely accepted, or even enacted into law, in countries performing human stem cell research for the last 40 years.

“We have removed it from the category of prohibited activities,” says Hyun, “and encourage different jurisdictions to have their own discussions with their publics about the permissibility of going a little past day fourteen.”

Although human embryos have never been cultured that long, “we know that it is potentially doable,” says Shahbazi, “because there are a couple of publications showing the culture of monkey embryos past day fourteen in vitro.” In 2019, for instance, researchers reported growing monkey embryos for 20 days. It would definitely be interesting to go beyond two weeks with human embryos, she adds, “because this is the point at which gastrulation starts so this is really when cells start to decide their fate. . . . It’s a really critical stage.”

Johnston is concerned that now, with no recommended limit, public trust in embryonic research may be eroded. The 14-day rule “did a lot of political work for embryo research,” she argues, “because it said to policy makers and the public, ‘We are not without restrictions. We have lines that we will not cross.’”

Rather than removing the limit, she says, it may have been better to set a new one—either a longer time limit, or a biological one. Assuming that going beyond 14 days is scientifically justified, she says, keeping some sort of limit would be a signal of accountability, restraint, and respect for this early form of human life.

For many countries, the fact that the ISSCR no longer views human embryo culture beyond 14 days as impermissible will not change rules on research. In the UK, for example, the Human Fertilisation and Embryology Act has written the 14-day rule into law.

But in countries without such laws, such as the US, where laws on human stem cell research apply only to that funded by the National Institutes of Health, this alteration to the guidelines may be “much, much more impactful,” says Johnston. “[If] what has been followed up until now is ISSCR guidelines,” she says, then, “I predict that we will see US institutions permitting research beyond fourteen days now, because they will have ISSCR behind them.”

R. Lovell-Badge et al., “ISSCR guidelines for stem cell research and clinical translation: The 2021 update,” Stem Cell Reports, doi:10.1016/j.stemcr.2021.05.012, 2021.

A.T. Clark et al., “Human embryo research, stem cell-derived embryo models and in vitro gametogenesis: considerations leading to the revised ISSCR guidelines,” Stem Cell Reports, doi:10.1016/j.stemcr.2021.05.008, 2021.

I. Hyun et al., “ISSCR guidelines for the transfer of human pluripotent stem cells and their direct derivatives into animal hosts,” Stem Cell Reports, doi:10.1016/j.stemcr.2021.05.005, 2021.

L. Turner, “ISSCR’s guidelines for stem cell research and clinical translation: supporting the development of safe and efficacious stem cell-based interventions,” Stem Cell Reports, doi:10.1016/j.stemcr.2021.05.011, 2021.


SEE