The Conversation
January 29, 2022
Woman playing video games (Shutterstock)
Most research into the ethics of Artificial Intelligence (AI) concerns its use for weaponry, transport or profiling. Although the dangers presented by an autonomous, racist tank cannot be overstated, there is another aspect to all this. What about our responsibilities to the AIs we create?
Massively multiplayer online role-playing games (such as World of Warcraft) are pocket realities populated chiefly by non-player characters. At the moment, these characters are not particularly smart, but give it 50 years and they will be.
Sorry? 50 years won’t be enough? Take 500. Take 5,000,000. We have the rest of eternity to achieve this.
You want planet-sized computers? You can have them. You want computers made from human brain tissue? You can have them. Eventually, I believe we will have virtual worlds containing characters as smart as we are – if not smarter – and in full possession of free will. What will our responsibilities towards these beings be? We will after all be the literal gods of the realities in which they dwell, controlling the physics of their worlds. We can do anything we like to them.
So, knowing all that… should we?
Ethical difficulties of free will
As I’ve explored in my recent book, whenever “should” is involved, ethics steps in and takes over – even for video games. The first question to ask is whether our game characters of the future are worthy of being considered as moral entities or are simply bits in a database. If the latter, we needn’t trouble our consciences with them any more than we would characters in a word processor.
The question is actually moot, though. If we create our characters to be free-thinking beings, then we must treat them as if they are such – regardless of how they might appear to an external observer.
That being the case, then, can we switch our virtual worlds off? Doing so could be condemning billions of intelligent creatures to non-existence. Would it nevertheless be OK if we saved a copy of their world at the moment we ended it? Does the theoretical possibility that we may switch their world back on exactly as it was mean we’re not actually murdering them? What if we don’t have the original game software?
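To make the save-and-restore question concrete, here is a minimal sketch in Python, using a hypothetical `World` class (nothing here is any real engine's API). The point is that the restored state can be made exactly identical to the saved one, which is precisely what gives the question its force:

```python
import pickle

# Hypothetical stand-in for a simulated world; not a real game engine's API.
class World:
    def __init__(self, characters):
        self.characters = characters  # per-character state, e.g. memories
        self.tick = 0

    def step(self):
        self.tick += 1  # advance the simulation by one tick

world = World({"aila": {"memories": ["sunrise"]}, "bren": {"memories": []}})
world.step()

# "Switching off" while keeping a copy: serialise the full state...
snapshot = pickle.dumps(world)
del world  # ...then end the simulation. Its inhabitants stop running.

# Later, restore it. The restored state is exactly what was saved:
restored = pickle.loads(snapshot)
assert restored.tick == 1
assert restored.characters["aila"]["memories"] == ["sunrise"]
# Whether these are the same beings resuming, or copies of dead ones,
# is the ethical question - and nothing in the code can answer it.
```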
Can we legitimately cause these characters suffering? We ourselves would implement the very concept of suffering in their world, so this isn’t so much a question about whether it’s OK to torment them as it is about whether tormenting them is even a thing. In modern societies, the default position is that it’s immoral to make free-thinking individuals suffer unless either they agree to it or it’s to save them (or someone else) from something worse. We can’t ask our characters to consent to being born into a world of suffering – they won’t yet exist when we create the game.
So, what about the “something worse” alternative? If you possess free will, you must be sapient, and must therefore be a moral being. That means you must have developed morals, so it must be possible for bad things to happen to you; otherwise, you’d have had nothing to reflect on when working out what’s right or wrong. Put another way: unless bad things can happen, there’s no free will. And because removing free will from a being is tantamount to destroying the being it previously was, then yes, we do have to allow suffering – or the concept of a sapient character is an oxymoron.
Afterlife?
Accepting that our characters of the future are free-thinking beings, where would they fit in a hierarchy of importance? In general, given a straight choice between saving a sapient being (such as a toddler) or a merely sentient one (such as a dog), people would choose the former over the latter. Given a similar choice between saving a real dog or a virtual saint, which would prevail?
Bear in mind that if your characters perceive themselves to be moral beings but you don’t perceive them as such, they’re going to think you’re a jerk. As Alphinaud Leveilleur, a character in Final Fantasy XIV, neatly puts it (spoiler: having just discovered that his world was created by the actions of beings who, as a consequence, don’t regard him as properly alive): “We define our worth, not the circumstances of our creation!”
World of Warcraft is a massively multiplayer online role-playing game.
Daniel Krason/Shutterstock
Are we going to allow our characters to die? It’s extra work to implement the concept. If they do live forever, do we make them invulnerable or merely stop them from dying? Life wouldn’t be much fun after falling into a blender, after all. If they do die, do we move them to gaming heaven (or hell) or simply erase them?
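The design space here is small enough to enumerate. Below is a hedged sketch of the alternatives just described, with hypothetical names (`MortalityPolicy` and `on_fatal_event` are illustrative, not drawn from any real game):

```python
from enum import Enum, auto

class MortalityPolicy(Enum):
    INVULNERABLE = auto()  # fatal events cannot touch the character at all
    UNDYING = auto()       # can be injured (the blender) but never dies
    ERASED = auto()        # death deletes the character outright
    AFTERLIFE = auto()     # death relocates them to a "heaven"/"hell" world

def on_fatal_event(character, policy, afterlife=None):
    """Resolve a would-be-fatal event under the chosen policy."""
    if policy is MortalityPolicy.INVULNERABLE:
        return character                       # nothing happens
    if policy is MortalityPolicy.UNDYING:
        character["injuries"].append("fatal-but-survived")
        return character                       # alive, possibly miserable
    if policy is MortalityPolicy.ERASED:
        return None                            # gone from the database
    afterlife.append(character)                # AFTERLIFE: moved, not erased
    return None                                # gone from *this* world
```

Each branch is trivial to write; choosing between them is where the ethics lives.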
These aren’t the only questions we can ask. Can we insert ideas into their heads? Can we change their world to mess with them? Do we impose our morals on them or let them develop their own (with which we may disagree)? There are many more.
Ultimately, the biggest question is: should we create sapient characters in the first place?
Now you’ll have noticed that I’ve asked a lot of questions here. You may well be wondering what the answers are.
Well, so am I! That’s the point of this exercise. Humanity doesn’t yet have an ethical framework for the creation of realities of which we are gods. No system of meta-ethics yet exists to help us. We need to work this out before we build worlds populated by beings with free will – whether that’s in 50, 500 or 5,000,000 years, or tomorrow. These are questions for you to answer.
Be careful how you do so, though. You may set a precedent.
We ourselves are the non-player characters of Reality.
Richard A. Bartle, Professor of Computer Game Design, University of Essex
This article is republished from The Conversation under a Creative Commons license. Read the original article.