Milo and the Ethics of Xbox Natal

Milo, the face of Xbox Natal

So who’s seen Milo?

Those of you who are avid gamers like me know what (who?) I’m talking about.  The Natal system was unveiled last week during E3: a motion-sensing, controllerless interface for the Xbox. It’s bleeding-edge stuff. The system performs motion-capture in real time, can recognize faces and voices, and allows for a whole new level of interaction between the user and the console.

Milo is one kind of interaction we’ve never seen before – not outside sci-fi movies, anyhow. He’s a charming eight-year-old avatar developed by the people at Lionhead under Peter Molyneux.

If you still haven’t seen it, allow me to refresh your memory. Here’s footage from Milo’s E3 premiere.

In his remarks after the film, Molyneux says, “You can meet what I believe is a real character. That understands you, that understands your voices, your emotions, that’s fascinated by your life.”

The guys at Kotaku got a chance to look at him in more detail, and found that he’ll even automatically keep track of major entertainment events in the real world, so he can talk about them with you:

Molyneux also talked about how the software could in theory track your daily Xbox 360 usage to help build out a conversation with you about your gaming habits. He also plans to have Milo track bigger cultural events, like American Idol, and get regular voice updates so the child can talk to you about things currently going on in the world.

What I find interesting in this is the emotional dimension of the experience. Milo is designed as a way for the user to have a genuine emotional attachment to the system. It’s an area that Molyneux and Lionhead have been exploring for quite some time in their other games.

But Natal takes it to another level – one where we might have to start considering some ethical questions.

Killing the Kid?

See, Milo learns about you the longer you keep him turned on. The idea is that you build a shared history, where you can talk about things you’ve done together and expand the relationship. The guys from Kotaku discovered something else interesting, too: Milo is reset every few days.

Lionhead’s Project Natal demo Milo may look to be eight, but he’s never lived longer than 12 days.

Speaking with Peter Molyneux this week, the developer said that the child artificial intelligence for Xbox 360 tech demo Milo and Kate is usually “scrubbed” after about 200 hours. The longest Milo has “lived” is 300 hours, he said. Something done to help test the development of their virtual child and his ability to track experiences.

Molyneux repeated that Milo isn’t meant to be a living AI, but rather a cleverly-crafted combination of nuanced facial animation and artificial emotion that creates the illusion of life.

Enter the Turing Test

Intelligence is a hard thing to define. That’s why the Turing Test is so useful. If it talks like it’s got a mind, then it probably does have a mind. The illusion of interacting with an intelligence, after all, is no different from actually interacting with one.

Maybe Milo isn’t quite there yet. But from what I’ve heard and read, it sounds to me like Milo is designed explicitly to make you feel like he’s got a mind.  The whole idea of Milo is that here’s a construct you can build a real relationship with.

So where does that leave us, ethically, if we ‘scrub’ him every 200 to 300 hours?

Nowhere, you might say. He’s not an intelligence. He’s just a clever computer program that’s designed to make us bond with him. We can erase him whenever we want.

Emotions and Relationships

The trouble with that line of reasoning is that emotions, not intelligence, make relationships real. And that’s where we get into interesting ground, ethically.

Most of us wouldn’t consider rats particularly intelligent. Killing them is not a crime, and is usually not frowned on. Often, it’s a useful profession.

So what about a girl with a pet rat? Is it OK to grab a hammer and smash her pet?

Obviously not. Why? It’s not because of the rat. The rat is no different from the millions of others out there that eat our grain, chew our electric cables, spread disease and so on.

The only difference, ethically, is that this particular rat has an emotional relationship with this particular girl. It knows and recognizes her, and she reciprocates.

That’s why hurting it would be wrong; it erodes the emotional matrix of relationships that binds society together and gives our lives meaning.

Nor is this sort of emotional relationship limited to living things. You might have a fondness for your mobile phone, or maybe a particular ring or item of clothing. These are inanimate – and in most cases, entirely replaceable – objects. Nevertheless, my destroying or damaging them would be ethically wrong. Your emotions are what make it wrong – it’s nothing intrinsic to the object.

I haven’t played with Milo, so I don’t know how intelligent he actually seems, or how well he succeeds in eliciting the sort of emotional response he’s meant to. But he’ll be improved on. There’ll be a Milo v2.0.

What do we do with the old Milo when his replacement download becomes available? Suddenly, system upgrades aren’t so straightforward.

Peter Molyneux ended his E3 presentation with the words: “This is a landmark in computer entertainment.”

I couldn’t agree more.

[This post also cross-posted on Objective 514]

6 thoughts on “Milo and the Ethics of Xbox Natal”

  1. Hmmm…

    I would say that we haven’t hit the ethical milestone yet. The rat is semi-sentient, and Milo isn’t sentient at all. Resetting him is no more wrong than this:

    Any feelings we currently assign to him are solely the result of anthropomorphization. As for his impact on humans… well, we might feel bad turning Milo off, and I’m sure his creators are very attached to him, but resetting him is likely part of his nature, perhaps even essential to his proper operation. Turning him off is as natural as turning off a lamp. Video games excel at using human evolutionary traits to assign emotion to non-sentient objects, but I don’t think we’ve found an ethical conundrum yet.

    That said, we are a lot closer to needing an answer to the questions you ask than we were a year ago. Milo isn’t intelligent or aware… but one day he may be. If not, he might be such a good illusion that resetting him could traumatize a child. Just the fact that we’re considering all this would seem to indicate progress, but also the need to educate ourselves and our children accurately about the nature (and level of development) of electronic avatars.

    …you know, to avoid this: http://pbfcomics.com/?cid=PBF039-Dinosaur_Sheriff.jpg

  2. he might be such a good illusion that resetting him could traumatize a child.

    Well, that’s exactly my point. We know that video games are an extremely powerful medium. The whole WoW Detox thing is proof of that.

    I’m not saying that Milo is sentient, and therefore we need to think twice before turning him off. I’m saying that he may seem so close to sentient that we may have difficulty turning him off. That’s a very powerful media device that needs thinking about, carefully.

  3. Children can also be traumatized by the apparent suffering of a character in a television program, film or video game. But we don’t worry about the ethics of killing fictional characters.

    You might argue that such characters aren’t designed to build a relationship with viewers, but if that’s true, why do sequels and spin-offs get made? Why do people get upset when their favourite character gets killed off or written out? I guess what makes Milo different is that unlike a character in a tv show, he builds a relationship with you specifically – while the guy on tv grows on the audience over time.

    Milo is currently nothing more than a tech demo; there’s not even a clear indication of what kind of entertainment a Milo-like system would be part of. I mean, it’s nice that they want him to be able to discuss global news or your favourite games, but I can do that with any number of human beings for free.

    Milo is potentially the first step in something very big, but I don’t believe that we need to concern ourselves with the ethics of his existence yet or any time soon.

    Heh… for reference purposes, I TOTALLY DO worry about the ethics of killing fictional characters. When you write, it’s necessary to be judicious about killing characters – if you do it badly, you risk alienating or traumatizing your audience. The same goes for film.

    The really interesting point here is

    “I guess what makes Milo different is that unlike a character in a tv show, he builds a relationship with you specifically”

    That is a MAJOR change.
