The moral compass of autonomous driving cars

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

— Isaac Asimov’s “Three Laws of Robotics,” to which he later added a Zeroth Law:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

In a perfect world, these points would explain how machine ethics is separate from human ethics — or in other words: what keeps humans at the top of the food chain. Luckily for me, we do not live in a perfect world, so I can allow myself to challenge the status quo.

“Throughout history, whenever we tried to ‘enslave’ free minds to bow to our wishes, that journey always ended up in bloodshed.”

There is a lot of discussion in the automotive industry (and a lot of misunderstanding) about what an autonomous driving car is. While most of it revolves around the technology itself, asking how to do things and mainly looking at the problem from a “yet another car accessory” point of view, I can’t escape the philosophical complexities that define autonomous driving cars as perhaps the first general intelligence infrastructure.

“Car” is probably the most immersive environment available. You can’t sit inside your iPhone or Android, or inside your laptop or tablet, but you can sit inside your car. Unfortunately, today you get s%#t in return. There is no connection between your existence and the metal, somewhat interactive box that is designed to take you from point A to point B.

“The human-machine inside the machine. It will not be long until we realize that we are not observing information but became part of it — we, as humans are sensory data that serves a bigger model of what reality is.”

The computational power behind autonomous systems is impressive and as such holds some exciting applications. In the science fiction series Farscape, Moya is a living biomechanical spaceship; she is a living creature, with a set of rules, perceptions and even responsibilities, but also with a unique interface. Moya doesn’t have a “voice,” but she connects to a “pilot” via a synaptic interface and instantly shares the same experiences. Moya and the pilot become extensions of each other.

While we’re more than a few years away from developing biomechanical pods, we can already experience the cognitive-outsourcing effects technology brings. Ask yourself: how many phone numbers do you remember? What does your calendar look like tomorrow or the day after? We trust the technology around us to “host” functions we once mastered. Some will say we are becoming stupid, but I argue that we are adjusting our being to a new reality. We are surrounding ourselves with sensors, algorithms and touch points that help carry out our intentions faster and with less friction — yet when we step into a car, we are “forced to use” a set of predefined, hardcoded interfaces that ultimately have no connection to us.

Now, imagine being able not just to sit in a car but to connect yourself to it. Making that synaptic connection, extending your brain so the vehicle can use your intentions as an algorithm, while the car instantly becomes an extension of your body, expanding your subjective sensory perception with technological sensors. Wouldn’t it be something?

What is a car? What is a human? If we establish symbiotic connections between human and machine, I ask not what a human would do, or a machine, but what the human-machine would do.

Unfortunately, the automotive industry is still building cars rather than imagining a space full of Moyas.