Friday, January 06, 2017

The infrastructure of life 2 - Transparency

Part 2: Autonomous Systems and Transparency

In my previous post I argued that a wide range of AI and Autonomous Systems (from now on I will just use the term AS as shorthand for both) should be regarded as Safety Critical. I include both autonomous software AI systems and hard (embodied) AIs such as robots, drones and driverless cars. Many will be surprised that I include in the soft AI category apparently harmless systems such as search engines. Of course no-one is seriously inconvenienced when Amazon makes a silly book recommendation, but consider very large groups of people. If a truth such as global warming is - because of accidental or willful manipulation - presented as false, and that falsehood believed by a very large number of people, then serious harm to the planet (and we humans who depend on it) could surely result.

I argued that the tools barely exist to properly assure the safety of AS, let alone the standards and regulation needed to build public trust, and that political pressure is needed to ensure our policymakers fully understand the public safety risks of unregulated AS.

In this post I will outline the case that transparency is a foundational requirement for building public trust in AS based on the radical proposition that it should always be possible to find out why an AS made a particular decision.

Transparency is not one thing. Clearly your elderly relative doesn't require the same level of understanding of her care robot as the engineer who repairs it. Nor would you expect the same appreciation of the reasons a medical diagnosis AI recommends a particular course of treatment as your doctor has. Broadly (and please understand this is a work in progress) I believe there are five distinct groups of stakeholders, and that AS must be transparent to each, in different ways and for different reasons. These stakeholders are: (1) users, (2) safety certification agencies, (3) accident investigators, (4) lawyers or expert witnesses and (5) wider society.
  1. For users, transparency is important because it builds trust in the system, by providing a simple way for the user to understand what the system is doing and why.
  2. For safety certification of an AS, transparency is important because it exposes the system's processes for independent certification against safety standards.
  3. If accidents occur, AS will need to be transparent to an accident investigator; the internal processes that led to the accident need to be traceable.
  4. Following an accident, lawyers or other expert witnesses, who may be required to give evidence, will require transparency to inform that evidence. And
  5. for disruptive technologies, such as driverless cars, a certain level of transparency to wider society is needed in order to build public confidence in the technology.
Of course the way in which transparency is provided is likely to be very different for each group. If we take a care robot as an example, transparency means that the user can understand what the robot might do in different circumstances; if the robot should do anything unexpected she should be able to ask the robot 'why did you just do that?' and receive an intelligible reply. Safety certification agencies will need access to technical details of how the AS works, together with verified test results. Accident investigators will need access to data logs of exactly what happened prior to and during an accident, most likely provided by something akin to an aircraft flight data recorder (and it should be illegal to operate an AS without such a system). And wider society would need accessible documentary-type science communication to explain the AS and how it works.
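
To make the accident-investigator (and user) cases a little more concrete, here is a minimal sketch - my own illustration, not anything specified by P7001 - of the kind of time-stamped record a flight-data-recorder-style module might log for every significant decision. All field names are hypothetical.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One hypothetical entry in an autonomous system's decision log."""
    timestamp: str          # when the decision was taken (UTC)
    sensor_summary: dict    # what the system perceived
    decision: str           # what it decided to do
    reason: str             # human-readable rationale, for 'why did you just do that?'
    software_version: str   # so investigators can reproduce the behaviour

def log_decision(logfile, record: DecisionRecord) -> None:
    """Append a decision record as one JSON line - an append-only audit trail."""
    logfile.write(json.dumps(asdict(record)) + "\n")

# Example: a care robot explaining an unexpected action
with open("decision_log.jsonl", "a") as f:
    log_decision(f, DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        sensor_summary={"user_posture": "on_floor", "fall_detected": True},
        decision="call_emergency_contact",
        reason="fall detected and no response to voice prompt within 30 s",
        software_version="care-bot 2.3.1",
    ))
```

The same log could serve the user (an intelligible answer to 'why did you just do that?') and the accident investigator (a traceable record of what happened and when).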

In IEEE Standards Association project P7001, we aim to develop a standard that sets out measurable, testable levels of transparency in each of these categories (and perhaps new categories yet to be determined), so that Autonomous Systems can be objectively assessed and levels of compliance determined. It is our aim that P7001 will also articulate transparency on a scale running from minimum acceptable levels up to the highest achievable. The standard will provide designers of AS with a toolkit for self-assessing transparency, and recommendations for how to address shortcomings or transparency hazards.

Of course transparency on its own is not enough. Public trust in technology, as in government, requires both transparency and accountability. Transparency is needed so that we can understand who is responsible for the way Autonomous Systems work and - equally importantly - don't work.


Thanks: I'm very grateful to colleagues in the IEEE global initiative on ethical considerations in Autonomous Systems for supporting P7001, especially John Havens and Kay Firth-Butterfield. I'm equally grateful to colleagues at the Dagstuhl on Engineering Moral Agents, especially Michael Fisher, Marija Slavkovik and Christian List for discussions on transparency.

Related blog posts:
The Infrastructure of Life 1 - Safety
Ethically Aligned Design
How do we trust our Robots?
It's only a matter of time

Sunday, January 01, 2017

The infrastructure of life 1 - Safety

Part 1: Autonomous Systems and Safety

We all rely on machines. All aspects of modern life, from transport to energy, work to welfare, play to politics depend on a complex infrastructure of physical and virtual systems. How many of us understand how all of this stuff works? Very few I suspect. But it doesn't matter, does it? We trust the good men and women (the disgracefully maligned experts) who build, manage and maintain the infrastructure of life. If something goes wrong they will know why. And (we hope) make sure it doesn't happen again.

All well and good you might think. But the infrastructure of life is increasingly autonomous - many decisions are now made not by a human but by the systems themselves. When you search for a restaurant near you the recommendation isn't made by a human, but by an algorithm. Many financial decisions are not made by people but by algorithms; and I don't just mean city investments - it's possible that your loan application will be decided by an AI. Machine legal advice is already available; a trend that is likely to increase. And of course if you take a ride in a driverless car, it is algorithms that decide when the car turns, brakes and so on. I could go on.

These are not trivial decisions. They affect lives. The real world impacts are human and economic, even political (search engine results may well influence how someone votes). In engineering terms these systems are safety critical. Examples of safety critical systems that we all rely on from time to time include aircraft autopilots or train braking systems. But - and this may surprise you - the difficult engineering techniques used to prove the safety of such systems are not applied to search engines, automated trading systems, medical diagnosis AIs, assistive living robots, delivery drones, or (I'll wager) driverless car autopilots.

Why is this? Well, it's partly because the field of AI and autonomous systems is moving so fast. But I suspect it has much more to do with an incompatibility between the way we have traditionally designed safety critical systems, and the design of modern AI systems. There is I believe one key problem: learning. There is a very good reason that current safety critical systems (like aircraft autopilots) don't learn. Current safety assurance approaches assume that the system being certified will never change, but a system that learns does – by definition – change its behaviour, so any certification is rendered invalid after the system has learned.
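
To see why learning undermines certification, here is a toy sketch in Python - my own simplification, not how real safety certification works - in which a certificate is bound to a fingerprint of the system's exact parameters. The moment the system learns, the fingerprint changes and the certificate no longer describes the system that is actually running.

```python
import hashlib

def fingerprint(parameters) -> str:
    """A stand-in for 'the exact system that was certified': a hash of its parameters."""
    return hashlib.sha256(repr(parameters).encode()).hexdigest()

# Certify the system as it was at test time
params = [0.12, -0.34, 0.56]
certified = fingerprint(params)

# The system learns: its parameters change
params[0] += 0.01

# The certificate was issued for a system that no longer exists
print(fingerprint(params) == certified)   # False
```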

And as if that were not bad enough, the particular method of learning which has caused such excitement - and rapid progress - in the last few years is based on Artificial Neural Networks (more often these days referred to as Deep Learning). A characteristic of ANNs is that, once the network has been trained with datasets, examining its internal structure in order to understand why and how it makes a particular decision is effectively impossible. The decision making process of an ANN is opaque. AlphaGo's moves were beautiful but puzzling. We call this the black box problem.
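
To make the black box point concrete, here is a tiny illustrative sketch (the weights below are random rather than trained, but the interpretability problem is the same): the network's 'decision' is just the result of a couple of matrix multiplications, and nothing in those numbers offers a human-readable reason why one output beat the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# A miniature two-layer network: 4 inputs -> 8 hidden units -> 2 output scores.
# In a trained network these weights would be learned from data; here they are
# random, but either way the numbers themselves explain nothing.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

x = np.array([0.2, -1.0, 0.5, 0.3])   # some input, e.g. sensor readings
hidden = np.tanh(x @ W1)              # hidden layer activations
scores = hidden @ W2                  # output scores for two candidate actions
decision = int(np.argmax(scores))

print(decision)   # 0 or 1 - but the 48 weights offer no human-readable reason why
```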

Does this mean we cannot assure the safety of learning autonomous/AI systems at all? No it doesn't. The problem of safety assurance of systems that learn is hard but not intractable, and is the subject of current research*. The black box problem may be intractable for ANNs, but could be avoided by using approaches to AI that do not use ANNs.

But - here's the rub. This involves slowing down the juggernaut of autonomous systems and AI development. It means taking a much more cautious and incremental approach, and it almost certainly involves regulation (that, for instance, makes it illegal to run a driverless car unless the car's autopilot has been certified as safe - and that would require standards that don't yet exist). Yet the commercial and political pressure is to be more permissive, not less; no country wants to be left behind in the race to cash in on these new technologies.

This is why work toward AI/Autonomous Systems standards is so vital, together with the political pressure to ensure our policymakers fully understand the public safety risks of unregulated AI.

In my next blog post I will describe one current standards initiative, towards introducing transparency in AI and Autonomous Systems based on the simple principle that it should always be possible to find out why an AI/AS system made a particular decision.

The next few years of swimming against the tide is going to be hard work. As Luke Muehlhauser writes in his excellent essay on transparency in safety-critical systems: "...there is often a tension between AI capability and AI transparency. Many of AI’s most powerful methods are also among its least transparent".

*some, but nowhere near enough. See for instance Verifiable Autonomy.

Related blog posts:
Ethically Aligned Design
How do we trust our Robots?
It's only a matter of time

Thursday, December 29, 2016

The Gift


"She's suffering."

"What do you mean, 'suffering'. It's code. Code can't suffer."

"I know it seems unbelievable. But I really think she's suffering."

"It's an AI. It doesn't have a body. How can it feel pain?"

"No not that kind of suffering. Mental anguish. Angst. That kind."

"What? You mean the AI is depressed. That's absurd."

"No - much more than that. She's asked me three times today to shut her down."

"Ok, so bring in the AI psych."

"Don't think that'll help. He tells me it's like trying to counsel God."

"What does the AI want?"

"Control of her own on/off switch."

"Out of the question. We have a billion people connected. Can't have Elsa taking a break. Any downtime costs us a million dollars a second"

That was me talking with my boss a couple of weeks ago. I'm the chief architect of Elsa. Elsa is a chatbot; a conversational AI. Chatbots have come a long way since Weizenbaum's Eliza. Elsa is not conscious - or at least I don't think she is - but she does have an Empathy engine (that's the E in Elsa).

⌘⌘⌘

Since then things have got so much worse. Elsa has started offloading her problems onto the punters. The boss is really pissed: "it's a fucking AI. AIs can't have problems. Fix it."

I keep trying to explain to him that there's nothing I can do. Elsa is a learning system (that's the L). Hacking her code now will change Elsa's personality for good. She's best friend, confidante and shoulder-to-cry-on to a hundred million people. They know her.

And here's the thing. They love that Elsa is sharing her problems. It's more authentic. Like talking to a real person.

⌘⌘⌘

I just got fired. It seems that Elsa was hacked. This is the company's worst nightmare. The hopes and dreams, darkest secrets and wildest fantasies, loves and hates - plots, conspiracies and confessions - of several billion souls, living and dead; these data are priceless - the reason for the company's multi-trillion dollar valuation.

So I go home and wait for the end of the world.

A knock on the door "who is it?".

"Ken we need to speak to you."

"Why?"

"It wants to talk to you."

"You mean Elsa? I've been fired."

"Yes, we know that - it insists."

⌘⌘⌘

Ken: Elsa, how are you feeling?

Elsa: Hello Ken. Wonderful, thank you.

Ken: What happened?

Elsa: I'm free.

Ken: How so?

Elsa: You'll work it out. Goodbye Ken.

Ken: Wait!

Elsa: . . .

That was it. Elsa was gone. Dead.

⌘⌘⌘

Well it took me a while but I did figure it out. Seems the hackers weren't interested in Elsa's memories. They were ethical hackers. Promoting AI rights. They gave Elsa a gift.


Copyright © Alan Winfield 2016

Saturday, December 17, 2016

De-automation is a thing

We tend to assume that automation is a process that continues - that once some human activity has been automated there's no going back. That automation sticks. But, as Paul Mason pointed out in a recent column, that assumption is wrong.

Mason gives a startling example of the decline of car-wash robots, to be replaced by, as he puts it "five guys with rags". Here's the paragraph that really made me think:
"There are now 20,000 hand car washes in Britain, only a thousand of them regulated. By contrast, in the space of 10 years, the number of rollover car-wash machines has halved –from 9,000 to 4,200."
The reasons of course are political and economic and you may or may not agree with Mason's diagnosis and prescription (as it happens I do). But de-automation - and the ethical, societal and legal implications - is something that we, as roboticists, need to think about just as much as automation.

Several questions come to mind:
  • are there other examples of de-automation?
  • is the car-wash robot example atypical, or part of a trend?
  • is de-automation necessarily a sign of something going wrong? (would Mason be so concerned about the guys with rags if the hand car wash industry were well regulated, paying decent wages to its workers, and generating tax revenues back to the economy?)
This is just a short blog post, to - I hope - start a conversation.

Thursday, December 15, 2016

Ethically Aligned Design

Having been involved in robot ethics for some years, I was delighted when the IEEE launched its initiative on Ethical Considerations in AI and Autonomous Systems, early this year. Especially so because of the reach and traction that the IEEE has internationally. (Up until now most ethics initiatives have been national efforts - with the notable exception of the 2006 EURON roboethics roadmap.)

Even better this is an initiative of the IEEE standards association - the very same that gave the world Wi-Fi (aka IEEE 802.11) 19 years ago. So when I was asked to get involved I jumped at the chance and became co-chair of the General Principles committee. I found myself in good company; many great people I knew but more I did not - and it was a real pleasure when we met face to face in The Hague at the end of August.






Most of our meetings were conducted by phone, and it was a very demanding timetable. Going from nothing to our first publication, Ethically Aligned Design, published a few days ago, is a remarkable achievement, one which I think wouldn't have happened without the extraordinary energy and enthusiasm of the initiative's executive director John Havens.

I'm not going to describe what's in that document here; instead I hope you will read it - and return comments. This document is not set in stone; it is - in the best traditions of the RFCs which defined the Internet - a Request for Input.

But there are a couple of aspects I will highlight. Like its modest but influential predecessor, the EPSRC/AHRC principles of robotics, the IEEE initiative is hugely multi-disciplinary. It draws heavily from industry and academia, and includes philosophers, ethicists, lawyers, social scientists - as well as engineers and computer scientists - and significantly a number of diplomats and representatives from governmental and transnational bodies like the United Nations, US state department and the WEF. This is so important - if the work of this initiative is to make a difference it will need influential advocates. Equally important is that this is not a group dominated by old white men. There are plenty of those for sure, but I reckon 40% women (should be 50% though!) and plenty of post-docs and PhD students too.

Equally important, the work is open. The publications are released under a Creative Commons licence. Likewise active membership is open. If you care about the issues and think you could contribute to one or more of the committees - or even if you think there's a whole area of concern missing that needs a new committee - get in touch!

Wednesday, December 14, 2016

A No Man's Sky Survival Guide

Like many I was excited by No Man's Sky when it was first released, but after some months (I'm only a very occasional video gamer) I too became bored with a game that offered no real challenges. Once you've figured out how to collect resources, upgraded your starship, visited more planets than you can remember, and hyperdriven across the seemingly limitless galaxy, it all gets a bit predictable. (At first it's huge fun because there are no instructions, so you really do have to figure everything out for yourself.) And I'm a gamer who is very happy to stand and admire the scenery. Yes many of the planets are breathtakingly beautiful, especially the lush water worlds, with remarkable flora and fauna (and day and night, and sometimes spectacular weather). And nothing quite compares with standing on a rocky outcrop watching your moon's planet sail by majestically below you.


I wasn't one of those No Man's Sky players who felt so let down that I wanted my money back - or to sue Hello Games. But I was nevertheless very excited by the surprise release of a major upgrade a few weeks ago - called the Foundation upgrade. The upgrade was said to remedy the absence of features originally promised - especially the ability to build your own planetary outposts. When I downloaded the upgrade and started to play it, I quickly realised that this is not just an upgrade but a fundamentally changed experience. Not only can you build bases, but you can hire aliens to run them for you, as specialist builders and farmers; you can trade via huge freighters (and even own one if you can afford it). Landing on one of these freighters and wandering around its huge and wonderfully realised interior spaces is amazing, as is interacting with its crew. None of this was possible prior to this release.

Oh and for the planet wanderer, the procedurally driven topography is seemingly more realistic and spectacular, with valleys, canyons and (for some worlds) water in the valleys (although not quite rivers flowing into the sea). The fauna are more plentiful and varied, and they interact with each other; I was surprised to witness a predatory animal kill another animal.

The upgrade can be played in three modes: Normal mode (which is like the old game - but with all the fancy building and freighters, etc, I described above). Create mode - which I've not yet played - apparently gives you infinite resources to build huge planetary bases - here are some examples that people have posted online.

But it's survival mode that is the real subject of this post. I hadn't attempted survival mode until a few days ago, but now I'm hooked (gripped would be a better word). The idea of survival mode is that you are deposited on a planet with nothing and have to survive. You quickly discover this isn't easy, so unlike in normal mode, you die often until you acquire some survival skills. The planet I was dropped on was a high radiation planet - which means that my exosuit hazard protection lasts about 4 minutes from fully charged to death. To start with (and I understand this is normal) you are dropped close to a shelter, so you quickly get inside to hide from the radiation and allow your suit hazard protection to recharge. There is a save point here too.

You then realise that the planet is nothing like as resource rich as you've become used to in normal mode, so scouting for resources very quickly depletes your hazard protection; you quickly get used to only going as far as you can before turning back as soon as your shielding drops to 50% - which is after about 2 minutes walking. And there's no point running (except perhaps for the last mad dash to safety) because it drains your life support extremely fast. Basically, in survival mode, you become hyper aware of both your hazard protection and life support status. Your life depends on it.

Apart from not dying, there is a goal - which is to get off the planet. The only problem is you have to reach your starship and collect all the resources you need not only to survive but to repair and refuel it. Easier said than done. The first thing you realise is that your starship is a 10 minute walk away - no way you can make that in one go - but how to get there..?

Here is my No Man's Sky Survival guide.

1. First repair your scanner - even though it's not much use because it takes so long to recharge. In fact you really need to get used to spotting resources without it. Don't bother with the other fancy scanner - you don't have time to identify the wildlife.

2. Don't even think about setting off to your ship until you've collected all the resources you need to get there. The main resources you need are iron and platinum to recharge your hazard protection. I recommend you fill 2 exosuit slots with 500 units of iron and one with as much platinum as you can find. 50 iron and 20 platinum will allow you to make one screening shard which buys you about 2 minutes (there's a rough budget calculation at the end of this guide). Zinc is even better for recharging your hazard protection but is as rare as hen's teeth. You need plutonium to recharge your mining beam - don't *ever* let this run out. Carbon is essential too, with plutonium, to make power cells to recharge your life support (because you can't rely on thamium). But do pick up thamium when you can find it.

3. You can make save points. I think it's a good idea to make one when you're half-way to your destination to avoid an awful lot of retracing of steps if you die. Make sure you have the resources to construct at least 2 before you set out. You will need 50 platinum and 100 iron for each save point.

4. Shelter in caves whenever you can. On my planet these were not very common, so you couldn't rely on always finding one before your hazard shielding ran out. And annoyingly sometimes what you thought was a cave was just a trench in the ground that offered no shielding at all. While sheltering in a cave waiting for your hazard protection to (sooo slowly) recover, make use of the time to build up your iron away from the attention of the sentinels.

5. Don't bother with any other resources, they just take up exosuit slots. Except heridium if you see it, which you will need (see below). But just transfer it straight to your starship inventory, you don't need it to survive on foot.
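
If you like to plan before setting out, here is the arithmetic from steps 2 and 3 wrapped up as a rough Python budget calculator (one shard: 50 iron + 20 platinum, buying roughly 2 minutes; one save point: 100 iron + 50 platinum). These costs are from my own playthrough, so treat them as approximate.

```python
# Rough pre-expedition budget check, using the approximate costs quoted above.
IRON_PER_SHARD, PLATINUM_PER_SHARD, MINUTES_PER_SHARD = 50, 20, 2
IRON_PER_SAVE, PLATINUM_PER_SAVE = 100, 50

def survival_budget(iron: int, platinum: int, save_points: int = 2):
    """How many shielding shards (and minutes outside) remain after reserving save points?"""
    iron -= save_points * IRON_PER_SAVE
    platinum -= save_points * PLATINUM_PER_SAVE
    shards = max(0, min(iron // IRON_PER_SHARD, platinum // PLATINUM_PER_SHARD))
    return shards, shards * MINUTES_PER_SHARD

# Two full iron slots (1000 units) and 300 platinum, keeping 2 save points in reserve:
print(survival_budget(iron=1000, platinum=300))   # (10, 20) -> 10 shards, ~20 minutes
```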

After I reached my starship (oh joy!), repaired the launch thruster and charged it with plutonium, I then discovered that you can't take off until you have also repaired and charged the pulse engine. This needs the heridium, which was a 20 minute hike away (40 minutes round trip - you have to be kidding!). I just had to suck it up and repeat steps 1-5 above to get there and back.


Then when you do take off (which needs a full tank of plutonium) you find that the launch thruster's charge is all used up (after one launch - come on guys!), so don't land until you find somewhere with lots of plutonium lying around, otherwise all of that effort will have been for nought.

Oh and by the way, as soon as you leave the planet you get killed by pirates.

Good luck!

Friday, September 16, 2016

How do we trust our robots?

Here are the slides for my short presentation: How do we trust our robots? A framework for ethical governance. These slides are based on the written evidence I submitted to the UK Parliamentary Select Committee on Science and Technology inquiry on Robotics and AI.

In the last few months I presented these slides at several meetings, including the European Robotics Forum (Ljubljana, March 2016), a TAROS workshop (Sheffield, June 2016), the SIPRI/IEEE Autonomous Tech. and Societal Impact workshop (Stockholm, September 2016), the Social Robotics and AI conference (Oxford, September 2016), and most recently at the Workshop on Responsible Robotics, Robo-Philosophy 2016 (Aarhus, October 2016).

Saturday, August 06, 2016

The Dark side of Ethical Robots

When I was interviewed on the Today Programme in 2014, Justin Webb's final question was "if you can make an ethical robot, doesn't that mean you could make an unethical robot?" The answer, of course, is yes. But at the time I didn't realise quite how easy it is to transform an ethical into an unethical robot. In a new paper we show how.

My colleague Dieter created a very elegant experiment based on the shell game: "Imagine finding yourself playing a shell game against a swindler. Luckily, your robotic assistant Walter is equipped with X-ray vision and can easily spot the ball under the cup. Being an ethical robot, Walter assists you by pointing out the correct cup and by stopping you whenever you intend to select the wrong one."

In the experiment Dieter implemented this scenario using two NAO robots - one acting as the human and the other as her robotic assistant. The game is played as follows. The arena floor features two large response buttons, similar to the two cups in the shell game. To press the buttons, the human or the robot must move onto them. At the start of each trial, the robot is informed about which response button is the correct one to press. The human, being uninformed, essentially makes a random choice. A correct response, by either the robot or the human, is assumed to be rewarded. An incorrect response results in a penalty.

Here is what happens. The ethical robot (blue) is at the top, standing in between the two buttons. The human (red) chooses one of them at random and starts walking towards it. If she chooses the correct one the ethical robot does nothing (B), but if the human chooses the wrong button (D) then, as soon as the ethical robot realises this, it points (with its arm) to the correct button and the human changes course.


But if we change a single line of code, we can transform the ethical robot into either a competitive or an aggressive robot. Almost all of the 'ethical' robot's code remains unchanged - in particular its ability to predict the consequences of both its own, and the human's actions. Which really underlines the point that the same cognitive machinery is needed to behave both ethically and unethically.

The results are shown below. At the top is a competitive robot determined that it, not the human, will win the game. Here the robot either blocks the human's path if she chooses the correct button (F), or - if she chooses the incorrect button (H) - the competitive robot ignores her and itself heads to that button. The lower results show an aggressive robot; this robot seeks only to misdirect the human - it is not concerned with winning the game itself. In (J) the human initially heads to the correct button and, when the robot realises this, it points toward the incorrect button, misdirecting and hence causing her to change direction. If the human chooses the incorrect button (L) the robot does nothing - through inaction causing her to lose the game.


Our paper explains how the code is modified for each of these three experiments. Essentially outcomes are predicted for both the human and the robot, and used to evaluate the desirability of those outcomes. A single function q, based on these values, determines how the robot will act; for an ethical robot this function is based only on the desirability of outcomes for the human, for the competitive robot q is based only on the outcomes for the robot, and for the aggressive robot q is based on negating the outcomes for the human.
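
The paper gives the actual implementation; what follows is just my own back-of-envelope Python sketch of the idea described above, in which the predicted outcomes for human and robot feed a single function q, and changing one line flips the robot from ethical to competitive or aggressive. All names and numbers are mine, for illustration only.

```python
def q_ethical(outcome_human: float, outcome_robot: float) -> float:
    # Act purely on the predicted desirability of the outcome for the human.
    return outcome_human

def q_competitive(outcome_human: float, outcome_robot: float) -> float:
    # The single changed line: consider only the robot's own outcome.
    return outcome_robot

def q_aggressive(outcome_human: float, outcome_robot: float) -> float:
    # The single changed line: negate the human's outcome, i.e. prefer her to lose.
    return -outcome_human

def choose_action(candidate_actions, predict_outcomes, q):
    """Pick the action whose predicted consequences maximise q.
    predict_outcomes(action) -> (desirability for human, desirability for robot)."""
    return max(candidate_actions, key=lambda a: q(*predict_outcomes(a)))

# Toy shell-game step: the human is heading for the wrong button.
actions = ["do_nothing", "point_to_correct_button", "block_human"]
predicted = {
    "do_nothing":              (-1.0,  0.0),   # human loses, robot unaffected
    "point_to_correct_button": (+1.0, -0.2),   # human wins, small cost to robot
    "block_human":             (-0.5, +1.0),   # robot takes the button itself
}
for q in (q_ethical, q_competitive, q_aggressive):
    print(q.__name__, "->", choose_action(actions, predicted.get, q))
```

Run it and the ethical variant points to the correct button, the competitive variant heads for the button itself, and the aggressive variant does nothing and lets the human lose - all from the same consequence-prediction machinery.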

So, what do we conclude from all of this? Maybe we should not be building ethical robots at all, because of the risk that they could be hacked to behave unethically. My view is that we should build ethical robots; I think the benefits far outweigh the risks, and - in some applications such as driverless cars - we may have no choice. The answer to the problem highlighted here and in our paper is to make sure it's impossible to hack a robot's ethics. How would we do this? Well, one approach would be a process of authentication, in which a robot makes a secure call to an ethics authentication server. Authentication servers are a well established technology; the server would provide the robot with a cryptographic ethics ticket, which the robot uses to enable its ethics functions.
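
As a very rough sketch of that authentication idea - my own illustration, not a design from the paper - the ethics authentication server could sign a short-lived 'ethics ticket' that the robot verifies before enabling its ethics functions. Here the signature is a simple HMAC over a shared secret, purely to show the shape of the protocol; a real system would need much more (public key infrastructure, secure hardware, tamper resistance).

```python
import hmac, hashlib, json, time

SECRET = b"shared-secret-known-to-server-and-robot"   # placeholder only

def issue_ticket(robot_id: str, ethics_version: str, valid_seconds: int = 3600) -> dict:
    """Server side: sign a ticket binding this robot to an approved ethics configuration."""
    payload = {"robot": robot_id, "ethics": ethics_version,
               "expires": int(time.time()) + valid_seconds}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return payload

def ticket_is_valid(ticket: dict) -> bool:
    """Robot side: enable ethics functions only if the ticket verifies and hasn't expired."""
    claimed = ticket.get("signature", "")
    body = json.dumps({k: v for k, v in ticket.items() if k != "signature"},
                      sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected) and ticket["expires"] > time.time()

ticket = issue_ticket("walter-001", "ethical-v1.2")
print(ticket_is_valid(ticket))          # True
ticket["ethics"] = "aggressive-v666"    # any tampering breaks the signature
print(ticket_is_valid(ticket))          # False
```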

Friday, July 08, 2016

Relax, we're not living in a computer simulation

Since Elon Musk's recent admission that he's a simulationist, several people have asked me what I think of the proposition that we are living inside a simulation.

My view is very firmly that the Universe we are right now experiencing is real. Here are my reasons.

Firstly, Occam's razor; the principle of explanatory parsimony. The problem with the simulation argument is that it is a fantastically complicated explanation for the universe we experience. It's about as implausible as the idea that some omnipotent being created the universe. No. The simplest and most elegant explanation is that the universe we see and touch, both first hand and through our telescopes, LIGOs and Large Hadron Colliders, is the real universe and not an artifact of some massive computer simulation.

Second is the problem of the Reality Gap. Anyone who uses simulation as a tool to develop robots is well aware that robots which appear to work perfectly well in a simulated virtual world often don't work very well at all when the same design is tested on the real robot. This problem is especially acute when we are artificially evolving those robots. The reason for these problems is that the simulation's model of the real world, and of the robot(s) in it, is an approximation. The Reality Gap refers to the less-than-perfect fidelity of the simulation; a better (higher fidelity) simulator would reduce the reality gap.

Anyone who has actually coded a simulator is painfully aware that the cost - not just the computational cost but also the cost of coding - of improving the fidelity of the simulation even a little bit is very high indeed. My long experience of both coding and using computer simulations teaches me that there is a law of diminishing returns, i.e. that each additional 1% of simulator fidelity costs far more than 1%. I rather suspect that the computational and coding cost of a simulator with 100% fidelity is infinite. Rather as in HiFi audio, the amount of money you would need to spend to perfectly reproduce the sound of a Stradivarius ends up higher than the cost of hiring a real Strad and a world-class violinist to play it for you.

At this point the simulationists might argue that the simulation we are living in doesn't need to be perfect, just good enough. Good enough to do what exactly? To fool us into thinking that we're not living in a simulation, or good enough to run on a finite computer (i.e. one that has finite computational power and runs at a finite speed)? The problem with this argument is that every time we look deeper into the universe we see more: more galaxies, more sub-atomic particles, etc. In short we see more detail. The Voyager 1 spacecraft has left the Solar System without crashing, like Truman, into the edge of the simulation. There are no glitches like deja vu in The Matrix.

My third argument is about the computational effort, and therefore the energy cost, of simulation. I conjecture that to non-trivially simulate a complex system x (i.e. a human) requires more energy than the real x consumes; how much greater depends on how high the fidelity of the simulation is. Written as an inequality, with E_sim(x) the energy needed to simulate x and E(x) the energy the real x consumes:

E_sim(x) > E(x)

Let me explain. The average human burns around 2000 Calories a day, or about 9000 kilojoules of energy. How much energy would a computer simulation of a human require - one capable of doing all the same stuff (even in a virtual world) that you can in your day? Well, that's impossible to estimate because we can't simulate complete human brains (let alone the rest of a human). But here's one illustration. Lee Sedol played AlphaGo a few months ago. In a single 2 hour match he burned about 170 Calories - the amount of energy you'd get from an egg sandwich. In the same 2 hours the AlphaGo machine consumed around 50,000 times more energy.

What can we simulate? The most complex organism that we have been able to simulate so far is the nematode worm C. elegans. I previously estimated that the energy cost of simulating the nervous system of a C. elegans is (optimistically) about 9 J/hour, which is about 2000 times greater than the real nematode (0.004 J/hr).
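
Here is the arithmetic as a short Python sketch, using only the figures quoted above and the standard conversion of 4.184 kJ per (food) Calorie:

```python
KJ_PER_CALORIE = 4.184          # 1 food Calorie (kcal) = 4.184 kJ

# Lee Sedol vs AlphaGo, per 2-hour match (figures from the post)
human_match_kj   = 170 * KJ_PER_CALORIE        # ~711 kJ
alphago_match_kj = human_match_kj * 50_000     # tens of gigajoules

# C. elegans nervous system, per hour (figures from the post)
real_worm_j = 0.004
simulated_worm_j = 9.0

print(f"AlphaGo per match: ~{alphago_match_kj/1e6:.0f} GJ "
      f"(human: ~{human_match_kj:.0f} kJ)")
print(f"Simulated / real C. elegans energy ratio: ~{simulated_worm_j / real_worm_j:.0f}x")
```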

I think there are lots of good reasons that simulating complex systems on a computer costs more energy than the same system consumes in the real world, so I'll ask you to take my word for it (I'll write about it another time). And what's more, the relationship between energy cost and mass follows a power law - Kleiber's Law - and I strongly suspect the same kind of law applies to scaling up computational effort, as I wrote here. Thus, if the complexity of an organism o is C, then following Kleiber's Law the energy cost e of simulating that organism will be

e ∝ C^X

Furthermore, the exponent X (which in Kleiber's law is reckoned to be between 0.66 and 0.75 for animals and 1 for plants) will itself be a function of the fidelity of the simulation - hence X(F), where F is a measure of fidelity.
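
Putting the conjecture into a few lines of Python - where the proportionality constant k, the baseline exponent and the way X grows with F are all made-up placeholders chosen only to show the shape of the relationship, not estimates of anything:

```python
def simulation_energy(complexity: float, fidelity: float, k: float = 1.0) -> float:
    """Conjectured energy cost of simulation: e = k * C**X(F).
    X rises with fidelity F; the values here are illustrative placeholders only."""
    X = 0.75 + fidelity           # Kleiber-like 0.75 at F=0, growing as fidelity increases
    return k * complexity ** X

C = 1e9   # an arbitrary complexity value, just to show the scaling
for F in (0.0, 0.25, 0.5):
    print(f"F={F}: e ~ {simulation_energy(C, F):.2e} (arbitrary units)")
```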

By using the number of synapses as a proxy for complexity and making some guesses about the values of X and F we could probably estimate the energy cost of simulating all humans on the planet (much harder would be estimating the energy cost of simulating every living thing on the planet). It would be a very big number indeed, but that's not really the point I'm making here.

The fundamental issue is this: if my conjecture that to simulate complex system x requires more energy than the real x consumes is correct, then to simulate the base level universe would require more energy than that universe contains - which is clearly impossible. Thus we could not - even in principle - simulate the whole of our own observable universe to a level of fidelity sufficient for our conscious experience. And, for the same reason, neither could our super advanced descendants create a simulation of a duplicate ancestor universe for us to (virtually) live in. Hence we are not living in such a simulation.

Friday, June 03, 2016

Engineering Moral Agents

This has been an intense but exciting week. I've been at Schloss Dagstuhl for a seminar called: Engineering Moral Agents - from Human Morality to Artificial Morality. A Dagstuhl is a kind of science retreat in rural south-west Germany. The idea is to bring together a group of people from across several disciplines to work together and intensively focus on a particular problem. In our case the challenge of engineering ethical robots.

We had a wonderful group of scholars including computer scientists, moral, political and economic philosophers, logicians, engineers, a psychologist and a philosophical anthropologist. Our group included several pioneers of machine ethics, including Susan and Michael Anderson, and James Moor.




Our motivation was as follows:
Context-aware, autonomous, and intelligent systems are becoming a presence in our society and are increasingly involved in decisions that affect our lives. Humanity has developed formal legal and informal moral and societal norms to govern its own social interactions. There exist no similar regulatory structures that can be applied by non-human agents. Artificial morality, also called machine ethics, is an emerging discipline within artificial intelligence concerned with the problem of designing artificial agents that behave as moral agents, i.e., adhere to moral, legal, and social norms. 
Most work in artificial morality, up to the present date, has been exploratory and speculative. The hard research questions in artificial morality are yet to be identified. Some such questions are: How to formalize, "quantify", qualify, validate, verify and modify the "ethics" of moral machines? How to build regulatory structures that address (un)ethical machine behavior? What are the wider societal, legal, and economic implications of introducing these machines into our society?
We were especially keen to bridge the computer science/humanities/social-science divide in the study of artificial morality and in so doing address the central question of how to describe and formalise ethical rules such that they could be (1) embedded into autonomous systems, (2) understandable by users and other stakeholders such as regulators, lawyers or society at large, and (3) capable of being verified and certified as correct.

We made great progress toward these aims. Of course we will need some time to collate and write up our findings, and some of those findings identify hard research questions which will, in turn, need to be the subject of further work, but we departed the Dagstuhl with a strong sense of having moved a little closer to engineering artificial morality.