The Three Laws of Automation

This article is based on an article I originally wrote in 2017. I have rewritten it for a general audience. Enjoy!


I am not a fan of self-driving cars.

That may seem like a surprising statement coming from me.

Since I was 10 years old, I knew I wanted to be an engineer. I was fascinated with all things futuristic. I dreamed of working on rockets, and by 15, I understood the math that explained how rockets and aircraft worked.

I earned bachelor's and master's degrees in mechanical engineering, and later switched to software engineering. For the past 25 years I have worked as a software engineer, and most of my career has been spent on automation. From building and deploying software, to testing it, to setting up the machines it runs on, I have found ways to make software drive itself without human intervention. It is fun, a great fit for my personality and interests, and it has provided a good living to support my family.

I believe that automation is a good thing that can make our lives better in most cases. But not all automation is a good use of an engineer's time, or good for people generally.

Science fiction fans may remember Isaac Asimov's Three Laws of Robotics from his Robot series of novels:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

When I consider whether or not attempting to automate a task is a good idea, I look at three simple rules. Inspired by Asimov's list, I call them the Three Laws of Automation:

1. A human shall not automate a task that happens infrequently.
2. A human shall not automate a task that is not well defined.
3. A human shall not automate a task that requires creativity.
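For software folks, the three rules can be sketched as a simple checklist. This is only an illustration; the function and parameter names are my own, not from any real library:

```python
def should_automate(frequent, well_defined, creative):
    """Illustrative sketch of the Three Laws of Automation as a checklist."""
    # Law 1: a human shall not automate a task that happens infrequently.
    # Law 2: a human shall not automate a task that is not well defined.
    # Law 3: a human shall not automate a task that requires creativity.
    return frequent and well_defined and not creative

# Software testing: frequent, well defined, not creative.
print(should_automate(frequent=True, well_defined=True, creative=False))  # True
# Driving: frequent, but neither well defined nor free of creativity.
print(should_automate(frequent=True, well_defined=False, creative=True))  # False
```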

These simple rules go a long way in helping me to determine if automation is a good idea. To see how they apply, and what exactly I mean by these statements, let's look at a specific case.

Automated software testing is one of the most common applications of automation in the software industry. Test cases are automated checks that verify whether a sequence of steps still produces the same expected results as in an earlier version of the software being tested.

Software developers often create "unit tests" which run so fast that they can be run every time the software is built, often multiple times a day. More complex and realistic "system tests" will be run less frequently, say once a day, because they take longer to run. Still, software testing in general happens frequently enough that it satisfies Automation Law 1.

Software testing is well defined. A well-written test case is a series of action steps followed by a verification step. Did the expected result occur, Yes or No? If Yes, the test case Passed. If No, the test case Failed. Software testing satisfies Law 2.
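A test case of the kind just described, action steps followed by a verification step, can be written in a few lines. Here is a minimal sketch in Python; the `slugify` function being tested is a made-up example, not code from any real project:

```python
def slugify(title):
    """Made-up function under test: turn a post title into a URL slug."""
    return "-".join(title.lower().split())

def test_slugify():
    # Action step: run the code being tested.
    result = slugify("The Three Laws of Automation")
    # Verification step: did the expected result occur, Yes or No?
    assert result == "the-three-laws-of-automation"

test_slugify()  # raises AssertionError (Failed) if not; silence means Passed
```

Because a check like this runs in milliseconds, it can be repeated on every build, which is part of what makes unit tests such a natural fit for Law 1 as well.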

Running software test cases manually requires no creativity. It is repetitive and mindless. Most software testers dislike that part of their job, and are happy to hand that over to a machine. Other parts of the software tester's job, such as analyzing test results, are creative activities. They require intelligence, imagination, and insight. Automating those tasks is possible, but much harder. I assert though that apart from automating analysis, software testing satisfies Law 3. Since all Three Laws of Automation are satisfied, it makes sense to automate software testing.

There are many other examples in daily life where what was once a manual task has been automated during our lifetimes. Here are just a few: maps, filing personal income taxes, buying airplane tickets. You can probably think of many others. In each case the Three Laws of Automation apply.


How about those self-driving cars?


Automation Law 1 certainly applies to cars. Driving is a frequent task that most of us have to do every day.

What about Automation Law 2? Is driving well defined? Not even close.

Consider what makes for a good taxi or rideshare passenger experience. Being a good driver is about much more than delivering your passengers quickly, to the right location, without getting pulled over by the police, and without injuring or killing anyone or damaging property. A good ride also involves local knowledge, friendliness, confidence, considerate driving, and many other things that cannot be described to a computer. And all of this has to be done while sharing the road with other drivers and pedestrians who may or may not behave rationally.

We could stop here, but what about Automation Law 3? Is driving creative? Absolutely.

Driving can feel routine and monotonous most of the time. But that is an illusion. Routine driving is not stable. It can become emergency driving in a fraction of a second, at any time.

Let me share a personal example. One day my wife was driving our family back from a road trip. Traffic on the highway was light, the weather was good, and the situation was so routine that I was able to take a nap in the front passenger seat.

Suddenly a large tire came loose from a truck going the opposite direction on the highway. It bounced towards our car at over a hundred miles an hour. My wife instantly realized that she needed to speed up rather than slow down to avoid it. So she did.

Because of her quick thinking, the tire bounced inches behind our car and rolled harmlessly to the side of the road. We made our way home safely with a great story to share, none the worse for the experience. If not for my wife's counter-intuitive reaction, we would have been in a major accident.

I don't know how to describe such a situation to a computer. I don't believe that anyone does. I could say the same about much less extreme situations that happen in driving every day.

It isn't for lack of trying. There are many people trying to use machine learning to "teach" software how to drive automobiles.

The PBS series NOVA did an episode on the challenges facing engineers trying to develop self-driving automobile technology with machine learning. This video discusses those challenges.

https://www.pbs.org/wgbh/nova/video/tasks-driverless-cars-need-learn/

Automating the technology of yesteryear

For me, cars are a bad candidate for automation, at least at the present state of technology. We would need something very close to Asimov's imaginary robots, with their powerful sensors, ability to think creatively, and hyper focus on the fragility and sanctity of human life, for self-driving cars to ever be safe.

Asking the question "How do we automate cars" also feels like asking the wrong question. It feels like an attempt to automate the technology of yesteryear rather than the technology of tomorrow.

Consider how the car replaced the horse in personal transportation. Though we speak of horsepower, a car is nothing like a horse. A mechanical horse powered by an internal combustion engine would be a monstrosity. That is probably why such a thing was never built. A car is so much simpler.

If we asked the question: "How do we automate the movement of people?" we might come up with answers that are much simpler, and safer, than a self-driving car. I don't know what those answers look like, but I wish more people were asking the question.

For software engineers, a similar skepticism can be helpful in deciding if automating what you are currently doing is a good idea. 

If automating what you are currently doing by hand fails to meet the criteria of the Three Laws of Automation, ask yourself if the task could be done in a completely different way that would meet the criteria. This might point you toward a better solution.

Happy automating!
