
Danger, Will Robinson!

by John McBain


How do you know danger is approaching without a helpful robot to warn you? Once you see the danger, how can you avoid it? These questions — sans robot — were important to cavemen wary of sabretooth tigers. They are important to cellphone-using pedestrians crossing the street today, and they will be important to future space travelers, whether or not they are lost in space.

After working several decades as a product safety engineer, I understand that most people are poor at avoiding danger. Products cannot be foolproof, only fool-resistant, because fools are so darned clever at bypassing safeguards and ignoring warnings. In other words, people do what is convenient, not what is safe.

Look at the safety warnings in the user guides for your tools or kitchen appliances. Can you honestly say you’ve followed all those instructions? Or even read them?

We have a word for cavemen who ignored warning signs of sabretooth tigers or cave bears: lunch. Lions and tigers and bears, unlike modern chainsaws, didn’t come with large, yellow warning labels reading, “DANGER! Stay well clear of pointy end!” Perhaps a few mistakes were understandable. Nevertheless, people still manage to avoid reading cautions, seeing warning labels or using common sense.

Where is the robot we need to save us? In the future, of course!

Let’s speculate about possible future hazards, and how they might be perceived and mitigated — or not — by ordinary people. To help chart our speculative trajectory, we first need to consider past and present hazards. Any trends can be projected forward with the usual warning that past performance is no guarantee of future behavior.

A few hundred years ago — yes, we’re done with cavemen — many hazards fell into the same categories as now: fire, explosion, impact, mechanical (pinching, cutting and crushing), chemical (poison), electricity, disease and even light. Some categories were more limited: electricity included only lightning, and light did not include lasers. Some hazard sources and mitigations were quite different, and the ranking of hazards, from “That’s probably how I’ll die” to “I couldn’t care less,” certainly was not the same.

One other important difference is the increased effort in our time to make products, houses, vehicles, clothing and everyday life safer. Producers of goods and services, from toy makers to road builders, take precautions previously never considered, perhaps because the state of the art has advanced, government regulations and standards have multiplied, and lawyers have had success in winning large settlements.

To recap, we have past, present and future hazards, and mitigations to reduce risk of harm. For each of these we can consider past trends and extrapolate — i.e. guess — what might happen. Please note that no alien races, elves or other supernatural beings, time travel, matter transmitters or even psychic powers will be considered! Oh, all right, I might consider one or two, but that’s it! And any technological singularity that will abruptly trigger runaway technological growth resulting in unfathomable changes to human civilization... well, it will have to stay unfathomable.

First example: take public transportation hazards, subcategory land travel, in the years 1700, 2000 and 2300. The primary direct hazards are impact and crushing for pedestrians and riders (drivers and passengers), and these hazards do not change. However, the mitigations and risks do change!

Let’s digress for a moment to define “risk,” which we agree should be as low as possible, right? (Risk for the hazard) = (severity of the outcome) × (probability of occurrence).
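That formula is simple enough to play with directly. Here is a minimal sketch in Python; the severity ratings and probabilities below are invented for illustration only, not real accident statistics.

```python
# Risk = (severity of the outcome) x (probability of occurrence).
# All numbers here are hypothetical, chosen only to illustrate the idea.

def risk(severity, probability):
    """Combine an outcome's severity with its probability of occurring."""
    return severity * probability

# Severity on a made-up 1-10 scale; probability per encounter (both invented):
stagecoach = risk(severity=4, probability=0.01)   # broken bones; easy to see coming
automobile = risk(severity=10, probability=0.02)  # death; faster and more frequent

# Even though both hazards are "impact," the risks differ:
print(automobile > stagecoach)  # True
```

Notice that you can lower the risk by attacking either factor: padding and crumple zones reduce severity, while training and vigilance reduce probability.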

If the possible severity does not change, then the probability must change in order to affect the risk. Therefore, mitigations to reduce the risk often will reduce the probability that the hazard will actuate and cause harm. Now, back to public transportation.

Early transportation used animal power, and runaway horses could cause severe injury or death! You’ve seen the old cowboy movies with the traditional stagecoach heading for the cliff, only to be stopped at the last minute. Typical mitigations — if you don’t count heroic leading men — would be training for both horses and drivers or riders, and personal vigilance.

The horse usually did not want to run over people, so that helped, too. The risk of a serious accident — remember “severity × probability” — while not zero, was reasonably low. You could see the stagecoach or horse coming and get out of the way, thus reducing the probability. The stagecoach was not going 65 miles per hour, and the road was not paved, thus reducing the likely severity; one would more likely suffer broken bones than death.

Now we have automobiles and trucks that kill tens of thousands of people every year. The possible severity of injury remains the same (death), but the probability has increased. The reasons include faster speeds, more interactions — we travel more these days — and the absence of horses; they really did not want to run over people. Moms tell their kids, “Look both ways before crossing the road.” Cars are a significant danger to pedestrians, cyclists and motorists, and many of us have been in car accidents and even lost friends. The risk is higher today than it used to be.

Does that mean the trend is towards ever-increasing danger? Should we expect the future to be dominated by vast slaughters of people by cars and trucks? Absolutely not, in spite of the Transformers movies!

We already have seen the beginning of the future: self-driving cars. Annual transportation deaths started to decrease with seat belts, air bags and crumple zones, and a sharper downward inflection is beginning today. Mitigation will be built into the product, so that future Moms will tell their kids, “...” That is, they won’t have to tell them anything. Why would anyone think a car could run over somebody? Ridiculous! The helpful robot finally appears, though not like “Robbie,” in humanoid form.

Speaking of robots, consider the dark side: the hazardous robot. Ever since Rossum’s Universal Robots, almost a hundred years ago, stories have warned about the danger of human-created automatons. More accurately, I should include Talos, the mythical bronze giant, probably the first robot in history, which protected Minoan Crete from would-be invaders. Wonderful, helpful robot, when he was on your side, not so much when he was not. Such behavior has been shown more recently by the robot in the Terminator movies.

Robots’ product safety is a big deal these days, and new safety standards have been published in the last few years. There is a lot of interest in collaborative robots — “cobots” — that can work with humans in the same space, such as side-by-side in a factory, without having to be separated from human workers by fences, light curtains or physical distance. Risk reduction can include limiting force or slowing motion, thus reducing possible severity, and adding proximity detectors or programming robots to anticipate and avoid possible harmful situations, thus reducing probability.

The state of the art is advancing rapidly, but consider the product safety axiom, “It’s not how it works, it’s how it breaks.” Risk assessment must consider not only the harm that could happen with correct functionality, but also the harm that could happen with a component or a software failure. But that’s not all; the worst is yet to come!

The concern that keeps product safety engineers up at night is “foreseeable misuse.” When I tell R&D engineers they must add a safeguard to a product to prevent a potentially dangerous action by the user, the response often is, “Who would ever do something that stupid?” The Darwin Award celebrates the people who have rushed to answer that question and consequently removed themselves from the human gene pool. Of course, their actions may remove a number of bystanders as well, which is a bit unfair.

The more powerful the device, the more serious the possible result could be. A mishandled 747 jet is more bothersome than a mishandled bicycle. A mishandled interstellar transport would be still more disastrous, as some science fiction stories have pointed out. Unfortunately, we worry more now about intentional misuse, rather than stupidity or component failure. Looking once more at transportation hazards, a foreseeable misuse would be that a terrorist could drive a truck into a crowd of people to kill and injure them. What is the mitigation for that?

Robots are even scarier, especially as they are given control over more aspects of human life with the expansion of the Internet of Things (IoT). When they work correctly, they can make our lives easier, our work more efficient and our leisure more plentiful. When they break — unintentionally or on purpose — well, you’ve read those stories, right?

The trend for robots, unlike transportation, seems likely to be increasing risk in the future. Even if the probability of harm for each robot is kept as low as possible, and that could be very low, there will be many, many more robots, and robots will be controlling more dangerous situations. As I said before, the self-driving car may be much less likely to run over a pedestrian, and it will be able to drive more safely on highways, with closer vehicle spacing and faster speeds. That will get you places faster and more easily and reduce the cost of building new highways. But what happens if a component fails or the software glitches when cars are 5 feet apart at 100 miles per hour? Or what if someone hacks your brakes or steering? Hm... there must be a good mystery story here.

We see there can be conflicting trends in product safety, as in most aspects of life. The key is to project what may be possible, both good (penicillin saves thousands of lives) and bad (drug-resistant germs kill thousands of people). And one can imagine even worse: a bio-engineered plague destroys the human race. What happens if the new device works and the trend continues? What happens if it breaks? What if it does both? How could it break or be broken, and how can that be prevented?

Only after we pursue those ideas to the point of silliness and beyond, to the one in a thousand chance, or one in a million, or one in a billion, will we be approaching what happens in the real world. The stories I could tell you! But that’s for next time; right now, I hear my helpful robot calling me.


Copyright © 2018 by John McBain