Dec 22, 2018

Michael Crichton's Andromeda Strain & Consequences of Cautery

I was in college when I read Michael Crichton's Andromeda Strain. If I try hard, I might remember the broad storyline and maybe a few scenes from the book. But it has been so many years since I read it that my recollections are unlikely to be accurate.

But there is one thing I clearly remember from the book: the shock, a kind of moral revulsion, I felt after I read this section.
Directive 7-12

Directive 7-12 was a part of the final Wildfire Protocol for action in the event of a biologic emergency. It called for the placement of a limited thermonuclear weapon at the site of exposure of terrestrial life to exogenous organisms.

The code for the directive was Cautery, since the function of the bomb was to cauterize the infection-- to burn it out, and thus prevent its spread.

As a single step in the Wildfire Protocol, Cautery had been agreed upon by the authorities involved-- Executive, State, Defense, and AEC-- after much debate. The AEC, already unhappy about the assignment of a nuclear device to the Wildfire laboratory, did not wish Cautery to be accepted as a program; State and Defense argued that any aboveground thermonuclear detonation, for whatever purpose, would have serious repercussions internationally.

The President finally agreed to Directive 7-12, but insisted that he retain control over the decision to use a bomb for Cautery. Stone was displeased with this arrangement, but he was forced to accept it; the President had been under considerable pressure to reject the whole idea and had compromised only after much argument. Then, too, there was the Hudson Institute study.

The Hudson Institute had been contracted to study possible consequences of Cautery. Their report indicated that the President would face four circumstances (scenarios) in which he might have to issue the Cautery order. According to degree of seriousness, the scenarios were:
  1. A satellite or manned capsule lands in an unpopulated area of the United States. The President may cauterize the area with little domestic uproar and small loss of life. The Russians may be privately informed of the reasons for breaking the Moscow Treaty of 1963 forbidding aboveground nuclear testing.
  2. A satellite or manned capsule lands in a major American city. (The example was Chicago.) The Cautery will require destruction of a large land area and a large population, with great domestic consequences and secondary international consequences.
  3. A satellite or manned capsule lands in a major neutralist urban center. (New Delhi was the example.) The Cautery will entail American intervention with nuclear weapons to prevent further spread of disease. According to the scenarios, there were seventeen possible consequences of American-Soviet interaction following the destruction of New Delhi. Twelve led directly to thermonuclear war.
  4. A satellite or manned capsule lands in a major Soviet urban center. (The example was Stalingrad.) Cautery will require the United States to inform the Soviet Union of what has happened and to advise that the Russians themselves destroy the city. According to the Hudson Institute scenario, there were six possible consequences of American-Russian interaction following this event, and all six led directly to war. It was therefore advised that if a satellite fell within Soviet or Eastern Bloc territory the United States not inform the Russians of what had happened. The basis of this decision was the prediction that a Russian plague would kill between two and five million people, while combined Soviet-American losses from a thermonuclear exchange involving both first and second-strike capabilities would come to more than two hundred and fifty million persons.
As a result of the Hudson Institute report, the President and his advisers felt that control of Cautery, and responsibility for it, should remain within political, not scientific, hands.

I don't remember exploring why I felt that way, why I felt I was reading about something morally wrong.

My sense is that we don't spend much time thinking about why something is right or wrong. We go by our instincts, by our feelings. Abraham Lincoln said, "When I do good, I feel good; when I do bad, I feel bad." I suppose we use a similar heuristic when we judge the morality of an act: "If we feel good, we decide it's moral, and if we feel bad, we decide it's immoral."

Now, after all these years, I wonder why I felt that way. Was it because of the unfairness of it all? After all, the people in Chicago or New Delhi or Stalingrad had no role in making the satellite; they didn't do anything to attract it. Yet there were some people sitting in Washington who could decide their fate. The fate of hundreds of thousands of people. Was it because the example was New Delhi? Would I have felt differently if it had been Beijing instead? Was I even comfortable with consciously thinking about having to make a decision about dropping nuclear bombs?

I don't know. But going by feeling is a luxury that not everyone has. Some have to explain their decisions. They have to give reasons. The passage from Andromeda Strain is an example of that: the President had to explain to the people who elected him why he made a specific decision, because he was making decisions on their behalf. He had to convince them that he made the right decision.

We have outsourced some of our decision-making to elected officials, and they owe us an explanation.

Increasingly, as a society, we are also outsourcing some of our decision-making to technology. Machines are doing a lot of the things we used to do. Most of them are rule-based, and right now they are acting in rather controlled environments, like industrial robots on shop floors.

But machines are coming off the shop floor. They are getting into areas where people used their judgment to make decisions, rather than following a flowchart in a rulebook -- areas like giving loans, or driving a car. Some of it might involve choosing between two equally right options.
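
To make that shift concrete, here is a minimal sketch in Python. Everything in it is hypothetical - the rules, thresholds, and weights are invented for illustration - but it shows the difference between a machine that follows an explicit flowchart and one that exercises something closer to judgment:

    # Hypothetical rule-based loan decision: every branch was written by a
    # human, and each rejection can be read back as an explicit reason.
    def approve_by_rules(income, credit_score, existing_debt):
        if credit_score < 600:
            return False  # invented rule: poor credit history
        if existing_debt > 0.4 * income:
            return False  # invented rule: debt-to-income cap
        return True

    # Judgment-like decision: the weights come from training data, so the
    # "reason" behind any single decision is a number, not a rule anyone wrote.
    def approve_by_model(features, weights, threshold=0.5):
        score = sum(f * w for f, w in zip(features, weights))
        return score > threshold

The first function can be audited line by line; the second can only be audited statistically, which is exactly why the moral assumptions baked into it are harder to see.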

There is a lot of debate about how ethical machines will be. (In 2015, even I wrote a piece about that on The Hindu blog, arguing that a utilitarian approach has a better chance than a deontological one.)
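
To see what that distinction looks like once it is actually written into code, here is a toy sketch (the actions, harm numbers, and forbidden list are all invented): a utilitarian machine scores outcomes and picks the least harmful one, while a deontological machine rules some actions out no matter what the scores say.

    # Toy utilitarian choice: pick the option with the least expected harm.
    def utilitarian_choice(options):
        return min(options, key=lambda o: o["expected_harm"])

    # Toy deontological choice: discard forbidden actions first,
    # regardless of how their outcomes score.
    def deontological_choice(options, forbidden):
        permitted = [o for o in options if o["action"] not in forbidden]
        return permitted[0] if permitted else None

    options = [
        {"action": "deceive_user", "expected_harm": 1},
        {"action": "tell_truth",   "expected_harm": 5},
    ]

    print(utilitarian_choice(options))                      # picks deception: less total harm
    print(deontological_choice(options, {"deceive_user"}))  # refuses it on principle

The utilitarian version reduces ethics to something a machine can compute, which was the gist of my argument; whether that is the right thing to compute is still the open question.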

Maybe it will take some time before machines learn ethics. What is important today is to understand how technologists think about ethics and how their moral worldview gets reflected in the algorithms they write. Which means that journalists who write about tech should also be well versed in that - not just in what the tech will do, not just in what the business model is, not just in the value proposition - but also in the moral worldview behind it.

