Saturday, July 12, 2014

Man and Machine - Part 4 - All Hail Colossus!

We begin our last chapter in this completely idiosyncratic and all-too-brief overview of the cultural relationship between man and his machines with a look at man's greatest technological innovation (to date), and his greatest looming threat - the computer.  To do so, we naturally begin on a lonely island, far from any form of civilization, with a group of stranded schoolboys:

There's the first page and the last page, and a giant disturbing sandwich in between.

Nobel laureate William Golding's tale of English schoolboy survivors of a plane crash adapting to a new life and society on a remote South Seas island is full of the stuff that makes great literature - lots of questions and ambiguities, few certain answers.  Warning - there be spoilers ahead, matey!

Simply put, Ralph leads the small group of boys who want to retain the order of their previous existence, while Jack and the "hunters" begin reverting to a more primeval culture.  The two groups clash, and the conflict results in some shocking violence.  The question at the core of the book is this: do Jack and his crew's attitudes and behavior represent immorality, unmorality or amorality?

The answer is key to understanding human behavior.  For Jack and Co., we are left wondering whether they knowingly, immorally, violated the laws and codes of a society to which they no longer belong, or whether their circumstances forced them to adopt a primitive, amoral pattern of behavior as the only choice for survival.

It's a question scientists may one day have to grapple with as they push toward creating "thinking" computers.  Will we expect these advanced machines simply to "think" very quickly and supply objective solutions to problems - amoral servants coming up with answers - or will we want them to "reason" through information by applying society's moral and ethical codes?

Once again, philosophers, writers, and filmmakers all jump into the fray to show the potential benefits and problems of creating computers capable of independent thought - but free from the social and cultural restraints and expectations that modify human behavior.

I am, of course, talking about Colossus: The Forbin Project!


Outdated tech - still creepy!

Colossus was created by the soon-to-be-less-smug Dr. Charles Forbin as a completely logical solution to ending the Cold War - a machine devoid of human ego and pettiness, free from rancor and pride, objectively weighing the costs and benefits of every decision.



But it does so a little too independently and objectively.  Colossus wouldn't think killing 20 million Russians was a big deal if only 10 million Americans turned into glow sticks in exchange.  Cold War?  Problem solved, so stop whining, you human crybabies!



It was not always thus.  Charles Babbage probably spent precious little time worrying about morality when designing his difference engine.  This early computer, had he gone ahead and constructed one from his designs, was really a beautifully efficient calculating machine - nothing more.  In this sense, machines are considered unmoral - ideas of morality are as foreign to them as they are to a pencil.



It really is a work of art.
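
And it really is just arithmetic.  A difference engine tabulates the values of a polynomial using nothing but repeated addition of fixed "differences" - no judgment, no understanding, just cranking the next number into place.  To see how little moral content there is in that, here is a rough Python sketch of the method of differences the engine mechanized (the function names and the demo polynomial are mine, purely for illustration):

# A minimal sketch of the method of differences that Babbage's engine mechanized.
# Function names and the demo polynomial are chosen purely for illustration.
def initial_differences(coeffs, degree):
    """Build the starting column: f(0) and its 1st..nth forward differences."""
    # Evaluate the polynomial at x = 0, 1, ..., degree the "hard" way, once.
    values = [sum(c * x**i for i, c in enumerate(coeffs)) for x in range(degree + 1)]
    column = []
    while values:
        column.append(values[0])
        values = [b - a for a, b in zip(values, values[1:])]
    return column

def tabulate(coeffs, degree, count):
    """Produce `count` successive values of the polynomial using only addition."""
    column = initial_differences(coeffs, degree)
    table = []
    for _ in range(count):
        table.append(column[0])
        # One "turn of the crank": each cell absorbs the difference below it.
        for i in range(degree):
            column[i] += column[i + 1]
    return table

# f(x) = x**2 + x + 41 (Euler's famous prime-generating polynomial)
print(tabulate([41, 1, 1], degree=2, count=8))
# -> [41, 43, 47, 53, 61, 71, 83, 97]

A machine that does this can be exquisitely useful and exquisitely precise, and still have nothing whatsoever to say about right and wrong.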


The next step was to miniaturize these machines, so instead of taking up entire buildings, as did our friend Colossus, they became more manageable.  Still, early computers weren't all that powerful or reliable.  When NASA ran into a thorny math problem during Apollo 13, they didn't crank up the Univac:

Slide rule goodness!


I'll bet Tom Hanks wished he had brought along one of these:


The Curta pocket mechanical calculator - just Google it!


But soon computers did become small and powerful - thanks to this guy:


Not the Rob Roy - the radio!

Transistors began the march toward smaller, faster, more complex processors - and the birth of the silicon-based computer chip. 

Early chip-based items, like calculators, were really simple - but extremely costly.  Calculators were crazy expensive in the 1970s, and having one with a square root key tagged you as a complete nerd.  Most chip-based machines were unbelievably crude by today's standards.  Go back and watch the James Bond film "Live and Let Die": Roger Moore holds up his arm in a close-up and punches up the time - red numbers on a black background - on his Hamilton Pulsar P-2.  Viewers these days may wonder, "what's up with the close-up?"  But when the film was released in 1973, audiences actually gasped at that moment; no one had ever seen such a watch, or imagined such a thing was possible at that size - and all it did was display the time.

But it was just one interesting cultural milestone on the road to making computers insanely fast and accurate.  While these unmoral machines kept computing away, scientists, such as Alan Turing, began wondering if this speed and computational ability could be structured to simulate human thinking.  Trying to outline what goes into human cognition is probably way too much for a single blog post, but put simply: if human thought could be broken down into a series of steps, choices and alternatives, filtered through guidelines for making those choices, then algorithms could be written that would allow computers to simulate thought.  As Marvin Minsky of the Massachusetts Institute of Technology points out, we are talking about simulating thought, not the real thing: 
Most people think computers will never be able to think. That is, really think. Not now or ever. To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. Because of this, the computer has to obey those commands, but without any idea of what's happening.
So this looks like thinking, but really it is just calculations:



Minsky explains what just happened like this:
... today, many AI researchers aim toward programs that will match patterns in memory to decide what to do next. I like to think of this as "do something sensible" programming. A few researchers -- too few, I think -- experiment with programs that can learn and reason by analogy. These programs will someday recognize which old experiences in memory are most analogous to new situations, so that they can "remember" which methods worked best on similar problems in the past.
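
If "match patterns in memory to decide what to do next" sounds abstract, here is a toy Python sketch of the idea - the remembered situations, the features, and the scoring are all invented for illustration, not anything from Minsky:

# "Do something sensible" programming, reduced to a toy: compare the current
# situation to remembered patterns and act on the closest match.
MEMORY = {
    # remembered pattern of features           -> action that "worked" before
    frozenset({"hungry", "food_nearby"}):         "eat",
    frozenset({"tired", "safe_location"}):        "sleep",
    frozenset({"threat_nearby", "exit_open"}):    "flee",
}

def do_something_sensible(situation):
    """Pick the action whose remembered pattern shares the most features."""
    best_action, best_overlap = "do_nothing", 0
    for pattern, action in MEMORY.items():
        overlap = len(pattern & situation)   # count shared features, nothing more
        if overlap > best_overlap:
            best_action, best_overlap = action, overlap
    return best_action

print(do_something_sensible({"hungry", "food_nearby", "raining"}))   # -> eat
print(do_something_sensible({"threat_nearby", "exit_open"}))         # -> flee

The output looks vaguely sensible, but the program is just counting overlapping labels.  It has no idea what hunger, fatigue or a threat actually is - which is exactly Minsky's point about obeying commands "without any idea of what's happening."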

Computers have never thought like humans, and never will.  But are there other forms of cognition - not human cognition - that could lead to an operational form of intelligence, an independent intelligence unlike our own?  The answer seems to be no, for now.  But that is now.  Let's hope it never becomes this, which is really one of the saddest and most disturbing moments in sci-fi film history:




Theorist Marshall McLuhan once said, "we become what we behold... we shape our tools and afterwards our tools shape us."  It was in a different context - let's hope it never becomes applicable to this topic.

Next week - designs for learning!



