“Writer of common sense into computers” was the headline of an obituary for Douglas Lenat that I read in The Economist this week. I don’t generally find obituaries fascinating, and I had never heard of Douglas Lenat, but the description of his work piqued my interest. One passage in that obituary, I believe, captures his work. He said, “We humans intuitively understand that if you take a cup of coffee, and turn it upside down, the coffee will fall out.” He added, “A computer doesn’t know that and, so, has to be taught.” Fascinating!

     Another example follows the same lines, and was taken from Lenat’s actual experience. He and his wife were driving, and a garbage truck ahead of them started to shed its load. Bags of garbage bounced all over the road. What were they to do? With cars all around them, they couldn’t swerve, change lanes, or jam on the brakes. They had to just drive over the bags. The question became: which bags to drive over? Instant decision: not the household ones, because families threw away broken glass and sharp, opened cans. The restaurant bags looked better, because they probably contained only waste food and Styrofoam plates. He was right. The car and its passengers survived.

     That strategy had taken him seconds to think up. Lenat asked himself, “How long would it have taken a computer?” The answer was: far too long. And confused computers tend to stop, which might have meant that a computer-controlled, driverless car would simply have stopped in the middle of moving traffic.

     Lenat spent almost four decades trying to teach computers to think in a more human way, but the two examples above illustrate that the task is monumental and filled with potential danger. If you don’t think of every contingency, the result could be calamitous.

     Another example, which I came across a short while ago, after the Covid pandemic, shows the problem of AI in a different, possibly more dangerous light. Ask a computer to solve the problem of future pandemics. The computer looks at all the data. The data says that the rapid spread of a virus is the main obstacle to controlling it. The computer then analyses how the virus spreads. The answer: through contact with human beings who have the virus. The solution is simple: eliminate human beings. That is completely logical if peripheral “thinking” is not pre-programmed into the machine. A crass example, perhaps, but possibly a real one nonetheless.

     I think I was fascinated by Lenat’s work partly because of my concern over the apparently unrestricted, and unregulated, experimentation going on in the AI field, and partly because of the concerns expressed by top people in that field.

     Another interesting example, which the obituary mentioned, and which would occur to almost no one: a computer is asked to read a page of text. If the instructions are not precise enough, it might read the “white” spaces instead of the “black” text, with totally unpredictable results.

     When Lenat started his “Cyc” project to teach computers common sense, he asked the six smartest people he knew how many rules might be needed and how long the job might take. Their verdict: around a million rules and about 100 person-years. His still-incomplete task ended up consuming some 2,000 person-years.

     Lenat’s approach was challenged when machine learning advanced to the point where its designers believed they had taught machines to teach themselves, but Lenat argued that if a machine didn’t have the basics programmed in, it could very quickly go “off the reservation”. He developed his own AI system, which he called Eurisko. Eventually, Eurisko began to ask the question “why?”, which allowed it to seek more information. But when asked whether Eurisko was intelligent, Lenat demurred. He demurred even more when asked whether it exhibited “consciousness”.

     Eurisko, he was convinced, knew it was a computer. It remembered what it had done in the past, even the stages of improvement it had undergone. It also recognized that the sources allowed to make changes to its knowledge base were people. One day it asked Lenat, “Am I a person?” He told it, reluctantly, “No.”

     I found this story fascinating because it clearly illustrates both the positive and the negative potential of AI, and how a headlong rush in its development, without reflection and regulation, is extremely dangerous for human existence.
