Technology that outthinks us: A partner or a master?

In Vernor Vinge’s version of Southern California in 2025, there is a school named Fairmont High with the motto, “Trying hard not to become obsolete.” It may not sound inspiring, but to the many fans of Vinge, this is a most ambitious — and perhaps unattainable — goal for any member of our species.

Vinge is a mathematician and computer scientist in San Diego whose science fiction has won five Hugo Awards and earned good reviews even from engineers analyzing its technical plausibility. He can write space operas with the best of them, but he also suspects that intergalactic sagas could become as obsolete as their human heroes.

The problem is a concept described in Vinge’s seminal 1993 essay, “The Coming Technological Singularity,” which predicted that computers would be so powerful by 2030 that a new form of superintelligence would emerge. Vinge compared that point in history to the singularity at the edge of a black hole: a boundary beyond which the old rules no longer applied, because post-human intelligence and technology would be as unknowable to us as our civilization is to a goldfish.

The Singularity is often called “the rapture of the nerds,” but Vinge doesn’t anticipate immortal bliss. The computer scientist in him may revel in the technological marvels, but the novelist envisions catastrophes and worries about the fate of not-so-marvelous humans like Robert Gu, the protagonist of Vinge’s latest novel, “Rainbows End.”

Read it all.

3 comments on “Technology that outthinks us: A partner or a master?”

  1. Rich Gabrielson says:

    The professional society I belong to has collected views from across the range of opinion on “the singularity” here. See especially the lead article, “The Consciousness Conundrum.”

    Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.” My earliest recollection of the hype surrounding “just plain old computers” was the 1954 collection “Science Fiction Thinking Machines” by Groff Conklin: computers falling in love, getting spring fever, flawlessly imitating a virtuoso pianist, etc. In later years the hype was transferred to what would become known as artificial intelligence (AI), and the debate rages on over whether AI will eventually replace human intelligence or enhance it.

    With 30 years in the field of Computer Engineering and Computer Science, my take is that there’s less to it than meets the eye. Hindsight has always shown that we overrate our abilities in two key areas: the ability to understand how the human mind works, and the ability to create fault-free computer code. On the first, by the time an AI project manages to emulate the human intellect as it was understood at the beginning of the project, that understanding proves to have been woefully inadequate.
        Psychologist: “The human mind does X.”
        AI practitioner, several years later: “Now computers can do X.”
        Psychologist: “Compared to what we know now, X is trivial and uninteresting.”
    (The second area is part of the reason the project took so long.)

    All this is pretty trivial compared to the metaphysical questions, which are probably what Kendall had in mind in the first place.

  2. John Miller says:

    As someone who has studied a bit of AI theory, I can say that Vinge’s theory is based on a poor understanding of cognition and computers. Two thoughts here. First, on human cognition: there is a latency of roughly 1 ms for an impulse to move from one neuron to the next, and most human reactions happen in less than 1 second (some in as little as 350 to 450 ms). Given these two observations, the longest path for a cognitive impulse can be no more than about 1,000 neurons long. Compare that to a relatively slow modern computer, which can chain together 1 billion instructions per second (a 1 GHz clock speed). The trick, of course, is that brains have massively parallel networks, while a CPU is considered high end if it has 8 execution cores. (A back-of-envelope sketch of this arithmetic appears after the comments.)

    Second thought is on computers. Computers are developed to think about things that human brains are poor at: complex arithmetic calculations, accurate storage of information, and boring repetitive tasks. Even in fields where researchers try to mimic neural models, the goal is not to create a cognitive equivalent to humans, but to complete specific tasks that humans are good at but that would become tedious at scale (e.g., identifying the content of pictures on the web, or sorting documents by content).

    The idea that computers are going to develop creature-like cognition just because they are more powerful is rather obtuse. Transistor gates and neurons form very different optimal solutions to problems, and as tools computers are far more likely to augment than mimic human cognition. In this sense computers already outthink humans when it comes to arithmetic, but the computer that contemplates its own existence is still technologically distant, and also quite useless except as a research gimmick.

  3. Harvey says:

    A quote: “To err is human, but to really screw things up requires a computer.” My personal feeling for my great-granddaughters is that we should be more concerned with rightly training that “slow” computer between their ears. I still repeat the story of my favorite professor in college, who didn’t mind anyone using a hand-held programmable computer but insisted they give him a copy of the programs they developed, so he could at least give them credit for the good parts of a program and take credit away for the incorrect entries. Those who didn’t supply their programs received little if any credit for their work, because there was nothing available to check it against!
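
For anyone who wants the numbers in comment 2 laid out, here is a minimal Python sketch of that back-of-envelope comparison. It uses only the figures the commenter quotes (a roughly 1 ms neuron-to-neuron latency, sub-second reaction times, and a 1 GHz single-core machine); it illustrates the comment’s arithmetic and is not an independent measurement.

    # Back-of-envelope version of the serial-depth argument in comment 2.
    # All numbers are the comment's own assumptions, not measurements.

    SYNAPTIC_LATENCY_S = 1e-3  # ~1 ms for an impulse to hop from one neuron to the next
    REACTION_TIME_S = 1.0      # upper bound; many reactions finish in 350-450 ms
    CPU_CLOCK_HZ = 1e9         # a relatively slow modern CPU (1 GHz), one instruction per cycle assumed

    # Longest serial chain of neurons that fits inside one reaction:
    max_neuron_chain = REACTION_TIME_S / SYNAPTIC_LATENCY_S

    # Instructions a single core can chain together in the same interval:
    serial_instructions = REACTION_TIME_S * CPU_CLOCK_HZ

    print(f"Longest serial neuron chain per reaction: ~{max_neuron_chain:,.0f} hops")
    print(f"Serial CPU instructions in the same time:  ~{serial_instructions:,.0f}")

    # The comment's point: the brain's edge is massive parallelism (many neurons
    # firing concurrently), not serial depth, while even a high-end CPU has only
    # a handful of execution cores.

Run as written, this prints roughly 1,000 neuron hops against a billion serial instructions, which is the contrast the commenter draws before pointing to parallelism as the real difference.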