A Grand Unified Theory of Artificial Intelligence

[Image: the thinker]

Early AI researchers saw thinking as logical inference: if you know that birds can fly and are told that the waxwing is a bird, you can infer that waxwings can fly. One of AI’s first projects was the development of a mathematical language — much like a computer language — in which researchers could encode assertions like “birds can fly” and “waxwings are birds.” If the language was rigorous enough, computer algorithms would be able to comb through assertions written in it and calculate all the logically valid inferences. Once they’d developed such languages, AI researchers started using them to encode lots of commonsense assertions, which they stored in huge databases.
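
The flavor of that approach is easy to sketch. Here is a toy forward-chaining loop in Python; the predicates, the rule format, and the engine itself are invented for illustration and stand in for no particular early system:

```python
# Toy forward chaining over one-premise rules, in the spirit of early
# logic-based AI. A rule like {"bird": "can_fly"} encodes "if X is a
# bird, then X can fly"; facts are (predicate, subject) pairs.

facts = {("bird", "waxwing")}        # "the waxwing is a bird"
rules = {"bird": "can_fly"}          # "birds can fly"

def infer(facts, rules):
    """Apply rules to known facts until nothing new can be derived."""
    derived = set(facts)
    frontier = set(facts)
    while frontier:
        pred, subject = frontier.pop()
        if pred in rules:
            new_fact = (rules[pred], subject)
            if new_fact not in derived:
                derived.add(new_fact)
                frontier.add(new_fact)  # new facts may trigger more rules
    return derived

print(sorted(infer(facts, rules)))
# [('bird', 'waxwing'), ('can_fly', 'waxwing')]
```

Comb through the assertions, and the valid inference "waxwings can fly" falls out mechanically.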

The problem with this approach is, roughly speaking, that not all birds can fly. And among birds that can’t fly, there’s a distinction between a robin in a cage and a robin with a broken wing, and another distinction between any kind of robin and a penguin. The mathematical languages that the early AI researchers developed were flexible enough to represent such conceptual distinctions, but writing down all the distinctions necessary for even the most rudimentary cognitive tasks proved much harder than anticipated.
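
To see why, extend the toy engine with exceptions. Every distinction above needs its own hand-written check, and in a realistic knowledge base these guards multiply without end. A hypothetical sketch:

```python
def can_fly(animal):
    """Hand-coded exceptions to 'birds can fly'. Every field here is an
    invented illustration of the distinctions a real knowledge base
    would have to encode explicitly."""
    if animal.get("species") == "penguin":
        return False              # flightless by species
    if animal.get("broken_wing"):
        return False              # the capacity itself is lost
    # Note the subtler case: a caged robin still *can* fly, so captivity
    # needs a separate "free to fly" predicate, not another clause here.
    return animal.get("is_bird", False)

print(can_fly({"is_bird": True, "species": "robin"}))    # True
print(can_fly({"is_bird": True, "broken_wing": True}))   # False
print(can_fly({"is_bird": True, "species": "penguin"}))  # False
```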

Embracing uncertainty

In probabilistic AI, by contrast, a computer is fed lots of examples of something — like pictures of birds — and is left to infer, on its own, what those examples have in common. This approach works fairly well with concrete concepts like “bird,” but it has trouble with more abstract concepts — for example, flight, a capacity shared by birds, helicopters, kites and superheroes. You could show a probabilistic system lots of pictures of things in flight, but even if it figured out what they all had in common, it would be very likely to misidentify clouds, or the sun, or the antennas on top of buildings as instances of flight. And even flight is a concrete concept compared to, say, “grammar” or “motherhood.”
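
A toy version of that failure mode, with features invented for illustration (a real system would work from raw pixels, not labeled tags): if you take the concept to be whatever all the positive examples share, the surface features of flight scenes betray you.

```python
# Toy "find what the examples have in common" concept learner.
from collections import Counter

flight_examples = [
    {"in_sky", "has_wings", "moving"},   # bird
    {"in_sky", "has_rotor", "moving"},   # helicopter
    {"in_sky", "on_string", "moving"},   # kite
    {"in_sky", "has_cape", "moving"},    # superhero
]

# Keep only the features present in every positive example.
counts = Counter(f for example in flight_examples for f in example)
concept = {f for f, c in counts.items() if c == len(flight_examples)}
print(concept)  # {'in_sky', 'moving'}

# The learned "definition" is just airborne-and-moving, so a drifting
# cloud satisfies it and gets misidentified as an instance of flight.
cloud = {"in_sky", "moving", "made_of_vapor"}
print(concept <= cloud)  # True
```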

As a research tool, MIT research scientist Noah Goodman has developed a computer programming language called Church — after the great American logician Alonzo Church — that, like the early AI languages, includes rules of inference. But those rules are probabilistic. Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly. But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can’t fly.
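
The cassowary revision is, at bottom, a Bayes'-rule update, and the arithmetic can be sketched in a few lines. All of the probabilities below are invented for illustration; an actual Church program would express this kind of reasoning directly, as a probabilistic program, rather than by hand:

```python
# Belief revision about cassowary flight via Bayes' rule.
# Every number here is an illustrative assumption, not data.

p_flies = 0.9                  # prior: told only "cassowary is a bird"

# Likelihood of the new evidence, "weighs almost 200 pounds":
p_heavy_given_flies = 0.001    # flying birds are almost never that heavy
p_heavy_given_not = 0.2        # flightless birds sometimes are

# P(flies | heavy) = P(heavy | flies) * P(flies) / P(heavy)
p_heavy = (p_heavy_given_flies * p_flies
           + p_heavy_given_not * (1 - p_flies))
p_flies_given_heavy = p_heavy_given_flies * p_flies / p_heavy

print(f"before the weight evidence: P(flies) = {p_flies:.2f}")
print(f"after the weight evidence:  P(flies) = {p_flies_given_heavy:.3f}")
# The estimate collapses from 0.90 to roughly 0.04: probably can't fly.
```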

PhysOrg: A Grand Unified Theory of Artificial Intelligence

(Thanks Josh!)

1 Comment

  1. “You’ve got Google Goggles, where you take a picture of like anything,”

    Um… If by “like anything”, they mean any product or corporate logo, then I agree. Try taking a picture of a tree, your dog, a sandwich or just about anything else and Google Goggles has no idea what the hell it is.

    Funware is currently not much more than the digital version of record clubs or the McDonald’s Scrabble value meal game. I care nothing about not leveling up at Starbucks.

