- "Lemming Tracks: Triskaidekaphobia and a Rambling Lemming"
(January 13, 2012) - "Eleven; and Why is Six Afraid of Seven?"
(November 11, 2011)
Computers That Think: Just Around the Corner (Again)
"Artificial Intelligence Could Be on Brink of Passing Turing Test"Brandon Keim, Wired Science (April 12, 2012)
"One hundred years after Alan Turing was born, his eponymous test remains an elusive benchmark for artificial intelligence. Now, for the first time in decades, it's possible to imagine a machine making the grade.
"Turing was one of the 20th century's great mathematicians, a conceptual architect of modern computing whose codebreaking played a decisive part in World War II. His test, described in a seminal dawn-of-the-computer-age paper, was deceptively simple: If a machine could pass for human in conversation, the machine could be considered intelligent.
"Artificial intelligences are now ubiquitous, from GPS navigation systems and Google algorithms to automated customer service and Apple's Siri, to say nothing of Deep Blue and Watson - but no machine has met Turing's standard. The quest to do so, however, and the lines of research inspired by the general challenge of modeling human thought, have profoundly influenced both computer and cognitive science.
"There is reason to believe that code kernels for the first Turing-intelligent machine have already been written...."
Artificial intelligence, the sort exhibited by C3PO and HAL 9000, has been 'just around the corner' for decades. So far, what's been achieved is the release of several movies; and some robots. The robots are, sometimes, useful. But "intelligent?" Not so much.
- "Meet Fenway the Robot, Hospital Helper"
(September 21, 2010) - "HAL 9000, Skynet, and C3PO"
(January 26, 2010) - "Robovie-II and Robovie-IV: Robot Assistants for Store and Office"
(January 7, 2010)
- Computers
  - Get correct answers
  - Based on vast quantities of data
  - All of which is precisely correct
- Human brains
  - Get correct answers
  - Based on very little data
  - Most of which is wrong
The point is that so far, artificial intelligence has been very good at doing rapid calculations that involve accurate data. Humans have survived because they're pretty good at sorting out relevant information from the deluge of data fragments pouring into the brain.
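For what it's worth, here's a toy Python sketch of that contrast. It's purely illustrative - nothing in it comes from the Wired article, and the numbers are made up: exact arithmetic over a heap of clean readings on one side, a rough median over a few noisy scraps on the other.

```python
# Toy contrast (invented for illustration): exact computation over precise data
# versus a rough-and-ready guess from a few noisy, partly-wrong fragments.
import random
import statistics

random.seed(42)

# The "computer" case: vast quantities of data, all of it precisely correct.
clean_readings = [20.0] * 10_000
computer_answer = sum(clean_readings) / len(clean_readings)

# The "human brain" case: very little data, most of it off - yet a usable answer.
true_value = 20.0
noisy_fragments = [true_value + random.gauss(0, 8) for _ in range(5)]
noisy_fragments[0] = 97.3                              # one fragment is wildly wrong
brainlike_guess = statistics.median(noisy_fragments)   # shrug off the silly outlier

print(f"From clean data:   {computer_answer:.2f}")
print(f"From noisy scraps: {brainlike_guess:.2f}")
```

Both land somewhere near the right answer; they just get there very differently.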
Back to that Wired article.
Subcognitive Low-Level Association, Dental Hygiene, and Perambulators
"...'Two revolutionary advances in information technology may bring the Turing test out of retirement,' wrote Robert French, a cognitive scientist at the French National Center for Scientific Research, in an Apr. 12 Science essay. 'The first is the ready availability of vast amounts of raw data - from video feeds to complete sound environments, and from casual conversations to technical documents on every conceivable subject. The second is the advent of sophisticated techniques for collecting, organizing, and processing this rich collection of data.'" 'Is it possible to recreate something similar to the subcognitive low-level association network that we have? That's experiencing largely what we're experiencing? Would that be so impossible?' French said...."
"...The human mind was thought to be logical. Computers run logical commands. Therefore our brains should be computable. Computer scientists thought that within a decade, maybe two, a person engaged in dialogue with two hidden conversants, one computer and one human, would be unable to reliably tell them apart.
"That simplistic idea proved ill-founded. Cognition is far more complicated than mid-20th century computer scientists or psychologists had imagined, and logic was woefully insufficient in describing our thoughts. Appearing human turned out to be an insurmountably difficult task, drawing on previously unappreciated human abilities to integrate disparate pieces of information in a fast-changing environment...."
(Brandon Keim, Wired)
The Lemming isn't sure that it's accurate to say that the human brain isn't "logical" because it doesn't operate the way that an Intel chip does. It sounds like saying that calculus isn't mathematics because it works differently from high school algebra. But the Lemming also thinks that parameter sounds like perambulator, and some calculus has more to do with dental hygiene than integrating functions. And that's another topic. Topics.
Good Question
"...He [Robert French] continued, 'Assume also that the software exists to catalog, analyze, correlate, and cross-link everything in this sea of data. These data and the capacity to analyze them appropriately could allow a machine to answer heretofore computer-unanswerable questions' and even pass a Turing test...."(Brandon Keim, Wired)
Okay, let's assume that. An ideal set of software, loaded into an ideal computer, run by an ideal operating system, could - ideally - slice and dice a whole bunch of data. Really fast.
And, ideally, after lots and lots of data-crunching, this system could pass a Turing Test.
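Just to picture what 'catalog, analyze, correlate, and cross-link' might look like in miniature, here's a made-up Python sketch: a tiny inverted index over a few text fragments, plus a crude 'which fragments share vocabulary' lookup. Systems of the sort French describes would do something like this over oceans of data, not three sentences.

```python
# A miniature, made-up illustration of cataloging and cross-linking data:
# build an inverted index over a few text fragments, then ask which fragments
# share vocabulary with a given one.
from collections import defaultdict

fragments = {
    "doc1": "turing proposed a test of machine intelligence",
    "doc2": "deep blue played chess against a human champion",
    "doc3": "a machine that passes the turing test imitates human conversation",
}

index = defaultdict(set)          # word -> ids of fragments containing it
for doc_id, text in fragments.items():
    for word in text.split():
        if len(word) > 3:         # skip little filler words
            index[word].add(doc_id)

def cross_links(doc_id):
    """Other fragments sharing at least one indexed word (crude correlation)."""
    linked = set()
    for word in fragments[doc_id].split():
        linked |= index.get(word, set())
    linked.discard(doc_id)
    return linked

print(cross_links("doc1"))   # {'doc3'}: both mention 'turing', 'test', 'machine'
```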
Maybe French is right, and real-world equivalents of C3PO are just around the corner. That would be - very impressive.
And, from the Lemming's point of view, very surprising. Writing a sentence that human brains will process and put in the 'this might work' category is one thing.
Getting real software and hardware to process data, and successfully imitate what human brains do? That, so far, has been a fascinating and rewarding occupation.
But we still haven't seen an AI that can pass a Turing test. Some chatbots get pretty close to emulating the responses of a sleep-deprived, hung-over, heavily-caffeinated college student: and that's yet another topic.
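For the curious, here's roughly what that shallow chatbot trick looks like: a deliberately simple, ELIZA-style pattern matcher, invented here as an illustration. It produces plausible-sounding replies without anything that could be called understanding, which is more or less why bots built this way stall short of a Turing test.

```python
# A deliberately shallow, ELIZA-style chatbot sketch (an illustration, not any
# particular product): canned patterns keyed to surface features of the input.
import re
import random

RULES = [
    (r"\bi feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bmy (\w+)",    ["Tell me more about your {0}.", "Does your {0} worry you?"]),
    (r"\?$",           ["What do you think?", "Why do you ask?"]),
]
FALLBACKS = ["I see.", "Go on.", "Interesting. Tell me more."]

def reply(line):
    """Return a canned response keyed off the first matching surface pattern."""
    line = line.lower().strip()
    for pattern, responses in RULES:
        match = re.search(pattern, line)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(FALLBACKS)

print(reply("I feel like robots are everywhere"))
# e.g. "Why do you feel like robots are everywhere?" - fluent, but nobody's home.
```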
Impossible! Or, Not
Does the Lemming think that AI will become intelligent? Briefly:
- Yes
- No
- It depends
A somewhat more sophisticated version of that mid-20th-century assumption - 'minds are logical, computers do logic, so computers should be able to think' - was that being "intelligent" meant being able to memorize lots of information, and use bits and pieces of it in appropriate ways. That's probably why authors often used playing chess as a way to say 'this character is really smart.'
Then Deep Blue turned out to be a better chess player than the human world champion, Garry Kasparov. The Wired article mentions that AI.
The point is that definitions of "intelligence" have shifted, as folks have realized that an overclocked abacus can out-memorize any human being.
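Deep Blue's chess, stripped of its enormous speed, pruning tricks, and opening books, boils down to game-tree search. Here's a bare-bones toy example of the idea - not IBM's engine, just an exhaustive minimax on a tiny game of Nim:

```python
# Exhaustive game-tree search on single-pile Nim: take 1-3 stones per turn,
# and whoever takes the last stone wins. A toy cousin of what chess engines do.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """(can the player to move win?, how many stones to take)."""
    if stones == 0:
        return (False, 0)              # no stones left: the previous player already won
    for take in (1, 2, 3):
        if take <= stones and not best_move(stones - take)[0]:
            return (True, take)        # this move leaves the opponent in a losing spot
    return (False, 1)                  # every move loses; take one and hope

print(best_move(10))   # (True, 2): take 2, leaving a losing pile of 8
```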
What we don't have, yet, is an AI that can successfully handle large amounts of fuzzy facts; find patterns that are vaguely similar to previous experiences; reject the patterns that are silly; and come up with a shifting short list of patterns that might - or might not - apply in the current situation.
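Here's a very rough sketch of that 'shifting short list' idea, using Python's difflib to score a new, fuzzy situation against a few remembered ones, toss out the matches that are plainly silly, and keep the most plausible couple. The memories and thresholds are invented; real brains presumably do something far stranger.

```python
# A rough sketch of keeping a short list of vaguely-similar past experiences.
from difflib import SequenceMatcher

past_experiences = [
    "kettle whistling on the stove",
    "smoke detector beeping upstairs",
    "phone ringing in the next room",
    "cat knocking a glass off the table",
]

def plausible_matches(situation, memories, cutoff=0.35, keep=2):
    """Score the new situation against each memory, reject weak matches,
    and return a short, ranked list of the rest."""
    scored = [(SequenceMatcher(None, situation, m).ratio(), m) for m in memories]
    scored = [(score, m) for score, m in scored if score >= cutoff]
    scored.sort(reverse=True)
    return scored[:keep]

print(plausible_matches("a high-pitched whistle from the kitchen stove",
                        past_experiences))
```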
The Lemming thinks that an AI that's able to pass a Turing test might be possible: but it may require a new sort of logic. We've had this sort of thing happen before, sort of. Newton (or Leibniz, or Madhava of Sangamagrama, or somebody else) pulled calculus out of his head, when he needed a mathematical tool that didn't exist yet.
We know that it's possible for a data-processing system to act the way human beings do. Human brains do it all the time. The trick, in the Lemming's opinion, will be learning just how human brains pass Turing tests; developing a system to represent the process in an abstract way; and then developing languages and hardware to process data using that system.
That's a whole lot of developing.
Along the way, though, folks are likely to learn quite a lot about how human beings think, and how the human brain works. And, occasionally, doesn't work.
And those are - what else? - more topics.
Related (sort of) posts:
- "The Lemming Meets Cleverbot"
(January 27, 2012) - "Nine Decades of Robots Turning on Their Masters"
(January 25, 2011) - "Asimov's 3 Laws, Real Robots, and Common Sense"
(August 20, 2009) - "Memristor: Cool New Technology from HP Labs"
(April 30, 2008) - "C3PO Will Have to Wait: Artificial Intelligence Today"
(February 9, 2008)