The Turing Test
More recently, I've been posting relevant things in my AI blog.
To view the blog from beginning to end, click here
Introduction
The Turing test is a setup where a person sits at a computer chatting via something like an instant messenger program (like MSN Messenger) with someone that could be either a person or a computer at the other end. The person's job is to figure out whether they're chatting with a computer. If they can tell, the computer has failed the Turing test. This is one of the holy grails of computer science, and while I don't feel especially motivated in this area (I have a good appreciation for how immensely difficult it is), I feel like I should take a kick at this can!
July 24, 2007
Here are my current thoughts on how I would construct an AI. I would start by creating a language that would draw on some of the ideas found in object oriented programming languages to encode information about the world and how human language constructs map onto that information about the world. For example:
word house
  house1
  house2
end

def noun house1
  ...
end

def verb house2
  ...
end
The word "house" has (at least) two homonyms. The first is the noun we typically think of, while the second is a verb, as in: "The little cottage was used to house the orphans". What the above example does is map the word house onto its two possible meanings. Each meaning is then itself defined using a unique identifier (house1, house2).
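To make this concrete, here's a minimal sketch in Python (rather than the proposed custom language) of that word-to-sense mapping. The sense identifiers (house1, house2) come from the example above; the dictionary layout and the senses_of helper are my own illustrative choices.

```python
# Each surface word maps to one or more sense identifiers (homonyms).
lexicon = {
    "house": ["house1", "house2"],
}

# Each sense identifier carries its part of speech and, eventually,
# its definition in terms of other concepts.
senses = {
    "house1": {"pos": "noun"},   # the building
    "house2": {"pos": "verb"},   # as in "house the orphans"
}

def senses_of(word):
    """Return the sense records for every homonym of a word."""
    return [senses[s] for s in lexicon.get(word, [])]

print(senses_of("house"))  # two candidate senses for "house"
```

The point of the indirection is that the word itself carries no meaning; all the real content hangs off the sense identifiers, which is where the definitions below come in.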
Defining house1 and house2 is where the real work begins. Object oriented programming gives us a head start:
1. Use the fact that the world is very hierarchical as a means for encoding information about the world (OOP: classes / inheritance)
2. Things are composed of smaller things (OOP: objects have typed properties)
For example:
def noun person1
  is a: animal1
  ...
  has:
    birth_date: date1
    height: distance1
    weight: mass1
  ...
end

def noun date1 ... end
def noun animal1 ... end
def noun distance1 ... end
def noun mass1 ... end
This is still quite a coarse way to represent information about the world, but I think it represents a solid starting point. Here's what I'm getting at: Given some minor additions to what I describe above, if I told the computer "My birthday is November 20, 1980", it should assume that "me" is a person (noun person1); it would follow that "birthday" is a property (birth_date) of person1, and so the computer should create an instance of person1 in memory, assign it to the context of the current conversation, and populate its birth_date property with "11/20/1980". If I then asked the computer, "What is my birthday?", it should come back with "11/20/1980".
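Here is a hedged sketch of that birthday exchange in Python. The concept names (person1, animal1, date1) come from the definitions above; the class layout and the hard-coded sentence matching are my own drastic simplifications standing in for real parsing.

```python
# Concept definitions, mirroring the "def noun person1 ... end" example.
concepts = {
    "person1": {"is_a": "animal1",
                "has": {"birth_date": "date1",
                        "height": "distance1",
                        "weight": "mass1"}},
}

class Instance:
    """A concrete instance of a concept, e.g. the speaker as a person1."""
    def __init__(self, concept):
        self.concept = concept
        self.props = {}

# Conversation context: "me" resolves to an instance of person1.
context = {}

def tell(statement):
    # Crude stand-in for parsing: recognize one sentence shape.
    if statement.startswith("My birthday is "):
        me = context.setdefault("me", Instance("person1"))
        me.props["birth_date"] = statement[len("My birthday is "):]

def ask(question):
    if question == "What is my birthday":
        me = context.get("me")
        if me and "birth_date" in me.props:
            return me.props["birth_date"]
    return "I don't know."

tell("My birthday is 11/20/1980")
print(ask("What is my birthday"))  # 11/20/1980
```

The interesting part isn't the string matching, which is a placeholder; it's that the concept definitions tell the program which properties an utterance could plausibly be filling in.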
Now, at a certain level, it becomes hard to define a concept in terms of smaller concepts. For example, distance and time. If you ask me, the implementor needs to draw a line in the sand and say "the concepts on the right are defined using this home-brewed language and are understood in terms of smaller concepts, while the concepts on the left are either partially or totally hard coded into the program". I would liken this to the human brain in that our powers of introspection, while they are amazing, do have boundaries. The analogy is that the "hard coded" meanings in the AI would be similar to a person's subconscious. The subconscious is there, and it's doing things, but it's at a lower level that we can't access using our upper level brain function. At an even lower level would be things such as the autonomic nervous system.
And finally, I will suggest that this AI be constructed like an onion, with many layers. The huge data model of how the world works is an inner layer, while the mapping onto words is another layer, and yet above that would be layers for parsing the language using the data model and word mappings. Likewise, the layer for forming responses, personality, etc, would be separate / distinct.
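The layering above can be sketched as a chain of functions, each talking only to the layer beneath it. The layer names follow the post; the one-fact world model and toy parsing are placeholders of my own invention.

```python
def world_model(query):          # innermost layer: facts about the world
    return {"sky": "blue"}.get(query)

def word_mapping(word):          # maps surface words onto concepts
    return {"sky": "sky"}.get(word)

def parse(sentence):             # uses word mappings to find the topic
    for word in sentence.lower().strip("?").split():
        concept = word_mapping(word)
        if concept:
            return concept
    return None

def respond(sentence):           # outermost layer: forms the reply
    concept = parse(sentence)
    fact = world_model(concept) if concept else None
    return f"The {concept} is {fact}." if fact else "I don't know."

print(respond("What colour is the sky?"))  # The sky is blue.
```

The appeal of the onion structure is that each layer (personality, response forming) could be swapped out without touching the data model underneath.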
Something that gets me excited about this is that my interest in programming languages, and more generally in devising languages to encode information about the world, appears to be one of the fundamental areas for potential research.
December 13, 2006
I saw this link on Digg today. The Israeli company, Linguistic Agents, claims to have made a "golden fleece" breakthrough in natural language parsing. I'm fascinated, and yet somewhat sceptical at the same time. Details are hard to come by. I've sent them an email requesting a demo of their software, so we'll see if they reply.
Update: I've had a brief look at this and I have mixed feelings. I think it's potentially quite a useful tool that could be used to create some intelligent applications, but simply breaking a sentence into nouns and verbs and attaching some singular/plural attributes to them is a relatively small step towards a truly intelligent human/computer interface. My username/password isn't working at the moment, so hopefully I'll be able to spend some more time with this thing.
October 3, 2006
Yesterday I came across the Loebner Prize in Artificial Intelligence. [Website] A bronze medal is awarded each year to the most life-like chatbot. I thought I'd take the 2006 winner for a spin. Her name is Joan. Can she interpret facts about the world and recall them upon request? Let's find out:
Daniel: "I have two sisters."
...
Daniel: "How many sisters do I have?"
Joan: "I have two eyes!"
Can't we do better than this in 2006?
September 30, 2006
I'm realizing that having a firm grasp of the many and various language constructs is essential to being able to analyze the written word and break it down -- it's simply not enough to know what a noun, a verb, and an adjective are.
English class here we come...
September 28, 2006
I saw a link on Digg a couple of days ago for jabberwacky.com and George the avatar. Looks like George has achieved quite a bit of press -- congrats to the jabberwacky people!
I have to admit, my experience with George has been a bit disappointing. The site is extremely slow and it takes too long for George to respond. George, are you there? George??
Some of the conversations on their conversation blog are pretty humorous, even surprising. What I find most interesting is that you can create your own bot for $30/year. As you train your bot, it becomes more like you -- you can teach it things and over time it gets smarter. Maybe I'll try back in a few weeks when the site isn't getting so much traffic.
September 27, 2006
My guess is that the first AI to pass the Turing test will be more akin to a magician pulling off an amazing feat, and less about creating a machine that truly does what the Turing test is supposed to test for: AI with roughly the same intelligence as a human mind. It would be like a magician who causes someone to disappear, convincing an audience full of guests that he's done the impossible. In reality, what he has done is use smoke and mirrors in a clever way -- he certainly didn't make a person vanish into thin air. And so passing the Turing test becomes a game of deceit; creating a facade that fools the observer.
This doesn't make the Turing Test an unworthy pursuit, but I think we need to realize that the first AI to pass it may not be as advanced as we would suppose.