Thoughts on AI
Here are my current thoughts on how I would construct an AI. I would start by creating a language that draws on some of the ideas found in object-oriented programming languages to encode information about the world, and to describe how human language constructs map onto that information. For example:
    word house house1 house2 end

    def noun house1 ... end
    def verb house2 ... end
The word "house" has (at least) two homonyms. The first is the noun we typically think of, while the second is a verb, as in: "The little cottage was used to house the orphans". What the above example does is map the word house onto its two possible meanings. Each meaning is then itself defined using a unique identifier (house1, house2).
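The word-to-sense mapping could be sketched as an ordinary lookup table. The names below (LEXICON, SENSES, senses_of) are my own illustration, not part of the proposed language:

```python
# Hypothetical sketch of the "word house house1 house2 end" entry:
# a surface word maps to one or more sense identifiers, and each
# sense records at least its part of speech.
LEXICON = {"house": ["house1", "house2"]}

SENSES = {
    "house1": {"pos": "noun"},  # the building
    "house2": {"pos": "verb"},  # as in "to house the orphans"
}

def senses_of(word):
    """Look up every sense identifier (and its record) for a word."""
    return [(sid, SENSES[sid]) for sid in LEXICON.get(word, [])]
```

A parser could then call `senses_of("house")` and disambiguate between the noun and verb senses from sentence context.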
Defining house1 and house2 is where the real work begins. Object-oriented programming gives us a head start:
1. Use the fact that the world is very hierarchical as a means for encoding information about the world (OOP: classes / inheritance)
2. Things are composed of smaller things (OOP: objects have typed properties)
For example:
    def noun person1
        is a: animal1
        ...
        has:
            birth_date: date1
            height: distance1
            weight: mass1
        ...
    end

    def noun date1 ... end
    def noun animal1 ... end
    def noun distance1 ... end
    def noun mass1 ... end
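To make the OOP analogy concrete, here is one possible Python rendering of the person1 definition. The class names and the choice of units are my own assumptions; "is a" becomes inheritance and "has" becomes typed fields:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Animal1:
    """Stands in for animal1; its own definition is elided here."""
    pass

@dataclass
class Person1(Animal1):  # "is a: animal1" -> inheritance
    birth_date: Optional[date] = None   # date1
    height_m: Optional[float] = None    # distance1 (metres, my choice of unit)
    weight_kg: Optional[float] = None   # mass1 (kilograms, my choice of unit)

p = Person1(birth_date=date(1980, 11, 20))
```

Because Person1 inherits from Animal1, anything the system knows about animals applies to people for free, which is exactly the hierarchy argument above.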
This is still quite a coarse way to represent information about the world, but I think it represents a solid starting point. Here's what I'm getting at: given some minor additions to what I describe above, if I told the computer "My birthday is November 20, 1980", it should assume that "me" is a person (noun person1); it would follow that "birthday" is a property (birth_date) of person1, so the computer should create an instance of person1 in memory, attach it to the context of the current conversation, and populate its birth_date property with "11/20/1980". If I then asked the computer, "What is my birthday?", it should come back with "11/20/1980".
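The birthday exchange can be mocked up in a few lines. Everything here (the context dictionary, the tell/ask functions, the hard-coded sentence patterns) is a deliberately naive stand-in for the real parsing layer:

```python
# Toy conversation context: each speaker maps to a person1 instance
# whose properties are filled in from statements they make.
context = {}

def tell(speaker, sentence):
    """Handle one very specific statement pattern (a real system would parse)."""
    prefix = "My birthday is "
    if sentence.startswith(prefix):
        inst = context.setdefault(speaker, {"type": "person1"})
        inst["birth_date"] = sentence[len(prefix):]

def ask(speaker, question):
    """Answer the matching question from the stored person1 instance."""
    if question.rstrip("?") == "What is my birthday":
        return context.get(speaker, {}).get("birth_date")

tell("me", "My birthday is 11/20/1980")
answer = ask("me", "What is my birthday?")  # -> "11/20/1980"
```

The interesting work, of course, is replacing the `startswith` check with parsing driven by the word mappings and the data model, so that any property of person1 can be stated and queried this way.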
Now, at a certain level it becomes hard to define a concept in terms of smaller concepts; distance and time, for example. If you ask me, the implementor needs to draw a line in the sand and say "the concepts on one side of this line are defined using this home-brewed language and are understood in terms of smaller concepts, while the concepts on the other side are either partially or totally hard-coded into the program". I would liken this to the human brain, in that our powers of introspection, amazing as they are, do have boundaries. The analogy is that the "hard-coded" meanings in the AI would be similar to a person's subconscious. The subconscious is there, and it's doing things, but it's at a lower level that we can't access using our upper-level brain function. At an even lower level would be things such as the autonomic nervous system.
And finally, I will suggest that this AI be constructed like an onion, with many layers. The huge data model of how the world works is an inner layer; the mapping onto words is another layer; and above that would be layers for parsing the language using the data model and word mappings. Likewise, the layers for forming responses, personality, etc., would be separate and distinct.
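One way to picture the onion (my sketch, not a design the text commits to) is as a composition of functions, where each layer consumes only the previous layer's output and the personality lives entirely in the outermost layer:

```python
def parse(text, lexicon):
    """Parsing layer: surface words -> sense identifiers (unknown words pass through)."""
    return [lexicon.get(word, word) for word in text.split()]

def interpret(senses, world_model):
    """Data-model layer: attach what the inner world model knows about each sense."""
    return [(s, world_model.get(s)) for s in senses]

def respond(meaning, personality):
    """Response layer, deliberately separate from parsing and the data model."""
    known = sum(1 for _, info in meaning if info is not None)
    return f"({personality}) I recognised {known} of {len(meaning)} words."

lexicon = {"house": "house1"}
world_model = {"house1": {"pos": "noun"}}
reply = respond(interpret(parse("the house", lexicon), world_model), "polite")
```

Keeping the layers this decoupled is what lets you swap the personality, or the parser, without touching the world model underneath.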
Something that gets me excited about this is that my interest in programming languages, and more generally in devising languages to encode information about the world, appears to be one of the fundamental areas for potential research.