Asking questions
July 15, 2008
A nice thing to have is the ability to query the AI with questions: pose a statement, then ask a question to determine whether the AI understood it.
Parsing a question uses transformations much like parsing a statement. The only difference is that our end goal is a reference to an entity's value rather than an assignment.
For example, we already have transformations that will get us to:

    What is [speaker.first_name]?

Next we apply a transformation that reduces this to the entity's value followed by a question mark. A built-in rule is that any value followed by a question mark prompts that the value be output.
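The two-step flow above can be sketched in code. Everything here is an assumption for illustration: the bracketed-reference syntax, the entity store, and the rule implementations are hypothetical, not the original design.

```python
import re

# Hypothetical entity store: entity paths mapped to their values.
entities = {"speaker.first_name": "Daniel"}

def reduce_question(sentence):
    """Assumed transformation: 'What is [entity]?' -> '[entity] ?'."""
    match = re.fullmatch(r"What is \[([\w.]+)\]\?", sentence)
    if match:
        return f"[{match.group(1)}] ?"
    return sentence

def apply_builtin_rule(sentence):
    """Built-in rule: a value followed by a question mark is output."""
    match = re.fullmatch(r"\[([\w.]+)\] \?", sentence)
    if match:
        return entities[match.group(1)]
    return sentence

step = reduce_question("What is [speaker.first_name]?")
print(apply_builtin_rule(step))  # -> Daniel
```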
The language-in data structure
July 15, 2008
Introduction

Separate from the AI's core data structure is its language data structure. We can represent the language data structure using a separate textual representation.
Mapping words to entities

We can represent mappings like this:

    "dog" -> dog
    "cat" -> cat
    "house" -> house
    "house" -> house_v
Notice how the word "house" maps to two different entities: house the noun and house_v the verb.
Defining nouns, verbs, etc.

We need to define the entities that words map to as being a noun, verb, etc. We will modify the above representation to do this inline:

    "dog" -> dog: noun
    "cat" -> cat: noun
    "house" -> house: noun
    "house" -> house_v: verb
Although the noun/verb/etc. designations fall into the realm of the language-in textual representation, they end up being applied as is_a relationships in the core data structure.
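One plausible in-memory form of these mappings is sketched below. The dictionary shapes and the way the part-of-speech tags become is_a entries are assumptions, not the post's actual representation.

```python
# Hypothetical in-memory form of the language-in mappings above.
# Each word maps to one or more (entity, part_of_speech) pairs.
word_to_entities = {
    "dog": [("dog", "noun")],
    "cat": [("cat", "noun")],
    "house": [("house", "noun"), ("house_v", "verb")],  # ambiguous word
}

# The noun/verb designations get applied to the core data structure
# as is_a relationships (flat dict used here for simplicity).
is_a = {entity: pos
        for senses in word_to_entities.values()
        for entity, pos in senses}

print(word_to_entities["house"])  # both candidate entities for "house"
print(is_a["house_v"])           # -> verb
```

An ambiguous word like "house" simply carries multiple candidate senses; disambiguation is deferred until context is available.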
The language-in layer
July 15, 2008
In summary:
- The language-in layer's input is a list of words.
- The language-in layer's output is one or more assignments that can be applied to the AI's data structure.
- Transformations are applied iteratively to convert inputs to outputs. Each transformation consists of an input specification and an output specification. For example:

      {noun} is {word} -> $1 = $2

- There will be many cases where it will be ambiguous which transformation to apply, i.e. there will be more than one possibility. A depth-first or breadth-first search will need to be employed here, possibly using heuristics to determine which transformations to try first.
- Some transformations imply additional work:
  - Output specifications that contain x.$1 imply that $1 needs to be mapped from a word to an entity before the transformation can be applied.
  - Output specifications that contain x = y imply that y need not necessarily be mapped to an entity. In some cases, it will remain a string. For example:

        speaker.first_name = "Daniel"

- Transformations require two major data sets:
  - A mapping from words to entities. This highlights that one word might map to several different entities. When a word is encountered, it is ambiguous which entity it represents until the context is taken into account.
  - For each entity that a word maps to, we need to define whether it is a noun, verb, etc.
This is only a very basic outline but gets the ball rolling.
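A minimal sketch of applying the example transformation {noun} is {word} -> $1 = $2 might look like the following. The matching is deliberately naive (it only handles the three-word "X is Y" pattern), and the word-to-entity data set shown is a hypothetical stand-in.

```python
# Assumed word-to-entity data set with part-of-speech tags.
word_to_entities = {"name": [("speaker.first_name", "noun")]}

def apply_transformation(words):
    """Apply '{noun} is {word} -> $1 = $2' to a list of words.

    Returns an assignment as an (entity, value) pair, or None if the
    input specification does not match.
    """
    if len(words) == 3 and words[1] == "is":
        # $1 must be mapped from a word to an entity (a noun sense).
        noun_senses = [entity
                       for entity, pos in word_to_entities.get(words[0], [])
                       if pos == "noun"]
        if noun_senses:
            # $2 need not map to an entity; it may remain a string.
            return (noun_senses[0], words[2])
    return None

print(apply_transformation(["name", "is", "Daniel"]))
# -> ('speaker.first_name', 'Daniel')
```

A real implementation would collect every transformation whose input specification matches and search over the alternatives, as the ambiguity bullet above describes.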