

Asking questions
July 15, 2008

A useful capability is being able to query the AI with questions. This lets us pose a statement and then ask a question to determine whether the AI understood the statement.

Parsing a question uses transformations much like parsing a statement. The only difference is that our end goal is a reference to an entity's value rather than an assignment.

For example:

What is my name?

We already have transformations that will get us to:

What is [speaker.first_name]?

Next we apply:

what is {noun} -> $1

Which results in:

[speaker.first_name]?

A built-in rule is that any value followed by a question mark causes that value to be output:

"Daniel"


The language-in data structure
July 15, 2008

Introduction

Separate from the AI's core data structure is its language data structure. We can represent the language data structure using a separate textual representation.

Mapping words to entities

We can represent mappings like this:

"dog" -> dog
"cat" -> cat
"house" -> house
"house" -> house_v

Notice how the word "house" maps to two different entities: house the noun and house_v the verb.
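A minimal sketch of this mapping is a multimap from words to lists of entities; the dict-based store is an assumption, with names taken from the examples above:

```python
from collections import defaultdict

# One word may map to several entities, so each word keys a list.
word_to_entities = defaultdict(list)

for word, entity in [("dog", "dog"), ("cat", "cat"),
                     ("house", "house"), ("house", "house_v")]:
    word_to_entities[word].append(entity)

print(word_to_entities["house"])  # -> ['house', 'house_v']
```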

Defining nouns, verbs, etc.

We need to define the entities that words map to as being a noun, verb, etc. We will modify the above representation to do this inline:

"dog" -> dog: noun
"cat" -> cat: noun
"house" -> house: noun
"house" -> house_v: verb

Although our noun/verb/etc. designations fall into the realm of the language-in textual representation, they end up getting applied as is_a relationships in the core data structure.
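One way this could work is to read the inline representation and record each noun/verb designation as an is_a relationship; the line format is taken from the examples above, while the dict-based stores are assumptions:

```python
LANGUAGE_IN = '''\
"dog" -> dog: noun
"cat" -> cat: noun
"house" -> house: noun
"house" -> house_v: verb'''

word_to_entities = {}
is_a = {}  # core data structure: entity -> category

for line in LANGUAGE_IN.splitlines():
    word_part, rest = line.split(" -> ")
    entity, pos = (part.strip() for part in rest.split(":"))
    word_to_entities.setdefault(word_part.strip('"'), []).append(entity)
    is_a[entity] = pos  # the designation applied as an is_a relationship

print(is_a["house_v"])  # -> verb
```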


The language-in layer
July 15, 2008

In summary:

The language-in layer's input is a list of words.

The language-in layer's output is one or more assignments that can be applied to the AI's data structure.

Transformations are applied iteratively to convert inputs to outputs. Each transformation consists of an input specification and an output specification. For example:
{noun} is {word} -> $1 = $2
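Applying that example transformation can be sketched as a slot match over the word list; the noun lexicon fragment here is hypothetical, and in the real system the {noun} slot would be checked against the word-to-entity mappings:

```python
def apply_rule(words, nouns):
    # Match "{noun} is {word}": three tokens, the middle one "is",
    # the first one a known noun.
    if len(words) == 3 and words[1] == "is" and words[0] in nouns:
        return (nouns[words[0]], words[2])  # ($1 as an entity, $2 as a word)
    return None

nouns = {"name": "speaker.first_name"}  # assumed lexicon fragment
print(apply_rule(["name", "is", "Daniel"], nouns))
# -> ('speaker.first_name', 'Daniel')
```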

There will be many cases where it is ambiguous which transformation to apply, i.e. more than one transformation matches. A depth-first or breadth-first search will need to be employed here, possibly using heuristics to determine which transformations to try first.
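The depth-first option can be sketched as follows; the rules and goal test are simplified stand-ins. When more than one rule matches, each branch is tried in turn and dead ends are abandoned:

```python
def parse(text, rules, is_goal, seen=None):
    seen = set() if seen is None else seen
    if is_goal(text):
        return [text]
    for pattern, repl in rules:
        if pattern in text:
            nxt = text.replace(pattern, repl, 1)
            if nxt not in seen:
                seen.add(nxt)
                path = parse(nxt, rules, is_goal, seen)
                if path is not None:
                    return [text] + path
    return None  # no sequence of transformations reaches the goal

RULES = [
    ("my name", "[speaker.first_name]"),  # word -> entity
    ("what is ", ""),                     # "what is {noun} -> $1", simplified
]
path = parse("what is my name?", RULES,
             lambda t: t == "[speaker.first_name]?")
print(path)
```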

Some transformations imply additional work:
Output specifications that contain x.$1 imply that $1 needs to be mapped from a word to an entity before the transformation can be applied.
Output specifications that contain x = y imply that y need not be mapped to an entity. In some cases it will remain a string. For example:
speaker.first_name = "Daniel"
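Carrying out such an assignment might look like this: the right-hand side is mapped to an entity when one exists and otherwise stays a string, as in the speaker.first_name example above. The dict-based stores are assumptions:

```python
entities = {"dog": "dog_entity"}  # assumed word -> entity mapping
store = {}                        # the AI's data structure

def assign(target, value_word):
    # Map y to an entity if possible; otherwise keep it as a string.
    store[target] = entities.get(value_word, '"%s"' % value_word)

assign("speaker.pet", "dog")            # y maps to an entity
assign("speaker.first_name", "Daniel")  # y stays a string
print(store["speaker.first_name"])  # -> "Daniel"
```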

Transformations require two major data sets:
A mapping from words to entities. This highlights that one word might map to several different entities. When a word is encountered, it is ambiguous which entity it represents until the context is taken into account.
For each entity that represents a word, we need to define whether it is a noun, verb, etc.

This is only a very basic outline but gets the ball rolling.
