Exercise 21: Query transformations
October 6, 2008
Summary

In language, we often qualify an ambiguous noun with an attribute that resolves that ambiguity. For example, since I have two sisters, I can't just refer to my older sister by saying:

    my sister

I need to say either:

    my older sister

or:

    my sister Rebekah

In the first case, "older" is a qualifier. In the second case, the name "Rebekah" resolves the ambiguity.
What we need is a transformation such as:
    my sister {first_name} -> $x: speaker sister $x, $x.first_name = $1
In other words, we're not just transforming a fragment into a literal entity, we're transforming it into an entity that needs to be determined by resolving a set of conditions.
For the other example, we could use:
    my older sister -> $x: speaker sister $x, speaker sister $y, $x.age > $y.age
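As a minimal sketch of what resolving such a transformation might look like, here is how the two rules above could be evaluated against a small fact store. The fact store, relation names, and attribute tables are my own hypothetical stand-ins, not part of the exercise:

```python
# A minimal sketch of resolving "my older sister" and "my sister Rebekah"
# into concrete entities. The facts and attribute values are hypothetical.

facts = [
    ("speaker", "sister", "rebekah"),
    ("speaker", "sister", "sarah"),
]
ages = {"rebekah": 30, "sarah": 25}
first_names = {"rebekah": "Rebekah", "sarah": "Sarah"}

def sisters_of(person):
    """All $x such that: person sister $x."""
    return [obj for subj, rel, obj in facts
            if subj == person and rel == "sister"]

def older_sister(person):
    """Resolve: $x where person sister $x, person sister $y, $x.age > $y.age."""
    candidates = sisters_of(person)
    return [x for x in candidates
            if any(ages[x] > ages[y] for y in candidates)]

def sister_named(person, name):
    """Resolve: $x where person sister $x, $x.first_name = name."""
    return [x for x in sisters_of(person) if first_names[x] == name]

print(older_sister("speaker"))             # -> ['rebekah']
print(sister_named("speaker", "Rebekah"))  # -> ['rebekah']
```

The key point survives even in this toy form: the fragment is not mapped to a literal entity but to a variable plus conditions, and the entity only emerges once the conditions are solved against what the system knows.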
Book review: On Intelligence
October 6, 2008

On Friday when I got home from work, there was my eBay order in some brown cardboard packaging. It was a busy weekend, but somehow I managed to read the whole book!
Overall, I'm excited by Jeff Hawkins's efforts. Like the author, I've been dumbfounded by the lack of any overarching theories of the brain. There are literally thousands of neurobiologists working on the minute details, but those details haven't been brought together in any satisfactory way.
The author's theory is that the neocortex plays the central role in intelligence. He is convinced that the cortex is a layered, hierarchical structure that uses prediction to evaluate and, ultimately, interpret inputs. This theory is intimately linked with the observation that the cortex is remarkably uniform in nature regardless of whether you're looking at the regions which process vision, sound, touch, etc.: all of these areas are thought to use essentially the same cortical algorithm.
Here is my positive and negative feedback:
The Good
  - I like the overall theory very much. I think Jeff is on to something.
  - I love the thought of predictions flowing down from the upper layers connecting with sensory inputs flowing up from the lower layers. I envision it sort of like lightning, where you have two probing fingers that finally meet and then *WHAMO*. This is an image I've had in my head a number of times when I've thought about how to use a directed graph of neurons to solve problems, so it was a big "aha" moment to see the idea being used in this way.
The Bad
  - While I think the theory explains a lot, I have this lingering sense that there is a lot that it doesn't explain. It seems more focused on perception (how we transform millions of inputs into an interpretation), but what about rational thought? Jeff's description is that rational thought is simply the result of higher layers in the hierarchy, but this analogy doesn't quite fit for me.
  - I was a bit alarmed when I read the opening of the book and the author, in a very forceful way, says that behavior is not where intelligence is at. Wow. That's a pretty bold thing to say, especially for a computer guy. I think I get what he's trying to say: that you can be intelligent without behaving. I agree 100%, but the way in which he says it makes it sound like using a behavior-mindset is bad, which I don't agree with. I think he went too far poo-pooing behavior.
  - Consciousness. Ok, let me be honest: the purpose of Jeff's theory isn't to explain consciousness, and he spends all of about a page talking about it, but I was a bit disappointed that Jeff poo-pooed the concept of conscious experience a bit. C'mon people: it's pretty much the most amazing, unexplained problem in the universe.
  - Evolution: of course, I disagree with Jeff when he delves into the evolutionary history of the brain, etc.
Even though I've listed more negative stuff, my overall impression of the book is extremely positive. I think the author has done a great job putting together a solid theory, and has gone so far as to list a bunch of testable predictions. It will be interesting to see how the theory fares.
See also: http://www.rni.org/OnIntelligence.html
Entities and relationships VS the brain
October 4, 2008
As I'm reading chapter 4 of On Intelligence, the author makes the comment that "All memories are stored in the synaptic connections between neurons".
Immediately my mind wandered to the ideas of entities and relationships that I've been playing with these last few months: Is this statement about the brain analogous?
In a sense, I already knew that it was, since I chose this model for the very reason that it was analogous to nodes and edges, and ultimately to neurons/axons/synapses. But what suddenly struck me is that, since the brain uses neurons at different levels, it is likely that the entity/relationship pattern would be useful in what I'm working on at different levels and in different contexts too.
What I'm getting at is that while an "entity" can represent a concrete idea such as "person", "dog", "run", etc., it might be as well suited to represent something at a completely different level of processing, such as how bright a certain pixel on the screen is, etc. These other "layers" are completely different areas of study, but this is an interesting idea to consider.
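To make the idea concrete, here is a minimal sketch of one entity/relationship structure reused at two different levels of abstraction. All of the class and relation names are hypothetical illustrations, not anything from an actual system:

```python
# One entity/relationship structure, reused at two levels of abstraction.
# All names here are hypothetical.

class Entity:
    def __init__(self, label):
        self.label = label
        self.relations = []  # outgoing edges: (relation_name, target_entity)

    def relate(self, name, target):
        self.relations.append((name, target))

# High level: concrete concepts like "dog" and "run".
dog = Entity("dog")
run = Entity("run")
dog.relate("can", run)

# Low level: perceptual values, using the exact same structure.
pixel = Entity("pixel(3, 7)")
bright = Entity("brightness=0.8")
pixel.relate("has", bright)

for e in (dog, pixel):
    for rel, target in e.relations:
        print(e.label, rel, target.label)
```

The point is only that nothing in the structure itself cares which level it is modeling; the same node-and-edge shape holds whether the entities are concepts or pixels.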