Autonomous cars
June 17, 2007
News story: Stanford robot passes driving test

Last year I watched a PBS documentary on the DARPA-sponsored Grand Challenge. A team from Stanford won the autonomous driving competition by outfitting a Volkswagen Touareg SUV with laser range finders, a GPS receiver, cameras, and a suite of software that let the vehicle navigate hundreds of miles of rugged terrain ahead of every other entrant. I was spellbound! This is an area of computer science that captivates me, and while I don't think I'll pursue a master's in this area, it's tempting.
I was excited to read some news coverage this week of this year's competition, which is staged in an urban setting. Qualification rounds are under way, with the actual competition set for November.
Hierarchical Temporal Memory
April 13, 2007
As seen on Slashdot: Jeff Hawkins (founder of Palm Computing) has written an article entitled
Why can't a computer be like a brain. He covers progress since his book
On Intelligence and gives details on
Hierarchical Temporal Memory (HTM), a platform for simulating neocortical activity. His company, Numenta, has created a framework and tools, free in a "research release," that allow anyone to build and program HTMs.
Given a few days of free time, it would be interesting to dabble with this stuff. Some notable quotes from the article:
"Perhaps we've been going about it in the wrong way. ... Even so-called neural network programming techniques take as their starting point a highly simplistic view of how the brain operates."
"It is clear to many people that the brain must work in ways that are very different from digital computers. To build intelligent machines, then, why not understand how the brain works, and then ask how we can replicate it?"
I like this line of thinking, although I think I can answer the question of why computer scientists have shied away from the brain. Let's be serious: if we had any reasonable success in figuring out how it works, then sure, we'd focus on it.
"My colleagues and I have been pursuing that approach for several years. We've focused on the brain's neocortex, and we have made significant progress in understanding how it works. We call our theory, for reasons that I will explain shortly, Hierarchical Temporal Memory, or HTM."
"The neocortex is a thin sheet of cells, folded to form the convolutions that have become a visual synonym for the brain itself."
"Because of the neocortex's uniform structure, neuroscientists have long suspected that all its parts work on a common algorithm, that is, that the brain hears, sees, understands language, and even plays chess with a single, flexible tool."
"HTM is a theory of the neocortical algorithm."
If these guys are right, then watch out. But I've got my skeptic's hat on.
HTTP
March 31, 2007
Something occurred to me today. I'm working on an application that, unfortunately, consists of about 2MB of JavaScript code that needs to be downloaded to the web browser. Each time the JavaScript is modified even slightly and the page is refreshed, the web browser has to completely re-download the modified file. And because there are 50+ JavaScript files, my understanding is that the web browser has to make a separate HTTP GET request for each one.
Idea #1: My experience with the 'rsync' utility has been that it is a phenomenal piece of technology. Why not use it in conjunction with HTTP so that only the parts of a file that have changed are re-downloaded?
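The core of the rsync idea can be sketched in a few lines: the client hashes fixed-size blocks of its cached copy, and the server sends back only the blocks that differ. This is a toy illustration with made-up helper names; real rsync uses a rolling weak checksum plus a strong hash, so matches can be found at any byte offset, not just at block boundaries.

```python
import hashlib

BLOCK_SIZE = 4  # tiny for demonstration; real rsync uses ~700-byte blocks


def block_hashes(data):
    """Client side: hash each fixed-size block of the cached copy."""
    return [hashlib.md5(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


def make_delta(new, old_hashes):
    """Server side: emit ('match', block_index) where the client's block
    is still current, ('data', bytes) where the content changed."""
    delta = []
    for i in range(0, len(new), BLOCK_SIZE):
        block = new[i:i + BLOCK_SIZE]
        idx = i // BLOCK_SIZE
        if idx < len(old_hashes) and \
                old_hashes[idx] == hashlib.md5(block).hexdigest():
            delta.append(('match', idx))
        else:
            delta.append(('data', block))
    return delta


def apply_delta(old, delta):
    """Client side: rebuild the new file from cached blocks plus patches."""
    parts = []
    for kind, value in delta:
        if kind == 'match':
            parts.append(old[value * BLOCK_SIZE:(value + 1) * BLOCK_SIZE])
        else:
            parts.append(value)
    return b''.join(parts)
```

With a one-character edit to a 2MB file, almost every block comes back as a 'match', so only a handful of literal bytes would cross the wire.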
Idea #2: Making 50 HTTP GET requests just to have the server reply, for each one, that the file hasn't changed is quite slow. Why not introduce a "MULTIGET" request where every file that needs to be fetched could be listed within the same request? The server would then send back a two-part response. The first part would list all of the files that had not changed since they were last fetched, while the second part would carry the rsync differences for each file that had.
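A MULTIGET handler might look like the following sketch. The request format, file names, and contents are all hypothetical; each request entry pairs a filename with the ETag of the client's cached copy, and the reply has the two parts described above.

```python
import hashlib

# The server's current files (names and contents are made up for illustration).
FILES = {
    'app.js':  b'var app = {};',
    'util.js': b'function util() {}',
    'menu.js': b'var menu = [1, 2, 3];',
}


def etag(data):
    """A content hash, standing in for an HTTP ETag."""
    return hashlib.md5(data).hexdigest()


def multiget(entries):
    """Handle one hypothetical MULTIGET request.  Part one of the reply
    lists files that are unchanged (a batched '304 Not Modified'); part
    two carries new content (or, per Idea #1, an rsync-style delta)."""
    unchanged, changed = [], {}
    for name, cached_etag in entries:
        data = FILES[name]
        if etag(data) == cached_etag:
            unchanged.append(name)
        else:
            changed[name] = data
    return {'unchanged': unchanged, 'changed': changed}


# One round trip replaces three conditional GETs: app.js is current,
# menu.js is stale, and util.js isn't cached yet so it has no valid ETag.
cache = {'app.js': b'var app = {};', 'menu.js': b'var menu = [];'}
reply = multiget([(name, etag(data)) for name, data in cache.items()] +
                 [('util.js', '')])
```

The server here sends full bodies for changed files; combining this with Idea #1 would shrink part two to deltas.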
That could be an order of magnitude faster in some cases... I'm tempted to simulate this to see just how much faster it would be.
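Some back-of-envelope arithmetic on why batching helps when nothing has changed. The 50 ms round-trip figure is an assumption purely for illustration, and browsers open a couple of parallel connections per host (per the HTTP/1.1 guideline), so the real-world gap would be narrower, but still large.

```python
# When every reply is "not modified", round-trip latency dominates.
RTT_MS = 50     # assumed client-to-server round-trip time
NUM_FILES = 50  # JavaScript files on the page

serial_gets = NUM_FILES * RTT_MS  # one conditional GET per file, in series
batched = 1 * RTT_MS              # a single MULTIGET round trip

print(serial_gets, batched)  # 2500 vs 50 ms: a 50x difference
```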