

Modeling a Person
February 5, 2016

One of the odd things Ray Kurzweil has talked about is the idea of recreating his father as an artificial intelligence, using old photographs and his father's writings. Upon hearing this, even though it sounds bizarre, I felt compassion for him -- he lost his father at a relatively young age, and it was obviously a huge loss. His faith that he might bring his father back seems to have grown out of a deep hope that it could be possible, however unlikely.

All this said, something struck me today that isn't completely unrelated. In recent months I've pondered the (probably not new) idea of building up an AI by feeding it a stream of video, sound, and touch, and having its algorithms try to build a model that predicts the next "frame" of sensory data. Since the next frame always arrives on its own, one gets "free" supervised training data.
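To make that concrete, here's a toy sketch of what I mean (the framework, the GRU, and the feature sizes are all placeholders chosen purely for illustration): the "label" for each time step is simply the sensory frame that actually arrives next, so nobody ever has to annotate anything.

import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self, frame_dim=1024, hidden_dim=512):
        super().__init__()
        self.rnn = nn.GRU(frame_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, frame_dim)

    def forward(self, frames):               # frames: (batch, time, frame_dim)
        hidden, _ = self.rnn(frames)
        return self.head(hidden)              # a predicted next frame at every step

model = NextFramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(sensor_stream):
    # sensor_stream: (batch, time, frame_dim) of video/sound/touch features
    inputs  = sensor_stream[:, :-1]           # frames 0..T-1
    targets = sensor_stream[:, 1:]            # frames 1..T -- the "free" labels
    loss = loss_fn(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

loss = train_step(torch.randn(8, 64, 1024))   # e.g. 8 synthetic clips of 64 frames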

Let's connect this idea to Ray's dream of modeling his father, but let's assume his father were still living. What we'll do is have his father wear something like Google Glass, which will record everything he sees and hears. We'll also have him wear a thin, nylon-like suit that records the X/Y/Z position of his body parts and the touch stimuli he is receiving. The system will also record everything he says, everything he types, and every motor action he performs.

Once we've done this, we again have a "supervised" training set... the stimuli he receives are the inputs, and his behaviors are the outputs... everything from what he says, to the exact tone of his voice, to the precise way he holds his head, to how often he blinks.

Let's imagine we capture a few petabytes of this data, and then, with a fancy computer in 2065 that has an insane amount of neural-network capacity, we train a neural net to predict what his father's behavior would be in a given situation.
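For what it's worth, here's a rough sketch of what that training setup might look like (again, the architecture and dimensions are pure guesses on my part; the real thing in 2065 would surely look nothing like this): a sequence model that maps the recorded stimuli to the recorded behaviors.

import torch
import torch.nn as nn

class BehaviorModel(nn.Module):
    def __init__(self, stimulus_dim=2048, behavior_dim=256, hidden_dim=1024):
        super().__init__()
        self.encoder = nn.GRU(stimulus_dim, hidden_dim, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, behavior_dim)

    def forward(self, stimuli):               # (batch, time, stimulus_dim): sight, sound, touch
        hidden, _ = self.encoder(stimuli)
        return self.decoder(hidden)            # (batch, time, behavior_dim): speech, posture, blinks...

model = BehaviorModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-ins for the recorded data, just so the sketch runs end to end.
stimuli   = torch.randn(4, 100, 2048)          # 4 clips, 100 time steps of stimulus features
behaviors = torch.randn(4, 100, 256)           # the matching behavior frames

for epoch in range(10):
    loss = loss_fn(model(stimuli), behaviors)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()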

Finally, we'll run the neural net in VR, creating a photo-realistic representation of the man. To interact with him, you strap on a VR headset and enter that virtual world.

I'm curious what this might be like in a year like 2065... would such a technique exist?  Would it be in any way compelling?  And what about the limit... given enough time, might we be able to "model" a person well enough to create a VR likeness of them that was truly convincing?


Tesla 7.1 Software: Summoning Car From/To Garage
January 10, 2016

Tesla has released the next iteration of its software, and videos are cropping up of people summoning their cars into or out of their garages. The car even opens the garage door autonomously.

These "firsts" are really delightful for us tech folk. They bring a smile to my face. Well done Tesla people.


Hyperloop Progress
January 9, 2016

The Hyperloop is, of course, one of those ideas whose future is very uncertain. Will it ever become a reality? Maybe?!

But things suddenly seem to be heating up -- this week a picture showed up of 20 or so massive tubes sitting out in the desert, ready to be assembled into a test track. Word is that they want to have a prototype track (3 km?) set up by late 2016 or early 2017. That's coming up fast.

That photo, I think, was the inflection point for me. It's when I went from feeling hopeful but skeptical to feeling even a little bit confident that, yes, this thing is going to happen.

And that's a bit startling, because... well, this is the Hyperloop, and it's a very wild and crazy thing to actually become real.

I'm trying to imagine this miles-long tube, with something whizzing through it at 500+ miles per hour. It feels so imaginary! Time will tell, I guess.
