<rss version="2.0">
  <channel>
    <title>danielbigham.ca: ai</title>
    <link>http://danielbigham.ca/cgi-bin/blog.pl?keywords=ai</link>
    <description>Daniel Bigham's Blog</description>
    <language>en-us</language>
  <item>
    <title>Parsing Flight Searches Using GPT-3</title>
    <description>https://danielbigham.blogspot.com/2020/07/parsing-flight-searches-using-gpt-3.html</description>
    <pubDate>Sat, 18 Jul 2020 00:00:00 GMT</pubDate>
    <guid>http://www.danielbigham.ca/cgi-bin/blog.pl?mode=view&amp;id=1058</guid>
  </item>
  <item>
    <title>Experiment: Can OpenAI's GPT-3 Write Wolfram Language Code?</title>
    <description>Blog post on Wolfram Community</description>
    <pubDate>Thu, 16 Jul 2020 00:00:00 GMT</pubDate>
    <guid>http://www.danielbigham.ca/cgi-bin/blog.pl?mode=view&amp;id=1057</guid>
  </item>
  <item>
    <title>Alternating Analog/Digital</title>
    <description>I once heard it said that the universe can be viewed through both an "analog"/continuous lens and a "digital"/discrete lens, and furthermore that as you zoom out from the minute scale, there can be a kind of back-and-forth between which of the two models makes the most sense.

It makes me wonder whether computation will evolve in a similar way. Maybe our deep neural nets will evolve into systems that mix and match continuous and discrete layers, giving rise to a synergy between the two that takes their abilities to the next level.

That's a pretty raw, not-thought-through idea, but it has a certain ring to it for me...</description>
    <pubDate>Tue, 23 Jan 2018 00:00:00 GMT</pubDate>
    <guid>http://www.danielbigham.ca/cgi-bin/blog.pl?mode=view&amp;id=1055</guid>
  </item>
  <item>
    <title>Draw Giraffe, See a Movie</title>
    <description>It struck me today, as I was drawing giraffes with four-year-old Hazel, that within a few years it should be possible for a child to draw an animal, whether a giraffe or a dog, and have the computer use that drawing as a style blueprint. The computer could then take the pre-built model for a short movie involving that animal and use the child's stylized version of the animal when rendering a new cut of the movie.

It might also be possible for the child to give some direction about what should happen in the movie, such as the giraffe running from a lion and then escaping by jumping over a river, and have the movie include those elements.

How far away are we from that? Could a compelling prototype be built within 10 years, perhaps?</description>
    <pubDate>Sat, 08 Oct 2016 00:00:00 GMT</pubDate>
    <guid>http://www.danielbigham.ca/cgi-bin/blog.pl?mode=view&amp;id=1052</guid>
  </item>
  <item>
    <title>Graph-powered Machine Learning at Google</title>
    <description>https://research.googleblog.com/2016/10/graph-powered-machine-learning-at-google.html

This looks very promising and fits well with the way I tend to think about things.</description>
    <pubDate>Fri, 07 Oct 2016 00:00:00 GMT</pubDate>
    <guid>http://www.danielbigham.ca/cgi-bin/blog.pl?mode=view&amp;id=1051</guid>
  </item>
  <item>
    <title>Modeling a Person</title>
    <description>One of the odd things Ray Kurzweil has talked about is the idea of recreating his father as an artificial intelligence, using his father's old photographs and writings. Upon hearing this, even though it sounds bizarre, I felt compassion for him -- he lost his father at a relatively young age, and it was obviously a huge loss for him. His faith in bringing him back seems to have grown out of his deep hope that it might be possible, however unlikely.

All that said, something struck me today that isn't completely unrelated. In past months I've pondered the (probably not new) idea of building up an AI by feeding it a stream of video, sound, and touch, and having its algorithms attempt to build a model that can predict the next "frame" of sensory data. Thus, one gets "free" supervised training data.

Let's connect this idea to Ray's dream of modeling his father, but let's imagine his father were still living. What we'll do is have his father wear something like Google Glass, which will record everything he is seeing and hearing. We'll also have him wear a thin nylon-like suit that will record the X/Y/Z positions of his body parts and the touch stimuli he is receiving. The system will then record everything he says, everything he types, and every motor action he performs.

Once we've done this, we again have a "supervised" training set... the stimuli he receives are the inputs, and his behaviors are the outputs... everything from what he says, to the exact tone of his voice, to the precise way he holds his head, to how often he blinks.

Let's imagine we capture a few petabytes of this data, and then, on a fancy computer in 2065 with an insane amount of neural network capability, we train a neural net that tries to predict what his father's behavior would be in a given situation.

Finally, we'll deploy the neural net in VR, creating a photo-realistic representation of the man; to interact with him, you strap on a VR headset and enter that virtual world.

I'm curious what this might be like in a year like 2065... would such a technique exist? Would it be in any way compelling? And how about the limit... given enough time, might we be able to "model" a person well enough to create a VR likeness of them that was truly compelling?</description>
    <pubDate>Fri, 05 Feb 2016 00:00:00 GMT</pubDate>
    <guid>http://www.danielbigham.ca/cgi-bin/blog.pl?mode=view&amp;id=1048</guid>
  </item>
  <item>
    <title>Fuzziness: Discrete vs. Continuous / Digital vs. Analog / Approximation</title>
    <description>I remember feeling excited, a few years ago, about injecting a degree of uncertainty into computation. The point was that while we often create systems that only act if they are certain about their inputs, a world of new possibilities opens up if you are willing to act when you're almost, but not completely, sure about something. I gave the example of Google search -- it will often say "assuming you meant ...", and the addition of those smarts is incredibly useful. Sometimes I think of these systems as "95% systems": systems that are willing to treat something as tentatively true if it is &gt;= 95% likely to be true, and if a false assumption is later discovered, to go back and fix it.

Recently my mind has been resonating with that theme again. This time the perspective is slightly different: it is the realization that in math there are things we call "discrete", such as integers and prime numbers, and things we call "continuous", such as the function y = x^2. The realization is that the "fuzziness" of "95% systems" is really the introduction of continuity / continuous functions into the realm of computation; furthermore, neural networks and probabilistic modelling are the primary examples of continuity in computing today. They have opened up a whole new world of possibilities, solving all sorts of problems that discrete systems struggled with. Another way of looking at this dichotomy is the digital/analog divide.

An intuition/bias that I've had for a number of years is that good things come when you figure out how to properly synergize discrete and continuous systems. The point is that neither mode of reasoning is the slam-dunk answer to intelligence; rather, each is more or less useful depending on the domain. My vague guess is that the intelligent systems of the future will harness both discrete and continuous models in powerful ways, and will excel at having the two play nicely with one another. Well, more than that: those systems will employ a kind of "resonant synergy" that achieves something far more mind-bending than either approach could achieve alone.

One last analogy I'll throw in is the notion of "approximate algorithms". There are many problems in computer science for which computing an exact solution is intractable in practice. For the famous "travelling salesman" problem, for example, rather than computing the optimal solution we focus on computing solutions that are likely to be really close to optimal, and on doing so perhaps a trillion times faster than finding the very best solution would take. These approximate algorithms are, I think, yet another example of "fuzziness", and of why it is such an important and exciting area of development.
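
To make the approximate-algorithms point concrete, here is a minimal sketch (my own illustration, not from the original post) of the classic nearest-neighbour heuristic for the travelling salesman problem: it gives up on optimality in exchange for running in O(n^2) time instead of examining all n! tours, and in practice its tours tend to land reasonably close to the optimal length.

import math

# Sketch (not from the post): greedy nearest-neighbour TSP heuristic.
def nearest_neighbour_tour(points):
    """points: list of (x, y) tuples; returns a tour as a list of indices."""
    unvisited = set(range(1, len(points)))
    tour = [0]  # start arbitrarily at city 0
    while unvisited:
        here = points[tour[-1]]
        # Greedily hop to whichever unvisited city is closest.
        nxt = min(unvisited, key=lambda i: math.dist(here, points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour</description>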
    <pubDate>Mon, 30 Nov 2015 00:00:00 GMT</pubDate>
    <guid>http://www.danielbigham.ca/cgi-bin/blog.pl?mode=view&amp;id=1041</guid>
  </item>
  <item>
    <title>Using AI to Optimize Parameters</title>
    <description>http://arxiv.org/pdf/1402.1694v4.pdf

I came across this interesting paper today, which describes a way of doing Markov chain Monte Carlo that speeds up fitting the parameters of a probabilistic model by up to 200x. And of course, no big surprise, it's about using intelligent approximation.

This brings to mind the possibility that one day the algorithm we use to do optimization may itself use "AI". What I mean is that exploring a surface in hyperspace to find its minimum is in some sense like many other problems: you can look around and collect clues about the characteristics of the curvature, use those clues to build a kind of mental model of the dynamics at play, and then ultimately use what you know about the space to explore it intelligently, as quickly as you can.

Imagine a "neural net" of sorts that had been trained on millions of example optimization problems and was able to very efficiently build an internal model of the topology of a hyper-surface, using that to calculate the optimal parameters... it might be hundreds or thousands of times faster than naive human-designed attempts that use exact algorithms... and for problems that are mind-blowingly complex and viewed as semi-intractable today, it might be trillions of times faster.

An amusing side realization is that optimization is a core requirement for building AIs / neural nets in the first place, so there could conceivably be a recursive benefit to using AIs to do optimization... back to our good old game of exponential improvements in technology: your now-supercharged ability to do optimization lets you train an even more capable neural net, which enables an even faster optimization algorithm, which enables even more capable neural nets, and back and forth you go.
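
As a toy illustration of that "build a model of the surface, then probe it" idea, here is a short sketch. To be clear, this is my own example of generic surrogate-model optimization (in the spirit of Bayesian optimization), not the algorithm from the paper linked above, and the function names are invented for the illustration.

import numpy as np

# Toy sketch, not the paper's algorithm: a cheap surrogate model is fit to
# the evaluations seen so far and used to decide where to spend the next
# expensive evaluation -- the "mental model of the surface" idea above.
def expensive_objective(x):
    # Stand-in for a costly evaluation, e.g. a model's validation loss.
    return (x - 3.7) ** 2 + 0.1 * np.sin(5 * x)

def surrogate_minimize(f, lo, hi, n_init=4, budget=16):
    rng = np.random.default_rng(0)
    xs = list(rng.uniform(lo, hi, n_init))    # a few initial probes
    ys = [f(x) for x in xs]
    for _ in range(budget):
        coeffs = np.polyfit(xs, ys, deg=2)    # cheap quadratic surrogate
        cand = rng.uniform(lo, hi, 256)       # candidate probe points
        best = cand[np.argmin(np.polyval(coeffs, cand))]
        xs.append(best)
        ys.append(f(best))                    # one expensive evaluation
    return xs[int(np.argmin(ys))]

print(surrogate_minimize(expensive_objective, 0.0, 10.0))</description>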
    <pubDate>Mon, 16 Nov 2015 00:00:00 GMT</pubDate>
    <guid>http://www.danielbigham.ca/cgi-bin/blog.pl?mode=view&amp;id=1040</guid>
  </item>
  <item>
    <title>Scientific Method as Analogous to Human Perceptual System / Recurrent Neural Network Learning</title>
    <description>I've posted before about a fascinating realization: you can turn the sensory input to a recurrent neural network into a supervised learning problem by making the network's outputs a prediction of what comes next.

A further realization is that the scientific method is essentially the same thing: you create a model and make predictions with it, then examine the results of an experiment, compare them to your predictions, and adjust your model based on the outcome.

The terms "microcosm" and "macrocosm" come to mind... perhaps what the brain does is a kind of microcosm of the scientific method.
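
A tiny sketch of that first point (my own illustration, not from the earlier post): a raw sensory stream becomes "free" supervised training data just by shifting it one step.

import numpy as np

# Illustrative sketch: self-supervised next-frame prediction targets.
def next_frame_pairs(stream):
    """stream: array of shape (T, d), T frames of d sensory readings.
    Returns (inputs, targets), where targets[t] is simply inputs[t + 1]."""
    return stream[:-1], stream[1:]

# Toy 1-D "sensory stream"; any sequence model trained to map X[t] to Y[t]
# is now doing supervised learning with labels the stream provided itself.
stream = np.sin(np.linspace(0, 8 * np.pi, 200)).reshape(-1, 1)
X, Y = next_frame_pairs(stream)</description>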
    <pubDate>Mon, 09 Nov 2015 00:00:00 GMT</pubDate>
    <guid>http://www.danielbigham.ca/cgi-bin/blog.pl?mode=view&amp;id=1037</guid>
  </item>
  </channel>
</rss>