
Fuzziness: Discrete VS Continuous / Digital VS Analog / Approximation
November 30, 2015

I remember a few years ago that I was feeling excited about injecting a degree of uncertainty into computation. The point was that while we often create systems that only act if they are certain about their inputs, a world of new possibilities opens up if you are willing to act when you're almost but not completely sure about something.

I gave the example of Google search -- it will often say "assuming you meant ...", and the addition of those smarts is incredibly useful.

Sometimes I think of these systems as "95% systems" -- systems that are willing to treat things as tentatively true if it's >= 95% likely that they're true, and if it is later determined that a false assumption was made, then you go back and fix that assumption.
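As a toy illustration of what a "95% system" might look like, here is a sketch in Python. All the names and numbers here are illustrative, not from any real system:

```python
# A minimal sketch of a "95% system": act when confidence >= 0.95,
# and flag the choice as tentative so it can be revisited if a false
# assumption is later discovered. Purely illustrative.

THRESHOLD = 0.95

def interpret(query, candidates):
    """Pick the most likely interpretation of a query.

    `candidates` maps possible interpretations to estimated
    probabilities. Returns (interpretation, assumed), where `assumed`
    is True if we acted on a tentative guess rather than certainty.
    """
    best = max(candidates, key=candidates.get)
    p = candidates[best]
    if p >= THRESHOLD:
        return best, p < 1.0   # tentatively true; may need revisiting
    return None, False          # not confident enough to act

# "Assuming you meant receive..."
guess, assumed = interpret("recieve", {"receive": 0.97, "recieve": 0.03})
print(guess, assumed)  # receive True
```

The key design point is that the system keeps track of which conclusions were tentative, so a later contradiction can trigger a repair rather than a crash.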

Recently my mind has been resonating on that theme again. This time the perspective is slightly different -- it is the realization that in math, there are things we call "discrete" -- such as integers, prime numbers, etc, and there are things we call "continuous" -- such as the function y = x^2.

The realization is that the "fuzziness" of "95% systems" is really the introduction of continuity / continuous functions into the realm of computation, and furthermore, neural networks and probabilistic modelling are the primary examples of continuity in computing today. They have opened up a whole new world of possibilities, solving all sorts of problems that discrete systems struggled with.

Another way of looking at this dichotomy is the digital/analog divide.

An intuition/bias that I've had for a number of years is that good things come when you figure out how to properly synergize discrete and continuous systems. The point is that neither system of reasoning is the slam dunk answer to intelligence, but rather each of them is more or less useful depending on the domain.

My vague guess is that the intelligent systems of the future will harness both discrete and continuous models in powerful ways, and will excel at having those two systems play nicely with one another. More than that: those systems will employ a kind of "resonant synergy" that achieves something far more mind-bending than anything either approach could achieve alone.

One last analogy I'll throw in is the notion of "approximate algorithms": there are many problems in computer science for which computing an exact solution is intractable. Take the famous "travelling salesman" problem: rather than computing the optimal solution, we focus on computing solutions that are likely to be really close to optimal, and we can find them perhaps a trillion times faster than an exhaustive search for the very best solution would take. These approximate algorithms are, I think, yet another example of "fuzziness", and of how it can be such an important and exciting area of development.
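To make the trade-off concrete, here is a sketch of the classic nearest-neighbour heuristic for the travelling salesman problem. The city coordinates are made up for illustration:

```python
# A minimal sketch of an approximate algorithm for the travelling
# salesman problem: the nearest-neighbour heuristic. It runs in O(n^2)
# time instead of the exponential time an exact solver needs, at the
# cost of possibly missing the true optimum.
import math

def nearest_neighbour_tour(points):
    """Greedily visit the closest unvisited city at each step."""
    unvisited = set(range(1, len(points)))
    tour = [0]  # start at city 0
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (0, 1), (5, 5), (1, 0)]
print(nearest_neighbour_tour(cities))
```

The resulting tour is usually decent but not guaranteed optimal, which is exactly the "fuzzy" bargain: give up certainty about the best answer in exchange for an enormous speedup.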


Using AI to Optimize Parameters
November 16, 2015

http://arxiv.org/pdf/1402.1694v4.pdf

I came across this interesting paper today, which describes a way of doing Markov Chain Monte Carlo that speeds up optimizing the parameters of a probabilistic model by up to 200x. And of course, no big surprise, it's about using intelligent approximation.

This brings to mind the possibility that one day the algorithm we use to do optimization may actually use "AI". What I mean by that is that exploring a curve in hyperspace to find the minimum is in some sense like many other problems: you can look around and collect clues about the characteristics of the curvature, use those clues to build a kind of mental model of the dynamics at play, and then ultimately use what you know about the space to explore it intelligently and as quickly as you can.
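A tiny toy version of "model the curve, then exploit the model" can be sketched like this. To be clear, this is my own illustrative example of surrogate-model optimization, not the paper's MCMC method:

```python
# A minimal sketch of model-guided optimization: fit a quadratic
# surrogate to a few samples of an unknown 1-D function, then jump
# straight to the surrogate's minimum instead of stepping blindly.
# The function f is a made-up example; the optimizer only sees samples.
import numpy as np

def f(x):
    return (x - 3.0) ** 2 + 1.0   # true minimum at x = 3, unknown to us

xs = np.array([0.0, 1.0, 5.0])    # a few probe points
ys = f(xs)

# Fit y ~ a*x^2 + b*x + c, then solve for the vertex at -b / (2a).
a, b, c = np.polyfit(xs, ys, 2)
x_min = -b / (2 * a)
print(round(x_min, 6))  # 3.0
```

Here the surrogate happens to match the true function exactly, so one "intelligent" jump lands on the minimum; the speculation in the post is essentially this idea scaled up, with a learned model replacing the hand-picked quadratic.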

Imagine a "neural net" of sorts that had been trained on millions of example optimization problems, and was able to very efficiently build an internal model of the topology of a hyper-surface, using that to calculate the optimum parameters... it might be hundreds, or thousands of times faster than naive human attempts that use exact algorithms... and for problems that are just mind-blowingly complex and viewed as semi-intractable today, it might be trillions of times faster.

An amusing side realization is that optimization is a core requirement for building AIs / neural nets in the first place, and so it's conceivable that there could be a recursive benefit to using AIs to do optimization... back to our good old exponential improvements in technology game: Your now supercharged ability to do optimization allows you to train an even more capable neural net, which allows an even faster optimization algorithm, which allows even more capable neural nets, and back and forth you go.


Scientific Method as Analogous to Human Perceptual System / Recurrent Neural Network Learning
November 9, 2015

I've posted earlier about a fascinating realization that I've had: you can turn sensory input to a recurrent neural network into a supervised learning problem by making the outputs of the network a prediction of what comes next.
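The mechanical part of that trick is simple enough to sketch: the training targets are just the input sequence shifted by one step. The signal values below are made up for illustration:

```python
# A minimal sketch of turning a raw sequence into a supervised
# learning problem: at each step, the target is simply the next
# element. This is the standard next-step-prediction setup used to
# train recurrent networks.

def next_step_pairs(sequence):
    """Pair each element with the element that follows it."""
    return list(zip(sequence[:-1], sequence[1:]))

signal = [0.1, 0.4, 0.9, 1.6]
print(next_step_pairs(signal))
# [(0.1, 0.4), (0.4, 0.9), (0.9, 1.6)]
```

No labels need to be collected: the world itself supplies the supervision, one time step later.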

A further realization is that the scientific method is essentially the same thing: You create a model, and make predictions using that model. You then examine the results of an experiment and compare that to your predictions, and adjust your model based on the outcome.
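That predict / compare / adjust loop can be boiled down to a few lines. Here is a deliberately tiny illustration (a delta-rule update estimating an unknown constant; the numbers are made up):

```python
# A minimal sketch of the scientific-method loop: predict from the
# current model, compare against the observed outcome, and adjust the
# model in proportion to the error.

def refine(model, observations, lr=0.5):
    for obs in observations:
        prediction = model          # 1. predict using the model
        error = obs - prediction    # 2. compare with the experiment
        model += lr * error         # 3. adjust the model
    return model

estimate = refine(0.0, [4.0, 4.0, 4.0, 4.0])
print(estimate)  # 3.75, approaching the true value 4.0
```

The same skeleton describes both a scientist updating a theory and a network updating its weights, which is the parallel the post is pointing at.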

The terms "microcosm" and "macrocosm" come to mind... perhaps what the brain does is a kind of microcosm of the scientific method.
