07 October 2012

grokking htm for ai

or .. Notes on Understanding Hierarchical Temporal Memory and its use in Artificial Intelligence and Knowledge Representation.


A few years ago I blogged about a TED talk by Jeff Hawkins on how brain science will change computing. To summarize, the idea was that intelligence is more about prediction than behaviour, that the neocortex evolved to be, essentially, a mechanism for predicting future input, and that it can be simply modeled as a vast network of hierarchical elements that each predict their future input sequences - a hierarchical temporal memory (HTM) system.
Importantly, rather than taking on the much more difficult task of modeling the entire brain, including the ancient and incredibly complex areas below the neocortex that deal with things like emotions and behaviours, one could approximate intelligent behaviour by modeling the much simpler cortex as an HTM - a simple repeated structure running a simple repeated algorithm.

Here's an update with more from Jeff Hawkins, and HTM ..

The following links are for a 2008 talk he gave on AI.. give it a watch
Jeff Hawkins on Artificial Intelligence - Part 1/5
Jeff Hawkins on Artificial Intelligence - Part 2/5
Jeff Hawkins on Artificial Intelligence - Part 3/5
Jeff Hawkins on Artificial Intelligence - Part 4/5
..some notes from the above:
- Work started by looking at what the structure of the brain could tell us about memory/knowledge storage.
- Memory - the bottom of the hierarchy sits closest to the sensory system: the retina for vision, the skin for touch, the ears for hearing, etc.
- Top nodes in the hierarchy get assigned to specific concepts/objects - like the individual neurons that fire every time you see or imagine Tupac and only Tupac (true story) 
- All nodes in the hierarchy are basically the same, and they all ..
    - look for temporal and spatial patterns/sequences
    - and pass the name of the recognized sequence up
    - pass the predictions they make down the hierarchy
- You get fast changing patterns at the bottom, slower changing as you move up the hierarchy.
- After training an HTM system (in silicon or neurons), you get something that learns hierarchical models of causes (statistical regularities) in the world - using Bayesian techniques to build a belief propagation network.
- HTMs make the assumption that the world is hierarchical.
- Predicting what could come from HTM:
   - we can't, but ..
   - it could be much faster - neurons are slow
   - it could have other architectures - bigger bottom layers, fueled by big data for example, or by large sensory arrays, etc.
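To make the node behavior in the notes above concrete, here's a toy Python sketch of a single node that memorizes short sequences, passes the "name" of a recognized sequence up, and passes a prediction of the next input down. The class and method names (ToyNode, learn, feed) are my own invention for illustration - this is not Numenta's actual algorithm, which uses sparse distributed representations rather than exact sequence matching.

```python
# Toy sketch of one HTM-style node: memorize input sequences, report the
# name of a recognized sequence upward, predict the next input downward.
class ToyNode:
    def __init__(self, seq_len=3):
        self.seq_len = seq_len   # length of sequences this node memorizes
        self.sequences = {}      # tuple of inputs -> assigned sequence name
        self.buffer = []         # recent inputs (the temporal context)

    def learn(self, inputs):
        """Store a sequence and assign it a name."""
        key = tuple(inputs)
        self.sequences.setdefault(key, "seq%d" % len(self.sequences))

    def feed(self, x):
        """Consume one input; return (name passed up, prediction passed down)."""
        self.buffer.append(x)
        self.buffer = self.buffer[-self.seq_len:]
        # Recognized a full sequence? Pass its name up the hierarchy.
        name = self.sequences.get(tuple(self.buffer))
        # Current buffer is a prefix of a known sequence? Predict the next input.
        prediction = None
        for seq in self.sequences:
            if list(seq[:len(self.buffer)]) == self.buffer and len(seq) > len(self.buffer):
                prediction = seq[len(self.buffer)]
                break
        return name, prediction

node = ToyNode()
node.learn(["A", "B", "C"])
name1, pred1 = node.feed("A")  # prefix match: predicts "B"
name2, pred2 = node.feed("B")  # predicts "C"
name3, pred3 = node.feed("C")  # full match: recognized as "seq0"
```

Stacking such nodes, with each layer consuming the sequence names emitted by the layer below, gives the fast-changing-at-the-bottom, slow-changing-at-the-top behaviour described above.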

The latest thing I've come across from Hawkins' company Numenta is their new Grok system (love the Heinlein reference). Grok is a cloud-based prediction engine that finds complex patterns in data streams and generates actionable predictions in real time. Check it out on Numenta's site, and their tech page
...
(more to follow.. soonish)