Hello all! We are planning an HTM Meetup near the SF airport. I am looking for volunteers to:
present something about HTM
help set up or tear down for the meetup
I will be presenting on the same day at the Open Data Science Conference, and I’d be happy to give the same presentation at the meetup. We had a really nice meetup attached to the Boston ODSC conference.
Would you guys be willing to record this for those of us who can’t make it in person? I’m very interested in hearing the presentation one way or another.
Jeff Hawkins is now headlining this event! He’s going to talk about layers and columns. Also added:
TouchNet - Training AIs to touch, move, and interact with their environment
Jason Toy, Somatic Founder/CEO
In this talk, Jason Toy will walk you through a preview of TouchNet, a project he has been working on to enable training of AIs to interact with their environment via touch and movement. TouchNet is a dataset of 3D objects and a simulator to interact with those objects. Jason will show how you could implement your own AI using HTM or other algorithms to learn through the simulator.
Location Relative to the “Environment” Object: A Brief Review
Marcus Lewis, Numenta Research Engineer
In Numenta’s latest paper, they propose that every part of the neocortex computes and uses a “location relative to the object.” Separately, over the last 45 years, it has become clear that higher cortex and the hippocampus compute and use a “location relative to the environment.” In this talk, Marcus Lewis will introduce you to what we know about this higher-level location processing. He will focus on the entorhinal cortex and the hippocampus, with particular attention to grid cells, place cells, and head-direction cells. These cells give us many clues about the brain’s representation of space. Marcus will try to share some of those clues.
This is now a full schedule, so we don’t need any more speakers. Should be some fun discussion. Please RSVP now!
I’m changing my talk subject. I’ll talk about allocentric location instead. Here’s the new description!
Recognizing Locations on Objects
Marcus Lewis, Numenta Research Engineer
The brain learns and recognizes objects with independently moving sensors. It’s not obvious how a network of neurons would do this. Numenta has suggested that the brain solves this by computing each sensor’s location relative to the object, and learning the object as a set of features-at-locations. In this talk, Marcus will show how the brain might determine this “location relative to the object.” He’ll extend the model from Numenta’s recent paper so that it computes this location. This extended model takes two inputs: each sensor’s sensory input, and each sensor’s “location relative to the body.” The model connects the columns in such a way that a column can compute its “location relative to the object” from another column’s “location relative to the object.” When a column senses a feature, it recalls a union of all locations where it has sensed this feature, then the columns work together to narrow their unions. This extended model essentially takes its sensory input and asks, “Do I know any objects that contain this spatial arrangement of features?”
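To make the “recall a union of locations, then narrow it” idea concrete for anyone reading along, here is a minimal toy sketch in Python. It is my own illustration under simplifying assumptions (flat 2D locations, a single rigid translation per object, no neurons, columns, or SDRs), and the names `ToyObjectMemory`, `learn`, and `infer` are invented for this example; it is not Numenta’s model or code.

```python
from collections import defaultdict

class ToyObjectMemory:
    """Stores objects as sets of features-at-locations and recognizes them
    from a sequence of (sensor location, sensed feature) pairs."""

    def __init__(self):
        # feature -> set of (object_name, feature_location_on_object)
        self.feature_to_locations = defaultdict(set)

    def learn(self, object_name, features_at_locations):
        # features_at_locations: dict mapping (x, y) location on the object -> feature label
        for location, feature in features_at_locations.items():
            self.feature_to_locations[feature].add((object_name, location))

    def infer(self, sensations):
        # sensations: list of ((x, y) sensor location in body coordinates, feature label)
        candidates = None
        for sensor_loc, feature in sensations:
            # Union of every place this feature has ever been learned, expressed
            # as (object, offset that maps body coordinates onto object coordinates).
            recalled = {
                (obj, (feat_loc[0] - sensor_loc[0], feat_loc[1] - sensor_loc[1]))
                for obj, feat_loc in self.feature_to_locations[feature]
            }
            # Narrowing step: keep only hypotheses consistent with every sensation so far.
            candidates = recalled if candidates is None else candidates & recalled
        return candidates


memory = ToyObjectMemory()
memory.learn("mug",  {(0, 0): "handle", (1, 0): "rim"})
memory.learn("bowl", {(0, 0): "rim",    (1, 0): "rim"})

# One sensation of "rim" is ambiguous: three (object, offset) hypotheses remain.
print(memory.infer([((0, 0), "rim")]))

# Moving the sensor and also sensing "handle" narrows it down to the mug alone.
print(memory.infer([((0, 0), "rim"), ((-1, 0), "handle")]))
```

Each surviving (object, offset) pair plays the role of a “location relative to the object” hypothesis; in the real model that narrowing happens through connections between columns rather than set intersection.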
I tried to replay the YouTube recording of the talks, but the slides were too washed out to be readable. Can the slides be published somewhere so it’s possible to follow along? Thanks!