SoN 2014 Projects

Mid-Term Reports

Contents

  • Reconstruction: Brendan Berman, Ross Story (mentor: Marek Otahal)
  • Benchmarking and Visualizing HTMs: Curtis SerVaas (mentor: Ian Danforth)
  • Image Interpretation: Steven Karapetyan (mentor: Fergal Byrne)
  • Insights Into the CLA: Ruaridh O'Donnell (mentor: Chetan Surpur)
  • Spatial Pooler OCR: Jim Bridgewater (mentor: Scott Purdy)
  • Simple AI for Games: Fernando Martinez (mentor: Matt Keith)
  • Epilepsy Seizure Prediction: Anubhav Chaturvedi (mentor: Matt Taylor)

Reconstruction

  • Students:
    • Brendan Berman (United States)
    • Ross Story
  • Mentor: Marek Otahal

From Brendan:

I’m interested in project 6 on the idea list wiki page.

My plan of action is to start by working through some of the examples, writing my own simple NuPIC-based programs, and reading through a significant portion of the source code in nupic.core, which will give me a sense of how the code is structured and used. Ideally I will accomplish this by May 1. Then I’ll work with my mentors to gain a deep understanding of the theory behind reconstruction and how it was previously accomplished with the ‘Classifier’ code. After evaluating the ‘Classifier’ to see what does and does not work, I’ll follow TDD principles to get a minimum viable product and then continue to optimize the solution.

From Ross:

I’m a computational neuroscientist working on a brain-machine interface for stroke rehabilitation, and I’d love to apply a reconstruction-capable NuPIC to my thesis. Magnetoencephalography data seems well suited to NuPIC’s strengths. I’d also like to explore integrating it with ROS for robotics, especially combining the expanded layer V model with a TD-style basal ganglia for goal-oriented path planning.

My C++ is a bit rusty but I’m excellent with Python and can easily pick up C++ again.



Benchmarking and Visualizing HTMs

Full Title: Benchmarking and Visualizing HTMs in JS on MNIST dataset by exploring hyper-parameter space
  • Student: Curtis SerVaas (United States)
  • Mentor: Ian Danforth

What

This is a combination of ideas 1, 5, and 8 in the SoN idea list.

This is an implementation of Deep Neural Networks in JS that allows the user to tune the hyper-parameters in real time, as well as see some of the properties of the DNN in real-time.

This is a draft of a paper on exploring the hyper-parameter space of neural networks.

I would like to create a JS implementation of the CLA algorithm that combines the ideas in the previous two links.

It will visualize the SDRs of the network in real-time like in the Cerebro demo. So, we’ll be able to answer questions like:

  1. “Trained vs. an untrained spatial pooler: how do the SDRs differ?”
  2. “Impact of similar inputs: how does the output change as inputs have varying levels of similarity?” Also, how similar are the SDRs for similar inputs?
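One way to make these comparisons concrete (a sketch of my own, not part of the proposal; the function names are assumptions) is to measure the overlap between two binary SDRs:

```python
import numpy as np

def sdr_overlap(a, b):
    """Number of active bits shared by two binary SDRs."""
    return int(np.sum(a & b))

def sdr_overlap_fraction(a, b):
    """Overlap normalized by the smaller number of active bits."""
    denom = min(int(a.sum()), int(b.sum()))
    return sdr_overlap(a, b) / denom if denom else 0.0

# Two toy 16-bit SDRs with 4 active bits each, sharing 3 active bits.
a = np.zeros(16, dtype=np.int32)
b = np.zeros(16, dtype=np.int32)
a[[1, 4, 7, 9]] = 1
b[[1, 4, 7, 12]] = 1

print(sdr_overlap(a, b))           # 3
print(sdr_overlap_fraction(a, b))  # 0.75
```

Plotting this overlap fraction for inputs of varying similarity, before and after training, would answer both questions above.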

In addition, it will explore and visualize the hyper-parameter space by plotting prediction accuracy as a function of the number of columns, sparsity, and increment/decrement ratios.
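Such an exploration could be sketched as a simple grid search. In the sketch below, `train_and_score` is a hypothetical stand-in for training a CLA and measuring prediction accuracy; the parameter names and the dummy scoring function are assumptions for illustration only:

```python
import itertools

def train_and_score(num_columns, sparsity, inc_dec_ratio):
    """Hypothetical stand-in: train a CLA with these hyper-parameters
    and return prediction accuracy on a held-out set.  The formula
    below is a dummy so the sweep runs end to end."""
    return 1.0 / (1.0 + abs(num_columns - 2048) / 2048.0
                  + abs(sparsity - 0.02)
                  + abs(inc_dec_ratio - 10.0) / 10.0)

# Assumed hyper-parameter grid; real values would come from experiments.
grid = {
    "num_columns": [512, 1024, 2048],
    "sparsity": [0.01, 0.02, 0.04],
    "inc_dec_ratio": [1.0, 10.0, 100.0],
}

results = []
for cols, sp, ratio in itertools.product(*grid.values()):
    results.append(((cols, sp, ratio), train_and_score(cols, sp, ratio)))

best_params, best_score = max(results, key=lambda r: r[1])
print(best_params)  # (2048, 0.02, 10.0)
```

The `results` list is exactly the data the visualization would plot: accuracy as a function of each hyper-parameter.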

While implementing these features, I will write lots of good documentation in WorkFlowy.
To see what this might look like, here’s my in-progress summary of “On Intelligence” in WorkFlowy.

I think WorkFlowy is a really awesome tool for grokking things faster. I use it to take notes on pretty much everything I read. However, it has several limitations, such as the fact that it is a tree rather than a general graph and that it lacks revision control, which is why I’m currently working on creating an open-source implementation that has those features (and more) (see below).

Why?

  1. Greatly increase the usability of the CLA algorithm. https://github.com/numenta/nupic/wiki/2014-Goals-For-NuPIC
  2. Serve as a benchmark of the classification accuracy of the CLA algorithm on the MNIST dataset. I will survey the literature on benchmarking so as to follow best-practices:
    http://page.mi.fu-berlin.de/prechelt/NIPS_bench.html
  3. Demonstrate the core properties of the spatial pooler.
  4. Provide a compelling NuPIC demo.

Experience

  1. Have read On Intelligence and the CLA white-paper.
  2. Have implemented Decision trees as part of the CS 390 course at Purdue.
    https://www.cs.purdue.edu/homes/neville/courses/390DM/schedule.html
  3. Have implemented PGMs for OCR as part of the Coursera PGM class.
    coursera.org/course/pgm
  4. Founded an early-stage open source project: https://github.com/CurtisSV/ndentJS
  5. App Academy graduate: appacademy.io


Insights Into The CLA

  • Student: Ruaridh O’Donnell (Scotland, UK)
  • Mentor: Chetan Surpur

General Project Idea:

The algorithm underlying NuPIC is powerful and has a lot of potential. Many people want to use the algorithm and extend it (sensorimotor, hierarchy, etc.), but to do either they must first understand how the algorithm works, which takes both programming skill and a lot of time.

A deeper understanding of the behaviour of the CLA could be obtained faster with the right tools. A good explanation with interactive diagrams could explain how it works to beginners. And interactive examples to demonstrate its behaviour over different data or parameter settings would be accessible to non programmers. These things aren’t easy or fast to explore in NuPIC just now.

Project Plan:

After some thought I pared back the aims to this idea: an interactive document that explains the spatial pooler.
This would explain how the spatial pooler works (to non-programmers) with rich visualisations. It would demonstrate the behaviour, starting at the details of individual synapse permanences and working up to broad ideas such as noise invariance and good parameter values. Crucially, it would also demonstrate the spatial pooler working over lots of data sets, with good visualisations to see and compare this behaviour.
I plan to publish this online for everyone to access (hopefully on the NuPIC GitHub).
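As a rough illustration of the lowest level such a document would start from, here is a minimal sketch of a synapse permanence update, loosely following the increment/decrement rule described in the CLA white paper; the parameter names and values below are my own assumptions, not NuPIC’s actual defaults:

```python
import numpy as np

# Assumed parameter names and values, for illustration only.
PERM_INC = 0.05        # increment for synapses aligned with active input
PERM_DEC = 0.008       # decrement for the rest
CONNECTED_PERM = 0.2   # threshold above which a synapse is "connected"

def update_permanences(permanences, input_bits):
    """One learning step for a single winning column's potential synapses:
    strengthen synapses on active input bits, weaken the others."""
    delta = np.where(input_bits == 1, PERM_INC, -PERM_DEC)
    return np.clip(permanences + delta, 0.0, 1.0)

perms = np.array([0.19, 0.21, 0.50, 0.05])
inputs = np.array([1, 0, 1, 0])

perms = update_permanences(perms, inputs)
connected = perms >= CONNECTED_PERM
print(perms)      # [0.24  0.202 0.55  0.042]
print(connected)  # [ True  True  True False]
```

Note how the first synapse crosses the connected threshold after a single step; animating exactly this kind of transition is what the interactive diagrams would show.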

Benefits To NuPIC:

One of the main advantages of this project would be in helping people new to NuPIC. It would help bridge the gap between reading about the CLA and seeing it work in code.
Also, examples that are easy to play around with but have powerful visualisations could hopefully benefit the more experienced members of the community, particularly in seeing the SP behaviour over multiple data sets.
This idea is also rather similar to Subutai’s SoNuPIC idea - “Demonstrate the Core Properties of the Spatial Pooler”. I would be interested in incorporating more of these ideas if he wanted.

Knowledge and Skills Needed:

I worked on this in February, including creating some preliminary visualisations, and I have a rough idea of what would be needed.
Good knowledge of the CLA - I implemented a version of the CLA myself a couple of years ago and spent a summer playing around with it, although I was mainly implementing and doing a parameter search rather than anything very clever. I’ve also read On Intelligence and know the white paper well.
Programming skill/language - I’m very comfortable with Mathematica, which I use frequently for programming projects. It can run algorithms, run interactive visualisations, and combine these together into documents which can be published on the web. I also have a working knowledge of Python.
Knowledge of how to create good visualisations - I’m by no means an expert in this area, but I have a good understanding of the concepts. I’m currently reading a very interesting book about learning, which I’ve found applicable.



Image Interpretation

  • Student: Steven Karapetyan
  • Mentor: Fergal Byrne

So I am steering toward Data Visualization and the CLA, using a generative-art Python interpreter (Nodebox / Processing).

The purpose of Data Visualization and the CLA is to study and understand how several algorithms function and how their parameters affect and modify the results in problems of classification, regression, clustering, dimensionality reduction, dynamical systems, and reward maximization.

If this is not approved, then either the ML Benchmark or Anomaly Detection will work.

I have the following skills; if I need additional prerequisites, I will definitely have time in May to do the studying.

  • Linear algebra
  • Probability/statistics (in progress)
  • Differential equations
  • Machine learning theory (in progress)
  • Basic programming with C++ / Java (search algorithms, data structures)
  • Beginner in Python; will do more practice in May
  • Basic programming with PHP / SQL
  • Digital artist and designer for 8 years, if that can be applied anywhere?
  • Adobe Creative Suite applications (all of them)



Spatial Pooler OCR

  • Student: Jim Bridgewater (United States)
  • Mentor: Scott Purdy

I am interested in working on optical character recognition (OCR) in order to enable searching for words in scanned images of paper documents. This could be accomplished by converting all document images to text files and searching with existing search tools, but a more interesting approach is to create an intelligent search tool that can find words in graphic images, many of which are not images of documents and cannot be converted to a text format in any meaningful way.

The repeatability of scanning a document with a flatbed scanner provides an excellent platform for testing the spatial pooler’s sensitivity to similar inputs. Two images of the same document can be almost identical when the document is not disturbed between scans and the same scanner settings are used. Variations between images of the same document can be quantified when different scanner settings are used to produce those variations. Image resolution, gamma, brightness, and contrast can all be quantifiably controlled in a scanner front-end like XSane. Real-world variation can also be introduced by removing and then replacing the document in the scanner between scans, resulting in small changes in the placement and orientation of the document that would be difficult to reproduce via software modification of the image data.

Scanning a document at different resolutions provides a means to test the spatial pooler’s ability to store invariant forms. Due to their resolution differences, these images will provide very different inputs to the spatial pooler even though the desired output is the same. I spent some time writing Python code to parse JPEG images of documents into lines of text and then into individual characters; identifying a letter regardless of its size or resolution is a challenging problem in need of a good solution.

The spatial pooler’s sensitivity to noise can be tested by modifying the image data to introduce noise. This could be as simple as changing pixel values by random amounts, or, as time allows, as elaborate as image processing software that reproduces effects seen in real documents, such as non-uniform fading over time that leaves a document with lighter print at the top and darker print at the bottom.
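The simplest case, random pixel noise, might be sketched like this (the function name and noise model are my own, not part of the project):

```python
import numpy as np

def add_pixel_noise(image, flip_fraction, rng=None):
    """Return a copy of a grayscale image with a fraction of pixels
    replaced by random values, as a simple stand-in for scanner noise."""
    rng = rng or np.random.default_rng(42)
    noisy = image.copy()
    n_pixels = image.size
    n_flip = int(round(flip_fraction * n_pixels))
    # Pick distinct pixel positions and overwrite them with random levels.
    idx = rng.choice(n_pixels, size=n_flip, replace=False)
    noisy.flat[idx] = rng.integers(0, 256, size=n_flip)
    return noisy

image = np.full((32, 32), 255, dtype=np.uint8)  # a blank white "scan"
noisy = add_pixel_noise(image, flip_fraction=0.1)

print(int(np.sum(noisy != image)))  # at most 102 changed pixels
```

Sweeping `flip_fraction` while recording the spatial pooler’s output overlap would quantify its noise tolerance.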

This project would also provide a means to test the capacity of the spatial pooler. The tremendous number of fonts available today provides a nearly endless supply of characters for the training of an optical character recognition HTM.



Simple AI for Games

  • Student: Fernando Martinez (United States)
  • Mentor: Matt Keith

I am very interested in the simple AI for games using NuPIC proposed by Kevin Martin Jose. If I remember correctly, this is based on a paper by a group called DeepMind, in which they use their deep Q-learning algorithm to play six different Atari games, with some success on some of them. I am currently working on implementing their algorithm for the game Asteroids in C++. I also have other training approaches, such as using a finite state machine to create neural network training data, which might serve as a form of weight initialization. I have high-level insight into playing games competitively from when I used to play fighting games in my spare time, and I know many methods used by humans that I think will help improve the deep Q-learning reinforcement learning algorithm.



Epilepsy Seizure Prediction

  • Student: Anubhav Chaturvedi (India)
  • Mentor: Matt Taylor

Please find the full proposal here.

The sudden and seemingly unpredictable occurrence of seizures is the most compromising aspect of a disease like epilepsy. It affects the lives of numerous patients worldwide simply through the fear of a seizure striking at the wrong place and time. Recent studies have indicated that there is a change in brain activity minutes or even hours before the actual seizure takes place, and such activity can be measured using continuous EEG recording. Prediction algorithms so far have been unable to produce acceptable results; because of the huge variety in EEG patterns across patients, most of these algorithms generate acceptable results only when tweaked for individual patients. There has also been a significant lack of statistical verification of existing techniques.

This project proposes the use of NuPIC to read multichannel EEG input and predict the ictal event. NuPIC is primarily focused on online learning, and this problem seems to be a good fit for the platform. Advances in this project will not only provide a great boost to the community but also help numerous epilepsy patients lead much better lives.
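To illustrate the streaming shape of the problem (and emphatically not the proposed NuPIC pipeline), here is a deliberately simple rolling-baseline detector on synthetic single-channel data; all names, thresholds, and the synthetic pre-ictal shift are assumptions for illustration:

```python
import numpy as np

def streaming_anomaly_scores(signal, window=50, threshold=3.0):
    """Score each sample by its deviation from a rolling baseline,
    a simple stand-in for a learned online model such as NuPIC."""
    scores, flags = [], []
    for t in range(len(signal)):
        history = signal[max(0, t - window):t]
        if len(history) < 10:          # not enough baseline yet
            scores.append(0.0)
            flags.append(False)
            continue
        mu, sigma = history.mean(), history.std() + 1e-9
        z = abs(signal[t] - mu) / sigma
        scores.append(z)
        flags.append(z > threshold)
    return np.array(scores), np.array(flags)

rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, size=300)   # synthetic baseline activity
eeg[250:] += 8.0                       # synthetic pre-ictal shift

scores, flags = streaming_anomaly_scores(eeg)
print(flags[:250].mean(), flags[250:].mean())
```

A real system would replace the rolling statistics with NuPIC’s learned predictions over all EEG channels; the point here is only the sample-by-sample, online scoring loop.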

