Introduce yourself!

Welcome @david.pfx! With enough Aussies, eventually we can hold an HTM hackers hangout in AEST :grin:

1 Like

We’re a bit thin on the ground so far, and you’re 1,000 km from here, so get-togethers are going to be a bit virtual, I think.

1 Like

Hi all,

I am Trung from Vietnam. I was looking for something new in anomaly detection and finally I found HTM. I want to study this subject further, and I hope I can discuss with the current members, both giving and getting help.

Thank you very much.

6 Likes

Hello! This is Kiran Narasimha. I have been curious about artificial intelligence for a few years now. Though I broadly agree with Jeff’s view of intelligence being about prediction, I have begun to question this, and now believe intelligence could be partly creation itself: something about the brain-heart combination making things happen rather than only predicting. I am also interested in the philosophical angle, and in identifying causality as the core aspect of intelligence. So my interests go beyond HTM and the Thousand Brains theory to what caused all this in the first place. To put it simply, I’m looking for a much simpler theory of intelligence, and I’m tinkering along those lines.

4 Likes

Hi! My name’s Mark, from the Philippines.

I am a machine learning newbie, an embedded software engineer, an electronics hobbyist, and a father of 3.
It all started when I was looking for references on teaching machines the concept of things, after realizing that deep learning is narrow and that most of the time I have only a limited dataset at hand when training.

Unlike other companies, Numenta open-sourced its research findings, and I am very thankful to have found materials I can start to work from. Plus, the idea of a framework is really exciting.

I am now reading Numenta’s papers and watching videos like HTM School to learn this exciting theory, and I hope that, like the leaders and members of this company, I can contribute to innovating on this idea and help give humanity a better world.

Best regards,
Mark

8 Likes

Hello everyone, I am Teng Jiek See and I am from Australia. I am currently studying chemistry and pharmacology in my undergraduate degree, but I am also passionate about other subjects such as mathematics, computer science, and physics. Right now, I am developing ways to tackle the one/few-shot learning problem at Monash University, because I think it is one of the most defining features of any intelligent entity such as us.

I heard about HTM while watching some AI YouTube channels, and I decided to explore it further since the vision of HTM matches mine. Overall, I can’t wait to learn more about this theory and see if we can work together to create a true machine intelligence.

8 Likes

Merhaba (hello), everyone! I’m Toprak.

I live in Turkey, where I’m a fourth-year medical student. I’ve been interested in computers and programming since I was 13, and I’ve always kept that passion. I code in Python and am interested in data science (particularly working with EEG/EMG data). I plan to develop myself in neuroscience, so HTM as a concept and product is very interesting to me. Currently I study brain-computer interfaces and brain oscillations on the side, but cracking the mystery of the human brain would be the utmost achievement for me.

When I first researched and read about artificial intelligence, unsurprisingly, I found out about deep learning and all the frameworks that come with it. I studied for some time, trained myself a bit, and developed some deep learning models in Python; I used DL to classify EEG data. It was just one application, and I’m still a novice at all this, but it was huge to me. To be fair, though, deep learning seemed to miss the whole point of “strong intelligence”, or “intelligence as a whole”. Even before finding HTM, I thought about that a lot. Now I have found HTM and am very excited about it.

I found HTM through some recommended YouTube video that I no longer remember, where it was described as an “alternative to deep learning”. Then I dug in and found out it was exactly what I was looking for. I’ve just ordered Jeff Hawkins’ book “On Intelligence”. I hope I’ll have time to study deeply enough to understand the amazing work you’ve done. Cheers!

8 Likes

Hello,
My brief background: trained as a physician, currently working as an IT professional in the US for a healthcare company.

  • Stumbled upon HTM coincidentally while trying to learn how to use PyTorch.
  • Went through the HTM School videos to get familiar with it, followed by working through a sample HTM Jupyter notebook.
  • Recently started reading through “encoders.py” to understand how the encoders are structured and designed.
7 Likes

Welcome!
If you have not already seen this, you may want to check out:

1 Like

Hello All,
I’m a software engineer with a focus on messaging, which has been my bread and butter. I’ve always been fascinated by the way the human brain works, and I followed the AlphaGo competitions with equal fascination. I got interested in AI/ML/DL from there (I suspect Nick Bostrom’s seminal book Superintelligence might have something to do with it as well), but somewhere deep down I was never really convinced that these are “true” intelligence…for they can always be put off track, if one really works at it. Until I stumbled onto HTM…and was instantly hooked. I still have light-years to go in my learning, but I’m already a huge fan of HTM and am trying to get my feet wet in this domain.

5 Likes

Hello to all HTM-enthusiasts,

I am Taher from Osnabrück, Germany, currently doing my Master’s in Cognitive Science with majors in AI and Computational Neuroscience. I learnt about HTM from a very unexpected source: Dileep George’s PhD thesis, “How the Brain Might Work: A Hierarchical and Temporal Model for Learning and Recognition”, which I found while looking for related topics on the net.

My current interest is in studying plausible sequence-learning mechanisms in the brain, along the lines of the five distinct taxonomies of sequence memory laid out in Dehaene et al. (2015): https://www.sciencedirect.com/science/article/pii/S089662731500776X.

Given this, I have now begun learning about the simple Reber Grammar (SRG) and its Embedded (ERG) and Continual Embedded (CERG) versions, and how they were somewhat successfully learnt by LSTM and ELSTM architectures, respectively. From my initial research, it appears that HTM hasn’t been applied to learning ERG or CERG (please let me know if I am wrong here), only SRG: a short sentence in the limitations section of Numenta’s 2016 paper on ‘Continuous Online Sequence Learning’ mentions that the HTM network could only achieve 98.4% accuracy on RG tasks. I was still rather impressed by this, because the only work I have found on learning SRG with a “bio-inspired” network (https://www.researchgate.net/publication/264383634_Self-Organized_Artificial_Grammar_Learning_in_Spiking_Neural_Networks) hasn’t been able to achieve even that much.
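(As an aside, for anyone who wants to reproduce these tasks: the simple Reber Grammar is tiny. Below is a minimal, purely illustrative Python generator and checker, assuming the conventional seven-state transition graph used in the LSTM literature; none of this is Numenta code, and the function names are my own.)

```python
import random

# Transition table for the simple Reber Grammar: state -> [(symbol, next_state)].
# Strings always start with B and end with E; None marks the accepting state.
REBER = {
    0: [("B", 1)],
    1: [("T", 2), ("P", 3)],
    2: [("S", 2), ("X", 4)],
    3: [("T", 3), ("V", 5)],
    4: [("X", 3), ("S", 6)],
    5: [("P", 4), ("V", 6)],
    6: [("E", None)],
}

def generate(rng=random):
    """Generate one valid Reber Grammar string, e.g. 'BTXSE' or 'BPVVE'."""
    state, out = 0, []
    while state is not None:
        symbol, state = rng.choice(REBER[state])
        out.append(symbol)
    return "".join(out)

def is_valid(s):
    """Check whether the grammar can produce `s`. A greedy walk suffices
    because each state's outgoing symbols are distinct."""
    state = 0
    for ch in s:
        transitions = dict(REBER.get(state, []))
        if ch not in transitions:
            return False
        state = transitions[ch]
    return state is None  # must have consumed the final 'E'

print(generate())         # a random valid string, e.g. 'BPVVE'
print(is_valid("BTXSE"))  # True
print(is_valid("BTXXE"))  # False
```

The embedded (ERG) version simply wraps two copies of this graph inside an outer T/P choice whose final symbol must match the initial one, which is what makes it a long-range dependency test.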

Anyway, in conclusion, I would like to thank all the members of Numenta for making their research accessible to student researchers like myself. Also, I would appreciate it if anyone could point me to current trends in Numenta’s research on the problem of sequence learning: transitional memories, chunking, recursive and algebraic patterns. Thank you! :slight_smile:

2 Likes


I am 41 years old, born, raised and living in the Netherlands.
I started ‘programming’ music when I was 15 years old.
I started programming software professionally when I was 19 years old.
When I was about 25 years old I got really interested in artificial life/intelligence.
About 2 years ago Youtube offered a video of Jeff Hawkins giving a lecture about 1000 brains theory.
I then started learning with the HTM school videos, writing my own Java HTM implementation in my open source GitHub repository (DyzLecticus/Zeesoft).
At first I had difficulty implementing active dendrites in my version of the temporal memory but by studying the open source python code I was able to understand and recreate it in my own implementation.
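(A toy sketch of the active-dendrite idea, for anyone attempting the same: a distal segment activates when enough of its connected synapses line up with currently active cells. The names and thresholds below are made up for illustration; this is neither Numenta’s API nor the Zeesoft code.)

```python
# Illustrative active-dendrite check from HTM temporal memory:
# a segment "fires" when its overlap with the active cells, counted
# only over connected synapses, reaches a threshold.

CONNECTED_PERM = 0.5      # a synapse counts as connected above this permanence
ACTIVATION_THRESHOLD = 3  # minimum active connected synapses to activate the segment

def segment_active(synapses, active_cells):
    """synapses: list of (presynaptic_cell_id, permanence) pairs."""
    overlap = sum(1 for cell, perm in synapses
                  if perm >= CONNECTED_PERM and cell in active_cells)
    return overlap >= ACTIVATION_THRESHOLD

segment = [(0, 0.6), (1, 0.7), (2, 0.55), (3, 0.2), (4, 0.9)]
print(segment_active(segment, {0, 1, 2}))  # True: three connected synapses match
print(segment_active(segment, {0, 3, 4}))  # False: cell 3's synapse is unconnected
```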
I really enjoy all the content that Numenta and its community produce: the open source code, the videos, and the interesting, helpful discussions on the community forum (the post about the ‘repeating input problem’ was really helpful in understanding the unexpected test results I got).

6 Likes

Hi, my name is Nick Warren. I cannot remember how I discovered HTM; it was years ago, probably from a TED talk or something on YouTube. At that time I read a few of the papers on the Numenta website and watched many of the HTM School videos, but I didn’t do anything with it (work got in the way - I am an Oracle/SQL Server DBA and “anything you bungled in SQL” fixer, and have sometimes dabbled in C and Java too). Anyway, I am nearing retirement age and just might be free to play about with some machine learning ideas in the medium to long term.

I am a bridge player in my spare time, and although some good(ish) bridge-playing programs exist, none are what I would call expert. Bridge, as a game, only partially surrenders to the relatively brute-force computational approach that chess and Go have seen. There is an element of psychology, logical inference, and obfuscation that is simply not part of the chess/Go type of game. And current AI is, as far as I can see, utterly hopeless at this sort of challenge.

I am also vaguely interested in chatbots and, as far as I can see, the same problem exists with them too: they don’t understand what they say or do - they just learn by number crunching, with no ability to “see things” from the point of view of the other end of the communication channel.

Anyway, I am curious to see what HTM, and thinking outside the box, can do for this sort of challenge!

4 Likes

Hi Nick and welcome.

You might want to look more closely at how leading Go software uses AI algorithms. You are correct that old chess and Go approaches used brute force. That was successful to some extent with chess, but a disaster with Go: even an amateur could beat the best Go engines built that way. For reasons similar to the ones you mention about bridge, Go is very much an aesthetic game; its search space is much larger than chess’s, so simple search strategies fail. Professional Go players no longer stand a chance against the best Go AIs, and they remark on the beauty and creativity of the moves those AIs discover. It was a real paradigm shift in AI when the world’s best Go player lost to a machine - at the time, people predicted it was decades away or might never happen.

1 Like

The breakthrough with Go was Monte Carlo tree search combined with machine learning (an ANN to guide it). Dumb programs, but they play an unbeatable game. Look up AlphaGo. That approach doesn’t work for games of hidden information such as bridge and poker.
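For the curious, plain Monte Carlo tree search (UCT with random rollouts, i.e. without AlphaGo’s neural network guidance) fits in a short sketch. The toy below applies it to single-pile Nim rather than Go; the names and constants are my own, purely illustrative.

```python
import math, random

# Toy game: single-pile Nim. Players alternately take 1-3 stones;
# whoever takes the last stone wins. A position is just the pile size,
# seen from the player about to move.

def moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def rollout(pile, rng):
    """Random playout; True if the player to move at `pile` wins."""
    turn = 0
    while pile > 0:
        pile -= rng.choice(moves(pile))
        turn ^= 1
    return turn == 1  # the previous player took the last stone

class Node:
    def __init__(self, pile, parent=None):
        self.pile, self.parent = pile, parent
        self.children = {}           # move -> child Node
        self.wins = self.visits = 0  # wins for the player who moved INTO this node

def uct_search(root_pile, iters=5000, c=1.4, rng=random):
    root = Node(root_pile)
    for _ in range(iters):
        node = root
        # 1. Selection: descend while non-terminal and fully expanded.
        while node.pile > 0 and len(node.children) == len(moves(node.pile)):
            parent = node
            node = max(parent.children.values(),
                       key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(parent.visits) / ch.visits))
        # 2. Expansion: add one untried child, if any.
        if node.pile > 0:
            m = rng.choice([m for m in moves(node.pile) if m not in node.children])
            child = Node(node.pile - m, node)
            node.children[m] = child
            node = child
        # 3. Simulation from the new node.
        to_move_wins = rollout(node.pile, rng)
        # 4. Backpropagation, flipping the winning perspective at each level.
        win = not to_move_wins
        while node is not None:
            node.visits += 1
            node.wins += win
            win = not win
            node = node.parent
    # Recommend the most-visited root move.
    return max(root.children, key=lambda m: root.children[m].visits)

print(uct_search(5, rng=random.Random(0)))  # 1: leaves a pile of 4, a losing position
```

Hidden-information games break step 3: you cannot simulate from a single determinate state, which is why bridge and poker need different machinery.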

Better to look at poker programs such as DeepStack, Libratus and Pluribus. You can’t afford the hardware, but at least there are interesting ideas to pursue.

HTM is not really in that race. It’s about sequence memory and spotting anomalies, which may or may not be of any use.

1 Like

Thanks to both David & Mark for commenting. I am aware (at least vaguely) of the approaches to Go and chess and how they differed (past tense - the chess engines have moved on, or so I believe). And I am aware of quite a bit of the work on poker too.

I dunno if HTM is of any use or not. It seems to me that sequence memory and anomaly detection are (in principle) just pattern recognition - albeit spread over the time dimension rather than a spatial or other aspect - together with exceptions to said patterns. So that doesn’t rule it in or out!

N

1 Like

Hi everyone,

My name’s Ted Southard, and I’m a game developer, which is kind of what brought me here. I’d been working on a pet project: Non-Player Character conversation systems using procedural content generation at a pretty granular level. It was going fairly well until I started to drill down into being able to compare things. While I could semi-hard-wire some analogies - yellow being a color, the sun being yellow, a flower being yellow - and get an NPC to say a flower was the color of the sun, a more “natural” way to compare things stayed just out of reach: a data format that would allow that to happen, which I just didn’t have.

Long story short, after a few years of searching, I happened upon Numenta (though I’d read On Intelligence years before) and SDRs. Now I’m trying to shoehorn/mutilate the principles into a format that is not quite the full-on binary, fully distributed representation, but is a bit more friendly to content and visualization, with an eye to finally finishing the NPC conversation project as well as building out my own personal chatbot/grimoire software.

I also want to say that learning about grid cells reminded me of an old vector-based handwriting recognition system IBM made decades ago. It makes me think that a lot of what we’re doing is brute-forcing things because we don’t have the ground-level structures (grid/place/etc. cells) in place to work with; once we do, methods will be much less complex. That said, full disclaimer: I suck at math and lean heavily towards intuitive systems that can themselves brute-force things, so YMMV.

5 Likes

Hello all,

I’ve just finished reading “On Intelligence” and found my way to the forum. I’m an electronic engineer by training though that was a long (long, long!) time ago. I’ve been involved in software development all my working life (now measured in decades…gulp).

I have a long-standing interest in language - particularly the intersection of human and machine language. My initial forays into the former started when “AI” was synonymous with predicate calculus and Prolog. More recently I’ve been exploring neural-net-based language models, for example using rasa.

On the machine language side, I spent a good chunk of time in domain modelling and domain-specific languages, along with a perennial interest in programming language design.

My hunch - and it was no more well-formed than that - was that emulating human level language performance needed to fuse elements of both. Specifically: conventional “chatbot” architectures are a dead end, because they separate processing and knowledge. As an example: dialogue context plays no part in intent recognition, and prior knowledge plays no part in dialogue evolution. Through domain modelling, I learned how critically important knowledge structure is - that semantics arise more from the relations among entities than the entities themselves.

I came across Jeff’s book whilst trying to investigate that hunch. I found the book fascinating and stimulating. Only finished it today, came to the Numenta site to find out more, and found the forum. So here I am :). It was fascinating to learn about cortical structure; the inter-connectedness of neurons across and within layers resonated strongly with my “relations over entities” learning above. The memory-prediction model gave depth and rigour to my sketchy ideas on chatbot limitations.

I suspect I’ll be far more beneficiary than contributor. Most of the above is side interest rather than what I get paid to do. And I’m very much engineer rather than scientist. So, first of all, thanks in advance for all the contributions I’m going to benefit from!

If I can contribute, though, I will. I first discovered Python in 1992 and it’s been my go-to language for a broad range of tasks since. With any luck that’ll allow me to reciprocate in some way.

Anyway, that’s quite long! Looking forward to learning a lot.

6 Likes

Welcome! I look forward to your contributions.

My background and path here are similar to yours. My attention was caught by the concepts of:

  • An SDR, as a possible universal data representation
  • A cortical column, as a possible massively replicated computational unit
  • On the fly sequence learning
  • Place cells, as a possible mechanism for scale-independent location.

But as I look deeper I realise just how little we know of how the brain really works. The algorithms, the processes, not just the neuroanatomy and physiology.

I leave you with two thoughts.

  1. If the cortical column is the hardware, where is the software? Something must be playing that role.
  2. The first Generalised Artificial Intelligence created by us will use brain mechanisms discovered by us; it will differ from us mainly in speed and memory capacity; and it (not we) will create its successor.
1 Like