Introduce yourself!

Hi everyone!

Nice to meet you! I realized that I never actually introduced myself on the forum - I’m Charmaine and I joined Numenta’s marketing team back in August. I’ve been helping @cmaver with the research meeting videos and the promotion of Jeff’s upcoming book, so you’ll probably see my name pop up on here from time to time :slight_smile:

I came across Numenta in my senior year of college. As a huge tech enthusiast, I’ve learned a lot about the potential of AI systems and how they can make life so much easier (e.g. Siri, Google Maps). But, as an environmental and economics major, I also learned a lot about the environmental impacts AI can have. I love that Numenta takes a biologically inspired approach to AI; to me, it seems like an extremely promising path toward sustainable AI and AGI.

I look forward to learning all that I can from everyone here and feel free to reach out if you have any questions or feedback on any Numenta events or materials!

6 Likes

@clai Welcome to the team and forum!

2 Likes

Hey everyone!

My name is Akash, and I joined Numenta as a research intern a few weeks ago. I recently graduated from UC Berkeley studying Electrical Engineering and Computer Science, and spent some time doing robotics and reinforcement learning research.

I got introduced to Numenta through random internet browsing, and through Jeff’s conversation with Lex Fridman on his podcast, and am super excited to be looking at machine intelligence and AI through more of a neuroscience lens as an intern here!

Outside of work/academics, I love watching and playing pretty much all sports and support most LA sports teams. I also enjoy playing the piano and chess.

Looking forward to learning a lot from everyone during my time here!

8 Likes

It’s a pretty remarkable time to be joining Numenta right now - there’s a real sense of traction in the foundational knowledge. If I may be so bold as to suggest, here are three of my personal favorite articles that mark Numenta’s unique place in AI research.

  1. Check out Jeff’s original set of ideas and how he frames AI research. This was written in 1986, when Jeff was a graduate student: https://numenta.com/assets/pdf/whitepapers/Hawkins1986.pdf
  2. This is one of my favorites - I believe this paper is what gave Jeff the courage to start Numenta. It’s a paper by Dileep George and Jeff. It’s interesting to see how his line of research has changed since then. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.163.7566&rep=rep1&type=pdf
  3. The final paper is by Pentti Kanerva, who was with Jeff at Redwood. It is easily overlooked in Numenta’s history, but it set out the foundational ideas that underlie SDRs. http://rctn.org/vs265/kanerva09-hyperdimensional.pdf
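The core idea Kanerva develops is easy to see for yourself: in a space of thousands of bits, two randomly chosen sparse binary vectors are almost certainly nearly orthogonal, which is why SDRs can represent many distinct concepts with a negligible chance of collision. A minimal Python sketch (toy code, not from any of these papers; the dimensions n=2048 and sparsity w=40 are just illustrative values):

```python
import random

def random_sdr(n=2048, w=40, rng=None):
    """Return a random SDR as the set of its w active bit positions (out of n)."""
    rng = rng or random.Random()
    return set(rng.sample(range(n), w))

rng = random.Random(42)
a = random_sdr(rng=rng)
b = random_sdr(rng=rng)

# Expected overlap of two unrelated SDRs is w*w/n = 1600/2048, i.e. under
# one bit, while identical or related codes share most of their 40 bits -
# so random codes are almost orthogonal.
print(len(a), len(b), len(a & b))
```

Running this a few times shows the overlap hovering near zero, which is the "nearly orthogonal by default" property that SDR theory builds on.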

4 Likes

Hi all. I’m an ML engineer for a medical imaging startup. I think deep learning is fundamentally flawed and I’m interested in importing good ideas from outside the field.

3 Likes

Hello everyone,
I am a PhD student, working on applying deep learning to dynamical system modeling.
I first heard about the Thousand Brains Theory on the Lex Fridman podcast. Wanting to know more, I watched a few videos on YouTube (HTM School, a couple of presentations) and I am currently reading the book “A Thousand Brains”.
I am loving the concepts described, but I am still wrapping my head around how a lot of it actually works, mostly where frames of reference come in with the “grid cells” and “displacement cells” (how do they work, how do they integrate with TM and SP…). Trying to figure this out is what brought me here.
Ideally I’d like to explore the “modeling capacity” of HTM given observations of a dynamical system, so if you know of any work that relates I’d be more than grateful :slight_smile:
Looking forward to learning more!

6 Likes

Howdy everyone,

I’m Ben, and I’m thrilled to join the HTM community. My academic background is in neuroscience and machine learning. I’ve been interested in the canonical cortical circuit / cortical column for some years, which is how I found out about Numenta. As a researcher here, I’m currently focused on applying principles of 1000 brains theory to machine learning. On the side, I’m also interested in what cortical circuit models can tell us about mental health. I love the enthusiasm and variety of ideas bursting out of the HTM community, and I’m looking forward to learning from and talking with you all.

5 Likes

Hi everyone,
I’m an R&D software engineer with a bachelor’s in physics, living in Boulder, Colorado. For my day job I develop software testing tools, mostly doing full-stack web development using NodeJS, Angular and AWS, with some C++ and Python thrown in.

Neuroscience and the philosophy of mind have been lifelong interests. I recently stumbled on Jeff Hawkins and Numenta while reading an MIT Technology Review article. Other recent finds: Randall O’Reilly’s Emergent platform and his computational cognitive neuroscience course materials. For contrast, I’ve also taken Andrew Ng’s Machine Learning course on Coursera.

My current focus is the amygdala - locus coeruleus - norepinephrine pathway and its impacts on attention and (ultimately) planning. Yes, this is pretty different from HTM, but so far it looks like these areas are complementary.

I’m blown away by this forum’s “all you can eat” buffet of ideas and the serious open source development occurring on the platform. I’m looking forward to great discussions!

6 Likes

Hi, I read On Intelligence, and added some of the logic to an evolutionary computation framework I wrote years ago (Test framework)

Since that time I have been looking at complex adaptive systems (CAS), developing a framework for helping filter ideas that are supported by CAS logic from other, chaotic ideas. The logic is similar to Jeff’s view of Mountcastle and his framework for understanding brains. My focus was on the underlying plan (genes, memes, source code) that allows a CAS to evolve.

Thousand Brains fits in really well, since cortical columns can be viewed as agents (Cortical columns)

Not so convinced about Jeff’s logic regarding discarding emotions etc. from future life. Sorry.

I am interested in getting a better understanding of how links work in the cortical column model since if Numenta is right, this will be the way a schematic structure is represented in the brain. I have not found these details yet.

I trained to be a biochemist, which is helpful in looking at CAS, and worked as a programmer for years, then ran some programming projects.

These days I mostly program in Perl, which is powerful, JavaScript to activate mouse-overs on my web pages, and C where necessary, and write summaries of books that provide insights about CAS, such as A Thousand Brains - see the cortical column link.

Rob

6 Likes

Very pleased to find this forum. My work and focus is in Machine Consciousness (sentience). HTM has promise there.

7 Likes

Hi @EEProf in your work does sentience require qualia?

1 Like

That is something of a trick question, are you a philosopher? :wink:

Yes, it does…but…I define machine sentience as a machine being able to display the Jaynesian features of consciousness. From an architectural standpoint, it follows Dennett’s multiple drafts theory (loosely) and his Center of Narrative Gravity. The latter incorporates J’s Analog I and Metaphor Me and is what I call the Sentience Engine. Dennett, at least in my opinion, put the whole qualia issue to bed and that’s fine with me.

You have two approaches right now to machine consciousness. One tracks the Neural Correlates of Consciousness (NCC) and anybody deep into HTM is in that camp; i.e., simulate the brain accurately and Voilà! Consciousness.

The other looks at it from a behavioral standpoint and says that if I build a machine that displays consciousness then it must be conscious. Have a long conversation with Alexa, or Siri, or…whatever. Not conscious, far from it. Yet, we start thinking that they might be. Also, chat with the latest winner of the Loebner Prize: again, not conscious, but so close. What magic spice is missing, what subroutine was left out? I’ll give a hint: somatosensory awareness. What this implies is either the need for a robot or a very robust simulation.

OK, back to work.

IMO quoting authors doesn’t cut it. AI is software; creating it is engineering based on hard science.

We cannot build what we cannot define. Simulate the brain as accurately as we know how, it will not be enough. Building a machine that (Turing-like) fools a lot of people into thinking it’s conscious does not make it conscious. Or am I a bot too?

A long conversation with Alexa or Siri is enough to show how far we have come and how far we have yet to go. Where are the pronouns? The passage of time? Localisation (HTM’s coffee cup)? The words are there, but the inner model is missing in action.

The worrying part is not machine consciousness, but the opposite: machines with spectacular abilities to analyse and manipulate people controlled by human consciousness. Yuval Harari is worth a read.

2 Likes

Emergent behavior?

1 Like

This is interesting. Could someone turn this into a thread of its own, please?

(@clai or @Paul_Lamb or @Bitking)

It is fair to say that we can build things that develop behaviour we did not expect, intend or specify. Like genetic mutations, in almost every case that behaviour is harmful or even lethal to what was expected, intended or specified. Does that sound like a good path to follow?

I agree with @Falco that this could be a good dedicated subject. Personally, it seems obvious to me that physicalism keeps expanding as science changes, and contemporary science does not have a definitive theory/explanation of everything. To claim what qualia are in the terms of our current science, or to deny that qualia exist in the terms of our current science, is to commit the same mistake: to assume that current science is sufficient to explain things when it is clearly not.

I’m not at all sure that NCC maps to qualia. There are computationalists who will claim this is the case. I find this sort of ridiculous, because they have no idea what the algorithm might be, yet they are sure the algorithm will cause qualia. This is obviously just a belief rather than science.

Here (the HTM community) I think you will find many people who don’t think intelligent machines will be conscious (as in experiencing qualia). Personally, I suspect that qualia are more closely connected with life than with intelligence. It seems likely we will solve autonomous intelligent machines well before we solve how to fabricate living systems. So I suspect it will go in that order: the machines will help us understand what life is and how to fabricate it, and that will lead to a scientific understanding of qualia.

Here you go:

I’m curious about qualia. If you assume that there’s a collection of neurons somewhere (in V1, I suppose) which fire when you see something red, then they would have a one-to-one correspondence to the sensation of redness - see a red car, imagine a red unicorn, listen to someone spell out the letters “R-E-D”, and those same neurons should fire every time. So “experiencing the sensation of redness in any way” would then be a synonym for “this specific group of neurons is active”. If that’s the case, then it seems to me that there’s no real depth to qualia: “redness” in all its forms is just the label we adopt to denote that those neurons are active. It seems such a simple argument that I suspect I’ve missed something in the definition of qualia: can someone enlighten me?

2 Likes

Hello.

While I’m not a scientist (yet), I do have a fascination with artificial and biological intelligence. That led me to read about the neocortex (and other parts of the brain) and the various theories of how it works, which in turn led me to HTM and this forum, which has given me plenty of reading material and ideas to consider. Like many on here, I don’t think deep learning, at least as it is now, is the key to AGI, and I view HTM/TBT as a useful piece of the puzzle that is general intelligence.

4 Likes