Is HTM an Artificial General Intelligence (AGI) framework?
The purpose of HTM is not to be an AGI framework. However, it might become one someday.
What I mean is that AGI is not the goal of Numenta or HTM research. It might be an effect of this research at some point in the future, but we are not focused on implementing AGI. We are focused on understanding and implementing intelligence by studying the neocortex.
My opinion is that when AGI is created, it will be strongly underpinned by HTM technology.
Thank you for answering my question. I want to use HTM to create a story-teller system. I want to input many stories and then distort, disassemble, and combine them to create new stories. I am learning HTM now so that I can figure out a way to accomplish this, if it is feasible. Any insights on this are appreciated.
Definitely research Cortical.IO.
Thanks again for communicating. My idea was to make SDRs out of the submodalities of the major modalities of sight, sound, feeling, smell, and taste. For sight I would use color, shape, and size: one SDR for red, one for blue, and so forth. Shape would be based on a three-dimensional image, with a set of points describing the shape; the average distance between the points would give size, I am guessing. Similarly, I would encode temperature, pressure, texture, and location for feeling, and pitch, loudness, and so forth for sound.

Distortions in these modalities would be verbs. For example, running would be distortions in shape, color distortions in face shapes would give emotion, and so forth. These are low places on the hierarchy; move up the hierarchy and you get the ability to describe these transforms in sentences: The man ran. The boy blushed.

That is my idea. Do you think it is feasible? Would it fit HTMs, or is there something I am missing? I don't even know how to start yet.
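Concretely, I picture something like this toy sketch. It is not NuPIC code; the submodality names, encoders, and bit widths are all made up just to show what I mean by concatenating per-submodality SDRs into one representation:

```python
import numpy as np

def category_sdr(value, categories, bits_per_category=8):
    """Toy category encoder: one fixed block of active bits per category."""
    sdr = np.zeros(len(categories) * bits_per_category, dtype=bool)
    idx = categories.index(value)
    sdr[idx * bits_per_category:(idx + 1) * bits_per_category] = True
    return sdr

def scalar_sdr(value, minval, maxval, n=64, w=8):
    """Toy scalar encoder: a sliding window of w active bits across n bits."""
    frac = (min(max(value, minval), maxval) - minval) / float(maxval - minval)
    start = int(round(frac * (n - w)))
    sdr = np.zeros(n, dtype=bool)
    sdr[start:start + w] = True
    return sdr

COLORS = ["red", "blue", "green", "brown"]

def encode_visual(color, size_cm):
    """Concatenate submodality SDRs (color, size) into one visual SDR."""
    return np.concatenate([
        category_sdr(color, COLORS),
        scalar_sdr(size_cm, minval=0.0, maxval=200.0),
    ])

apple = encode_visual("red", 8.0)
dog = encode_visual("brown", 60.0)
print(np.logical_and(apple, dog).sum())  # overlap between the two encodings
```

Each other modality (feeling, sound, and so on) would get its own block built the same way, and the "distortions" I mentioned would be changes in these encodings over time.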
But that’s not how it works in HTM theory. From the brain’s perspective, reality is a 3D space with sensory features linked to every point in space. Shape is not a feature of an object; shape is the object. Its spatial attributes exist in space where the entity has sensed them in the past, as far as the entity is concerned. I really want to use the term “quantized space” when talking about this mental representation, but I’m not sure that’s the right term.
I agree, shape is the object, and so are color and texture and all the sub-modalities. The object viewed is at a specific location in all of space-time. I guess I don’t understand enough to see where the model I presented fails. Shape, color, and the other submodalities are the object. It may be that I am not explaining my idea correctly. Is this part of HTM theory described in the white paper? Thanks again for talking to me about these ideas…
Actually, we are not talking about that type of location. We are talking about allocentric location, meaning the object’s definition in relation to itself and nothing else. For example, we could use an object’s center of gravity and xyz coordinates to define all physical objects in the universe independently of any other frame of reference. Each point would contain sensory features we have observed on the object, in the form of SDRs. But these features are distributed throughout the system: some sensory features will be in the visual cortex, others in somatic, auditory… yet they all light up with their own representations of “dog” when you think “dog”. You can also bring up a representation of a dog in your mind, in space. You know the general shapes and sizes of dogs in relation to your body scale. It doesn’t matter where in your egocentric space the dog actually is; the idea of the dog exists outside of that egocentric reality.
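In code you could picture the idea roughly like this. It is only a toy illustration of "features at allocentric locations", not how NuPIC represents objects; the coordinates, modality names, and SDR sizes are invented:

```python
import numpy as np

# Toy illustration: an object is a set of allocentric locations (offsets
# relative to the object itself, e.g. from its center of gravity), each
# paired with the sensory feature SDRs observed there in each modality.
coffee_cup = {
    # (x, y, z) in object-centered coordinates -> {modality: feature SDR}
    (0.00, 0.00, 0.05): {
        "vision":  np.random.rand(1024) < 0.02,   # e.g. "white curved surface"
        "somatic": np.random.rand(1024) < 0.02,   # e.g. "smooth, hard"
    },
    (0.06, 0.00, 0.05): {
        "vision":  np.random.rand(1024) < 0.02,   # e.g. "handle edge"
        "somatic": np.random.rand(1024) < 0.02,   # e.g. "thin, rigid"
    },
}

# The structure is unchanged no matter where the cup sits in egocentric
# space, because every key is defined relative to the cup itself.
for location, features in coffee_cup.items():
    active = {m: int(sdr.sum()) for m, sdr in features.items()}
    print(location, active)
```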
From your explanation, we are talking about the same thing. An apple can feel round and hard; that sensation is the feeling part of the brain lighting up. Red and shiny is the visual cortex lighting up, and so forth. We are in sync here; it’s just my inability to communicate properly that is causing the trouble. I still don’t see the problem with my model, so it must be my understanding of it. Is this in the white paper, On Intelligence, or another publication?
On a second note, where is the installation folder on Windows 10? I tried running py.test tests/unit and it gives:
platform win32 -- Python 2.7.12, pytest-3.0.7, py-1.4.34, pluggy-0.4.0
rootdir: C:\Users\William Taylor\Downloads, inifile:
plugins: xdist-1.16.0, cov-2.5.0
ERROR: file not found: tests/unit
I am trying to install NuPIC and don’t even know where it is. I also tried the main directory, C:\Users\William Taylor, without Downloads.
A post was split to a new topic: Installing NuPIC on Windows