How do you build a theory, if you'll excuse me? How do you design experiments? Can you show me ONE example in the whole history of humanity where a theory and an experiment didn’t require ANY introspection or ANY conversation? Please?
Remember that any “thinking out loud” or “inner voice” is both introspection and conversation at the same time. Even children, before language acquisition, design their experiments using introspection, since the model of how the world works lives inside your brain: your cognition uses that world model to predict (and action to update its configuration) so that it can predict an outcome for the senses. If those predictions match the senses every time, your model is right; if you are surprised, the model is wrong. Experiments always require asking “what is my model?” Then you package your model in the zip of language and share it with other people so that they can use it. Always, always, whatever you do, it is introspection and conversation. Always.
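The predict-compare-update loop described above can be sketched minimally. This is a hypothetical illustration, not a claim about brain mechanisms: the “world” is a hidden linear rule, and the agent’s whole world model is one weight it tunes until its predictions stop being surprising.

```python
import random

random.seed(0)

# Hypothetical sketch of the predict -> sense -> surprise -> update loop.
# The "world" is a hidden linear rule; the agent's world model is a single
# weight, adjusted whenever sensation and prediction disagree.

true_weight = 3.0      # the world's hidden dynamics (unknown to the agent)
model_weight = 0.0     # the agent's internal world model
learning_rate = 0.5

for step in range(100):
    action = random.uniform(-1.0, 1.0)      # act on the world
    prediction = model_weight * action      # "what do I expect to sense?"
    sensation = true_weight * action        # what the senses actually report
    surprise = sensation - prediction       # mismatch means the model is wrong
    model_weight += learning_rate * surprise * action  # reduce future surprise

print(round(model_weight, 3))  # converges toward the world's hidden weight
```

When surprise goes to zero, the agent’s model agrees with the world; that is the “if you are surprised, the model is wrong” rule in code.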
Before creating mechanisms, I think we need to know how the brain goes about doing things. “Design intelligence” is a pretty stupid prompt for a STEM topic.
Models can help explain how the brain does things, but even if they work, they’re not proof. Correlations are problematic, because brain functioning is intricate, but even more so because of that stupid prompt.
As a quick example, I used to interpret projections to the spinal cord as motor output. The cortex actually targets the sensory part of the spinal cord too, not just the motor part. That leads to things like distinguishing self-motion from motor commands. My notions about motor control were too vague. They probably still are.
Here’s a longer example of how correlations could cause problems. Attention is a vague common sense notion. Let’s say you think a class of neurons responds to attended stimuli. Somehow, you have an experiment which determines whether the animal attends a stimulus. The neurons correlate very well with attention.
That doesn’t necessarily mean they’re involved in attention, although it points in that direction. For example, they might participate in a sensory submodality with attention-worthy properties. Even if they participate in attention, that doesn’t mean they represent attentionally filtered stimuli. For example, they might be involved in surprise. If so, it could be for behavior, perceptual detection, brain state, etc. They could also only be tangentially related to attention, or involved in a mechanism which doesn’t perfectly match the notion of attention. For example, attending something could mean anchoring locations to it.
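The confound described above is easy to demonstrate with toy numbers (fabricated for illustration, not real recordings): a “neuron” that actually encodes stimulus salience will still correlate strongly with attention, simply because salient stimuli are usually the attended ones.

```python
import random

random.seed(1)

# Toy demonstration (fabricated data): the neuron's firing is driven entirely
# by stimulus salience, never by attention, yet it correlates with attention
# because the animal tends to attend salient stimuli.

trials = 1000
attended, firing = [], []
for _ in range(trials):
    salience = random.random()                  # what the neuron really encodes
    attends = salience > 0.5                    # salient stimuli get attended
    rate = salience * 10 + random.gauss(0, 1)   # firing driven by salience only
    attended.append(1.0 if attends else 0.0)
    firing.append(rate)

def corr(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

print(round(corr(firing, attended), 2))  # high, though attention never drove firing
```

The correlation comes out strong even though attention has zero causal influence on the firing rate, which is exactly why a good correlation alone cannot settle what a neuron class is for.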
Would you be willing to soften your viewpoint to include a scientific investigation into how introspection works and what it can and cannot offer into understanding mental processes?
Keep in mind that this could include invasive instrumentation that finds neural correlates to various mental states.
Understandable: any sensory neuron can use motor commands to improve its prediction. V1 takes motor input as well. That’s an excellent observation about perception-action loops. The exact ratio is probably in the connectomes and not something we can work out analytically.
Agree, long road ahead. This could be an entry under the “Pitfalls of Correlation” title, for example.
I’m not sure layer 5’s projections to the sensory spinal cord are motor commands. They probably exist, but they could serve other roles, like attention/detection or modelling self-motion.
Can you show me an example of anything built with introspection and conversation and no experimental data or technology?
It ain’t going to happen. The AI problem is one of science and engineering and those of us who get that will do the work and find the answers where others are still just navel gazing and chatting.
All thinking is “introspection”; it’s just a matter of depth and breadth. Problem is, the introspection of most people is quite shallow, and it’s not any deeper for most coders. No, @Bitking, I don’t think previous attempts were based on good definitions; I think they were garbage. A good definition should have scalability built in. Anyway, here is my proposal, top-to-bottom: http://www.cognitivealgorithm.info/. Just in case there’s some one-in-a-billion deep-introspection guy here.
If clever mechanisms do something neat and are consistent with the brain, I’m sure the brain breaks problems into pieces the same way as you do.
That’s bs. It’s not bad to think about what intelligence is. That’s bs on its own of course, but not once it meshes with neuroscience facts, or when it points out stealth bs.
Not so. Not even close.
“Introspection is the examination of one’s own conscious thoughts and feelings.”
The thinking process in science and engineering is the examination of the external physical world and the principles on which it operates. It is objective, and explicitly excludes one’s own thoughts and feelings.
All of your knowledge of the external world is internalized, by definition. “You” is the sum of your knowledge. Some of that knowledge is value-charged; that’s your feelings. Consciousness has nothing to do with it. Objective vs. subjective is shades of grey. Emotions: curiosity is the driver, the rest gets in the way.
Even when you’re trying to create a brain-like AI? Brains think. They also perceive. I can introspect about what I perceive. It’s not like animal experiments never consider the animal’s perceptions. Often, those are basically just conscious perception too (perceptual content reported by behavior).
I didn’t say no experiment, I said:
there MUST be experiments first; I can reevaluate them, I can suggest them. But somewhere along the line, while you try to come up with a unifying model, you have to specialize in the engineering area: read many papers, design models, argue tooth and nail with other engineers, agree on something, predict outcomes, try to prove the predictive power, so that the actual scientists go and do additional experiments to see if that predictive power holds in the real world. I meant that we mostly argue about anything and everything and do small simulations, while scientists do lots of experiments and discuss the result in one last paragraph, so why not let the two complement each other?
Yes, exclude the subjective experience to reach the objective truth… so that you can then recover the subjective experience from a place where it no longer exists? Illogical. You can’t build an experience machine (an animal) by averaging over the experiences of the animals that you want to reverse-engineer.
I am not saying the lies people tell themselves matter; they don’t. I am saying that by cross-examining those experiences, you can find common denominators and (non)linearly independent phenomena that can be turned into symbols, which can then be communicated, which can then find other examples in other experience machines, so you build up confidence in their existence, which then guides you to look for them in experiments. If you don’t have introspection and conversation, YOU CANNOT design experiments and YOU CANNOT evaluate them. Every time you do something, you use your beliefs about the unobservable parts of the universe.
Development of the wheel… sometimes some things just don’t require an explanation or deep thought. They create an activation sequence that joins up areas, which then cascades into the “Eureka” moment…
I’m not quite sure you have a concept as to how the cortex works (along with all of us here, me included), as any activation sequence through the cortex, any activation of prior learning of any type, is effectively what you’re labelling “introspection”. Without any of this sequence activation we would revert to animal instinct behaviour dominated by the “old brain” areas and likely just fight each other to be Alpha, rather than create anything new that is not needed for basic survival or procreation.
Agree with the depth. Instinct (“Eureka”) moments are less of a recursive process and are not what I would call introspection, because the activation sequence is different and not as long, more like a wider cascading event. Instincts can also differ because they can be more heavily influenced by activations outside of the cortex (e.g. body language, micro-expressions, dodging an object thrown at you).
With the current GPU approach, I think the majority of “use” is sort of like teaching someone about a hammer and giving them nails to do a job, then later giving them a handful of screws and wondering why the screws were hammered into the wall.
I have seen (and inadvertently hired) developers who write some horrendously inefficient code that they see as fine because it’s running on leading-edge hardware, gets results, and they don’t see a problem with it. Scale is what breaks “all” code at some point. AI is quite a few magnitudes beyond the scale problems that the majority of developers can deal with.
My view on this is a little different because I believe (conjecture open to being flamed) that we only need to develop a certain level of intelligence that is a subset of human capabilities which would then evolve the next generation. Trying to create a human replica as an AI is like asking Henry Ford to have made a car with legs. It can be done but is going to take way longer to achieve and be way less efficient.
Just my thoughts, to add to the heap of combustible material…
I agree, but for a different reason: once anyone uses biology to actually make money, it becomes a lot easier for outsiders to invest in this technology. And neuroscience does not need to accomplish very much; it just needs to compete with backpropagation in order to redirect the funding and the hype away from those old unrealistic AIs and towards the study of biological intelligence.
Let us assume for discussion that the HTM/TBT is correct as it stands to describe the function of the cortical fabric.
Is anyone here moving up to the hierarchy and sub-cortical connections?
Not mine.
But I’ve wondered about the idea of using genetic algorithms to search the parameter space for HTM configurations or hierarchy arrangement.
One of the challenges would be determining the virtual world, and what counts as ‘success’. In this fella’s implementation, he simply defined certain regions so that his creatures would evolve to migrate to those survival regions, thus passing on their genes.
I might try this myself, but at least I wanted to float the idea here.
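The parameter-search idea can be sketched as a minimal genetic algorithm. The parameter names and the fitness function below are illustrative stand-ins (a real run would score each configuration by how well the network performs in the chosen virtual world), but the select-and-mutate loop is the real structure of the approach:

```python
import random

random.seed(42)

# Minimal genetic-algorithm sketch over a hypothetical HTM-style parameter
# space. Parameter names and the fitness function are stand-ins, not a real
# HTM evaluation.

PARAM_RANGES = {
    "columns":        (256, 4096),
    "cells_per_col":  (4, 32),
    "perm_increment": (0.01, 0.2),
}

def random_genome():
    return {k: random.uniform(*r) for k, r in PARAM_RANGES.items()}

def fitness(genome):
    # Stand-in objective: pretend a mid-range configuration is optimal.
    score = 0.0
    for k, (lo, hi) in PARAM_RANGES.items():
        mid = (lo + hi) / 2
        score -= abs(genome[k] - mid) / (hi - lo)   # closer to mid = better
    return score

def mutate(genome, rate=0.2):
    child = dict(genome)
    for k, (lo, hi) in PARAM_RANGES.items():
        if random.random() < rate:
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, (hi - lo) * 0.1)))
    return child

population = [random_genome() for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                      # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
print(round(fitness(best), 3))  # approaches 0, the stand-in optimum
```

Note this only *adjusts parameters* of a fixed architecture, which is exactly the regime where GAs tend to do well; it does not invent new architectural features.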
At long last. At least 500 years of science and 10,000 years of engineering tell us that sitting on your bum and introspecting butters no parsnips; you have to do the experiments, create the theories, and build the models. Nothing else works.
You may have subjective experiences, but they are not experimental data and they are worthless to science. There is a path, but it does not go this way.
Sorry, but evolution is another idea that won’t work. Genetic algorithms do well when they mimic adaptation (adjusting parameters to choose from known features) and are hopelessly slow when they try to use mutation to generate new features.
The feasible path is that we study biological systems and use science and engineering to create an AI with similar capabilities (say a lab rat). This AI is not at all human, but it’s rat-smart and has access to the entire Internet, vast quantities of sensory data, and runs a million times faster with vast quantities of working memory… Set that AI the problem of how humans work and it just might find the answer we cannot.
As it happens, that’s the path we’re on, whether we know it or not.
I fully agree with that perspective, as there are no current approaches which create adaptations of “significant” architectural difference (i.e. adding on a whole new block of architecture, akin to adding a hippocampus). Given the way the brain’s architecture works and is integrated, I don’t think any GA approach can achieve success any more than the proverbial monkey at a typewriter can create a literary work of art before the universe cools close to absolute zero.
My perspective is maybe a little too out there because I believe that it will be the first “basic” AGI which will (help) create the next, not any simplified GA type approach. Humans will then be sitting at the proverbial typewriter, alongside the monkey, trying to create the next work of art. Again a bit too out there, just my perspective.
HTM/TBT is the best theory of how “part” of the architecture needs to work, but I’m not in the camp of fully replicating spiking (full biology) behaviour, because of the resulting degree of complexity later on in the process if you go down that route. HPC architectures will be able to implement the raw compute and interconnect, but I think it will be a less optimal route at significant scale. Reducing bit widths/sizes for the sake of data compression, to fit a model in memory and gain an extra bit of “simple” scale, I think destroys a critical part of the temporal nature of what needs to be implemented, primarily in memory formation. Again, just my perspective, and conjecture I’m trying to program.
The research is needed and awesome in what it achieves, and as you say, it is the work required to make real progress that no amount of thinking can achieve. I’m 100% in the camp of the think->study->engineer->repeat loop. Studying the biology is the only way to get there.