Sorry about that. I didn’t mean to imply anything about your approach, and sorry if I was rude.
This might not be the right place for this discussion so I’ll reply to your points in a spoiler.
Rest of this response
Regions have specific roles, but the same circuitry (or at least most of it) is used for everything the neocortex does. The connections between layers are consistent; as far as I know, there’s not a single region with an extra layer completely different from the others. There are definitely specializations, but if the same core components of the cortical circuitry are used for every function the cortex serves, then either the cortical circuitry is intelligent or it’s a short step away from intelligence.
Prediction and hierarchy could easily be central to intelligence, but in my opinion that’s not enough to support more than an educated guess. How would such a system make novel predictions? Hierarchy might help with that, but it can’t solve everything. How does the brain understand sentences or produce thoughts (in word form, image form, or whatever else; I’m not arguing language is important or unimportant for intelligence)? I don’t see how prediction could do that.
The thinking about hierarchy has changed since then. Check out this podcast: https://www.buzzsprout.com/188368/753219-episode-1-research-update-with-jeff-hawkins-part-1
I don’t know whether hierarchical perception leading to general AI is still the goal. Aspects of hierarchy remain in the theory, but there’s much less emphasis on the physical cortical hierarchy.
I’m not arguing that action is important. I just don’t see how perception could lead to intelligence on its own.
From my perspective, theories about AI are inherently opinionated. I have strong, weakly supported opinions too. Weakly supported opinions seem unavoidable here, because so many ideas about AI have failed. When I have a new idea I’m excited about, I try to remind myself that there’s a 90% chance it will fail. Maybe I shouldn’t try to force that attitude on others.
No, you wouldn’t know what to copy at the start, but you can figure that out. There are ways to screen out distracting features which aren’t involved in intelligence. For example, some neuron classes are nonessential. We know that because some neuron classes are unique to particular regions, and you can be intelligent without them. If you look for a given neuron class across many regions but only find it in a small fraction of them, you can be fairly confident you can ignore that cell type. Then you can solidify that conclusion with theory: either show the cell type isn’t required for intelligence, or, in the cases where the heuristic gets it wrong, show that it actually plays some essential role given the rest of what the circuit does and just hasn’t been discovered yet in other parts of the cortex.
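To make that screening heuristic concrete, here’s a minimal sketch in Python. The survey data, the 25% threshold, and the function name are illustrative placeholders I made up, not real anatomical data, though Meynert cells genuinely are (as far as I know) found essentially only in primary visual cortex.

```python
# Minimal sketch of the screening heuristic: flag neuron classes found in
# only a small fraction of regions as candidates to set aside at first.
# The survey data and the 25% threshold are hypothetical placeholders.

observations = {
    "V1":  {"pyramidal", "basket", "chandelier", "meynert"},
    "S1":  {"pyramidal", "basket", "chandelier"},
    "A1":  {"pyramidal", "basket", "chandelier"},
    "PFC": {"pyramidal", "basket", "chandelier"},
}

def candidate_nonessential_classes(obs, max_fraction=0.25):
    """Return neuron classes seen in at most max_fraction of regions.

    These are candidates to ignore, pending a theoretical check that they
    either aren't required for intelligence or play a role simply not yet
    observed elsewhere.
    """
    n_regions = len(obs)
    all_classes = set().union(*obs.values())
    return {
        cls for cls in all_classes
        if sum(cls in classes for classes in obs.values()) / n_regions <= max_fraction
    }

print(candidate_nonessential_classes(observations))
# {'meynert'} -- present in 1 of 4 regions, so set aside at first
```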
Another way around the distracting details is not to include anything in the theory until there’s a need to. That’s an approach Numenta uses, I’ve read.
There are loads of other ways to get around the messiness, and other people probably know of approaches I don’t. Grid cells have been big in HTM theory recently, and those were discovered by recording neurons. Some findings in neuroscience aren’t messy and ambiguous, and they can really help get a framework going. Another approach is to figure out the role of a connection, neuron class, layer, or whatnot, without worrying too much about how it carries out that role. For example, say a type of neuron activates a little while after all the others, and it doesn’t fire much unless the animal is behaving. Yet it isn’t involved in generating behavior, because it starts firing a while after the animal starts behaving, and it has restricted receptive fields. Based on that, that type of neuron might be involved in processing movement, perhaps shifting sensory input or dealing with the effects of behavior on the sensory input.
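To show the shape of that reasoning, here’s a toy sketch: classify a neuron’s likely role from coarse observed properties, without modeling how it computes. Every property name and threshold here is invented for illustration, not taken from any real dataset.

```python
# Toy sketch of inferring a cell type's role from coarse observations
# (firing latency, behavioral correlation, receptive field size) rather
# than from its detailed mechanism. All values are illustrative.

from dataclasses import dataclass

@dataclass
class NeuronProfile:
    firing_latency_ms: float      # how long after other cells it activates
    fires_during_behavior: bool   # mostly active while the animal behaves
    receptive_field: str          # "restricted" or "broad"

def infer_role(n: NeuronProfile) -> str:
    if n.fires_during_behavior and n.firing_latency_ms > 50:
        if n.receptive_field == "restricted":
            # Too late and too local to be generating the behavior itself,
            # so more plausibly processing its sensory consequences.
            return "processing movement or its sensory consequences"
        return "possibly motor-related"
    return "sensory or unknown"

print(infer_role(NeuronProfile(80.0, True, "restricted")))
# processing movement or its sensory consequences
```

The specific rules don’t matter; the point is that coarse observables can constrain a cell type’s role long before anyone understands its mechanism.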
Approaches like these can be used together to build up a better and better sense of what’s going on over time. That’s also just the neuroscience side. I don’t know much about how to test things in code in ways which complement neuroscience-oriented approaches.
I don’t really see the difference. I agree it isn’t quite right to call it science, since there aren’t any measurements, but science also involves analysis and philosophizing, especially for really hard problems.
Subcortical structures like the thalamus and basal ganglia are still on the table. Regardless, the neocortex is still pretty dang complex. I don’t think it’s super complex in terms of core operations, but something with a simple underlying description can look very complex before you have that description. The neocortex is also pretty messy, especially when most of the data come from tiny isolated slices or from anesthetized animals, where anesthesia massively alters cortical activity.
Introspection and neuroscience complement each other. Jeff Hawkins has argued that we can’t get to general AI in the near future except by copying the brain’s core principles of intelligence.
I agree we shouldn’t copy most of what the cortex does, although I’m not sure if you’re talking about general operations or more specific things.
Let me try to illustrate why I think we should copy the cortex with an exaggerated story which might not be relevant to your reasoning. I wouldn’t blame you for skipping my rambling.
I find some coffee spilled on the road. I’ve never seen coffee before, but I decide I want to make some, because that coffee was pretty good, and, you know, it was on the road, so it could be better. So I buy some coffee beans, because they smell like the roadside coffee, but I have no idea what to do with them. Do I just leave the coffee there on the road, or do I take it home to guide my coffee making? There’s all kinds of dirt in it, which would distract from the flavor I’m trying to recreate, so does that mean I throw the coffee out? No, I should keep it around to check whether the recipe is heading in the right direction. If I do that, then when I add pepper, I know the taste is definitely wrong. If I don’t, I’ll just keep going with the pepper, because it seems like the right starting point. Maybe I’ll actually begin by putting the beans in water, which is a good start. But without the roadside coffee, I can’t check whether the color is right, so I end up with beans in water. It tastes like coffee, but not quite right, so I keep adding all kinds of spices, because that’s how you get subtle flavors, right? I never end up making real coffee, because I never realize you can grind coffee beans. Instead, decades later, when I’m retired from the coffee development business, I’ve made something pretty tasty, with all the right combinations of spices (pepper plays the central role), but it’s definitely not coffee. Still good, though.
That’s not what I was trying to say. By “being produced,” I meant any AI not based on the brain. Intelligence is the goal here. My point was that an AI not based on the brain won’t easily develop toward intelligence, so it will end up being something else.