What might a bigger neocortex be able to do?

Here is an interesting paper on saddle point proliferation: https://papers.nips.cc/paper/5486-identifying-and-attacking-the-saddle-point-problem-in-high-dimensional-non-convex-optimization.pdf
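For anyone skimming it: the paper argues that in high-dimensional non-convex optimization, critical points with mixed curvature (saddle points) vastly outnumber local minima, which changes what “getting stuck” during gradient descent actually means. A minimal sketch of what a saddle point is (my own toy example, not taken from the paper):

```python
import numpy as np

# f(x, y) = x^2 - y^2 has a critical point at the origin: the gradient
# vanishes there, but it is neither a minimum nor a maximum.
def grad(x, y):
    return np.array([2 * x, -2 * y])

hessian = np.array([[2.0, 0.0],
                    [0.0, -2.0]])       # constant Hessian of f

print(grad(0, 0))                       # [0 0] -> the origin is a critical point
print(np.linalg.eigvalsh(hessian))      # [-2.  2.] -> mixed signs: a saddle
```

In high dimensions, the paper’s argument goes, it becomes overwhelmingly likely that a random critical point has curvature of both signs like this, so true local minima are rare.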

It’s depressing that that was only noticed so recently.

I don’t know what a larger brain would be capable of. But I know the first thing it would do: it would order a total cleansing of the planet of an invasive, self-centered, and highly dangerous species. The human race would be put in a zoo and partially neutered…

Happy New Year everyone

Let’s assume a massive neocortex could analyze information perfectly. It still might not seem magical in most situations. The bottleneck at that level of intelligence is awareness. For example, you can’t be sure what the next number in the sequence 1, 2, 3 is. 4 would be a good guess, but maybe it’s 8, because the rule is actually +1 each step except +5 on every 4th step. In the real world there is usually a lot more information, but also a lot more ambiguity, because the world is so detailed. It could make perfect predictions, but only with access to something like a network of microscopes covering the entire world.
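To make the ambiguity concrete, here is a toy sketch (the rules and names are my own illustration) of two generating processes that agree on the observed prefix but diverge afterward:

```python
# Two hypothetical generating rules that both produce the prefix 1, 2, 3.

def rule_a(n):
    """+1 every step: 1, 2, 3, 4, 5, ..."""
    return list(range(1, n + 1))

def rule_b(n):
    """+1 each step, except +5 on every 4th step: 1, 2, 3, 8, 9, ..."""
    seq = [1]
    for i in range(2, n + 1):
        seq.append(seq[-1] + (5 if i % 4 == 0 else 1))
    return seq

print(rule_a(5))  # [1, 2, 3, 4, 5]
print(rule_b(5))  # [1, 2, 3, 8, 9]
# Both rules fit the observed data [1, 2, 3] exactly; no amount of
# analytical power can tell them apart without more observations.
```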

In some cases it could seem like an oracle. But computers already seem like oracles in many of those situations, since you can simply simulate the situation given enough information. With incomplete information, however, a giant neocortex would probably excel at enumerating a handful of possibilities, one of which is correct.

In terms of things besides pure prediction, like science, I’m not sure society as a whole is too far from perfect information analysis. We’re slow because we have to publish papers and so on, but we can analyze nearly anything fully and almost perfectly if enough people collaborate for long enough. I suppose speed would be the major difference between an artificial superintelligent scientist and society, though gathering information for analysis would still take at least a little time.

@maxima120
The above is partly why I think a larger brain would not kill everyone. Safety measures are possible because limited awareness limits what it can know. If people simply give it information and it spits out an answer (instead of having access to the internet or nukes or whatnot), its only option is manipulation. It cannot reliably manipulate people without knowing their experiences, so we can simply deny it that information, and the odds of a successful manipulation attempt are very low. If it fails, we will destroy it, so its best option is to not try. And that’s assuming its goals even allow harming people. It doesn’t need goals or output at all if we can read its thoughts directly. There are many ethical issues (and much potential to do good) since it would be so powerful, but doomsday is unlikely.

We want answers and build machines to answer well. In an ambiguous and rich world, however, isn’t it the greater art to find the right questions?

That’s what I would like to see in an intelligent machine. A neural network or an HTM doesn’t exhibit intelligence on its own, but only after learning from data that was generated by some underlying process (a world). If the machine can interact with a world, instead of passively experiencing it, I think it can achieve much greater intelligence. Of course, the richness of the world and of the interaction should determine the outcome.
I suppose Google’s Atari-playing system, which Addonis mentions, is moving in this direction. Has any such work been done with HTM systems?
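The key structural difference from passive learning is the closed loop: the agent’s actions change what it observes next. Here is a minimal sketch of that loop (the environment, names, and reward scheme are my own toy invention, not an HTM or DeepMind API):

```python
import random

class LineWorld:
    """Toy world: the agent starts at position 0 and must reach position 5."""
    def __init__(self):
        self.pos = 0

    def step(self, action):             # action is -1 (left) or +1 (right)
        self.pos += action
        done = self.pos == 5
        reward = 1.0 if done else -0.1  # small cost per step, prize at the goal
        return self.pos, reward, done

env = LineWorld()
total_reward = 0.0
for t in range(1000):                   # cap the episode length
    action = random.choice([-1, 1])     # a real agent would learn a policy here
    obs, reward, done = env.step(action)
    total_reward += reward
    if done:
        break
print("episode return:", round(total_reward, 1))
```

The point is the feedback arrow from action back to observation: a learner that only watches recorded sequences never gets to test its predictions by acting on them.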

We believe that may be the only way for an entity to become intelligent. At least that is the only way biological systems gain intelligence.