If Numenta discovers AGI, is that going to be open source? I mean, it’s somewhat dangerous, right? I’m not judging anybody or anything, and I’m thankful to the community for sharing the ideas, but if someone breaks the scientific barrier and can build an AGI-like system, shouldn’t the ideas be kept somewhat secret? Am I right? Any thoughts?
Despite the general alarmism on AGI, I think it is way too soon to be worried about it.
We have billions of natural intelligence examples floating around now and we seem to be doing OK.
But I think Numenta is not just another of those examples; its approach is way ahead of a DNN “reptile brain”.
Despite our (the HTM community’s) best efforts, we are ages from an actual AGI. I calculated that I would need an exascale supercomputer, with ASICs designed to run HTM and perfect scaling (which is never the case), just to simulate the human brain using HTM theory.
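For what it’s worth, here is the kind of back-of-envelope arithmetic I mean, as a Python sketch. Every figure in it (neuron count, synapses per neuron, timesteps per second, ops per synapse) is a rough assumption on my part, not a measurement:

```python
# Back-of-envelope: compute needed to run HTM at human-brain scale.
# Every figure below is a rough assumption, not a measurement.

neurons = 16e9             # ~16 billion neocortical neurons
synapses_per_neuron = 7e3  # order-of-magnitude synapse count
steps_per_second = 100     # assumed HTM timesteps per second
ops_per_synapse = 10       # assumed ops per synapse per step
                           # (overlap scoring, permanence updates, ...)

ops_per_second = neurons * synapses_per_neuron * steps_per_second * ops_per_synapse
print(f"{ops_per_second:.1e} ops/s")  # ~1.1e17: within an order of
# magnitude of exascale (1e18), even before imperfect scaling bites.
```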
We already live on the planet of the robots: things with sensors, actuators and central processing units that swim in the sea, scamper around on land, or even, a bit surprisingly, fly in the atmosphere.
All designed by evolution out of heavily quantized materials.
If you wanted to design an AGI out of multiple agents (a society of mind), you would face a problem where the slightest damage to one agent brings the whole society down like a house of cards. Ideally you would like the agents to couple to each other gradually over time, and to a degree each agent decides. That would allow for smooth evolution.
Some types of associative memory are such that a write induces slight changes everywhere, at a noise floor, and strong changes at the intended location.
Agents reading or writing into such an associative memory could gradually locate the read and write addresses of other agents and couple to the other agents as strongly or weakly as necessary.
A damaged agent might automatically decouple itself by shifting (because of the damage) to other read/write addresses that no other agent was interacting strongly with.
A society of mind based on agents interacting in a sea of associative memory seems an attractive option, and the agents would automatically have access to large amounts of memory themselves.
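To make that concrete, here is a minimal Python sketch of such a memory. The class, sizes and noise level are all illustrative assumptions, loosely in the spirit of Kanerva-style sparse distributed memory rather than any specific implementation:

```python
import numpy as np

class NoisyAssociativeMemory:
    """Toy memory where a write lands strongly at its target address
    and leaves a faint trace everywhere else (the "noise floor")."""

    def __init__(self, n_locations=1024, width=256, noise_floor=0.01):
        self.mem = np.zeros((n_locations, width))
        self.noise_floor = noise_floor

    def write(self, address, pattern):
        self.mem += self.noise_floor * pattern  # faint trace at every location
        self.mem[address] += pattern            # strong change at the target

    def read(self, address):
        return np.sign(self.mem[address])       # recover the stored polarity
```

An agent could scan the faint global traces to discover which addresses other agents write to most strongly, then couple to them as strongly or weakly as it chooses; a damaged agent drifting to unused addresses would fade out of everyone’s correlations on its own.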
The computational demands to evolve such a system seem rather severe. I doubt you could make much progress with a laptop; it’s more for those with 100+ GPUs.
Moore’s curve is relentless.
Depends on how people want to play with it.
Corporations are considered to be legal persons. It may not be surprising if AGIs incorporate, much as described in Charles Stross’ Accelerando and Richard K. Morgan’s Altered Carbon (and probably others).
Also, when you think about the ethical problems around patenting parts of the human genome, maybe closed-source AGI should be viewed through the same lens.
There is Numenta (largely open source) and their main competitor, Vicarious, which is proprietary. Which will have larger advances first and/or more often? I’m willing to bet it’s Numenta. AGI? Well, someone above calculated that enormous computational capacity is necessary to emulate a human brain. So that’s something.
Normally, I’d say “that doesn’t necessarily apply to AGI”, but from an HTM point of view it probably does. In speculating that it will take ages, though, consider that breakthroughs come in waves; often one leads to another very rapidly. I’m 34 years old, and at this rate I believe it will happen well within my lifetime. I’m no HTM expert (though I code machine learning), but I would feel comfortable with Numenta making those breakthroughs, which I believe is relatively likely.
Better Numenta than Alphabet, who is hardly even trying (publicly).
In my opinion, the smaller institutions like Numenta and university spin-offs will continue to come up with the most important breakthroughs. But at some point someone is going to have to build a sufficiently large machine ahead of the others. And that will be done by a large entity, something like DARPA, Google or Baidu.
My money is on the Chinese: they have the most resources and they are better organised.
Yes and no.
Imho we are an undetermined time away from a breakthrough on the model, that is, a model showing a fully functional and generic “unit” of human-like cortex. And indeed we’d still be ages away from running a human-brain-scale simulation at realistic rates on current computers, but that’s because they are ill-tailored to the problem at hand. Even GPUs are.
But once such a model is found, how much time until a rough but working full-hardware version of these “units” is poured out of semiconductor foundries? A decade? Five years? A couple?
Then a couple more years to shrink that to the then-current best nanoscale process?
Remembering various drawings (most of them dug out by @bitking ^^) about lateral axonal scales, we may envision such a unit reaching a practical optimum at representing roughly 1 mm² of cortical tissue, so that its communication with any of its six neighbors is enough to account for the lateral connections it may form within the same cortical area. That’s a few tens of thousands of pyramidal cells modeled in there. For various reasons, my guess is that roughly the same count of addresses would cover the local, per-synapse potentials, and the same could be said of the long-range addressing. Okay, so that’s on the order of 16 bits per address, give or take. Let’s be conservative and allow 1 billion synapses in there: you’d thus need on the order of 2 or 3 gigabytes of memory on top of the unit module to account for plasticity. The rest is just not so many transistors, reflecting whatever learning and processing model you’ve come up with.

And how fast is the thing? Assuming a fully hardwired process allowing only a sequential update of each neuron in each “unit”, at current computer clock rates, that’s a dozen microseconds per pass. We poor biological souls process our stuff thousands of times slower.
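A quick Python sanity check of those figures; the neuron count per unit, bytes per synapse, clock rate and biological pass time are all assumptions on my part:

```python
# Rechecking the per-unit figures above; all numbers are assumptions.
neurons_per_unit = 30_000  # pyramidal cells per 1 mm² unit (assumed)
synapses_per_unit = 1e9    # conservative synapse budget (from the post)
bytes_per_synapse = 3      # ~16-bit address + ~8-bit permanence (assumed)
clock_hz = 3e9             # current-ish clock rate
cycles_per_neuron = 1      # fully hardwired sequential update (assumed)
bio_pass_ms = 10           # ~one gamma cycle of biological "update" (assumed)

memory_gb = synapses_per_unit * bytes_per_synapse / 1e9
pass_time_us = neurons_per_unit * cycles_per_neuron / clock_hz * 1e6
speedup = bio_pass_ms * 1e3 / pass_time_us

print(f"{memory_gb:.1f} GB per unit, {pass_time_us:.0f} µs per pass, "
      f"~{speedup:.0f}x biology")
# -> 3.0 GB per unit, 10 µs per pass, ~1000x biology
```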
Fast recap, shall we?
- A few gigabytes of memory per mm²? We’re already there, if I’m not mistaken.
- A bunch of processing transistors on top of that? Haha.
- A 1 mm² hex shape for standardized lateral connections to neighbors? No problem.
- Serializing current states to neighbors? Seems doable.
- Arranging 16x16 or 32x32 of these hexes into an area-like module with power supply? No problem.
- Having enough room and power for, like, 0.3 m² of those? Errr… yeah?
- Having long-range wire maps from area module to area module? It would require lots of large-scale connectome data to get something human-like, but from a “red wire goes there” viewpoint, it’s been done.
- Synchronizing the whole thing? Maybe hard, but it seems within range of our technical abilities.
- Feeding that stuff with a lizard, sensors and motors? I trust bitking to come up with something easy peasy.
Thus I believe that, from a technical point of view, semiconductor foundries would already (as of 2019) be able to produce a to-scale hardware replica with detail and abilities comparable to the human cortex… albeit clocked at about a thousand times the human rate.
What we lack is “simply” the model.
Once a working concept is known, I bet the available hardware implementations will very soon surpass what we believed was the “future”… and what we thought were small incremental steps towards an AGI will suddenly blow up.
Now consider that there would certainly be silicon optimums to be found in no time that biology never did: e.g., is 1 mm² the best unit? Would you lose flexibility or computing power if you reduced the number of neurons in a unit? Or enlarged it? Or greatly so? Where could we gain from lossless computing compared to our lousy neurons? Also, quite simply: what’s limiting silicon to those 0.3 m²? How deep could we go in stacking layers? Is abstracting power virtually unlimited? Could we easily couple neurologically based reasoning with assisting devices such as integrated digital calculators? And what intelligence breakthroughs are achievable with superhuman sensors, such as vision extended to infrared, microwave, or X-rays?
I guess I’m halfway dismissive of the “fear of AI”, halfway in awe of its potential, and halfway straight-out frightened myself ^^'. And yes, that’s one way and a half.
About the OP’s question: I believe open source is safer than the alternative(s?).
But open source will simply find the model. Then big money will rule us all. As it does.
IMO, if Numenta discovers AGI (if AGI is even something one “discovers”), it would quickly be figured out by other parties. It will be implemented on computers of some kind, and computers are predictable and hackable, so it would not take long for someone to build their own flavor of implementation. And if it really uses the brain’s natural algorithm, then I don’t think the theory can be claimed and contained within one group; it is naturally owned by humanity.
A side question I’ve been asking myself for years now is: if we get alternative neocortical columns to work in silico at much faster speeds than our biological counterparts, how would that affect our perception of time?
Would we be able to experience our surroundings in slow-motion?
Even though I’m now questioning my own figures above (I didn’t really take into account integrating all synapses sequentially, or memory access rates), I also had the same questions.
My take was along these lines: we don’t readily recognize very slow-mo audio. Maybe training could help, but at some point it’s easier to go Red October mode: replay it at a higher pace and maybe you can hear the engines of that Sean Connery sub.
Which would mean that those faster-than-human brains may benefit from an underclocked part, just to implement a human comm interface :p
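Something like this toy resampler, where the function and the factor are purely illustrative:

```python
import numpy as np

def speed_up(audio: np.ndarray, factor: float) -> np.ndarray:
    """Naive time-compression by linear resampling: played back at the
    original sample rate, the result sounds `factor` times faster (and
    pitch-shifted up), which is why slow-mo audio sounds alien until it
    is replayed nearer its natural pace."""
    n_out = int(len(audio) / factor)
    src_idx = np.arange(n_out) * factor  # stretched read positions
    return np.interp(src_idx, np.arange(len(audio)), audio)

# A recording made by a 1000x-faster mind, replayed with factor=1000,
# lands back in the band human ears were built for.
```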
@gmirey - you are on to something in regards to the perception of time.
As I see it - the early stages that model the physics of the real world have to stay locked in to those dynamics to understand and predict what is happening. (Perception is active prediction!)
Later stages are not bound by these restrictions, and it makes sense that they may run with a faster clock when required. The flip side of that coin is that when there is no need for rapid response, the clock could be much slower, both to reduce energy requirements and for possible advantages in the thinking process (think meditation).
I made a post on clocking of the cortex here:
One of the links was to this excellent bit on the perception of time AND what happens to it under stress: the internal clock speeds up, which seems to slow down time in some parts of the brain. The perception of the physics of the universe does NOT slow down.
This strongly points to at least two clocks in the brain, one being variable.
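A toy sketch of that two-clock picture (nothing here models actual cortical mechanisms; the rates are made up for illustration):

```python
# Toy two-clock loop: the sensory stage ticks at the fixed rate of
# world physics, while a "thought" stage runs a variable internal clock.

def run(world_dt_ms=10.0, thoughts_per_frame=5, frames=3):
    for t in range(frames):
        world_time = t * world_dt_ms
        sense = f"frame@{world_time:.0f}ms"  # locked to the physics clock
        # More internal ticks per sensory frame is experienced as the
        # outside world slowing down; fewer ticks (meditation mode)
        # saves energy without losing track of the physics.
        for k in range(thoughts_per_frame):
            print(f"{sense} -> thought {k}")

run()
```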