Intelligence is embarrassingly simple

From my point of view, you are taking much the same general approach as BKAZ. Good luck in your endeavors.

I think that much of the brain's function/computation is buried in the configuration of its sub-components, and that this configuration is organized around the spatial/temporal behavior of those components.

As long as the human brain is the only real example of human-level intelligence, I will continue to look there for inspiration and understanding in building an AI.

2 Likes

I see nothing in common.

2 Likes

You are both chasing an algorithm rather than slavishly copying the brain.
Yes, different algorithms, but each of you is convinced that he is pursuing the holy-grail computation.

1 Like

That’s like saying a space rock and the Andromeda nebula are similar because neither is on Earth. A very narrow-minded POV, if I may. One similarity is instance-based learning, but that depends on how you define instances.

1 Like

And each thinks they have the one true answer.

1 Like

The perfect conclusion to the perfect discussion:

Afoot and light-hearted I take to the open road,
Healthy, free, the world before me,
The long brown path before me leading wherever I choose.

Henceforth I ask not good-fortune, I myself am good-fortune,
Henceforth I whimper no more, postpone no more, need nothing,
Done with indoor complaints, libraries, querulous criticisms,

Strong and content I travel the open road. (c)

2 Likes

If you don’t have time to explain it, a link to some example code where those analogue inputs are used would be more useful than a slightly ironic non-answer.

1 Like

I wouldn’t be too bearish; you wouldn’t believe how many alternative approaches to AGI I’ve heard of in my short lifetime. But the bitter lesson always comes knocking at the door, and those who ignore it are bound to fail :slight_smile:

BP+NNs work much better than most people give them credit for, and many of their failings are statistical limits rather than intrinsic drawbacks (indeed, the lack of inductive biases is the very reason they outperform other methods so well in the first place).

It’s a dangerous game, where being seduced by biology and ignoring the theory could easily lead to a dead end…

2 Likes

Reading Myths of the Instance Based Learning, I still can’t figure out the architecture of the ANN; in particular, I failed to find a transcript/explanation of this video:

I’m a hardcore software engineering guy, yet I failed to crack GitHub - MasterAlgo/GPT-Teaser, for lack of comments there.

Also, many important (to my understanding, at least) LinkedIn links are no longer available, e.g. on this page:

Math:
https://lnkd.in/gnyv-r-y
https://lnkd.in/g--EERUk
Generations (both word and character levels):
https://lnkd.in/gN8xC-mt
https://lnkd.in/gGKMd-hF
https://lnkd.in/gJdmNJf3
https://lnkd.in/g-cFSa8w

Hell, I want to understand the design better; any help appreciated!

2 Likes

@Bullbash I especially have difficulty understanding this:

Myths of the Instance Based Learning.

This net generalizes well by the rule:

Similarity of systems (instances) depends on the number of common subsystems (tokens).

Some distance measures behave better than others, but in general a successful comparison is made on sets of millions of common constituents. That is like comparing real-valued vectors in 10^6-dimensional space. It just works, it does. The Curse of Dimensionality?! Hello?

The intuition behind such a network is in this crude visualization:

https://youtu.be/uAQB9d6ovlk
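
To check whether I’m reading that rule correctly, here is how I would naively code it up. This is a minimal sketch of my own (the tokenization and function names are my guesses, not anything from the GPT-Teaser repo):

```python
# My naive reading of "similarity = number of common subsystems (tokens)".
# Toy sketch only; not the actual MasterAlgo/GPT-Teaser code.

def tokenize(instance: str) -> set:
    """Break an instance into constituent tokens (here: whitespace words)."""
    return set(instance.lower().split())

def common_tokens(a: set, b: set) -> int:
    """The rule as quoted: similarity = size of the token intersection."""
    return len(a & b)

def jaccard(a: set, b: set) -> float:
    """A normalized variant ('some distance measures behave better than others')."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def nearest_instance(query: str, memory: list) -> str:
    """Instance-based lookup: the stored instance sharing the most tokens wins."""
    q = tokenize(query)
    return max(memory, key=lambda m: common_tokens(q, tokenize(m)))

memory = ["the cat sat on the mat", "dogs chase cats", "the sun is a star"]
print(nearest_instance("a cat on a mat", memory))  # -> "the cat sat on the mat"
```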

In your schema, does a system/instance have a fixed number of subsystems, or does the number vary? Also, does the order of subsystems matter? I think what I’m asking is: how is the identity of a subsystem/token/system/instance formed?


Do you mean that “comparing real-valued vectors in 10^6-dimensional space” doesn’t suffer from the Curse of Dimensionality? AFAICT Euclidean distance between such vectors doesn’t work sufficiently well; do you imply that some distance measures are inherently free of the Curse? Which ones?
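
To make the worry concrete, here is a toy experiment of my own (arbitrary parameters, nothing from your code): pairwise Euclidean distances between dense random vectors in 10^6 dimensions concentrate, while common-token counts on sparse sets still separate related from unrelated instances.

```python
# Toy experiment behind my question; parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
dim = 1_000_000  # the "10^6 dimensional space" from the quote

# Dense random vectors: Euclidean distances to a reference vector concentrate,
# i.e. nearest and farthest neighbors become nearly indistinguishable.
ref = rng.standard_normal(dim).astype(np.float32)
dists = [float(np.linalg.norm(ref - rng.standard_normal(dim).astype(np.float32)))
         for _ in range(20)]
print(f"dense: relative spread (max-min)/min = {(max(dists) - min(dists)) / min(dists):.4f}")

# Sparse token sets: common-token counts separate related from unrelated instances.
base = set(rng.choice(dim, size=5_000, replace=False).tolist())
related = set(rng.choice(sorted(base), size=4_000, replace=False).tolist())
unrelated = set(rng.choice(dim, size=5_000, replace=False).tolist())
print(f"sparse: overlap(related) = {len(base & related)}")      # 4000 by construction
print(f"sparse: overlap(unrelated) = {len(base & unrelated)}")  # ~25 expected (5000*5000/1e6)
```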

2 Likes

Maybe you should tell that to Jeff Hawkins? :slight_smile: ))

will keep it in mind…
Thank you for contributing!

1 Like

Ironic, considering Numenta’s current direction has been leaning more and more heavily into integrating their theories with DL. Hawkins is smart enough to realize where his theories are flawed, and smart enough to lean on other concurrent work to plug holes in them. Hopefully he can make some discoveries in parallel with both fields, but only time will tell…

1 Like