I don’t think @Bullbash meant that the code is for GPT-styled models; rather, it’s an alternative to GPT (I guess it shouldn’t have been named GPT, but anyway).
The parameters ANNs use and the “parameters” @Bullbash refers to are fundamentally very different…
All I see there is a brute-force combinatorial search plus Hebbian learning. What part of that is novel?
Regarding:
I’m kind of struggling to put together an acceptable “paper” - absolutely not my style. Not that it is complex, it is just big.
You should ask ChatGPT to generate a paper from a less rigorous description of how your code works. Or, if I wanted to tease you: use the GPT-Teaser to generate a paper prototype.
And yeah, it is a poor name choice - this isn’t a generative pre-trained transformer. Whatever name you pick, make sure it ends in “is all you need”.
I can only recall past memories as such, in terms of anything resembling visual pattern recall. The “black horse” would be a search through TV and places. The problem here is that, based on what I have asked other people, when they “see” something they have made up it is completely alien to me. I was a bit stunned at first. My wife visualised a grass field, effects of the wind, a blue horse, trees, etc. I was puzzled.
I dream strange images, and those are the only images that I ever make up and “see”, however strange they may be (some odd wasp crossed with a spider after working in a loft space that had a wasps’ nest and lots of cobwebs).
I’m not sure aphantasia is really binary, because the sleep process and dreaming may be a different process, outside of any instigated attention control (ha, avoided the word conci…nes).
Size, just size.
I levitate a penny, you point to an industrial crane as a baseline. Levitation - see below.
Thank you for protecting my scientific naivety? innocence? - really. And it was named “a GPT-teaser” [as mysterious cezar_t pointed out, thank you!] because I was pissed off with all the excitement and bragging about the first GPT’s size (~1bln? parameters). So I did that LLM (2021?) which produced parameters at 1bln/hour… $300 used server… 128GB RAM… no GPU. There are interesting architectural solutions, but size was what I was after at the moment.
And that is a well-aimed question. Let’s call the “combinatorial search”, for a moment, a “combinatorial transform” [cTrans] (because it is). Here is the statement:
There is currently no way to compare vectors with 100K+ components - because of The Dimensionality Curse. There are lots of papers and books on the subject. What I found: applying cTrans to a few such 100K+ vectors projects them into 100M+ dimensional overcomplete space(s), where Euclidean-style distances (Minkowski p = 1, 2) start working again.
Given, for example, three hyperdimensional vectors, there are two which are more similar to each other than to the third one. Again, the code to play with is at the repo below (OK, just change the properties.txt and supply your long vectors):
GitHub - MasterAlgo/Simply-Spiking (9 long sequences simultaneously)
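Since the statement is testable, here is a minimal harness sketch (Python with NumPy/SciPy - not the code in the Simply-Spiking repo): build three long vectors where two share structure, look at their pairwise Euclidean distances and the distance contrast, then look again after expanding into a much wider space. The sparse random expansion used here is only a hypothetical placeholder for cTrans, which isn’t spelled out in this thread - random projections roughly preserve distances, so the interesting experiment is swapping in the actual transform. Sizes are scaled down from 100K/100M so the sketch runs in seconds.

```python
# Sketch only: a harness for the "cTrans lifts the Dimensionality Curse" claim.
# The expansion below is a generic sparse random projection, purely a
# placeholder for the real cTrans.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(42)

N = 20_000     # components per vector (the thread talks about 100K+)
M = 200_000    # size of the "overcomplete" space (the thread talks about 100M+)

def pairwise(vectors):
    """All pairwise Euclidean distances between the given vectors."""
    return [np.linalg.norm(a - b)
            for i, a in enumerate(vectors) for b in vectors[i + 1:]]

def contrast(dists):
    """(d_max - d_min) / d_min: the Dimensionality Curse shows up as this
    ratio collapsing toward zero, i.e. everything looks equally far apart."""
    return (max(dists) - min(dists)) / min(dists)

# Three hyperdimensional vectors: v0 and v1 share structure, v2 does not.
base = rng.standard_normal(N)
v0 = base + 0.5 * rng.standard_normal(N)
v1 = base + 0.5 * rng.standard_normal(N)
v2 = np.sqrt(1.25) * rng.standard_normal(N)   # matched overall scale

# Placeholder expansion into the wider space (swap the real cTrans in here).
P = sparse.random(M, N, density=2e-4, random_state=0, format="csr")
expanded = [P @ v for v in (v0, v1, v2)]

d_before, d_after = pairwise([v0, v1, v2]), pairwise(expanded)
print("distances before:", np.round(d_before, 1), " contrast:", round(contrast(d_before), 3))
print("distances after :", np.round(d_after, 1), " contrast:", round(contrast(d_after), 3))
```

The harness only asks two things: does the closest pair stay the closest after the expansion, and does the contrast ratio grow? With the placeholder projection nothing should change much, which is exactly why plugging in the real cTrans is the interesting part.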
The question now is: is there a set of operations that allows one to overcome The Curse? (“levitation”)
I cannot check every distribution and dataset, but whatever I have tried has worked.
The answer should be “yes” or “no”, like in “blue” vs “red” pill.
Yes - changes a lot, it breaks a lot. And we might talk further.
No - shows me to be a liar. I’ll shamefully get back into my swamp and never come to your blessed sandbox again.
Have I broken The Curse?
P.S. Thank you to everybody participating in the discussion.
P.P.S. Just uploaded my 9 long texts to GitHub, it seemed to be happy - but I see no zip with data files… 9MB only… wtf?
No idea about your 100K+ vectors, or if that’s even relevant.
Just off the top of my head, AlphaZero used MCTS (a form of combinatorial search) with backprop, pretty successfully. I assume they used backprop rather than Hebbian learning because it works better. They obviously knew about Hebbian learning; there are plenty of neuroscientists at DeepMind.
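For anyone reading along who hasn’t compared the two rules directly, here is a toy sketch (plain NumPy, nothing to do with AlphaZero’s actual training code) of a Hebbian update next to an error-driven gradient update on a single linear unit. The point is only that the Hebbian rule is local and unsupervised - it never sees a target - while the gradient/delta rule (what backprop reduces to for one linear unit) is driven by an explicit error signal.

```python
# Toy contrast between a Hebbian update and an error-driven (gradient) update
# on one linear unit. Illustrative only; not anyone's actual training code.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 5))                 # input samples
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])     # weights generating the targets
targets = X @ w_true

lr = 0.05
w_hebb = 0.01 * rng.standard_normal(5)
w_grad = 0.01 * rng.standard_normal(5)

for x, t in zip(X, targets):
    # Hebbian: strengthen weights when pre- and post-synaptic activity agree.
    # Local and unsupervised - no target, no error signal; plain (unnormalized)
    # Hebb also grows without bound.
    y = w_hebb @ x
    w_hebb += lr * y * x
    # Gradient / delta rule (what backprop reduces to for one linear unit):
    # move the weights to shrink the squared error against the target.
    y = w_grad @ x
    w_grad += lr * (t - y) * x

print("gradient rule, distance to w_true:", np.linalg.norm(w_grad - w_true))
print("hebbian  rule, distance to w_true:", np.linalg.norm(w_hebb - w_true))
print("hebbian  weight norm (keeps growing):", np.linalg.norm(w_hebb))
```

With these settings the gradient-trained weights land essentially on w_true, while the Hebbian weights drift off and blow up in norm - which is roughly the practical point being made above.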