Compute is cheap, as @BrainVx quoted below; the quote demonstrates how compute will develop over the next few years.
Data? We are on the internet. We literally haven't scratched the surface yet: we haven't even covered all of the text, let alone all the pictures, video, audio, etc. Every single modality counts. I am happy to report that multiple Big Tech companies, as well as startups/NPOs, are doing the data collection - LAION, for example.
is peanuts. It's nowhere near the Manhattan Project, which led to significant advances.
That's how science works - it's not about making things cheap but about understanding and gaining knowledge.
You can look it up. Again, I am not claiming GPT-3 HAS achieved AGI in any form.
I am merely challenging you to locate a system more proficient at reasoning than GPT-3. You can search for yourself; these are what I found on Google: [To what extent is GPT-3 capable of reasoning? - LessWrong, https://towardsdatascience.com/is-gpt-3-reasonable-enough-to-detect-logical-fallacies-3c3dc4b7fda1]
Another thing - if you try out Copilot, you can see some examples of inferential understanding, particularly in writing out complex models. It's not everything, but it demonstrates something at least.
Universal Approximation Theorem.
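For concreteness, here's a minimal, hand-rolled sketch of that theorem in action (my own construction, not from any source cited here): a one-hidden-layer ReLU "network" built explicitly, with no training, interpolates sin(x) on [0, π], and its worst-case error shrinks as hidden units are added.

```python
import math

def relu(z):
    return max(0.0, z)

def build_relu_approx(f, a, b, n):
    """Build a one-hidden-layer ReLU network (returned as a closure)
    that interpolates f at n+1 evenly spaced knots on [a, b]."""
    h = (b - a) / n
    knots = [a + i * h for i in range(n + 1)]
    slopes = [(f(knots[i + 1]) - f(knots[i])) / h for i in range(n)]
    # Output weight of each hidden ReLU unit = change of slope at its knot,
    # so the sum reproduces the piecewise-linear interpolant exactly.
    weights = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, n)]
    bias = f(a)
    def g(x):
        return bias + sum(w * relu(x - knots[i]) for i, w in enumerate(weights))
    return g

def max_error(f, g, a, b, samples=1000):
    """Worst-case |f - g| on a dense grid over [a, b]."""
    pts = [a + (b - a) * t / samples for t in range(samples + 1)]
    return max(abs(f(x) - g(x)) for x in pts)
```

With 10 hidden units the worst-case error on sin over [0, π] is already below 0.02, and it keeps shrinking roughly quadratically as units are added - a concrete, if tiny, instance of what the theorem guarantees in general.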
Well, I can ask the same - Hawkins has been at this for about 20 years, and research in biology/psychology is much older than AGI research. But apparently, in the short term at least, results seem to come only from DL.
Very true, indeed. But pointing to models like LaMDA, Meena, etc. is difficult. For instance, MUM ("A new AI milestone for understanding information") is pretty interesting, since its advanced multimodal capabilities allow greater reasoning. Unfortunately, it's behind closed doors, so all we can do is wait.
I hope the above paragraph answers your question. It's simple logic:
Suppose I hook up a human brain with no senses or body - just a brain in a contraption to keep it alive (with all the hormones) - directly interfacing it with data, and give it the same corpus language models are given.
Can you guarantee me that the human developed thus would be conscious? Or better, that it would outperform, say, an RNN?
AFAIK from my testing, it can do up to 7-9 digit multiplication very well, but it breaks down after a certain length. Why?
- GPT-3 can't compute over a variable number of timesteps, nor store data/ops internally. That requires a complex knowledge interchange, which perhaps MoEs might achieve if the experts have connections facilitating direct flow.
- The fact that it can perform multiplication at all is itself strong testimony that it's not simply remembering/memorizing but can actually figure things out on its own.
Lastly, due to architectural constraints, performing deeply recurrent ops would be complex.
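To make the timestep point concrete, here's a toy illustration (just schoolbook arithmetic, nothing GPT-specific): the number of elementary single-digit steps in long multiplication grows with input length, and the carry pass is inherently sequential, whereas a transformer's forward pass has a fixed depth regardless of how many digits appear in the prompt.

```python
def schoolbook_multiply(a: str, b: str):
    """Multiply two decimal strings digit by digit, counting the
    elementary single-digit steps (partial products + carries)."""
    steps = 0
    result = [0] * (len(a) + len(b))
    # Partial products: one step per digit pair, so ~d^2 of them.
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            result[i + j] += int(da) * int(db)
            steps += 1
    # Carry propagation: one sequential pass, cell by cell.
    for k in range(len(result) - 1):
        result[k + 1] += result[k] // 10
        result[k] %= 10
        steps += 1
    digits = ''.join(map(str, reversed(result))).lstrip('0') or '0'
    return digits, steps
```

Multiplying two 4-digit numbers takes 23 elementary steps under this count; two 8-digit numbers take 79. A model with a fixed amount of per-token computation can only cover so many of these steps, which is consistent with accuracy collapsing beyond some digit length.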
One thing I can't grasp is why everyone seems to treat GPT-3 as the only saving grace for the entirety of DL - as if its failure at certain tasks were representative of the entire field of research. Strange…
Scalability as a software attribute - surely HTM/TBT has enough funds to afford a few AMD Threadrippers for a few days and try things out?
It's simply the problem with GOFAI/biological approaches (bar some, maybe even TBT/HTM): they simply do not scale. Look at Cyc, or LogicMOO. The problem is very clear there.
Lastly, again, I am not claiming LMs to be AGI - simply asking whether a more intelligent and generalizable system exists.
Even if DL shows teeny-tiny reasoning capabilities, I would be more interested in finding its limits than in achieving AGI - a grand experiment indeed.