So, I find this community rather inactive. I get that we are a very niche community, but I would guess there are other online forums similar in spirit to this one that are more active. The only other community I know (the Nengo forum) is even more deserted than this one.
I’ve never understood why some people are so dismissive of AI risk. Even narrow AI has the potential to be dangerous when put in charge of high-assurance systems, because such models are black boxes and as such we can’t bug-check them. How can they think an autonomous artificial general intelligence, especially a self-improving one, would not have the potential to be dangerous?
A weird thing about tech is that by the time the various foundational threads come together for one person, the idea is often already in the air for most practitioners in the field. Germany was working on nuclear weapons at the same time the Allies were working on similar tech.
Once the “Attention Is All You Need” paper was available, the LLM world exploded. The genie is out of the bottle.
When the next papers that bridge from current LLMs to effective AI are released, nobody is going to be able to hold it back. It is possible that the people who release the key papers will not initially know how well the approach will work, just as with the attention paper.
I expect that once the foundational tech is in place, it is unlikely that any individual can stop the tech from being exploited for both good and evil.
Arguments against AI risk that I have heard (but do not necessarily share) are:
runaway intelligence is unlikely
researchers will be able to educate nascent AI the same way people raise children
it will take a long time before research comes close to AGI
whoever’s in control of the GPU farm can pull the plug
AI development moratoria are a ploy by big tech to protect their lead
Personally I think several increasingly serious accidents involving weak AI could happen before we reach AGI, and those are dangerous enough. Potential scenarios:
development and accidental or intentional release of extremely effective pathogens
development of military tech that gives one side an extreme edge in the global theater
internet contamination that will make digital communication completely unreliable at every level
internet contamination that will make commodity trade dangerously erratic or even impossible
For what it’s worth, I am not a doomer. I think we need AI to get out of the mess we have created on this planet. When Pandora opened her amphora, only hope remained inside. But let’s not forget that the ancient Greeks considered hope the ultimate curse bestowed upon humanity: as long as we have hope, we willingly continue to suffer our fate.
I think this community is still pretty active, but the topics of discussion are not for everyone. Personally I care about neuroscience, but not about deep learning so I tend to stay out of those conversations. And some people who care about deep learning would prefer to ignore other topics like the cerebellum, the basal ganglia or consciousness. It’s not like we’re divided into camps, but rather each person is here for their own reasons, and when the topic of the day is irrelevant to you then it can seem like a slow day on the forum.
Nanoscale, atomically structured supercapacitors are far more dangerous and an inevitability of current technology. Existing developments and tech can already wipe out humanity without any new, around-the-corner breakthroughs.
Stuff will happen and life will go on. If you can’t control it (or do anything about it) don’t bother worrying about it.
Research on AI risks is of paramount importance. I don’t know if this is correct, but I think the hate it gets probably stems from too many political creatures mixing in ideologies, and from pseudo-experts offering their opinions. Politicians have some well-meaning intent to rein in AI’s risks, but on the other hand they’re also trying to put a leash on big tech companies, building an oversized bureaucratic swamp around AI to ensure they themselves remain top dogs in the food chain.
I own the LinkedIn group (bio)cybernetics; it relates to this forum in that I study and use HTM theories to build a ‘cybernetic’ library on the ‘unconventional intelligence’ exhibited by nature. Sadly, my group is quiet as well, but I welcome anyone who’d like to debate or discuss such things.