So, I find this community rather inactive. I get that we are a very niche community, but I would guess there are other online forums similar in spirit to this one that are more active. The only other community I know (the Nengo forum) is even more deserted than here.
I've never understood why some people are so dismissive of AI risk. Even narrow AI has the potential to be dangerous when put in charge of high-assurance systems, because these models are black boxes and as such we can't bug-check them. How can they think an autonomous artificial general intelligence, especially a self-improving one, would not have the potential to be dangerous?
Even "safe" narrow AI that's been designed as a simple tool is dangerous, because it can and will empower the wrong people.
I honestly think it's hopeless. We can't win the alignment race; Big Tech is going to build bigger and bigger systems and release them with no regard for the consequences.
So - any tech is dangerous in the hands of "the wrong people."
Look at what China is doing with face recognition and the Social Credit system.
I am reminded of the Black Mirror Nosedive episode.
I guess my argument is: if by chance you managed to figure out how to make cold fusion that could be used both as a power source and as a "clean" hydrogen bomb, would you share it?
A weird thing about tech is that when the various foundational threads come together for one person, the idea is often already in the air for most of the practitioners in the field. Germany was working on nuke tech at the same time that the Allies were working on similar tech.
Once the "Attention Is All You Need" paper was available, the LLM world exploded. The genie is out of the bottle.
When the next papers that bridge from current LLMs to effective AI are released, nobody is going to be able to hold it back. It is possible that the people who release the key papers will not initially know that it will work as well as it does - the same as with the attention paper.
I expect that once the foundational tech is in place, it is unlikely that any individual can stop the tech from being exploited for both good and evil.
Arguments against AI risk that I have heard (but do not necessarily share) are:
runaway intelligence is unlikely
researchers will be able to educate nascent AI the same way people raise children
it will take a long time before research comes close to AGI
whoever's in control of the GPU farm can pull the plug
AI development moratoria are a ploy by big tech to protect their lead
Personally I think several increasingly serious accidents involving weak AI could happen before we reach AGI, and those are dangerous enough. Potential scenarios:
development and accidental or intentional release of extremely effective pathogens
development of military tech that gives one side an extreme edge in the global theater
internet contamination that will make digital communication completely unreliable at every level
internet contamination that will make commodity trade dangerously erratic or even impossible
For what it's worth, I am not a doomer. I think we need AI to get out of the mess we have created on this planet. When Pandora opened her amphora, only hope remained inside. But let's not forget that the ancient Greeks considered hope the ultimate curse bestowed upon humanity. As long as we have hope, we willingly continue to suffer our fate.
I think this community is still pretty active, but the topics of discussion are not for everyone. Personally I care about neuroscience, but not about deep learning, so I tend to stay out of those conversations. And some people who care about deep learning would prefer to ignore other topics like the cerebellum, the basal ganglia, or consciousness. It's not like we're divided into camps; rather, each person is here for their own reasons, and when the topic of the day is irrelevant to you it can seem like a slow day on the forum.
It also depends on the balance of content: a smaller amount of rich, interesting content is far better than lots of panic, speculation, and outright blind guesses.
Sometimes it does seem like a room full of people with the lights out: nobody says anything for a while, and then there is a wave of activity… sounds like a familiar pattern… lol.
Nanoscale, atomically structured supercapacitors are far more dangerous, and they are an inevitability of current technology. Current developments and tech can already wipe out humanity without any new, around-the-corner breakthroughs.
Stuff will happen and life will go on. If you can't control it (or do anything about it), don't bother worrying about it.
Research on AI risks is of paramount importance. I don't know if it's correct, but I think the hate it's getting probably stems from too many political creatures (mixing in ideologies) and pseudo-experts offering their opinions. Politicians have some well-meaning intent to rein in AI's risks, but on the other hand they're also trying to put a leash on big tech companies to ensure they're still top dogs in the food chain, by building an oversized bureaucratic swamp around it.
I own the LinkedIn group (bio)cybernetics; it relates to this forum in that I study/use HTM theories to build a "cybernetic" library on the "Unconventional Intelligence" exhibited by nature. Sadly, my group is quiet as well, but I welcome anyone who'd like to debate/discuss such things. Here is a link: Sign Up | LinkedIn