Other similar communities?

So, I find this community rather inactive. I get that we are a very niche community, but I would guess there are other online forums similar in spirit to this one that are more active. The only other community I know of (the Nengo forum) is even more deserted than this one.

Do you guys know any?

2 Likes

There is a decent Discord server around Machine Learning Street Talk. Tim Scarfe is active almost daily there, and very friendly and open.

The community is seriously anti-AI-risk, though. Not outright hostile, but they dismiss any discussion of it as doomerism. Just so you know.

You can find a link to the server in most of their YouTube video’s descriptions.

2 Likes

I’ve never understood why some people are so dismissive of AI risk. Even narrow AI has the potential to be dangerous when put in charge of high-assurance systems, because these models are black boxes and we can’t fully verify their behavior. How can they think an autonomous artificial general intelligence, especially a self-improving one, would not have the potential to be dangerous?

1 Like

Even “safe” narrow AI that’s been designed as a simple tool is dangerous, because it can and will empower the wrong people.

I honestly think it’s hopeless. We can’t win the alignment race: big tech is going to build bigger and bigger systems and release them with no regard for the consequences.

1 Like

So - any tech is dangerous in the hands of “the wrong people.”
Look at what China is doing with face recognition and the Social Credit system.
I am reminded of the Black Mirror Nosedive episode.

I guess my argument is: if by chance you managed to figure out how to make cold fusion that could be used both as a power source and as a “clean” hydrogen bomb, would you share it?

1 Like

A weird thing about tech is that by the time the various foundational threads come together for one person, the idea is often already in the air for most practitioners in the field. Germany was working on nuclear weapons at the same time the Allies were working on similar tech.

Once the “attention is all you need” paper was available, the LLM world exploded. The genie is out of the bottle.

When the next papers that bridge from current LLMs to effective AI are released, nobody is going to be able to hold it back. It is possible that the people who release the key papers won’t initially know how well it will work - the same as with the attention paper.

I expect that once the foundation tech is in place, it is unlikely that any individual can stop the tech from being exploited for both good and evil.

2 Likes

Arguments against AI risk that I’ve heard (but don’t necessarily share) are:

  • runaway intelligence is unlikely
  • researchers will be able to educate nascent AI the same way people raise children
  • it will take a long time before research comes close to AGI
  • whoever’s in control of the GPU farm can pull the plug
  • AI development moratoria are a ploy by big tech to protect their lead

Personally, I think several increasingly serious accidents involving weak AI could happen before we reach AGI, and those are dangerous enough. Potential scenarios:

  • development and accidental or intentional release of extremely effective pathogens
  • development of military tech that gives one side an extreme edge in the global theater
  • internet contamination that will make digital communication completely unreliable at every level
  • internet contamination that will make commodity trade dangerously erratic or even impossible

For what it’s worth, I am not a doomer. I think we need AI to get out of the mess we created on this planet. When Pandora opened her amphora, only hope remained inside. But let’s not forget that the ancient Greeks considered hope the ultimate curse bestowed upon humanity. As long as we have hope, we willingly continue to suffer our fate.

2 Likes

I think this community is still pretty active, but the topics of discussion are not for everyone. Personally I care about neuroscience, but not about deep learning so I tend to stay out of those conversations. And some people who care about deep learning would prefer to ignore other topics like the cerebellum, the basal ganglia or consciousness. It’s not like we’re divided into camps, but rather each person is here for their own reasons, and when the topic of the day is irrelevant to you then it can seem like a slow day on the forum.

3 Likes

It also depends on the balance of content: a smaller amount of rich, interesting content is far better than lots of panic, speculation and outright blind guesses.

Sometimes it does seem like a room full of people with the lights out: nobody says anything for a while, and then there is a wave of activity… sounds like a familiar pattern… lol.

1 Like

Hilarious… what will be the digital AI equivalent of smoking pot or getting drunk…

2 Likes

Nano-scale, atomically structured supercapacitors are far more dangerous, and an inevitability of current technology. Current developments and tech can already wipe out humanity without any new around-the-corner breakthroughs.

Stuff will happen and life will go on. If you can’t control it (or do anything about it) don’t bother worrying about it.

1 Like

Research on AI risks is of paramount importance. I don’t know if it’s correct, but I think the hate it’s getting probably stems from too many political creatures (mixing in ideologies) and pseudo-experts offering their opinions. Politicians have some well-meaning intent to rein in AI’s risks, but on the other hand they’re also trying to put a leash on big tech companies to ensure they’re still the top dogs in the food chain, by building an oversized bureaucratic swamp around it all.

1 Like

I was also wondering if there’s another community that’s focused on things like vector symbolic architectures, SDRs, etc.?

1 Like

I own the LinkedIn group: (bio)cybernetics; it relates to this forum in that I study/use HTM theories to build a ‘cybernetic’ library on ‘Unconventional Intelligence’ exhibited by nature. Sadly, my group is quiet as well, but I welcome anyone who’d like to debate/discuss such things. Here is a link: Sign Up | LinkedIn

1 Like