Discussion about abusing AI technology cont'd from stream

Hydride regulation prevents nuclear weapons from being easily created or abused. Are there similar mechanisms for AI (Skynet) that could limit its abuse?

-Limiting GPU acceleration
-Auditing Code
-Limiting Quantum Computing acceleration

If Einstein were to write a letter to a president today about AI, what would such a letter look like, and what would it propose?

2 Likes

Welcome to the community.

MrZ, I do understand your desire to limit the technology of destruction. I can’t think of a time when that has worked very well. It has been tried here and there throughout history, and opinions vary on how well it has worked.

That said, the biggest problem is that only some people will follow the guidelines - the people who don’t tend to be the lawbreakers (by definition), and they are also the ones most likely to do harm. The net effect is that you cripple the advancement of the positive benefits without really slowing down the harms.

You mention hydrides. I don’t see that as the pacing element for the development of nukes - chemistry is something that any country can develop from scratch without too much trouble. The most common pacing element is the centrifuges: the technology to separate and concentrate the correct isotope of uranium. This is done by converting the uranium into a gaseous compound and centrifuging off the heavier molecules that contain the unwanted isotope. Note that several countries have developed this technology even with international attempts to limit nuclear proliferation.

I can’t address your “letter to the president” question, but let me address the issues you raise.

-Limiting GPU acceleration
There are literally millions of gaming computers with GPUs in the United States. Every bitcoin mining operation has more than enough GPU power to be considered a supercomputer. I would say that every engineering workstation has graphics acceleration to run CAD software. My daily-driver computer is an off-lease workstation ($300 off Amazon) with a high-end Quadro GPU in it. This train has left the station.
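
To put rough numbers on it (both figures below are ballpark assumptions for illustration, not measurements):

```python
# Back-of-envelope aggregate compute of US consumer GPUs.
# Both numbers are rough assumptions, not measurements.
flops_per_gaming_gpu = 10e12      # ~10 TFLOPS FP32 for a high-end consumer card
gaming_gpus_in_us = 1_000_000     # "literally millions", taken conservatively
aggregate_flops = flops_per_gaming_gpu * gaming_gpus_in_us
print(f"~{aggregate_flops / 1e15:,.0f} petaflops nominal aggregate")
# Even a small, coordinated fraction of that rivals a national supercomputer.
```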

-Auditing Code
You are asking this on a site full of hobbyists who code at home. How do you propose to decide who gets to code and what is acceptable? Many (most?) of them have very powerful gaming computers, perfectly capable of running CUDA GPU code.

-Limiting Quantum Computing acceleration
Let me know if you are aware of any actually running quantum technology. To the best of my knowledge, the field is currently working at the single-gate level and is far from a working computer.
I would be far more worried about optical computing as a doable thing.

The question of runaway AI has been a topic of hot discussion for a long time, and this is perhaps one of the most thoughtful discussions I have seen. And it’s free!

1 Like

Thanks for joining the forum and posting, @mrZ, but I agree with Mark here. Restricting compute power is already a lost cause. If compute power is what ends up being the thing that kills humans, so be it.

Never fear: the general consensus on this site is that the current AI paradigm of conventional neural networks will not take us to AGI, so the restrictions you listed are unnecessary. The more widely held belief is that AGI will come from understanding how the existing human brain reasons about the world, so a letter to the president would probably advise him to fold experimental neuroscience techniques like multiphoton microscopy, and other novel methods for understanding the brain, into government control, and to restrict the flow of information about them to outside researchers.

2 Likes

Hello mrZ, welcome to the site.

I want to avoid being political, but I don’t think the current president of Mickey Mouse-land is the right person to send such a letter to. However, a few years ago a number of prominent researchers (I think at the initiative of Max Tegmark) published this Open Letter and this Research Guidance Document.

1 Like

With Donald Trump and Mike Pence in office: that’s a very scary thought.

In my opinion, Einstein would be far, far more worried about all the nukes in the hands of scientifically irresponsible warmongers.

2 Likes

Also, let’s not forget that Szilard and Einstein’s letter to Roosevelt, despite their fears, actually led to the Manhattan Project.

1 Like

I did not know about that!

This quote is interesting:

In 1947 Einstein told Newsweek magazine that “had I known that the Germans would not succeed in developing an atomic bomb, I would have done nothing.”

1 Like

Considering the growing amount of US debt and the condoned religious hostility towards science, a letter to Chinese leaders might be more effective.

Sorry to have to be so negative, it’s just that I can think of many better things to be worried about right now.

1 Like

Thank you to everyone for all the great feedback.

GPUs are already regulated; for example, folks at the Russian Academy of Sciences can’t even buy new GPUs officially from Nvidia (my actual experience).

Auditing code might be tough, but I had some hope due to DARPA’s Cyber Grand Challenge.

Limiting quantum computing acceleration: there already exists a Canadian company selling a 2048-qubit machine - they’re called D-Wave. They currently offer one second of computing time to researchers who want to get their feet wet.

Our work involves human genomes, and in addition to NuPIC we are also testing Nengo and TensorFlow. If the problem is set up as an energy-minimization problem, then it lends itself to acceleration by this particular quantum computer. Our initial estimates put it well above a few hundred Xeon machines for solving systems of coupled differential equations. Check out their Leap/Ocean API for more.
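
As a rough illustration of the energy-minimization framing (a toy QUBO only, not our actual model - it uses the open-source dimod package from the Ocean SDK, and the coefficients are placeholders):

```python
import dimod

# Toy quadratic unconstrained binary optimization (QUBO) problem.
# A real problem would encode the discretized system being minimized;
# these coefficients are placeholders for illustration only.
Q = {("x0", "x0"): -1.0, ("x1", "x1"): -1.0, ("x0", "x1"): 2.0}
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# Brute-force solve locally; on Leap hardware you would instead sample with
# EmbeddingComposite(DWaveSampler()) from the dwave.system package.
result = dimod.ExactSolver().sample(bqm)
print(result.first.sample, result.first.energy)
```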

The possibility of abuse of upcoming genomics information technology - specifically one of the projects our team is working on - puts us in a bit of a pickle, so we ended up reaching out to DARPA and the DRDC for oversight and guidance. Perhaps that is overboard, but I would rather not have people get hurt just because some kids can cook up a virus in their basement #biohacks.

Cheers and have a wonderful day folks.

2 Likes

You do have a unique situation, as you are dealing with the same people who restricted copiers, typewriters, and computers to try to control free speech in the past.

The rest of the world benefited from this tech anyway.

How did that work out for the Soviet Union? States still do what they want, nefarious actors still do what they want, and citizens and researchers are prevented from doing useful work.

If you say neural networks started getting interesting around 2010, then the collapse of Moore’s Law has had a serious slowing effect. Computers should be roughly 32 times more powerful than they were then, whereas in fact they are only 3 or 4 times more powerful. And the performance gap is only getting wider with time.
Greatly slowing the singularity.
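
The 32x figure is just doubling arithmetic; a quick sanity check (the start year and the classic ~2-year doubling period are both rough assumptions):

```python
# Expected speedup if Moore's-Law doubling had held from 2010 to ~2020.
start_year, current_year = 2010, 2020     # assumed endpoints
doubling_period_years = 2                 # the classic rough figure
expected = 2 ** ((current_year - start_year) / doubling_period_years)
observed = 3.5                            # the "3 or 4 times" quoted above
print(f"expected ~{expected:.0f}x, observed ~{observed}x")  # ~32x vs ~3-4x
```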
Memory technology is still getting significantly better with time; maybe go with that rather than raw flops.
Anyway, we are definitely going to Cook the Planet with Carbon Dioxide over the next 100 years. It would be nice to have a technology that can either deal with that for us, or replace us if it can’t be dealt with. That is, if you see any intrinsic merit in intelligence at all.

Talking about nukes, I believe that at one stage the Manhattan Project was consuming 10% of the electricity generated in the US. If you can hack into US power plants, you can find out from the load data whether the US has a Manhattan Project for AI.
https://archive.org/details/in.ernet.dli.2015.84862/page/n2
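
A toy sketch of that load-data idea (purely synthetic numbers; a real national load series would be far noisier and, as the next reply points out, confounded by bitcoin mining, grow houses, and data centers):

```python
import numpy as np

# Flag a sustained step increase in a (synthetic) daily national load series.
rng = np.random.default_rng(1)
load = 10_000 + rng.normal(0, 50, size=730)   # two years of daily load, arbitrary units
load[365:] *= 1.03                            # a hidden project switches on in year two

baseline = load[:365].mean()                            # first-year average
rolling = np.convolve(load, np.ones(30) / 30, "valid")  # 30-day moving average
excess = (rolling - baseline) / baseline
print(f"max sustained excess load: {excess.max():.1%}")  # ~3%, above a 1% threshold
```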

How does that square with bitcoin mining and grow houses?
Or Google data pods?

Yeah, I suppose you would have difficulty detecting a project pulling less than 1% of generated power. You could also look for an infrared heat signature in satellite imagery to see whether there was a major project.
I would say there isn’t such a project at the moment, and there won’t be until an arms race really gets going.

These projects tend to be black at first. You can’t be sure that this is not already well underway.
China is already doing 1984/Black Mirror-style population control via automated Big Brother, so that level of off-the-shelf tech is already fielded.

Considering big data, automated speech and video recognition (and generation), good automated translation, deep taps on the world’s communications systems, and access to essentially limitless computing power - who can say what can be done with it?

Russia did it by hand, without really automating anything, and managed to saddle the USA with an idiot in control. This was a powerful attack that you have to acknowledge as highly successful. A more powerful and automated system could be capable of so much more. It really could be everywhere at once and influence virtually every issue in the world in a coordinated and undetectable way, if that were deemed a useful enterprise. Imagine well-known (but simulated) figures addressing targeted populations in multiple countries, driving them to foment vigilante-based action to destroy foreign investments. Kill the intellectuals in the population. Drive countries back into the dark ages. So much mischief.

I guess what you say is true. I am really fascinated by the drama around the presidency and Alzheimer’s. Super interesting.
Dot products falling apart and nonlinearities out of control.

Deep learning is fundamentally very simple. The threats and the benefits from it are to be found in the applications you construct from it, not in the thing itself, I suppose.
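
For what it’s worth, that “fundamentally very simple” core really is just dot products and nonlinearities - a minimal sketch with untrained random weights, purely for illustration:

```python
import numpy as np

# A bare-bones two-layer network forward pass: dot products plus a nonlinearity.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))              # a batch of 4 inputs, 8 features each
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)

h = np.maximum(0.0, x @ W1 + b1)         # dot product + ReLU nonlinearity
y = h @ W2 + b2                          # another dot product
print(y.shape)                           # (4, 2)
```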