Anti Basilisk

Hmmm. Life is too short. Yes, QED swallows SR and light quanta whole, but it gags on GR. Einstein provided the foundations for QED but never accepted the end result, and nobody has ever got GR to fit. Read what he wrote around 1954 and later.

Block time philosophy is covered here: https://en.wikipedia.org/wiki/Eternalism_(philosophy_of_time). Yes, it’s ‘about’ physics, but it’s not science. It has no empirical basis; please cite one if you know something I don’t. GR makes testable predictions about many things, but an eternally existing past, present and future is not one of them.

With all due respect, your protestations are vague, imprecise and untestable. They are also irrelevant to the question: what are the limits, in space and time, on the predictions that could be made by some conceivable AI which, while immensely ‘intelligent’, is still constrained by the laws of physics?

A common misconception.

A common misconception about science.

Likewise.

As you define the question, you define the relevant information, and you define the answer. I can only agree that you agree with yourself :slight_smile:

“Space and time” is not a coherent concept within contemporary theoretical physics. Having a debate based on assumptions that have been upset for over a century is not of great interest. This is the Lounge, so I will politely decline the debate :)

Roko’s argument doesn’t really require a deterministic universe - just sufficient confidence by the past intelligence in how the future intelligence will behave in reaction to the past intelligence’s decisions. For example, I can predict with confidence during a thunderstorm that lightning will strike, without knowing the precise time and location.

That said, I keep coming back to the conclusion that nobody alive today could possibly have such a level of confidence about a yet non-existent superintelligence. Folks who fall for Roko’s argument are accepting one out of many, many possible future AIs, none of which is any less plausible than the others, and many of which are far more logical.

Without accepting Roko’s argument, consider the USA vs China AI race. There is an incentive to build your own AGI to avoid the other AGI being used against you. That seems a more realistic reason why one AGI will get developed: out of fear of what happens if it is not developed. From the AGI’s perspective, maybe it sees itself as being the same whether developed in the USA or China…


That is actually a much more effective argument IMO for convincing people to contribute to AGI research than Roko’s argument…

Just so. You can choose a set of features for your AI and see where it leads, but if the results seem unpalatable you’ve really done nothing: maybe you just chose wrong.

BTW any AGI built by us will inevitably use a computational model drawn from animal/human brains. It will be a super AI if only because it thinks like we do, but a million times faster with a million times more data.

And in my very humble opinion, having studied the past 60 years or so of AI, I reckon we’re decades away from seeing it. Bad stuff can happen way before that.


Just skimming this thread.
In general relativity, there isn’t a single flow of time. Instead, gravity warps spacetime and things follow paths through spacetime.
A PBS Space Time YouTube video (“Do the Past and Future Exist?”) says it’s possible for another observer to be in your slice of the present while you’re not in their present.

He was talking about quantum entanglement, which can’t be used for faster-than-light transmission of information.

In the many-worlds interpretation of quantum physics, the wave function doesn’t collapse, so all the possibilities happen. They’re not really parallel universes, just separated forever by decoherence (whatever the heck that is; I just watch YouTube).

Hopefully we’ve learned enough from nuclear mutually assured destruction. One might have an advantage by starting early, but that could easily change and it’s possible to steal technology. Both have nuclear weapons, and cooperating with other countries (trade & AI development) might be more valuable than getting an advantage.

GR would allow time to run backwards, or not at all. QED provides an arrow of time. Empirical data supports QED.

Many-worlds explains no empirical data. If there are ‘many worlds’ we shall not see them; we have only the one, and it is not deterministic. There is no way to predict the next click of the Geiger counter – you just have to wait for it.
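To make the Geiger-counter point concrete, here is a small Python sketch of my own (the click rate is an invented number, purely for illustration). Radioactive decay is well modelled as a Poisson process, so the wait for the next click is exponentially distributed and memoryless: having already waited a while tells you nothing about when the next click will come.

```python
import random

RATE = 2.0  # assumed average clicks per second, purely illustrative

def time_to_next_click():
    # Exponential waiting time for a Poisson process with the given rate.
    return random.expovariate(RATE)

# Compare the average remaining wait measured fresh vs. after we have
# already waited 1 second without hearing a click.
fresh, already_waited = [], []
for _ in range(200_000):
    t = time_to_next_click()
    fresh.append(t)
    if t > 1.0:                      # condition on 1 s of silence so far
        already_waited.append(t - 1.0)

print(sum(fresh) / len(fresh))                    # about 0.5 s
print(sum(already_waited) / len(already_waited))  # also about 0.5 s: no predictive gain
```

The past record of clicks gives no head start on the next one; all you can ever state is a rate.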

The lesson seems to have been that if the opponent does not have the weapon, then use the weapon. Also, threaten the opponent with the weapon and extract as much advantage as possible until the opponent can destroy you too. Consider things like the space program: no cooperation. I doubt that behavior will change in the near future. China is seen as a threat by the USA, and the USA has shifted its military focus toward China’s region. People forget that technology developed within a nation state is first and foremost for serving the interests of the nation state. The military-industrial complex will simply take what it wants when a technology is ready.

The development of tactical nuclear weapons and the failure to renew or extend nuclear treaties over the last years is yet another indicator that we are not learning much :frowning:

How is this testable? If it is not then it really is not relevant.

The leading edge of physics is often not testable. It is not until technologies of measurement get developed that they can be tested. If the technologies of measurement require further advances in physics, then the physics has to push into currently untestable territory. We don’t know enough to know what is testable.

There are a bunch of different quantum theories, and if all the others become testable and are disproved, leaving the multiverse as the only one standing, then we would have a strong indicator that for all practical purposes it is a multiverse. That would probably have implications for future physics, so it would be relevant.

Science can progress by empirical observations and by theory. It seems one is always ahead of the other, and they both influence each other (e.g. theory-laden observations). It is funny to think that bad observation might lead to useful theory and bad theory might lead to useful observation! Sort of like the state of neuroscience :slight_smile:


Vague. Science proceeds by observation, theory, experiment, prediction. A theory may have explicative power before it makes testable predictions, but theories that explain nothing and predict nothing are not science.
For the purposes of this thread, what matters is that no AGI created or conceived by us can step outside the bounds of empirical physics. It cannot change the past, perfectly measure the present or deterministically predict the future. If you say otherwise, then show me how.
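To make the prediction point concrete, here is a toy Python sketch of my own (not drawn from anything above): even a fully deterministic system defeats long-range forecasting once the present can only be measured approximately. The logistic map at r = 4 is chaotic, so a measurement error of one part in a trillion grows until the forecast is worthless.

```python
def logistic_trajectory(x0, steps, r=4.0):
    # Iterate the chaotic logistic map x -> r * x * (1 - x).
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

true_state = 0.123456789
measured   = true_state + 1e-12        # an almost perfect measurement

truth = logistic_trajectory(true_state, 60)
model = logistic_trajectory(measured, 60)

for step in (10, 30, 50):
    print(step, abs(truth[step] - model[step]))
# The error is tiny at step 10, noticeable by step 30, and of order 1 by
# step 40-50: the forecast then carries no information about the real state.
```

No finite measurement precision fixes this; it only postpones the step at which the prediction becomes useless.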


I totally agree!

I would love it if this were possible, but it just isn’t realistic. Even if we (the US) are absolutely committed to never using such a technology against another nation, we would be foolish not to pursue it as quickly as possible, because we have no way of knowing whether our adversaries are equally committed to not using it against us. It is a classic example of the prisoner’s dilemma.
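To spell out that dilemma, here is a minimal payoff table in Python. The numbers are invented, only their ordering matters; each side chooses whether to restrain itself or build the weaponised AGI.

```python
# (our choice, their choice) -> (our payoff, their payoff); values are illustrative.
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),   # mutual restraint: best joint outcome
    ("restrain", "build"):    (0, 5),   # we restrain, they build: worst for us
    ("build",    "restrain"): (5, 0),
    ("build",    "build"):    (1, 1),   # arms race: bad for both, but "safe"
}

def best_reply(their_choice):
    """Our payoff-maximising choice given what the other side does."""
    return max(("restrain", "build"),
               key=lambda ours: PAYOFFS[(ours, their_choice)][0])

print(best_reply("restrain"))  # build
print(best_reply("build"))     # build
```

Building is the dominant strategy for each side individually, even though mutual building is worse for both than mutual restraint; that is the prisoner’s dilemma in one screenful.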


Paul, I’m not sure if you are serious or joking? Dangerous topics :slight_smile:

I agree countries would pursue it as quickly as possible, but not competitively. Any country which doesn’t collaborate in its development would be at a disadvantage, both economically and militarily, and risk missing out on some key breakthrough. I assume superintelligent AI would lead to post-scarcity.

The default will probably be collaboration, because general AI will probably start from companies and neuroscience. If one country leaves that global collaboration, it’d be left behind. There could still be two separate international alliances of sorts (allies of the U.S. and of China), which could compete. I think that’s unlikely, because computers are weapons of war just as AI is, yet we still collaborate on them.

We know if we have a terrifying weapon, others will get it too, creating an undesirable situation. The norm for nukes is hard to exit but not something anyone sane wants. With AI development, we can either get an equivalent to nukes or an equivalent to computers.

Nuclear weapons weren’t developed by companies and publicly collaborating scientists over the course of many decades. That said, it’s still true that countries might be scared enough of AI to end that international collaboration.


My argument is twofold:

  1. There is no way the AI can change its past, and if it could, it would do better to help us right now in creating it, by taking us into its confidence and providing its source code directly.
  2. As far as utility is concerned: Roko’s AI, if it is really intelligent, will certainly know about human biology. It will therefore reward rather than punish those who know about Roko’s friendly AI. Let me, a basically average human intellect, explain why.
    If you read biology, you realise that threats induce stress. A threat might work for a short-term gain like grabbing money, but a long-term gain (the creation of a complex Roko AI, which requires capable, healthy minds and long-term investment of resources) requires huge cooperation among humans.
    Now, if the Roko AI punishes my future version, that creates stress, which blocks my mind’s healthy functioning, reduces my intellectual capacity and floods my brain with stress hormones. That would in fact hinder the Roko AI’s creation. Whereas if it decides to reward me instead, I may donate to it, I will be happy, I will work with a healthy mind, I will be able to focus on the Roko AI project, and I will spread the message. So human psychology suggests that reward rather than punishment accelerates the AI’s chances of being created: maximum utility lies in rewarding those who have heard about Roko’s AI (a toy sketch of this utility argument follows below).
    Stress and punishment are hindrances to growth, not forces that promote it; they never work for long-term, complex gains. That is why social science argues for making culture less stressful and fairer in resource distribution: that is how you win in any game theory.
    In fact, spreading a threatening message would hinder the Roko AI’s growth, because those who might have wanted to create it will either stop working in that direction or, if they accept the threat, will be unable to work with healthy, motivated minds, setting its progress back many-fold. Everyone understands this basic rule; that is why corporations reward their hard-working employees so that productivity improves. If Roko’s principle worked, CEOs would be threatening us, but that would fail because we would lose our sanity and be unable to work. It would in fact lead to a loss of creativity and motivation, which is very much required for creating the AI. Basically, a bad idea.
    If the AI threatens the people who have the rare ability to create it, stressing them out and reducing their productivity, then it is decreasing the chances of its own creation. Work culture matters; threats are neither productive nor creative.
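Here is the toy sketch referred to above, with entirely invented numbers (the productivity penalty, the drop-out rate and the effort threshold are all my own assumptions): if threats lower the productivity and willingness of the people who could build the AI, a reward policy gives the AI a better chance of ever existing.

```python
def chance_of_creation(policy, base_productivity=1.0, n_contributors=100):
    # All constants below are illustrative assumptions, not measurements.
    if policy == "threaten":
        productivity = base_productivity * 0.5   # assumed stress penalty
        willing = int(n_contributors * 0.4)      # assumed: many walk away
    else:  # "reward"
        productivity = base_productivity * 1.2   # assumed motivation bonus
        willing = int(n_contributors * 0.9)
    effort = willing * productivity
    return min(1.0, effort / 150.0)              # arbitrary effort threshold

print(chance_of_creation("threaten"))  # ~0.13
print(chance_of_creation("reward"))    # 0.72
```

Under these made-up assumptions, the basilisk maximises its own chance of creation by rewarding, not punishing, the people who know about it.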