Some fun thoughts related to the question, “what leads to the ability to have experiences?”:
You’re just one small part of a brain, mistaking thoughts forced upon you by massive synaptic connectivity as your own. It goes all the way down to neurons. Rocks have experiences too.
The real rules are incomprehensible and totally different from our laws of physics, and they enable experiences to be a thing. We’re inside a simulation in the mind of a superintelligent AI in a civilization with the computing power of trillions of galaxies, doing trial-and-error to solve specialized problems after all of science has been solved. The real rules are indicated by things like quantum wave collapse, holographic principle, separate timelines in black holes, etc.
For real though, I think any explanation of consciousness still invokes some special sauce if it doesn’t explain the ability to have experiences. Experiences can’t be an illusion because illusions are experiences. Memory, thoughts, etc. are just add-ons.
Eventually, the bottleneck to knowledge won’t be computing power or comprehension. It’ll be knowability, chaos, and novelty or whatever.
I didn’t mean to imply that we experience all of an infinite universe. It’s not quite at that level of nope.
I was mainly describing an alternative version of ancestor simulations, where it’s not just simulating ancestors and it’s not simulating the same laws of physics. Plus grabby civilizations and theoretical physics (which is full of strange things like the holographic universe) to explain how that’s plausible, and inside the mind of an AI to make it stupider, because I’m not taking it seriously.
I don’t see how anything matters if we don’t experience. That’s more of a clarification how I’m framing things than an argument.
“Experience” is a form of internal reflection upon what we have already learnt. Experience is all about the context of the moment, the reflection. We have already learnt the moment, which allows the process to reflect on it (short-term memory). Experience is a slow process… it’s emotions.
I’m still re-thinking the experience of the phrase being between a rock and a hard place, lol.
By novelty, I meant whether it has seen some data necessary to determine something. I should’ve just said that instead of adding the whatever.
There’s a difference between randomness and chaos, in the chaos theory sense. So chaos, or at least its underlying causes, does have predictive value. For example, you could simulate countless variations of the unknown details of a system to determine a reasonably complete set of possible outcomes.
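A minimal sketch of that idea, using the logistic map as a stand-in chaotic system (the specific map, perturbation size, and step count are my choices for illustration): individual trajectories diverge unpredictably from tiny differences in initial conditions, yet an ensemble of simulated variations still characterizes the set of possible outcomes.

```python
import random

def logistic(x, r=4.0):
    # One step of the logistic map, a classic chaotic system.
    # For r = 4, any x in [0, 1] stays in [0, 1].
    return r * x * (1 - x)

def simulate(x0, steps=50):
    # Iterate the map from an initial condition.
    x = x0
    for _ in range(steps):
        x = logistic(x)
    return x

# Ensemble: many variations of an imperfectly known initial condition.
base = 0.3
outcomes = [simulate(base + random.uniform(-1e-6, 1e-6))
            for _ in range(1000)]

# Individual runs are unpredictable (tiny perturbations grow
# exponentially), but the outcome *set* is still well described:
# every value remains bounded in [0, 1].
print(min(outcomes), max(outcomes))
```

So even when the exact next state can’t be predicted, the chaos has structure: its underlying rules bound and shape what outcomes are possible.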
That still relies on experience being a thing, because it relies on “we” or “I” being a thing and being capable of experiencing the reflection.
The only thing with enough compute power to predict the next state of the universe is the universe itself.
I like to say that the past is gone, the present is chaotic, and the future is somewhat predictable. With the rules as we know them, even if we are but a simulation (which in some parts of an infinite universe we must be), the only way to find out what really comes next is to run the model.
The purpose of intelligence is to model useful parts of reality and make better guesses at the future based on what happened in the past.
I don’t see how anything matters if we don’t experience. That’s more of a clarification of how I’m framing things than an argument.
Nothing matters other than survival. Intelligence is one way to get there, but how you ‘feel’ about it really does not matter a jot (or a tittle).
The bottleneck doesn’t require the only thing left to know being the whole state of the universe.
Sometimes it has to just think things through, which is basically just a simulation. Also, it’s possible for the set of possible outcomes to be describable with rules rather than the result of those rules.
If you mean “matters” in an objective sense, I don’t know what exactly you mean by matters. “A if B” means B matters.
I meant “matters” in the subjective sense. If we don’t experience, we don’t experience any negative effects of being wrong, so we can safely assume we experience.
If you meant “matters” in any subjective sense, survival being all that matters seems pretty arbitrary. Water can be safe to drink but taste good or bad. Evolution doesn’t optimize survival, it just selects whatever works.
If you meant “matters” in the sense of what rules we should follow to optimally perform the motives of our neural system, you’re doing that already regardless of what you do. Language-coded rules cannot describe how to follow motives better than whatever your motives do.
Schrödinger’s cat would object to that proposition.
A quantum computer that is an entangled universe (qubit count = particles in the universe) could model an infinite number of universes at the same time, if applying an infinite amount of imagination… lol.
It would seem that cat is rather out of its depth here, as it only encodes a single qubit.
I would have thought it was rather obvious that (a) the universe already is a quantum computer and (b) being infinite, there exists a one-to-one correspondence between itself and any of its infinite subsets. IOW, any infinite universe is much the same as any other.
Ok, yes, in an unstructured environment, where there is no encapsulation, anything is about equally likely to affect anything else. That’s not the case for your brain or society; the number of “actors” in climate or fluid dynamics is hugely greater than the number of humans. All that assumes the simulated you also exists in the real world, and that it’s easier for a superintelligence to simulate than to directly control. Utterly unlikely.
Novel data is data that wasn’t predicted, but it’s only valuable in proportion to its own predictive power, the extent to which it increases the predictive power of the system. Same as “comprehension”, not some exotic new factor. Yes, some ultimately predictive data is less structured than the current low-hanging fruit, but that’s just a matter of degree.
I don’t understand some of what you’re saying, but does this really matter? I’m just throwing stuff at the wall, because consciousness seems like an impossible problem to solve.
My point is that, if there is no objective explanation, it doesn’t necessitate anything spiritual. Just the simulation hypothesis, modified to be a simulation of something non-physical (where non-physical means outside our physics).
I argued why those kinds of simulations might be done, but that doesn’t really matter. If you want a clear, cohesive argument for the simulation hypothesis, read the philosophy paper by Nick Bostrom on ancestor simulations.
I also argued that it’s in the mind of a superintelligence. I just thought that was funny and wanted to make it clear that I’m not discussing this scientifically. Consciousness is partially a philosophy thing.
The arguments are persuasive (whatever that means) and in an infinite universe it must have already happened. So what?
It’s all a matter of levels. If we are the bottom level, the level ‘above’ is a simulation, and that level is a simulation in the level ‘above’ and so on. It’s simulations all the way up. So what?
We cannot prove we are in a simulation, or indeed learn anything about the level above, unless the simulator allows it. So we can go looking for a ‘cheat’ that would allow us to look upwards, or we go on assuming this is all there is.
For my money there is no ‘cheat’ and the only effective strategy is to go on ‘as if’ this is not a simulation.
It’s just an extrapolation of the current gaming fad into infinity; nothing persuasive about it. And the article was meant mostly as a parody of convergent fads, all of which are moments in time.