The math that describes it is known precisely. Specific implications of this are known. There's no information transfer, there's no time delay, etc.
And yet lay people keep incorrectly thinking it can be used for communication. Because lay-audience descriptions by experts keep using words that imply causality and information transfer.
This is not a failure of the experts to understand what's going on. It's a failure to translate that understanding to ordinary language. Because ordinary language is not suited for it.
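To make the "no information transfer" point concrete, here is a rough numerical sketch (my own toy illustration, using nothing beyond numpy): whatever basis Alice picks, Bob's local statistics for a Bell pair stay 50/50, so her choice can't be used to send him a bit.

```python
# Toy check of no-signaling for the Bell state |Phi+> = (|00> + |11>)/sqrt(2).
# Bob's marginal outcome probabilities are 50/50 regardless of Alice's basis.
import numpy as np

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # basis |00>,|01>,|10>,|11>

def alice_basis(theta):
    """Alice measures along an axis rotated by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return [np.array([c, s]), np.array([-s, c])]  # her two outcome vectors

def bob_marginals(theta):
    """Bob's outcome probabilities, given Alice's basis choice theta."""
    p = np.zeros(2)
    for a in alice_basis(theta):               # sum over Alice's outcomes
        for j, b in enumerate(np.eye(2)):      # Bob measures in the 0/1 basis
            p[j] += abs(np.kron(a, b) @ phi_plus) ** 2
    return p

for theta in [0.0, 0.3, 1.1]:
    print(theta, bob_marginals(theta))         # always [0.5, 0.5]
```

That invariance is exactly why entanglement can't carry a message, however "spooky" the correlations look once the two sides compare notes.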
> Its “founding fathers” all admitted that it’s a bunch of guesswork and that the models we have are arbitrary and lack something essential needed for proper understanding.
We don't have a model of why it works / if there's a more comprehensible layer of reality below it. But it's characterized well enough that we can make practical useful things with it.
> This is not a failure of the experts to understand what's going on.
> We don't have a model of why it works / if there's a more comprehensible layer of reality below it.
Counterpoint:
You’ve just admitted they don’t understand what’s going on — they merely have descriptive statistics. No different than a DNN that spits out incomprehensible but accurate answers.
So this is an example affirming that QM isn’t understood.
QM isn't less well understood than Newton's mechanics, though. Neither covers the "why". But both provide a model of the world; the model (!) is understood very precisely, and it matches observations in certain parts of reality, like all reasonable scientific theories do. They have limits, and beyond those limits they don't apply, but that doesn't mean they are not understood. It's reality that is not sufficiently well understood, and by coming up with more and more refined models/theories we keep approximating it, likely without ever arriving at a "fully correct" theory encompassing everything without limits. (But that's ok.)
The only descriptive / empirical parts are the particle masses.
But it sounds like your objection is that reality isn't allowed to be described by something as weird as complex values that you multiply to get probabilities, so there necessarily must be another layer down that would be more amenable to lay descriptions?
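In case it helps anyone reading along: the "weird" rule in question is just the Born rule. In my notation, a single qubit state and its outcome probabilities look like

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad
P(0) = |\alpha|^2,\quad P(1) = |\beta|^2, \qquad
\text{e.g. } \alpha = \beta = \tfrac{1}{\sqrt{2}} \;\Rightarrow\; P(0) = P(1) = \tfrac{1}{2}.
\]

Complex amplitudes in, real probabilities out; that is the entire "weirdness" being objected to.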
My point is that their models are fitted tensors/probability distributions, often retuned to fit new data (eg, the epicyclic nature of collider correction terms) — the same as a fitted DNN would be.
Their inability to describe what is happening is precisely the same as in the DNN case.
Actually it is just the opposite. QED is comprehensive and, as far as we know, accurate.
But it is impractical to use in most situations so major simplifications are required.
The correction factors that you mention are the result of undoing some of those simplifications, sometimes by including more of the basic theory and sometimes by saying something like "we know that we ignored something important here and it has to have this shape, but we can only kinda sorta measure how big it might be because it's too hard to actually calculate".
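A concrete example of "including more of the basic theory" (numbers approximate, quoted from memory): the electron's anomalous magnetic moment. Each successive term is derived from the underlying theory, and the series homes in on the measured value.

```python
# Rough sketch: first two terms of QED's perturbative series for the
# electron's anomalous magnetic moment a_e, versus the measured value.
# Constants and coefficients quoted from memory to a few digits.
import math

alpha = 1 / 137.035999           # fine-structure constant (approx.)
x = alpha / math.pi

a1 = 0.5 * x                     # Schwinger's alpha/(2*pi) term
a2 = -0.328478965 * x**2         # second-order coefficient (approx.)

print(a1)            # ~0.0011614
print(a1 + a2)       # ~0.0011596  (measured: ~0.00115965218)
```

The corrections here are derived, not freely fitted; the measured inputs are things like alpha and the particle masses.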
As I pointed out, eg, the high number of correction terms when trying to tune the model to actual particle accelerator data is evidence that our model is missing something. (And some things are plain missing: neutrino behavior, dark matter, dark energy, etc.)
In the same way that a high number of epicycles was evidence that our theory of geocentrism was wrong — even though adding epicycles did produce increasingly accurate results.
> As I pointed out, eg, the high number of correction terms when trying to tune the model to actual particle accelerator data is evidence that our model is missing something. (And some things are plain missing: neutrino behavior, dark matter, dark energy, etc.)
This is rather a problem of the standard model. Physicists will immediately admit that something is missing there, and they are incredibly eager to find a better model. But basically every good attempt that they could come up with (e.g. supersymmetric extensions of the standard model; but I'm not a physicist) has by now been (at least mostly) falsified by accelerator experiments.
The comment you originally replied to was about entanglement, not the entire standard model. The math there is very simple, not built on correction terms.
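For example, the textbook singlet state and its predicted measurement correlation fit on one line (my notation, standard result):

\[
|\psi^{-}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|01\rangle - |10\rangle\bigr), \qquad
E(\hat a, \hat b) = \langle \psi^{-}|\,(\vec\sigma\!\cdot\!\hat a)\otimes(\vec\sigma\!\cdot\!\hat b)\,|\psi^{-}\rangle = -\,\hat a\cdot\hat b = -\cos\theta_{ab}.
\]

No correction terms, no fitted parameters; just that.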
... So it's about not being able to observe short-lived particles directly, and having to work backwards from longer-lived interaction or decay products? Or about how those intermediate particles they have to calculate through also have empirically determined properties?
Most of that is measured corrections, not a theoretical model.
Entanglement is just a statistical effect in our measurements — we can’t say what is happening or why that occurs. We can calculate that effect because we’ve fitted models, but that’s it.
Similarly, to predict proton collisions, you need to add a bunch of corrective epicycles (“virtual quarks”) to get what we measure out of the basic theory. But adding such corrections is just curve fitting via adding terms in a basis to match measurement. Again, we can’t say what is happening or why that occurs.
We have great approximators that produce accurate and precise results — but we don’t have a model of what and why, hence we don’t understand QM.
> Entanglement is just a statistical effect in our measurements — we can’t say what is happening or why that occurs. We can calculate that effect because we’ve fitted models, but that’s it.
Bell's theorem was a prediction from math before people found ways to measure and confirm it. A model based on fitting to observations would have happened in the other order.
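To spell that out (a quick sketch assuming the textbook singlet correlation E(a, b) = -cos(a - b) and the standard CHSH angle choices, not any particular experiment's data):

```python
# CHSH: local hidden-variable models satisfy |S| <= 2; the quantum
# prediction for the singlet state reaches 2*sqrt(2) ~ 2.83.
import numpy as np

def E(a, b):
    """Quantum correlation for the spin singlet, angles in radians."""
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2            # Alice's two settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), 2 * np.sqrt(2))      # ~2.828 vs. the local-realist bound of 2
```

Both the bound and the quantum value come straight out of the math; the experiments that measured the violation came later.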
> A model based on fitting to observations would have happened in the other order.
We’d already had models which said that certain quantities were conserved in a system — and entanglement says that is true of certain systems with multiple particles.
To repeat myself:
> Entanglement is just a statistical effect in our measurements — we can’t say what is happening or why that occurs.
Bell’s inequality is just a way to measure that correlation, ie, the statistical effect — and I think it supports my point that the way to measure entanglement is via a statistical effect.
ER=EPR is an example of a model that tries to explain the what and why of entanglement.