Eigenstate principle: a projection observable corresponding to the preparation basis of a quantum state is value definite.
The requirement called admissibility is used to avoid outcomes that are impossible to obtain according to quantum predictions, predictions which have overwhelming experimental confirmation:
Admissibility principle: definite values must not contradict the statistical quantum predictions for compatible observables of a single quantum.
Non-contextuality principle: the measurement results (when value definite) do not depend on any other compatible (i.e. simultaneously measurable) observable which can be measured in parallel with the value definite observable.
The Kochen-Specker Theorem (Kochen and Specker 1967) states that no value assignment function can consistently make all observable values definite while maintaining the requirement that the values are assigned non-contextually. This is a global property: non-contextuality is incompatible with all observables being value definite. However, it is possible to localize value indefiniteness by proving that even the existence of two non-compatible value definite observables is in contradiction with admissibility and non-contextuality, without requiring that all observables be value definite. As a consequence, we obtain the following “formal identification” of a value indefinite observable:
Any mismatch between preparation and measurement context leads to the measurement of a value indefinite observable.
This fact is stated formally in the following two theorems. As usual, we denote the set of complex numbers by ℂ and vectors in the Hilbert space ℂⁿ by ǀ·>; the projection onto the linear subspace spanned by a non-zero vector ǀφ> is denoted by Pφ. For more details see Laloë (2012).
THEOREM 1.1.– Consider a quantum system prepared in the state ǀψ> in a Hilbert space ℂⁿ of dimension n ≥ 3, and let ǀφ> be any state neither parallel nor orthogonal to ǀψ>. Then the projection observable Pφ is value indefinite under any non-contextual, admissible value assignment.
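For concreteness, the hypothesis of Theorem 1.1 can be checked numerically: the projection observable Pφ falls under the theorem exactly when the absolute value of the normalized inner product of ǀψ> and ǀφ> lies strictly between 0 and 1. The sketch below is only an illustration of this condition; the function name and the example vectors are ours, not part of the cited results.

```python
import numpy as np

def satisfies_theorem_1_1(psi, phi, tol=1e-12):
    """True when phi is neither parallel nor orthogonal to psi, i.e. when
    Theorem 1.1 makes the projection observable onto phi value indefinite."""
    psi = psi / np.linalg.norm(psi)
    phi = phi / np.linalg.norm(phi)
    overlap = abs(np.vdot(psi, phi))   # |<psi|phi>| for unit vectors
    return tol < overlap < 1.0 - tol

# Dimension 3, as required by the theorem; the vectors are arbitrary examples.
psi = np.array([1.0, 0.0, 0.0])
print(satisfies_theorem_1_1(psi, np.array([1.0, 1.0, 0.0])))  # True: neither parallel nor orthogonal
print(satisfies_theorem_1_1(psi, np.array([0.0, 1.0, 0.0])))  # False: orthogonal
print(satisfies_theorem_1_1(psi, np.array([2.0, 0.0, 0.0])))  # False: parallel
```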
Hence, accepting that definite values exist for certain observables (the eigenstate principle) and behave non-contextually (non-contextuality principle) is enough to locate and derive, rather than postulate, quantum value indefiniteness. In fact, value indefinite observables are far from being scarce (Abbott et al. 2014b).
THEOREM 1.2.– Assume the eigenstate, non-contextuality and admissibility principles. Then, the (Lebesgue) probability that an arbitrary observable is value indefinite is 1.
Theorem 1.2 says that all value definite observables can be located in a small set of probability zero. Consequently, value definite observables are not the norm; they are the exception, confirming a long-held intuition in quantum mechanics.
The above analysis not only offers an answer to question (1) from the beginning of this section, but also indicates a procedure to generate a form of quantum random bits (Calude and Svozil 2008; Abbott et al. 2012, 2014a): to locate and measure a value indefinite observable. Quantum random number generators based on Theorem 1.1 were proposed in (Abbott et al. 2012, 2014a). Of course, other possible sources of quantum randomness may be identified, so we are naturally led to question (2): what is the quality of quantum randomness certified by Theorem 1.1, and, if other forms of quantum randomness exist, what qualities do they have?
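To make the procedure concrete, the following sketch classically simulates the statistics of such a protocol: a system prepared in ǀψ> is repeatedly “measured” with the projection observable Pφ, where ǀφ> is chosen neither parallel nor orthogonal to ǀψ>, each outcome being 1 with the Born probability given by the squared overlap. This is only a pseudo-random simulation of the expected statistics (the states chosen are arbitrary examples); it does not certify randomness and is not the physical generator proposed in the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)  # classical pseudo-randomness, used only to mimic the statistics

def simulate_projection_measurements(psi, phi, n):
    """Simulate n measurements of the projection observable P_phi on systems
    prepared in state psi: each bit is 1 with the Born probability |<phi|psi>|^2."""
    psi = psi / np.linalg.norm(psi)
    phi = phi / np.linalg.norm(phi)
    p_one = abs(np.vdot(phi, psi)) ** 2
    return (rng.random(n) < p_one).astype(int)

# Preparation state and a measurement direction neither parallel nor orthogonal to it;
# with squared overlap 1/2 the simulated bits are unbiased in the limit.
psi = np.array([1.0, 0.0, 0.0])
phi = np.array([1.0, 1.0, 0.0])
print(simulate_projection_measurements(psi, phi, 20))
```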
To this aim we are going to look, in more detail, at the unpredictability of quantum randomness certified by Theorem 1.1. We will start by describing a non-probabilistic model of prediction – proposed in (Abbott et al. 2015b) – for a hypothetical experiment E specified effectively by an experimenter.
The model uses the following key elements:
1) The specification of an experiment E for which the outcome must be predicted.
2) A predicting agent or “predictor”, which must predict the outcome of the experiment.
3) An extractor ξ, a physical device that the predictor uses to (uniformly) extract information pertinent to prediction that may be outside the scope of the experimental specification E. This could be, for example, the time, the measurement of some parameter, the iteration of the experiment, etc.
4) The uniform, algorithmic repetition of the experiment E.
In this model, a predictor is an effective (i.e. computational) method to uniformly predict the outcome of an experiment using finite information extracted (again, uniformly) from the experimental conditions along with the specification of the experiment, but independent of the results of the experiments. A predictor depends on an axiomatic, formalized theory, which allows the prediction to be made, i.e. to compute the “future”. An experiment is predictable if any potential sequence of repetitions (of unbounded, but finite, length) can always be predicted correctly by such a predictor. To avoid prediction being successful just by chance, we require that the correct predictor – which can return a prediction or abstain (prediction withheld) – never makes a wrong prediction, no matter how many times it is required to make a new prediction (by repeating the experiment), and cannot abstain from making predictions indefinitely, i.e. the number of correct predictions can be made arbitrarily large by repeating the experiment enough times.
We consider a finitely specified physical experiment E producing a single bit x ∈ {0,1}. Such an experiment could, for example, be the measurement of a photon’s polarization after it has passed through a 50:50 polarizing beam splitter, or simply the toss of a physical coin with initial conditions and experimental parameters specified finitely.
A particular trial of E is associated with the parameter λ, which fully describes the “state of the universe” in which the trial is run. This parameter is “an infinite quantity” – for example, an infinite sequence or a real number – structured in a way dependent on the intended theory. The result below, though, is independent of the theory. While λ is not in its entirety an obtainable quantity, it contains any information that may be pertinent to prediction. Any predictor can have practical access to only a finite amount of this information. We can view λ as a resource from which one can extract finite information in order to predict the outcome of the experiment E.
An extractor is a physical device selecting a finite amount of information included in λ without altering the experiment E. It can be used by a predicting agent to examine the experiment and make predictions when the experiment is performed with parameter λ. So, the extractor produces a finite string of bits ξ (λ). For example, ξ (λ) may be an encoding of the result of the previous instantiation of E, or the time of day the experiment is performed.
A predictor for E is an algorithm (computable function) PE which halts on every input and outputs either 0, 1 (cases in which PE has made a prediction), or “prediction withheld”. We interpret the last form of output as refraining from making a prediction. The predictor PE can utilize, as input, the information ξ (λ) selected by an extractor encoding relevant information for a particular instantiation of E, but must not disturb or interact with E in any way; that is, it must be passive.
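The roles of the extractor and the predictor can be made concrete with the following sketch. The representation of λ, the toy extractor and the example prediction rule are purely illustrative assumptions (they are not part of the formal model); the point is only that PE is a total computable function on the finite strings ξ (λ), returning 0, 1 or “prediction withheld”.

```python
from typing import Callable, Optional

# The parameter lambda is an infinite object; here it is represented abstractly as a
# function from indices to bits -- an illustrative choice, not part of the formal model.
Lambda = Callable[[int], int]

def extractor(lam: Lambda) -> str:
    """A toy extractor xi: selects a finite amount of information from lambda,
    encoded as a bit string (here, its first 8 bits)."""
    return "".join(str(lam(i)) for i in range(8))

# A predictor P_E is a total computable function on the extracted strings, returning
# 0 or 1 (a prediction) or None (standing for "prediction withheld").
Predictor = Callable[[str], Optional[int]]

def example_predictor(xi: str) -> Optional[int]:
    """A toy prediction rule: predict 1 if the extracted string has even parity,
    otherwise withhold the prediction."""
    return 1 if xi.count("1") % 2 == 0 else None

# An arbitrary stand-in for the (inaccessible) parameter of one trial.
lam: Lambda = lambda i: i % 2
print(example_predictor(extractor(lam)))  # 1 (the string "01010101" has even parity)
```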
A predictor PE provides a correct prediction using the extractor ξ for an instantiation of E with parameter λ if, when taking as input ξ (λ), it outputs 0 or 1 (i.e. it does not refrain from making a prediction) and this output is equal to x, the result of the experiment.
Let us fix an extractor ξ. The predictor PE is k-correct for ξ if there exists an n ≥ k such that when E is repeated n times with associated parameters λ1, …, λn, producing the outputs x1, x2, …, xn, PE outputs the sequence PE (ξ (λ1)), PE (ξ (λ2)), …, PE (ξ (λn)) with the following two properties (illustrated in the sketch after this definition):
1) no prediction in the sequence is incorrect, and
2) in the sequence, there are k correct predictions.
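These two conditions can be phrased operationally, as in the sketch below (with the same illustrative conventions as before, and None standing for “prediction withheld”), which checks them for a single run of n repetitions.

```python
from typing import Optional, Sequence

def is_k_correct_run(predictions: Sequence[Optional[int]],
                     outcomes: Sequence[int], k: int) -> bool:
    """Check the two conditions of k-correctness on a single run of n repetitions:
    (1) no prediction in the sequence is incorrect, and
    (2) the sequence contains at least k correct predictions.
    None stands for "prediction withheld" and is neither correct nor incorrect."""
    correct = 0
    for prediction, outcome in zip(predictions, outcomes):
        if prediction is None:
            continue
        if prediction != outcome:   # a single wrong prediction rules the run out
            return False
        correct += 1
    return correct >= k

# Three predictions made (all correct), one withheld: the run is 3-correct but not 4-correct.
print(is_k_correct_run([1, None, 0, 1], [1, 0, 0, 1], k=3))  # True
print(is_k_correct_run([1, None, 0, 1], [1, 0, 0, 1], k=4))  # False
```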
The repetition