This blog post explores whether Karl Popper’s philosophy of falsificationism and the Copenhagen interpretation of quantum mechanics are mutually compatible.
Karl Popper, best known for his falsificationism, also set out his own interpretation of probability while developing his philosophy in his book ‘The Logic of Scientific Discovery’. On the basis of that interpretation, he ends up largely rejecting the Copenhagen interpretation, the most dominant interpretation of quantum mechanics. This stance stems from the logic and assumptions he established to ground his philosophy of falsificationism. Yet Popper’s falsificationism is a highly influential idea within the philosophy of science, and the Copenhagen interpretation likewise enjoys overwhelming support among physicists. I wish to examine whether the Copenhagen interpretation of quantum mechanics is truly incompatible with Karl Popper’s philosophy.
First, let me briefly explain Karl Popper’s falsificationism and philosophy of probability. Karl Popper argues that it is fundamentally impossible to prove scientific facts through induction. Induction is the process of deriving universal statements from singular statements. No matter how many individual cases are gathered, it is impossible to perfectly prove a universal fact. A famous example of this is the statement U: “All swans are white.” No matter how many swans are observed and confirmed to be white, this can never perfectly prove U. Even if one were to observe every swan existing on Earth at a single point in time and confirm they are all white, U would cease to be true if a non-white swan existed in the past or if a non-white swan were to be born in the future. Therefore, Popper rejects the scientific methodology based on induction. This is precisely what is called “The Problem of Induction.”
Instead of induction, Popper advocated a scientific methodology based on deduction. Consider a general proposition U. No matter how many individual propositions s1, s2, … are consistent with U, this never guarantees that U is true. However, a single individual proposition s1’ that contradicts U suffices to demonstrate that U is false. Consider the example from the previous paragraph. Let U be “All swans are white.” Suppose we observe 1000 swans and find them all to be white, recording these observations as s1, s2, …, s1000. The observations from s1 to s1000 do not provide a perfect guarantee that all swans are white. However, if the 1001st swan observed is black, then U is definitely false. This is how U is falsified. The scientific methodology proceeding in this manner is called Falsificationism.
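The asymmetry Popper describes can be made concrete in a few lines of code: no number of confirming observations proves a universal statement, but a single counterexample falsifies it. This is only an illustrative sketch of the logic, not anything from Popper himself.

```python
# Minimal sketch of Popper's asymmetry: confirmation never proves a
# universal claim, but one counterexample refutes it.

def is_falsified(universal_claim, observations):
    """Return True as soon as any observation violates the claim."""
    return any(not universal_claim(obs) for obs in observations)

all_swans_are_white = lambda swan: swan == "white"

# 1000 confirming observations: the claim survives, but is not thereby proven.
observations = ["white"] * 1000
print(is_falsified(all_swans_are_white, observations))  # False: not falsified (yet)

# A single black swan is enough to falsify the claim outright.
observations.append("black")
print(is_falsified(all_swans_are_white, observations))  # True: falsified
```

Note that surviving the first check does nothing to establish the claim for unobserved swans; the function can only ever report falsification, never proof.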
Falsificationism also serves as the criterion for distinguishing science from non-science, as proposed by Karl Popper. Establishing the criteria for distinguishing science from non-science was one of the major problems in the philosophy of science, known as the “Problem of Demarcation.” Karl Popper viewed falsifiability as the distinguishing feature separating scientific theories from non-scientific ones. That is, when considering a theory as a collection of general propositions, if each of those propositions has the potential to be experimentally disproven, then that theory is considered scientific. According to this criterion, Freud’s psychology does not qualify as a scientific theory.
By resolving the demarcation problem using falsifiability as the criterion, we can also define the objectivity often discussed in scientific methodology. In the opening of his book ‘The Logic of Scientific Discovery’, Karl Popper argues that scientific methodology must be strictly separated from the individual’s mind. One example is his clear demarcation that when describing scientific research, there is no need to describe the process by which the researcher conceived the hypothesis. This also serves as the logical foundation for his philosophy of falsificationism. Falsificationism views the scientific theory system as a continuous attempt to construct explanations of reality that are independent of the individual’s mental world. In a similar vein, he later states that natural laws can only be described in a form that holds true regardless of spatio-temporal location or the individual making the observation. He asserts that all scientific propositions must secure objectivity by being “inter-subjectively testable.” The high importance placed on ‘reproducibility’ in contemporary research methodology can be understood within this context.
Building upon this philosophy of falsificationism, criteria can also be established for the verification and adoption of scientific theories. Popper selected four points to consider when evaluating scientific theories. The first is the internal logical consistency of the theory itself. This is a criterion for internal coherence, examining whether there are contradictions among the propositions constituting the theory. Second is the interpretation of the theory’s logical form. This assesses whether the theory possesses the qualities of a scientific theory. As mentioned earlier, falsifiability serves as the criterion for this evaluation. Third is comparison with other existing theories. This considers the question of how this theory, after withstanding various tests, might contribute to the advancement of the field of science. Fourth is the experimental verification of predictions derived from the theory. Even if verified as correct, the theory is only temporarily supported; the possibility always exists that it could be falsified by subsequent experiments and discarded.
Now let us examine the Copenhagen interpretation of quantum mechanics. The Copenhagen Interpretation is the most mainstream interpretation accepted today. Surprisingly, no single definitive text lays down what the Copenhagen Interpretation is. However, compiling various academic literature reveals that the following content is agreed upon within the academic community. First, the properties of a physical system are generally not fixed before measurement. These properties are represented by the wave function, which encompasses all known variables of the system; no additional ‘hidden variables’ exist. Second, consequently, quantum mechanics can only determine the probability of certain outcomes in a measurement. Third, the act of measurement itself influences the system. The wave function, which previously represented all possible states of the system in superposition, irreversibly collapses upon measurement, yielding an observable result. Fourth, because the act of measurement itself affects the system, certain properties of the system are mutually incompatible. That is, specific physical quantities of a single system cannot be measured precisely simultaneously. This is called the Uncertainty Principle. Fifth, the results recorded by the measuring apparatus must be describable solely within the realm of classical physics. Sixth, the wave function possesses properties associated with probability. Seventh, the wave function exhibits wave-particle duality.
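The second point above, that quantum mechanics yields only probabilities, is usually formalized via the Born rule: a normalized state vector of complex amplitudes assigns each measurement outcome the probability |amplitude|². The state and outcome labels below are assumed for illustration.

```python
# Hedged sketch of the Born rule: outcome probabilities are the squared
# magnitudes of the (normalized) state's amplitudes.
import math

# Equal superposition of two states, e.g. spin "up" and "down" (assumed example).
amplitudes = {"up": 1 / math.sqrt(2), "down": 1 / math.sqrt(2)}

probabilities = {outcome: abs(a) ** 2 for outcome, a in amplitudes.items()}
print(probabilities)  # each outcome has probability 0.5, up to float rounding

# For a normalized state, the probabilities over all outcomes sum to 1.
assert abs(sum(probabilities.values()) - 1.0) < 1e-9
```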
The Copenhagen interpretation, including the sixth and seventh principles, has survived without being falsified by actual measurement results or real experimental outcomes to date. Examples include Schrödinger’s cat thought experiment, the double-slit interference experiment, and experiments on the matter-wave properties of electrons. First, Schrödinger’s cat thought experiment shows that accepting superposition at the microscopic level seems to let that indeterminacy propagate to macroscopic objects. It also probes whether the Copenhagen interpretation is internally consistent. Imagine placing a live cat inside a sealed box, where the cat’s life or death depends on the state of a certain elementary particle within the box. For simplicity, let’s say this particle has two possible states, each with a 50% probability. The cat’s wave function is then derived from the particle’s wave function. Since the wave function is a superposition of all possible states, the cat’s state becomes a superposition of being alive and dead, each with a 50% probability. This leads to the criticism that it makes no sense for the cat to be both alive and dead simultaneously until the box is opened and checked. However, the Copenhagen interpretation posits that the wave function represents only the observer’s knowledge of the state inside the box, not the cat’s own state. It states that there is a 50% probability of finding a dead cat and a 50% probability of finding a live cat when the box is opened. Therefore, this thought experiment cannot be used to claim that the Copenhagen interpretation harbors an internal contradiction.
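Read this way, the Copenhagen account makes a plain statistical prediction: over many runs of the sealed-box setup, opening the box finds a live cat about half the time. A small Monte Carlo sketch (the trial count and seed are arbitrary assumptions) illustrates the prediction:

```python
# Monte Carlo sketch: each "box opening" yields alive/dead with probability 0.5.
import random

random.seed(0)  # fixed seed so the run is reproducible
trials = 100_000
alive = sum(1 for _ in range(trials) if random.random() < 0.5)

print(alive / trials)  # relative frequency of "alive", close to 0.5
```

Nothing in the simulation requires the cat to be "both alive and dead"; it only encodes what the observer expects to find upon opening the box, which is exactly the epistemic reading described above.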
Now, let’s examine experimental results dealing with the dual nature of light and matter, including the double-slit experiment. These experiments are prime examples where the Copenhagen interpretation’s predictions about the external world survived verification by actual experimental results. The double-slit experiment with light demonstrates that light passing through two slits creates an interference pattern on the screen. Since interference is a property of waves, this experiment shows the wave nature of light. However, the photoelectric effect experiment demonstrates the particle nature of light. When light is shone on a metal under specific conditions, photoelectrons are emitted from the metal. The crucial point is that this condition is independent of light intensity or exposure time; it is determined solely by the light’s frequency. If the light’s frequency is below a specific threshold, no matter how intense the light or how long the exposure, photoelectrons will never be emitted. However, if light above this threshold frequency is shone, photoelectrons are emitted immediately, regardless of intensity. These experimental results can be explained by interpreting light as particles called photons. The particle nature of matter is so self-evident that it is already captured by classical mechanics. The wave nature of matter can be confirmed by the existence of matter waves. When cathode rays, beams of electrons accelerated to high speeds, pass through a double slit, an interference pattern appears on the screen. This confirms the wave nature of matter. Beyond this, numerous experimental results concerning the wave-particle duality of light and matter do not contradict the Copenhagen interpretation.
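The photoelectric threshold described above can be stated as a formula: a photon of frequency f carries energy E = h·f, and a photoelectron is emitted only when h·f exceeds the metal’s work function W, leaving kinetic energy E_k = h·f − W. The work function below is an assumed example value, not a specific metal from the post.

```python
# Sketch of the photoelectric threshold condition E_k = h*f - W.
h = 6.626e-34   # Planck constant, J*s
W = 3.6e-19     # assumed work function of an example metal, J

def photoelectron_energy(frequency_hz):
    """Return the emitted electron's kinetic energy in J, or None below threshold."""
    photon_energy = h * frequency_hz
    return photon_energy - W if photon_energy > W else None

print(photoelectron_energy(4e14))  # below threshold: None, however intense the light
print(photoelectron_energy(8e14))  # above threshold: positive kinetic energy
```

The intensity of the light never enters the condition, which is precisely the experimental puzzle that the photon picture resolves.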
However, the genuine point of contention that Karl Popper’s philosophy could raise against the Copenhagen interpretation emerges from a different angle. The foundations Popper built while establishing his philosophy of falsificationism could conflict with the assumptions set for the concept of the wave function. As discussed earlier, Popper argued that natural laws can only be described in a form that holds true regardless of spatio-temporal location or the individual making the observation. Given that ‘inter-subjectively testable’ verification is the criterion for objectivity in science within his philosophy, the Copenhagen interpretation—which asserts that phenomena change due to the act of measurement itself, meaning experimental results vary each time depending on the observer—appears incompatible with Karl Popper’s philosophy. In the same vein, one might also question how to define the ‘measuring apparatus’.
However, the above problem can be resolved by presenting a more rigorous formulation of the uncertainty principle within the Copenhagen interpretation, one actually used in physics. This solution is also consistent with Karl Popper’s extension of the propensity theory of probability, where he expanded ‘experiments performed by observers’ to encompass ‘natural phenomena themselves’. The ‘propensity theory of probability’ grew out of the frequency theory of probability. The frequency theory regards the probability of a phenomenon as the relative frequency (or the limit of the relative frequency) in a sufficiently large (or infinitely large) population. However, Popper criticizes this frequency theory for failing to aid in predicting isolated, singular events. Instead, he proposes the propensity theory of probability. This theory suggests that, rather than humans intervening to set up experiments and derive relative frequencies to establish probabilities, nature itself repeats ‘experiments’ in the form of recurring natural phenomena. Humans then observe some of these and derive probabilities from them.
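The frequency theory sketched above identifies probability with the limiting relative frequency over ever-longer runs. A small simulation of a biased chance setup (the bias 0.3 and the seed are assumed example values) shows the relative frequency settling toward what Popper would call the setup’s propensity:

```python
# Sketch of the frequency-theory picture: relative frequency over a growing
# run of trials approaches the underlying "propensity" of the setup.
import random

random.seed(1)
p = 0.3  # the underlying propensity, an assumed example value

successes, n = 0, 0
for target in (100, 10_000, 1_000_000):
    while n < target:
        successes += random.random() < p
        n += 1
    print(target, successes / n)  # relative frequency drifts toward 0.3
```

Popper’s complaint, on this picture, is that the frequency only exists for the long run; the propensity theory instead attributes p to each singular setup, with the long-run frequency as its manifestation.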
Now, replace the ‘measuring apparatus’ in the above point of contention with ‘all objects interacting with arbitrary particles throughout the entire universe,’ and replace ‘experiment’ with the set of all such interactions. To elaborate further, consider a microscopic particle (e.g., an electron) and define ‘observation’ as a detector registering a photon that has collided with the particle. Then describe the uncertainty principle in terms of this particle and the photon. The uncertainty principle can then be understood as follows: after a photon collides with a microscopic particle whose state is unknown, the scattered photon is detected, and the state of the microscopic particle is inferred backward from this. The collision between the photon and the particle alters the particle’s physical quantities. However, the information directly obtainable through detection cannot precisely reveal the particle’s state prior to the collision. Because light possesses wave-like properties, there is a minimum position error on the order of the wavelength of the light. Simultaneously, due to light’s particle-like nature, the collision induces a change in the particle’s momentum, resulting in a corresponding minimum momentum error. To reduce the position error, shortening the wavelength increases the photon’s momentum, causing the particle’s momentum to change even more in the collision. Conversely, to reduce the momentum error, lowering the frequency increases the wavelength, leading to a larger position error. Through this approach, the uncertainty principle can be described entirely without invoking the concept of an individual observer.
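The tradeoff just described can be made quantitative as an order-of-magnitude estimate: probing with light of wavelength λ gives a position error of roughly λ, while the momentum kicked into the particle is roughly the photon momentum h/λ, so the product of the two errors is about h no matter how λ is chosen. (This is the heuristic Heisenberg-microscope estimate; numerical factors like 2π are deliberately ignored.)

```python
# Order-of-magnitude sketch of the measurement tradeoff: dx ~ lam, dp ~ h/lam,
# so dx*dp ~ h independently of the probing wavelength.
h = 6.626e-34  # Planck constant, J*s

for lam in (1e-6, 1e-9, 1e-12):  # probing wavelengths in meters
    dx = lam        # minimum position error ~ wavelength of the probe light
    dp = h / lam    # momentum kick ~ momentum of the probing photon
    print(f"lam={lam:.0e}  dx*dp={dx * dp:.3e}")  # always ~ h
```

Shortening λ shrinks dx but inflates dp, and vice versa; the product never drops below the order of h, which is the content of the uncertainty relation in this heuristic form.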
Although Karl Popper himself sometimes expressed a somewhat negative stance toward the Copenhagen interpretation, judged by the evaluation criteria Popper proposed, the Copenhagen interpretation is at least compatible with his falsificationist philosophy and his propensity theory of probability. Therefore, under these two perspectives, there is still no need to discard this theory. This is one reason why the Copenhagen interpretation currently enjoys overwhelming support as the mainstream interpretation in academia.
Let us reconsider the relationship between Popper’s falsificationism and the Copenhagen interpretation. The Copenhagen interpretation adopts a highly distinctive interpretive device: the concept of “wave function collapse.” This collapse occurs through the act of measurement and describes the process by which a physical system transitions to a definite state. This is fundamentally opposed to the determinism of classical mechanics. Popper proposed falsificationism against the backdrop of classical determinism, and this non-deterministic element of the Copenhagen interpretation, which appears to conflict with falsificationism, is the main reason Popper rejects the interpretation.
So, can this non-deterministic element truly be accommodated within the framework of falsificationism? Popper’s falsificationism evaluates scientific theories based on propositions that are experimentally falsifiable. In the Copenhagen interpretation, the state before measurement is not deterministic, but the result after measurement is clearly falsifiable. For example, in an experiment measuring a particle’s position, if the particle is not found at a specific location, the proposition that it was at that location is falsified. That is, since the act of measurement itself inherently involves falsifiability, it is difficult to view the non-deterministic nature of the Copenhagen interpretation as completely incompatible with falsificationism.
Popper’s propensity theory of probability can also be applied to the Copenhagen interpretation. The propensity theory posits that natural setups themselves possess a tendency to produce specific outcomes. The probabilistic interpretation of quantum mechanics serves as a good example of such propensities. Treating a particle’s position or momentum as governed by probabilistic propensities allows one to calculate the probability of a specific outcome occurring. This suggests that Popper’s theory of probability and the probabilistic predictions of the Copenhagen interpretation can be mutually complementary.
Furthermore, Popper’s philosophy of falsificationism plays a crucial role in the development of scientific theories. Scientific theories must be continuously tested and falsifiable through new experiments and observations. The Copenhagen interpretation has also evolved through this scientific verification process. Although Popper criticized the Copenhagen interpretation, it warrants reevaluation within his philosophical framework. This is because it represents a crucial process in the development of scientific theory: accommodating diverse perspectives and advancing science through the verification and falsification of new theories.