Chris Fields Research
Ideas, drafts, recent publications ...

Return to Homepage


The boundary project: Physics research track

The physics research track asks two questions. First, does existing fundamental physical theory actually account for the "emergence" of the classical world of ordinary experience, and in particular, the classical world of spatially-bounded macroscopic objects that persist through time? If so, how, and if not, why not? As explained below, some formulations of fundamental physics tacitly assume that the objects of ordinary experience exist independently of our observations of them; for example, many formulations of quantum mechanics assume that macroscopic laboratory apparatus exists in an observation-independent way. Such an assumption is entirely fair and reasonable, and may well be necessary in order to coherently formulate fundamental physical theory. However, one cannot claim to have explained the existence of macroscopic objects within any theory that assumes that they exist; a theoretical explanation of the "emergence" of the classical world constructed within a fundamental physical theory must account for this emergence using the principles, methods and language of the fundamental theory and nothing else. Hence the second question of the physics research track is whether fundamental physics, and in particular basic non-relativistic (i.e. not concerned with objects moving near the speed of light) quantum mechanics, can be formulated in a way that makes no assumptions about the existence of spatially-bounded systems that persist through time.

Box 1: Quantum mechanics and the "measurement problem"

Quantum mechanics differs from the classical mechanics of Galileo, Newton, and even Einstein's theory of relativity mainly in how it characterizes the "state" of a physical system. In classical physics, an object - for example a chair - can be characterized at any given time t by the values at t of a collection of physical properties called "state variables." Position with respect to some labeled reference location is a state variable: a chair can be 2 meters (m) to the left of some reference location at time t1, and then 3 m to the right of it at t2. Velocity is another state variable: the chair could be moving to the right at 2 meters per second (m/s) at t1, and have turned around and be moving to the left at 1 m/s at t2. Temperature, color, hardness, essentially any measurable property of a chair or of any other object can be considered a state variable. In classical physics, any given object at any given time has one and only one numerical value, in a given system of units (e.g. meters, seconds, etc), for each of its state variables: a chair is at a fixed position (there), is moving at a fixed velocity relative to some reference point (0 m/s - not moving), has a fixed temperature (18 degrees C), etc. at any time t.

Quantum mechanics employs a radically different concept of a physical state. In quantum mechanics, if a chair can be, for example, in either position x or position y, it can also be in any superposition ax + by, where a and b are complex numbers satisfying |a|² + |b|² = 1. In general, if X and Y are possible states of some physical system, then any normalized linear combination of X and Y, that is any aX + bY with |a|² + |b|² = 1, is also a possible state. Such "superpositions" of possible states do not exist in classical mechanics: classical mechanics can describe a chair as here or there, but not as being half-here and half-there. Classical mechanics, however, is wrong. The results obtained by precise experiments with physical objects always confirm quantum mechanics (as noted before, to accuracies of up to one part in 10 billion), so the correct description of the world, to the best of our current knowledge, is quantum mechanical. Hence to the best of our current knowledge, all objects can be in states that are superpositions of their "classical" states, all the time.
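
As a purely illustrative sketch of the normalization condition (the state labels "here" and "there" and the particular coefficients below are my own choices, not part of the discussion above), the two-state case can be written out numerically:

    import numpy as np

    # Two orthogonal basis states standing in for "here" and "there".
    here = np.array([1, 0], dtype=complex)
    there = np.array([0, 1], dtype=complex)

    # Illustrative complex coefficients chosen so that |a|^2 + |b|^2 = 1.
    a = 1 / np.sqrt(2)
    b = 1j / np.sqrt(2)

    # A superposition: neither "here" nor "there", but a normalized combination.
    state = a * here + b * there

    print(abs(a)**2 + abs(b)**2)        # 1.0 (the normalization condition)
    print(np.vdot(state, state).real)   # 1.0 (the same check on the state vector)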

The idea that ordinary objects can be in "quantum states" that are superpositions of their classical states is hard to reconcile with the fact that we, as human observers, never see objects as being in superpositions. We never see chairs as being half-here and half-there. The "measurement problem" in quantum mechanics is the problem of describing the interaction between observers, including human beings, and the world in a way that makes it clear how we perceive objects as being in classical, not quantum states. The measurement problem has bedeviled quantum mechanics since 1926, and is the reason that Einstein never believed quantum mechanics was correct even though he helped develop it. Thousands of papers and many books have been written about the measurement problem, but no solution is generally accepted. Most physicists simply ignore it. It is, however, clearly relevant to the question of emergence; a solution to the measurement problem within quantum mechanics would be an explanation of how the classical world emerges from the quantum world.

Beginning in the 1970s, a formal extension of quantum mechanics called "decoherence theory" was developed to describe how objects interact with their environments, and in particular how the interaction between an object and a much larger environment could make the object's quantum states appear classical. Decoherence theory is based on the realization that interacting with a much larger environment forces a system to be in some states and not others, and hence rapidly destroys the "quantum coherence" that it would exhibit if it could be in any superposition of its possible states. Measurements involve interactions between a measured quantum system and a typically much larger measurement apparatus; hence decoherence theory provides a mathematical framework for explaining how the process of measurement could force quantum systems to be in states that appear classical. Decoherence theory is of enormous practical importance to the new technology of quantum computing, because it provides the mathematics necessary to understand how interactions between the microscopic devices required to implement quantum algorithms - the quantum equivalent of transistors - and their environments can be used to initiate, control, and access the results of quantum computations using an ordinary "classical" computer. Decoherence is also employed in cosmology to explain the apparently classical initial state of the post-inflationary universe.
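
The flavor of these calculations can be conveyed by a deliberately minimal numerical sketch (my own illustration, using a single two-state system and a generic dephasing model rather than any specific physical environment): each interaction with an environmental degree of freedom multiplies the off-diagonal "coherence" terms of the system's density matrix by an overlap factor, so after many interactions the state looks like an ordinary classical probability distribution over the two pointer states.

    import numpy as np

    # System starts in an equal superposition of its two "pointer" states.
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    rho = np.outer(plus, plus.conj())        # density matrix of the superposition

    # In a simple dephasing model, each environmental interaction multiplies the
    # off-diagonal terms by the overlap of the two conditional environment states.
    overlap = 0.8                            # illustrative per-interaction overlap

    for n in range(1, 21):
        rho[0, 1] *= overlap
        rho[1, 0] *= overlap
        if n % 5 == 0:
            print(n, round(abs(rho[0, 1]), 6))   # coherence decays toward zero

    print(np.round(rho.real, 3))   # what remains is (nearly) a 50/50 classical mixture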

The interactions described by decoherence theory take place at the boundary between a quantum system undergoing decoherence and its environment. Over the past decade, an elaboration of decoherence theory called "quantum Darwinism" has been developed to enable predictions of which quantum states of a system are stabilized by decoherence in particular kinds of environments, including the environment provided by the photons that make up ambient light. Quantum Darwinism is widely regarded as providing a set of mathematical methods for predicting, at least in principle, which states of quantum systems would be stabilized by decoherence and appear classical; hence it is widely regarded as providing a purely quantum-mechanical explanation of the emergence of the classical world of ordinary macroscopic objects from the microscopic world of fundamental particles and forces.
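
A toy version of this idea can be written down explicitly (this is my own minimal sketch, not the formalism of the quantum Darwinism literature): a single "system" qubit interacts with several "environment" qubits in a way that copies its pointer value into each of them, after which every environment fragment, examined on its own, carries the same classical information about the system.

    import numpy as np

    def cnot(state, control, target, n_qubits):
        # Apply a CNOT (control -> target) to an n_qubits state vector.
        new = np.zeros_like(state)
        for i in range(len(state)):
            if (i >> (n_qubits - 1 - control)) & 1:
                new[i ^ (1 << (n_qubits - 1 - target))] += state[i]
            else:
                new[i] += state[i]
        return new

    def fragment_state(state, keep, n_qubits):
        # Reduced density matrix of one qubit, tracing out all the others.
        psi = state.reshape([2] * n_qubits)
        psi = np.moveaxis(psi, keep, 0).reshape(2, -1)
        return psi @ psi.conj().T

    n = 4                                 # 1 system qubit + 3 environment qubits
    a, b = np.sqrt(0.7), np.sqrt(0.3)     # illustrative pointer-state amplitudes
    state = np.zeros(2 ** n, dtype=complex)
    state[0b0000] = a                     # system in a|0> + b|1>, environment in |000>
    state[0b1000] = b

    # "Measurement-like" interactions copy the system's pointer value into each
    # environment qubit (a stand-in for, e.g., scattered ambient photons).
    for env in range(1, n):
        state = cnot(state, 0, env, n)

    for env in range(1, n):
        print(np.round(fragment_state(state, env, n).real, 2))
    # Every fragment shows the same diagonal state, diag(0.7, 0.3): many observers
    # could read off the same pointer information from different fragments.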

My work on the physics track began with the suspicion that the formalism of quantum Darwinism relies on an assumption that quantum systems have well-defined spatial boundaries at which decoherence can be considered to act. Assuming a well-defined spatial boundary is fine if one is using quantum Darwinism to calculate how decoherence acts on some particular system of interest, like a part in a quantum computer. Such an assumption is not fine, however, if one is trying to explain using quantum Darwinism how classical states with well-defined spatial boundaries could emerge in the first place; in this case the assumption of well-defined spatial boundaries renders the "explanation" circular. It is therefore necessary to ask whether quantum Darwinism can be made to work without assuming that quantum systems have well-defined spatial boundaries. The mathematical formalism of quantum Darwinism is based on two ideas: first, that the environment of a physical system encodes information about the system's state; and second, that observers can obtain information about physical systems by interacting with such environmental encodings. For example, photons of ambient visible-spectrum light interact with macroscopic objects such as chairs, and human observers can obtain information about such objects by interacting with ambient light: this is what is happening when one opens one's eyes and sees a chair. My first step in addressing the question of whether quantum Darwinism can avoid the assumption that quantum systems have well-defined spatial boundaries was to show that two observers interacting with distant environmental encodings of the state of a macroscopic physical system - for example, two observers visually determining the state of a system by interacting with ambient light that has previously interacted with the system - cannot demonstrate that they are interacting with encodings of the same quantum system (see "Quantum Darwinism requires an extra-theoretical assumption of encoding redundancy"). This result is a straightforward consequence of the Heisenberg uncertainty principle: in order to demonstrate that they are observing the same system, the observers must interact with it directly, which they cannot do without altering the environmental encodings that they are attempting to associate with the system. Multiple observers interacting with distant environmental encodings can, therefore, at best assume that they are interacting with encodings from the same system.

The second step in addressing the question of whether quantum Darwinism can avoid the assumption that quantum systems have well-defined spatial boundaries was to show that the assumption that distant environmental encodings are redundant - that is, encode information about the same system - is equivalent to the assumption that the system of interest has a well-defined boundary in Hilbert space, the abstract space in which quantum states are mathematically defined (see "Classical system boundaries cannot be determined within quantum Darwinism"). In the special case of measurements of physical position, the boundary in Hilbert space is a spatial boundary. Such boundaries are typically assumed when making quantum-mechanical calculations; my result shows that they cannot be determined experimentally and hence must be assumed. Quantum Darwinism cannot, therefore, escape the assumption that quantum systems have well-defined spatial boundaries, and hence cannot explain the emergence of the classical world.
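
To make the phrase "boundary in Hilbert space" a little more concrete, here is a small illustrative computation (my own, not taken from the paper): a boundary amounts to a choice of which degrees of freedom count as "the system" and which as "the environment", i.e. a choice of tensor factorization, and different choices carve the same global state into different "systems" with different states.

    import numpy as np

    # A generic entangled state of three qubits; the amplitudes are arbitrary.
    psi = np.array([3, 0, 1, 1, 0, 2, 1, 0], dtype=complex)
    psi /= np.linalg.norm(psi)
    psi = psi.reshape(2, 2, 2)           # indices: qubit 0, qubit 1, qubit 2

    # Boundary choice 1: "the system" is qubit 0; qubits 1 and 2 are environment.
    sys1 = psi.reshape(2, 4)
    rho1 = sys1 @ sys1.conj().T          # reduced state of that system

    # Boundary choice 2: "the system" is qubits 0 and 1; qubit 2 is environment.
    sys2 = psi.reshape(4, 2)
    rho2 = sys2 @ sys2.conj().T          # a different system, a different state

    print(np.round(rho1.real, 3))
    print(np.round(rho2.real, 3))
    # Nothing in the global state psi itself singles out either boundary choice;
    # the factorization has to be supplied from outside.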

This result about quantum Darwinism can be formulated with somewhat greater generality. Following ideas introduced by the general-relativity theorist John Archibald Wheeler, many physicists now believe that information should be regarded as the basic constituent of the universe, a view sometimes called "digital physics." This view distinguishes "quantum information" - the information encoded by the quantum state of a system - from "classical information", the kind of information encoded by these words on this web page. Classical information is defined, in this framework, as quantum information that has survived decoherence and that is, as a result of surviving decoherence, stably encoded in the environment (here, the web) for all to see. Because this definition relies on decoherence, and because decoherence requires a boundary at which to act, the definition is circular unless Hilbert-space boundaries can be derived from experimental observations; as shown above, they cannot be.

The deep logical circularity associated with defining the boundaries of physical systems suggests that the measurement problem in quantum mechanics may be more subtle than is generally appreciated. Discussions of the measurement problem typically assume - often explicitly - the notion of a "Galilean observer" that is employed in both Newtonian physics and relativity. A Galilean observer is simply a point of view, without any specified physical structure or information-processing capabilities. Real observers are obviously not Galilean; someone tasked with observing the read-out of a piece of laboratory apparatus must be able to identify the apparatus to be observed, identify its read-out, and distinguish the various "pointer values" that the read-out displays. Doing these things requires stored information, a result established formally in the early days of classical automata theory. In particular, an observer capable of extracting information from multiple arbitrarily-chosen read-outs and storing the results for later reporting must have an information-processing architecture equivalent to a classical Turing machine. If observers are represented as classical Turing machines, both the physical systems that they can observe and the system states that they can report are fully specified by the classical information that they store prior to making observations. Dropping the assumption of Galilean observers in favor of observers characterized as classical Turing machines allows the derivation of standard quantum theory, including Bell's theorem and decoherence, from standard physical assumptions of deterministic, time-reversible dynamics and a weak version of counterfactual definiteness that ensures that the dynamics is observer-independent (see "If physics is an information science, what is an observer?"). This result raises the possibility of a fully "systems-free" formulation of quantum mechanics that makes no assumptions about the boundaries or even the existence of "systems" external to observers. In such a formulation, the universe consists of "stuff" with quantum degrees of freedom - hence the systems-free formulation is realist - but all "systems" are virtual machines defined implicitly by the information storage and processing capabilities of observers, which are themselves virtual classical Turing machines capable of making and reporting observations. The relationship between the classical world and the underlying quantum world in the systems-free formulation is not one of emergence, but rather one of semantic interpretation: it is the same as the relationship between a word and the thing that it refers to.
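
As a purely schematic illustration of the point that an observer's stored classical information fixes what it can observe and report (the class and names below are my own invention, not anything from the cited paper):

    # A schematic "observer": everything it can observe and report is fixed by
    # classical information stored in advance of any observation.
    class Observer:
        def __init__(self, known_apparatus, known_pointer_values):
            # Stored classical information: which read-outs this observer can
            # identify, and which outcome labels it can distinguish.
            self.known_apparatus = set(known_apparatus)
            self.known_pointer_values = set(known_pointer_values)
            self.memory = []   # results stored for later reporting

        def observe(self, apparatus, pointer_value):
            # Read-outs it cannot identify, or values it cannot distinguish,
            # simply do not register as observations for this observer.
            if apparatus in self.known_apparatus and pointer_value in self.known_pointer_values:
                self.memory.append((apparatus, pointer_value))

        def report(self):
            return list(self.memory)

    obs = Observer(known_apparatus={"voltmeter-1"}, known_pointer_values={0, 1})
    obs.observe("voltmeter-1", 1)      # registered
    obs.observe("ammeter-7", 3)        # not in the stored vocabulary: ignored
    print(obs.report())                # [("voltmeter-1", 1)]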

Box 2: Virtual machines

Computers are boxes full of electronic circuits, but almost no one except their designers thinks of them that way. Most people think of computers as tools for doing various things, like re-touching photographs, communicating with their friends, or looking for stuff on the web. How does this work? How can you use a box of electronics to do a lot of different things without ever having to get out a soldering iron and change some of the wiring inside?

Computer science is founded on two basic ideas: the idea of an algorithm, or a finite procedure for getting something done, and the idea of a virtual machine. A virtual machine is something that acts like something else. The universal Turing machine, the mathematical model of computation invented by Alan Turing on which computer science is based, is a universal virtual machine, a thing that can act like any specialized computer designed to execute some finite algorithm.

The idea that something can "act like" something else is a semantic idea. To say that some object X is a virtual Z-machine means that someone could interpret what X is doing as Z. They could also interpret it as something else. Someone who had never seen a computer before could still plug my laptop in, turn it on, and interpret it as a convenient device for keeping one's lap warm. Someone using a computer to do two different jobs, e.g. listening to music and writing emails, is interpreting the computer as two different virtual machines simultaneously. The computer is the first device invented by human beings for which the best answer to "what is that?" is "what do you want it to be?"
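
A small, purely illustrative example of a virtual machine (the toy instruction set here is invented for the purpose): a general-purpose machine, in this case Python, acting as a specialized accumulator that understands only two instructions. Calling the underlying computer an "accumulator" is an interpretation of what it is doing, not a fact about its circuitry.

    # A toy virtual machine: a two-instruction accumulator implemented on top of
    # a general-purpose machine. The instruction set is invented for illustration.
    def run_accumulator(program):
        acc = 0
        for op, arg in program:
            if op == "ADD":
                acc += arg
            elif op == "MUL":
                acc *= arg
        return acc

    print(run_accumulator([("ADD", 2), ("MUL", 5), ("ADD", 1)]))   # prints 11

    # The same physical computer could equally well be interpreted as implementing
    # some entirely different machine; "accumulator" is our interpretation of it.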

The interpretation of something as a virtual machine is not arbitrary; some interpretations work, but some don't. What it means for an interpretation to "work" can be defined mathematically. Whether a particular interpretation of a particular system as a particular virtual machine in fact works can be demonstrated mathematically (i.e. proven) in some cases but not in others. Most software for practical applications has undetected errors or "bugs" and therefore doesn't "work" in theory, but still "works" well enough to be useful in practice. Hence while not arbitrary, the human interpretation of physical systems as virtual machines is somewhat lax. We human beings are ourselves finite systems with finite computational power, so we cannot consider every possible behavior of every system in every possible circumstance; hence we cannot find all the "bugs" in any reasonably complicated system. Our semantic interpretations of even moderately-complex systems are, therefore, inevitably lax: we cannot say with certainty which virtual machines even moderately-complex systems implement. This semantic laxity lets us get away with "good enough" for practical applications, but sometimes leads to unanticipated disaster when an undiscovered "bug" turns out to have significant consequences for system behavior.

The view of reality implied by systems-free quantum mechanics is odd, but is arguably no odder than the views of reality implied by other realist formulations or interpretations of quantum mechanics, such as the many-worlds interpretation or the transactional interpretation. What this view entails that others do not is an intrinsically social nature of the classical world. In systems-free quantum mechanics, the interpretation of the world in terms of encoded classical information is not a game that can be played alone; the minimal classical world in this quantum mechanics comprises two observers that observe and hence encode classical information about each other.

Realist approaches to quantum theory have always been opposed by "instrumentalist" approaches in which quantum states are taken to refer not to physical systems but to "beliefs" of observers. The most explicitly-stated and radical current instrumentalist formulation of quantum mechanics is quantum Bayesianism, a formulation in which quantum states are only defined relative to particular observers, as sets of probabilities expressing those observers' beliefs about the systems that they observe. This formulation is explicitly not systems-free, as it assumes that macroscopic systems and hence classical states exist in an observer-independent way. Using methods very similar to those employed to understand quantum Darwinism, I have shown that the assumption of observer-independent classical states is inconsistent with a second foundational assumption of quantum Bayesianism, the assumption that quantum systems have well-defined, finite Hilbert-space dimensions (see "Autonomy all the way down"). As a well-defined Hilbert-space dimension is a prerequisite of a well-defined system boundary in Hilbert space, this result shows that in the context of quantum Bayesianism, well-defined system boundaries are inconsistent with observer-independent classical states. Hence it shows that to be logically consistent, quantum Bayesianism must be instrumentalist not only about quantum states but about classical states as well, i.e. it must be systems-free.

My current work on this research track is focused on clarifying the systems-free formulation of quantum mechanics and making its details more rigorous. I will then attempt to apply this formulation to practical problems in quantum computing, including the problem of preparing initial states of quantum computers. The systems-free perspective may also shed some light on the relationship between quantum mechanics and space-time, and hence on quantum gravity, as it implies that classical space-time is a virtual construct derived by each observer on the basis of that observer's observations.


Return to Homepage




Copyright © 2010-2011 Chris Fields