The Astrophysics Spectator

Commentary

Alienated Theory

A local business, looking to hire a new employee, interviewed three applicants from different professions. Each applicant came in turn to be interviewed. The first, a doctor, was asked the question “what is two plus two?” “Four,” he replied. The second, a lawyer, was asked the very same question. “Five,” he replied. The final applicant was an accountant. Asked the same question, the accountant looked to the left, looked to the right, leaned over to the interviewer, and quietly asked, “what would you like it to be?”

This is my wife's favorite joke, because she's an accountant. But I could just as well apply it to theorists. What is the x-ray spectrum of this binary star? Well, what would you like it to be? These are the observationally oriented theorists, who at least try to generate predictions from their theories. The other half of the community of theorists is agnostic about observations; their theories may or may not match the observations, but it doesn't matter, and, like the lawyer in our joke, these theorists will not be bothered by any contradictions between their theories and the observations. Of course, we would love to be like the doctor, but we seldom have that option in astrophysics. We usually have only two options, each one a strategy to overcome a simple problem: most objects in astrophysics are too complex and too poorly observed to fully understand theoretically.

Copernicus, Kepler, and Newton had a much simpler task than the modern astrophysicist. They were developing simple mathematical theories to explain very simple and precisely measured motions on the sky. At the level of observational precision at which Newton worked, the principal conclusions of his theory for planetary motion can be worked out with pencil and paper, and the results can be confidently tested by plotting the predicted motion against the observed motion. Not until one attempts to include the gravitational pull of the planets on each other does the theory become computationally complex. Even with this complication, however, the mathematical problem is precisely posed, so that computer calculations can produce a model for planetary motion that is directly comparable to very precise measurements of planetary motion. The physicists who followed Newton could develop the complex mathematics of planetary perturbations with assurance that they were solving the correct problem.

The problems of modern astrophysics suffer both from uncertainty in the physics and from ambiguity in the observations. The Sun provides a good example of the problem encountered with distant objects. Up close we see a boiling photosphere covered with sunspots and expelling loops of magnetic fields and hot gas. This complex structure is difficult to model, despite our ability to observe the complexity directly. Place our Sun several parsecs away, so that all of this detail disappears, and we lose any guidance in disentangling the physics. We are left with trying to understand complex phenomena from the sparsest of data.

In this environment, a theorist adopts one of two strategies: he either constructs very simple theories for an astronomical object in the hope that they correctly describe the grossest characteristics displayed by the data, or he develops computer codes that model the complex physics thought to exist in the object without worrying too much about the observations. For practical reasons—computer codes are expensive to develop—the first strategy dominates when many different theories are in competition. The development and use of computer codes that model complex physics occurs when the community is confident in the basic theory.

Creating simple models can be great fun. Do you need to explain the emission of x-rays and gamma-rays? Then start with a very hot gas, perhaps stuck in a magnetic field. Need variation on the millisecond timescale? Employ a compact star, such as a neutron star, or a black hole. Is the object a thousand times brighter than the Sun? Throw in the accretion of gas onto our compact star. The early phases of theory development are this simple.
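
To see why millisecond variability points to a compact star, note a standard back-of-the-envelope estimate (not original to this commentary): a source cannot vary coherently on a timescale shorter than the time light needs to cross it, so

    \[
      R \lesssim c\,\Delta t \approx (3\times10^{5}\ \mathrm{km\,s^{-1}})\times(10^{-3}\ \mathrm{s}) = 300\ \mathrm{km},
    \]

which rules out ordinary stars and leaves neutron stars and black holes.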

The striking feature of this strategy is how static it is. The same set of ideas is continually redeployed to describe a wide variety of phenomena. To some extent this simply reflects the dominance of energy generation in many phenomena. There are only two good sources of energy in astronomy: gravitational potential energy and thermonuclear energy. If you are constructing a theory for a luminous object based on gravitational potential energy, then you think in terms of things shrinking (protostars), collapsing (supernovae), or falling onto other things (accretion disks around neutron stars and black holes), but if it is based on thermonuclear energy, then you think of objects that are dense and hot (the interior of a star), or are under high pressure (the surface of a neutron star).
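
To put rough numbers on the two energy sources (my own back-of-the-envelope figures, not taken from this commentary): fusing hydrogen into helium releases about 0.7 percent of the fuel's rest-mass energy, while gas settling onto the surface of a neutron star of mass M and radius R releases of order its gravitational potential energy there,

    \[
      \epsilon_{\mathrm{nuc}} \approx 0.007\,c^{2},
      \qquad
      \epsilon_{\mathrm{grav}} \approx \frac{GM}{R} \approx 0.2\,c^{2}
      \quad\text{for } M \approx 1.4\,M_{\odot},\ R \approx 10\ \mathrm{km}.
    \]

Per unit mass, accretion onto a compact star beats thermonuclear fusion handily, which is why it is invoked so often for the most luminous compact objects.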

The difficulties come in converting the energy liberated from an object into electromagnetic radiation. The ambiguity is in the details, for while the physics of creating light from energetic electrons and ions is well understood, the process of converting gravitational potential energy and thermonuclear energy into thermal energy in electrons and ions is not. The intermediate processes must transport energy to areas visible to us, and they must distribute the energy among electrons and ions in just the right way to produce the spectrum of light that we see. The difficulty is that these processes are too complex to derive mathematically with pencil and paper. If we are to keep the problem manageable, we are reduced to guessing at the distribution of energy among electrons and calculating the light spectrum these electrons produce. For instance, I might have a theory that explains the x-rays observed from near a black hole as the radiation produced when a fast-moving electron collides with a microwave photon, transferring energy from the electron to the photon, and converting the photon into an x-ray. This process is easily calculated if I set the velocities of all the electrons, but the spectrum then reflects my choice for these velocities rather than the actual physics that gave each electron its velocity. The gulf remains between the liberation of energy and the transfer of this energy to the electrons creating the radiation we see.
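
The following short Python sketch illustrates the kind of guessing I am describing; the electron power law and its bounds are arbitrary assumptions of mine, not values from any real model. Electron Lorentz factors are drawn from an assumed power law, and each scattered microwave photon is boosted using the Thomson-limit estimate E' ≈ (4/3)γ²E.

    import numpy as np

    # Assumed ingredients (illustrative only):
    E_seed_eV = 6e-4                    # typical microwave-background photon energy, in eV
    p = 2.5                             # assumed power-law index for the electrons
    gamma_min, gamma_max = 1e3, 1e5     # assumed range of electron Lorentz factors

    # Draw Lorentz factors from the chosen power law n(gamma) ~ gamma**(-p).
    rng = np.random.default_rng(0)
    u = rng.random(100_000)
    gammas = (gamma_min**(1 - p)
              + u * (gamma_max**(1 - p) - gamma_min**(1 - p)))**(1 / (1 - p))

    # Thomson-limit inverse Compton: a scattered photon emerges near (4/3) gamma^2 E.
    E_out_keV = (4.0 / 3.0) * gammas**2 * E_seed_eV / 1e3

    lo, med, hi = np.percentile(E_out_keV, [10, 50, 90])
    print(f"scattered photons: 10th/50th/90th percentiles = {lo:.1f}, {med:.1f}, {hi:.0f} keV")
    # The shape of this "predicted" x-ray spectrum is set entirely by my choices
    # of p, gamma_min, and gamma_max, not by whatever physics accelerated the electrons.

The output looks like an x-ray spectrum, but its shape encodes nothing beyond the distribution I assumed, which is exactly the problem.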

The connection between energy release and light generation requires computer simulation. The ideal case would be to have simulations that replicate the observations from first principles. We would like a supernova simulation, starting with a star in the final stages of thermonuclear fusion, to follow the star's collapse to completion, and to give us the precise appearance of the resulting supernova. We want a pulsar simulation to show us precisely how the radio emission from a pulsar varies with time. But reality intrudes, both in our lack of information and in our lack of computing power, to prevent most of our simulations from accurately portraying the events and objects we see in astronomy.

The first problem is one of detail. For instance, when someone writes a computer code to simulate a pulsar, which is a spinning magnetized neutron star, he inevitably assumes the magnetic field has a dipole structure or some other simple structure. We can see how wrong this assumption can be by simply looking at the Sun's magnetic field, with its numerous evolving loops and streamers at the photosphere. The Sun's magnetic field is extremely complex, and we have no physical reason to assume that a pulsar's magnetic field is any simpler. But we lack the detailed information to impose complexity on a simulation of pulsars, so we stick with simple magnetic field structures.
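
For reference, the dipole assumption is about as simple as a magnetic field can be: outside a star of radius R with polar surface field B_p, the entire field is fixed by those two numbers (the textbook dipole expression, given here only for illustration),

    \[
      B_{r} = B_{p}\left(\frac{R}{r}\right)^{3}\cos\theta,
      \qquad
      B_{\theta} = \frac{B_{p}}{2}\left(\frac{R}{r}\right)^{3}\sin\theta .
    \]

Two parameters stand in for whatever tangle of loops and streamers the real star carries.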

Part of the complexity we encounter comes from the many physical scales in astrophysics. For instance, when a degenerate dwarf star undergoes a thermonuclear detonation, creating a type Ia supernova, the nuclear burning occurs on a surface passing through the star, with unburned material ahead of the surface and burned material behind it. If the surface were flat and stable, the problem would be easy to solve with a computer, because it would be inherently one-dimensional. The real situation, however, is that this surface of thermonuclear burning is unstable, with the burning moving into unburned material faster at some points than at others. The surface rapidly becomes complex, with fingers of burned material poking into regions of unburned material. Worse, this complexity cascades to smaller and smaller scales, so that the large-scale fingers thrusting into the unburned material are themselves composed of smaller fingers. This type of problem is very difficult to simulate with a computer.
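
A crude count of grid cells shows the scale of the trouble; the radius and resolution in the Python fragment below are illustrative guesses of mine, not figures from any published simulation.

    # Rough, illustrative numbers only: a white dwarf of ~2000 km radius and a
    # wish to resolve the flame structure down to ~1 cm.
    star_radius_cm = 2.0e8
    cell_size_cm = 1.0

    cells_per_dim = star_radius_cm / cell_size_cm
    print(f"1-D (flat, stable front):       {cells_per_dim:.0e} cells")
    print(f"3-D (unstable, fingered front): {cells_per_dim**3:.0e} cells")
    # Roughly 1e8 zones is demanding but conceivable; roughly 1e24 is hopeless,
    # which is why simulations must model the small-scale burning rather than
    # resolve it.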

Finally, there is the “weather” problem. Even if our computers were powerful enough to follow every detail of a physical simulation, they would not be able to replicate the complex behavior of a system. The problem is that many systems in astrophysics, like the weather system on Earth, are chaotic. The slightest change in initial conditions between two simulations will produce dramatically different results. In a weather simulation, this may mean that a hurricane strikes the Florida panhandle in one simulation, but strikes South Carolina a week later in a second simulation. Much of the detailed behavior we see in astronomy, such as the complex time-dependent radio emission from a pulsar or the bursts of x-rays from some x-ray binaries, is a manifestation of “weather.”
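
The point is easy to demonstrate with the standard toy model of chaos, the Lorenz system; it is a meteorological toy rather than an astrophysical model, and the parameters below are the conventional textbook choices.

    import numpy as np

    def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """Advance the Lorenz equations by one step with a simple Euler update."""
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])   # a one-part-in-a-billion nudge

    for step in range(40_000):           # integrate to t = 40
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 10_000 == 0:
            print(f"t = {step * 0.001:4.0f}   separation = {np.linalg.norm(a - b):.1e}")
    # The separation grows by roughly ten orders of magnitude: same equations,
    # same physics, but the detailed "forecast" is set by unmeasurably small
    # differences in the starting state.

Two runs that begin one part in a billion apart soon bear no resemblance to each other, and no increase in computing power changes that.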

With all these problems, computer simulations are generally incapable of producing results that match the observations in any detail. The best hope is to replicate the gross behavior of a system, and to understand how different bits of physics within the theory affect the behavior of the system. For this reason, theorists engaged in numerical simulation normally disregard many of the observations, dismissing them as “weather.” In some instances, these theorists veer to the extreme and totally disregard the observations, placing more faith in the physics of their simulations than in the complexity of the observations. More commonly, they concentrate on the general properties of an object while ignoring the detailed behavior.

Following either strategy, theorists are limited in their ability to replicate the observations. This is simply a fact of life in this discipline. The implications of this, however, are important. First, an inability to explain the observations of a phenomenon does not necessarily mean that the theorist does not have a correct understanding of the basic principles underlying that phenomenon. The threshold for abandoning a theory because of disagreement with the observations can therefore be high. It is for this reason that I tend to be skeptical of theories that invent new physics (a new fundamental particle, for instance) to explain a phenomenon that is not adequately explained by theories employing currently accepted physics. Second, there is such a thing as too much information. At some point, the data generated by observers becomes irrelevant. An observer can carefully measure every fluctuation of the x-rays from a binary star, but the results may be of no value, because the detail is too fine and too dependent upon a complex geometry, complex fluid motions, or some other complex physical process to simulate accurately. In the end, theoretical astrophysics by its nature is separated from observational astronomy by an unbridgeable divide.

Jim Brainerd
