There is no known physics theory that is true at every scale—there may never be
Whenever you say anything about your daily life, a scale is implied. Try it out. “I’m too busy” only works for an assumed time scale: today, for example, or this week. Not this century or this nanosecond. “Taxes are onerous” only makes sense for a certain income range. And so on.
Surely the same restriction doesn’t hold true in science, you might say. After all, for centuries after the introduction of the scientific method, conventional wisdom held that there were theories that were absolutely true for all scales, even if we could never be empirically certain of this in advance. Newton’s universal law of gravity, for example, was, after all, universal! It applied to falling apples and falling planets alike, and accounted for every significant observation made under the sun, and over it as well.
With the advent of relativity, and general relativity in particular, it became clear that Newton’s law of gravity was merely an approximation of a more fundamental theory. But the more fundamental theory, general relativity, was so mathematically beautiful that it seemed reasonable to assume that it codified perfectly and completely the behavior of space and time in the presence of mass and energy.
The advent of quantum mechanics changed everything. When quantum mechanics is combined with relativity, it turns out, rather unexpectedly in fact, that the detailed nature of the physical laws that govern matter and energy actually depends on the physical scale at which you measure them. This led to perhaps the biggest unsung scientific revolution of the 20th century: We know of no theory that both makes contact with the empirical world, and is absolutely and always true. (I don’t envisage this changing anytime soon, string theorists’ hopes notwithstanding.) Despite this, theoretical physicists have devoted considerable energy to chasing exactly this kind of theory. So, what is going on? Is a universal theory a legitimate goal, or will scientific truth always be scale-dependent?
The combination of quantum mechanics and relativity implies an immediate scaling problem. Heisenberg’s famous uncertainty principle, which lies at the heart of quantum mechanics, implies that on small scales, for short times, it is impossible to completely constrain the behavior of elementary particles. There is an inherent uncertainty in energy and momenta that can never be reduced. When this fact is combined with special relativity, the conclusion is that you cannot actually even constrain the number of particles present in a small volume for short times. So-called “virtual particles” can pop in and out of the vacuum on timescales so short you cannot measure their presence directly.
Here are Feynman's Nobel Lecture and banquet speech.
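To put a rough number on that (a back-of-the-envelope sketch using the energy-time uncertainty relation): the vacuum can “borrow” an energy ΔE for a time Δt, so a particle-antiparticle pair of combined rest energy 2mc² can appear as long as it vanishes within
\[
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}
\qquad\Longrightarrow\qquad
\Delta t \;\lesssim\; \frac{\hbar}{2\,(2mc^{2})}.
\]
For an electron-positron pair (2mc² ≈ 1 MeV), that window is a few times 10⁻²² seconds, far too brief for any direct detection.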
One striking effect of this is that when we measure the force between electrons, say, the actual measured charge on the electron—the thing that determines how strong the electric force is—depends on what scale you measure it at. The closer you get to the electron, the more deeply you penetrate the “cloud” of virtual particles surrounding it. Since positive virtual particles are attracted to the electron, the deeper you penetrate into the cloud, the less of that positive screening you see, and the more of the electron’s negative charge.
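That screening can be made quantitative. As a one-loop sketch from quantum electrodynamics (counting only the electron’s own virtual pairs), the effective fine-structure constant measured at a momentum transfer Q grows logarithmically as Q increases, i.e., as you probe shorter distances:
\[
\alpha_{\mathrm{eff}}(Q^{2}) \;\simeq\; \frac{\alpha}{\,1 - \dfrac{\alpha}{3\pi}\,\ln\!\dfrac{Q^{2}}{(m_{e}c^{2})^{2}}\,},
\qquad Q \gg m_{e}c^{2},
\]
where α ≈ 1/137 is the familiar long-distance value. The effect is slow but measurable: probed at the scale of the Z boson mass, the effective coupling has crept up to roughly 1/128 (once all charged particles, not just electrons, are allowed to contribute).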
Then, when you set out to calculate the force between two particles, you need to include the effects of all possible virtual particles that could pop out of empty space during the period of measuring the force. This includes particles with arbitrarily large amounts of mass and energy, appearing for arbitrarily small amounts of time. When you include such effects, the calculated force is infinite.
Richard Feynman shared the Nobel Prize for arriving at a method to consistently calculate a finite residual force after extracting a variety of otherwise ambiguous infinities. As a result, we can now compute, from fundamental principles, quantities such as the magnetic moment of the electron to 10 significant figures, comparing it with experiments at a level unachievable in any other area of science.
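Schematically, and only as a one-loop sketch in a cutoff regularization (not Feynman’s own machinery), the problem and its cure look like this: with a cutoff Λ on the virtual-particle energies, the calculated coupling picks up a term that blows up as Λ → ∞; renormalization buries that term in the definition of the measured charge at a reference scale μ, leaving a relation between measurable quantities from which the cutoff has vanished:
\[
\frac{1}{\alpha(\mu^{2})} \;=\; \frac{1}{\alpha_{0}} \;+\; \frac{1}{3\pi}\,\ln\frac{\Lambda^{2}}{\mu^{2}}
\qquad\longrightarrow\qquad
\frac{1}{\alpha(Q^{2})} \;=\; \frac{1}{\alpha(\mu^{2})} \;-\; \frac{1}{3\pi}\,\ln\frac{Q^{2}}{\mu^{2}}.
\]
Everything in the second relation is finite and measurable; the unobservable bare coupling α₀ and the cutoff Λ have been traded away.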
But Feynman was ultimately disappointed with what he had accomplished—something that is clear from his 1965 Nobel lecture, where he said, “I think that the renormalization theory is simply a way to sweep the difficulties of the divergences of electrodynamics under the rug.” He thought that no sensible complete theory should produce infinities in the first place, and that the mathematical tricks he and others had developed were ultimately a kind of kludge.
Now, though, we understand things differently. Feynman’s concerns were, in a sense, misplaced. The problem was not with the theory, but with trying to push the theory beyond the scales where it provides the correct description of nature. ...MORE
More interesting, I think, is his 1974 Caltech commencement address.
If you're interested, Bill Gates bought the rights to a bunch of Feynman's lectures and other ephemera and put them online.
See also:
How to Tell Crazy From Brainpower Intensive