The title is Nancy Cartwright’s, from her book of the same name, and from an article which forms Chapter 3, “Do the Laws of Physics State the Facts?” No, she says, and we agree. I thought it well to summarize this chapter before we review work by David Deutsch.
I have no proof of it, but physicists must use the same PR firm computer scientists do. The names these guys come up with are brilliant: neural nets, black holes, artificial intelligence, quasars, realism. This last is the philosophical position that says physicists’ models are Reality, and not just models of Reality. “Anti-realist” critics of this view start back on their heels (“What? You’re against Reality!?”), a great disadvantage.
I don’t like “anti-realism”. So I shall call Cartwright’s view (which I share) Pro-Reality, and will call the other view Model Reification.
Cartwright takes the Pro-Reality view. In my words, this pronounces an anathema on all forms of the Deadly Sin of Reification, even the popular ones.
Now you don’t need this review. The link above is to the book, and you can read it. I will be leaving out much, and adding my own gloss. Consult the book for greater detail.
Cartwright says “that the laws of physics do not provide true descriptions of reality”, and “that our explanatory laws do not tell us what they do. It is in fact part of their explanatory role not to tell.”
She proves this starting with the “law” of gravitation, which you will recognize:
F = GMm/r^2,
where F is the force, G is a constant, M and m the masses of two bodies, and r the distance between them.
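To make the formula concrete, here is a minimal sketch of evaluating it numerically; the masses and distance are my example values, not anything from Cartwright’s text.

```python
# Illustrative only: the inverse-square form of the gravitation "law",
# F = G*M*m/r^2, evaluated for two example bodies.
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravitational_force(M, m, r):
    """Magnitude of the gravitational force between masses M and m
    (kg) separated by distance r (m), per the bare law."""
    return G * M * m / r**2

# Earth (about 5.972e24 kg) and a 1 kg mass at Earth's surface
# (r about 6.371e6 m):
F = gravitational_force(5.972e24, 1.0, 6.371e6)
# F comes out near 9.8 N, the familiar weight of 1 kg.
```

Of course, the whole point of the chapter is that this number is what the bodies *would* produce were gravity the only power at work.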
Cartwright asks if this “law” accurately describes how bodies behave. “Assuredly not,” she says. You might quail at that. But suppose the two bodies are two magnets; or, in her example, two electrically charged bodies. Or suppose there are more than two “neutral” or massed bodies in the universe. Or suppose there is air between them, hence friction, or even a so-called quantum vacuum lies between, in which particles are supposed to pop in and out of existence.
Speaking more carefully, the law of universal gravitation is something like this:
If there are no forces other than gravitational forces at work, then two bodies exert a force between each other which varies inversely as the square of the distance between them, and varies directly as the product of their masses.
She allows that this might be true, “But it is not a very useful law.” Because, as is obvious, of that “no other forces” bit. Which never happens. The gravity “law” only explains ideal, which is to say unobservable, or approximate situations. Saying the “law” is universal involves an extrapolation in thought, an induction to what cannot be observed; i.e. no observational verification is possible of the situation where only—the word is strict!—gravity between M and m is in play. You must grasp this before moving on.
Most scientists think “nature is governed by a small number of simple, fundamental laws.” We observe “complex and varied phenomena, but these”, scientists think, “are not fundamental.” Complexity arises, most scientists say, “from the interplay of more simple processes obeying the basic laws of nature.”
In other words, it’s all “laws”, which is why, most physicists believe, we’ll some day find a “general unified theory”, a single equation that governs all behavior.
That will never happen, because the Model Reification view gets it all backwards. Once you get this, which you won’t on first reading, and especially if you’ve had training in the sciences, you see everything fresh and new.
The “laws” view itself comes from the drive to reduce everything to abstractions. The yen to make models of Reality and say those models are Reality. When what we really want—and this will be no surprise to regular readers—is knowledge of cause.
An objection that will have occurred to you is this: why not treat the forces of the “laws” acting on (say) the magnets from gravity and also separately from magnetism? Indeed, most scientists have the “presumption […] that the explanatory laws ‘act’ in combination just as they would ‘act’ separately…actual [observed] behaviour is the resultant of simple laws in combination.” Like a tug-of-war game.
With my emphasis:
Our example, where gravity and electricity [or electromagnetism] mix, is an example of the composition of forces. We know that forces add vectorially. Does vector addition not provide a simple and obvious answer to my worries? When gravity and electricity are both at work, two forces are produced, one in accord with Coulomb’s law, the other according to the law of universal gravitation. Each law is accurate [they say]. Both the gravitational and the electric force are produced as described; the two forces then add together vectorially to yield the total ‘resultant’ force.
The vector addition story is, I admit, a nice one. But it is just a metaphor. We add forces (or the numbers that represent forces) when we do calculations. Nature does not ‘add’ forces. For the ‘component’ forces are not there, in any but a metaphorical sense, to be added; and the laws that say they are there must also be given a metaphorical reading…
In interaction a single force occurs—the force we call the ‘resultant’—and this force is neither the force due to gravity nor the electric force. On the vector addition story, the gravitational and the electric force are both produced, yet neither exists.
She gives the example of motion: “When a body has moved along a path due north-east, it has travelled neither due north nor due east. The first half of the motion can be a part of the total motion; but no pure north motion can be a part of a motion that always heads northeast.” Yes, we can model the motion as a vector addition, and model it well—but the body isn’t doing the math!
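The vector-addition story can be put in code, which makes Cartwright’s point vivid: the adding happens in our calculation, not in the bodies. The component values below are invented for illustration.

```python
import math

# A sketch of the "vector addition story": we, the modelers, add the
# component force vectors numerically. Whether Nature "adds" anything
# is exactly what is in dispute -- only the resultant is ever observed.
gravity = (0.0, -9.8)   # hypothetical gravitational force components (N)
electric = (3.0, 4.0)   # hypothetical Coulomb force components (N)

resultant = (gravity[0] + electric[0], gravity[1] + electric[1])
magnitude = math.hypot(*resultant)
# The body experiences the single resultant, roughly (3.0, -5.8) N;
# the "components" exist in the model, not as separate pulls.
```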
All is not lost. It is not that the “laws” are wrong in every way. Indeed, they can be saved. The way to save gravity and Coulomb and so on is via the composition of causes: the “laws” aren’t laws in the sense that they are strict operative prescriptions by Nature. They instead “describe the causal powers that bodies have.” That means the objects themselves act by the powers they possess, or they are acted on by other bodies with the powers they have, but nothing is being acted on by “laws”. “Laws” are not forces: things have causal powers.
[T]he law of gravitation claims that two bodies have the power to produce a force of size GMm/r^2. But they do not always succeed in the exercise of it. What they actually produce depends on what other powers are at work, and on what compromise is finally achieved among them…the laws we use talk not about what bodies do, but about the powers they possess.
In contrast to “fundamental laws” like gravity there are phenomenological descriptions, which are closer to Reality, because they model observations or phenomena directly. She gives examples like Fourier’s law for heat flow and Ohm’s law for current. They are “ceteris paribus” descriptions, saying only what will happen if all else is equal.
The basic laws on influence, like Coulomb’s law and the law of gravity, may give true accounts of the influences that are produced; but the work of describing what the influences do, and what behavior results, will be done by a variety of complex and ill-organized laws of action: Fick’s law [of diffusion] with correction factors and the like.
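The ceteris paribus character of a phenomenological law is easy to show in a toy calculation. The sketch below uses Ohm’s law with a first-order temperature correction; the coefficient and values are my own illustrative choices, not from the book.

```python
# A sketch of the "ceteris paribus" point: Ohm's law V = I*R holds for
# a fixed R, but real resistance drifts with temperature, so the bare
# law describes what happens only if all else is equal.
alpha = 0.0039   # approximate temperature coefficient for copper, per deg C

def voltage(current, R0, delta_T=0.0):
    """V = I*R with a first-order temperature correction factor on R."""
    return current * R0 * (1 + alpha * delta_T)

ideal = voltage(2.0, 10.0)          # 20.0 V, all else equal
warmed = voltage(2.0, 10.0, 50.0)   # resistance up ~2%, V near 23.9
```

The “correction factors and the like” are exactly what turn the tidy law into something that tracks what bodies actually do.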
To play the role in explanation we demand of them, these laws [of nature] must have the same form when they act together as when they act singly. In the simplest case, the consequences that the laws prescribe must be exactly the same in interaction, as the consequences that would obtain if the law were operating alone. But then, what the law states cannot literally be true, for the consequences that would occur if it acted alone are not the consequences that actually occur when it acts in combination.
If we state the fundamental laws as laws about what happens when only a single cause is at work, then we can suppose the law to provide a true description. The problem arises when we try to take that law and use it to explain the very different things which happen when several causes are at work.
Simple summary: there are no “laws” of nature operating on objects; there are objects with causal powers operating on other objects, and where we can grasp or model these powers to some extent.
Ernst Mach said something touching on a similar point in his book The Science of Mechanics:
“Purely mechanical phenomena do not exist. The production of mutual acceleration in masses is, to all appearances, a purely dynamical phenomenon. But with these dynamical results are always associated thermal, magnetic, electrical, and chemical phenomena, and the former are always modified in proportion as the latter are asserted. On the other hand, thermal, magnetic, electrical, and chemical conditions can also produce motions. Purely mechanical phenomena, accordingly, are abstractions, made, either intentionally or from necessity, for facilitating our comprehension of things. The same is true of the other classes of physical phenomena. Every event belongs, in a strict sense, to all the departments of physics, the latter being separated only by an artificial classification, which is partly conventional, partly physiological, and partly historical.
The view that makes mechanics the basis of the remaining branches of physics, and explains all physical phenomena by mechanical ideas, is in our judgment a prejudice. Knowledge which is historically first, is not necessarily the foundation of all that is subsequently gained. As more and more facts are discovered and classified, entirely new ideas of general scope can be formed. We have no means of knowing, as yet, which of the physical phenomena go deepest, whether the mechanical phenomena are perhaps not the most superficial of all, or whether all do not go equally deep. Even in mechanics we no longer regard the oldest law, the laws of the lever, as the foundation of all the other principles.”
This set me thinking about a theory I’ve been musing out for the past few years. There is a greater confluence between the habits and modes of the world and our own social and political interactions than we like to consciously admit. What is true in the world at large often translates with surprising nearness into the world of the Polis.
– For every (political) action there is an equal and opposite (political) reaction. (Causality in politics)
– A (policy or habit) in motion tends to stay in motion unless acted upon by an outside force. (Inertia in politics)
In this case it got me thinking, a realistic view of politics would say: “There are no civil laws operating on men, there are only men using power to operate on other men.” Civil Laws are just warnings by the more organized and violence-capable to the less organized and less violence-capable.
Are these supposed “Scientific Laws” scientific, or even laws? Or are they merely the observable habitual interaction of the people and things which populate our world? The excerpt seems to suggest the latter.
The confounding factor of the analogy of course being the Will, which dumb matter lacks.
“The Law of Causal Powers Shitting on Other Objects”.
How many hairs can be split on the head of a pin?
Every “law” in physics is explicitly expressed to students as an approximation applying to x and y under conditions z.
By the logic expressed, cars can’t move because the wheels only spin, they don’t directly cause forward motion.
Physics is quite simple: When the force of the bombs on Bikini Atoll was measured, only what the measuring devices of the armament industry were suitable for could be recorded. No force can be measured if there is no appropriate measuring device. This is also the reason why the thermodynamics “calculation” of the fusion power plant experiments fails, namely the zeroth law of physics: The law is as simple as typing sensational words for the next grant: It is the law of misleading by omission.
Hogwash.
What madam Cartwright is attacking is not the idea of “laws of physics” but rather abstract thought itself.
The whole point of abstraction is to disregard certain aspects of things in order to find some simpler, unifying aspects which are not immediately obvious. The fact is, almost all abstractions are worse than nothing. They simply disregard some aspects of reality without revealing any deeper connections at all.
But a precious few abstractions actually do reveal something new and unexpected. The law of gravity is one example.
Prior to 1957 there were no manmade satellites orbiting Earth. They had never been observed, because they had never existed. The only reason to believe that such a thing could ever exist was physical theory, namely the law of gravity combined with the law of inertia. It took the combined wisdom of nazis and commies to make the idea a physical reality, but the starting point was pure theory. To call such theory “not very useful” you would have to be exceptionally stupid, perhaps even to the point of being a professor of philosophy.
And, do physical objects add vector components? Yes, some do. Quadcopters, for example. Such a drone has sensors detecting the motion and a processor which compares the measured to the desired motion, calculates the necessary force and torque to be generated and then sends a corresponding signal to the propeller motors.
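[Editor’s sketch, not the commenter’s code: a toy version of the control step described above, with invented mass and acceleration values. The drone’s processor measures acceleration, compares it to the desired one, and commands the extra force component needed to close the gap.]

```python
# Toy illustration of the quadcopter control step: compare desired and
# measured acceleration per axis and compute the corrective force,
# F = m * (a_desired - a_measured). All numbers are invented.
mass = 1.2  # kg, hypothetical drone mass

def correction_force(desired_accel, measured_accel):
    """Force components (N) the motors must add, axis by axis."""
    return tuple(mass * (d - a) for d, a in zip(desired_accel, measured_accel))

# Desired hover (0, 0, 0) while measuring a downward drift on z:
cmd = correction_force((0.0, 0.0, 0.0), (0.0, 0.0, -0.5))
# cmd has a positive z component: thrust is added to cancel the drift.
```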
It works. The difference between engineering and sophistry.
If you want to read competent works of philosophy, you should read books much older than something from 1983.
I’ve always understood that little g is correct but big G is false. We can drop stuff and test little g. Also gravity being generated by mass is just stupid. I always thought so.
Morton,
You do not offer any counter arguments, except incredulousness, but for the adding vector components. Which fails. And fails obviously. Applying force in a given direction is applying force in a given direction, by design, even.
There is a model in the copter, sure enough, tweaked and tuned because of the departures from theory.
Summed up, back in the day: “Don’t make vast conclusions from half-vast data (which includes conditions and assumptions)” and it should be obvious that ontology is different from epistemology.
Briggs,
A statement S1 is made: “Object A never has the property B.”
Another statement S2 is of the form: “In the case C, A did have the property B. Hence S1 is false.”
I would call S2 an argument against S1. You would call it incredulousness.
In our particular instance S1 is “the law of gravity is not particularly useful”, to which I made the response that only the law of gravity allowed anyone to even conceive of manmade satellites.
You may then respond either that my example is not true, and point to something else that gave people such an idea, or you may respond that this is not what you mean by the word “useful”. Either (whether true or false) would at least constitute a counterargument to my argument, whereas calling it mere incredulousness seems like a bit of an easy way out.
As for the addition of vector components, I fail to see the failure. The only content of the copter “model” is exactly the fact that force components can be added to give the desired total force. An acceleration is measured, a pre-existing force is calculated and the difference between this and the necessary force is then the component that must be added. To say that the resulting force is something qualitatively different from each of the two input components seems, well… not very useful?
It’s sort of like saying 2+2 is not 4, which is kind of true. 2+2 is a calculation while 4 is a number, but when and how is that distinction useful?
As an engineer, I don’t think I have ever come across a wordier explanation of nonlinearity.
What? I missed the whole point?? Again??? Curse these utilitarian brain cells of mine.
At least my employer seems to appreciate them, so there’s that.
four – there are 4 angels on that pinhead.
In all seriousness:
1 – McChuck (above) has some of it right: “Every “law” in physics is explicitly expressed to students as an approximation applying to x and y under conditions z.”
2 – There are no laws of nature. What we misleadingly call “laws of nature” are approximations and descriptions nature is under no compulsion to honor. Thus F = GM1M2/d^2 only worked until it didn’t.
The woke argue that “2+2=4” is a mere generalization conveniently elevated to Mathematical Truth by the prevailing Rich White Male power structure. There are, it’s asserted, all sorts of quibbles where “2” at a particular moment might not even equal “2” earlier, or later, or for a Front-Holed Person of Color whose ancestral lands have been appropriated.
The structure of the argument against ” F = GMm/r^2 ” seems to me very similarly constructed. Neutered masses, constant distances, scalar directions all chosen by Isaac Asimov (or Newton, or Washington, one of those guys) for reasons of his own with which none of the rest of us need agree.
The argument FOR ” F = GMm/r^2 ” may be similarly constructed. All abstract concepts are socially constructed, these days.
The map is not the territory. All models are wrong. Ptolemy and Copernicus were both wrong. But to say that Copernicus and Ptolemy were equally wrong is to be wronger than either! (According to Isaac Newton. Or Asimov. One of those guys.)
The laws enhance our understanding of the world, and don’t tell you anything about the so-called Heaven (an alternative reality?). After all, the world is complex, can we hope to reach an accurate description using one law? No. Do people believe that, e.g., the falling of leaves is only subject to gravitational force? (I don’t know the answer. I now found that it is easier and more honest to say “I don’t know”. )
We know a lot about how things behave. We do not know what they are. The problem is most people do not grasp this nuance. Seriously, what is a zero dimensional object? We can model things by math. But these are just models.
In other words, it’s all “laws”, which is why, most physicists believe, we’ll some day find a “general unified theory”, a single equation that governs all behavior.
Oh, we’ve known that for years. The equation is “What do you get if you multiply six by nine?” To which the answer, non-intuitively, is 42.
This conversation could perhaps be helped with some information about the practice of modern physics, particularly on the process and outcomes that comprise physics, some details on the actual content of modern physics, and some notion of the relationship of modern physics to other modern physical sciences. Since there appears to be some lack of that information here, this comment may be somewhat long, by usual standards. Any relation or comparison to other disciplines such as philosophy is left to the reader.
It is also useful, when considering the notions of “explanation” and “cause” and “reality” as used in popular, non-physics-based conversations, to clarify the notions of “theory” and “prediction” as used in modern physics, and in the meantime illuminate the nature of terms used by physicists. By “physics” here it is further useful (for clarity) to restrict to what is considered as fundamental physics distinguished from physical sciences in general. Fundamental physics would include Newtonian, Lagrangian, relativistic, quantum, and statistical mechanics, classical and quantum electrodynamics and other field theories, the so-called standard model, etc. Physical sciences in general rely heavily on fundamental physics but many focus attention on specific, often large and complex systems such as in astronomy and cosmology, planetary science, and climate science. For example, astronomy and cosmology make the explicit assumption that fundamental physics far away from our present spacetime location is the same as we measure in our laboratory here. (The same “what” will be clearer below.) The key difference between fundamental physics and general physical sciences is that fundamental physics requires the ability to perform controlled experiments in a laboratory fully characterizing the phenomena of interest, and the ability to replicate the experiment and control and vary the relevant variables and initial conditions. In the evolution of physics to the present day, it has often occurred that a theory constructed to explain observations not yet amenable to controlled experiment eventually, over time, comes within reach of our technology for controlled experiments, and only after such controlled experiments is the theory considered to be tested for agreement with experiment. (General relativity has been such a case; Newton’s theory of gravitation was also.)
It is not yet possible to replicate a full-scale planetary system, star, galaxy, universe (or climate) so we can only observe the ones we have, without the ability to control its variables and initial conditions, however we can perform controlled experiments testing aspects such as orbital dynamics as are implied by Newtonian or general relativistic theories of gravity. Often such other physical sciences are called explanatory or observational sciences to distinguish them from intrinsically experimental fundamental science. In this sense the basic (or “hard”) sciences might be considered in this conversation to be physics, chemistry, and biology, and for the remainder of this comment, when the word “physics” is used it is meant in the sense of fundamental physics, testable by controlled experiment. This may seem a heavy restriction, but in fact the approach of (fundamental) physics is the paradigm for the physical sciences in general and it does focus the discussion if one wants to compare physics to other disciplines.
There are roughly three types of physics courses taught at most universities, and they illustrate the different approaches to the subject. One type could be called a physics appreciation course, generally meant for non-science majors to acquaint them with some of the results of modern physics, largely as a collection of “facts” to be assimilated as best the students can. There is no intention of enabling the attendees to actually “do” physics or create new physics, but rather to present them with things “about” physics, much like a tourist might learn about a locale they are visiting, but not actually live there. The mathematical content of such courses is typically nil or minimal, but it seems to satisfy notions of being “well-educated” or “informed” and may be useful in whatever arts or social studies or philosophy they may be concerned with. Typically, such courses are designed to be as enjoyable and stress free as possible and are only offered at the undergraduate level. Another type of course is physics for scientists and engineers who will need or want to use physics concepts constructively in their own disciplines. This may be in use as supporting concepts for analyzing their discipline but often it is so that they can understand the limits and capabilities of the instrumentation they will use, which generally are based on physics. These courses usually rely extensively on some mathematics, particularly calculus and abstract algebra such as vectors and tensors, as well as some basic statistical notions and tools involving analysis of experimental data. Mechanics is generally treated in the Newtonian context, there is some use of Maxwell’s equations and quantum mechanics in the wave function context, and thermodynamics is usually addressed in the continuum approach. It is not expected to equip the students to “do” physics as in the creating new physics context, but that can and sometimes does occur in the future course of their work. 
There is often significant stress involved in these courses, since biologists or chemists may not all be predisposed toward mathematics to the same degree, but the students need to learn and demonstrate the basic skills needed to use physics in their field, not simply to appreciate things “about” physics in a contemplative sense. It is nevertheless generally understood that the time requirements of these courses should be geared to a supporting course, not a course within their major program. These courses may be offered within a physics department but more often are within the relevant departments such as chemistry or biology or a college of engineering. The third type of course is physics for physics majors. Generally, no holds are barred in terms of math or rigor in these courses and the only limits are due to the relatively short duration of typical undergraduate programs, which limits the amount of time that can be spent on each subject area. Approaches beyond the usual Newtonian one are covered somewhat, particularly those based on more general formulations involving Lagrangians and Hamiltonians, and use of the calculus of variations, sometimes including more general treatment of theoretical symmetry and conservation via use of tools such as Noether’s theorem. The emphasis is on learning and demonstrating the skills required to create new physics in a research context. Significant student stress can attend these courses and there is little expectation that the time required for the course should be significantly bounded above. Each of the three types of courses described above has obvious analogs with treatment of music, or visual arts, or athletic disciplines such as martial arts, in terms of learning about something, learning to use something, and learning to master something and create within the discipline. So, what is this physics that physics majors learn to do?
First, physics restricts itself to well defined (to non-physicists, or even non-metrologists, tediously over-defined) experimentally measurable quantities (base quantities and derived) in terms of non-arbitrary units, for example location and direction in spacetime, momentum, energy, work, linear and angular quantities, amount and type of object (e.g. mass, charge, charm, etc.) and standardized units to describe such quantities (such as the SI system of units meter, kg, second, ampere, kelvin, mole, candela), some specific useful fundamental constants such as light speed, Planck constant, elementary charge, Boltzmann constant, Avogadro number, etc. and a variety of quantities derived from these such as density, flux, flow, current, moment, etc. The various units and constants in use have been modified and refined over time so that the standards are known to many decimal places (often 10 or more) in precision. A major reason for this seemingly obsessive attention to precision is to rigorously insure non-arbitrary communication among different observers, so that anyone can understand results obtained by anyone else and determine whether they replicate the experimental results themselves. That means, for example, such terms as black holes, pulsars, quasars, or reality are not sufficient terms to specify such objects or concepts within physics, but rather might sometimes be used as shorthand for a specific collection of quantities (such as a black hole or pulsar) or never used in technical context, such as reality. The importance of this will arise shortly.
The term “black hole” serves as a useful illustration. John Wheeler popularized use of the term in the late 60s, although workers recall it being used in a seminar a few years earlier by referring to such objects as a “black hole of Calcutta.” Wheeler said he began using it exclusively because people told him they were tired of hearing the cumbersome language “gravitationally completely collapsed object” and black hole was a succinct substitute. But the meaning within physics was quite clear and non-arbitrary. Einstein’s field equations for gravitation comprise 10 non-linear, hyperbolic-elliptic coupled partial differential equations, which makes finding explicit analytical solutions notoriously difficult. The first (general relativistic) solution for such an object was found by Karl Schwarzschild in 1915 (amazingly while he was serving in the German army in WWI). Strictly speaking, it is simply a mathematical solution of Einstein’s equations with boundary conditions that there is no matter anywhere else in the universe (yielding homogeneous equations) except at a single point at the origin of the solution (in spherical coordinates). Until Wheeler in the 60s, the consensus among astronomers and many physicists was that it was a mathematical curiosity, but not one likely to occur in nature, with various arguments made as to what would keep such things from forming, partly due to its requirement for complete spherical symmetry. In 1963, Roy Kerr published a solution with rotational symmetry, representing a solution with angular momentum, which accords with most of the astronomical objects seen in astronomy, and some reduction occurred in the reluctance to consider such objects as candidates to occur in nature. A solution with charge and angular momentum has also been found (Kerr-Newman). 
However, there is nothing arbitrary about the meaning of “black hole”: it is shorthand for a solution of Einstein’s equations with a given mass M (in kilograms or other derived convenient unit), possibly angular momentum in kg-meters^2/second, and possibly charge in Coulombs, and nothing elsewhere in the universe. The properties of the solution are non-arbitrary and can be worked out by anyone (who has the patience for it) and explored for possible effects that might be observable. For the present, black holes are still in the observational, explanatory phase and have not yet transitioned to controlled laboratory experiment, but many of the phenomena involved have been experimentally measured, not the least of which was the first detection of gravitational waves in 2015, fairly well explained (mathematically) as two colliding black holes. (For examples of others, search for tests of general relativity.)
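[Editor’s illustration, not from the comment: the one non-arbitrary length scale attached to the mass-only solution described above is the Schwarzschild radius, r_s = 2GM/c^2, which anyone can compute from the standard constants.]

```python
# Schwarzschild radius r_s = 2*G*M/c^2 for a non-rotating, uncharged
# mass M -- the quantitative, non-arbitrary content behind the
# shorthand "black hole".
G = 6.674e-11   # gravitational constant, N*m^2/kg^2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(M):
    """Radius in meters for mass M in kg."""
    return 2 * G * M / c**2

r_sun = schwarzschild_radius(1.989e30)  # one solar mass
# r_sun is roughly 3 km: compress the Sun inside that radius and the
# Schwarzschild solution applies at and outside it.
```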
Similarly, pulsars, quasars, white dwarfs or neutron stars, as well as such popular things as quantum entanglement, wavefunction collapse, etc., can be characterized quantitatively and related to a non-arbitrary definition in terms of fundamental quantities and units of physics, but “reality” does not readily meet that requirement.
With that background, a theory in physics (as something that is done, not a set of facts that one might learn about and contemplate) must have several necessary attributes: 1) it must be expressed so that it is understood the same way by any other physicist (i.e. in terms of the quantities above), 2) it must be non-arbitrarily quantifiable into mathematical expression such that 3) it produces non-arbitrary quantitative predictions of the outcomes of controlled experiments on phenomena it necessarily covers. Colloquially, many people use the term “theory” in such a way as to imply a certain vagueness or lack of extension to experiment, but that is not so in physics. A theory must include the ability to calculate any outcome in the realm to which it applies. (This may take some time from initial formulation, depending on the theory. For example, when Maxwell established his equations in 1865 for electricity and magnetism, it was almost immediately apparent that the electric and magnetic fields had radiative solutions and that light, which had been measured and used for centuries, was just a result of the theory. In contrast, Einstein published general relativity in 1915, but it took about 50 years of hard work to establish that gravitational radiation was an allowable result of the theory and another 50 years before gravitational waves were detected.) A simple definition of a theory is that, for a configuration of the quantities considered by physics (mass, momentum, energy, etc.), along with initial conditions and boundary conditions, it predicts the outcome (or possible set of outcomes) of a measurement of the system at a later time, i.e. in the future of the initial conditions. (There is a nuance that might come up in this comment, if it gets to “cause,” having to do with the speed of light and possible limits on future times that can be causally affected by certain past events.)
If the predicted outcome is within the designed (by control) precision of the experiment, a theory is put or left in the category of not wrong, to that level of precision. This does not say the theory is “right” or “true” or “real” in any absolute sense, only that it is not wrong for now. These latter points have some relevance to the use of statistics in physics, which may appear later in this comment.
For now, consider Ms. Cartwright’s concern with what is known as Newton’s law of universal gravitation (call it NLG): that it does not describe all possible things that can affect an object. One who has taken at least a couple of the second category of classes on physics, or some of the third category, would recognize this observation as patently obvious. NLG is not and never was intended to address all forces that might affect an object. Insofar as that concept is addressed in physics, it is addressed in what undergraduates learn as Newton’s second law, which states that the sum of the forces acting on a body equals the body’s mass times its acceleration, colloquially referred to as F = ma (more generally, for a system of bodies, the sum of the external forces equals the total mass times the acceleration of the center of mass). A more precise statement is that F = dp/dt, where p is the momentum, equal to mass times velocity; if the mass is not changing this reduces to mass times acceleration, but otherwise it can include changes in mass. This “law” is popularly thought to be some kind of statement about how things interact, but in fact it is more importantly the definition of what is meant by a force, namely that force and momentum change are equivalent, described in the same quantities and units. The summation of forces is traditionally accomplished in Newtonian mechanics (such as is taught in undergraduate courses of categories 2 and 3) in what is called a Free Body Diagram: literally, a graphical depiction of all the forces acting on a body, in magnitudes and directions, whether they be from gravity, friction, induced electromagnetic currents, charge, or whatever else. The various sources of the forces are described or characterized in the relevant so-called “force laws.” Examples are Coulomb’s law for the force between charged bodies, Ampère’s force law for currents, the Lorentz force law, and so on. It is neither expected nor intended that NLG include every force that might act on a body.
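The free body diagram procedure described above can be sketched as a short calculation. This is my own toy illustration (the mass, ramp angle, and friction coefficient are assumed, not from the comment): a block on an inclined ramp feels gravity, a normal force, and friction, and Newton's second law sums them to give the acceleration.

```python
import math

# Toy free body diagram (assumed numbers): a 2 kg block sliding down a
# 30-degree ramp with kinetic friction.  Axes: x points down-slope,
# y points off the ramp surface.
m = 2.0                      # kg, mass of the block (assumed)
g = 9.80                     # m/s^2, near-surface gravitational acceleration
theta = math.radians(30.0)   # ramp angle (assumed)
mu = 0.20                    # kinetic friction coefficient (assumed)

# Each force law contributes one arrow to the diagram:
gravity  = (m * g * math.sin(theta), -m * g * math.cos(theta))
normal   = (0.0, m * g * math.cos(theta))        # cancels gravity's y part
friction = (-mu * m * g * math.cos(theta), 0.0)  # opposes the sliding

# Newton's second law: sum the forces, divide by the mass.
Fx = gravity[0] + normal[0] + friction[0]
Fy = gravity[1] + normal[1] + friction[1]
ax = Fx / m
print(ax, Fy)  # down-slope acceleration ~3.2 m/s^2; zero net force off-ramp
```

The point of the sketch is only the bookkeeping: each "force law" contributes one vector, and the second law operates on the sum.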
As an approximation, it served and serves fairly well for short trajectories of heavy bodies (as in artillery) or for falling heavy bodies, as in Galileo’s Tower of Pisa experiment, but it is well known that if air is present, the relative motion between the air and the falling body produces a drag force. Such drag forces have been extensively studied and characterized empirically, which results in the ability to predict the behavior of spacecraft, ballistic missiles, or warheads re-entering the earth’s atmosphere to largely acceptable accuracies, even without in-flight corrections. Even rockets being launched take such effects into consideration, particularly when addressing the stability of the thrusting rocket at lift-off and during stage separations, when the aerodynamic stability and control of the flight of the vehicle could easily be degraded catastrophically.
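The interplay of gravity and an empirically characterized drag force can be shown in a one-line balance. A minimal sketch with my own assumed numbers (mass, drag coefficient, and area are illustrative, not from the comment): terminal velocity is reached when the quadratic drag force (1/2)ρCdAv² balances the weight mg.

```python
import math

# Toy terminal-velocity estimate (all body parameters assumed for
# illustration): quadratic air drag balancing weight.
m = 80.0    # kg, mass of the falling body (assumed)
g = 9.80    # m/s^2
rho = 1.2   # kg/m^3, air density near the surface
Cd = 1.0    # drag coefficient (assumed, order unity for a blunt body)
A = 0.7     # m^2, cross-sectional area (assumed)

# At terminal velocity: (1/2) * rho * Cd * A * v^2 = m * g
v_terminal = math.sqrt(2 * m * g / (rho * Cd * A))
print(v_terminal)  # ~43 m/s for these assumed numbers
```

Nothing in NLG is contradicted here; the drag law is simply one more force entered into the same second-law sum.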
Had she taken such an undergraduate mechanics course, she would likely have been exposed to a thought experiment, or perhaps conundrum, somewhat related to cause and effect, involving Newton’s second law and the addition of forces. It involves Michelangelo’s assistant, who is tasked with dragging the huge stones needed to carve out his statues. The assistant is aware of Newton’s third law, which states that if two bodies exert forces on each other, the forces have the same magnitude but opposite directions. The assistant argues as follows: “If I try to pull on the stone to drag it for you then, according to Newton, it pulls back on me with an equal and opposite force. Therefore it is hopeless: the forces sum to zero, the stone will not move, and I will just waste my time and effort. Can I just go relax and drink some wine?” This sometimes perplexes undergraduates, but it yields to careful thought about forces acting ON things versus forces exerted BY things: the two forces in a third-law pair act on different bodies, so they never appear together in a single free body diagram. And in the end, the stone moves.
This confusion is evident in the passage saying:
“We add forces (or the numbers that represent forces) when we do calculations. Nature does not ‘add’ forces. For the ‘component’ forces are not there, in any but a metaphorical sense, to be added; and the laws that say they are there must also be given a metaphorical reading…
In interaction a single force occurs—the force we call the ‘resultant’—and this force is neither the force due to gravity nor the electric force. On the vector addition story, the gravitational and the electric force are both produced, yet neither exists.”
This is unfortunate gobbledygook, saying the forces are not “there.” What, specifically, does this mean? What location in space and time does she associate with “there”? Or does she mean the “there” to be in her mind? What is the there, there? Nature is quite simple: one constructs the experiment, nature produces the result, one measures the result, and Nature is never wrong. Whether one predicts the outcome by using some force laws and Newton’s second law or by conjuring up some things called “causal powers,” Nature will give the same result. And if one conjures up “causal powers” for anything but idle conversation, one needs to specify how causal powers are measured, in what quantities, what units are to be used, what standards for those units, how they are combined if they are, etc., so that anyone else can replicate the calculation, if calculation it is. In fact, if conjuring up “causal powers” can alternatively produce quantitative predictions that agree with experimental results, then more power to it. In that case, one would almost surely be able to show the “causal powers” explanation to be equivalent to what is presently in use in physics, at least in the experimental situations in which they agree. Until then, it may only be useful or fun to talk about “causal powers” over cocktails, if anyone still drinks cocktails. Whether a mathematical or mystical power is “there” or not is irrelevant. The only thing relevant is whether the procedure allows non-arbitrary quantitative prediction of the experimental outcome to sufficient accuracy, and in a manner that can be communicated in non-arbitrary fashion to others and replicated by them. [The wording “neither exists” is also gobbledygook. Exist in what sense? Each force acting on a body can be independently measured on board objects acted on by several types of forces, and this is routinely done in any number of experiments.]
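On the point that the “resultant” is nothing more mysterious than the vector sum of the component forces, a minimal numeric sketch (my own assumed numbers, not from the comment): two small charged spheres feel both the gravitational attraction of NLG and the Coulomb repulsion, and the predicted net force is the signed sum.

```python
# Toy sketch (assumed numbers): two 1 kg spheres, each carrying 1
# microcoulomb, held 1 m apart.  Both force laws apply; the net force is
# their vector sum -- collinear here, so simple signed addition suffices.
G = 6.674e-11    # m^3 kg^-1 s^-2, gravitational constant
k = 8.99e9       # N m^2 C^-2, Coulomb constant
m1 = m2 = 1.0    # kg (assumed)
q1 = q2 = 1e-6   # C (assumed)
r = 1.0          # m (assumed)

F_grav = -G * m1 * m2 / r**2   # attractive (negative = pulling together)
F_coul = +k * q1 * q2 / r**2   # repulsive (like charges push apart)
F_net = F_grav + F_coul
print(F_grav, F_coul, F_net)   # gravity is ~10^8 times weaker here
```

Each component can be measured separately (discharge the spheres and only gravity remains; use light test charges and Coulomb dominates), which is the operational sense in which both forces are “there.”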
For an illustrative example of the interplay of “causal powers” and the emerging discipline of physics, one can turn again to NLG. In the early and mid 1600s, Kepler, Galileo, Brahe, and others had accumulated observations of the positions of the planets in the sky. Kepler deduced, simply by numerical analysis, that 1) the planets appeared to follow elliptical orbits and 2) in their orbits they swept out equal areas in equal times. For most of the thousand or so years prior, those who studied objects observed in the sky had speculated on the “causal powers” that moved them, with the heavy favorite being whatever supernatural entity was in favor at the time. Musings by natural philosophers of the time began moving toward something happening that depended on the inverse square of the distance, motivated by the apparent elliptical nature of the observations. There is some historical discussion about credit for NLG and the second law, but regardless of the authorship, NLG allows predictions of the observations to accuracy commensurate with the observations. Equal areas in equal times is a straightforward consequence of NLG and conservation of momentum (angular, in this case). Direct and conclusive experimental confirmation of NLG (and a measurement of the constant G) had to wait until about 1800 and the Cavendish experiment, which isolates the test objects from any other forces and has been repeated with increasing accuracy since then. We have since repeatedly confirmed the predictive capability by launching numerous space vehicles at various distances and over varying times. So far, such predictions have not needed additional “causal powers” for refinement.
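That equal areas in equal times follows from NLG can be checked numerically. The following is a sketch of my own (arbitrary units with GM = 1 and assumed initial conditions, not anything from the comment): integrate an orbit under the inverse-square acceleration and compare the area swept in two equal, widely separated time intervals.

```python
# Numerical sketch (my own, arbitrary units): an orbit under NLG sweeps
# equal areas in equal times.  Units chosen so GM = 1; the initial
# conditions are assumed values giving an elliptical orbit.
GM = 1.0
dt = 1e-4

def area_swept(state, steps):
    """Advance the orbit `steps` steps and return (new state, area swept)."""
    x, y, vx, vy = state
    area = 0.0
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -GM * x / r3, -GM * y / r3      # inverse-square acceleration
        vx += ax * dt; vy += ay * dt             # semi-implicit Euler step
        x += vx * dt; y += vy * dt
        area += 0.5 * abs(x * vy - y * vx) * dt  # thin triangle swept this step
    return (x, y, vx, vy), area

state = (1.0, 0.0, 0.0, 1.2)          # assumed start on an ellipse
state, a1 = area_swept(state, 10000)  # area swept in the first unit of time
state, _  = area_swept(state, 10000)  # skip ahead one unit of time
state, a2 = area_swept(state, 10000)  # area swept in a later, equal interval
print(a1, a2)                          # nearly equal: Kepler's second law
```

The equality is just conservation of angular momentum in disguise: the integrand 0.5|x·vy − y·vx| is half the (conserved) angular momentum per unit mass.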
While on the subject of G, it might be helpful to clarify any confusion expressed in the comments about the difference between g and G. As is usual in physics, when in doubt, check the units. The gravitational constant G has units (by virtue of how it appears in the equations) of meters^3/kg/second^2. These are required so that the equation F = GMm/r^2 results in a force: if you work out the units, you get kg-meters/second^2, which is a mass times an acceleration, which is what the second law defines to be a force. That specific unit of force, kg-m/second^2, is called a Newton. The value of G is available anywhere on the net. In contrast, little g is used to refer to an acceleration, with units of meters/second^2. It typically denotes the acceleration a massive body would feel (equivalently, the gravitational force per unit mass) at a distance from the center of the earth (or some other planet of interest) equal to some mean distance to the surface of the planet. For earth that distance is about 6380 km, and the value of G combined with the mass of the earth results in g being about 9.80 meters/second^2, the familiar value. It is equal to GM/r^2, where M is the earth’s mass, r is the distance to the surface, and G is the gravitational constant. Notice that it does not involve the mass of the body concerned: the acceleration is the same for any massive body, as demonstrated famously by Galileo, allegedly at the Tower of Pisa. The forces exerted on bodies of different masses are different, but the accelerations are all the same, g ≈ 9.80 meters/second^2. This makes calculation of near-earth-surface trajectories of rifle bullets, artillery shells, or other heavy (so that drag is negligible) objects fairly easy and accurate. One can, if one wants, take into account drag, wind direction, etc., but it is straightforward.
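The arithmetic relating G and g is easy to check. A one-line sketch, using standard textbook values for G and the earth's mass together with the 6380 km distance quoted above:

```python
# g = G * M / r^2, using standard values for G and the earth's mass and
# the ~6380 km mean distance to the surface quoted in the text.
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
M_earth = 5.972e24   # kg, mass of the earth
r = 6.380e6          # m, mean distance to the surface

g = G * M_earth / r**2
print(g)  # ~9.8 m/s^2, the familiar value
```

Note that no mass of the falling body appears anywhere; the acceleration is the same for all bodies, as Galileo claimed.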
Now a brief comment on probability and statistics with respect to physics. Most physicists would likely consider themselves frequentists, if one explained the alternatives to them and asked the question. Even when probabilities are explicitly discussed, as in quantum mechanics or statistical mechanics, it is invariably in the context of the number of outcomes of measurements at specific possible values, or the number of allowable states of a system with a given energy. Musing about the “nature” of probability or about “causal powers” does not have any effect on the process of predicting outcomes. There is a peculiar notion involving what are called (by physicists at least) “probability amplitudes,” defined as complex-valued functions which have a defined evolution in time and space and allow calculation of the probability of specific outcomes by adding the complex values and taking the squared magnitude of the result. There are lots of discussions of such things available at varying levels of rigor; some of the references at the end of this comment have fun discussions.
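The amplitude rule can be shown in a toy calculation. This is my own illustration (two equal-weight paths with an assumed relative phase), not anything taken from a reference: add the complex amplitudes first, then take the squared magnitude of the sum.

```python
import cmath
import math

# Toy sketch (my illustration): two paths to the same detector, each
# contributing a complex "probability amplitude"; the probability is the
# squared magnitude of the SUM of the amplitudes, not the sum of the
# individual probabilities.
def two_path_probability(phase_difference):
    a1 = 1 / math.sqrt(2)                                 # path 1 amplitude
    a2 = cmath.exp(1j * phase_difference) / math.sqrt(2)  # path 2 amplitude
    return abs(a1 + a2) ** 2

p_constructive = two_path_probability(0.0)       # paths in phase
p_destructive  = two_path_probability(math.pi)   # paths out of phase
print(p_constructive, p_destructive)
```

Each path alone gives probability 1/2, yet together the result swings between 2 and 0 depending on phase, which is the interference that makes amplitudes "peculiar."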
As to statistics, a quote is famously attributed to Rutherford (whether he said these exact words is the subject of some argument, most arguers agreeing that he would have said it even if he didn’t), to wit: “If your experiment needs statistics, you ought to have done a better experiment.” Rutherford had that luxury because his famous experiment was one of the “big” experiments in physics, namely one designed to decide between two theories with substantially different predictions. (The Michelson-Morley experiment, the gravitational deflection of light, and the advance of the perihelion of Mercury are others.) For Rutherford’s experiment, the two competing theories had to do with whether the positive and negative charges known by experiment to comprise atoms were distributed uniformly throughout the general volume of the atom (the so-called plum pudding model) or whether the charges were segregated in some volumetric way, with positive and/or negative charges concentrated in a small volume. His experiment shot alpha particles (relatively heavy particles with two protons and two neutrons, thus a positive charge) at a gold foil, which of course contained gold atoms, each with a fairly large number of positive charges, 79. The two theories predicted very different results. The diffuse plum pudding model predicted virtually all of the particles would travel straight through with very little scattering to off angles, but the more concentrated the positive charge within the atom, the more scattering outcomes should occur in backward directions. (These predictions come, of course, via Coulomb’s law, coupled with Newton’s second and third laws.) What Rutherford observed was far more backward-scattered alpha particles than the plum pudding model could account for, so many in fact that he “didn’t need statistics” to tell which theory was more wrong, so perhaps he was justified in his opinion.
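A back-of-envelope calculation shows why backward scattering demands a concentrated charge. This sketch is my own (a typical ~5 MeV alpha energy is assumed): in a head-on Coulomb collision, all the kinetic energy has become potential energy at the distance of closest approach, which therefore bounds the size of the region holding the positive charge.

```python
# Back-of-envelope sketch (assumed alpha energy): distance of closest
# approach of an alpha particle (charge 2e) to a gold nucleus (charge 79e)
# in a head-on collision, where kinetic energy = Coulomb potential energy.
k = 8.99e9        # N m^2 C^-2, Coulomb constant
e = 1.602e-19     # C, elementary charge
E = 5.0e6 * e     # J, ~5 MeV alpha kinetic energy (assumed, typical value)

d = k * (2 * e) * (79 * e) / E   # solve E = k * q_alpha * q_gold / d for d
print(d)  # ~4.5e-14 m, versus an atomic size of ~1e-10 m
```

For alphas to bounce backward at all, the full positive charge must sit inside a region thousands of times smaller than the atom, which is exactly the nuclear picture the plum pudding model could not supply.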
The Michelson-Morley experiment was similarly aimed at a decisive question: whether the earth’s motion through the ether, a possible medium required for light propagation (the orbital speed is of order 30 km/s), would result in measuring different velocities of light perpendicular versus parallel to that motion. The various calculations of what the difference should have been predicted a shift between interference fringes (it was a large interferometer) on the order of 0.4 fringe. The design of the device was such that its precision of fringe measurement was about 0.01 fringe, so they expected to be well able to measure any effect. The null result of course is history, and subsequent improvements in the resolution of the experiment have failed to show measurable differences in the speed of light; it became virtually impossible to account for the smaller and smaller bounds with any reasonable theory including an ether. The eventual result was the Lorentz transformation, leading to special relativity, but there is a good bit more to that story than space allows.
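The ~0.4 fringe figure can be reproduced from the classic ether-theory estimate. A sketch with assumed apparatus numbers of the right order (the 1887 instrument had an effective arm length of about 11 m and used visible light):

```python
# Classic ether-hypothesis prediction for the Michelson-Morley fringe
# shift on rotating the apparatus: shift = 2 * L * (v/c)^2 / lambda.
# Apparatus numbers are assumed, of the right order for the 1887 device.
L = 11.0        # m, effective arm length (assumed)
lam = 5.5e-7    # m, wavelength of visible light (assumed)
v = 3.0e4       # m/s, earth's orbital speed
c = 3.0e8       # m/s, speed of light

shift = 2 * L * (v / c) ** 2 / lam
print(shift)    # ~0.4 fringe, versus the ~0.01 fringe instrument resolution
```

Since the predicted effect was some forty times the instrument's resolution, the null result was decisive without any statistical hairsplitting.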
The general point is that while statistics seems useful to many physicists, it generally relates more to characterizing repeated outcomes, i.e., frequencies, than to speculating about cause and effect or “causal powers.” It should be emphasized that this perspective is peculiar to physics as described here; other disciplines likely have other predilections.
Well, this “comment” has become a bit long for a typical blog comment. Typical blog comments seem brief and focused on specific, relatively small (in the scheme of things) nits to pick. There are surely many such nits one might pick here and dig into, but it never hurts to try to encompass the larger picture. Any of the nits one might find would itself merit similarly long discussion, with frequent use of examples, but such things seem to go by the wayside online.
Some recommended references. None except the book by Ms. Sobel would be considered a “popular” science book. Even QED, allegedly the transcription of lectures given to a lay audience, is a fairly tough read for non-math-inclined folks:
1. The Feynman Lectures on Physics. Available free online at https://www.feynmanlectures.caltech.edu/ (the site also has recordings of his Messenger Lectures).
2. QED: The Strange Theory of Light and Matter. Princeton U. Press. Also available digitally in PDF online, but I prefer not to list the location, though it is easily findable.
3. On the Shoulders of Giants. Stephen Hawking ed. Many revile Hawking’s views on supernatural issues, but this is a collection of what he considered the most important documents in physics, in the original words of the authors. Reading the thoughts directly in the words of Galileo, Copernicus, Einstein, and others provides illumination to the patient reader of the evolution of physics.
4. God Created the Integers. Hawking again in a similar collection of his “greatest” math writings in the original words of the authors. Useful for background on the interplay between physics and math, but requires patience sometimes.
5. Galileo’s Daughter: A Historical Memoir of Science, Faith, and Love. Dava Sobel. A rare and intimate look at the struggles of the forerunner of modern quantitative science in the context of his devout faith and loving support of his daughter.
6. Perhaps also Quantum Mechanics and Path Integrals. Feynman. There is an amended edition out that claims to have fixed some typos and made equations clearer. The first couple of chapters are accessible to a casual, but math and science conversant, reader. The remainder of the book is much more intricately detailed. His take on the interplay between quantum mechanics and probability is worth absorbing.
7. Anything you can get your hands on by Galileo. His contribution to the evolution of quantitative science is greatly under-appreciated and his thought paths are delicious. Particularly fun is his famous/infamous Dialogue Concerning the Two Chief World Systems, which is widely available free online.