Travel time and temporal displacement

There’s a basic distinction between the travel distance (or flight length) and the displacement. There should be a corresponding distinction between the travel time (or flight time) and the temporal displacement – which I’ll call the distimement (dis-time-ment vs. dis-place-ment).

The travel time is the total duration of the trip, and the travel distance is the total distance traveled (odometer length). The displacement is the result of the change in position, the resultant position vector. Similarly, the distimement is the result of the change of the point in time, the resultant time position vector.

A round-trip begins and ends at the same place; its displacement is zero. However, the place is not exactly the same because changes are always occurring – you can’t step in the same river twice (Heraclitus). Temporally, a round-trip begins at a certain point in time – an event – and returns to the ‘same’ event so that its time displacement is zero. It’s not exactly the same event but a parallel event such as opening the door to the same home.

Sometimes in my career when I went on official travel, I would take a personal side trip at my own expense. In order to make the official travel voucher as straightforward as possible, I would begin and end my side trip at the same time of day on a subsequent day. For example, if my official business was complete at 5:00 pm Thursday, instead of taking a flight home I would switch to personal time and go off a few days on my own. Then at 5:00 pm on, say, Saturday, I would switch back to official time and fly home on official travel. The time displacement of the side trip was zero, which made it easy to remove the personal time from the official per diem allowance.

Speed is the ratio of the distance traveled to the travel time, that is, distance over duration. Velocity is the ratio of the change in position – the displacement – to the travel time. Legerity should be defined analogously as the ratio of the change in temporal position – the distimement – to the distance traveled.

The dependent variable in velocity is the displacement. Similarly, the dependent variable in legerity is the distimement. The independent variable in velocity is the travel time for a single motion but for multiple motions the independent variable is the magnitude of the distimement, not the total travel time.

Similarly, the independent variable in legerity is the travel distance for a single motion but for multiple motions the independent variable is the magnitude of the displacement, not the total distance traveled. Thus the resultant of several velocities or legerities is their vector sum.

If someone travels 30 km North in 4 hr, then 40 km East in 3 hr, their displacement is 50 km Northeast, 53 degrees clockwise from North, and their distimement is 5 hr Northeast, on a course 37 degrees clockwise from North. Their velocities are 30 km/4 hr = 7.5 km/hr North and 40 km/3 hr = 13.3 km/hr East.

The trip took 3 + 4 = 7 hr. So is their resultant velocity 50 km/7 hr? Or is the resultant velocity the displacement divided by the magnitude of the distimement, 50 km/5 hr = 10 km/hr? It is the latter. So the magnitude of the distimement is not really new; it is already implicit in how resultant velocities are computed.

Similarly, their legerities are 4 hr/30 km = 8 min/km North and 3 hr/40 km = 4.5 min/km East. The distance traveled is 30 + 40 = 70 km. Is their resultant legerity 5 hr/70 km? No, it’s the distimement divided by the magnitude of the displacement, 5 hr/50 km = 6 min/km.

One might define the effective velocity of a trip as the displacement divided by the total travel time. And the effective legerity of a trip might be defined as the distimement divided by the total travel distance. But these are different from the resultant vectors of velocity and legerity.
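As a check on the arithmetic of this example, here is a minimal sketch in Python (the representation of the legs is only illustrative) that computes the displacement, the distimement, and the resultant and effective rates for the 30 km North / 40 km East trip:

import math

# Each leg of the trip: (east, north) displacement in km, and its duration in hr
legs = [((0.0, 30.0), 4.0), ((40.0, 0.0), 3.0)]

# Displacement (km): the vector sum of the legs
disp_e = sum(v[0] for v, t in legs)
disp_n = sum(v[1] for v, t in legs)

# Distimement (hr): each leg's duration pointed along that leg's direction
dtim_e = sum(t * v[0] / math.hypot(*v) for v, t in legs)
dtim_n = sum(t * v[1] / math.hypot(*v) for v, t in legs)

travel_distance = sum(math.hypot(*v) for v, t in legs)   # 70 km (odometer)
travel_time = sum(t for v, t in legs)                     # 7 hr

disp_mag = math.hypot(disp_e, disp_n)    # 50 km
dtim_mag = math.hypot(dtim_e, dtim_n)    # 5 hr

print(disp_mag / dtim_mag)               # resultant speed: 10 km/hr
print(60 * dtim_mag / disp_mag)          # resultant pace: 6 min/km
print(disp_mag / travel_time)            # effective speed: about 7.1 km/hr
print(60 * dtim_mag / travel_distance)   # effective pace: about 4.3 min/km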

Galilei doesn’t lead to Lorentz

I haven’t mentioned this before, since I have a solution for it, but there is a problem with deriving the Lorentz transformation from the Galilei transformation. If one uses the spatial Galilei (Galilean) transformation, the gamma factor leads to the Lorentz transformation. But if one uses the temporal Galilei (Galilean) transformation, the gamma factor does not lead to the Lorentz transformation.

As usual, the standard transformation of reference frames begins with two frames in uniform relative motion along one axis (usually called x). Here we take the spatial axis to be the r-axis, which parallels the spatial axis of motion.

The two frames are differentiated by primed and unprimed letters. They coincide at time t = 0 and their relative speed is v. In the Galilei (Galilean) transformation, there is a universal time that is available to all reference frames.

The Galilei (Galilean) transformation is: r′ = r – vt and t′ = t.

To derive the Lorentz transformation one includes the gamma factor with the first equation. Why not include the gamma factor with the second equation? Because it won’t work.

For the transformation t′ = t, the reverse transformation is simply t = t′.

If we include the gamma factor in these equations, as in the usual derivations, and follow the usual procedure of combining the Galilei (Galilean) transformation with its reverse, the result is:

t′ = γt and t = γt′.

As before, multiply these together and solve for γ: tt′ = γ²tt′.

Divide out tt′: 1 = γ².

And so γ = ±1, which is not the gamma of the Lorentz transformation.
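A tiny symbolic sketch in Python (using sympy) makes the same point: applying the gamma factor to the temporal Galilei transformation and its reverse only permits γ = ±1.

import sympy as sp

gamma = sp.Symbol('gamma')
t = sp.Symbol('t', positive=True)

t_prime = gamma * t          # temporal transformation with a gamma factor
t_back = gamma * t_prime     # reverse transformation with the same factor

# Requiring the round trip to return t forces gamma**2 = 1
print(sp.solve(sp.Eq(t_back, t), gamma))   # [-1, 1]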

So the Galilei (Galilean) transformation is not sufficient to derive the Lorentz without qualification or modification. There should be (1) a qualification that the temporal transformation cannot be used to derive the Lorentz transformation, and (2) an expansion to include the co-Galilei (co-Galilean) transformation, in which the temporal transformation leads to the co-Lorentz transformation. For details, see here and here.

Transformations for time and space

The standard transformation of reference frames begins with two frames in uniform relative motion along one axis (usually called x). Here we take the spatial axis to be the r-axis, which parallels the spatial axis of motion. Similarly, the temporal axis is taken to be the t-axis, which parallels the temporal axis of motion.

One aspect of the exposition here is that the notation is indifferent as to the existence of other dimensions. If they exist, they are orthogonal to the direction of motion, whether spatial or temporal, and their corresponding values are the same for both frames. One can generalize the results here to other directions by rotation.

The two frames are differentiated by primed and unprimed letters. They coincide at time t = 0 and their relative speed is v or pace u = 1/v. The main difference between speed and pace is that time (duration) is the independent variable for speed and (travel) length is the independent variable for pace.

We’re assuming the existence of what I’m calling a characteristic (modal) rate, which is a speed or pace that is the same for all observers within a context such as physics or a mode of travel. The characteristic speed, c, or pace, b = 1/c, may take any positive value, and may represent a maximum, a minimum, or a typical value, depending on the context. In physics, the speed of light traveling in a vacuum is the characteristic speed.

The trajectory of a reference particle (e.g., photon or probe vehicle) that travels at the characteristic speed follows these equations in both frames:

Speed: r = ct or r/c = t and r′ = ct′ or r′/c = t′.

Pace: r = t/b or br = t and r′ = t′/b or br′ = t′.

Consider a point event such as a flash of light that is observed from each reference frame. How are its coordinates in each frame related?

There are two basic relations: r′ = r – vt and t′ = t – ru. However, these relations assume different independent variables so they need to be kept separate.

To find the time and space transformations for each of these requires a characteristic rate, which allows the inter-conversion of space and time. There are thus two transformations, one for speed and one for pace, though we’ll convert pace to speed for convenience:

Speed: r′ = r – vt = r (1 – v/c) and

t′ = t (1 – v/c) = t – (r/c)(v/c) = t – r(v/c²).

Pace: t′ = t – ru = t – r/v = t (1 – c/v) and

r′ = r (1 – u/b) = r (1 – c/v) = r – ct(c/v) = r – t(c²/v).

Note the factors (1 – v/c) and (1 – c/v) transform the unprimed to primed coordinates. Note also these limits:

Speed: t′ = t – r(v/c²) approaches t as c approaches infinity, as in the Galilei transformation.

Pace: r′ = r – t(c²/v) approaches r as c approaches zero (it cannot equal zero).
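To see these limits numerically, here is a small Python sketch (with purely illustrative values) that holds an event's coordinates fixed and varies the characteristic speed:

# Illustrative event coordinates and relative speed (arbitrary units)
r, t, v = 1.0, 1.0, 0.5

# Speed form: t' = t - r(v/c**2) approaches t as c grows without bound
for c in (10.0, 100.0, 1000.0):
    print(c, t - r * v / c**2)

# Pace form: r' = r - t(c**2/v) approaches r as c approaches zero
for c in (0.1, 0.01, 0.001):
    print(c, r - t * c**2 / v)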

Lorentz transformation

For this we take the previous transformations and include a factor, γ, in the transformation equation for the direction of motion:

Speed: r′ = γ (r – vt) = γr (1 – v/c),

Pace: t′ = γ (t – ru) = γ (t – r/v) = γt (1 – c/v),

with equal values for the other corresponding primed and unprimed coordinates. The inverse transformations are then:

Speed: r = γ (r′ + vt′) = γr′ (1 + v/c),

Pace: t = γ (t′ + r′u) = γ (t′ + r′/v) = γt′ (1 + c/v).

Multiply each corresponding pair together to get:

Speed: rr′ = γ²rr′ (1 – v²/c²),

Pace: tt′ = γ²tt′ (1 – c²/v²),

Dividing out rr′ and tt′ yields:

Speed: 1 = γ² (1 – v²/c²),

Pace: 1 = γ² (1 – c²/v²).

Solving for γ leads to:

Speed: γ = 1/√(1 – v²/c²), which applies if |v| < |c|,

Pace: γ = 1/√(1 – c²/v²), which applies if |v| > |c|,

and that is the complete Lorentz transformation.
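As a numerical check of the speed case (the standard Lorentz transformation), here is a short Python sketch with arbitrary example values confirming that this γ makes the quantity r² – c²t² the same in both frames:

import math

c = 1.0              # characteristic speed (units chosen so that c = 1)
v = 0.6 * c          # relative speed, |v| < |c|
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

r, t = 3.0, 5.0      # an arbitrary event in the unprimed frame

# Speed-case transformation with the gamma factor
r_prime = gamma * (r - v * t)
t_prime = gamma * (t - r * v / c**2)

# Both lines print the same value: the interval is invariant
print(r**2 - c**2 * t**2)
print(r_prime**2 - c**2 * t_prime**2)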

What is the uniformity of nature, pt. 2

In 1738 David Hume wrote of the “principle that the course of nature continues always uniformly the same.” (A Treatise of Human Nature, Bk I, pt iii, sec 6). In 1748 he wrote in An Enquiry Concerning Human Understanding:

When we have long enough to become accustomed to the uniformity of nature, we acquire a general habit of judging the unknown by the known, and conceiving the former to resemble the latter. On the strength of this general habitual principle we are willing to draw conclusions from even one experiment, and expect a similar event with some degree of certainty, where the experiment has been made accurately and is free of special distorting circumstances. (First Enquiry, Section 9, Footnote 1)

Hume is considered to be the first to use the term “uniformity of nature.” He described this as a “general habitual principle” since he justified it only via habit and custom rather than necessity. He stated that “even one experiment,” that is, a sample of one, is sufficient to enable us to draw conclusions and “expect a similar event” elsewhere or in the future because we judge that the unknown resembles the known.

The expression “the Uniformity of Nature” became familiar later from James Hutton in his 1785 Theory of the Earth and from Charles Lyell’s Principles of Geology in the 1830s. William Whewell termed this usage “uniformitarianism”.

I have written before (here) about how the principle of the uniformity of nature (PUN) was adopted by J.S. Mill in the 19th century for his explication of induction.

In 1953 Wesley C. Salmon wrote a paper on PUN with this abstract:

The principle of uniformity of nature has sometimes been invoked for the purpose of justifying induction. This principle cannot be established “a priori”, and in the absence of a justification of induction, it cannot be established “a posteriori”. There is no justification for assuming it as a postulate of science. Use of such a principle is, however, neither sufficient nor necessary for a justification of induction. In any plausible form, it is too weak for that purpose, and hence, it is insufficient. Since a justification which does not rely upon this principle can be given, it is not necessary. (Philosophy and Phenomenological Research 14 (1):39-48)

That is the position I have taken in this blog.

Induction and Laws of Form

I wrote before here about the book Laws of Form. I’ve written recently about conceptual induction here. This post connects the two.

In the book Laws of Form, Appendix 2, G. Spencer-Brown interprets the calculus of indication for logic and finds a problem when it is interpreted existentially. To avoid this problem he introduces “Interpretive theorem 2” which states (p. 132): “An existential inference is valid only in so far as its algebraic structure can be seen as a universal inference.”

That is, one should interpret an existential proposition in universal terms in order to make valid inferences. One way of doing this is to refine the terms and concepts of the existential proposition so that it expresses a universal proposition. That is what people such as Aristotle, Francis Bacon, and William Whewell meant by induction. John P. McCaskey offers several examples in the history of science in which a term was redefined to narrow its extension and ensure that inductive inferences were true by definition: cholera, electrical resistance, and tides.

This conceptual understanding of induction preceded the inferential understanding adopted by J.S. Mill and common today, in which induction is an inference from particular instances. Mill introduced the principle of the uniformity of nature as a necessary major premise of all inductive inferences. Others since De Morgan have tried to base inductive inference on probability. The problem of induction, now traced back to David Hume, arose from the inferential version of induction. There is no such problem for the older conceptual induction.

Induction with uniformity

John P. McCaskey has done a lot of research (including a PhD dissertation) on the meaning of induction since ancient times. He keeps some of his material online at http://www.johnmccaskey.com/. A good summary is Induction Without the Uniformity Principle.

McCaskey traced the origin of the principle of the uniformity of nature (PUN) to Richard Whately in the early 19th century. In his 1826 “Elements of Logic” he wrote that induction is “a Syllogism in Barbara with the major Premiss suppressed.” This made induction an inference for the first time.

There are two approaches to inferential induction. The first is enumeration in the minor premise, which was known to the Scholastics:

(major) This magnet, that magnet, and the other magnet attract iron.
(minor) [Every magnet is this magnet, that magnet, and the other magnet.]
(conclusion) Therefore, every magnet attracts iron.

The second is via uniformity in the major premise, which was new:

(major) [A property of the observed magnets is a property of all magnets.]
(minor) The property of attracting iron is a property of the observed magnets.
(conclusion) Therefore, the property of attracting iron is a property of all magnets.
(conclusion) Therefore, all magnets attract iron.

The influential J.S. Mill picked this up and made it central to science. Mill wrote in 1843:

“Every induction is a syllogism with the major premise suppressed; or (as I prefer expressing it) every induction may be thrown into the form of a syllogism, by supplying a major premise. If this be actually done, the principle which we are now considering, that of the uniformity of the course of nature, will appear as the ultimate major premise of all inductions.”

Mill held that there is one “assumption involved in every case of induction . . . . This universal fact, which is our warrant for all inferences from experience, has been described by different philosophers in different forms of language: that the course of nature is uniform; that the universe is governed by general laws; and the like . . . [or] that the future will resemble the past.”

So Mill generalized Whately’s major premise into a principle of the uniformity of nature. McCaskey writes:

“This proposal is the introduction into induction theory of a uniformity principle: What is true of the observed is true of all. Once induction is conceived to be a propositional inference made good by supplying an implicit major premise, some sort of uniformity principle becomes necessary. When induction was not so conceived there was no need for a uniformity principle. There was not one in the induction theories of Aristotle, Cicero, Boethius, Averroës, Aquinas, Buridan, Bacon, Whewell, or anyone else before Copleston and Whately.”

McCaskey goes on: “De Morgan put all this together with developing theories of statistics and probability. He saw that, when induction is understood as Whately and Mill were developing it, an inductive inference amounts to a problem in ‘inverse probability’: Given the observation of effects, what is the chance that a particular uniformity principle is being observed at work? That is, given Whately’s minor premise that observed instances of some kind share some property (membership in the kind being taken for granted), what are the chances that all instances of the kind do? De Morgan’s attempt to answer this failed, but he made the crucial step of connecting probabilistic inference to induction. The connection survives today, and it would have made little sense (as De Morgan himself saw) were induction to be understood in the Baconian rather than Whatelian sense of the term.”

That’s how the problem of induction was born, which is essentially the problem of justifying the principle of the uniformity of nature. But this depends on an inferential understanding of induction instead of the older conceptual understanding.

What is the uniformity of nature, pt. 1

A uniformity principle says, What is true of some is true of all. This is usually applied to nature: What is true of some of nature is true of all of nature. Or, What happened in one experiment will happen if the experiment is repeated by anyone else. Since J.S. Mill this principle of the uniformity of nature (PUN) has been considered necessary for science. That is, science reasons as follows:

Proposition P is true of some.
What is true of some is true of all. (The uniformity principle)
Therefore, proposition P is true of all.

This syllogism is completely logical and thoroughly outrageous. Yes, it allows something which science needs: that an experiment done on Monday in Australia would be just as valid if it were done on Tuesday in Africa. But it allows far more.

A uniformity principle is the presumption that things unobserved were, are, and will be similar to those observed. But what things? Under what conditions? What suffices to make things and conditions similar enough? Is it the color? The size or shape? Generally, no. What makes things and conditions similar enough? There’s the rub.

A uniformity principle would allow a sample of one or a minuscule amount of X to represent all the X. But if we know anything it is that what is true of some things is not necessarily true of all things. Does a sample of one white powder represent all white powders? Is everything that is true of some rocks true of all rocks? Does everything that happens on Mondays happen on Tuesdays, too? No, no, and no.

Now it’s true that a sample of any amount of copper represents all copper. But that begs the question: what is copper? If something looks like copper, is that enough? No, its chemical structure must be that of copper. But how did chemists find out what the chemical structure of copper is? After experimentation, chemists defined copper by a certain chemical structure. That was the real inductive step.

We can reasonably say: In some cases, what is true of some is true of all. But what cases are those? That is where the real work of science takes place. No PUN intended.

Uniformity without a principle

I have written about uniformity before, such as here and here. This post looks at the need for a principle of uniformity.

David Hume’s principle of the uniformity of nature (PUN) asserts that unobserved cases closely resemble previously observed cases. This principle concerns the character of natural populations based on a sample as well as the sequence of natural events in the future based on past observations. Because of this principle, science can count on uniformity when making inferences and determining laws of nature. So they say.

But PUN is unnecessary overkill. It supports all sorts of false generalizations. If swans in Europe are observed to be white, PUN supports the assertion that all swans everywhere are white. This turns out to be either a bad definition of swans or a false statement, since there are black swans in Australia. If it rains on your birthday every year for 10 years, does that mean it will always rain on your birthday? PUN says Yes.

The problem is that PUN endorses far too much. It endorses good and bad inductions. And it does nothing to distinguish good from bad inductions or to improve inductions.

Instead of PUN science needs good definitions and conditions. Whatever fits unambiguous definitions and meets specific conditions is uniformly the same – by definition. Good definitions are ones that delineate the essence and only the essence of something. Good conditions are ones that specify no more or less than what is necessary for something to exist or to happen.

That the sun has been observed to rise every morning for thousands of years does not guarantee that it will rise tomorrow – unless one defines the sun in such a way as to include the property of its rising relative to the earth. That may or may not be a good definition. It will take other observations and definitions to determine how good it is.

PUN should be discarded. Science works best with the right definitions and conditions, without a PUN.

Six dimensions of space-time

If one travels a distance X east, then goes a distance Y north, that is the same as going a distance √(X² + Y²) northeast. But if one travels for a time X east, then goes for a time Y north, is that the same as going for a time √(X² + Y²) northeast? No, the travel time would be (X + Y) in that case. This is because time is conceived as a magnitude, without regard for direction, and so is cumulative.
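In code the contrast is simply the difference between a Euclidean combination and a plain sum (illustrative numbers):

import math

x, y = 3.0, 4.0
print(math.hypot(x, y))   # distances combine as a resultant: 5.0
print(x + y)              # travel times simply accumulate: 7.0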

I’ve mentioned before that there are apparently six dimensions of space-time (or time-space) but there is more to it. For one thing we need to distinguish two ways that a point event relates to multiple dimensions. The first way is that a point event has a location in multi-dimensional space-time. The second way is that a point event may be the resultant of a series of motions in different dimensions. For space these two ways are equivalent but for time they are different.

The first way has been developed in detail: for 3D space the components are combined with a Euclidean metric and for 3D space + 1D time a hyperbolic metric. The spacetime (invariant) interval between two point-events is:

s² = Δr² – c²Δt² = Δr₁² + Δr₂² + Δr₃² – c²Δt².

If time is a vector, it should have components with a Euclidean metric, too:

s² = Δr² – c²Δt² = Δr² – c²Δt₁² – c²Δt₂² – c²Δt₃².

But this is misleading because we don’t ordinarily think of time that way. Instead, we think of time as something flowing from one motion to the next, which would mean time is cumulative. So a time vector would be understood along the lines of a taxicab metric:

s² = Δr² – c²Δt² = Δr² – c²(Δt₁ + Δt₂ + Δt₃)²,

where each Δt component is understood as a distance (and so is non-negative). Otherwise the absolute values would be taken:

s² = Δr² – c²Δt² = Δr² – c²(|Δt₁| + |Δt₂| + |Δt₃|)².

But this is misleading, too, since it describes a series of motions and their resulting time displacement rather than the components of a space-time location. So we should go back to the Euclidean metric and think of the time components differently.

What do the components of time mean if they aren’t the flow of time for a series of motions? Temporal components should be understood like distances, only measured by time at a constant speed. For a vehicle traveling at a constant speed (or pace), space and time are very similar. Multidimensional time isn’t the cumulative flow of time but duration by direction for a vehicle traveling in space-time.

In the end, the six-dimensional space-time (invariant) interval is what would be expected:

s² = Δr² – c²Δt² = Δr₁² + Δr₂² + Δr₃² – c²Δt₁² – c²Δt₂² – c²Δt₃².
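A brief Python sketch (with made-up components and a made-up characteristic speed) shows the difference between reading the time components with the Euclidean metric, as in the interval above, and reading them cumulatively as a flow of time:

# Illustrative components of a separation in six-dimensional space-time
dr = (3.0, 4.0, 0.0)    # spatial components, e.g. in km
dt = (0.3, 0.4, 0.0)    # temporal components, e.g. in hr (duration by direction)
c = 5.0                 # characteristic speed in matching units, e.g. km/hr

dr2 = sum(x**2 for x in dr)                  # 25.0
dt2_euclidean = sum(x**2 for x in dt)        # 0.25
dt2_cumulative = sum(abs(x) for x in dt)**2  # 0.49 (taxicab-style reading)

print(dr2 - c**2 * dt2_euclidean)    # the interval above: 18.75
print(dr2 - c**2 * dt2_cumulative)   # what a cumulative flow of time gives: 12.75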

It’s just that we need to be careful not to confuse time here with a cumulative flow of time.

Miracles and uniformity

The week before Christmas is a good time of year to write about miracles because it’s a time to be reminded of the meaningfulness of miracles. But what about their truth? Doesn’t the uniformity of nature make miracles impossible?

Thomas Aquinas said a miracle is ‘beyond the order commonly observed in nature’ (Summa Contra Gentiles III), but David Hume went further and defined a miracle as ‘a violation of the laws of nature’ (Of Miracles, 1748). Hume also claimed that scientific induction required the uniformity of nature, so on his telling, miracles undermined science.

However, Hume failed to establish the uniformity of nature on rational grounds. The future does not necessarily resemble the past. The most he could say was that the uniformity of nature is a matter of custom and habit. (There’s a convenient summary of his argument here: Probable reasoning has no rational basis.)

Others have also been unable to establish the uniformity of nature on rational grounds. This failure led to Karl Popper’s argument that an induction can at best be regarded as not yet shown untrue, and that one counterexample can falsify any induction. However, the history of science shows an unwillingness to abandon well-accepted science because of one or a few anomalies.

Does scientific induction really require the uniformity of nature? No, that is a misunderstanding of science that goes back to Scholasticism, which was revived in the 19th century by Richard Whately and John Stuart Mill. See John P. McCaskey’s writings on The History of Induction.

Induction is based on classification, not a principle of uniformity. Observation and experiment lead to the definition of a class by a uniformity. Then by definition other objects or events in the same class possess the same uniformity, whether in the past, present, or future. As I wrote here, science studies uniformity but that is far from requiring uniformity everywhere at all times.

It is better to define a miracle by what it is – unique – rather than what it is not – uniform. A miracle is a highly unique event or result, especially one attributed to divine agency. Since science studies uniformity, not uniqueness, it doesn’t have much to contribute about miracles. But uniqueness is studied by other disciplines such as history, philosophy, theology, and literature – that is, the humanities, not the sciences.

Miracles are by their nature very unique and significant. They fall outside of uniformity but since there is no valid principle of uniformity, that is not a problem.