iSoul Time has three dimensions

Dissident scientists

There is a curious paradox about the science community. On the one hand, scientists have a reputation for independent thinking; on the other hand, the science community likes to present a solid front. The latter is closer to the truth: most scientists are conformists. It is well known from the history of science that dissent is not welcomed in the scientific community. Dissidents are roundly condemned and ostracized, until they turn out to be correct. Then history is rewritten from the perspective of the new paradigm (this is called Whig history).

Also, along with so many aspects of the modern world, science has become increasingly politicized. Since much research funding is government controlled, it’s not surprising that politics gets involved. So perhaps we should pay more attention to dissident scientists. Below are some links.

Big-bang Dissidents

Global Warming Dissidents

Goethean Scientists

Darwin Doubters

Intelligent Design Scientists

Creation Scientists

Uniqueness and uniformity

If all things were completely unique, we would have no way of identifying what they are. If all things were completely identical, or uniform, we would have no way of distinguishing them. We conclude that the world is somewhere in between: everything is a combination of the unique and the uniform.

If all events were completely independent, or unrelated, we would have no way of identifying what they are. If all events were completely identical, we would have no way of distinguishing them. We conclude that all events are a combination of the independent and the identical.

So it is not possible to have two completely unique or identical individuals. Nor is it possible to have two completely unrelated or identical events.

In statistics, we assume the least about events we don’t know about: we assume they are independent and make the least possible inference. We assume we know nothing other than what we are given in the data. We take multiple trials and use the law of large numbers to infer safe conclusions. Or we adopt a maximum entropy prior distribution as a minimal assumption.
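This minimal approach can be sketched in a few lines of Python (the "true" rate of 0.3 is an invented value for illustration): treat the trials as independent, assume nothing about the underlying process, and let the law of large numbers pull the running estimate toward the unknown rate.

```python
import random

# Simulate independent trials of an event with an unknown "true" rate
# (0.3 here, though the estimator makes no use of that knowledge).
random.seed(0)
n = 100_000
hits = sum(random.random() < 0.3 for _ in range(n))
estimate = hits / n

# By the law of large numbers, the estimate settles near the true rate.
print(round(estimate, 2))
```

Nothing beyond independence is assumed; the safety of the conclusion comes entirely from the number of trials.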

In natural science, we assume the most about things we don’t know about. This is based on an assumption of the uniformity of nature: the natural world we don’t know is like the natural world we do know. That is, we assume everything we know is all we need to know – until we know more. Then we revise and make the same assumption again.

If we begin natural science with no prior knowledge and pick up a rock, we conclude that everything is rock. If we then step in a puddle, we conclude that everything is a rock or a puddle. If we let go of the rock and it falls to the ground, we conclude that all rocks fall to the ground just like that rock.

In history, the less we assume about events we don’t know, the better. Events are assumed to be unique though somehow related to other events. Through historical study we infer the relation of events. So history is more like statistics than natural science.

Natural history takes the approach of natural science toward studying the past. It assumes that all events in the past are like events in the present. So the past and the present are alike and history is the repetition of similar events. This is an anti-historical approach to history because it ignores or downplays the uniqueness of events.


Science and statistics

Here is a story about two statisticians and two scientists. They are given a problem: what are the frequencies of the letters in English texts? The junior statistician has no research budget whereas the senior statistician has a modest research budget. Similarly, the junior scientist has no research budget but the senior scientist has a large research budget.

The junior statistician has no budget to collect frequency data and, being a careful statistician, makes no assumptions about what is unknown. So the conclusion is made that the frequency of each letter is 1/26th. A note is added that if funds were available, a better estimate could be produced.
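The junior statistician’s answer can be written out as a minimal Python sketch (the variable names are illustrative): with no data, the maximum-entropy estimate spreads the probability evenly over the alphabet.

```python
import string

# With no data at all, the least-assuming estimate gives every one of the
# 26 letters the same probability: 1/26.
letters = string.ascii_lowercase
uniform = {ch: 1 / len(letters) for ch in letters}

print(round(uniform["e"], 4))  # 1/26, approximately 0.0385
```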

The senior statistician has a modest budget and so arranges to collect a random sample of English texts. Since English is an international language, a sample of countries is selected at random. Written English goes back about 500 years so a sample of years is selected at random. A list of genres is made and a sample of genres is selected at random. Then libraries and other sources are contacted to collect sample texts. They are scanned and analyzed for their letter frequencies. The letter frequencies of the sample are used as the unbiased estimate of the population frequencies. Statistical measures of uncertainty are also presented.

The junior scientist has no budget to collect data but happens to own a CD with classic scientific texts. With a little programming, the letter frequencies on the CD are determined. These frequencies are presented as the frequencies of all English texts. No measures of uncertainty are included. It is simply assumed that English texts are uniform so any sample is as good as another. However, a caveat is made that the conclusion is subject to revision based on future data collection.
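The junior scientist’s “little programming” might look like the following Python sketch (the sample sentence stands in for the CD’s contents, which are not specified in the story):

```python
from collections import Counter

def letter_frequencies(text: str) -> dict[str, float]:
    """Relative frequency of each a-z letter in text, case-insensitive."""
    letters = [ch for ch in text.lower() if "a" <= ch <= "z"]
    counts = Counter(letters)
    total = len(letters)
    return {ch: counts[ch] / total for ch in counts}

sample = "The quick brown fox jumps over the lazy dog"
freqs = letter_frequencies(sample)
print(round(freqs["o"], 3))  # 4 of the 35 letters are "o"
```

The code is straightforward; the scientific question is whether the sample it is run on can stand for all English texts.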

The senior scientist has a large research budget from a government grant. Arrangements are made to collect a massive amount of data from electronic sources such as the Internet and several large libraries. The written texts are scanned and combined with the electronic sources into a large database and then the letter frequencies are determined. These frequencies are announced as the letter frequencies for all English texts. No measure of certainty is included. It is not mentioned that future data collection could lead to a revised conclusion.

The senior scientist collects a prize for successfully completing the project. The others are forgotten.

Who had the best approach? Why aren’t scientists and statisticians more alike?


Superseded science

Science is an iterative process and so theories that were once widely accepted may later become superseded, but what does that entail? A superseded theory may be thoroughly undermined by its consistent failure to match expectations. Contrary to falsificationism, it usually takes more than a few anomalies to bring down a theory. Theories are part of larger paradigms which have many variations.

Superseded theories are not only no longer accepted; they are virtually lost, except in general references or obscure historical works. Also, those promoting victorious theories may make exaggerated claims about what has been superseded. There may be confusion about what has been superseded and what has not. The purpose of this essay is to clarify these things in a few cases.


While the story of science may be said to begin with the pre-Socratics, modern empirical science started with Bacon and Galileo. The first superseded theory then was Ptolemaic astronomy as the advantages of Copernicanism were realized. Copernicanism was subsequently superseded, too, though this is often not acknowledged. What did these two theories propose?

Ptolemy wrote a treatise in the 2nd century AD called the Almagest, which described an astronomy in which the earth was the center of circular movements of the moon, the planets and the stars. In order to maintain this paradigm but conform more closely with observation, epicycles (circles on top of circles) were added to these circular orbits. In ancient and medieval times circles were considered the most perfect geometric form. Because the celestial bodies were idealized, there was a strong desire to understand them in terms of circles. The use of epicycles enabled people to maintain this idealization and conform to observation.

The geocentric paradigm of Ptolemy and those who followed him was challenged by the heliocentric astronomy of Copernicus (1473-1543). Galileo (1564-1642), Kepler (1571-1630), and Newton (1643-1727) proposed theories that were further developments of the heliocentric paradigm.

Copernicus wrote that the sun was the center of circular movement of the planets and the stars. Galileo showed how the heliocentric paradigm could reduce the number of epicycles needed. Kepler showed that the planetary orbits were elliptical. Newton placed the planetary orbits into a comprehensive theory of motion by an inverse square law of gravitation.

So Copernicus was vindicated in the sense that the heliocentric paradigm proved more fruitful than the geocentric paradigm. However, his assertions that the orbits were circular and that the sun was the center of the motion of the stars have been superseded.

Has the geocentric paradigm been completely superseded? In so far as it requires the earth to be the center of movement of celestial bodies other than the moon, yes. But there are other senses of the term “center”, notably a center of mass. The validity of a universe with a center of mass is disputed but has not been superseded and so should be considered an open question.


Aristotle’s biological writings were quite empirical for his time. He classified organisms into genus and species and speculated on the purpose (teleology) of the diversity of life. Hippocrates and Galen wrote about medicine. Jumping ahead to the 18th century, Linnaeus developed a biological taxonomy in 1735, and variations of this have been in use ever since. Like those before him, Linnaeus conceived of species as fixed life forms within a hierarchy.

This paradigm of life forms emphasized completeness and stasis: all possible forms of life were included, and change over time was excluded. So it was a problem when fossils were found that seemed to be from organisms that no longer existed, because that would imply gaps in the hierarchy of life, which was considered a waste or loss that God would not allow. Or it meant that species had changed.

In the 19th century, the main current of science abandoned both completeness and stasis: many but not all possible life forms have existed and these have changed over time via what Darwin’s followers called evolution. The pendulum of science had swung in the other direction.

Darwin’s criticisms were leveled at a simple version of his opponents’ paradigm in which no species ever changed. But this paradigm was revised to allow some change over time within limits. Darwin and his followers ignored these revisions. The old theory was superseded but the old paradigm was not.

Darwinism prevailed and marched on without opposition until the latter half of the 20th century, when some scientists started promoting “creation science.” They began working with a paradigm that has never been fully explored: species that have changed within limits of their kind. Initially ignored or vilified by establishment science, they have grown into a worldwide movement.


Dangers of extrapolation

The naïve conception of the figure of the Earth as a plane is locally correct everywhere on Earth but globally false. Observers in every place on Earth see evidence of a finite plane that ends at the horizon. The simplest generalization of these observations is to say they are all seeing the same plane of the Earth. So the naïve conception is empirical. But extrapolating these observations leads to a false generalization.

What is wrong? It is fine as a starting point but it should be challenged. Consider new kinds of evidence, not just more of the same kind of evidence. Also, consider other conceptual schemes. The concept of a spherical Earth was first proposed on aesthetic grounds by ancient Greeks who held the sphere as the simplest, most perfect form. Given these two rival conceptions, a way is needed to judge between them. The empirical way is to look for evidence for one that is evidence against the other.

The naïve conception illustrates the dangers of extrapolation. It is so easy to extrapolate but it can be so wrong. Extrapolation often works to a limited degree but increasingly diverges until it is simply mistaken. Extrapolation is like an analogy: it is valid to a point but rarely valid at all points. We should always look for the limits of extrapolation.

What is a guideline for the limits of extrapolation? Consider a time series such as mean temperature for 100 years. Could we extrapolate it 1000 years? That would be much longer than the original time series, which seems excessive. How about 100 years? Maybe, though that would be pushing it. For 10 years? That seems reasonable. So extrapolation for a period shorter than the span of the original data is about the most that can be expected.
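How fast extrapolation error can grow is easy to demonstrate with a toy example in Python (the series and its coefficients are invented for illustration): fit a straight line to 100 points of a gently curving series, then extrapolate 10, 100, and 1000 steps beyond the data.

```python
# A gently curving "true" process, unknown to the extrapolator.
def true_series(t):
    return 0.01 * t + 0.0005 * t * t

n = 100
xs = list(range(n))
ys = [true_series(t) for t in xs]

# Ordinary least-squares straight-line fit to the 100 observed points.
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Extrapolation error at 10, 100, and 1000 steps past the data.
errors = []
for horizon in (10, 100, 1000):
    t = n - 1 + horizon
    errors.append(abs((slope * t + intercept) - true_series(t)))
    print(horizon, round(errors[-1], 1))
```

The line fits the observed range closely, yet the error at 1000 steps out is hundreds of times larger than at 10 steps out: extrapolation diverges increasingly until it is simply mistaken.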

Every observation has a date on which it took place. The range of all observation dates is the range of recorded history, which is less than ten thousand years. This is a problem for natural history, which extrapolates to time periods much longer than the period of observation. The solution is to assign risk to extrapolations: the greater the extrapolation, the greater the risk of error. Unfortunately, this is not done; in natural history the risk of error from extensive extrapolation is simply ignored.

May 2008

Convergent induction

History of Chemistry, Simplified

The simplest universe has only one kind of substance. That was the first scientific theory, proposed by Thales (ca. 585 BC), who stated that the origin of all matter is water. Then there was Anaximenes, who held that everything in the world is composed of air. Xenophanes said it’s all earth. No, it’s all fire, said Heraclitus.

Empedocles combined them all in his theory that all matter is made up of four elemental substances – water, air, earth, and fire – in fixed quantities. The Pythagoreans taught that all things are composed of contraries. Aristotle combined the four elements of Empedocles and the contraries of the Pythagoreans and said that every substance is a combination of two sets of opposite qualities – hot and cold, wet and dry – in variable but balanced proportions.

Leucippus and Democritus disagreed with this approach and took the opposite position that all matter is made up of imperishable, indivisible entities called atoms. The atomic approach languished until it was revived many years later, during the Renaissance. It was further developed by Dalton and others in the 19th century. Substances composed of a single kind of atom, called elements, came to be seen as the building blocks of all substances. The list of elements was expanded into the Periodic Table, which is key to the very successful science of chemistry today.

The so-called Ockham’s Razor, which may be stated “entities must not be multiplied beyond necessity”, is usually understood as a preference for simplicity. But it ignores trade-offs between, for example, a plurality of substances and a plurality of entities. Which is simpler, Thales’ single substance in many forms or Democritus’ single form but many entities?

Convergent Induction

There are three related lessons to be taken from this brief historical review: (1) science starts with simple, extreme positions; (2) for every simple, extreme position there is an opposite simple, extreme position; and (3) science develops complex, intermediary positions between simple extremes.

(1) It is well-known that science follows a principle of simplicity (parsimony) which leads it to start with overly simplified ideas, find their empirical weaknesses, and then gradually add complexity. As A.N. Whitehead said, “Seek simplicity and distrust it”.

(2) Simplicity comes in pairs. This is demonstrated in the case of chemistry by the extreme of one or a few substances and the opposite extreme of many atoms. There is the simplicity of a few unique entities vs. the simplicity of many uniform entities. There is also the simplicity of simple stasis vs. simple dynamics. These contrary simplicities have loomed large in the history of science and of many other subjects.

(3) While science begins with simple, extreme positions, it does not stay there. It progresses toward complexity. In an analogue to the mathematical theorem that any bounded increasing (or decreasing) sequence is convergent, simple extremes provide the bounds that ensure progressive induction converges.
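The theorem alluded to is the monotone convergence theorem for sequences, which can be stated as follows:

```latex
\textbf{Monotone convergence theorem.} If a sequence $(a_n)$ is increasing
and bounded above, then it converges, and
\[ \lim_{n \to \infty} a_n = \sup_{n} a_n . \]
(Dually, a decreasing sequence bounded below converges to its infimum.)
```

In the analogy, the simple extremes play the role of the bounds, and each added complexity plays the role of a step in the monotone sequence.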

Thus science proceeds via convergent induction, which is bounded by simple extremes and seeks empirical adequacy by progressively converging toward a complex mean. The process is progressive in that each step introduces a complexity not present before. Convergence is ensured by bounding the progression with simple extremes.

It is most usual to begin with one extreme and work in the direction of the other extreme rather than to oscillate between opposites in a convergent way. In general, there are two strategies for inductive logic: (1) assume the most about what is unknown and (2) assume the least about what is unknown. Natural science takes approach (1) and statistical science takes approach (2).

There are two directions for each of these approaches. Statistical science may be approached from the direction of maximal or minimal knowledge. For example, if there is knowledge of the physical source of variability such as by examining a pair of dice, then a frequentist direction may be best. If little is known except some empirical data that are gradually available over time, then a Bayesian direction may be best.
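The Bayesian direction can be sketched for the earlier letter-frequency problem (a Python sketch; the choice of a uniform Dirichlet(1) prior and the sample text are illustrative assumptions, not part of the story above): start from a uniform prior over the 26 letters and update it as data gradually arrives.

```python
import string
from collections import Counter

# A uniform Dirichlet(1) prior over the 26 letters. The posterior mean for
# a letter after seeing some text is (count + 1) / (total + 26),
# i.e. Laplace smoothing.
letters = string.ascii_lowercase
counts = Counter()

def update(text: str) -> None:
    """Fold a new batch of data into the running counts."""
    counts.update(ch for ch in text.lower() if ch in letters)

def posterior_mean(ch: str) -> float:
    total = sum(counts.values())
    return (counts[ch] + 1) / (total + len(letters))

# With no data, the estimate is the uniform prior: 1/26 for every letter.
before = posterior_mean("e")

update("letter frequencies shift the estimate away from uniform")
after = posterior_mean("e")
print(round(before, 4), round(after, 4))
```

Before any data the estimate is the junior statistician’s 1/26; each batch of data moves it toward the observed frequencies, so the two approaches are endpoints of one process.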

Natural science also has two directions. The most that can be assumed about what is unknown is that it is like what is known. But that may be either because it is a different form of the same thing or because it is a different combination of the same constituents. The former direction is top-down, macrocosmic, whereas the latter direction is bottom-up, microcosmic or atomic. The atomic direction has proved to be the most fruitful for natural science.

Since the convergent is a kind of mean between the initial extremes, that leads to the question of whether it would be possible to follow means instead of extremes. One could start with a simple mean between the extremes and then adjust it to another mean as need be. Perhaps this would be more efficient.

October 2010


Today the word science usually means naturalistic science. Historically, naturalism was not dominant in modern science until the nineteenth century, when it was promoted by those who were called “naturalists” (not to be confused with a naturalist as someone who studies natural history). These naturalists promoted the idea that science was limited to naturalism. They were adept at taking leadership of science at a critical time when it was becoming professionalized and supported by government largesse.

Since naturalism is a false philosophy, science today is alienated from truth. The intelligent design movement challenges the idea that science is limited to naturalism. The creation science movement, which is related but independent, denies naturalism and much of the science built upon naturalism, including evolution and deep time. While these movements are small, it is important to remember that they stand in a tradition that goes back to the first centuries of modern science. The science of Galileo, Newton, and other great scientists of the past was not naturalistic science.

These posts are on science and the foundations of science:

Convergent Induction, Dangers of Extrapolation, Superseded Science, Science and Statistics, Uniqueness and Uniformity, and Dissident Scientists.

Religion and science

Secularists have it all wrong in their desire either to separate religion and science completely or to make religion submit to science. Look again at the major world religions:

  •  they last for thousands of years
  •  they are the basis for whole civilizations
  •  their adherents accept martyrdom rather than give them up

Science doesn’t come close to any of these. One can easily argue that if science isn’t tied to a religion, it is doomed. Techno-wizardry will not save it in the long run.

March 2009


Here are various links worth exploring.

Seeking Answers?

Religion and Public Life

Help the Persecuted

Defending Liberty

Review of Gillquist’s Becoming Orthodox

On Peter Gillquist’s Becoming Orthodox (Conciliar Press, 1992)

This book presents an engaging story and defense of the transition of a group of evangelicals into the Antiochian Orthodox Church. Parts One and Three tell the story, and Part Two presents a defense of Orthodox positions on issues sensitive to many evangelicals. The key point in their journey, he says, was letting history judge them rather than the other way around. This meant giving priority to the faith and practice of the ancient, undivided church.

The group of leaders that Peter represents did careful historical research, were open to what they found, and were willing to change if necessary.  Their guiding desire was to find the one, true church if possible. They ended up starting their own Evangelical Orthodox Church that eventually merged with (if that’s the right term) the Antiochian Orthodox Church.

He addresses several issues that are hot buttons for some evangelicals: tradition, liturgy, calling priests “father”, the Virgin Mary, and the cross. Their background is apparently the anti-liturgical wing of evangelicals, who are suspicious of all tradition and liturgy because they associate them with dead ritualism. But there are many evangelicals in Anglican, Lutheran, and Methodist communions, for example, who don’t have such attitudes. Admittedly, though, the high church “smells and bells” type of liturgy he defends is the liturgy of only a few evangelicals.

Read more →