iSoul In the beginning is reality

Tag Archives: Philosophy Of Science

Philosophical justification and critique of science.

Inverse causes

I’ve written about Aristotle’s four causes before (such as here and here). This also continues the discussion of observers and travelers, here.

Forward kinematics refers to the use of the kinematic equations of a robot to compute the position of the end-effector (the device at the end of a robotic arm) from specified values for the joint parameters. Forward kinematics is also used in computer games and animation. Inverse kinematics makes use of the kinematic equations to determine the joint parameters that provide a desired position for each of the robot’s end-effectors.

In other words, forward kinematics is for finding out what motion happens given particular inputs, whereas inverse kinematics is for determining how to move to a desired position. In terms of the four Aristotelian causes or explanatory factors, forward kinematics is concerned with the efficient and material causes, and inverse kinematics is concerned with the final and formal causes.
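To make the contrast concrete, here is a minimal sketch in Python (my illustration, using a hypothetical two-link planar arm; the names and values are illustrative, not taken from any robotics library). Forward kinematics computes where the arm ends up from given joint angles; inverse kinematics works backward from a desired end position to joint angles that reach it.

```python
import math

def forward(l1, l2, t1, t2):
    """Forward kinematics: joint angles -> end-effector position (x, y)."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

def inverse(l1, l2, x, y):
    """Inverse kinematics: desired position -> joint angles (one of two solutions)."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    t2 = math.acos(max(-1.0, min(1.0, c2)))  # clamp to guard against rounding
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

# Forward: given the motions (joint angles), where does the device end up?
print(forward(1.0, 1.0, math.pi / 4, math.pi / 4))
# Inverse: given a desired end position, which motions achieve it?
print(inverse(1.0, 1.0, 1.0, 1.0))
```

The forward computation starts from the mechanism and its inputs; the inverse computation starts from the objective and works backward to the motions that achieve it.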

The surprising thing is that these two kinds of causes (higher and lower) are inverses of one another.

Causes
  Higher: Final | Formal
  Lower: Efficient / Mechanism | Material

From the lower perspective one begins with some material. From the higher perspective one begins with the objective. From the lower perspective forces and laws make things happen. From the higher perspective following plans gets the job done.

One can see rôles parallel to the causes:

Rôles
  Traveler: Set the destination | Plan the trip
  Observer: Observe the motion | See the material

And in robotics (or animation):

Kinematics
  Inverse: Pick the end position | Plan the motions
  Forward: Make the motions | Pick the device

One could say that forward kinematics is for scientists and inverse kinematics is for engineers, since the latter incorporate objectives and designs in their work while the former focus on observation alone. To go beyond observation, scientists would have to open up to formal and final causes.

Design illustrated

This post continues thoughts about design, last posted here.

Here is a description of how concrete is made, from the Portland Cement Association:

In its simplest form, concrete is a mixture of paste and aggregates, or rocks. The paste, composed of portland cement and water, coats the surface of the fine (small) and coarse (larger) aggregates. Through a chemical reaction called hydration, the paste hardens and gains strength to form the rock-like mass known as concrete.

The key to achieving a strong, durable concrete rests in the careful proportioning and mixing of the ingredients. A mixture that does not have enough paste to fill all the voids between the aggregates will be difficult to place and will produce rough surfaces and porous concrete. A mixture with an excess of cement paste will be easy to place and will produce a smooth surface; however, the resulting concrete is not cost-effective and can more easily crack.

The design in this case is the proportion of ingredients in the mixture. It might happen that the ingredients formed naturally but they would be in the correct proportion only by design. That is, the particular application entails a goal, which the design meets.

Certainly concrete can and does happen naturally in aggregate rock formations. But it does not meet a need without a design. And that doesn’t happen naturally. Roads built with concrete only happen because engineers and construction crews built them. There’s nothing natural about that.

Science in the center

There are many different musical temperaments that have been used to tune musical instruments over the centuries. They all have their advantages and disadvantages. But there is one musical temperament that is optimally acceptable: the equal temperament method in which the frequency interval between every pair of adjacent notes has the same ratio. This produces a temperament that is a compromise between what is possible and what is agreeable to hear.
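For illustration, here is a minimal sketch (my example, assuming twelve-tone equal temperament and the common reference pitch A4 = 440 Hz): each semitone step multiplies the frequency by the same ratio, the twelfth root of two, so that twelve steps give exactly an octave.

```python
# Twelve-tone equal temperament: every pair of adjacent notes differs
# by the same frequency ratio, 2 ** (1 / 12), roughly 1.05946.
RATIO = 2 ** (1 / 12)
A4 = 440.0  # reference pitch in Hz (a common convention)
NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

for i, name in enumerate(NAMES):
    print(f"{name}: {A4 * RATIO ** i:.2f} Hz")
# After twelve equal steps the frequency has exactly doubled (the octave),
# which is why this temperament works the same way in every key.
```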

Science faces many situations like the challenge of musical temperament. Conventions and methods need to be adopted, and there are multiple options, each with its advantages and disadvantages. There are those who promote one method and those who promote another, often the opposite, method. Should science pick one and force everyone to conform? Or should science find a compromise of some sort?

There is a way in the middle that is a compromise between extremes and alternatives. It is a conscious attempt to avoid extremes and biases, and seek a solution that is the most acceptable to all. This is science in the center, a science that minimizes bias. Although it might be called “objective,” that obscures the fact that it is a conscious choice.

I previously wrote about the need for a convention on the one-way speed of light. Science in the center would avoid a bias toward either direction of light and choose a one-way speed that is in the middle of all the possible speed conventions. This is the Einstein convention, which is part of his synchronization method.
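As a rough sketch of what that choice involves (my illustration, using Reichenbach’s ε-parameterization of synchrony, which the post itself does not invoke): experiments fix only the round-trip speed of light, and how that round trip is split into two one-way speeds depends on a chosen parameter ε; the Einstein convention is the symmetric choice ε = 1/2.

```python
def one_way_speeds(c, epsilon):
    """Split a fixed round-trip light speed c into conventional one-way speeds.
    epsilon in (0, 1) is the Reichenbach synchrony parameter; only the round
    trip is measurable without a synchronization convention."""
    return c / (2 * epsilon), c / (2 * (1 - epsilon))

c = 299_792_458.0  # two-way (round-trip) speed of light, m/s
for eps in (0.25, 0.5, 0.75):
    outbound, inbound = one_way_speeds(c, eps)
    print(f"epsilon = {eps}: outbound {outbound:,.0f} m/s, return {inbound:,.0f} m/s")
# Only epsilon = 1/2 (the Einstein convention) makes the two one-way speeds
# equal -- the choice in the middle of the possible conventions.
```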

Science in the center includes not biasing classifications either toward “lumping” or “splitting.” Nor should explanations of behavior be biased toward “nature” or “nurture.” The particulars of each case should determine the outcome, not a preference for one side or the other. If there’s any default answer, it’s in the center between such extremes.

Occam’s razor is understood to prescribe qualitative parsimony but allow quantitative excess. This is as biased as its opposite would be: to prescribe quantitative parsimony but allow qualitative excess. Science in the center would avoid the bias that each of these has by prescribing a compromise: there should be a balance between the qualitative and the quantitative. Neither should be made more parsimonious than the other. All explanatory resources should be treated alike; none should be more abundant or parsimonious than any other. I’ve called this the New Occam’s Razor, and it is an example of science in the center.

Event-structure metaphors

This continues the posts here and here and here based on George Lakoff and Mark Johnson’s book Philosophy in the Flesh (Basic Books, 1999).

The Location Event-Structure Metaphor
Locations → States
Movements → Changes
Forces → Causes
Forced Movement → Causation
Self-propelled Movements → Actions
Destinations → Purposes
Paths (to destinations) → Means
Impediments to Motion → Difficulties
Lack of Impediments to Motion → Freedom of Action
Large, Moving Objects (that exert force) → External Events
Journeys → Long-term, Purposeful Activities

The States are Locations metaphor has a dual, the Attributes are Possessions metaphor, in which attributes are seen as objects one possesses. The difference is a figure-ground shift. Grounds are stationary and figures are moveable relative to them. The Attributes are Possessions metaphor combines with Changes are Movements and Causes are Forces to form a dual Event-Structure system.

The Object Event-Structure Metaphor
Possessions → Attributes
Movements of Possessions (gains or losses) → Changes
Transfer of Possessions (giving or taking) → Causation
Desired Objects → Purposes
Acquiring a Desired Object → Achieving a Purpose

Perception requires a figure-ground choice. Necker cubes show that figure-ground organization is a separable dimension of cognition.

Necker cube

Figure and ground are aspects of human cognition. They are not features of objective, mind-independent reality. [p.198]

Location metaphor: Causation is the Forced Movement of an (Affected) Entity to a New Location (the Effect). In short, Causation as Forced Movement of an Affected Entity to an Effect.

Object metaphor: Causation is the Transfer of a Possessible Object (the Effect) to or from an (Affected) Entity. In short, Causation as Transfer of an Effect to an Affected Entity.

In the Location metaphor, the affected entity is the figure; it moves to a new location (ground). In the Object metaphor, the effect is the figure; it moves to the affected party (ground).

What this means is that there is no conceptualization of causation that is neutral between these two! [p.199]

The Moving-Activity Metaphor
Things That Move → Activities
Reaching a Destination → Completion of the Activity
Locations → States
Forces → Causes
Forced Movement (or Prevention of Movement) → Causation
Impediments to Motion → Difficulties

The Action-Location Metaphor
Being in a Location → An Action
Forces → Causes
Destinations → Purposes
Closeness to a Location → “Closeness” to an Action
Forcing Movement to a Location → Causing an Action
Stopping a Traveler from Reaching a Location → Preventing an Action

The Existence (or Life) as Location Metaphor
Coming Here → Becoming
Going Away → Ceasing to Exist
Forced Movement Here → Causing to Exist
Forced Movement Away → Causing to Cease to Exist

The Causal Path Metaphor
Self-Propelled Motion → Action
Traveler → Actor
Locations → States
A Lone Path → A Natural Course of Action
Being on the Path → Natural Causation
Leading To → Results In
The End of the Path → The Resulting Final State

Each particular theory of causation picks one or more of our ordinary types of causation and insists that real causation only consists of that type or types. [p.226]

Ordinary vs. scientific perspectives: It is not that one is objectively true while the other is not. Both are human perspectives. One, the nonscientific one, is literal relative to human, body-based conceptual systems. The other, the scientific one, is metaphorical relative to human, body-based conceptual systems. [p.232]

What remains [after eliminating simpleminded realism] is an embodied realism that recognizes that human language and thought are structured by, and bound to, embodied experience. In the case of physics, there is certainly a mind-independent world. But in order to conceptualize and describe it, we must use embodied human concepts and human language. [p.233]

Causes and functions

This post continues other posts (see here and here) on the relevance of Aristotle’s four causal factors.

Call the higher causes the final and formal causes, and the lower causes the efficient (mechanistic) and material causes. Aristotle argued that the higher causes are more important. Early scientists argued that we could not know them regarding nature and so should look only for efficient and material causes.

The lower causes are synchronic, spatial causes expressed in theories, and are most appropriate for the natural sciences. The higher causes are diachronic, temporal causes expressed in narratives, and are most appropriate for the social sciences and history.

There are some parallels between the four causes and the psychologist Carl Jung’s four functions: sensing, intuition, feeling, and thinking, especially as modified by Myers and Briggs’ MBTI:

Function groups: judgment, perception
  Upper causes: final || feeling (judgment) | formal || intuition (perception)
  Lower causes: efficient || thinking (judgment) | material || sensing (perception)

Aristotle focused on the perceptive functions, sensing and intuition (SN in MBTI); his philosophy combines matter and form through the material and formal causes and so is called hylomorphic (from Greek hylē, matter + morphē, form). The lack of judging functions may reflect Aristotle’s realism.

Modern science focuses on the efficient cause (the forces and mechanisms) and the material cause; it could be called hylodynamic after Greek hylē, matter + dynamis, power. Here the sensing-thinking (ST) personality dominates.

Intelligent design advocates are trying to return formal causes to science. They tend to focus on information theory and so on the formal and efficient causes; such science could be called dynamorphic after the Greek dynamis, power + morphē, form. Here the intuitive-thinking (NT) personality dominates.

Other possibilities pair the final cause with feeling. A telohylic (SF) science might produce the detailed narratives of historians. A telomorphic (NF) science might produce the wide-ranging narratives of theologians. A telodynamic (TF) science lacks perceptive functions and would suit anti-realists.

Beyond Occam’s razor

This continues a previous post on Occam’s razor, which, as was pointed out there, is an arbitrary and biased principle. With what should it be replaced?

Every science has at least two schools of thought. These reflect well-known tendencies to ascribe more significance to one of two contrary explanatory factors. For example, there are lumpers and splitters in every classification endeavor. In every historical science there are those who emphasize continuous change and those who emphasize discontinuous change. Social sciences have their nature-nurture poles.

From a larger perspective, there is the question of subjective vs. objective methods. Which comes first, facts or theories? That is, do scientists discover facts and develop theories to explain them, or do they construct theories and seek facts that follow from them? Is the “view from nowhere” better than a self-aware view from somewhere? Are final and formal causes (explanatory factors) more important than efficient (mechanistic) and material ones? Is the spiritual more important than the physical?

It should be clear by now that these positions are all partly true but too extreme. The truth is somewhere in the middle or in a combination of the extremes. Instead of expecting one side to win and the other side to lose, we should allow them both equally. Let them compete. Let them compromise. Let them jointly come up with something that is acceptable to both.

There are a few examples of friendly competition. The corpuscular theory of light developed by Isaac Newton and the wave theory of light developed by Christiaan Huygens competed for years; the quantum mechanical solution is to accept both. The atomic theory developed by John Dalton competed with the known natural kinds of substances; the periodic table joins them in a combination of common atoms and distinct chemical elements.

There are a few examples of monopolistic science in which one side sought to marginalize and ban the other side completely. Since the late 19th century those espousing gradualist theories of historical science have worked to banish their one-time colleagues who promote the importance of discontinuous change and difference. The control of educational positions, funding, and prestige has enabled much one-sided and weak science to persist. Science is the loser in these wars.

Each science needs to work out the compromises and combinations that are best for it. A genuine pluralism is possible and should be sought earnestly. The replacement for Occam’s razor is a razor that allows multiple criteria, minimizing both quantities and qualities without sacrificing accuracy.

Pluralism in science

I previously wrote about pluralism here.

Science is usually considered monist in various ways: there is one scientific truth, one scientific reality, one scientific method. This leads to having one scientific theory for each subject, if at all possible.

The single scientific method is the easiest to critique: each branch of science has its own methods, and the attempts to articulate a single method for all sciences have failed. One commentator finds the opposite extreme more accurate (the “anything goes” of Feyerabend).

Science is also divided about reality between idealists and materialists, just as philosophers are. At one moment the materialists have the upper hand and there’s only matter, no mind, no spirit, nothing else. At another moment matter disappears in a blaze of equations and theoretical particles, which are considered the true reality.

Science is even divided about truth. Does the environment or genetics have the dominant influence? Is it nature or nurture that predominates? Is light a wave or a particle? Is quantum mechanics or relativity correct? The answer is that it’s both.

Moreover, science needs both sides of the truth. Biology requires both law and chance. How is the mix of law and chance determined? Is that by law or chance? Evolutionists often imply that chance determines the mix of law and chance but that ignores the extent of law. It’s law and chance all the way down. That is pluralism.

Pluralism is the acknowledgement that truth is many. It does not mean truth is individual or multitudinous. That would be equivalent to relativism. Pluralism acknowledges a small number of truth types, that is, truthful variations that are all on the same level. And these variations are not resolved on another level; they are a quality of truth itself.

Pluralism does acknowledge the unity of ultimate truth. The variations of truth are not so different as to be in complete opposition. Further, there is a complementarity about the variations of truth; they fit together. There are limits to pluralism but truth doesn’t always have a single answer.

What is the uniformity of nature, pt. 2

In 1738 David Hume wrote of the “principle that the course of nature continues always uniformly the same.” (A Treatise of Human Nature, Bk I, pt iii, sec 6). In 1748 he wrote in An Enquiry Concerning Human Understanding:

When we have long enough to become accustomed to the uniformity of nature, we acquire a general habit of judging the unknown by the known, and conceiving the former to resemble the latter. On the strength of this general habitual principle we are willing to draw conclusions from even one experiment, and expect a similar event with some degree of certainty, where the experiment has been made accurately and is free of special distorting circumstances. (First Enquiry, Section 9, Footnote 1)

Hume is considered to be the first to use the term “uniformity of nature.” He described this as a “general habitual principle” since he justified it only via habit and custom rather than necessity. He stated that “even one experiment,” that is, a sample of one, is sufficient to enable us to draw conclusions and “expect a similar event” elsewhere or in the future because we judge that the unknown resembles the known.

The expression “the Uniformity of Nature” became familiar later from James Hutton in his 1785 Theory of the Earth and from Charles Lyell’s Principles of Geology in the 1830s. William Whewell termed this usage “uniformitarianism”.

I have written before (here) about how the principle of the uniformity of nature (PUN) was adopted by J.S. Mill in the 19th century for his explication of induction.

In 1953 Wesley C. Salmon wrote a paper on PUN with this abstract:

The principle of uniformity of nature has sometimes been invoked for the purpose of justifying induction. This principle cannot be established “a priori”, and in the absence of a justification of induction, it cannot be established “a posteriori”. There is no justification for assuming it as a postulate of science. Use of such a principle is, however, neither sufficient nor necessary for a justification of induction. In any plausible form, it is too weak for that purpose, and hence, it is insufficient. Since a justification which does not rely upon this principle can be given, it is not necessary. (Philosophy and Phenomenological Research 14 (1):39-48)

That is the position I have taken in this blog.

Induction and Laws of Form

I wrote before here about the book Laws of Form. I’ve written recently about conceptual induction here. This post connects the two.

In the book Laws of Form, Appendix 2, G. Spencer-Brown interprets the calculus of indications for logic and finds a problem when it is interpreted existentially. To avoid this problem he introduces “Interpretive theorem 2,” which states (p. 132): “An existential inference is valid only in so far as its algebraic structure can be seen as a universal inference.”

That is, one should interpret an existential proposition in universal terms in order to make valid inferences. One way of doing this is to refine the terms and concepts of the existential proposition so that it expresses a universal proposition. That is what people such as Aristotle, Francis Bacon, and William Whewell meant by induction. John P. McCaskey offers several examples in the history of science in which a term was redefined to narrow its extension and ensure that inductive inferences were true by definition: cholera, electrical resistance, and tides.

This conceptual understanding of induction preceded the inferential understanding adopted by J.S. Mill and common today, in which induction is an inference from particular instances. Mill introduced the principle of the uniformity of nature as a necessary major premise of all inductive inferences. Others since De Morgan have tried to base inductive inference on probability. The problem of induction, now traced back to David Hume, arose from the inferential version of induction. There is no such problem for the older conceptual induction.

Induction with uniformity

John P. McCaskey has done a lot of research (including a PhD dissertation) on the meaning of induction since ancient times. He keeps some of his material online at http://www.johnmccaskey.com/. A good summary is Induction Without the Uniformity Principle.

McCaskey traced the origin of the principle of the uniformity of nature (PUN) to Richard Whately in the early 19th century. In his 1826 “Elements of Logic” he wrote that induction is “a Syllogism in Barbara with the major Premiss suppressed.” This made induction an inference for the first time.

There are two approaches to inferential induction. The first is enumeration in the minor premise, which was known to the Scholastics:

(major) This magnet, that magnet, and the other magnet attract iron.
(minor) [Every magnet is this magnet, that magnet, and the other magnet.]
(conclusion) Therefore, every magnet attracts iron.

The second is via uniformity in the major premise, which was new:

(major) [A property of the observed magnets is a property of all magnets.]
(minor) The property of attracting iron is a property of the observed magnets.
(conclusion) Therefore, the property of attracting iron is a property of all magnets; that is, all magnets attract iron.

The influential J.S. Mill picked this up and made it central to science. Mill wrote in 1843:

“Every induction is a syllogism with the major premise suppressed; or (as I prefer expressing it) every induction may be thrown into the form of a syllogism, by supplying a major premise. If this be actually done, the principle which we are now considering, that of the uniformity of the course of nature, will appear as the ultimate major premise of all inductions.”

Mill held that there is one “assumption involved in every case of induction . . . . This universal fact, which is our warrant for all inferences from experience, has been described by different philosophers in different forms of language: that the course of nature is uniform; that the universe is governed by general laws; and the like . . . [or] that the future will resemble the past.”

So Mill generalized Whately’s major premise into a principle of the uniformity of nature. McCaskey writes:

“This proposal is the introduction into induction theory of a uniformity principle: What is true of the observed is true of all. Once induction is conceived to be a propositional inference made good by supplying an implicit major premise, some sort of uniformity principle becomes necessary. When induction was not so conceived there was no need for a uniformity principle. There was not one in the induction theories of Aristotle, Cicero, Boethius, Averroës, Aquinas, Buridan, Bacon, Whewell, or anyone else before Copleston and Whately.”

McCaskey goes on: “De Morgan put all this together with developing theories of statistics and probability. He saw that, when induction is understood as Whately and Mill were developing it, an inductive inference amounts to a problem in ‘inverse probability’: Given the observation of effects, what is the chance that a particular uniformity principle is being observed at work? That is, given Whately’s minor premise that observed instances of some kind share some property (membership in the kind being taken for granted), what are the chances that all instances of the kind do? De Morgan’s attempt to answer this failed, but he made the crucial step of connecting probabilistic inference to induction. The connection survives today, and it would have made little sense (as De Morgan himself saw) were induction to be understood in the Baconian rather than Whatelian sense of the term.”
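To give a rough sense of what such an “inverse probability” treatment looks like, here is a minimal sketch using Laplace’s rule of succession with a uniform prior (my choice of toy model, not De Morgan’s own derivation): if all n magnets observed so far attract iron, the probability that the next magnet does is (n + 1) / (n + 2).

```python
from fractions import Fraction

def prob_next_attracts(n_observed):
    """Laplace's rule of succession with a uniform prior over the unknown
    proportion: probability that the next instance has the property,
    given that all n observed instances have it."""
    return Fraction(n_observed + 1, n_observed + 2)

for n in (1, 3, 10, 100):
    print(f"{n} observed -> P(next magnet attracts iron) = {float(prob_next_attracts(n)):.3f}")
# The probability approaches 1 but never reaches it, and it concerns only the
# *next* instance. Moving from "the next magnet" to "all magnets" requires a
# further assumption -- which is just where a uniformity principle, and the
# problem of justifying it, comes in.
```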

That’s how the problem of induction was born, which is essentially the problem of justifying the principle of the uniformity of nature. But this depends on an inferential understanding of induction instead of the older conceptual understanding.