iSoul In the beginning is reality

# Tag Archives: Logic

nature and application of logic

# Logic as arithmetic

George Boole wrote on “the laws of thought,” now known as Boolean Algebra, and started the discipline known as Symbolic Logic. A different George, George Spencer Brown, wrote on “the laws of form,” which presented an arithmetic system underlying logic. Below are two symbolic logics equivalent to Boolean algebra that resemble ordinary arithmetic in some respects. To resemble arithmetic in other respects, use the Galois field of order 2, GF(2). Zero is taken as representing false, and one as true.
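As a quick sanity check of the GF(2) claim, the following sketch (the loop and checks are mine, not from any source) confirms that with 0 as false and 1 as true, GF(2) addition behaves as exclusive-or and GF(2) multiplication as conjunction:

```python
# Sanity check: in GF(2), with 0 as false and 1 as true,
# addition behaves as exclusive-or and multiplication as conjunction.
for a in (0, 1):
    for b in (0, 1):
        assert (a + b) % 2 == (a ^ b)   # GF(2) addition = XOR
        assert (a * b) % 2 == (a & b)   # GF(2) multiplication = AND
print("GF(2) arithmetic matches XOR and AND")
```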

LOGIC OF SUBTRACTION

Subtraction

A − 0 = 1 − A = 1

A − 1 = A

Definitions

− A is defined as 0 − A (and so 0 is the blank, ground, false)

A + B is defined as A − (− B)

Tables

| A | − A |
|---|-----|
| 0 | 1 |
| 1 | 0 |

| A − B | B = 0 | B = 1 |
|-------|-------|-------|
| A = 0 | 1 | 0 |
| A = 1 | 1 | 1 |

| A + B | B = 0 | B = 1 |
|-------|-------|-------|
| A = 0 | 0 | 1 |
| A = 1 | 1 | 1 |

Consequences

− (− A) = A

A − B = A ← B

A + B = A ∨ B

A + B = B + A

− is not distributive
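The axioms and consequences above can be verified by brute force over {0, 1}. A minimal sketch (the function names sub, neg, and add are mine), interpreting A − B as A ∨ ¬B in accordance with the consequence A − B = A ← B:

```python
# A brute-force check of the subtraction logic over {0, 1}, where
# A − B is interpreted as A ∨ ¬B (converse implication, A ← B).
def sub(a, b):
    return a | (1 - b)          # A − B  =  A ∨ ¬B

def neg(a):
    return sub(0, a)            # − A  is defined as  0 − A

def add(a, b):
    return sub(a, neg(b))       # A + B  is defined as  A − (− B)

for a in (0, 1):
    assert sub(a, 0) == 1 and sub(1, a) == 1   # A − 0 = 1 − A = 1
    assert sub(a, 1) == a                      # A − 1 = A
    assert neg(neg(a)) == a                    # − (− A) = A
    for b in (0, 1):
        assert add(a, b) == (a | b)            # A + B = A ∨ B
        assert add(a, b) == add(b, a)          # commutativity
print("all subtraction-logic identities hold")
```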

DIVISION LOGIC

0 / A = A / 1 = 0

A / 0 = A

Definitions

/ A is defined as 1 / A (and so 1 is the blank, ground, true)

A • B is defined as A / (1 / B)

Consequences

1 / (1 / A) = A

A / B = − (A → B)

A • B = A ∧ B

A • B = B • A

/ is not distributive

Tables

| A | 1 / A |
|---|-------|
| 0 | 1 |
| 1 | 0 |

| A / B | B = 0 | B = 1 |
|-------|-------|-------|
| A = 0 | 0 | 0 |
| A = 1 | 1 | 0 |

| A • B | B = 0 | B = 1 |
|-------|-------|-------|
| A = 0 | 0 | 0 |
| A = 1 | 0 | 1 |
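The division logic can be checked the same way. A minimal sketch (the function names div, neg, and mul are mine), interpreting A / B as A ∧ ¬B in accordance with the consequence A / B = − (A → B):

```python
# A brute-force check of the division logic over {0, 1}, where
# A / B is interpreted as A ∧ ¬B, i.e. −(A → B).
def div(a, b):
    return a & (1 - b)          # A / B  =  A ∧ ¬B

def neg(a):
    return div(1, a)            # / A  is defined as  1 / A

def mul(a, b):
    return div(a, neg(b))       # A • B  is defined as  A / (1 / B)

for a in (0, 1):
    assert div(0, a) == 0 and div(a, 1) == 0   # 0 / A = A / 1 = 0
    assert div(a, 0) == a                      # A / 0 = A
    assert neg(neg(a)) == a                    # 1 / (1 / A) = A
    for b in (0, 1):
        assert mul(a, b) == (a & b)            # A • B = A ∧ B
        assert mul(a, b) == mul(b, a)          # commutativity
print("all division-logic identities hold")
```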

# Classical Model of Science

Another paper that should get wider exposure: “The Classical Model of Science: a millennia-old model of scientific rationality” by Willem R. de Jong and Arianna Betti. Synthese (2010) 174:185-203. Excerpts:

Throughout more than two millennia philosophers adhered massively to ideal standards of scientific rationality going back ultimately to Aristotle’s Analytica posteriora. These standards got progressively shaped by and adapted to new scientific needs and tendencies. Nevertheless, a core of conditions capturing the fundamentals of what a proper science should look like remained remarkably constant all along. Call this cluster of conditions the Classical Model of Science. p.185

The Classical Model of Science as an ideal of scientific explanation

In the following we will speak of a science according to the Classical Model of Science as a system S of propositions and concepts (or terms) which satisfies the following conditions:

(1) All propositions and all concepts (or terms) of S concern a specific set of objects or are about a certain domain of being(s).

(2a) There are in S a number of so-called fundamental concepts (or terms).

(2b) All other concepts (or terms) occurring in S are composed of (or are definable from) these fundamental concepts (or terms).

(3a) There are in S a number of so-called fundamental propositions.

(3b) All other propositions of S follow from or are grounded in (or are provable or demonstrable from) these fundamental propositions.

(4) All propositions of S are true.

(5) All propositions of S are universal and necessary in some sense or another.

(6) All propositions of S are known to be true. A non-fundamental proposition is known to be true through its proof in S.

(7) All concepts or terms of S are adequately known. A non-fundamental concept is adequately known through its composition (or definition). p.186

The Classical Model of Science is a recent reconstruction a posteriori of the way in which philosophers have traditionally thought about what a proper science and its methodology should be, and which is largely set up, as it were, by abduction. The cluster (1)-(7) is intended, thus, to sum up in a fairly precise way the ideal of scientific explanation philosophers must have had in mind for a very long time when thinking about science. p.186

A proper science according to this Model has the structure of a more or less strictly axiomatized system with a distinction between fundamental and non-fundamental elements. p.186

The history of the conceptualization of science knows three milestones: first of all, Aristotle’s Analytica posteriora, especially book 1; secondly, the very influential so-called Logic of Port-Royal (1662), especially part IV: ‘De la méthode’, written mainly by Antoine Arnauld and relying in many respects on Pascal and Descartes; and finally Bernard Bolzano’s Wissenschaftslehre (1837). p.187

The formulation coming closest to a systematization of the ideal of science we codify in the Model is perhaps the description of scientific method given in the Logic of Port-Royal, ‘The scientific method reduced to eight main rules’:

Eight rules of science

1. Two rules concerning definitions

1. Leave no term even slightly obscure or equivocal without defining it.
2. In definitions use only terms that are perfectly known or have already been explained.

2. Two rules for axioms

3. In axioms require everything to be perfectly evident.
4. Accept as evident what needs only a little attention to be recognized as true.

3. Two rules for demonstrations

5. Prove all propositions that are even slightly obscure, using in their proofs only definitions that have preceded, axioms that have been granted, or propositions that have already been demonstrated.
6. Never exploit the equivocation in terms by failing to substitute mentally the definitions that restrict and explain them.

4. Two rules for method

7. Treat things as much as possible in their natural order, beginning with the most general and the simplest, and explaining everything belonging to the nature of the genus before proceeding to particular species.
8. Divide each genus as much as possible into all its species, each whole into all its parts, and each difficulty into all its cases. pp.187-188

… the Model is a fruitful analytical tool. Its influence lasted until recently; having persisted at least to Leśniewski, it in fact extended far beyond what one might expect at first glance. It is certain, however, that at some point the Model was abandoned without being replaced by anything comparable. p.196

# Induction with uniformity

John P. McCaskey has done a lot of research (including a PhD dissertation) on the meaning of induction since ancient times. He keeps some of his material online at http://www.johnmccaskey.com/. A good summary is Induction Without the Uniformity Principle.

McCaskey traced the origin of the principle of the uniformity of nature (PUN) to Richard Whately in the early 19th century. In his 1826 “Elements of Logic” he wrote that induction is “a Syllogism in Barbara with the major Premiss suppressed.” This made induction an inference for the first time.

There are two approaches to inferential induction. The first is enumeration in the minor premise, which was known to the Scholastics:

(major) This magnet, that magnet, and the other magnet attract iron.
(minor) [Every magnet is this magnet, that magnet, and the other magnet.]
(conclusion) Therefore, every magnet attracts iron.

The second is via uniformity in the major premise, which was new:

(major) [A property of the observed magnets is a property of all magnets.]
(minor) The property of attracting iron is a property of the observed magnets.
(conclusion) Therefore, the property of attracting iron is a property of all magnets; that is, all magnets attract iron.

The influential J.S. Mill picked this up and made it central to science. Mill wrote in 1843:

“Every induction is a syllogism with the major premise suppressed; or (as I prefer expressing it) every induction may be thrown into the form of a syllogism, by supplying a major premise. If this be actually done, the principle which we are now considering, that of the uniformity of the course of nature, will appear as the ultimate major premise of all inductions.”

Mill held that there is one “assumption involved in every case of induction . . . . This universal fact, which is our warrant for all inferences from experience, has been described by different philosophers in different forms of language: that the course of nature is uniform; that the universe is governed by general laws; and the like . . . [or] that the future will resemble the past.”

So Mill generalized Whately’s major premise into a principle of the uniformity of nature. McCaskey writes:

“This proposal is the introduction into induction theory of a uniformity principle: What is true of the observed is true of all. Once induction is conceived to be a propositional inference made good by supplying an implicit major premise, some sort of uniformity principle becomes necessary. When induction was not so conceived there was no need for a uniformity principle. There was not one in the induction theories of Aristotle, Cicero, Boethius, Averroës, Aquinas, Buridan, Bacon, Whewell, or anyone else before Copleston and Whately.”

McCaskey goes on: “De Morgan put all this together with developing theories of statistics and probability. He saw that, when induction is understood as Whately and Mill were developing it, an inductive inference amounts to a problem in ‘inverse probability’: Given the observation of effects, what is the chance that a particular uniformity principle is being observed at work? That is, given Whately’s minor premise that observed instances of some kind share some property (membership in the kind being taken for granted), what are the chances that all instances of the kind do? De Morgan’s attempt to answer this failed, but he made the crucial step of connecting probabilistic inference to induction. The connection survives today, and it would have made little sense (as De Morgan himself saw) were induction to be understood in the Baconian rather than Whatelian sense of the term.”
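The "inverse probability" question De Morgan faced can be illustrated with a toy calculation. The sketch below uses Laplace's rule of succession, a later and simpler formulation than De Morgan's own attempt, and the function name is mine: after observing n instances of a kind that all share a property, under a uniform prior on the underlying rate, it gives the probability that the next instance shares it too.

```python
# A toy sketch of the inverse-probability question, using Laplace's
# rule of succession (an illustration, not De Morgan's actual method).
from fractions import Fraction

def rule_of_succession(successes, trials):
    """P(next observation is a success), given `successes` out of
    `trials` so far, under a uniform prior on the success rate."""
    return Fraction(successes + 1, trials + 2)

# Three observed magnets, all attracting iron:
print(rule_of_succession(3, 3))   # 4/5
```

Note that the probability approaches but never reaches 1, which is one way of seeing why the inference never amounts to a demonstration that *all* instances share the property.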

That’s how the problem of induction was born, which is essentially the problem of justifying the principle of the uniformity of nature. But this depends on an inferential understanding of induction instead of the older conceptual understanding.

# Negation and logic

Two propositions are contrary if they cannot both be simultaneously true but it is possible for both to be simultaneously false. For example, the proposition that “every man is just” is contrary to the proposition that “no man is just,” since both propositions may be false if some men are just.

Two propositions are contradictory if both cannot be simultaneously true and both cannot be simultaneously false. The proposition that “not every man is just” is contradictory to the proposition that “every man is just,” because both cannot be simultaneously true, nor can they be simultaneously false.

Note that contraries are two universal propositions and contradictories must have one universal and one existential proposition. And note that one proposition is the negation of the other — but there are two kinds of negation: contrary and contradictory.

Fregean logic handles these two kinds of negation by segregating them: contradictory negation goes before the quantifier and contrary negation goes after it. So these expressions are equal:

All x aren’t y as ¬∀x: x ⊂ y = ∃x: x ⊂ ¬y

and

Some x aren’t y as ¬∃x: x ⊂ y = ∀x: x ⊂ ¬y.

The other purpose of quantifiers is to bind a variable as universal or existential.
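The two quantifier equivalences can be checked over a small finite domain. A minimal sketch, reading "x ⊂ y" as "x has property y" (the domain and the property are arbitrary choices of mine for illustration):

```python
# Check the two quantifier equivalences over a small finite domain,
# reading "x ⊂ y" as "x has property y". The domain and the
# property y below are arbitrary choices for illustration.
domain = [0, 1, 2, 3]

def y(x):
    return x % 2 == 0

# ¬∀x: x ⊂ y  is equivalent to  ∃x: x ⊂ ¬y
assert (not all(y(x) for x in domain)) == any(not y(x) for x in domain)
# ¬∃x: x ⊂ y  is equivalent to  ∀x: x ⊂ ¬y
assert (not any(y(x) for x in domain)) == all(not y(x) for x in domain)
print("quantifier equivalences hold on this domain")
```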

George Spencer Brown’s Laws of Form does something similar in two dimensions with his “cross” symbol ( ⏋). Contradiction is represented in the horizontal dimension via the Law of Calling. Contraries are represented in the vertical dimension via the Law of Crossing.

The intersection of horizontal and vertical crosses is a single cross, which in the interpretation for logic represents negation. With a variable under or ‘inside’ it, the cross represents “non” or “no” as in “non-A” or “no A”.

# Distinctions of Genesis 1

In the beginning God created the heavens and the earth. The earth was formless, and indistinct; and darkness was on the face of the deep. And the Spirit of God was hovering over the face of the waters.

Then God said, Let there be light; and there was light. And God saw the light, that it was good; and God divided the light from the darkness. God called the light Day, and the darkness he called Night. The evening and the morning were the first day. So the first distinction was between Day and Night.

Then God said, Let there be a space in the midst of the waters, and let it divide the waters from the waters. Thus God made the space, and divided the waters which were under the space from the waters which were above the space; and it was so. And God called the space Heaven. The evening and the morning were the second day. So the second distinction was between waters below and above Heaven.

Then God said, Let the waters under Heaven be gathered together into one place, and let the dry land appear; and it was so. And God called the dry land Earth, and the gathering together of the waters he called Seas. And God saw that it was good.

Then God said, Let the earth bring forth grass, the herb that yields seed, and the fruit tree that yields fruit according to its kind, whose seed is in itself, on the Earth; and it was so. And the Earth brought forth grass, the herb that yields seed according to its kind, and the tree that yields fruit, whose seed is in itself according to its kind. And God saw that it was good. The evening and the morning were the third day. So the third distinction was between the Earth and the Seas.

Then God said, Let there be lights in the space of Heaven to distinguish the Day from the Night; and let them be for signs and seasons, and for days and years; and let them be for lights in the space of Heaven to give light on the Earth; and it was so. Then God made two great lights: the greater light to rule the Day, and the lesser light to rule the Night–and also the stars. God set them in the space of Heaven to give light on the Earth, and to rule over the Day and over the Night, and to divide the light from the darkness. And God saw that it was good. The evening and the morning were the fourth day. So the Day was marked with the greater light and Night was marked with the lesser light.

Then God said, Let the Seas abound with an abundance of living creatures, and let birds fly above the earth across the face of the space of the Heavens. So God created great sea creatures and every living thing that moves, with which the waters abounded, according to their kind, and every winged bird according to its kind. And God saw that it was good. And God blessed them, saying, Be fruitful and multiply, and fill the waters in the Seas, and let birds multiply on the Earth. The evening and the morning were the fifth day. So the Seas were marked with fish and Heaven was marked with birds.

Then God said, Let the Earth bring forth the living creature according to its kind: cattle and creeping thing and beast of the earth, each according to its kind; and it was so. And God made the beast of the Earth according to its kind, cattle according to its kind, and everything that creeps on the Earth according to its kind. And God saw that it was good.

Then God said, Let us make man in our image, according to our likeness; let them have dominion over the fish of the Seas, over the birds of the Heaven, and over all the Earth and over every creeping thing that creeps on the Earth. So God created man in His own image; in the image of God he created him; male and female he created them. Then God blessed them, and God said to them, Be fruitful and multiply; fill the Earth and subdue it; have dominion over the fish of the Seas, over the birds of Heaven, and over every living thing that moves on the Earth.

And God said, See, I have given you every herb that yields seed which is on the face of all the Earth, and every tree whose fruit yields seed; to you it shall be for food. Also, to every beast of the Earth, to every bird of Heaven, and to everything that creeps on the Earth, in which there is life, I have given every green herb for food; and it was so. Then God saw everything that He had made, and indeed it was very good. The evening and the morning were the sixth day. So the Earth was marked with man.

Thus the Heaven and the Earth, and all the host of them, were finished. And on the seventh day God ended his work which he had done, and he rested on the seventh day from all His work which he had done. Then God blessed the seventh day and sanctified it, because in it he rested from all his work which God had created and made. So the seventh day was marked with the Sabbath.

# Two kinds of negation

This is a follow-up to the introductory post on Laws of Form here.

There are two kinds of negation: contraries and contradictories, and Laws of Form (LoF) represents both types. Furthermore both types apply to terms and propositions.

Contraries are two complete opposites; the negation of one is the other. The poles of a magnet for example are contraries: not South is North and not North is South. This negation is represented by the Law of Crossing.

Contradictories are partial opposites; the negation of one is different from the other but not necessarily the exact opposite. The negation of a proposition is a proposition that is inconsistent with it. This negation is represented by the Law of Calling.

Calling is a kind of negation but calling again doesn’t return to the original proposition; it reiterates the negation and remains in the same place.

If we negate North as a direction, do we get South? Not necessarily; we could get East or West which are different from North. As directions, North and East are contradictories.

It’s unusual to completely negate a proposition but it can be done. “The place is on North Main Street” is contradicted by “No, it’s on East Main Street.” The contrary proposition is “The place is on South Main Street” in the context of a north-south oriented Main Street.

To model both negations in ordinary arithmetic with 0 and 1, use the two operations: standard multiplication for calling and an alternate multiplication for crossing defined as:

x alt y = (x-1) * (y-1).

Then zero represents the marked state and one the unmarked state.
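The model above can be checked directly. A minimal sketch (the function names and the MARKED/UNMARKED labels are mine):

```python
# A quick check of the arithmetic model above: 0 = marked, 1 = unmarked,
# ordinary multiplication for calling, the alternate multiplication
# for crossing.
def calling(x, y):
    return x * y                 # ordinary multiplication

def alt(x, y):
    return (x - 1) * (y - 1)     # x alt y = (x−1)(y−1)

MARKED, UNMARKED = 0, 1

# Law of Calling: a mark next to a mark condenses to one mark.
assert calling(MARKED, MARKED) == MARKED
# Law of Crossing: crossing the marked state yields the unmarked state,
# and crossing again returns to the marked state.
assert alt(MARKED, MARKED) == UNMARKED
assert alt(alt(MARKED, MARKED), alt(MARKED, MARKED)) == MARKED
print("both laws hold in the arithmetic model")
```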

The beauty of LoF is that these two kinds of negation are combined into one symbol — as is the word “not”.

# Laws of form

The remarkable book Laws of Form by George Spencer-Brown was published in 1969 and is almost forgotten today. The best expositors have been William Bricken with his boundary mathematics, Louis Kauffman with his knot theory, and Francisco Varela with his work on self-reference. Otherwise it has become something of an underground classic. There are several reasons for this neglect, including the exaggerated claims of the author and some enthusiasts. That said, I think it’s worth rehabilitating the Laws of Form (LoF) and rightly discerning its significance.

LoF is a work on diagrammatic reasoning in the tradition of Leibniz and CS Peirce. It is a calculus, complete with arithmetic and algebra, based on the act of making and indicating a distinction. Thus it is a work of mathematical realism, which begins to explain why it is not of interest to anti-realists. Its greatest accomplishment is the unified treatment of injunction and indication, of implication and negation via a single symbol, called a cross.

Here are the arithmetic axioms of the calculus of indications, described in words since the cross symbol, an inverted “L”, is hard to reproduce here. A cross next to another cross is equal to one cross; this is the Law of Calling. A cross inside another cross is equal to blank, that is, as if no cross had been written. This is the Law of Crossing, hence the name of the symbol, Cross.

This is a two-dimensional calculus, which gives it advantages that one-dimensional notation does not have. It also makes it hard to display typographically. The best alternative is simply to use parentheses or brackets:

( ) ( ) = ( ) and (( )) =  .
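These two rewrite rules can be applied mechanically. A minimal sketch (the function name is mine) that reduces any such parenthesized arithmetic expression to its canonical form, the marked state "()" or the unmarked state, blank:

```python
def reduce_expr(s):
    """Reduce a parenthesized Laws of Form arithmetic expression to
    its canonical form: '()' (marked) or '' (unmarked, blank)."""
    s = s.replace(" ", "")
    prev = None
    while s != prev:                 # repeat until no rule applies
        prev = s
        s = s.replace("(())", "")    # Law of Crossing: (( )) = blank
        s = s.replace("()()", "()")  # Law of Calling: ( ) ( ) = ( )
    return s

print(reduce_expr("( ) ( )"))   # -> ()     (Calling)
print(reduce_expr("(( ))"))     # -> blank  (Crossing)
print(reduce_expr("(( )( ))"))  # -> blank
```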

These arithmetic axioms can be used to derive two algebraic axioms:

((A) (B)) C = ((A C) (B C)) and ((A) A) =   .

From this a complete calculus can be constructed. It is isomorphic to Boolean algebra and other functionally-complete binary calculi, which is another reason LoF hasn’t stirred a lot of interest.

Things get more interesting as we review where this calculus comes from. Again this exhibits its realism; the standard approach for mathematics and symbolic logic is to begin with algebraic axioms or postulates without reference to any model or reality.

Let’s begin with a blank surface, say a blank page of paper. Now draw a distinction on this surface; that is, draw a closed curve or divide the page into two parts. Notice what has happened: part of the paper is distinguished from the rest of the paper by being to one side of the curve, say the inside. The curve separates the other side from the inside; call it the outside. But the original piece of paper is still there. We can still consider the whole piece of paper.

This process is symbolized by LoF as follows: what is outside the cross (or parentheses) can be seen inside the cross (or parentheses) if we change perspectives to the whole page. This is symbolized in a theorem:

(A) B = (A B) B.

So the distinction that is drawn is not between two contraries but within one space, represented by the whole page. It also shows the distinction can be undermined. This has been exploited to represent self-reference.

Much more could be said about LoF but that’s it for now.

# The real scientific method

The real scientific method is the inductive method invented by Socrates and elaborated by Aristotle, Bacon, and Whewell. It is different from the hypothetico-deductive method invented by JS Mill in the 19th century, which is passed off as the method of modern science.

Consider Francis Bacon. He called immature concepts “notions”. Induction starts with notions from common experience and iteratively improves them using sense experience until the form or essence is identified. This form is the cause in the full sense of the word; the form is what something truly is — and so should be defined as such. Thus the induction is true by definition. Sound circular or trivial? It’s not, because getting the concepts right is what inductive science is all about.

William Whewell described two complementary processes, the explication of conceptions and the colligation of facts: To explicate a conception is to clarify it by identifying what it contains, by unfolding it, for example by surveying and examining examples. The end result is a careful definition of the conception. Colligation is the complementary process of binding facts together by means of a precise conception. The result is an induction, which is the narrowing of a generalization until it is exact and universal.

Yes, induction leads to hypotheses and testing but this is for the purpose of finding the consilience of inductions, the confirmation of inductions in different and multiple ways. The key step takes place before hypotheses and testing: the discovery of a conception of the facts that binds them together.

This understanding of induction was lost in late antiquity until Francis Bacon restored it and laid a foundation for science that lasted two centuries. Then in the 19th century Richard Whately and JS Mill replaced it with a different method, one that came to be called the hypothetico-deductive method, which depends on uniformity and naturalism, and is conceptually confused and logically deficient.

John P. McCaskey and others have explained the history of Socratic induction in science. As examples he gives cholera, electrical resistance, and tides.

# Approaching the unknown

We have some knowledge but it is not complete knowledge, not even arguably near complete. So what should we do about the areas where knowledge is lacking? We should certainly continue to investigate. But what do we say in the mean time? What can we justify saying about the unknown side of partial knowledge?

There are three basic approaches to the unknown: (a) assume as little as possible about the unknown and project that onto the unknown; (b) assume the unknown is exactly like the known and project the known onto the unknown; or (c) assume the unknown is like what is common or typical with what is known and project that onto the unknown.

Approach (a) uses the principle of indifference, maximum entropy, and a modest estimate of the value of what is known to the unknown. It takes a very cautious, anything-can-happen approach as the safest way to go.

Approach (b) uses the principle of the uniformity of nature, minimum entropy, and a confident estimate of the value of what is known to the unknown. It takes an intrepid, assertive, approach as the most knowledgeable way to go.

Approach (c) uses the law of large numbers, the central limit theorem, the normal distribution, averages, and a moderate estimate of the value of what is known to the unknown. It takes a middle way between overly cautious and overly confident approaches as the best way to go.

The three approaches are not mutually exclusive. All three may use the law of large numbers, the normal distribution, and averages. They all may sometimes use the principle of indifference or the uniformity of nature. So calling these three different approaches is a generalization about the direction that each one takes, knowing that their paths may cross or converge on occasion.

It is also more accurate to say there is a spectrum of approaches, with approaches (a) and (b) at the extremes and approach (c) in the middle. This corresponds to a spectrum of distributions with extremes of low and high variability and the normal distribution in the middle.

This suggests there is a statistic of a distribution that varies from, say, -1 to +1 for extremes of low and high variability that is zero for the normal distribution. So it would be a measure of normality, too. The inverse of the variability or standard deviation might do.

Compare the three approaches with an input-output exercise:

1. Given input 0 with output 10, what is the output for input 1?
   (a) Could be anything
   (b) The same as for input 0, namely, 10
   (c) The mean of the outputs, namely, 10
2. Also given input 1 with output 12, what is the output for input 2?
   (a) Still could be anything
   (b) The linear extrapolation of the two points (10 + 2n), namely, 14
   (c) The mean of the outputs, namely, 11
3. Also given input 2 with output 18, what is the output for input 3?
   (a) Still could be anything
   (b) The quadratic extrapolation of the three points (10 + 2n²), namely, 28
   (c) The mean of the outputs, namely, 40/3
4. Now start over but with the additional information that the outputs are integers from 1 to 100.
   (a) The values 1 to 100 are equally likely
   (b) The values 1 to 100 are equally likely
   (c) The values 1 to 100 are equally likely
5. Given input 0 with output 0, what is the output for input 1?
   (a) Bayesian updating
   (b) The same as for input 0, namely, 0
   (c) The mean of the outputs, namely, 0
6. Also given input 1 with output 5, what is the output for input 2?
   (a) Bayesian updating
   (b) The linear extrapolation of the two points (5n), namely, 10
   (c) The mean of the outputs, namely, 2.5, so 2 or 3 are equally likely
7. Also given input 2 with output 9, what is the output for input 3?
   (a) Bayesian updating
   (b) Since there are limits, extrapolate a logistic curve (−15 + 30·2ⁿ/(1 + 2ⁿ)), namely, 12
   (c) The mean of the outputs, namely, 14/3, rounded to 5
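Approaches (b) and (c) in the exercise can be computed mechanically. A minimal sketch (the function names are mine): Lagrange interpolation gives the unique polynomial passing exactly through the observed points, and the plain mean gives the middle-way projection.

```python
# Toy illustration of approaches (b) and (c) from the exercise above.
def poly_extrapolate(xs, ys, x_next):
    """Approach (b): fit the unique degree len(xs)-1 polynomial
    through the points (Lagrange interpolation), evaluate at x_next."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x_next - xj) / (xi - xj)
        total += term
    return total

def mean(ys):
    """Approach (c): project the average of the observed outputs."""
    return sum(ys) / len(ys)

print(poly_extrapolate([0, 1], [10, 12], 2))         # linear: 14.0
print(poly_extrapolate([0, 1, 2], [10, 12, 18], 3))  # quadratic: 28.0
print(mean([10, 12, 18]))                            # 40/3 ≈ 13.33
```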

2008

# Assertions

Motivating Example

According to the Gospels, there was an inscription above Christ on the cross which said (in English translation):

Matthew (27.37): “This is Jesus, the King of the Jews.” (ABD)
Mark (15.26): “The King of the Jews.” (D)
Luke (23.38): “This is the King of the Jews.” (AD)
John (19.19): “Jesus of Nazareth, the King of the Jews.” (BCD)

Note that the versions are composed of these phrases which appear in this order: (A) “This is”, (B) “Jesus”, (C) “of Nazareth,”, (D) “the King of the Jews.” Hence the capital letters in parentheses above.

What did the inscription say? If we insist that every true statement must tell “the truth, the whole truth, and nothing but the truth,” then at most one of these versions is true. If we expect every true statement to be consistent with the others though perhaps incomplete, then we would conclude that their union is the complete (or more complete) truth: “This is Jesus of Nazareth, the King of the Jews” (ABCD). If we expect every true statement to contain the truth but may be partially inconsistent with others, then we would conclude that their intersection is the whole truth: “The King of the Jews,” (D) the version Mark has.
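The union and intersection readings can be made concrete with sets of the phrase labels A–D. A minimal sketch (the variable names are mine):

```python
# A toy sketch of the union/intersection reading of the four versions,
# using the phrase labels A-D from the text.
phrases = {
    "A": "This is",
    "B": "Jesus",
    "C": "of Nazareth,",
    "D": "the King of the Jews.",
}

versions = {
    "Matthew": {"A", "B", "D"},
    "Mark":    {"D"},
    "Luke":    {"A", "D"},
    "John":    {"B", "C", "D"},
}

union = set().union(*versions.values())              # consistent-but-incomplete reading
intersection = set.intersection(*versions.values())  # complete-but-inconsistent reading

def render(labels):
    """Assemble a version from its phrase labels, in A-D order."""
    return " ".join(phrases[k] for k in sorted(labels))

print(render(union))         # This is Jesus of Nazareth, the King of the Jews.
print(render(intersection))  # the King of the Jews.
```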

We often need to analyze statements from different sources and then determine which ones are correct or how correct statements can best be extracted or reconstructed from the sources. So we could analyze the statements into four categories:

(1) The statements we are confident are exactly true, that is, they are consistent and complete representations of what is the case. The reason may be because they are from sources that are fully trusted, or we have knowledge which confirms them, etc. The conclusion should be the maximally consistent and complete set of statements.

(2) The statements that are consistent with statements in (1) and with each other but are partial statements of what is the case. They may be from reliable sources which aren’t expected to express full knowledge of what is the case. They are consistent but incomplete representations of what is the case. The conclusion should be the maximally consistent set of statements.

(3) The statements that contain the complete truth but may be partially inconsistent with each other. Perhaps they are different views of the same subject or they’re garbled messages but still reflect an authentic source. They contain a common truth that is consistent with (1) and with each other and so can be safely accepted. The conclusion should be the maximally complete set of statements.

(4) The statements that are rejected, that is, considered false.

In category (1) the default value for statements is false. If there’s anything wrong with a statement, it should be considered false. Only if it is found to be completely and consistently true should it be accepted as true.

Standard (Fregean) logic deals with category (1). If a statement in category (1) is accepted as true but turns out to be false in any way, it would be disastrous. Material implication would “explode” and any statement could be inferred. So statements in (1) must be exactly true or else rejected and the conclusion must be a tautology.

In categories (2) and (3) the default value for statements is true. If there’s anything true in a statement, it should be considered usable for determining conclusions. Only if it is found to be totally contradictory (2) or completely false (3) should it be rejected as false.

***

In standard logic, contradiction is concerned with whether or not a statement can be both true and false. The standard Principle of Non-Contradiction asserts that no statement can be both true and false. The standard Principle of Excluded Middle also applies to any statement. It asserts that every statement is either true or false.

In Aristotelian logic, contradiction is a relation between two statements. Given a statement and its negation: The Principle of Non-Contradiction asserts that at most one is true; both can be false. The Principle of Excluded Middle asserts that at least one is true; both can be true.

In standard logic, the Principle of Double Negation (p equals not not p) implies both these principles so they tend to be combined into one Principle of Bi-Valence: every statement is either true but not false or false but not true and no other values are allowed.

The principles of Aristotelian logic are better suited to deal with the statements in (2) or (3). In (2), the Principle of Non-Contradiction is upheld to protect consistency. In (3) the Principle of Excluded Middle holds. Both principles hold in categories (1) and (4).

Categories (2) and (3) complement one another. By exchanging consistency and completeness they can be mapped to one another. This gives us a clue as to how to solve the case of category (3): transform them into category (2) and solve them.

2008