# Objections to Verificationism and ‘It-From-Bit’

Schlick’s verificationism is vulnerable to a number of objections.  In light of the similarities between informationism and verificationism, we might wonder whether informationism falls prey to the same sort of objections.  We will now discuss some of these objections and see whether the sort of informationism held by Wheeler can overcome them.

The most immediate objection to Schlick’s verification principle is that the verification principle itself is not logically verifiable.  Fortunately for Wheeler, this is not a problem for informationism.  Wheeler is not committed to the meaning of his statements being grounded in some atomic properties of perception.  Meaning is the joint product of all the evidence that is available to those who communicate.  Evidence can be either direct or indirect.  There is no recourse to unanalyzable, non-theoretical features of perception, because Wheeler instead relies on the notions of the kind of question asked and the digital response.  A digital response need not be an atomic response.

Another concern for both verificationism and informationism is this: how can we have third-person scientific knowledge if all scientific knowledge is based on first-person statements?  Fortunately, there is agreement about third-person scientific knowledge among scientists.  Supposing that each has a different experience, the fact that they all agree in the way that they communicate suggests that there is a structural similarity among their first-person experiences.  Scientific knowledge and theory are intimately connected.  And theory is about the structure of relations between those things that feature in our experience.  The description of the structure may (and should) be identical, regardless of how the features of experience are organized for each individual.  And, indeed, this makes a great deal of sense on Wheeler’s picture.  This is because all ‘reality’ for each subject is information-theoretic.  And the information is constituted by the relations between its components, without ever being committed to saying what those components actually are.  Objective, third-person, scientific knowledge is information-theoretic — it strives to capture the formal relations between phenomena, regardless of what the character of the phenomena is to any particular individual.

A larger problem, raised in Plato’s ‘Theaetetus,’ is that if atomic statements are verifiable by an individual alone, then those statements will always be true.  And if those statements are always true (and so trivially true), then they can have no descriptive content.  It is as if someone were to say, ‘I’m sensing the thing that I sense over there in the manner that I typically sense it.’  This is wholly uninformative.  We will now elaborate on this.

Prima facie, on Wheeler’s view, knowledge and perception are intimately connected.  Knowledge comes from recording the binary responses of our measurement devices (and interpreting the responses in such-and-such way).  So it seems that ‘man is the measure of all things.’  We grant existential status only to those things which we can measure to be so.  This may be problematic.

Take six dice.  They are more than four by a half (six is four plus half of four).  But compared to twelve dice, the six are fewer by a half.  The six are both more and fewer.  But nothing can become greater or less while remaining equal to itself.  The number of dice ‘is greater’ or ‘is less’ depending on the frame of reference in which it is considered.  The veridicality of the ascription of the predicate depends not on the properties of the object in question, but on its mode of consideration.  This seems an impoverished notion of knowledge, for it does not give us insight into the actual properties of the object.

Moreover, intuitively, it seems that perception is the union of a capacity for sensation and an object of sense.  Perception depends on some connection between an agent with a capacity for certain kinds of sensations and an object with a capacity for producing those kinds of sensations.  But on Wheeler’s picture, it seems like the (‘physical’) object of perception has no (independent) existence until it is united with the subject (for instance, the scientist).  There can be no one, self-existent thing.  Rather, everything is related within the information space.  Each component in the space depends for its existence on the structure of the rest of the components of the information space.  There is a potential infinity of ‘physical’ objects and subjects (which can come together in perception) — each combination of object and subject produces a result which is not the same, but different.  This is because each perception is defined by the unique identities of both the object and the subject.  My capacity for perception, $\phi$, meets with an object with a capacity to produce certain perceptions in virtue of its identity, $\alpha$, to produce the unique perception, $(\phi + \alpha)$.  Another agent, with his own capacity for perception, has his own identity, $\psi$.  When he meets $\alpha$, the perception is uniquely defined as the resultant of $(\psi + \alpha)$.  And there can be no justification for the claim that $(\phi + \alpha)$ is identical with $(\psi + \alpha)$.  Consequently, there is no other object I could encounter which would give me the same perception, for another object will correspond to a different agent-patient relation, and so the perception must be different.  Nor can any object which affects me in a certain way, if it should meet with some other subject, produce the same perception.  For that perception will be uniquely defined by that other subject and the object.
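The formal point here is small enough to state in code.  The following Python sketch is my own illustration (the names `perceive`, `phi`, `psi`, and `alpha` are invented for the example): if a perception is individuated by the pair of subject-identity and object-identity, then no two distinct pairings yield the same perception.

```python
# A minimal sketch of the agent-object perception scheme: each
# perception is uniquely determined by the identity of the subject
# and the identity of the object together.

def perceive(subject, obj):
    """A perception, modeled as the ordered pair of identities."""
    return (subject, obj)

phi, psi = 'phi', 'psi'   # two subjects' identities
alpha = 'alpha'           # one object's identity

# Same object, different subjects: the perceptions differ ...
assert perceive(phi, alpha) != perceive(psi, alpha)
# ... and distinct objects never give one subject the same perception.
assert perceive(phi, alpha) != perceive(phi, 'beta')
```

The modeling choice (an ordered pair rather than a literal sum) is just a way of making ‘uniquely defined by both identities’ precise.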

When I perceive something, I must be the percipient of something.  For there could be no such thing as perception without some thing being perceived.  In the words of Socrates, ‘nothing can become sweet which is sweet to no one.’  So on Wheeler’s view we can only be bound to one another.  The existence of each thing depends on its relation to something else — no thing can be absolute.

Moreover, if this is so, then all my perceptions must be true to me.  And if this is so, then how could I ever fail to know that which I perceive?  For if truth is found only in perceptual experience (or sensation), and no man can know another’s feelings better than he, then each is to himself the sole judge — and everything that he judges must be true.  There is no need for us to consult each other, for each is the God of his own perception and consequently determines what is true of his own reality.

Three points are crucial here.  (1) That there be some intersubjective agreement on matters of fact, (2) Wheeler does not mean to deny that there is some object of our perception, and (3) if we take the primacy of information spaces seriously, then that ‘there can be no one, self-existent thing’ is not as counterintuitive as you may suppose.

With regard to 1, while each individual may be the final arbiter of the character of his own perceptual experience, this only entails that his (honest) reports about the character of his experience be true — not that his (honest) reports with respect to his inferences from his perceptual experience be true.  I say, ‘such-and-such looks green to me,’ and this may be true, regardless of whether or not the object I am referring to actually is green.  But if I say, ‘such-and-such is green,’ then I am not reporting my experience, but rather reporting a fact inferred from my perceptual experience.  It is often the case that such inferences are false.  It does not matter that no identity can be drawn between $(\phi + \alpha)$ and $(\psi + \alpha)$; what does matter is that $\phi$‘s report and $\psi$‘s report be in agreement, not that they be identical.

With regard to 2, Wheeler, unlike Schlick, does not straightforwardly dismiss the notions of an internal or external world.  Rather, to confirm an object of reality, we just need some empirical justification, direct or indirect.  That there are objects of our perception is not denied.  What is denied is that they really are ‘physical,’ for the word ‘physical’ is itself a theoretical term.  It does not matter that perception requires the union of a subject and an object, for Wheeler allows there to be independent objects.  (He is just reluctant to make a definitive claim to their ontological status.)

With regard to 3, we must first consider Wheeler’s views on space and time.  Wheeler claims that there is neither space nor time.  He cites both Leibniz, ‘…time and space are not things, but orders of things…,’ and Einstein, ‘Time and space are modes by which we think, and not conditions in which we live.’  He goes on to describe Einstein’s notion of spacetime, saying that on this theory, predicted fluctuations grow so great at distances on the order of the Planck length that ‘they put into question the connectivity of space and deprive the very concepts of “before” and “after” of all meaning.’  So for Wheeler, spatial and temporal concepts are modes of thought, not features of reality.  This sort of view is lent support by the establishment of nonlocality and absolute simultaneity in quantum mechanics.  Split a pion to produce an electron and a positron.  The outcome of a measurement on the electron collapses the state of the associated positron (into the opposite value), regardless of the distance between the two particles — the effect is absolutely simultaneous, and causes need not operate locally.  Absolute simultaneity entails that local realism is false, and if local realism is false then realism about special relativity is false, too (space and time are not part of reality).  Now recall how an information space is constructed.  There are difference relations between information states embedded in an information space, and the relations can be transmitted down some causal pathway.  You might think that there has to be some self-existing thing, that there must be some loop like this: physics gives rise to observer-participancy, observer-participancy gives rise to information, and information gives rise to ‘physics.’  So first, there is something that exists, which causes there to be observers, and only then can the information relation be constituted, wherein we can then access ‘physical’ knowledge.
This line of reasoning presupposes that time is a feature of reality and not a mode of thought.  There is something thought to ‘exist before’ which at some time later gives rise to observer-participants.  But if time is not a feature of reality, and reality is just an information space, then we cannot make sense of a real temporal relation between physical processes giving rise to observers.  Here’s one way to think about it.  All ‘reality’ is at once instantiated — objects, subjects, and relations, all.  You, as a subject instantiated someplace in the information space of reality, perceive time to give order to your perceptual interactions with objects in the information space.  Objects do not precede you in time; they are instantiated alongside you in the information space and are experienced in a certain order.  As such, there is no need to talk about some unobserved/unobservable feature of reality prior to observation which gives rise to observers.

So it seems like informationism does, in fact, overcome the objections to verificationism that we’ve been discussing.  This looks promising for Chalmers.  However, there is a larger, more powerful objection to this kind of view, which is clearly articulated by Sellars and which we will discuss next.

# Informationism and Verificationism – A Comparison

Wheeler’s informationism should remind us of Schlick’s verificationism and the old school of logical positivism.  Schlick shares with Wheeler this sort of hardline empiricism.  This section will explore the similarities and differences between the two.  As a first order of business, we should briefly explain Schlick’s verificationism.  (Note that this explanation can also be found in the above Schlick link.)

The main thrust of verificationism is this.  A statement is meaningful only insofar as it is logically verifiable.  Any statement that is not logically verifiable is not meaningful.  The only statements that are logically verifiable or knowable are those which reduce to some description of the given.  The given is the domain of all that is knowable; it is roughly your perceptual experience at some particular point in time.  The given should not be confused with the terms ‘the internal world’ and ‘the external world,’ both of which are meaningless for the verificationist.  This is because propositions like ‘there is an external world’ will turn out not to be logically verifiable.1  All difference in the given is detectable.  Because the given is what is presented to you in perceptual experience, there can be nothing in the domain of the given that is undetectable.

Features in the given are describable with atomic words or atomic sentences.  Atomic words, like green, pain, and so on, can only be known by ‘pointing’ to some feature of our perceptual experience.  They cannot be understood in terms of other words.2  I point or otherwise gesture to a grassy knoll and say ‘that green.’  The word’s meaning is established by the agreement of the reactions of others, e.g. that others react by observing, ‘green.’  That is, the use of the word occupies the same relational-role in the given as it is experienced by each of us.  For the verificationist, the question of whether the phenomenal quality of his green-experience is identical to the phenomenal quality of my experience is meaningless.  This is because that fact is not logically verifiable.

Atomic sentences are composed of atomic words.  All complex propositions, like ‘there is a deer by the bush,’ are made of atomic sentences, like ‘there is a brown spot with such-and-such features by that green spot arranged in so-and-so way.’  So complex propositions are reducible to (some sequence of) atomic words, whose meaning directly describes the given.  To see this, suppose that a proposition’s meaning is something over and above its determining some state of affairs in our perceptual experience.  If this additional meaning is expressible, then it would be a (complex) proposition (and so nothing over and above an atomic description of some feature of our perceptual experience).  But if the meaning is not expressible, then it cannot mean anything, for that which expresses nothing means nothing.  So the truth or falsity of a proposition must correspond to a difference in the given in order to be meaningful.

It follows from this that the meaning of a proposition is identical with its verification in the given.  The meaning of ‘there is a deer by the bush’ is just whether or not there is a familiar arrangement of brown situated by another familiar arrangement of green, and perhaps some audible rustle — for these are the features of our perceptual experience which verify and are associated with the presence of a deer.  So if we cannot conceive of some verification in the given of the fact, then the fact means nothing.  So, a proposition is meaningful only insofar as it is logically verifiable.  A meaningful statement says that under certain conditions, certain data appear.3

Here are the similarities between Wheeler and Schlick.  Prima facie, both seem to share the verification principle — that is, the only statements that are meaningful are those which are logically verifiable.  For both Schlick and Wheeler, if something is meaningful, it must correspond to some empirical indication of fact.  Consequently, both Schlick and Wheeler grant existential status only to those things that have some possible effect on our perceptual experience — for something to exist, it must be meaningful.

They also share a sort of ‘atomism’ about reality.  For Schlick, meaning comes from the atomic features of our perceptual experience.  For Wheeler, meaning comes from the binary answer to a question.  But these binary answers are a lot like the ‘atoms’ of Schlick, as for both reality bottoms out at something that is impenetrable to further investigation or analysis.  The ‘atoms’ of Wheeler are fundamental digital questions/answers, while the atoms for Schlick are atomic words that directly ‘point to’ features of perceptual experience.  They differ in how and when they ‘bottom out,’ but they agree on ‘bottoming out’ somewhere upon which the entirety of our discourse gets its meaning.

Both the ‘it-from-bit’ doctrine and verificationism, at heart, are deeply antimetaphysical views.  For Wheeler, physical objects have the status of ‘theory’ because they are the result of an interpretation of a binary item in our perceptual experience.  Because reality is theoretical, we ought not make metaphysical claims about it; moreover, such claims will in any case be meaningless.  Likewise, Schlick, in explaining the given, emphasizes his avoidance of any commitment to an internal or external world — for such concepts are meaningless.  Metaphysical statements are not verifiable, and so not meaningful; whence the antimetaphysicalism.  But if we take physical objects to be objects in the external world, then Schlick will see physical objects as the same sort of ‘convenient’ myth as Wheeler and Quine do (since, for Schlick, there is no external world — any talk of [the objects of] the external world can only be taken as heuristic).

The differences between Wheeler and Schlick primarily revolve around (1) space and time, and (2) meaning.  For Schlick, space and time will be features of the given, their reality easily ‘verified’ by the mere fact of the given at all.  In contrast, Wheeler sees space and time as modes of thought, not part of reality.  If space and time are modes of thought, then there must be something that we are thinking about.  This seems to imply that there is something external to us or mind-independent that our thoughts try to ‘reach out and grasp,’ or represent — but this kind of talk is forbidden on Schlick’s account.

For Wheeler, meaning is the joint product of all the evidence available to communicators.  For Schlick, meaning is identical with the method of verification in the given.  Prima facie, these views are rather similar.  But for Schlick, all meaningful statements must be reducible to some concatenation of atomic words, directly referring to the immediately apprehensible features of the given.  Wheeler doesn’t explicitly commit himself to such reductionism (to atomic words).  Rather, evidence is more broadly construed, so that we can actually talk about theoretical entities without talking only about our phenomenal experience.  For Wheeler, to say that there is a forcefield is to infer a theoretical fact about reality from a set of registrations on some device.  Schlick, in contrast, maintains that to say that there is a forcefield is just to say that such-and-such a device registers so-and-so in a particular way — and does not ascribe reality to the forcefield itself.  The differences in their respective accounts of meaning will be important going forward.

1. The truth or falsity of the reality of the external world has no impact on your perceptual experience.  If we are all in the internal world and this should be some fantastic dream, there is no empirical matter of fact you could ever come across which would verify that you are in an internal or an external world.
2. For such a description of pain can only amount to something like, ‘pain hurts,’ ‘pain is the opposite of pleasure,’ or ‘pain is what makes you recoil.’  The first is a tautology, the second is almost as trivial, the third overbroad and not necessary, and none of them convey any nontrivial knowledge about what pain actually is to the person who has never experienced it.
3. For such a statement to be verified re vera, is for there to be consistent agreement in the reactions of a sufficient number of persons to a given stimulus — an agreement that under certain conditions, certain data appear.  (In this way, hallucinations and illusions will not be verifiable.)

# It from Bit, Information as Fundamental

The main problem that leads Wheeler to propose his ‘it-from-bit’ doctrine is the mysterious nature of the fifth axiom of quantum mechanics, viz. the collapse postulate, which we will discuss later.  ‘It-from-bit’ is an antimetaphysical thesis.  The motivation for holding an antimetaphysical thesis is that it provides a clearer notion of truth and a definite, methodical path to getting there.

Wheeler’s central distinction is between ‘its’ and ‘bits.’  An ‘it’ is a thing (that is, something that we ascribe existence to).  This class includes particles, forcefields, the spacetime ‘continuum,’ and your mother’s rosebush.  A ‘bit’ is an apparatus-elicited answer to a yes-or-no question (that is, a binary choice); e.g. the counter registers a click in a specified second, indicating ‘yes’ for ‘photon.’1  Every ‘it’ derives its function, meaning, and existence from ‘bits.’  The reality of every ‘it’ is derived and established from the affirmative answer to a binary/digital question.  I establish the reality of my coffee mug by asking ‘is there a coffee mug on the table?’, looking to it and registering the familiar shape of the cup and handle, and the characteristic deep blue color, in my visual experience (resulting in an affirmative answer); then I can say, ‘there is a coffee mug on the table.’

Wheeler says that ‘It from bit symbolizes the idea that every item of the physical world has at [very] deep bottom…an immaterial source and explanation;…reality arises in the last analysis from the pose of yes-no questions and the registering of equipment-evoked responses.’  This amounts to: all things physical are information-theoretic in origin — that is, information is in some sense ‘prior to’ the physical world.  We can break most things down and explain them in terms of their component parts — and take those component parts and do the same.  But eventually we will bottom out somewhere, at something binary.  Suppose we reach the most fundamental physical particle — some physical point — call it $\omega$.  At that point, the only question we can ask is the brute question, ‘is an $\omega$ there?’, as we cannot explain it in terms of anything else more fundamental.  If we can measure its presence, and the answer is affirmative, then that is the brute bottom of our explanation of $\omega$.  But the reality of $\omega$ comes from being able to measure its presence.  The information precedes the ascription of existence to the physical object.

Wheeler shows how this comes out in a number of ways.  Take a putative physical object, like a forcefield.  We measure the strength of a forcefield by using a device which measures shifts in interference patterns by representing the number of ‘fringes’ in the pattern.  But all the fringes can possibly stand for is a statistical pattern of yes-no registrations.  Or consider how we determine the existence of a photon.  We ask a question like, ‘did a counter register a click during a specified second?’  If so, we say, ‘a photon did it,’ thus ascribing existence to the putative physical object on the basis of binary information.  Black holes furnish a particularly interesting example.  Consider the following discovery by Bekenstein.  The surface area of the horizon of a black hole measures the entropy of the black hole.  Thorne and Zurek explain that, in performing an operation on the value of the surface area, we get $N$, the number of binary digits (‘bits’) required to specify in all detail the constituents of the black hole.  Entropy is a measure of lost information.  No outside observer can determine which of the $2^N$ configurations of bits compose the black hole.  So the size of a black hole (an ‘it’) is defined by the number of ‘bits’ lost within it.  Finally, a more ordinary example.  You wish to determine whether or not your tea is too hot to drink.  If you taste it and burn your mouth, then ‘yes,’ it is too hot to drink.  If you taste it and do not burn your mouth, then ‘no,’ it is not too hot to drink.  In this way, the evaluation of a putatively physical property like temperature is reduced to a binary choice, and so the information precedes the ascription of the property.
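The Bekenstein result can be made concrete with a rough calculation.  The Python sketch below computes the number of bits behind a Schwarzschild black hole’s horizon via $N = A / (4 l_p^2 \ln 2)$, where $A$ is the horizon area and $l_p$ the Planck length.  The constants are rounded values, and the function name is my own; treat the output as an order-of-magnitude estimate, not the authors’ own figure.

```python
import math

# Physical constants in SI units (rounded)
G = 6.674e-11      # gravitational constant
c = 2.998e8        # speed of light
hbar = 1.055e-34   # reduced Planck constant

def horizon_bits(mass_kg):
    """Bekenstein-Hawking entropy of a Schwarzschild black hole,
    expressed as a number of bits: N = A / (4 * l_p**2 * ln 2)."""
    r_s = 2 * G * mass_kg / c**2       # Schwarzschild radius
    area = 4 * math.pi * r_s**2        # horizon area A
    l_p_sq = hbar * G / c**3           # Planck length squared
    return area / (4 * l_p_sq * math.log(2))

# A solar-mass black hole hides on the order of 10**77 bits.
print(f"{horizon_bits(1.989e30):.2e}")
```

So even a quite ordinary ‘it’ (a star’s worth of mass) corresponds to an astronomically large number of inaccessible ‘bits.’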

What this means is that physics can be cast in terms of information.  Wheeler calls (physical) reality a ‘theory.’  We can make each physical item a (metaphysically neutral) element (in some arbitrary state — either 0 or 1) in an information space, and characterize the relations (and their similarities and differences) between elements without ever being committed to a metaphysical claim about what those elements actually are.  Physics does not require a commitment to physicalist metaphysics.

For Wheeler, the notions of ‘meaning’ and ‘existence’ are intertwined.  Meaning is ‘the joint product of all the evidence that is available to those who communicate.’  So for something to be meaningful it must be (1) communicable and (2) empirical.  Let’s explain 1 first.  It’s plausible to say that anything expressible is communicable (and vice versa). If something that is meaningful were not expressible, then it could not mean anything, for that which expresses nothing clearly means nothing.  So for something to be meaningful it is necessary that it be expressible. Now let’s explain 2.  Something that is meaningful must make an empirical difference — that is, there must be some item in possible perceptual experience, which is logically possible to access, that corresponds to the thing’s truth-value.  This notion of meaning is not as impoverished as you might expect, for there is quite a bit of evidence available to communicators.  Even with regard to the past, an intrepid crew of investigators, armed with the right equipment, will be able to establish that such-and-such happened so-and-so long ago in the past, based on some chain or network of physical evidence.  Their findings will contribute to the establishment of that past event’s meaning.  Here’s how this importantly ties into existence.  If a $\phi$ is not meaningful, then it is meaningless to assert something like, ‘$\phi$ exists.’  (Attach any other predicate you choose, and it will nevertheless presuppose the existence of $\phi$.)  For suppose I do assert that ‘$\phi$ exists.’  That entails there must be some possible item in my perceptual experience which ‘verifies,’ so to speak, the existence of $\phi$.  If there isn’t anything that I could see, or smell, or taste, or hear as some result of $\phi$‘s existence, then it means nothing to say that $\phi$ exists.  What would it mean to ascribe existence to something which could never impinge upon our perceptual experience?  
Its existence or lack thereof will never affect the truth-value of any proposition of this world.  So if something is not meaningful, any assertion of its existence is meaningless; therefore we can only grant existential status to those things which are meaningful.

Chalmers observes that this sits nicely with the idea of Shannon information.  Where there is information, there are information states embedded in an information space — where an information space is a structure of (difference) relations between its components.  Differences may be transmitted down some causal pathway.  Notice how this sits nicely with how Wheeler thinks that past events are meaningful.  Consider the infamous tree that fell in the forest with no one to hear it.  Nevertheless, its fall will leave some kind of evidence (like a depression in the ground, scattered needles, etc.) which some investigators may happen to stumble across (and so make the fall meaningful).  On this picture, the tree in the forest, prior to its fall, is an information state (to be defined in terms of its relations to other trees, perhaps).  The information space evolves, differences are transmitted, and some information relates to the tree in such a way that it falls (its fall constituting continuous differences down a causal pathway).  When the information corresponding to the falling tree is related to the ground, there are changed information states corresponding to the depression it leaves and the needles which scatter.  The information is finally communicated when our intrepid explorers see the depression and so receive the information of the tree having fallen.
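The core idea, that a difference at the source is carried down a causal pathway to a receiver, can be sketched in a few lines of Python.  This toy model is my own illustration of the tree example, not Chalmers’ or Shannon’s formalism; all function and state names are invented.

```python
def transmit(state, pathway):
    """Carry an information state down a causal pathway, where each
    step is a function mapping one state to the next."""
    for step in pathway:
        state = step(state)
    return state

# One bit of difference at the source: did the tree fall?
leaves_depression = lambda fell: fell  # a fallen tree marks the ground
explorer_reads = lambda mark: 'tree fell' if mark else 'no sign of a fall'

pathway = [leaves_depression, explorer_reads]
print(transmit(True, pathway))   # difference at the source ...
print(transmit(False, pathway))  # ... arrives as a difference at the receiver
```

The point of the sketch is only structural: whatever the states intrinsically are, the difference relation at one end is preserved as a difference relation at the other, which is what makes the unheard fall meaningful.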

Physical ‘its’ must come from ‘bits,’ which are discrete, for there is no continuum in physics.  There is no continuum in physics because there can be no continuum in mathematics.  Of the number continuum, Weyl says ‘belief in this transcendental world taxes the strength of our faith hardly less than the doctrines of the early Fathers of the Church.’  Likewise there can be no continuum of/for physical objects; they must be discrete.  Quine articulates this point quite well: ‘Just as the introduction of the irrational numbers… is a convenient myth [which] simplifies the laws of arithmetic… so physical objects are postulated entities which round out and simplify our account of the flux of experience…  The conceptual scheme of physical objects is a convenient myth, simpler than the literal truth and yet containing that literal truth as a scattered part.’  (1) That physics is discrete in this way means that it must yield to digital questions, and consequently physics will be information-theoretic.  (2) I think that the phrase ‘conceptual scheme of physical objects’ is particularly telling.  We interpret empirical evidence through a particular theory or conceptual lens — to call an object ‘physical’ is just to conceptualize a feature of our perceptual experience in a certain way.  We reserve the word ‘physical’ just for those meaningful, empirical items in our perceptual experience.

So on Wheeler’s account, ‘reality’ has the status of theory.  Reality is constructed out of the kinds of questions we ask about the world, and the ways in which we interpret those binary answers.  To press the point, consider how we measure the spin-properties of electrons.  Suppose I have an electron.  I cannot ascribe either ‘black’ or ‘white’ or both (or neither) until I shoot the electron through a color-box.  If the color-box does its job right, the outcome of the measurement will be either ‘black’ or ‘white’ (each with exactly $1/2$ probability).  But my choice to measure color disrupts the electron’s hardness value — that is, I can never ascribe a definite hardness property and a definite color property to the same electron at the same time.  The moral is that the choice of question (e.g. what is the hardness? vs. what is the color?) and the choice of when the question is actually asked play (some [but not the whole]) part in deciding what we can justifiably assert about reality or ‘the World.’  So to say that reality is a theory isn’t as unintuitive as it may first appear.
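The color/hardness behavior described above can be caricatured in code.  The following Python sketch is a toy model of the complementarity, not a real quantum simulation: asking a question yields a 50/50 binary answer, repeating the same question returns the same answer, and asking the other question wipes out any definite value of the first.  The class and method names are my own.

```python
import random

class Electron:
    """A toy electron carrying at most one definite property at a time."""

    def __init__(self):
        self.color = None      # 'black' or 'white', or no definite value
        self.hardness = None   # 'hard' or 'soft', or no definite value

    def measure_color(self):
        if self.color is None:
            self.color = random.choice(['black', 'white'])  # 50/50 outcome
        self.hardness = None   # measuring color disrupts hardness
        return self.color

    def measure_hardness(self):
        if self.hardness is None:
            self.hardness = random.choice(['hard', 'soft'])
        self.color = None      # and measuring hardness disrupts color
        return self.hardness

e = Electron()
c1 = e.measure_color()
h = e.measure_hardness()   # asking a different question in between ...
c2 = e.measure_color()     # ... means c2 need not agree with c1
print(c1, h, c2)
```

The choice of which question to ask, and when, shapes what can be asserted about the electron, which is the moral of the passage above.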

So if information is prior to physical objects — and, indeed, the status of physical objects is merely theoretical — then it seems like something which must be fundamental is perceptual experience.  This is the notion which led Chalmers to suggest something like Wheeler information as the fundamental constituent of reality.  Our conscious perception of ‘the World,’ or of things, underlies all other empirical (and even metaphysical) knowledge.  Without it, reality wouldn’t even be speakable.  So on this picture, physics has a distinct theoretical quality, whereas perceptual experience is non-theoretical and most fundamental.

1. Here’s the reason why I put scarequotes around ‘photon.’  We ascribe existence to the thing we think caused the counter to register a click.  But a photon is a theoretical entity (you can never actually see a photon — only its causal influence).  If we conducted the experiment within a different theoretical/conceptual framework, we might attribute the registering of a click to some other theoretical entity.

# An Update and Chalmers

This aside is meant to provide some context for upcoming posts.

In ‘Facing Up to the Problem of Consciousness,’ David Chalmers explains exactly those features which make the ‘hard problem of consciousness’ oh so very hard. In light of his explanation, he suggests a way forward for nonreductive explanation, pointing out that in physics it occasionally happens that an entity must be taken as fundamental (that is, not explained in terms of any simpler constituents or component parts). To explain electromagnetism, for instance, the ontology of physics had to be expanded with new basic properties and laws. In a similar way, physics takes matter and spacetime as fundamental.

This leads him to propose that we take perceptual experience as fundamental (in a way analogous to electromagnetism and spacetime). He points out a number of features that a nonreductive theory of consciousness ought to have, and then goes on to suggest three basic principles in a theory of consciousness, namely (1) the principle of structural coherence, (2) the principle of organizational invariance, and (3) the double-aspect theory of information. Our discussion will primarily revolve around 3.

Chalmers understands information in the sense of Shannon information.  From the observation that there is a direct isomorphism between physical processes and phenomenal (or experiential) processes — that is, that physical processes and conscious experience embody the same abstract information — Chalmers hypothesizes that (at least some) information has two basic aspects, viz. the physical and the phenomenal. On this thought, experience arises in virtue of its status as one aspect of information, and physical processes embody the other aspect.
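Chalmers’s talk of physical and phenomenal processes embodying ‘the same abstract information’ can be illustrated with Shannon’s own measure. A minimal sketch (the example distributions are hypothetical, chosen only to show the idea):

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Two 'realizations' of the same abstract information space: a physical
# process and a (hypothetical) phenomenal analogue with identical outcome
# probabilities carry the same number of bits, whatever their intrinsic
# nature. Shannon's measure is indifferent to what the outcomes ARE.
physical = [0.5, 0.5]    # e.g. a fair binary physical event
phenomenal = [0.5, 0.5]  # an experiential process with the same structure
print(shannon_entropy(physical))    # 1.0
print(shannon_entropy(phenomenal))  # 1.0
```

That the measure depends only on the probabilities, never on the character of the outcomes, is exactly what lets the same information be ‘embodied’ twice over.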

Chalmers says that one motivation for this view comes from Wheeler and his ‘it-from-bit’ doctrine. The laws of physics can be cast in terms of information, and he takes it that Wheeler shows that information is fundamental to the physics of the universe.

Upcoming posts will be primarily concerned with exploring both the viability of information being fundamental and whether this is the sort of thing that would actually buttress Chalmers’ view.

# Semiotics and System Development

This post will concern the development of systems, assuming the earlier supposed ‘systems theory’ ontology. In particular it will focus on the ‘canonical development trajectory’ (CDT) and show how this fact, together with an ontological plurality of systems, suggests emergent dualism and a systems-theoretic way of thinking about the development of conscious experience.

Assume the definition of Nature previously provided. A system undergoes development, and this development can be modeled as part of Nature. Development is a process from vague beginnings toward more and more specified particulars. Each stage of development will be a ‘refinement’ of the earlier stages, having acquired more information in the process. For example, consider how a stream may develop from a glacier or mountainous lake. As the stream descends, its path down the mountain (its development) will progress in a way supervenient on the totality (of the development) of the preceding stages (this is how the system becomes increasingly specified as it accumulates more and more information at each stage [e.g. about the ruts and ridges which further specify the trajectory of the stream]).

Few developmental tendencies are universal; one such is the ‘canonical development trajectory.’ In CDT, each stage is defined by some thermodynamic and/or some informational change. Consider the universe at its inception and assume that it is thermodynamically isolated. As the universe expands, it becomes increasingly disequilibrated. This is to say that the universe’s constituents slowly clump into vague, amorphous masses; in these masses, continued disequilibration sharpens them into having more definite forms. Through continued disequilibration these forms develop into organizations. And so on, as the universe moves further from thermodynamic equilibrium.1 So the general pattern of development looks like this: [vague $\rightarrow$ [more definite $\rightarrow$ [more ‘mechanistic’]]]. This pattern of development occurs at different rates for different kinds of systems (e.g. chemical, biological).

Note that things like striving and haste make work less energy efficient, and that entropy production is the way that a disequilibrated region can foster or restore (some of) the universe’s thermodynamic equilibrium. This observation leads us to an Aristotelian conception of cause qua the development of a system. Consider a developing system S — what is it that causes it to develop in the way that it does? (1) The components of the system must have a susceptibility to being changed. That is, the ontological status of the substrate of the component must be the kind of thing that can undergo some kind of change. E.g. the bronze of Aristotle’s bronze statue is the kind of material that is susceptible to being worked into a statue. This is the material cause. (2) The system must have initial and boundary conditions, organized in a definite and particular way, constraining its development. The ruts and ridges of the mountain set the conditions for the development of the water (for instance, the water will flow down a rut and cannot balance on a ridge). This is the formal cause. (3) There is some inciting ‘force,’ ‘push,’ or ‘action’ that initiates the process. Perhaps the dam (holding the water) gives way, initiating the water’s developmental flow down the mountain. This is the efficient cause. (4) There is something that the system ‘strives’ to develop into as it gains more information. This ‘striving’ will be attributable to the Second Law of Thermodynamics. The water flowing down the mountain is trying to ‘dissipate’ its energy (and increase the entropy of the universe, pushing it toward equilibrium). Put plainly, it is the answer to ‘why’ the water flows down the mountain the way it does. All ‘work’ is undertaken to move the universe toward increased equilibrium (and entropy). This is the final cause.

Each level in development represents a different ontological (or integrative) level: [physical dynamics $\rightarrow$ [chemistry $\rightarrow$ [biology $\rightarrow$ [neuropsychology (and so on)]]]]. This is contrary to the ‘unity of the sciences’ perspective, which maintains that lower levels give rise to and subsume higher levels — that each higher level is supposed to be entirely supervenient on the lower level(s). But the ‘unity of the sciences’ thesis doesn’t seem to match with reality (or how we take things to be). For, prima facie, it seems that each higher level integrates and harnesses all the lower levels under its own local rules. That is, they synthesize all the lower levels in order to promote the final cause of the higher level from one moment to the next. To see this, consider a person (a biological system). Individual action comes from a person instantiating neural firings, which defines and restricts the space of biological (re)actions; the organization of the biology sets the criterion for what chemical/molecular actions can be, and so on. The highest integrative level determines the space of action for the lower levels. We say that levels are ontologically separate in that they are ‘integrative.’

In the course of development, there are two (interrelated) principles we must keep in mind, viz. total novelty denial and the principle of continuity. The former asserts that nothing ‘totally new’ appears in the course of development. The latter asserts that all emergent features at higher integrative levels would have been ‘vaguely and episodically’ present (primitively) in the lower levels. This entails that any present configuration at a high level implies that which gave rise to it (either ontologically-materially or conceptually). It’s worth noting that total novelty denial does not deny that something ‘apparently novel’ may emerge, only that it is not actually novel. If that sounds confusing, here is an example. You might think that the development of some new species or of a unique poem seems ‘new.’ But it is not ‘new’ re vera. The development of each is just a restriction on what was possible before its (that is, the species’ or poem’s) emergence. Salthe asks us to consider language. We ‘chose’ the English language and can express infinitely many things with it; however, English cannot express the ‘larger moods’ expressible in French (which may have been expressible in a common, ancestral language). So something ‘apparently’ novel can emerge in the long run,2 but in actuality it is just a restriction on the previous stage(s) of development.

And now we can reach the crux of the matter. Consider the proposed developmental pattern: [physical dynamics $\rightarrow$ [molecular connectivities $\rightarrow$ [biological activities $\rightarrow$ [individual action $\rightarrow$ [sociopolitical projects $\rightarrow$ [culture]]]]]. Notice that as the development of this system proceeds, causation is less susceptible to material reduction (in the sense that, at higher and higher integrative levels, it becomes progressively more difficult to describe the actions at those levels in terms of the more basic systems out of which they arose). For when we reach culture, we have not a material form of evolution (assuming we ever thought we had one), but a purely informational form of evolution. For culture is a product of information, of written history and literature, of the behavioral customs and norms between people of the same society, the thought behind certain practices, and so on. Culture is a highly integrative level, and is in no sense straightforwardly material. By total novelty denial and the principle of continuity, culture logically implies the constituents which give rise to it; but those constituents are purely informational, not material, and so culture resists material reduction. That development moves toward this suggests something like semiotics being the ‘ultimate framework’ of reality, as opposed to matter. Semiotics amounts to information through isomorphisms in ‘signs.’ And this seems consistent with our earlier developed ‘systems theory’ ontology — the (onto)logical priority of systems. If what constitutes a system are (1) ‘objects’ and (2) relations between them, then a semiotic framework seems intuitive. For each ‘object’ can be taken as a token or sign, and the signs will be related in a certain way.
Two discrete systems are isomorphic if we can establish some ‘meaningful’ mapping between them.3  If the concept of a system is the most prior, then the domain of semiotics seems to follow almost immediately after.
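That ‘meaningful mapping’ can be made concrete: a bijection between the objects of two discrete systems counts as an isomorphism when it preserves the relations between them. A small illustrative sketch (the function and the example systems are my own, assumed only for illustration):

```python
def is_isomorphism(mapping, relations_a, relations_b):
    """Check that a mapping between the objects of two discrete systems
    is a bijection that preserves the relations (a 'meaningful' mapping)."""
    # Bijectivity: distinct objects must map to distinct objects.
    if len(set(mapping.values())) != len(mapping):
        return False
    # Relation preservation: the image of A's relations must be exactly
    # B's relations.
    image = {(mapping[x], mapping[y]) for (x, y) in relations_a}
    return image == set(relations_b)

# Two three-object systems whose objects stand in a cyclic relation.
rel_a = {(1, 2), (2, 3), (3, 1)}
rel_b = {('a', 'b'), ('b', 'c'), ('c', 'a')}
print(is_isomorphism({1: 'a', 2: 'b', 3: 'c'}, rel_a, rel_b))  # True
print(is_isomorphism({1: 'a', 2: 'c', 3: 'b'}, rel_a, rel_b))  # False
```

The check never asks what the objects are (numbers, letters, signs); only the pattern of relations matters, which is the semiotic point.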

1. The continued development of increasingly complicated systems will, then, be dependent on the continuation of the universe’s expansion.
2. Here’s lookin’ at you, consciousness.
3. A robust account of this can be found in Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter.

# The Priority of the System

The concept of a system is going to be logically prior to any other concept. This has both scientific and metaphysical ramifications. This paper seeks to explain systems’ priority and touch on the consequences thereof.

Suppose that physicalism is true. Reality consists just in matter and motion, governed by physical laws, and is nothing over and above this. The physicalist thesis logically entails that all reality is a physical system.

But to even talk about a ‘physical system’ or conceive of one, we must first have the concept of a system. For now, we can think of a system as (1) composed of components, (2) composed of relations between those components, and (3) the relations between the components perform some kind of function. To think of any thing requires first the thought of a system. Even to think of a thing in isolation is to think of a system. For suppose I consider a system amounting to nothing more than a thermodynamically isolated rock. To think of a thermodynamically isolated rock, I must also think of the pieces of rock which constitute it (exactly what constitutes a rock) — and that’s going to be some relation of things. The constituents of the rock relate to each other in such a way as to function as the identity of a discrete (if isolated) object. Or consider the following. Systems are prior to even the most general and abstract scientific discipline, logic. For to even do logic requires that one have some set of sentence letters and some set of axioms. The sentence letters amount to components and the axioms define the possible relations. Often, the relations between the sentence letters function to output (or be capable of outputting) some truth-value. Logic requires the instantiation of a system.
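The three-part characterization above (components, relations between them, and a function those relations perform) can be sketched as a tiny data structure. This is a hypothetical formalization offered only for illustration, not anything from the source:

```python
from dataclasses import dataclass
from typing import Callable

# A minimal, hypothetical formalization of the three conditions:
# (1) components, (2) relations between components, (3) a function
# that the related components jointly perform.
@dataclass
class System:
    components: frozenset
    relations: frozenset          # pairs of related components
    function: Callable[[], str]   # what the relations accomplish

    def perform(self) -> str:
        return self.function()

# Even a thermodynamically isolated rock fits the schema: its grains
# stand in cohesion relations whose joint function is the identity of
# one discrete object.
rock = System(
    components=frozenset({'grain1', 'grain2', 'grain3'}),
    relations=frozenset({('grain1', 'grain2'), ('grain2', 'grain3')}),
    function=lambda: 'coheres as one discrete rock',
)
print(rock.perform())  # coheres as one discrete rock
```

Note that the sketch says nothing about what the components are made of; only their relations and function figure in, which anticipates the point about models below.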

At the inception of any new science, there is some new realm of objects that are to be investigated. These objects are components and the relations between these objects (and their functional outputs) are to be investigated. New sciences require the instantiation of new systems.

Here’s an important point about modeling that reveals something crucial about systems.  For a model of any system (take, e.g., a physical system) to be a good model, it need only preserve the system’s formal characteristics.  We can create a cybernetic model of an organism.  A cybernetic model simulates all the behaviors (of an organism) regardless of its material constitution (and its ontological status) — all that is important to the model is the preservation of the formal relations between components; what those components actually are is irrelevant to the functioning (of the model).1  What this suggests is that the material constitution of a system is irrelevant to that system’s functioning.  An organism with a biological-material structure can perform the same functions just as well as a cybernetic mechanism — the material constitution differs, but the functioning does not.  So our understanding of a system must be independent of our understanding of its material constitution.  That is, the whole (the system) is a sum of logical relations or connections between objects of any ontological constitution.2  If the ontological constitution of the object is irrelevant to the system, then we may consider systems as composed of mechanical stuff, or material stuff, or mental stuff without changing the behavior of the system.  In cybernetics, when we try to simulate an organism, we must always make use of strictly formal concepts or tools like feedback, information, or control.  And these concepts are what actually must figure in at the level of the simulated organism’s functioning, too.  In this way, a model depends only on the formal concepts and not on the physical substrate.  Systems theory uses these formal tools and is more general — and this generality heralds its ontological priority.

Now consider a system S composed of subsystems S$_1$, S$_2$, and so on. In this situation, we can study the structure of just one of the subsystems (say, S$_1$). To S$_1$ we apply some established science, using it to study the relations and mathematical functions that govern S$_1$. Then we go on to S$_2$ and do the same thing (using, perhaps, some different established science). And then on to S$_n$, and so on. After studying each subsystem, we can consider all the relevant relations and mathematical functions which hold between S$_1$ through S$_n$. For example, suppose we consider the system of a person (something with both mental and physical attributes). We can study the structure of the body system and, independently, we can study the structure of mental life (the mental system/perceptual experience). Having done this, we can try to map features of the mental structure to features of the physical structure (aiming to achieve some sort of isomorphism) in order to understand their relevant similarities and differences and how they both importantly figure into the overall constitution of the person. Note, however, that the person is not going to be reducible to the mental or physical subsystems (or both), because the presence of both (working in tandem [in some way]) is going to be what makes the person the system such as he is. This highlights how we can cut up and divide a system in whatever way is most fruitful to investigation, give each subsystem the scientific treatment it deserves, and then look at the relations between each of those (in a sort of general systems theory).

That systems are divisible and relatable in this way is refreshingly antimetaphysical. We do not need to be committed to metaphysical theses which maintain that all reality is a physical system. Such theses are, at any rate, impotent. For if you are given the world, knock your head against it, and say, ‘Ouch! How physical,’ then you are simply appending a label, ‘physical,’ to the world and haven’t yet said anything about it. For a physicalist thesis to have any teeth, there must be nonphysical things that do not (or maybe could not) actually exist in the world. To say that all reality is physical is to say nothing; it is tantamount to saying that all reality is just reality. (Moreover, the physicalist who claims he cannot even conceive of anything non-physical renders his physicalism inert. He is not saying that all things bottom out at the physical; he is simply saying that all reality is physical. This is nothing more than an uninformative new label for ‘reality.’) Here’s another way to bring this out. Physical laws can be cast in terms of ‘information.’ We can talk about how different states give rise to different effects without ever specifying the ontological status of whatever is actually in that state.3  All that matters is the position of the object in the information space.

One advantage of a ‘systems theory’ view is that it is a better ontological fit with our ways of thinking about science and scientific objectivity. Because of the reducibility problems earlier mentioned, the ‘unity of the sciences’ thesis seems not only ad hoc, but forced. Different researchers operate in different domains — it is not as though physicists are out to discover the truths of neurophysiology, but rather the motions of bodies. When we are not dogmatically trying to reduce one science to another, we tend to treat each science as more or less independent from the others. This is why a ‘systems theory’ approach sits nicely with our contemporary scientific, intellectual climate.

And a parting thought. Reality or ‘the World’ is what is mediated to us in perceptual experience. All we are directly acquainted with are the aspects of our perceptual experience. Schlick, a logical positivist and verificationist, was distinctly antimetaphysical in a similar sense. He referred to ‘the given’ as what is (possibly) presented to us in perceptual experience, and claimed that ‘the given’ is the domain of all that is knowable.4  I do not think of ‘the given’ in as narrow and impoverished a way as Schlick; I will, however, admit that reality or ‘the World’ is only present to us insofar as what information we can (possibly) acquire or know about reality must be contained in our perceptual experience (or else be some kind of a priori knowledge). ‘Nature’ should be thought of as distinct from ‘the World.’ Call Nature not what we can be presented with in perceptual experience, but rather our scientific construct for scientifically examining the world, one which must be mediated through language. Nature is what happens when we talk about the world, try to contain and grasp it in our language (and its corresponding concepts). Consequently, statements about Nature are going to be theory-laden. Laden with whose theory? Observations about Nature will be couched in the preexisting theoretical structure of whoever the observer is. Berkeley’s and Newton’s conceptions of Nature are radically different, but they both ‘reach out and touch’ the same reality or ‘the World.’

When the physicalist says that reality is a physical system he is not making a claim about ‘the World.’ Rather, he is making a claim about Nature, that all empirical science is ultimately about fundamental physical particles. This has a certain intuitive appeal. But (1) reductionism is often not successful (as indicated earlier) and (2) this really just amounts to saying that we can only conceive of the objects of perceptual experience as made of material constituents (but again this just nerfs the meaning of ‘material’).

1. This has been used to argue that psychobiological entities must be considered as nothing over and above mechanical things/processes.
2. Take ontological constitution to be the ontological status of the component. As an example, you might think that the ontological constitution of your body is material, whereas the ontological constitution of your perceptual experience is mental.
3. This is meant to emphasize the point about cybernetics. And also foreshadow discussion to come.
4. Presumably mathematical truths are either abstracted from ‘the given’ or else ‘the given’ actually refers to all that is empirically knowable.

# The Liar Paradox and the Measurement Problem

Just a quick observation.  I think that a telling analogy can be drawn between the measurement problem in quantum mechanics (QM) and the liar paradox.  This aside is just meant to draw out those intuitions.

The liar paradox amounts to more or less the following statement.   ‘This sentence is false.’  You know the drill.  If the sentence is true, then it must be false (for it asserts its own falsity).  And if the sentence is false, then it must be true (for the negation of its falsity is its truth).  In light of this (and to avoid infinitely ‘looping’ through the truth-values of the sentence), we say that we cannot ascribe any truth-value to the sentence at all, dub it a paradox, and call it a day.

Another account of the measurement problem can be found here.  Nevertheless, here’s the gist.  The dynamic equations of motion (DEM) (the third axiom of QM) are thought to completely determine the states and motions of all particles (all states and motions are calculable [via the Schrödinger equation]).  By DEM, if we measure the color of a hard electron, the outcome should be a superposition of being both black and white.  But this isn’t actually what happens (and is where the fifth axiom of QM comes in).  The measurement outcome is always either definitely black or definitely white (with each result having a probability of exactly .5).  (Somehow measurement ‘disrupts’ the outcome, collapsing the superpositional state into just one of its terms.)

Say our goal is to identify what ‘the liar paradox’ would look like in a physical, rather than linguistic, system.  Superposition seems like a good candidate.  When the hard electron is going through the color box, it is in a superposition of being both black and white.  But we only really understand what superposition means in a negative sense.  An electron in a superposition of being black and white is not black, nor is it white, and it is not definitely both black and white, but nor can it be neither — and what that means, we don’t really know (so we introduce the term, ‘superposition’).  In one sense, it seems that, prior to the measurement outcome, a color-property simply cannot even be predicated of the electron.  With regard to its color, literally nothing can be said.  (Until it emerges from the device, but this isn’t as relevant.)

And this starts to look like a liar paradox.  We refuse to ascribe a truth-value to ‘this sentence is false,’ in the same way we refuse to ascribe a color-property to the hard electron going through the color box.

DEM says that the result of a measurement is a superposition, but the collapse postulate predicts a probabilistic outcome of .5 for black.  (And how could you ever even see a superposition?)  Suppose the outcome is black.  When you measure a second hard electron, the outcome will necessarily be white.  And when you measure a third, the outcome will be black.1  This sounds like saying: suppose ‘this sentence is false’ is true.  Evaluate the truth-value of the sentence again; it must be false.  And on the next evaluation, it must be true.
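The structural parallel between the two alternations can be made explicit in a few lines. This is a deliberately fudged toy, like the alternating-outcomes story it mirrors; the function names are my own:

```python
# Toy sketch of the analogy: re-evaluating the liar sentence flips its
# truth-value on each pass, just as the imagined sequence of measurements
# alternates black/white outcomes. Both are the same formal structure,
# an involution iterated over time.
def liar_evaluations(initial, n):
    """Successive truth-values of 'this sentence is false'."""
    values, v = [], initial
    for _ in range(n):
        v = not v          # if true, then false; if false, then true
        values.append(v)
    return values

def alternating_colors(initial, n):
    """The (fudged) alternating black/white measurement outcomes."""
    flip = {'black': 'white', 'white': 'black'}
    outcomes, c = [], initial
    for _ in range(n):
        c = flip[c]
        outcomes.append(c)
    return outcomes

print(liar_evaluations(True, 4))       # [False, True, False, True]
print(alternating_colors('black', 4))  # ['white', 'black', 'white', 'black']
```

The shared structure is the point: in neither case does the iteration ever settle on a stable value, which is why we withhold the predicate in both.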

The difference between the two is that, empirically, measurements must have outcomes, while the liar paradox doesn’t demand a truth-value in the same way — we can reserve our judgment.

1. We’re fudging a bit here, but bear with me.

# Metaphysical Dogmatism

In this post I will explain the problem of reductionism in the sciences and why the insistence on reductionism is objectionably dogmatic.  In the course of doing so, I will introduce a theoretical framework free of metaphysical dogmatism and sketch out why this framework might be preferable.

A scientific enterprise investigates a very circumscribed domain of ‘objects.’  No scientific enterprise is concerned with the general domain of all ‘things.’  Even the most general (rational) research program will inevitably become specialized in its generality.  To see this, consider modern logic.  Logic is arguably the most general of all research programs, exploring (more or less) all conceivable abstract structures (and so all the ways things could be rationally related).  But even logic, in all its generality, is a highly specialized, technical, and complicated discipline.  It is in no way accessible to the layperson.1  It is a speciality, and in this way we might say that it is ‘specialized in its generality.’  If this is true for logic, then this should also follow for the rest of the sciences (e.g. physics, neuroscience, psychology).

In noting the specialized nature of sciences and their circumscribed domains, we should not be so quick to dismiss our ordinary intuitions when they conflict with whatever scientific paradigm is in vogue.  That our intuitions run counter to current scientific paradigms (of explanation) does not entail that our intuitions are mistaken.  For the current scientific paradigms may be founded on concepts and principles that are ill-suited to the character of whatever it is that we intuitively wish to investigate.  For example, consider biology’s transition from metaphysics to science re vera.  Biological concepts like finality and autoregulation escaped purely mechanical explanation.  So, at the time, these biological concepts seemed genuinely metaphysical — they could only be treated as part of the (obscure) nature of biotic systems.  But these metaphysical peccadilloes eventually stimulated the development of new concepts in a new scientific discipline (viz. biology).  And in so doing, features like finality and autoregulation became ‘objectivized’ and transitioned from ostensibly metaphysical happenings to scientific phenomena re vera.  (The metaphysical concepts become the objective concepts of the new science.)

There are two kinds of reductionism in science, viz. (a) metaphysical and (b) methodological.  The former tries to make a single science (e.g. physics or mechanics) fundamental and subsequently ‘trace back’ the remaining sciences (e.g. chemistry, neuroscience, sociology) to the single, fundamental one (with its basic constituents, matter and motion).2  The latter is the conception that a satisfactory explanation of some system of reality is achieved by analyzing the system into its components or elements.  This amounts to the idea that we look to the behavior of the parts to account for the behavior of the whole.  For example, there is the thought that biological facts are explainable by looking at chemical facts, and those in turn by looking at thermodynamical facts (e.g. we look to the behavior of individual molecules to explain the behavior of biotic systems).  For a time, there was a philosophical view claiming that real philosophical progress requires logical reduction.  To understand biology we reduce it to the simpler (more primitive) chemistry and try to derive from it all the corresponding biological facts.  The solution to skepticism was thought to come from reduction: we know the higher-level facts by reducing them to the more basic and knowable lower-level facts.

First we’ll explain the problem with metaphysical reductionism.  (This can be seen as the core of the mechanistic world view.)  Privileging any single science as ‘fundamental’ is not a scientific notion.  There is no scientific reason to assert that physics is more ‘fundamental’ than chemistry.  This is not the kind of thing that can be decided empirically.  Consequently, the privileging of some one science as fundamental is a metaphysical view.  To privilege a science as fundamental amounts to the adoption of a [that science]-metaphysical world conception.  There have been attempts to privilege mechanics as the fundamental science.  But this amounts to adopting a mechanistic metaphysical world conception — it says that all there is is mechanics, and so all reality really is are the fundamental constituents of mechanics, namely matter and motion; all nature is reducible to these two things.  This is the kind of metaphysical dogmatism that Schlick taught us to avoid when he introduced his concept of ‘the given.’  And to really press the point, what we are directly acquainted with is experience, not matter or spacetime or somesuch.  If we were really gung-ho about reductionism, then we would have to admit that all knowledge (in something purportedly primitive or fundamental, like physics) isn’t really knowledge of the physical bodies around us, but is actually knowledge of experiments and meter readings, as these are the tools that we must use in our perceptual experience to get any knowledge of the physical universe at all.  This is a conclusion that most physical and material reductionists should like to avoid, but nevertheless this is where dogmatic metaphysical reductionism must inevitably lead.

To explain the problem with methodological reductionism, we should first describe how scientific explanation proceeds.  Scientific explanation is a deductive process.3  We establish general premises by proposing and testing/falsifying hypotheses.  After establishing the general premises, we can deduce, logically, its consequences (for any particular event or state-of-affairs).  The general principles are thought to govern the large variety of facts that we want to explain.  This is what makes it deductive.  (Think about how all states-of-affairs in physics are calculable.)

Now let’s consider how methodological reductive explanation works.  In any particular science (e.g. physics, neuroscience), all the facts pertaining to that science follow deductively from that science’s general premises.  But reducing one science, like biology, to a more ‘fundamental’ science, like physics, is trickier; the reductionist cannot maintain his neat deductive process of explanation.  For instance, consider how the reductionist tries to explain life.4  At the crucial point, in the transition from physical, abiotic chemicals to veritable life (self-organization; self-sustenance), the reductionist must invoke chance in his explanation.  He posits that something extremely improbable, though not theoretically impossible, occurs (at some point in the history of the universe).  In explaining the ‘emergent’ fact of élan vital, the reductionist abandons deductive explanation and appeals to accidental, improbable events.  This amounts to a vague and modest admission of a(n) ‘(im)probabilistic’ explanation for a scientific phenomenon that should be explained in a logically satisfactory/plausible way.  Consequently, if scientists ever succeed in synthesizing life from raw chemical constituents, they still would not have provided an adequate scientific explanation for what had happened, but at most suggested an ‘interesting logical possibility of its origin.’  (For we still have no straightforwardly scientific, deductive explanation.)  Nevertheless, biology transitioned to a veritable scientific enterprise.  But it is not as though the mechanists won (insofar as they were concerned with scientifically adequate explanatory reduction) and the vitalists lost (in that we no longer think of the élan vital).  It’s that we had to develop a richer set of concepts (viz. autoregulation and heredity) that we couldn’t have acquired in pursuing a purely physical scientific research program.

There are two kinds of methodological reductionism, (c) ‘looking down’ and (d) ‘looking up.’  We have already described (c), where Science$_{n+1}$ is explained in terms of the more fundamental Science$_n$.  (d) works the other way, where we explain Science$_n$ by ‘looking up,’ as it were, to Science$_{n+1}$.  To bring out the difference, consider the science of psychology.  Advocates of (c) will claim that all of the ‘scientific’ aspects of psychology are actually in (and so reducible to) the domain of neurophysiology.  Advocates of (d) will claim that all psychology not reducible to neurophysiology must be traced back to the social context at large (that is, to sociology).  But (c) and (d) reductionism are equally dogmatic and objectionable.  Again, empirical investigation will not compel you to privilege any single science (either via ‘looking up’ or ‘looking down’), so the insistence on any one fundamental science can only be metaphysical dogma.  There is no neat, deductive, hierarchical structure within the totality of scientific disciplines; they are largely discrete.

And this makes sense.  Consider any system S.  All components of S depend for their existence on the existence of the whole S.  This becomes clear when we consider the human anatomical system.  The whole human (S) is made up of components (heart, lungs, arteries, liver, stomach, etc.).  The heart (and any other part) cannot exist in isolation.  For the function of the heart is to pump blood through a body; but to accomplish this task, there must be veins and arteries to carry the blood.  The stomach requires blood in order to perform its role within the system.  A human S without a heart is not a functioning system.  All the parts come together, in some kind of synthesis, to sustain and constitute the system, and their existence cannot be made sense of in isolation.  To point to a higher structure or an underlying lower structure is to miss the genuine reality of the system in some way.

To free science of metaphysical and methodological dogma, we should not put stock in reductionism.  Reductionism does not conform to what we want out of scientific explanation: deduction of what-is-so from a set of general premises.  In its place, we may wish to consider the more flexible systems theory.  Without going into any detail, here’s the gist.  For now, we can think of a system as (1) composed of components and (2) composed of relations between those components.  A theory of that system will describe relations and mathematical functions that hold between components of the system, so that we might make predictions about states-of-affairs within that system.  The ontological status of the components of the system has no impact on how the system functions (though presumably all components within a system will be of the same type [but this is not necessarily so[^5]]).  The advantage of this kind of approach is that, in examining some definite system, we do not need to justify the epistemic possibility of examining it by reducing it to the definite knowledge of some other system which we are confident we already have.  We do not need a full account of physics to do neurophysiology (and so on, up the hierarchy).  The only real role physics has in advancing neurophysiology is in developing/engineering the various tools that the neurophysiological enterprise demands for its investigation.
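As a toy illustration (my own sketch, with hypothetical names, not any standard systems-theory formalism), the bare idea of (1) components and (2) relations can be modeled so that predictions depend only on the relational structure, never on what the components ‘really are’:

```python
# A minimal, hypothetical sketch of the systems-theoretic gist:
# a system is (1) a set of components and (2) relations (here, functions)
# between them.  The components are opaque labels -- their ontological
# status plays no role in how the system behaves.

class System:
    def __init__(self, components):
        self.components = set(components)   # (1) bare, untyped components
        self.relations = {}                 # (2) named relations between them

    def relate(self, name, source, target, f):
        """Record that `target` depends on `source` via the function f."""
        assert {source, target} <= self.components
        self.relations[name] = (source, target, f)

    def predict(self, name, value):
        """Given a state of the source component, predict the target's state."""
        source, target, f = self.relations[name]
        return target, f(value)

# The same formal structure works whatever the components are made of.
# Illustrative numbers only: cardiac output ~ heart rate x ~70 ml stroke volume.
circulatory = System({"heart", "arteries"})
circulatory.relate("pumps", "heart", "arteries", lambda bpm: bpm * 70)
print(circulatory.predict("pumps", 60))  # prints ('arteries', 4200)
```

The point of the sketch is that `System` never inspects what its components are; only the recorded relations do any predictive work, which mirrors the claim above that the ontological status of the components has no impact on how the system functions.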

An intuitive reason for this comes from the following observation.  Whenever a new scientific discipline emerges, it always determines a new realm of ‘objects’ to be scientifically investigated.  This new science brings with it a new technological and (empirical) conceptual apparatus which lends itself to investigating these newfound ‘objects’ of reality.  Systems theory allows us to look at this new discipline without the metaphysical baggage that comes with reducing all its processes to mechanical laws of matter and motion.  For one of the biggest obstacles to the advance of science (and, too, of philosophy) is the dogmatic insistence on traditional categories like ‘mind and body,’ ‘matter and spirit,’ ‘mental and physical.’  Metaphysical reductionism insists on these distinctions (and for that reason is problematic), but systems theory does not force these distinctions upon us.  We can consider a system as just the relations between its components without taking any metaphysical/ontological stance on what those components actually are.

A second intuitive reason for this ‘systems theoretic’ approach: without admitting the primacy of systems theory, we would not be able to see systems at all (even when they really are there).  Consider a conic party-hat on a brown table.  If we just consider the party-hat by itself, we will never see all the conic sections which are inside it.  The conic sections only become apparent when we take our birthday-scissors and cut the hat in a certain way.  But when we cut it, we can genuinely see the circles, ellipses, or parabolas that compose it.  Now try asking the question, ‘Are these geometrical shapes really inside the cone?’  There is no satisfying answer to this.  For if you say that they are not actually in the cone (but merely ‘potentially’ inside it or some such, only to be actualized by the veritable ‘cutting’ of the cone), then you may be met by the reply: how could geometrical figures come out of the cone if they were not already really in the cone?

But with systems theory, we can ‘see’ how the cone is composed of the geometrical figures.  We can see systems that are in fact embedded in reality.  Systems theory allows us to ‘cut’ reality (or the cone) in such a way that we can see genuine systems where it had previously been difficult to see them.  In this way, we can understand reality much more comprehensively, without worrying about reduction.  All features of the world become epistemically accessible to us; we may investigate systems of nation-states just as we may investigate quarks (for they are both real components of the world, despite the prima facie differences in their ontological statuses).  In a similar spirit, Searle tells us, ‘to find out how the world works, you have to use any weapon you can lay your hands on.’  And this is exactly what systems theory lets us do: we identify a system and attack it with a conceptual or methodological framework suited to that system.  And so far this has seemed to work; we attack neurophysiology and physics with different frameworks, and we advance both of them simultaneously.

And in a certain way, this sits nicely with Kuhn.  Kuhn did not think that scientists give us truths about the world (I disagree, but it is the next point that is relevant); rather, scientists give us a series of ‘ways of solving puzzles,’ of dealing with the puzzling problems that emerge in any scientific paradigm.  They develop tools and methodologies to solve these puzzles (and the tools and methodologies vary both with the paradigm and with the scientific discipline in question).  This attitude is shared by systems theory.

So we have shown that reduction to a single, fundamental science is nothing more than metaphysical dogma.  If we want to investigate reality in its most robust and nuanced way, then we ought to view the world as composed of systems (for which there may or may not be an appropriate way of cutting them up).  For investigation that begins from (e.g. mechanical) metaphysical dogmatism forces us into an impoverished conception of reality, where we will ‘miss out’ on systems which really are there.

References:
Schlick
Agazzi
Searle
Chalmers

1. I think that this is both poignant and ironic, for humans are the paradigmatic rational animals.
2. There was an attempt to provide a purely mechanistic explanation of biological facts.  The attempt has a certain intuitive appeal, but at the end of the day it was not sufficiently plausible or persuasive to researchers.  This is because biological features like finality, autoregulation, etc. could not be adequately accounted for mechanically.
3. A difference worth noting: while scientific explanation is deductive, positing scientific hypotheses is usually an inductive process.  The merit of these hypotheses is established via test implications.  So, $H \rightarrow I$, and $\neg I$, therefore $\neg H$.  If we observe $I$ instead of $\neg I$, then the hypothesis is not falsified, but neither is it proved.
4. Ye ol’ élan vital.
5. I intend to explain this further in another paper or post.  It comes in at the level of relations between subsystems in a more ‘global’ system.

# An update

Over the next few posts, I think I’m going to explore some ontological pluralism through systems theory.  Stay tuned.