The term entropy is very often found in works about societal complexity. The theoretical framework that tackles the concept of complexity as an emergent phenomenon is of course that of systems theory (Wiener, 1961; Bertalanffy, 1968). But the meaning of entropy in the social context is often not clear, and this results in obscure arguments about the factors that continuously reproduce complexity. In particular, references to entropy usually imply disorganization and quite often leave aside the fact that disorganization (or disorder) can justifiably be considered as constraints imposed on an observer by his language, that is, his system of distinctions (Foerster, 2003, p. 280). Theoretical frameworks such as social entropy theory (SET) (Bailey, 1990, 1997a, 1997b, 2006a, 2006b) have been developed and refined to measure entropy as an indicator of the internal state of social systems, namely, their disorder as a temporal variable.
More specifically, entropy is generally considered a measure of the ability to predict the next state of a system. If the next state is highly predictable, then entropy is considered to be low, and vice versa; consequently, a system that presents low entropy is considered to be organized and, by deduction, desirable. Predictability therefore seems to be the keyword wherever organization is at stake and references to entropy appear (Wiener, 1961; Arnopoulos, 2001). If this is the case, then the univocal use of the thermodynamic meaning of entropy in the social sciences could lead to all kinds of misunderstandings (Bateson, 2000, pp. 458-459). Entropy has at least two distinct scientific meanings, and each has its own counterpart: energy in the thermodynamic case and certainty in the communicational one.
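To make the link between predictability and entropy concrete, consider a minimal sketch; the two 'next state' distributions below are invented purely for illustration, and the formula used is Shannon's measure of uncertainty, discussed later in this paper.

    import math

    def shannon_entropy(probs):
        # H = -sum(p * log2(p)); higher H means the next state is harder to predict.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Hypothetical 'next state' distributions, invented purely for illustration.
    predictable = [0.97, 0.01, 0.01, 0.01]    # one state is almost certain
    unpredictable = [0.25, 0.25, 0.25, 0.25]  # all states are equally likely

    print(shannon_entropy(predictable))    # roughly 0.24 bits: low entropy, 'organized'
    print(shannon_entropy(unpredictable))  # 2.0 bits: the maximum for four states

A nearly deterministic system thus scores close to zero, while a system whose next state is anyone's guess scores the maximum; this is the sense in which low entropy is read as organization.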
In this paper, we try to shed light on these two different meanings of entropy and to draw clear distinctions between the contexts to which each meaning pertains. This may supply contemporary systems theory with new perspectives, which could bring forth a new conception of the importance of otherness for social systems.
ENTROPY IN THE THERMODYNAMICS CONTEXT
First, let us try to clarify the older (historically speaking) meaning of entropy, that is, the entropy of thermodynamics. There are two ways to consider and measure it: (i) as a measure of the energy that is unavailable in a closed thermodynamic system; and (ii) as a measure of the disorder of a closed thermodynamic system. The first measure is associated with the conversion of heat energy to mechanical energy. The second is associated with the probabilities of the occurrence of a particular molecular arrangement in a gas.
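The second measure is commonly expressed through Boltzmann's relation S = k_B ln W, where W is the number of molecular arrangements compatible with the observed macrostate. A minimal sketch, with microstate counts invented purely for illustration:

    import math

    K_B = 1.380649e-23  # Boltzmann constant, in joules per kelvin

    def boltzmann_entropy(microstates):
        # S = k_B * ln(W): entropy grows with the number of compatible arrangements.
        return K_B * math.log(microstates)

    # Hypothetical counts of molecular arrangements (illustrative values only):
    ordered = 1e3      # few compatible arrangements: low entropy ('order')
    disordered = 1e25  # many compatible arrangements: high entropy ('disorder')

    print(boltzmann_entropy(ordered))
    print(boltzmann_entropy(disordered))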
To recall that concept, let us use an example. Suppose we have an adiabatic envelope (a completely insulated chamber). We have a source of energy, say a lighter, inside that envelope, and the envelope itself is full of gas. If, by any means, we use the energy contained in our source to heat the gas (i.e. we just light the lighter), after some time we will end up in a state where the gas has a uniform temperature: every molecule will have absorbed (more or less) the same amount of energy. That will result in an unpredictable (and faster than before) movement of the molecules of the gas in every possible direction; no prediction about their movement can be made, and therefore no certainty is possible at the micro level. Our energy source will be exhausted, and so will our ability to probabilistically predict their next position. We usually call this situation chaotic. But at this point, keep in mind that there is no such thing as a perfectly isolated envelope (Popper, 1957, p. 151).
Without any further investigation, we can note some interesting aspects of our experiment:
(1) The procedure took time to complete. The dissipated energy increased and our energy source was exhausted. That is, the amount of available energy dropped to zero and the amount of entropy rose to its maximum. We can say that, as soon as the convection started, the gradual increase of entropy could be used as a timer, ticking off the moments until the end; inversely, we could use the decrease of energy in our energy source as a measure of time.
(2) During our experiment, we were able to predict the course of the molecules of the gas, with a certain degree of statistical certainty. The colder molecules were going down, and the warmer ones were going up, forming a current of hot gas. That was a work in progress, an intentional change. We conceive of work as the process that ensures intentional changes in a context: so, there was no work before the experiment, and there cannot be any work after the end of it. There was no certainty before we started using our energy source, and there is no certainty after it was exhausted.
(3) The amount of energy contained in our adiabatic envelope is constant; no energy is lost (because of the first law, the conservation of energy). But we no longer have a form of energy that we can use within that envelope to produce work (because of the second law). That is, whenever we talk of work, we refer implicitly to the available (i.e. useful or organized) energy. If we now try to collect the energy from the molecules back into our original source, that would mean the production of work, (1) and there is no energy (at least not in an appropriate form) to use in order to complete that task.
(4) Consequently, our original source of energy was in an appropriate form (that is, a form able to produce work).
So, we used our source until it was exhausted, and we ended up with a total inability to do anything else. Before our experiment, there was potential; during the experiment, there was statistical certainty; and at the end, we have concurrently total certainty (for we are sure there is nothing more we can do) and total uncertainty (as to the trajectories of the molecules of the gas). And we are stuck. We can make no decisions because there are no options to select from. We have reached a dead end.
But before we go on, what was it that we called a 'dead end'? Clearly, it is the state at which there is no potential, that is, no alternatives to select from. To put it differently, there are no distinctions to draw; the state (the final conditions) is given, and there is nothing we can do to select another. Up to this point, we can conclude that entropy in the domain of thermodynamics measures useless energy (and not a lack of energy) and indirectly reflects uncertainty, and also that maximum entropy signifies a complete inability to select a successive state.
ENTROPY IN THE COMMUNICATIONAL CONTEXT
Let us try now to examine the meaning of entropy in the context of information-exchanging systems. (2) Based on the work published by Shannon (1948), we wish to concentrate on the properties and characteristics of discrete channels. This preference arises from the fact that communication is triggered by 'a sequence of choices from a finite set of elementary symbols' (Shannon, 1948, p. 3), that is, a natural language, spoken or otherwise, (3) and, to put it more generally, a sequence of discrete selections or states, as is the case in the domain of cybernetics (Ashby, 1957). Also, following Luhmann (1986, 1995), we consider social systems to be communication systems, that is, systems that are constituted through communications (communicative selections), so examining entropy in the communicational context is more appropriate and, as we intend to prove, more plausible.
Shannon points out that each symbol in the sequence of a message depends on certain probabilities, varying according to the previous symbols already transmitted, that is, what he calls a 'residue of influence' (Shannon, 1948, p. 8). Therefore, he suggests that we could conceive of a discrete source as a stochastic process, a process where each successive selection depends on the previous ones. (4) Thus, entropy in Shannon's approach is defined as a measure of the uncertainty about the next symbol to appear in the message sequence, and therefore entropy in the communicational context refers to a generalization of Boltzmann's statistical entropy. Consequently, entropy refers to the variation of uncertainty during the transmission of a message; in Shannon's own words, 'Quantities of the form H = -Σ p_i log p_i ... play a central role in information theory as measures of information, choice and uncertainty' (Shannon, 1948, p. 11). It is of utmost importance for our analysis that Shannon refers to informational entropy as a measure of 'information, choice and uncertainty', for it is exactly the communicative selections (choices) of systems that produce and reproduce those specific concepts ('information, choice and uncertainty'). (5)
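A minimal sketch may make this concrete; the two-symbol source below is invented purely for illustration. When each symbol depends on the previous one, this 'residue of influence' lowers the uncertainty about the next symbol below the entropy of the bare symbol frequencies.

    import math

    def entropy(probs):
        # H = -sum(p * log2(p)) over a probability distribution, in bits.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Hypothetical two-symbol source with memory:
    # row i holds the probabilities of the next symbol when the previous one was i.
    transition = [[0.9, 0.1],
                  [0.2, 0.8]]

    # Long-run frequencies of the two symbols, obtained by iterating the transition probabilities.
    dist = [0.5, 0.5]
    for _ in range(1000):
        dist = [sum(dist[i] * transition[i][j] for i in range(2)) for j in range(2)]

    marginal = entropy(dist)                                               # uncertainty ignoring the previous symbol
    conditional = sum(dist[i] * entropy(transition[i]) for i in range(2))  # uncertainty given the previous symbol

    print(marginal)     # about 0.92 bits per symbol
    print(conditional)  # about 0.55 bits per symbol

The drop from the first figure to the second is the predictability that the previous symbols lend to the next one: uncertainty, and with it entropy, is lower for a receiver who takes that residue of influence into account.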
Shneider (2010, p. 3), following M. Tribus, suggests that uncertainty could also be called 'surprisal'. That is, if there is a set of M available symbols, (6) and, at a certain point of the message sequence, a symbol u with a probability of appearance p_i that approaches 0 eventually appears, the receiver will be surprised. That means that the receiver has expectations, which are constituted during and because of the communication and are defined (or bounded) by the receiver's conception of the communicational content and context (e.g. a language or, more generally, a culture). This leads eventually to a circular procedure: communication produces expectations, which in turn reproduce communication. And this recursive process stabilizes certain bilateral expectations that, so to speak, define an intersubjective space that makes communication possible (Luhmann, 1995). This is exactly what Shannon defines as 'redundancy'. Redundancy is defined as 'One minus the relative entropy' (Shannon, 1948, p. 14). But how can we conceive of the notion of redundancy? Shannon's definition is strictly mathematical. To illustrate the notions of entropy and redundancy, let us try a simple example. Suppose you toss a (supposedly) fair die. You can say 'I know that the outcome will be in the sample space {1, 2, 3, 4, 5, 6}, and additionally, I know that the possible outcomes are equiprobable with a probability equal to 1/6'. How can you know that? The answer is: because of prior experience, you know what a fair die is, you know what tossing is and there are no extra variables in your experiment, and therefore, your …