Bad Thermodynamics: Information

A reader commented on my recent post about entropy and asked how information and entropy are related. Creationists sometimes argue that information can’t spontaneously increase because of the Second Law of Thermodynamics (they’re wrong), and then use that mistaken understanding to try to assert that evolution is impossible. I mentioned that issue briefly in my post, but left further elaboration for later. Well, now is later.

First off, I’m not a specialist in information theory, so I can’t address anything high-level when it comes to that discipline. Encryption and data compression and whatnot are not my bag, man.

In thermodynamics, information is created when systems become more chaotic. That sounds counter-intuitive at first blush, but it’s really not when you think about what is actually meant by information.

Assume for the sake of this problem – only this problem – that you could describe everything in the universe by four variables: temperature, x-position, y-position and z-position, where x, y and z are physical locations in three-dimensional space. Let’s assume a very simple universe: one filled with a perfect crystal lattice of neutrons, each one at a precisely uniform temperature of 100 billion billion Kelvins, repeated over and over. You could describe the location and temperature of every neutron in that universe with a single equation. You could write a concise expression describing every part of existence, where every place in the whole of reality is precisely mapped, in one line of code. There isn’t much information there.

Now imagine that pockets in that perfect crystal universe began to break apart. Inside each pocket the particles begin to wander around chaotically. Things get so bad that whole regions of the universe break free and begin to clunk about wildly, with gaping expanses of hot neutron mist separating them. If you were still interested in recording precisely where every neutron in that universe was located, you’d have to add a lot more lines to your ledger. You’d need to write down the xyz coordinates and temperature of every particle in each different pocket. Because the particles wander about randomly as they bounce off myriads of other neutrons and primal-crystal blocks, you’d have no way of describing all of them unless the individual positions of every neutron and crystal fragment were tallied. Our map of the universe would become a lot more complicated under those conditions. More information, simply by crumbling.

Take that model to its ultimate limit, where the entire universe is nothing but neutrons in chaos. Each particle wanders its own slow, random path through the cosmos, dissipating heat. Every neutron carries slightly different amounts of energy, at slightly different temperatures. Seen from a distance the universe would look like a cooling fog of neutron gas, describable according to the gas law, PV = nRT…. but not completely describable. The Cosmic Ledger isn’t interested in averages, or standard deviations, it only records the literal data on every part of existence.

To describe that ultimately chaotic universe completely, you’d have to record the position and temperature of each and every particle, individually. All neutrons, everywhere. In a perfectly entropic universe the Cosmic Ledger would be enormous, containing unimaginable oceans of raw data describing where everything was at every moment. A chaotic universe is an information-rich universe. Even as everything sails down over the long ages toward absolute zero, this remains true…. just with more zeros behind the decimal in the temperature value.
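To make that "short description versus long ledger" contrast concrete, here is a minimal sketch (my addition, not part of the original post) that uses a general-purpose compressor as a rough stand-in for description length: a perfectly repetitive "crystal" pattern compresses down to almost nothing, while random "gas" data barely compresses at all.

```python
import os
import zlib

N = 100_000  # number of "particles" (bytes) in this toy universe

# Ordered universe: the same value repeated everywhere, like a perfect lattice.
crystal = b"\x42" * N

# Chaotic universe: every "particle" gets an independent random value.
gas = os.urandom(N)

# Compressed size is a crude proxy for how long a complete description has to be.
print("crystal:", len(zlib.compress(crystal)), "bytes to describe")
print("gas:    ", len(zlib.compress(gas)), "bytes to describe")
# The repeating pattern collapses to a few hundred bytes; the random data stays near 100,000.
```

The compressor is playing the role of the Cosmic Ledger’s stenographer: the more disorder there is, the more it has to write down.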

The more chaotic a system is, the more information it carries… automatically. What most people mean by information isn’t raw data… they mean knowledge. Knowledge is correlated data, information assembled into some construct that is orderly but not so orderly and boring as a crystal. Knowledge arises when a system is neither perfectly ordered nor perfectly chaotic, where it is possible for a lot of little semi-ordered, busy lumps to spontaneously arise and do interesting things. Life is an expression of partially conserved order, where the surroundings are either more ordered and therefore simple and lifeless (e.g. rocks), or too chaotic to embed coded syntax (e.g. fire). Life rides that wave, balancing precariously.

On the journey from total order – as in the Big Bang, where the cosmos was one massively energetic, massively dense uberparticle – toward the eventual utterly (and literally) meaningless chaos of cosmic heat death, interesting things can occur spontaneously. We’re one of those things.

~ by Planetologist on February 11, 2009.

7 Responses to “Bad Thermodynamics: Information”

  1. What I’m especially interested in is the interaction between information coded into DNA or other such molecules, and entropy. I’ve run across mention of some sort of energy cost of accurate information, with comments associating it with entropy. IIRC this was in scholarly or peer-reviewed work, but my searches have been unable to recover the references.

    Could we say that the very high energy cost of accurate replication in DNA (and transcription of the identical sequence to RNA)* is a matter of not increasing information (entropy) by creating a new polymer with the same exact sequence as the old?

    I must say I don’t really understand what’s up with this concept (or even whether I’m applying it correctly). Perhaps you could elucidate?

    * 1-1.2 eV if my calculations are correct: a pyrophosphate is dropped for each additional monomer added. AFAIK the cost of linking a random monophosphate monomer to the chain should be something like 1/5 that.

    • The thing about genetic information that the creationists always miss is that any genetic code of an arbitrary length contains exactly the same amount of information as any other code sequence of the same length. The phrase GATTACA contains exactly the same quantity of information, in quaternary (four potential answers) code, as ACCTTGG, or GTCTAAA, etc.

      Each position on the chain can have a value of 0, 1, 2 or 3 (or if you prefer, 1, 2, 3, or 4). Each position has to contain one of those values, so the raw information density doesn’t change no matter what the sequence is. Gene duplication and frame shift repeats are pretty simple mistakes, and they can double, triple or increase the length of the genetic code by any arbitrary multiplier… essentially creating new information. It only tangentially matters that some sequences result in functional replicons, while others don’t. But specifically on the question of information density, it doesn’t matter what the letters are. In other words: useful, useless, advantageous or harmful mutations in the gene code all carry the same amount of information, and all cost the same amount of energy to read and execute. The only difference is that useful mutations lead to results that sustain replication, while other mutations don’t.
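      To illustrate that point with a quick sketch (my addition, not part of the original reply): the raw information capacity of a sequence depends only on its length and the size of the alphabet, at log2(4) = 2 bits per base, so GATTACA and GTCTAAA carry exactly the same number of bits, and a gene duplication simply doubles the count.

```python
import math

def raw_bits(seq, alphabet_size=4):
    """Raw information capacity: log2(alphabet_size) bits per position, regardless of sequence."""
    return len(seq) * math.log2(alphabet_size)

# Two different 7-base sequences, plus a duplicated copy (mimicking a gene duplication).
for seq in ("GATTACA", "GTCTAAA", "GATTACA" * 2):
    print(f"{seq}: {raw_bits(seq):.0f} bits")
# GATTACA: 14 bits, GTCTAAA: 14 bits, GATTACAGATTACA: 28 bits
```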

      • I’m afraid I didn’t phrase my question properly. I wasn’t so much looking at mutation (as such) but at the relative costs of accurate vs. random addition of a monomer (base) to the chain.

        But let’s consider a point mutation caused by a copying error: if the string ACCTTGG is copied accurately, we now have two strings of ACCTTGG. But if a point mutation creates a string ACCCTGG, we now have a string with ACCTTGG and a string with ACCCTGG. Doesn’t this represent more information? Similarly, if we start with a string of ACCTTGG, then add a string of ACCTTGG, we now have two strings much more different (than the first case, which only varies at one base). AFAIK any such error in replication represents an increase in Shannon Entropy. How does this relate to the sort of entropy involved in setting energy costs for replication, as opposed to creating random strings?

        If I understand correctly, a lower energy cost for replication should be associated with a higher Shannon entropy and therefore a higher error rate, as demonstrated by viral DNA Polymerases that work much faster, with higher error rates. I’m assuming that the higher speed causes a higher local concentration of pyrophosphate near the center of replication (because of diffusion delays finding i-pyrophosphatase molecules), which in turn means that there’s less energy available to drive the replication.

        I have to admit I’m just starting out trying to understand this stuff. To aid in it, I created my own blog post Entropy and Information, which currently just has my reading list (which I’m still at work at). Please feel free to add comments there if you don’t want to discuss here (I admit it’s a little off-topic).

      • But if a point mutation creates a string ACCCTGG, we now have a string with ACCTTGG and a string with ACCCTGG. Doesn’t this represent more information?

        It doesn’t, and the reason is that information can be different and still contain the same number of bits. It might be more useful to think of this in terms of binary code; the code sequences 010 and 101 have very different numerical values in binary language, but they are both three-digit phrases. The information content of a three-digit phrase is exactly three bits, no matter whether the phrase being coded is 111, 000, 100, 010, 001, etc. You might be confusing information content with functional meaning… in the example case the information code is always three bits, but when that code is then applied to a particular query, different results are obtained depending on the specific sequence of digits. A query might come from another program, itself just a nest of binary syntax… and the results of different answers might be dramatically different, but the information content of any three-digit sequence, or codon, is exactly the same as that of every other codon.

        Quaternary codes like that employed by DNA (possible returns at each coding site: 1, 2, 3, 4) work just like binary code (possible returns: 0, 1), in that way. Now, I’m not saying that the information describing the various effects of different codons is all equal. An explosion and a crystal have two different inherent quantities of entropy within them, but either result would always have less total energy than the reactants it took to cause it. Entropy is part of that energy balance, but only part. A product can have less entropy than its reactant, as long as something else gives… which is how a crystal forms by spontaneous self-ordering, by giving up some heat to its surroundings. Basically what I’m saying is that in any particular reaction the entropy gain/loss alone cannot tell you how favored or unfavored a reaction is under a given set of conditions. Entropy can be gained or lost during spontaneous chemical processes… which means entropy is not a progress indicator.
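        As a worked illustration of that "something else gives" point (my numbers, purely illustrative and not from the post): the Gibbs criterion ΔG = ΔH - TΔS can come out negative, meaning the process is spontaneous, even when ΔS is negative, provided enough heat is released.

```python
# Illustrative numbers only (not from the post): a crystallization-like step that
# loses entropy but releases heat to its surroundings.
delta_H = -50_000.0  # J/mol, enthalpy change (heat released, so negative)
delta_S = -100.0     # J/(mol*K), entropy change (product more ordered, so negative)
T = 298.0            # K

delta_G = delta_H - T * delta_S
print(f"Delta G = {delta_G:.0f} J/mol")  # -20200 J/mol: negative, so still spontaneous
# Entropy went down, yet the process is favored, because enough heat left the system.
```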

        If I understand correctly, a lower energy cost for replication should be associated with a higher Shannon entropy and therefore a higher error rate, as demonstrated by viral DNA Polymerases that work much faster, with higher error rates.

        No, energy cost does not necessarily correlate with error rate or any other phenotype. Replication at lower energy cost is always a selection pressure, under any and every set of conditions where life finds itself, anywhere in the universe. That selection pressure is built into the feedback loop of biological evolution… in some sense they’re just different sides of the same idea.

        Something that replicates can change what it’s doing and become either more, equally, or less successful at replicating. Changes that increase replication success are favored…. but remember that with all else being equal, such favorable mutations may or may not be less energetically expensive. You have to factor in all the external variables. A mutation creating a new protein that fails at its original job of, say, regulating copper in the cytoplasm, might result in a potent new defense toxin, which when dumped into the cell’s antigen coat results in horrific casualties among attacking phages. That mutation might cost more calories to maintain… in fact it might be bloody expensive as all get-out… but if it triggers a causal chain of events that leads to increased replication success for completely different reasons, it’s worth the cost…. by definition.

        Thanks for the link, I’ll definitely check out your blog. Actually, in the meantime I can recommend a very good book on thermodynamics for non-specialists. Greg Anderson wrote a wonderful text on how thermodynamics applies to the Earth sciences, called Thermodynamics of Natural Systems. I’ve used his book as a course text, and it’s probably the best book of its kind I’ve seen… intended for people with training in the physical sciences, but who aren’t already experts on the topic. Unfortunately the paperback version is out of print, but there are usually a few used copies floating around on Amazon.

      • I’ve just gotten Thermodynamics of Natural Systems, and read (so far) through chapter 8. A lot of this I’d already been through, although the parts on gas don’t strike a memory. They’ll be helpful, I hope: I’d like to have a better idea how to calculate the proportions of gases in an early (hydrogen-rich) atmosphere.

        The parts on solution thermodynamics seem less helpful, so far. The basic assumption here involves transition between (meta-)stable states, while what’s going on with life is far-from-equilibrium even in many of the reactions that are going forward.

        With living systems, AFAIK, there’s a much greater issue with kinetics, especially involving the relative rates of reactions, primarily controlled by catalyst concentration and effectiveness.

        On that note, I’ve just posted the latest on my blog, which has some thermodynamic aspects, which I’m pretty sure I got right, but if you feel like going over it, I’d appreciate hearing about any bloopers (or even anything that strikes a professional as bad).

        Anyway, thanks for the book reference.

      • OOPS! I meant my last string to read GTCTAAA, not ACCTTGG. Sorry.

  2. Depends on who you are speaking to about information – it’s always in context, in my not-so-humble opinion, and therefore relative to something. Even in black hole theory.

    Traveling through India in the entourage of a Tibetan Lama, we stopped for a night near a row of businesses. Tibetan monks speaking Hindi accompanied me to the store to pick up some things. I needed a bag and a few simple items.

    Nothing is really fixed-price there, especially if you are white and don’t speak the language, so for every item I would ask the monk to bring the price down – sometimes shopkeepers asked for more than the item would cost even in an expensive shop in the US.

    So where they wouldn’t come down, I said no, and calmly went on to the next thing in my list.

    The shopkeeper became very upset – in Hindi he let loose on the monk: “Why are you helping this woman? This white foreigner?” The monk just looked shyly away. “Translate for me,” I said, and argued that the monk’s and my business arrangement was beside the point – travelers are often just middle class at best, and he should not berate the translator. “How did you know what I said? You speak Hindi?!” came back the surprised response in Hindi. “Nay, no,” I said – “please translate this: our conversation is in CONTEXT, what the heck else would you be complaining about to this monk?”

    With that the shopkeeper stopped arguing; he gave me my items at a reasonable price, without the super jacked-up amount tacked on.
