
 “message” that contains nothing but randomly generated characters? If we think of the message in terms of how “surprising” it is, the answer is obvious: a randomly generated string has maximally high Shannon entropy. That’s a problem if we’re to appeal to Shannon entropy to characterize complexity: we don’t want it to turn out that purely random messages are rated as even more complex than messages with dense, novel information content, but that’s precisely what a straight appeal to Shannon entropy would entail.
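Since the text doesn’t pause for the calculation, here is a minimal sketch of the point, assuming the standard character-level estimate $H = -\sum_i p_i \log_2 p_i$ over observed character frequencies; the function name and the two sample messages are illustrative choices added here, not anything from the dissertation.

```python
import math
import random
import string
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Empirical Shannon entropy of a string, in bits per character:
    H = -sum(p_i * log2(p_i)) over observed character frequencies."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

alphabet = string.ascii_lowercase

# A uniformly random message: its empirical entropy approaches the
# maximum of log2(26), roughly 4.70 bits per character.
random_msg = "".join(random.choice(alphabet) for _ in range(10_000))

# A patterned, low-surprise message: far fewer bits per character.
patterned_msg = "the cat sat on the mat " * 400

print(f"maximum possible:  {math.log2(len(alphabet)):.3f} bits/char")
print(f"random message:    {shannon_entropy(random_msg):.3f} bits/char")
print(f"patterned message: {shannon_entropy(patterned_msg):.3f} bits/char")
```

By this measure the random message wins handily, which is exactly the verdict the argument above says a measure of complexity should not deliver.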

Why not? What’s the problem with calling a purely random message more complex? To see the point, consider a more real-world example. If Shannon entropy is to work as a straightforward measure of complexity, there must be a tight correlation between changes in Shannon entropy and changes in complexity: an increase (or decrease) in one must go with an increase (or decrease) in the other. That is, complexity must be proportional to Shannon entropy; call this the correlation condition. I don’t think this condition is actually satisfied, though. Think, to begin, of the difference between my brain at some time t and my brain at some later time t₁. Even supposing that we can easily (and uncontroversially) find a way to represent the physical state of my brain as something like a message, it seems clear that we can construct a case in which measuring Shannon entropy gives no reliable guide to complexity.
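Before the case, it may help to put the correlation condition compactly. The notation is a gloss added here, not the dissertation’s: let $C(m)$ be the complexity of a message $m$ and $H(m)$ its Shannon entropy. The condition then requires

$$C(m) = k \cdot H(m) \quad \text{for some fixed } k > 0,$$

which entails that $\Delta C$ and $\Delta H$ always agree in sign. Here, then, is a case where they come apart.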

Suppose that at t, my brain is more or less as it is now: (mostly) functional, alive, and doing its job of regulating the rest of the systems in my body. Now, suppose that in the time
