Detecting Design in Biological Systems
William Dembski | Saturday, January 17, 2004 | Copyright © 2004, William Dembski
Edited transcript from a lecture given Friday, January 16, 2004, 7:30 p.m., in 194 Chemistry, University of California, Davis, as part of the Faith and Reason series sponsored by Grace Alive! and Grace Valley Christian Center.
Signs of Intelligence
My talk is entitled “Detecting Design in Biological Systems,” but I want to start with detecting design in a few non-biological systems. Let me start with an example of a non-human intelligence that some people hope might be detected. The movie Contact, with Jodie Foster, epitomizes the search for extra-terrestrial intelligence (SETI). What these SETI researchers are doing is looking for signs of intelligence from outer space.
Let us imagine that maybe we have watched one too many Star Trek episodes and expect our extra-terrestrials to communicate with us in English. In fact, we expect them to use not just English, but the American Standard Code for Information Interchange (ASCII). So we get this signal: “Methinks it is like a weasel,” a line from Shakespeare’s Hamlet, represented in ASCII code. Would this convince us that we are dealing with an extra-terrestrial intelligence? You might say, “Well, ‘Methinks it is like a weasel’ is 28 characters, about 224 bits, so there is an outside chance that this could happen by chance.” But what if it was the entire text of Hamlet? At some point we would say, “This is overwhelming evidence that an extra-terrestrial is communicating with us.”
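To make the bit counting concrete, here is a minimal Python sketch (my illustration, not part of the lecture) that encodes the phrase in 8-bit ASCII and computes the probability of hitting that exact bit string in a single uniform random draw:

```python
# Encode the phrase in 8-bit ASCII and count the bits.
phrase = "Methinks it is like a weasel"
bits = "".join(format(ord(ch), "08b") for ch in phrase)

n = len(bits)  # 28 characters x 8 bits = 224 bits
print(f"{n} bits")
print(f"probability by chance: 2^-{n} ~ {2.0 ** -n:.2e}")  # ~3.71e-68
```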
But what if instead we got just “a”, eight bits coming from outer space? Jodie Foster receives this message and says, “Let’s go contact the New York Times science editors! Aliens from distant space have mastered the indefinite article!” Would we say that? Probably not, because with millions of radio channels being monitored from outer space, if you are reducing everything to bit strings, encountering this sequence of eight bits would be very likely. It is one of only 256 possible sequences of eight zeros and ones, so you are very likely to see it.
Let us try to work with our intuitions. What is the difference between a sequence like “Methinks it is like a weasel” and one like “a”? Well, this “a” is not complicated enough to convince us that we are dealing with an extra-terrestrial intelligence. You see, all of this design inference, this detection of design, is based on circumstantial arguments. We do not have the smoking gun. We do not have the video camera running. We are not on Alpha Centauri or some distant planet where we can observe the alien transmitting. All we have is just these signals that are coming in. So what is it about these signals that will convince us that we are dealing with an intelligence?
We could say it is possible there is an alien transmitting this message who knows English. He transmits the letter “a” and is about to transmit some long, beautiful poem beginning with the indefinite article, but drops dead of a heart attack before he can get on to the remainder of the message. Then we could say that this was in fact an intelligent communication. But if this is all we receive, we will never know for certain.
We need something more than just a short sequence. There has to be some complexity there. What was it that persuaded the SETI researchers in the movie Contact that they were dealing with an extra-terrestrial intelligence? It was actually a long sequence of prime numbers, which are numbers divisible only by themselves and one. We can represent the prime numbers, 2, 3, 5, 7, 11, 13, etc., in unary form, with ones representing beats and zeros representing pauses. When they saw this in the movie Contact, they knew that in fact contact had been established with an extra-terrestrial intelligence. The key point of the movie is made when one radio astronomer says, “This isn’t noise; this has structure!”
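Here is a small Python sketch (again my illustration, not from the lecture) of what such a Contact-style signal might look like, with each prime sent as that many beats and a pause between primes:

```python
def is_prime(n: int) -> bool:
    """Trial division; enough for small numbers."""
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

primes = [n for n in range(2, 30) if is_prime(n)]  # 2, 3, 5, 7, 11, ...

# Unary encoding: runs of 1s (beats) separated by a 0 (pause).
signal = "0".join("1" * p for p in primes)
print(signal)  # 110111011111011111110...
```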
Intuitive Observation
This is the very way we interpret our own experiences to determine if something is designed. How do we determine if something is the result of natural, undirected forces (accident or chance) versus design, intelligence, purpose? How do we make the distinction between natural things and artificial things? For example, consider Stonehenge, on the one hand, and Arches National Park, on the other. How do we distinguish things that are the result of intelligence versus things that arise from natural forces?
On one level we know by intuition. Suppose you are driving through southwest South Dakota and you come across a rock formation (Mt. Rushmore). Would you say to your traveling companion, “Gee, isn’t it amazing what wind and erosion can do?” You probably would not say that, because there are some clear marks of intelligence there. In fact, in this case we know the causal history of how this formation came about. But what if we did not know the causal history? We would still have some idea of the causal forces that might be involved.
Consider the famous “Mars face.” Is it a face? If you were going around Mars in 1976 and you caught a glimpse of it at just the right angle and right distance, you would say, “Oh, here is something that looks like a face.” But you probably would not say this one is designed. What is the difference? Is there any way to get a handle on this rigorously?
Here is another example. If we take twenty-three Scrabble pieces, they can be randomly mixed up all higgledy-piggledy, or they can spell out something [“Methinks it is like a weasel”]. What is the difference? Well, there is a pattern in the latter case which is not evident in the first. But is there a way to distinguish the type of pattern, because, in a sense, anything that is conceivable has some sort of pattern? Are there patterns that point us to intelligence? Are there patterns that point us to things that are just a result of accidents or chance? These are the questions I want to start off with.
Three Criteria for Detecting Design:
What we are looking for is a criterion for detecting design. What is going to allow us to nail design down? There are three things that we are looking for: contingency, which is essential for choice; complexity or improbability; and specification, or an independent pattern. Let me say a little about each one.
1. Contingency
What do we mean by contingency? It is a philosophical term. Something has contingency if it is a live possibility, but not the only possibility; there are other live possibilities as well. For instance, if I flip a normal coin, it is a live possibility that it will land heads, but also a live possibility that it will land tails. So its landing heads is contingent. On the other hand, if I flip a double-headed coin, one with heads on both sides, its landing heads would be necessary.
Why is contingency used to detect the activity of an intelligence? The very notion of intelligence is the idea of choosing between. That is evident when you study the etymology. The word intelligence comes from the Latin preposition inter, meaning between, and the verb lego, meaning to choose; so it means choosing between. Intelligence chooses between options. If you do not have a choice, then you are just a brute, automatic process doing whatever has to happen. We would say there is no intelligence involved there; it is necessity that is operating. So intelligence presupposes being able to choose between live, competing options.
2. Complexity
We have already touched on this idea of complexity. We needed a long sequence of bit strings coming from outer space if we were going to detect an intelligence, because if it was short, it could happen by chance. I am using complexity here as a synonym, as it were, for improbability. Let me make that connection clearer. Say you have a combination lock. There are forty numbers on the dial, and you turn it in three directions, so there are 40 x 40 x 40, or sixty-four thousand, possibilities for opening the mechanism. Or, for a more complicated mechanism, say a bank vault with a hundred numbers on the dial that you turn in five directions: that is 100 x 100 x 100 x 100 x 100, or ten billion possibilities.
The bank vault is a more complex mechanism so, correspondingly, it is going to be more improbable, if you try just randomly spinning the dial, to open the bank vault as opposed to the combination lock. There is a probability that corresponds to it. Using uniform probability, there is a one in ten billion probability of opening the more complex mechanism, a one in sixty-four thousand probability of opening the simpler mechanism. Higher complexity corresponds to lower probability.
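The correspondence between complexity and probability can be written out directly; here is a quick illustrative sketch:

```python
# Complexity vs. probability for the two locks described above:
# more possibilities means a lower chance of opening on a random try.
locks = {
    "combination lock": 40 ** 3,   # 40 numbers, 3 turns -> 64,000
    "bank vault":       100 ** 5,  # 100 numbers, 5 turns -> 10,000,000,000
}

for name, n in locks.items():
    print(f"{name}: {n:,} possibilities, P(random opening) = {1 / n:.2e}")
```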
When I first started dealing with design inferences I focused more on improbability. Today I tend to use the word complexity, in part because there is an established vocabulary out there that uses it already.
Let me just explain why any method of design detection is going to have to be a probabilistic theory. It is because if you allow yourself an undisciplined or unrestricted use of chance, you can explain anything. One of my favorite movies is This Is Spinal Tap. It is a rock “mockumentary”; the group doesn’t actually exist. In one scene the band members are explaining why a long string of drummers have all perished under unusual circumstances. One drummer died by spontaneously combusting, and the band member says, “Well, you know, many people spontaneously combust each year.” This is all tongue-in-cheek, but the fact is, it is a physical possibility. It could happen right now that I could spontaneously combust in front of you. If all the fast-moving air molecules suddenly converged on me, I could just go up in a puff of smoke. There is a probability that this could happen, yet we do not take it seriously.
But if we allow ourselves unrestricted use of chance, we can explain anything. We do not need Darwin to explain the emergence of life or the subsequent development of life once an initial life form is here; we can just invoke chance.
In our scientific theorizing we want more than just brute chance, so we need to discipline our use of probabilities. That is why any theory of design or any method of design detection is ultimately going to have to be a probabilistic theory, to deal with the probabilities and not allow, if you will, a “chance-of-the-gaps.”
3. Specification
By specification I mean there has to be a patterning. Highly improbable things happen all the time. If you get out a coin and flip it a thousand times, you are going to witness a highly improbable event, but there will not be any pattern to point you to an intelligence. Just about anything that happens is highly improbable; so to conclude that something did not just happen by chance, it must also conform to a pattern.
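A quick illustration (mine, not the lecture’s) of how improbable any particular coin-flip sequence is:

```python
import random

# Any specific sequence of 1,000 fair flips has probability 2^-1000,
# yet one such astronomically improbable sequence occurs on every run.
# Improbability alone, without an independent pattern, is not design.
flips = "".join(random.choice("HT") for _ in range(1000))
print(flips[:40] + "...")
print(f"P(this exact sequence) = 2^-1000 ~ {2.0 ** -1000:.2e}")  # ~9.33e-302
```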
But not just any old pattern will do. Let me put up this slide; this is one of my favorites. [At first glance it appears to be random patches of gray splotches.] Now if you do not see the pattern you might say, “This is just a random inkblot in a complex configuration. There are many ways that this could have been arranged.” But if you see the pattern, then suddenly you realize that it is, in fact, not random, but a picture of a cow staring at you.
You have all seen pictures like this, where at first it looks like it is just something higgledy-piggledy. Then you see the pattern and suddenly it snaps into place and you know that it is not random. One interesting thing is that you will never un-see this again. I can turn this slide off, and when I turn it back on, you will always see the cow. This actually gives us a fundamental insight into randomness. Randomness is a provisional designation; it applies as long as we do not see a meaningful pattern. But once we do see a meaningful pattern, we will never un-see it again.
Meaningful Patterns
I want to elaborate on this a little, because when I talk about “meaningful patterns,” it brings up the whole issue of semantics. I am trying to approach this as a probabilistic mathematician, trying to understand what types of patterns we use in practice to eliminate chance and infer design, so I want to use something more precise. I will not go into detail, but the idea is that it has to be a pattern that is independently given. It is not something that we are just reading onto an event.
Let me give you an example of a type of pattern you do not want. Imagine you are an archer shooting at a wall fifty meters away. The wall is so large that you cannot help but hit it, but the arrowhead is very tiny. So for the point to land in any particular place is highly improbable, such that it could conceivably be landing by chance.
Picture two scenarios: In one you fix a target and shoot at it, and as you continue to shoot, you keep hitting the bull’s-eye. In the other scenario you shoot the arrow at a blank wall and every time the arrow hits, you get out your bucket of paint and draw a target around the arrow.
From these two very different scenarios you can draw two very different inferences. If the target is fixed, you draw a design inference. After a while it becomes a matter of bad faith to say, “Well, this was just beginner’s luck.” We certainly would not do that at the Olympics! On the other hand, with a moveable target, it might be that the person is a skilled archer, but you are not going to be able to tell, because this pattern, rather than being independently given, is read off of the event in question.
What, then, does it mean to be independently given? Well, that actually takes a fair amount of mathematical and logical work to lay out, so I am going to leave it just in those terms for now. But if it is not independently given, and rather is read off of the event in question, then you are not going to be able to draw a design inference.
So the pattern we are speaking about has to be a special type of pattern. Here I will distinguish specifications from fabrications. Fabrications are the moveable targets; specifications are the fixed targets. This example of targets that I am describing is actually pretty well known in the statistical literature, where it is known as “setting up a rejection region” in advance of an experiment. The rejection region is the bull’s-eye on the target. You hit the target, and then suddenly you say, “Well, it was not by chance that it happened.” This relates to the notion of Fisherian statistical significance testing.
But if you are going to apply these ideas in biology, there is a different sort of temporal ordering there: The event has happened, biology has emerged, life has evolved, and now we are coming after the fact, trying to see if there are patterns in these systems that would reliably point us to an intelligence. So there is a reversal of order. We do not have the neat case as in statistics where we set up our experiment, set up the pattern in advance, and then run the experiment and see if there is a hit or a miss.
But there does not have to be this temporal ordering of patterns set up to run the experiment. You can run the experiment and still find the pattern after the fact and be convinced that the outcome of the experiment was not the result of chance but of design. Let me give you just a simple example to illustrate that the temporal ordering is not crucial.
Consider the case of cryptography. You have an encrypted text that reads: nfuijolt ju jt mjlf b xfbtfm. You might look at this and say, “For all I know, this could just be a random sequence of letters.” Then you remember that Julius Caesar, in his Gallic Wars, used a certain cipher system in which he would just move the letters of the alphabet up or down a notch. So you ask, “What would happen if I moved each letter down one notch in the alphabet?” So N goes to M; F goes to E; U goes to T; and so on. Then you get: “Methinks it is like a weasel.”
Suddenly you see this connection, even though when you first looked at this you did not have any pattern going into it. For all you know it might have been random. But when you realize a very simple encryption scheme is involved, and you get the message out, you say, “Hey, this is not chance.”
What is very significant here is that the encryption scheme is simple. There is complexity of the outcome, considered probabilistically, but simplicity of the pattern. When those two elements combine, that is what actually allows this design inference to work. That simplicity of the pattern is crucial to specification. Because, if you just sort of arbitrarily decided on a decryption scheme, saying, “Well, if N is in the first position, I’ll assign M; if F is in the second position, I’ll assign E; if U is in the third position . . .” and so on, people would assume you just cooked it up. It is the fact that it is just, “Oh, you just move each letter of the alphabet down one notch,” that allows you to say this is not random.
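The decryption described above is a Caesar cipher with a shift of one, which takes only a few lines to implement; a minimal sketch:

```python
def shift_down(text: str, shift: int = 1) -> str:
    """Move each letter down `shift` places in the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return "".join(out)

print(shift_down("nfuijolt ju jt mjlf b xfbtfm"))
# -> methinks it is like a weasel
```

The simplicity of that scheme, one short rule rather than a letter-by-letter lookup table, is what keeps it from being a fabrication.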
Explanatory Filter
I have codified this whole criterion for detecting design in what I call an explanatory filter. You will find many hits on the Internet about this, from those who are fans of it and from others who think it is entirely misguided. Essentially, the idea is this: You start with an object or other structure that you are interested in understanding as a result of design or of natural forces. First you ask, “Is it contingent?” If not, there is only one live possibility, only one thing can happen, and you are talking necessity. Then you ask, “Is it complex?” If not, attribute it to chance, like those eight bits representing the letter “a” in the ASCII code. If it is contingent and complex you ask, “Is it specified?” If not, then it is just like flipping a coin a thousand times: highly complex or improbable in that sense, but still the result of chance. Finally, if it is contingent, complex and specified, then we end at design.
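Schematically, the filter is a three-question decision procedure. Here is a minimal sketch (my own rendering; the three predicates are placeholders whose rigorous definitions the lecture defers to the book):

```python
def explanatory_filter(contingent: bool, complex_: bool, specified: bool) -> str:
    """Route an event to necessity, chance, or design per the filter above."""
    if not contingent:
        return "necessity"  # only one live possibility
    if not complex_:
        return "chance"     # e.g., the eight-bit letter "a"
    if not specified:
        return "chance"     # e.g., a patternless run of 1,000 coin flips
    return "design"         # contingent, complex, and specified

print(explanatory_filter(True, True, True))   # design
print(explanatory_filter(True, False, True))  # chance
```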
That is the criterion. I have written this up in a book called The Design Inference. If you get on Google you will find about 5,000 hits about The Design Inference, referring specifically to this. What are people saying about it? Critics might want to say, “Dembski has been thoroughly discredited,” but that is not actually the case. In fact, there are a number of lines of criticism, and I have been dealing with these extensively in my correspondence and writings. But I just want to give you a sense, if you are a skeptic, that at least there is some vigorous discussion about it.
Let me just give you one example. This is from Paul Davies, a well-known physicist and probably one of the most prolific science writers in the world. In an interview with Larry Witham, Davies said, “Dembski’s attempt to quantify design or provide mathematical criteria for design is extremely useful. I am concerned that the suspicion of a hidden agenda is going to prevent that sort of work from receiving the recognition it deserves. Strictly speaking, you see, science should be judged purely on the science and not on the scientist.” I am not sure what that says about me, but it is a good point!
Specified Complexity
What, then, does this design filter identify? If I had to give you a buzz phrase, it would be “specified complexity.” That is the marker of intelligence or design. There is a dawning recognition that there really is something going on with specified complexity. Paul Davies wrote a book in 1999 that discussed the origin of life problem. He titled it The Fifth Miracle because in the book of Genesis, the fifth act of God is the creation of life. Davies writes: “Living organisms are mysterious, not for their complexity per se, but for their tightly specified complexity.” Davies does not define specified complexity in the precise mathematical, logical terms I use, but he is getting at the same notion.
In 1984, Thaxton, Bradley and Olsen, in their book, The Mystery of Life’s Origin, wrote, “Before the specified complexity of living systems began to be appreciated, it was thought that, given enough time, chance would explain the origin of living systems.”
Actually the first instance that I find of specified complexity in any literature is a statement by Leslie Orgel: “Living organisms are distinguished by their specified complexity. Crystals such as granite fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.”
We must ask one key question about this criterion: Is specified complexity a sufficient condition for detecting actual design? Specified complexity is looking at various mathematical, logical, empirical features of systems, but it is not looking at the causal story. It does not look at the history. There is no video camera running. This is circumstantial evidence. It is saying that there is this feature, specified complexity, which attaches to something. The question is, do all those things that have this feature share a certain causal story? Do things that are complex and specified fall in the category of all things that have this property of being actually designed, so there is a causal story behind these complex, specified things which includes an intelligent agent? That is the question.
This is what I am claiming: that the universe of “complex and specified” things fits in the box of “actually designed” things. The worry is that there might be something that is complex and specified but not actually designed. What, then, are the proposals for trying to determine which is the accurate reality?
Explaining Specified Complexity:
There are three options for explaining specified complexity:
1. Intelligent Design: the action of an intelligent agent. Intelligent agency is known to have the causal power to produce complex specificity or specified complexity. If the SETI research program is ultimately successful, it will be so because they have discovered specified complexity in some form or fashion.
2. The Darwinian mechanism: differential survival and reproduction of replicators; natural selection and random variation working together. That is the option I want to explore next.
3. Complex and unknown laws of self-organization: The key word here is “unknown.” Stuart Kauffman’s book, At Home in the Universe, which came out in the mid-nineties, is about the search for laws of complexity and self-organization. They are still searching. So far there are no concrete proposals. There are some diffuse gestures, such as, “We have these mathematical and computational simulations where self-organizational properties occur, and this applies, therefore, in the biological realm.” But it is very diffuse at this point, so it is not a real option.
The Darwinian Position
The real question is this: Can the Darwinian mechanism account for all this design work without a designer, without an actual intelligence?
The Darwinian view is this: “Biology is the study of complicated things that give the appearance of having been designed for a purpose.” That is from Richard Dawkins. You see the sentiment repeated throughout the biological literature. Francis Crick writes, “Biologists must constantly keep in mind that what they see is not designed, but, rather, evolved.” The idea is this: Our intuitions may be telling us that there is design, but they are misleading us! In fact, there is a naturalistic story, a Darwinian story, of natural selection and random variation working to produce these systems.
Stuart Kauffman, who is not a fan of this Darwinian view, nonetheless agrees that this is the majority position. He writes, “Biologists now tend to believe profoundly that natural selection (that is, the Darwinian mechanism) is the invisible hand that crafts well-wrought forms. It may be an overstatement to claim that biologists use selection as the sole source of order in biology, but not by much. If current biology has a central canon, you have now heard it.”
So natural selection and random variation are supposed to be the information ratchet that purchases the crucial biological information. You might say this is Darwin’s promissory note: that natural selection and random variation can explain the emergence of biological information, not just biological information in general, but the very structures of biological forms, including any instance of specified complexity that might be found in biological systems.
Bacterial Flagellum
Is that in fact the case? Can Darwinism do it? My friend and colleague, Michael Behe, a biochemist at Lehigh University, wrote a book in 1996 called Darwin’s Black Box. It is an incredibly successful book, now in its eighth year and still selling about 15,000 copies a year. Anybody who knows about the book business knows this is really quite remarkable. What he did was to take on this Darwinian challenge and ask: Are there systems that really are beyond the reach of the Darwinian mechanism?
One of the systems he looked at was the bacterial flagellum. It has become the mascot of the Intelligent Design movement, or, as one critic called it, “the icon of Intelligent Design.” The flagellum is a bidirectional, motor-driven propeller found on the backs of certain bacteria; E. coli is the most commonly studied one. It spins at about 17,000 rpm and can go up to 100,000 rpm. It can change direction in a quarter turn. So, say it is spinning at 20,000 rpm; in a quarter turn it is spinning 20,000 rpm in the other direction. Howard Berg at Harvard calls this “the most efficient machine in the universe.” It is liquid cooled, with an acid power drive. When you look at it under an electron micrograph, you can clearly see this is, indeed, a machine: the filament acts as the propeller; it has a hook, a drive shaft, various discs that mount it to the cell membrane, stators, and a motor.
Now, the question is: How did these things arise? What are the features of these things that should give Darwinists, people who want to dispense with design in the explanation of these systems, some pause? Well, first, they are multi-part, and the parts are functionally integrated. All the parts are there for a purpose. There is nothing that is dispensable. It all hangs together very tightly. They are non-simplifiable in the sense that you cannot really knock out a lot of components and simplify it. You are not going to be able to get this rotary, bidirectional motion if you drop out a lot of components from this system. You are going to need a propeller, you need a filament, you need something to hook it on, you are going to need a drive shaft, and there has to be a motor. Then there has to be something to mount this to the cell membrane. It is spinning awfully fast, so there has to be a tight bond. There are lots of parts that are needed.
No Hidden Structures
Another feature that argues for design is “no hidden structures.” It is significant that Behe calls his book Darwin’s Black Box. In Darwin’s day, people had no conception of biochemistry or really what was going on in the cell. A contemporary of Darwin, Ernst Haeckel, called the cell “a homogeneous globule of protoplasm.” It was basically thought of as a little Jello enclosed by a membrane.
But if that was all a cell is, there would be no problem with this thing arising by chance. In fact, the origin of life was not a problem in Darwin’s day. He did not really address it, except in one letter to Hooker, I believe. It was the subsequent development of life, the origin of species, that was the interesting question for Darwin.
Let us return to this idea of “no hidden structures.” When you get to the level of biochemistry, you have gotten to the nuts and bolts level of biology. Below that you have just got brute chemistry and physics working for you; there is no biology going on there. The point is, if the analysis takes place at this biochemical level, it is not as if there are little green men hiding under that level, building these things, or some other unknown natural forces that could account for them. You are dealing with the nuts and bolts on the biochemistry level, so you have to account for the organization there.
Evolutionary Theories:
Behe calls a system like this “irreducibly complex.” We have these systems, like that of the bacterial flagellum, all over the place in biology at the cellular level. How might evolutionary theory explain something like the bacterial flagellum? There are two main proposals: one that really does not go anywhere, and the other, which is the method of choice.
1. Scaffolding Approach
The scaffolding approach is also known as the Roman arch approach. How might you build a Roman arch? Suppose you have blocks that need to be built into an arch, but if you remove one piece, the whole thing comes tumbling down. What you might do is build a scaffold and put the pieces in place. Then the scaffold drops off, and now suddenly you have this structure that is at least functionally integrated. It ends up not really being irreducibly complex.
The problem with this approach is that you do not get the function that is selectable by natural selection for an irreducibly complex system until all the pieces are in place, plus the scaffold. So the scaffold is not really buying you anything in terms of natural selection.
2. Co-evolution
This actually brings us to the more serious proposal that is on the table. This is known as co-evolution and co-option; sometimes it is just called co-option. It is the idea that structures and functions co-evolve, with old structures being co-opted to serve new functions. Co-evolution can be described in terms of either direct or indirect Darwinian pathways, both of which I will use to illustrate the evolution of a mousetrap.
– Direct Approach
Picture a standard five-part snap mousetrap with a catch, platform, holding bar, hammer and spring. One way we might imagine the evolution of a mousetrap is by direct evolution. I am going to describe it with the mousetrap becoming simpler. Evolution is usually described as going in the opposite direction: the simple becoming more complex. But here I am going to start with the complex and work backwards.
How can we simplify a five-part mousetrap? First, we can remove the catch and then put a little kink in the hammer. It could still serve as a mousetrap, not as effective, but it might still catch mice. Then, if we get rid of the holding bar and poise the hammer precariously on the edge near the cheese, the thing might occasionally snap on the mouse. It may not kill it, but at least you can capture it and let it starve to death. But then you can get rid of the hammer entirely and just bend out the spring, and, finally, you can have just the little spring. (Note: An illustration of this is on John H. McDonald’s website, where he has a 3-D animation of the mousetrap’s step-by-step reduction.)
My point is this: There is an evolving structure whose function stays the same. The trap is always trying to catch mice; the structure is getting more and more simple. That is the direct approach.
– Indirect Approach
There is also an indirect approach. A direct approach is not going to work with an irreducibly complex system, because it is non-simplifiable. With a bacterial flagellum you need the filament, you need the drive shaft, you need the motor, and it has got to be mounted. All these things need to be in place, so a direct Darwinian pathway is not going to work with something like the bacterial flagellum. What we need, then, is an indirect Darwinian pathway.
This is what an indirect Darwinian pathway might look like: You start with the mousetrap base, but initially it just serves as a doorstop. As it evolves, that becomes its selectable function; natural selection can select it as a doorstop. Next you add a spring and a hammer to form a tie clip.
So now we have a trap with three parts. Finally we add the catch and the holding bar and we get a fully functional mousetrap. Notice what is happening. It is evolving, getting more complex as we move along, but the function is changing as well. Initially, it is a doorstop, then a tie clip, and, finally, a mousetrap. That is the only explanation the Darwinists have for the way these things can evolve. It is the counter-proposal to the challenge of Intelligent Design. If these systems evolved, this is how it supposedly happened-by an indirect Darwinian pathway. What makes it indirect is you are not just improving or enhancing a given function. The system is evolving, but the function keeps changing. That is how you arrive at these irreducibly complex systems.
Evolution from Subsystems
Now, as a sheer logical possibility, maybe co-evolution could happen, maybe not. How are we going to test it? That is really the issue. If you have a bacterial flagellum composed of forty protein parts, what is its precursor? How might this thing have evolved?
It turns out that there is a precursor, though we should not call it such. There is a subsystem, as it were, of the flagellum, known as a type III secretory system. A type of pump, it is the delivery system for Yersinia pestis, the bubonic plague bacterium, the pathogen responsible for killing a large number of people in Europe at various times. Embedded within this delivery system are proteins homologous to proteins in the bacterial flagellum.
The debate has proven very entertaining. The Intelligent Design people have said, “Here is this system, the bacterial flagellum. Prove that it is not intelligently designed.” The Darwinists have been trying to come up with proposals, but in eight years of debate, the only proposal they have put forth to explain the emergence of the bacterial flagellum is, “Here is the precursor.” They have not been able to come up with a real story of how we get from this delivery system to the bacterial flagellum.
Here is something even more interesting. The bacterial flagellum is probably about two billion years old, maybe older than that. It is a motility structure used to drive the bacterium through its watery environment. This [Y. pestis] is a poison delivery system from the time of the development of the metazoa, the multi-celled organisms. How long have they been around? About six hundred million years. So in evolutionary terms, the flagellum would have come first.
This is where the very best molecular evidence is going. The people in Dr. Milt Saier’s lab at UCSD are saying that the type III secretory system evolved from the flagellum, not the other way around. Yet, even if that happened, it does not really explain the evolution of these systems, if by evolution you mean you are trying to explain the complex in terms of the simpler. Here you are explaining the simple in terms of the complex.
This is the only proposal that is out there. How is the bacterial flagellum explained? “There is this type III secretory system embedded in it.” Most design systems have subsystems that serve some function! I am just surprised that this is taken as the end of the story. As long as there is some possible precursor, they do not have to tell the rest of the story. But how did the proteins change? What exactly happened? How did this turn into a motility structure? What about the thirty other parts that still need to be incorporated into this? These scientists see no need to fill in the details!
What is needed is a complete evolutionary path and not merely a possible oasis along the way. To claim otherwise is like saying that we can travel by foot from Los Angeles to Tokyo because we have discovered the Hawaiian Islands.
Evolution Has No Clue
The bottom line is that evolutionary biology does not have a clue how the flagellum emerged. Don’t just take my word for it. James Shapiro, a very prominent molecular biologist at the University of Chicago, reviewed Darwin’s Black Box in 1996 and wrote, “There are no detailed Darwinian accounts of the evolution of any fundamental biochemical or cellular system, only a variety of wishful speculations. It is remarkable that Darwinism is accepted as a satisfactory explanation for such a vast subject, evolution, with so little rigorous examination of how well its basic theses work in illuminating specific instances of biological adaptation or diversity.” Notice that phrase, “wishful speculations.”
Shapiro wrote to me a few years ago, saying, “I hear you are using this quote of mine and you are saying that I am an Intelligent Design person.” I said, “No, I make it explicit that you are not an Intelligent Design person.” So let me make it explicit here. James Shapiro does not subscribe at all to what I am proposing. But that is what he wrote, at least with regard to Darwinism.
In 2001, cell biologist Franklin Harold wrote The Way of the Cell, published by Oxford University Press. In it he writes, “There are presently no detailed Darwinian accounts of the evolution of any biochemical or cellular system, only a variety of wishful speculations.” Where have we seen that before? But Franklin Harold is not citing James Shapiro. Apparently this is such a self-evident truth that the same terminology is used. Quite remarkable.
In 2002, Lynn Margulis, a member of the National Academy of Sciences, wrote, “Like a sugary snack that temporarily satisfies our appetites but deprives us of more nutritious foods, neo-Darwinism sates intellectual curiosity with abstractions bereft of actual details, whether metabolic, biochemical, ecological or of natural history.”
Here is one of my favorite quotes, because David Griffin is an outsider to this whole debate. A philosopher of science and religion, he, like the others I quoted, is not a fan of Intelligent Design. But from his observations of the debate he says, “The response I’ve received from repeating Behe’s claim about the evolutionary literature [that there are no detailed Darwinian paths to these systems, only a variety of wishful speculations] is that I obviously have not read the right books. There are, I am assured, evolutionists who have described how the transitions in question could have occurred. When I ask in which books I can find these discussions, however, either I get no answer or else some titles that, upon examination, do not in fact contain the promised accounts. This is known as the ‘argument by obscure reference.’ That such accounts exist seems to be something that is widely known. But I have yet to encounter someone who knows where they exist.”
It is the kind of paper chase I see all the time. Ken Miller wrote a book in which he claims to have found four glittering examples of how you get complex systems to form by Darwinian means. But three of those papers do not even deal with irreducibly complex systems, and one is just a one-line throwaway that is not even dealing with the problem. There are no detailed Darwinian accounts. They are just not there.
Divide and Conquer
I want to tie this all together. I have spoken about specified complexity as a criterion for detecting design, and I have spoken about irreducible complexity as something that at least should raise concerns as an obstacle for Darwinism. What is the connection? Obviously, I would like to see one of these systems, such as the bacterial flagellum, plugged into that criterion and put into the explanatory filter, and receive the answer that the system is designed.
The research is just getting going in this area. But I want to at least give you a sense of how one can go about it. The specification question is not a problem. No biologist that I know would argue that they are not specified. They are specified in virtue of their function. The issue is, how can you explain these systems which seem on their face to be highly improbable, and would be improbable if you had to account for them in one fell swoop? Can they be broken down into a simpler sequence of steps where each step is in fact highly probable? This is the premise of Richard Dawkins’ Climbing Mount Improbable. At the top of Mount Improbable is a highly complex and evolved system. How can we get up there? We cannot do so in one swoop like Superman, but we might be able to find a serpentine path to go up.
This is the point of Darwinism: Divide and conquer. Why all this emphasis on a type III secretory system in explaining the bacterial flagellum, even though it does not stand any hope of explaining it? Because it is a simpler system, and you have to divide and conquer. That is how Darwinism works.
The Origination Inequality:
P_origin ≤ p_avail × p_synch × p_local × p_icr × p_ifc × p_ooa × p_config
There are some probabilistic hurdles involved in explaining these irreducibly complex systems by means of the Darwinian mechanism. I will show this by means of an inequality. Now let me just say a little bit about what these terms mean. P_origin represents the origination probability, the probability that you can get a system like the bacterial flagellum to arise by Darwinian means. It is bounded above by the product of many other probabilities. Let me try to break these down.
If you are going to build a bacterial flagellum, or if you are going to build a house-if you are going to originate any sort of structure, whether an irreducibly complex system or just a functionally integrated system-you need several things to happen. First, the parts have to be available; then, they have to be available at the right time. Then you have to be able to get them to the right location, to the construction site. You have to avoid interfering cross reactions, and there has to be interface compatibility. They must be assembled in a certain order, and the configuration must be correct.
Let us run through each of these in more detail:
1. Availability. Are the parts needed to evolve an irreducibly complex biochemical system like the bacterial flagellum even available? There are problems just getting the right proteins. Some of the recent work on extreme functional sensitivity of proteins indicates that new proteins do not emerge easily. It is not just a matter of duplicating a gene, letting a little evolution happen, and then suddenly you have some nice new functional protein. It is hard to get these things even to start off with. So there is a corresponding probability, the availability probability, which is the probability that the types of parts needed to evolve a given irreducibly complex biochemical system become available.
2. Synchronization. Are these parts available at the right time so they can be incorporated when needed into the evolving structure? This corresponds to a synchronization probability-the probability that these parts become available at the right time.
3. Localization. Even with the parts available at the right time for inclusion in an evolving system, can the parts break free of the systems in which they are currently integrated and be made available at the construction site of the evolving system? Things do not just randomly happen in a cell. These evolutionary arguments are based on the concept of redeployment. They are saying the parts are there, and they are now going to be redeployed, and that is how the evolution is going to take place. So, for instance, in the evolution of the flagellum, it would have to borrow simpler parts from elsewhere. But how do those parts get to the appropriate construction site? Right now they are being targeted elsewhere, and there is all sorts of gene regulation going on, saying, “Go over here; do not go over there.” So there is a corresponding localization probability connected with that.
4. Interfering cross reactions. Given that the right parts can be brought together at the right time and in the right place, how can the wrong parts that would otherwise gum up the works be excluded from the construction site? For example, if you are building a house, you have to get the right pieces to the right site, but you also have to keep out things like mines, nuclear waste and other garbage. Insofar as localization works, you are getting the right pieces there, but you are also inviting the wrong pieces there. So there is, then, the interfering cross reactions probability.
5. Interface compatibility. This one is actually the toughest. Are the parts that are being recruited for inclusion in an evolving system mutually compatible in the sense of meshing or interfacing tightly, so that once suitably positioned, the parts work together to form a functioning system? Evolution plus natural selection is not like building a Toyota or Honda, where there are common conventions and design features so that a bolt used in one car will also work in another. Darwinian evolution is an opportunist. It just hunts and grabs things. Common conventions that would help design are going to be absent.
The Darwinian mechanism is an instant gratification mechanism that says, “Is this useful for me right now?” It is not thinking, “You know, I would like to evolve an eye maybe in about a million years, so let me start setting aside these proteins now.” It does not work that way. It is based on immediate gratification. How, then, do you get the interfaces? Even if you can get the right sort of generic protein to work, how is it going to interface properly? This is an area of research that is just now beginning. You can do it with computer simulations or with straight biological research, but all the results so far indicate that the probabilities are very small.
6. Order of assembly. Now you have to start putting the parts together in the right order. Just because a subsystem initially gets put together in a certain order, that does not mean when you put the whole system together, the subsystem will still be in the proper order. There is an order of assembly probability that needs to be overcome.
7. Configuration. When you have the right pieces assembled in the right order, can they be configured properly? Say you are building a wall. You can put all the bricks in place in the right order, but if they are all skewed and not configured correctly, the wall will fall over. So configuration is an issue. In biology, though, the configuration probability does not turn out to be that small, because it is effectively absorbed into the interface compatibility and order of assembly probabilities: these are self-assembling structures, and electrostatic forces lock the systems into position. Whereas the configuration probability is significant when you are talking about the design of things like houses, it is not so important when you are talking about self-assembly.
There you have the origination inequality. It is an inequality because you can still think of adding more terms. For example, you could add a retention probability: can you retain the right parts at the construction site long enough?
It turns out these probabilities do multiply, because each is conditional upon the preceding one. Start with the probability of availability-do you get the right parts? Given that they are available, are they available at the right time? Given that they are available at the right time, can they be localized? Given all these things, can we avoid interfering cross reactions? Given all these things, can we ensure interface compatibility? Given all these things, can we get the right order of assembly? We can go on and on.
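To see why a single small factor is decisive, here is an illustrative Python sketch. The numbers are placeholders I made up for demonstration, not measured probabilities; the point is only the arithmetic: since every factor lies between 0 and 1, the product can never exceed its smallest factor.

```python
import math

# Placeholder values for illustration only; not measured probabilities.
factors = {
    "availability":              1e-4,
    "synchronization":           1e-2,
    "localization":              1e-2,
    "no interfering reactions":  1e-1,
    "interface compatibility":   1e-6,
    "order of assembly":         1e-3,
    "configuration":             1.0,   # absorbed into other terms (see above)
}

bound = math.prod(factors.values())
print(f"upper bound on P_origin: {bound:.1e}")                  # 1.0e-18
print(f"<= smallest factor:      {min(factors.values()):.1e}")  # 1.0e-06
```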
The Drake Equation
The Drake equation: N = N* × fp × ne × fl × fi × fc × fL
N* represents the number of stars in the Milky Way Galaxy
fp is the fraction of stars that have planets around them
ne is the number of planets per star that are capable of sustaining life
fl is the fraction of planets in ne where life evolves
fi is the fraction of fl where intelligent life evolves
fc is the fraction of fi that communicate
fL is the fraction of the planet’s lifetime during which the communicating civilizations live
When all of these variables are multiplied together, we come up with:
N, the number of communicating civilizations in the galaxy.
I want to contrast this origination inequality with the Drake equation, just to tie it back to our topic. It may seem like I am switching gears here, but it is going to be a useful point to end on. The Drake equation comes up in SETI research. This is how people determine that they have the grounds for believing they are going to find signs of extraterrestrial intelligence.
What does the Drake equation mean? N is the number of technologically advanced civilizations in the Milky Way capable of communicating with earth. If that number is large, then we stand a good chance of getting signals that will convince us of extraterrestrial intelligences. In that case, a movie like Contact would be vindicated. What is going to make that equation work? First, the number of stars in the Milky Way galaxy has to be large; then, the fraction of stars that have planetary systems needs to be large. Next, the average number of planets per star capable of supporting life needs to be large, and the fraction of planets where life evolves needs to be large. Next, the fraction of those planets, in turn, where intelligent life evolves needs to be large, and the fraction of planets with civilizations that invent advanced communications technology also needs to be considerable. Finally, the fraction of the planetary lifetime during which communicating civilizations exist needs to be significant. If communicating civilizations only have a fifty-year window before they annihilate themselves with nuclear weapons, we will probably never hear from extra-terrestrials.
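As a sketch, here is the Drake equation in Python with sample values. The inputs are guesses of my own choosing, which is exactly the problem discussed next: change any one guess and N swings wildly.

```python
def drake(N_star, f_p, n_e, f_l, f_i, f_c, f_L):
    """N = N* x fp x ne x fl x fi x fc x fL; every factor is a guess."""
    return N_star * f_p * n_e * f_l * f_i * f_c * f_L

# One set of guesses (4e11 stars is a commonly cited rough figure):
print(drake(4e11, 0.5, 2, 0.1, 0.01, 0.01, 1e-4))  # -> ~400
# Shrink a single guess (f_i) and the answer collapses:
print(drake(4e11, 0.5, 2, 0.1, 1e-9, 0.01, 1e-4))  # -> ~4e-05
```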
In the Drake equation, all these variables need to be large. If any one is too small, N will be too small, and SETI is unlikely to succeed. Now, why am I stressing this? Well, two reasons. One, I want to get a dig in at SETI. Michael Crichton gave a wonderful Caltech Michelin Lecture in 2003 in which he said, “The only way to work the equation is to fill it in with guesses. And guesses, just so we’re clear, are merely expressions of prejudice. Nor can there be ‘informed guesses.’ If you need to state how many planets with life choose to communicate, there is simply no way to make an informed guess. It is simply prejudice. As a result, the Drake equation can have any value from billions and billions to zero. An expression that can mean anything means nothing. I take the hard view that science involves the creation of testable hypotheses. The Drake equation cannot be tested. There is not a single shred of evidence for any other life forms, and in forty years of searching, none has been discovered.” That is a hard word.
What is the other reason? Look at the Drake equation, and then look at the origination inequality. These are probabilities. Probabilities are bounded above by one. Probabilities are strictly numbers between zero and one. If any one of these probabilities is small, the origination probability is going to be small. So for this thing to be testable, only one of these variables needs to be calculable. In fact, you just need to give an upper bound on it that is small enough.
This is significant because with the Drake equation everything needs to be large. You have to be able to evaluate everything. If there is some hole there (for example, if you do not know the frequency of intelligent life, given that life evolved), then you do not have any basis for thinking that N is either large or small. It can be anything.
With the origination inequality, it is enough for one of these to be small. I am betting that interface compatibility is going to be the downfall of Darwinism because, insofar as these have been calculated or estimated, all the signs say that we are dealing with something very small. But I think that availability and interface compatibility are where you are going to see a lot of action in the coming years. I already know some of the research that is going on there.
If you want to know more about the Drake equation, then look at The Privileged Planet, a forthcoming book by some good friends and colleagues of mine. My first stab at this origination inequality appears in my book, No Free Lunch.
Explaining Specified Complexity
The biological community, by and large those committed to Darwinism, would like us to take the following postures. First, they want us to hunch our shoulders and say, “I can’t find Darwinian pathways to complex specified biological systems. I guess I’m just too dumb or lazy.” Richard Dawkins has said that he will not debate Michael Behe because Behe should just get back into the lab and try to figure out how these irreducibly complex systems evolved by Darwinian means. He does not seem to appreciate that if there are in-principle reasons why these things should not have evolved by Darwinian means, there is no purpose in going into the lab and trying to do that.
Second, they would like us to say, “I can’t find Darwinian pathways to complex specified biological systems, but neither can anyone else.” That certainly is the case, as the quotes I gave you by Lynn Margulis and others indicate.
Third, they want us to say, “I can’t find Darwinian pathways to complex specified biological systems because no such pathways exist, and I can prove it.” I think we are going to be there soon. We will not be able to “prove” it in a strict mathematical deductive sense, but we will be able to prove it in a very scientifically rigorous sense. I would say we are going to get there in the next three to five years.
Finally, let me make four conclusions:
- Specified complexity is a reliable empirical marker of actual design.
- The best evidence suggests that irreducible complexity in biology is a special case of specified complexity.
- The Darwinian mechanism gives no indication of being able to resolve the problem of irreducible complexity by indirect Darwinian pathways.
- Irreducible complexity is exhibited in actual biological systems.