ID Metrics and an Active Information Tutorial

One of my favorite aspects of ID is that it is producing useful tools for biologists. ID is often misconceived as a conclusion about whether or not X was designed. Instead, ID presupposes only the *possibility* that something was designed, and that intelligent agents are not mechanistic. On that basis, several metrics have been developed.

The first metric that I am aware of is CSI (Complex Specified Information). The method for measuring it was originally developed by Dembski in The Design Inference. The main problem with CSI is the difficulty of actually taking the measurements it requires.

The second metric (well, metric probably isn’t quite the right word; it’s a qualitative measure) is Irreducible Complexity, as described in Darwin’s Black Box. As originally proposed by Behe, Irreducible Complexity is fully testable, and it has been successfully tested by Minnich and Meyer in the lab. While Irreducible Complexity as proposed by Behe only argues against Darwinism conceptually, further theoretical work shows more specifically, on computational grounds, why Irreducible Complexity argues for intelligence, and points to practical uses of ID in biological research.

The third metric, however, is my favorite. It is a simpler conception, yet very powerful, and it is based directly on the No Free Lunch theorems: “Active Information”. Active Information is essentially a measurement of how much information a search algorithm has about the pattern of the search space it is searching. It is measured by comparing the performance of the search algorithm against a blind search. The paper describing it is here. The concept has been further applied to measure the amount of active information used by the immune system during somatic hypermutation (about 22 bits), and additional research is ongoing to apply it more generally to cells in hypermutable states.
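
Concretely, in the Dembski/Marks framework, if p is the probability that a blind (uniform random) search succeeds within a given query budget, and q is the probability that an assisted search succeeds within the same budget, then the active information is log2(q/p) bits. Below is a minimal Monte Carlo sketch of my own (an illustration, not the Dembski/Marks code); the bit-string target, the Hamming-distance oracle, and all the constants are assumptions chosen just for the demo:

    import math
    import random

    N_BITS = 12        # search space: all 2**12 bit strings
    QUERIES = 30       # query budget per search
    TRIALS = 20_000    # Monte Carlo repetitions

    def blind_success(target):
        """One blind search: QUERIES uniform random guesses at the target."""
        return any(random.getrandbits(N_BITS) == target for _ in range(QUERIES))

    def assisted_success(target):
        """One assisted search: a one-bit-flip hill climber guided by a
        Hamming-distance oracle; the oracle is the source of active information."""
        current = random.getrandbits(N_BITS)
        dist = lambda x: bin(x ^ target).count("1")
        for _ in range(QUERIES):
            if current == target:
                return True
            candidate = current ^ (1 << random.randrange(N_BITS))  # flip one bit
            if dist(candidate) < dist(current):
                current = candidate
        return current == target

    target = random.getrandbits(N_BITS)
    p = sum(blind_success(target) for _ in range(TRIALS)) / TRIALS
    q = sum(assisted_success(target) for _ in range(TRIALS)) / TRIALS
    print(f"p (blind)    ~ {p:.4f}")
    print(f"q (assisted) ~ {q:.4f}")
    if p > 0:
        print(f"active information ~ {math.log2(q / p):.2f} bits")

With these settings the hill climber succeeds far more often than the blind search, and the log-ratio quantifies how much the distance oracle “knew” about the space.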

Active Information has huge potential in biology to help detect which processes have front-loaded information, and how much information the cell is actually supplying for mutational processes. Below, Robert Marks gives a *great* lecture on information generally, and ends the lecture specifically talking about Active Information in evolutionary systems.

39 Responses to ID Metrics and an Active Information Tutorial

  1. By the way, if I’ve missed any metrics, let me know.

  2. Also, just to note, the measurement of Active Information for the immune system’s somatic hypermutation is actually a lowball estimate. By the actual method of Dembski/Marks, every possible transformation of the genome (no matter how many nucleotides) would have to be considered. The 22-bit figure is based on the natural biasing of the search towards low-energy transformations (i.e. changing a few nucleotides is intrinsically more likely than a wholesale swap of the entire genome).
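
    For a rough sense of where a figure like 22 bits can come from, here is a toy back-of-the-envelope of my own (the genome size, target size, and targeting rate below are made-up illustration numbers, not the values from the actual analysis): if an undirected point mutation would land in the relevant target region with probability p, while the hypermutation machinery concentrates a fraction q of its mutations there, the active information is log2(q/p).

        import math

        # Toy illustration numbers only -- not the figures from the paper.
        genome_len = 3_000_000_000    # ~3 Gb genome
        target_len = 600              # hypothetical hypermutation target region
        p = target_len / genome_len   # hit probability if mutation were undirected
        q = 0.90                      # assumed fraction of mutations on target
        print(f"{math.log2(q / p):.1f} bits")  # ~22.1 bits with these toy numbers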

  3. johnnyb,
    I think you may have missed a few, actually! Here’s the list I have:

    dFSCI, Digital Functionally Specified Complex Information;
    EF, Explanatory Filter;
    FAI, Functional Algorithmic Information;
    Fits, Functional Bits;
    FSC, Functional Sequence Complexity;
    FSCI, Functionally Specified Complex Information;
    GSP, Genetic Selection Principle;
    ID, Intelligent Design;
    IC, Irreducible Complexity;
    PI, Prescriptive Information;
    UPB, Universal Probability Bound;
    UPM, Universal Plausibility Metric;
    FIIRDS, Functional incredibly improbable random digital strings;
    a.k.a. FSCO/I, Functionally Specific, Complex Organisation and associated Information.

    • Peter -

      Those are terms; not all of them are metrics, and not all of them are from ID. UPB/UPM are related to each other, but neither originated with ID. EF is a conceptual explanation of CSI, not a metric in and of itself. ID is not a metric. I am not sure about the other ones. I have seen Fits, but am not sure whether its application or origination was directly ID or not.

  4. johnnyb,

    While Irreducible Complexity as proposed by Behe only argues against Darwinism conceptually, further theoretical work shows more specifically, on computational grounds, why Irreducible Complexity argues for intelligence, and points to practical uses of ID in biological research.

    I noticed that the link embedded in that paragraph goes to http://www.creationbiology.org

    And on that site I read this:

    However, we find overwhelming evidence for phylogenetic discontinuity among major groups and, thus, discard the prevailing assumption that all living things are related in a great “Tree of Life”. As a result of this departure from conventional science, we have developed additional concepts and methods to meet our needs in studying discontinuity and describing the history of created kinds.

    Do you honestly think that, by denying the basic foundational principle of common descent, you can really find practical uses of ID in biological research, when you have started out rejecting what everybody else accepts?

    And these “created kinds” – how long ago were they created? Millions of years ago? A few thousand?

    When will the first such scientific practical use of ID in biological research happen? Years? Decades?

    Can you name a single fact that ID/Creationism has discovered that Darwinism has not? If not, are not claims of its potential utility in research premature?

    • Peter:

      Can you name a single fact that ID/Creationism has discovered that Darwinism has not?

      I can: The scientific evidence overwhelmingly supports the conclusion that the best and only viable explanation for the existence and variety of life on earth is that it was designed by an intelligent agent or agents. I think that fact is massively important, don’t you?

      And by the way, ID and Creationism are two different animals. The former bases its conclusions entirely on the results of scientific methodology, whereas the latter bases its conclusions on scripture.

      • 4.1.1

        Bruce,

        The scientific evidence overwhelmingly supports the conclusion that the best and only viable explanation for the existence and variety of life on earth is that it was designed by an intelligent agent or agents.

        Then why do the vast majority of scientists (i.e. the people closest to the evidence) of all religions and faiths and none at all, all over the world, disagree with you?

        So, as you claim to have “scientific evidence” for the existence of life from an ID perspective, could you please share that with me?

        • 4.1.1.1

          Peter:

          Then why do the vast majority of scientists (i.e. the people closest to the evidence) of all religions and faiths and none at all, all over the world, disagree with you?

          This is really the weakest of all the arguments put forward against ID. Anyone with passing familiarity with the history of science knows that the fact that a theory is endorsed by the majority of scientists is hardly a guarantee that it is correct.

          So, as you claim to have “scientific evidence” for the existence of life from an ID perspective, could you please share that with me?

          The following is a partial list, but will get you started:
          Evolution: A Theory in Crisis by Michael Denton
          Darwin’s Black Box and The Edge of Evolution by Michael Behe
          Signature in the Cell by Stephen Meyer
          The Design Inference and No Free Lunch by William Dembski
          Genetic Entropy and the Mystery of the Genome by J.C. Sanford
          Douglas Axe and Ann Gauger’s papers, available at their Web site.

        • The short answer – if, like Broadway Danny Rose, I may interject – Pierre, is that they are terrified of a paradigm-change which would make them look as foolish as the stridency with which they have proclaimed non-believers to be irrational half-wits. I believe psychologists call it, ‘projection’.

          There was an article on here recently, I believe, in which it was recounted that a student questioned her professor on an aspect of ‘evamolution’, and he ‘went ballistic’. Imagine if he had addressed her question with integrity, at face value.

          If he had yielded to the merit of her question, it would have opened a whole Pandora’s Box for him, because one truth would have led to another – and where would his career have gone from there, eh?

          Those truths would have been festering away subliminally, but were, necessarily, fiercely repressed. If he had not felt threatened, then, instead of exploding with rancour, he would have addressed her as I am addressing you, and patiently explained to her, perhaps with a slightly pained demeanour: “My dear child…” Scripture tells us we must bear one another’s burdens, so a little kindness, however patronising in tone, goes a long way – as I’m sure you would agree.

    • Peter -

      That actually wasn’t what I was referring to. It amazes me how many people argue vehemently against ID without reading the technical papers. If you had bothered to actually read the paper, you would have found that you quoted from section 3.3, but skipped over 3.1, 3.2, and 3.4, which have applications to non-creationary aspects of biology, irrespective of the truth or falsity of common descent. Did you intentionally skip over those?

      So, in 3.1, I discuss the following:

      Pallen and Matzke (2006) argue for the exaptational origin of the flagellum. As we’ve shown, just the FleQ/FleN pathway makes the evolution of this system solely by natural selection unlikely. However, that does not completely nullify the argument of exaptation. Because we lack total knowledge, this system is an RIC system. However, as discussed in section 2.6, this leaves open a few possibilities for its evolution. If it is evolvable, then it means that the traversed sequence space has been somehow regularized. An analogous (though not functionally homologous) way of looking at the possible evolution is to compare it to the V(D)J recombination system in which specific gene regions, designated as either variable (V), diversity (D), or joining (J) regions of the immune system, are randomly selected and assembled. In the V(D)J recombination system, the formation of immunoglobulin genes is facilitated by recombination signal sequences (RSSs), which mark segments of functionality. These, in turn, are then assembled in a regularized way, and the whole process resembles a computer metaprogram—a program which generates other programs (Bartlett 2006). These pathways are not deterministic, but information is the main driving force in their generation. The RSSs provide the information within the genome to guide the recombination towards likely functional paths. The FleQ/FleN pathways (and others) could be evolved through an analogous system which put together pieces of functionality based on templates. Rigoutsos et al. (2006) have claimed to have found gene sequences that match such a description. Whatever the exact mechanism, the RIC concept indicates that although an unguided evolution of the flagellum by exaptation is unlikely, it would be possible if the evolution was regularized in some way.

      The point being that if an RIC system (similar to IC, but see the paper for the definition) is found to be evolvable, then we have direct evidence of mutational machinery at play. Therefore, by identifying evolvable RIC systems, we can use RIC to detect higher-order levels of evolution occurring.
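
      As a toy sketch of the ‘metaprogram’ idea above (the segment counts are hypothetical, just order-of-magnitude for an immunoglobulin locus): because RSS-style markers restrict recombination to segment boundaries, every outcome of the process is a well-formed V-D-J combination, so the searched space collapses to a few thousand structured possibilities rather than astronomically many raw sequences.

          import random

          # Hypothetical segment pools; real loci have on the order of tens of
          # segments each, with boundaries marked by recombination signal
          # sequences (RSSs).
          V = [f"V{i}" for i in range(40)]
          D = [f"D{i}" for i in range(25)]
          J = [f"J{i}" for i in range(6)]

          def recombine():
              """Regularized search: pick one segment from each marked pool and
              join them, so every outcome is a well-formed V-D-J gene."""
              return "-".join(random.choice(pool) for pool in (V, D, J))

          print(recombine())                                   # e.g. 'V17-D3-J5'
          print(len(V) * len(D) * len(J), "well-formed combinations")  # 6000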

      • 4.2.1

        Pallen and Matzke (2006) argue for the exaptational origin of the flagellum. As we’ve shown, just the FleQ/FleN pathway makes the evolution of this system solely by natural selection unlikely.

        How so? The FleQ/FleN pathway isn’t even universally required in known flagellar systems. You write in the paper:

        An example of a relatively irreducibly complex mechanism, then, would be the control of the flagellar assembly in the bacterium Pseudomonas aeruginosa, which uses a multilevel control system to regulate the formation of the flagellum. FleQ is a transcription factor that regulates a number of other genes used in flagellar assembly. One of the downstream products of the assembly is FleN. FleN interacts with FleQ to deactivate it, preventing multiflagellation (Dasgupta et al. 2003). The regulation of FleQ is done downstream of FleQ itself, making a step-at-a-time evolution of the pathway extremely difficult.

        This doesn’t work either. E.g.:

        Starting point: System with just FleQ. Bacterium produces multiple flagella, and not even an exact number of them, just like many bacteria living today.

        Next step: Selection for fewer flagella (for efficiency or whatever) selects mutation(s) on FleQ and one of the downstream proteins (proto-FleN) such that they interact a little more strongly. First FleQ is weakly downregulated, producing slightly fewer flagella. Later it is more strongly downregulated.

        “Just not so” stories are pretty worthless when the author makes no effort whatsoever to look up relevant data or to consider the plausibility of alternative contentions. Unfortunately, this is almost universally all that creationists/IDists do — throw out some half-baked argument that “seems obvious” to them without doing anything like the serious research and thinking required. Then they get mad when they don’t get taken seriously by real biologists.

        • Nick Matzke:

          Starting point: System with just FleQ. Bacterium produces multiple flagella, and not even an exact number of them, just like many bacteria living today.

          Next step: Selection for fewer flagella (for efficiency or whatever) selects mutation(s) on FleQ and one of the downstream proteins (proto-FleN) such that they interact a little more strongly. First FleQ is weakly downregulated, producing slightly fewer flagella. Later it is more strongly downregulated.

          So in order to explain the origin of one IC config you rely on another? Talk about a totally uncooked argument. No thinking required when you start out with the very stuff you need to explain in the first place.

          How about starting with a flagella-less system…

        • Nick –

          Don’t forget that FleN is regulated by FleQ. For this to work, you need coordinated mutations: the receptors for FleN must come under FleQ’s regulation, and FleN must also acquire the capacity to regulate FleQ.

          Now, my point was NOT that it couldn’t evolve. In fact, the whole point of the paper was this – what would it mean if it did! The point was that half-baked, biologically unrealistic stories like yours are intrinsically unlikely. You didn’t even bother to give a realistic story. It isn’t science if you say “what if things were different and they magically changed incrementally”. What did “proto-FleN” do? What happened to that function when it changed? That isn’t an argument, or even an alternative, it’s just B.S. It’s like if every time I looked at a thermometer, I thought, “man, I wonder what kind of non-thermal force might have been at work to produce the temperature reading”. Is it true that it is possible for non-thermal processes to interfere with thermometers? Sure. But if you invoke them for every temperature reading, you’re not doing science, you’re doing make-believe.

          What is needed is for people to think about the informational requirements for change, and how those informational requirements could be met, and then to experiment to see to what extent that information actually exists in the cell.

          You should take a look at books like Caporale’s “The Implicit Genome” which covers a lot of topics such as this.

          • 4.2.1.2.1

            What did “proto-FleN” do?

            It could have been any of a large number of downstream genes producing some accessory flagellar protein (or even some non-flagellar protein that gets transposed into the flagellar operon).

            What happened to that function when it changed?

            You’ve never heard of gene duplication, I guess. You don’t have an objection, you have “I personally haven’t looked into this *at all*, but I’m going to declare the downfall of Darwinian evolution based on it nevertheless.”

            That isn’t an argument, or even an alternative, it’s just B.S.

            You’re the one whose “argument” relies on the nonexistence of gene duplication, the nonexistence of functional shifts, and the nonexistence of successful bacteria (a) without this system and (b) which happily produce multiple flagella.

          • 4.2.1.2.2

            Don’t forget that FleN is regulated by FleQ.

            FleQ just turns up expression of the operon containing FleN, along with operons containing 20+ other early flagellum genes (mostly the basal body genes homologous to nonflagellar T3SS). FleN then downregulates FleQ.

            This isn’t how regulation works even in somewhat closely-related flagellated bacteria, like the standard model systems in E. coli and Salmonella.

            Knocking out FleN in Pseudomonas doesn’t even knock out motility. So even on Behe’s IC argument, you don’t have a case.

            The FleQ homolog in Bacillus subtilis, when knocked out, produced no obvious unusual motility phenotype.

            You need to explain why it would be so amazingly hard to evolve such a system from a standard multiflagellated ancestor, and to have any hope at all your argument has to not contradict well-known facts.

          • Nick -

            You have again misread me, and, likewise, the whole ID movement. I never said that gene duplication is impossible. My point is that when it occurs, there are enough issues that *orchestration* is required for it to succeed. You have already presupposed 3 incredibly unlikely pieces to explain one previously unlikely event. You are making the problem worse, not better. You should look into recent research regarding mutation theory – more and more we are seeing that mutations – whether gene duplications, SNPs, or other fun, are actually orchestrated by existing information. And the goal of my paper is simple – to show the kinds of evolutionary events which appear to presuppose orchestration.

            Imagine, for instance, if you told me that a blind person drove from one side of town to the other, by themselves. I’m not going to believe you. “But,” you would argue, “if they turned left on Seventh, right on Bristleblock, and then did a hard left on Mistletoe Street, they’d get there just fine.” That may well be true, but it is beside the point. If the driver is blind, they probably aren’t going to do that.

            On the other hand, if the driver had a navigation computer which said “turn left now”, “turn right now”, then it is possible for them to navigate. The point of the paper is to show which types of systems are likely to require organization to evolve.

            You keep on arguing as if ID says that these are totally unevolvable. I didn’t say that. Behe didn’t say that. Dembski didn’t say that. What we said is that information/orchestration is required.

            One interesting read is this paper by Zhang and Saier. In it, they describe an orchestrated mutational event. However, you should note that it took (I think) 13 separate experiments to determine that the mutation was part of a regulated system rather than being haphazard. It would not have taken any experiment for them to have declared (quite authoritatively) that the mutational event was haphazard and fortuitous. It is not currently required in biology to prove one’s position when one says that a mutation was fortuitous, but an enormous amount of data is required to prove that a mutation was orchestrated. As such, I imagine there are many more mutations which are part of orchestrated systems than are currently noted. The point of my paper was to present the scenarios likely to need orchestration to occur.

          • 4.2.1.2.4

            You should look into recent research regarding mutation theory – more and more we are seeing that mutations – whether gene duplications, SNPs, or other fun, are actually orchestrated by existing information. And the goal of my paper is simple – to show the kinds of evolutionary events which appear to presuppose orchestration.

            Claims of adaptive mutation, directed mutation, etc. have been made again and again, for decades. They almost always get a splash in the press, and then don’t pan out, and turn out to be due to something the scientists missed, usually a selective step the researchers didn’t detect, or an experimental problem, or contamination. That’s why most scientists are skeptical of most of it.

            You cite one interesting study, but it looks like a very special case, not any sort of evidence for a general mechanism. Whereas random mutation and natural selection is a very general mechanism.

            You keep on arguing as if ID says that these are totally unevolvable. I didn’t say that. Behe didn’t say that. Dembski didn’t say that. What we said is that information/orchestration is required.

            Which is the same thing as saying that normal evolution as scientists understand it, i.e. evolution by natural processes, doesn’t work. Which is what you were arguing, except that your arguments didn’t work because they are contradicted by known facts. None of the events I described (gene duplication, mutation to increase binding, selection for regulation for efficiency) is difficult or rare, and the mutation events didn’t even have to happen together.

            To seriously examine the question, we’d have to get a survey of all the related systems, phylogenies of FleN, FleQ, and relatives, etc., and then see how plausible my scenario is, vs. your assertions about why a gradual, natural evolutionary pathway wouldn’t work. I’ve already pointed out some basic problems with your claims based on the little we know now. If you’re going to claim that natural evolution is impossible/really ridiculously improbable on the basis of the FleQ/FleN system, you’d better do better than that, and frankly you should at the very least do the above minimal research work for your readers.

            Too bad the peer-reviewers at that creationist journal didn’t insist on this. Any real science journal would.

          • Claims that accumulations of random mutations actually construct multi-protein machines have been made again and again, but they never pan out.

            That is why scientists are skeptical of it.

            And Nick, if you testified in a court case involving ID and tried to use gene duplication, you would get laughed out of the courtroom.

          • Nick -

            Again, you have completely misunderstood ID. Perhaps if you stopped hating ID for just a few minutes – long enough to figure out what it says – you might at least see what we are saying.

            For instance, you made the following false claims about my position:

            “Which is the same thing as saying that normal evolution as scientists understand it, i.e. evolution by natural processes, doesn’t work.”

            Actually, my proposed idea (and that of Behe) ENTIRELY consists of natural processes. In what sense is a mechanism a non-natural process? Are computers non-natural processes? I am simply positing that there was an existing information source (much like a hard drive on a computer) which held information to assist the changes, and giving criteria for when we should be looking for such information. What part of that is non-natural? I have other ideas for non-natural processes (you can come to the Engineering and Metaphysics conference this summer to see them if you are interested), but they were not discussed in this paper at all, and, in fact, aren’t even about evolution.

            “None of the events I described (gene duplication, mutation to increase binding, selection for regulation for efficiency) is difficult or rare”

            I didn’t claim otherwise.

            “and the mutation events didn’t even have to happen together.”

            *Pieces* of the mutation events don’t have to happen together, assuming that you already have proto-FleN. So, if you assume half of your argument, then, amazingly, you are halfway there ;) The fact, though, is that for them to work well together, there is likely a need for some amount of coordination.

            In addition, you are very wrong about the directed mutation controversy. In fact, some of the major players who were *against* directed mutations have switched sides (such as Rosenberg). The problem is that they put an evidential requirement on the hypothesis that was just silly. For instance, in the Lac+ mutation, it was assumed that the Lac gene wasn’t targeted because they found mutations in other genes. However, most of the genes they found mutations in were other genes for metabolizing sugars! So it seemed that, while the targeting was not 100% accurate in hitting the right mutation on the right gene, the cell had heuristics about which genes were likely to contain the correct hit.

            Another issue is that most of the selection experiments involved lethal selection. It is true that, in the face of *lethal* selection, only the cells in which the mutation pre-existed survived. However, many of the researchers noted that after a while (several days after one would normally perform a fluctuation test), additional colonies would start to acquire the mutation in a way which did *not* follow the Luria-Delbrück pattern. These were apparently organisms which did not get a lethal dosage the first time, and therefore could trigger a mutational response.

            Likewise, a clear example of directed mutation is in the adaptive immune system, where the mutation machinery skips over 99% of the genome to land mutations in the correct *half* of the correct gene. The mutations are targeted by an upstream non-coding sequence which points to where the mutations should go (if you move the sequence, you move the location of the mutations). As you can see, this process is directed. In addition, the selection is artificial, rather than natural. That is, the cells that die aren’t the ones that lack sufficient metabolism to keep going; they are the ones which don’t perform the proper function. Therefore, the selection is not about *survivability*, but *aptness* – or, to put it another way, whether they match the teleology of the organism.

            So, I wouldn’t be so quick to put the nail in the coffin of the directed mutation hypothesis. If you allow for semi-directed mutations (after all, *no one* claimed that the cells were omniscient), then all of a sudden the tests that are used to show that the mutations aren’t directed don’t seem so worthwhile. That’s why I advocate using Active Information as a metric – it gives an actual value to the amount of directedness that any mutation has, given the present selection pressures.
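
            In that spirit, here is a minimal sketch of my own (a hypothetical helper, not a published formula beyond the basic log2(q/p) definition of active information) showing how one could put a number on the directedness of an observed mutation spectrum:

                import math

                def directedness_bits(hits_in_target, total_mutations, target_fraction):
                    """Active information of an observed mutation spectrum relative
                    to a uniform (undirected) null model; target_fraction is the
                    share of the genome occupied by the target region."""
                    q = hits_in_target / total_mutations  # observed targeting rate
                    p = target_fraction                   # rate expected if undirected
                    return math.log2(q / p)

                # e.g. 95 of 100 mutations inside a region covering 1% of the genome:
                print(f"{directedness_bits(95, 100, 0.01):.1f} bits")  # ~6.6 bits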

          • Actually, it may have been Foster that switched sides (or maybe both!). My memory is faded at the moment.

    • Peter,

      Without wasting bandwidth, I suggest you simply demonstrate the emergence of control, functionality and symbolic information processing in nature by chance and necessity alone. OK?

    • “Can you name a single fact that ID/Creationism has discovered that Darwinism has not? If not, are not claims of its potential utility in research premature?”

      That’s easy – genetics. It was discovered by Mendel and used as an anti-evolutionary argument (read the conclusion of Mendel’s paper if you don’t believe me). I think that the Price equation is similar, though its originator was a little more coy about its origin in theology. If you want to see the relevance of Creation in the history of biology, you should check out my article on the subject:

      The Doctrine of Creation and the Making of Modern Biology

  5. Very good video!!! As to your question, I don’t think there is a ‘metric’ for it (YET??). But ‘quantum’ information has recently been found in the cell. Moreover, quantum information offers a rigorous falsification of neo-Darwinism from physics itself:

    Falsification Of Neo-Darwinism by Quantum Entanglement/Information

    Neo-Darwinian evolution purports to explain all the wondrously amazing complexity of life on earth by reference solely to chance and necessity processes acting on energy and matter (i.e. purely material processes). In fact, neo-Darwinian evolution makes the grand materialistic claim that the staggering levels of unmatched complex functional information we find in life, and even the ‘essence of life’ itself, simply ‘emerged’ from purely material processes. And even though this basic scientific point – the ability of purely material processes to generate even trivial levels of complex functional information – has spectacularly failed to be established, we now have a much greater proof than this stunning failure of validation, one that ‘puts the lie’ to the grand claims of neo-Darwinian evolution. This proof comes from the fact that it is now shown from quantum mechanics that ‘information’ is its own unique ‘physical’ entity – a physical entity that is shown to be completely independent of any energy-matter space-time constraints, i.e. it does not ‘emerge’ from a material basis. Moreover, this ‘transcendent information’ is shown to be dominant over energy-matter, in that this ‘information’ is shown to be the entity that is in fact constraining the energy-matter processes of the cell to be so far out of thermodynamic equilibrium.

    First, the falsification of local realism (reductive materialism). Here is a clip of a talk in which Alain Aspect discusses the failure of ‘local realism’ – the failure of reductive materialism – to explain reality:

    The Failure Of Local Realism – Reductive Materialism – Alain Aspect – video
    http://www.metacafe.com/w/4744145

    The falsification for local realism (reductive materialism) was recently greatly strengthened:

    ‘Quantum Magic’ Without Any ‘Spooky Action at a Distance’ – June 2011
    Excerpt: A team of researchers led by Anton Zeilinger at the University of Vienna and the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences used a system which does not allow for entanglement, and still found results which cannot be interpreted classically.
    http://www.sciencedaily.com/re.....111942.htm

    Falsification of Local Realism without using Quantum Entanglement – Anton Zeilinger – video
    http://vimeo.com/34168474

    Physicists close two loopholes while violating local realism – November 2010
    Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview.
    http://www.physorg.com/news/20.....alism.html

    Quantum Measurements: Common Sense Is Not Enough, Physicists Show – July 2009
    Excerpt: scientists have now proven comprehensively in an experiment for the first time that the experimentally observed phenomena cannot be described by non-contextual models with hidden variables.
    http://www.sciencedaily.com/re.....142824.htm

    Of note: hidden variables were postulated to remove the need for ‘spooky’ forces, as Einstein termed them — forces that act instantaneously at great distances, thereby breaking the most cherished rule of relativity theory, that nothing can travel faster than the speed of light. The following video illustrates just how ‘spooky’, to use Einstein’s infamous word, this quantum action truly is:

    Light and Quantum Entanglement Reflect Some Characteristics Of God – video
    http://www.metacafe.com/watch/4102182/

    And yet, this ‘spooky’ quantum entanglement, which rigorously falsified local realism (reductive materialism) as the ‘true’ description of reality, is now found in molecular biology on a massive scale!

    Quantum Information/Entanglement In DNA & Protein Folding – short video
    http://www.metacafe.com/watch/5936605/

    Quantum entanglement holds together life’s blueprint – 2010
    Excerpt: When the researchers analysed the DNA without its helical structure, they found that the electron clouds were not entangled. But when they incorporated DNA’s helical structure into the model, they saw that the electron clouds of each base pair became entangled with those of its neighbours (arxiv.org/abs/1006.4053v1). “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford.
    http://neshealthblog.wordpress.....blueprint/

    The relevance of continuous variable entanglement in DNA – July 2010
    Excerpt: We consider a chain of harmonic oscillators with dipole-dipole interaction between nearest neighbours resulting in a van der Waals type bonding. The binding energies between entangled and classically correlated states are compared. We apply our model to DNA. By comparing our model with numerical simulations we conclude that entanglement may play a crucial role in explaining the stability of the DNA double helix.
    http://arxiv.org/abs/1006.4053v1

    • Quantum Entanglement/Information is confirmed in DNA by direct observation here;

      DNA Can Discern Between Two Quantum States, Research Shows – June 2011
      Excerpt: — DNA — can discern between quantum states known as spin. – The researchers fabricated self-assembling, single layers of DNA attached to a gold substrate. They then exposed the DNA to mixed groups of electrons with both directions of spin. Indeed, the team’s results surpassed expectations: The biological molecules reacted strongly with the electrons carrying one of those spins, and hardly at all with the others. The longer the molecule, the more efficient it was at choosing electrons with the desired spin, while single strands and damaged bits of DNA did not exhibit this property.
      http://www.sciencedaily.com/re.....104014.htm

      Coherent Intrachain energy migration at room temperature – Elisabetta Collini & Gregory Scholes – University of Toronto – Science, 323, (2009), pp. 369-73
      Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state.
      http://www.scimednet.org/quant.....d-protein/

      Quantum states in proteins and protein assemblies:
      The essence of life? – STUART HAMEROFF, JACK TUSZYNSKI
      Excerpt: It is, in fact, the hydrophobic effect and attractions among non-polar hydrophobic groups by van der Waals forces which drive protein folding. Although the confluence of hydrophobic side groups are small, roughly 1/30 to 1/250 of protein volumes, they exert enormous influence in the regulation of protein dynamics and function. Several hydrophobic pockets may work cooperatively in a single protein (Figure 2, Left). Hydrophobic pockets may be considered the “brain” or nervous system of each protein.,,, Proteins, lipids and nucleic acids are composed of constituent molecules which have both non-polar and polar regions on opposite ends. In an aqueous medium the non-polar regions of any of these components will join together to form hydrophobic regions where quantum forces reign.
      http://www.tony5m17h.net/SHJTQprotein.pdf

      Myosin Coherence
      Excerpt: Quantum physics and molecular biology are two disciplines that have evolved relatively independently. However, recently a wealth of evidence has demonstrated the importance of quantum mechanics for biological systems and thus a new field of quantum biology is emerging. Living systems have mastered the making and breaking of chemical bonds, which are quantum mechanical phenomena. Absorbance of frequency specific radiation (e.g. photosynthesis and vision), conversion of chemical energy into mechanical motion (e.g. ATP cleavage) and single electron transfers through biological polymers (e.g. DNA or proteins) are all quantum mechanical effects.
      http://www.energetic-medicine......Page1.html

      The necessity of ‘transcendent’ information, to ‘constrain’ a cell, against thermodynamic effects is noted here:

      Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH
      Excerpt: This paper highlights the distinctive and non-material nature of information and its relationship with matter, energy and natural forces. It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.
      http://journals.witpress.com/paperinfo.asp?pid=420

      i.e. It is very interesting to note, to put it mildly, that quantum entanglement, which conclusively demonstrates that ‘information’ in its pure ‘quantum form’ is completely transcendent of any time and space constraints, should be found in molecular biology on such a massive scale, for how can the quantum entanglement ‘effect’ in biology possibly be explained by a material (matter/energy space/time) ’cause’ when the quantum entanglement ‘effect’ falsified material particles as its own ‘causation’ in the first place? (A. Aspect) Appealing to the probability of various configurations of material particles, as neo-Darwinism does, simply will not help since a timeless/spaceless cause must be supplied which is beyond the capacity of the energy/matter particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints one is forced to appeal to a cause that is itself not limited to time and space! i.e. Put more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments of various ‘specified’ configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply since the cause is not within the material particles in the first place!
      ,,,To refute this falsification of neo-Darwinism, one must overturn Alain Aspect, and company’s, falsification of local realism (reductive materialism) !

      And to dovetail into Dembski and Marks’s previous work on Conservation of Information;,,,

      LIFE’S CONSERVATION LAW: Why Darwinian Evolution Cannot Create Biological Information
      William A. Dembski and Robert J. Marks II
      http://evoinfo.org/publication.....ation-law/

      ,,,Encoded ‘classical’ information such as what Dembski and Marks demonstrated the conservation of, and such as what we find encoded in computer programs, and yes, as we find encoded in DNA, is found to be a subset of ‘transcendent’ (beyond space and time) quantum entanglement/information by the following method:,,,

      ,,,The following research provides solid falsification of the late Rolf Landauer’s decades-old contention that the information encoded in a computer is merely physical (merely ‘emergent’ from a material basis), since he believed it always required energy to erase it;

      Quantum knowledge cools computers: New understanding of entropy – June 2011
      Excerpt: No heat, even a cooling effect;
      In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.”
      http://www.sciencedaily.com/re.....134300.htm

      ,,,And to dot the i’s, and cross the t’s, here is the empirical confirmation that quantum information is in fact ‘conserved’;,,,

      Quantum no-hiding theorem experimentally confirmed for first time
      Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment.
      http://www.physorg.com/news/20.....tally.html

      Further note:

      Three subsets of sequence complexity and their relevance to biopolymeric information – Abel, Trevors
      Excerpt: Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC (Functional Sequence Complexity). FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC of OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).,,,
      Testable hypotheses about FSC
      What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses:

      Null hypothesis #1
      Stochastic ensembles of physical units cannot program algorithmic/cybernetic function.

      Null hypothesis #2
      Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function.

      Null hypothesis #3
      Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function.

      Null hypothesis #4
      Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time.

      We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified.
      http://www.tbiomed.com/content/2/1/29

      • The following describes how quantum entanglement is related to functional information:

        Quantum Entanglement and Information
        Excerpt: A pair of quantum systems in an entangled state can be used as a quantum information channel to perform computational and cryptographic tasks that are impossible for classical systems.
        http://plato.stanford.edu/entries/qt-entangle/

        Anton Zeilinger, a leading researcher in quantum mechanics, relates how quantum entanglement is related to quantum teleportation in the following video;

        Quantum Entanglement and Teleportation – Anton Zeilinger – video
        http://www.metacafe.com/watch/5705317/

        A bit more detail on how teleportation is actually achieved, by extension of quantum entanglement principles, is here:

        Quantum Teleportation
        Excerpt: To perform the teleportation, Alice and Bob must have a classical communication channel and must also share quantum entanglement — in the protocol we employ*, each possesses one half of a two-particle entangled state.
        http://www.cco.caltech.edu/~qoptics/teleport.html

        And quantum teleportation has now shown that atoms, which are supposed to be the basis from which ALL functional information ‘emerges’ in the atheistic neo-Darwinian view of life, are in fact reducible to the transcendent functional quantum information that the atoms were supposed to be the basis of in the first place!

        Ions have been teleported successfully for the first time by two independent research groups
        Excerpt: In fact, copying isn’t quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable – it is enforced by the laws of quantum mechanics, which stipulate that you can’t ‘clone’ a quantum state. In principle, however, the ‘copy’ can be indistinguishable from the original (that was destroyed)…
        http://www.rsc.org/chemistrywo.....ammeup.asp

        Atom takes a quantum leap – 2009
        Excerpt: Ytterbium ions have been ‘teleported’ over a distance of a metre…
        “What you’re moving is information, not the actual atoms,” says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second.
        http://www.freerepublic.com/fo.....1769/posts

        Thus the burning question, which the neo-Darwinists I have asked in the past have usually ignored completely, is: how can quantum information/entanglement possibly ‘emerge’ from any material basis of atoms in DNA, or any other atoms, when these teleportation experiments show that entire atoms reduce to transcendent quantum information in the first place? In other words, it is simply impossible for the ’cause’ of transcendent functional quantum information, such as we find on a massive scale in DNA and proteins, to reside within, or ever ‘emerge’ from, any material basis of particles! Despite the virtual wall of silence I have seen from neo-Darwinists thus far, this is not a trivial matter in the least as far as developments in science are concerned.

        Does Quantum Biology Support A Quantum Soul? – Stuart Hameroff – video (notes in description)
        http://vimeo.com/29895068

        Here is a clear example of ‘quantum computation’ in the cell:

        Quantum Dots Spotlight DNA-Repair Proteins in Motion – March 2010
        Excerpt: “How this system works is an important unanswered question in this field,” he said. “It has to be able to identify very small mistakes in a 3-dimensional morass of gene strands. It’s akin to spotting potholes on every street all over the country and getting them fixed before the next rush hour.” Dr. Bennett Van Houten – of note: A bacterium has about 40 team members on its pothole crew. That allows its entire genome to be scanned for errors in 20 minutes, the typical doubling time… These smart machines can apparently also interact with other damage control teams if they cannot fix the problem on the spot.
        http://www.sciencedaily.com/re.....123522.htm

        Of note: DNA repair machines ‘fixing every pothole in America before the next rush hour’ is analogous to the traveling salesman problem, an NP-hard (read: very hard) problem in computer science. The problem involves finding the shortest possible route between cities, visiting each city only once; traveling salesman problems are notorious for keeping supercomputers busy for days.

        NP-hard problem
        http://en.wikipedia.org/wiki/NP-hard
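
        To make the comparison concrete, here is a minimal brute-force sketch (a toy illustration with invented distances, not anyone’s published method). Exact TSP must consider (n−1)!/2 distinct tours, which is why the problem explodes so quickly:

import math
from itertools import permutations

# Toy brute-force TSP solver: tries every possible ordering of the cities.
def shortest_tour(dist):
    """dist[i][j] = distance between city i and city j (symmetric matrix)."""
    n = len(dist)
    best = None
    for perm in permutations(range(1, n)):            # fix city 0 as the start
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if best is None or length < best[0]:
            best = (length, tour)
    return best

# A 4-city instance is instant:
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(shortest_tour(dist))                            # -> (18, (0, 1, 3, 2))

# But the number of distinct tours grows factorially -- the NP-hard sting:
for n in (5, 10, 15, 20):
    print(n, "cities ->", math.factorial(n - 1) // 2, "distinct tours")

        At 20 cities that is already roughly 6 × 10^16 tours; the genome-wide repair problem described above dwarfs this.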

        Since there is obviously no material CPU (central processing unit) in the DNA or the cell busily computing answers to this monster logistics problem in a purely ‘material’ fashion by crunching bits, it is readily apparent that this ‘traveling salesman problem’ for DNA repair is somehow being solved by ‘non-local’ quantum computation within the cell and/or within DNA;

        verses and music:

        John 1:1-3
        In the beginning was the Word, and the Word was with God, and the Word was God. He was with God in the beginning. Through him all things were made; without him nothing was made that has been made.

        1 Corinthians 2:14
        The natural person does not accept the things of the Spirit of God, for they are folly to him, and he is not able to understand them because they are spiritually discerned.

        Brooke Fraser – Lord of Lords (Portuguese subtitles) –
        http://www.youtube.com/watch?v=rkF3iVjOZ1I

  6. BA,
    Which interpretation of quantum mechanics do you hold to be most accurate?

    Perhaps the Ensemble interpretation? Or perhaps de Broglie-Bohm?

    Whatever it is, why choose that one over another?

    • I hold a theistic interpretation of quantum mechanics:

      “As a man who has devoted his whole life to the most clear headed science, to the study of matter, I can tell you as a result of my research about atoms this much: There is no matter as such. All matter originates and exists only by virtue of a force which brings the particle of an atom to vibration and holds this most minute solar system of the atom together. We must assume behind this force the existence of a conscious and intelligent mind. This mind is the matrix of all matter.”
      Max Planck – The Father of Quantum Mechanics – Das Wesen der Materie [The Nature of Matter], speech at Florence, Italy (1944). (Of note: Max Planck was a devoted Christian from early life to death, was a churchwarden from 1920 until his death, and believed in an almighty, all-knowing, beneficent God.)
      http://en.wikiquote.org/wiki/Max_Planck

      Alain Aspect and Anton Zeilinger by Richard Conn Henry – Physics Professor – Johns Hopkins University
      Excerpt: Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality, then ultimately (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the “illusion” of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism (solipsism is the philosophical idea that only one’s own mind is sure to exist). (Dr. Henry’s referenced experiment and papers: “An experimental test of non-local realism” by S. Gröblacher et al., Nature 446, 871, April 2007, and “To be or not to be local” by Alain Aspect, Nature 446, 866, April 2007.)
      http://henry.pha.jhu.edu/aspect.html

      Wave function
      Excerpt: “wave functions form an abstract vector space”… This vector space is infinite-dimensional, because there is no finite set of functions which can be added together in various combinations to create every possible function.
      http://en.wikipedia.org/wiki/W.....ctor_space

      Explaining Information Transfer in Quantum Teleportation – Armond Duwell, University of Pittsburgh
      Excerpt: In contrast to a classical bit, the description of a (photon) qubit requires an infinite amount of information. The amount of information is infinite because two real numbers are required in the expansion of the state vector of a two state quantum system (Jozsa 1997, 1)
      http://www.cas.umt.edu/phil/fa.....lPSA2K.pdf

      Quantum Computing – Stanford Encyclopedia
      Excerpt: Theoretically, a single qubit can store an infinite amount of information, yet when measured (and thus collapsing the quantum wave state) it yields only the classical result (0 or 1)…
      http://plato.stanford.edu/entr.....tcomp/#2.1
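
      As a small numerical illustration of the point these excerpts make (a sketch using textbook qubit notation, not tied to any of the cited papers): a qubit’s state is fixed by two continuous real parameters, yet a measurement returns only a single classical bit.

import cmath
import math
import random

# |psi> = cos(theta/2)|0> + e^(i*phi) * sin(theta/2)|1>
# theta and phi are continuous real numbers -- an "infinite" description...
theta, phi = 1.234567, 2.345678        # arbitrary illustrative values

alpha = math.cos(theta / 2)                        # amplitude of |0>
beta = cmath.exp(1j * phi) * math.sin(theta / 2)   # amplitude of |1>
assert abs(abs(alpha)**2 + abs(beta)**2 - 1) < 1e-12  # normalization check

# ...but measurement (Born rule) collapses it to one classical bit:
p0 = abs(alpha) ** 2
print(f"P(0) = {p0:.4f}, P(1) = {1 - p0:.4f}")
print("measured bit:", 0 if random.random() < p0 else 1)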

      Single photons to soak up data:
      Excerpt: the orbital angular momentum of a photon can take on an infinite number of values. Since a photon can also exist in a superposition of these states, it could – in principle – be encoded with an infinite amount of information.
      http://physicsworld.com/cws/article/news/7201

      It is important to note that the following experiment actually encoded information into a photon while it was in its quantum wave state, thus destroying the notion, held by many, that the wave function is not ‘physically real’ but merely ‘abstract’. After all, how could information possibly be encoded into something that is not physically real but merely abstract?

      Ultra-Dense Optical Storage – on One Photon
      Excerpt: Researchers at the University of Rochester have made an optics breakthrough that allows them to encode an entire image’s worth of data into a photon, slow the image down for storage, and then retrieve the image intact.
      http://www.physorg.com/news88439430.html

      The following paper mathematically corroborated the preceding experiment and cleaned up some pretty nasty probabilistic incongruities that arose from a purely statistical interpretation; i.e., it seems that stacking one ‘random infinity’ (parallel universes, invoked to explain quantum wave collapse) on top of another ‘random infinity’ (invoked to explain quantum entanglement) leads to irreconcilable mathematical absurdities within quantum mechanics:

      Quantum Theory’s ‘Wavefunction’ Found to Be Real Physical Entity: Scientific American – November 2011
      Excerpt: David Wallace, a philosopher of physics at the University of Oxford, UK, says that the theorem is the most important result in the foundations of quantum mechanics that he has seen in his 15-year professional career. “This strips away obscurity and shows you can’t have an interpretation of a quantum state as probabilistic,” he says.
      http://www.scientificamerican......vefunction

      The quantum (wave) state cannot be interpreted statistically – November 2011
      http://lanl.arxiv.org/abs/1111.3328

  7. I’m missing where the actual calculation of a specific case is shown; it seems to be glossed over, unless I’m overlooking something. I would like to see it applied to a case where a mistake in protein synthesis results in a faulty polymerase.

    • Starbuck – I provided examples – specifically of somatic hypermutation. This is from the linked abstract:

      Historically, Active Information has only been applied to computer-based evolutionary algorithms. However, it can also be applied to biological systems. Applying Active Information to the Somatic Hypermutation (SHM) process for refining binding sites during an immune response makes an excellent test case for using this concept biologically. Because SHM primarily works by restricting the physical range of base pairs which it mutates, it simplifies the Active Information calculations. One can simply subtract log2(SHM mutation space) from log2(whole genome search space) and estimate that the SHM process contributes approximately 22 bits of Active Information to the search. Additional factors can complicate this estimate, such as taking into account the number of mutations required for hitting the target.
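
      To make the arithmetic in that abstract concrete, here is a minimal sketch of the subtraction it describes. The sizes below are illustrative assumptions chosen to reproduce the ~22-bit figure; the abstract does not state the actual search-space sizes used.

import math

# Assumed, illustrative sizes (not taken from the paper):
whole_genome = 3.0e9   # bp reachable by a blind, genome-wide mutational search
shm_region = 700.0     # bp actually targeted by somatic hypermutation

# Active Information = log2(whole genome search space) - log2(SHM mutation space)
active_info_bits = math.log2(whole_genome) - math.log2(shm_region)
print(f"Active Information = {active_info_bits:.1f} bits")   # ~22 bits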

      The example you reference does not apply to Active Information as stated, because it is not a biological search. For it to be a biological search, you would need to specify the selection pressures. Then we could frame the question in terms of Active Information and see whether the system contributes positively or negatively to it. Note that it is entirely possible to come up with *negative* Active Information (meaning that the evolutionary program of the genome points *away* from likely solutions). I think those cases are just as important, because they help you find which specific problems genomes are built to be modified for.

      And that brings me to the other nice part of Active Information – you don’t have to know about a mechanism ahead-of-time to make the calculation, but if there is significant positive active information, that tells you that there is probably a mechanism worth finding in the system.

  8. johnnyb:

    Very interesting. I was not aware of that calculation, but it is obviously right.

    I have discussed many times the algorithm that generates the increase of antibody affinity after the primary immune response as a wonderful example of intelligent protein engineering embedded in our immune system.

    I usually refer to the process described in your reference as “targeted mutation”. It is, indeed, a very good example of added information. But not the only one in that specific system.

    Two more information-adding mechanisms contribute substantially to the final result:

    a) the hypermutating system, which is complex and involves many enzymes;

    and:

    b) a very powerful process of intelligent selection, based on the affinity of the new clones for the original epitope, stored in the antigen-presenting cells.

    Those mechanisms are summed up in the following quote from Wikipedia:

    “Experimental evidence supports the view that the mechanism of SHM involves deamination of cytosine to uracil in DNA by an enzyme called Activation-Induced (Cytidine) Deaminase, or AID.[6][7] A cytosine:guanine pair is thus directly mutated to a uracil:guanine mismatch. Uracil residues are not normally found in DNA, therefore, to maintain the integrity of the genome most of these mutations must be repaired by high-fidelity DNA mismatch repair enzymes. The uracil bases are removed by the repair enzyme, uracil-DNA glycosylase.[7] Error-prone DNA polymerases are then recruited to fill in the gap and create mutations.[6][8]
    The synthesis of this new DNA involves error-prone DNA polymerases, which often introduce mutations either at the position of the deaminated cytosine itself or neighboring base pairs. During B cell division the immunoglobulin variable region DNA is transcribed and translated. The introduction of mutations in the rapidly-proliferating population of B cells ultimately culminates in the production of thousands of B cells, possessing slightly different receptors and varying specificity for the antigen, from which the B cell with highest affinities for the antigen can be selected. The B cells with the greatest affinity will then be selected to differentiate into plasma cells producing antibody and long-lived memory B cells contributing to enhanced immune responses upon reinfection.”

    So, the whole process is a very good example of how an intelligent process of protein engineering, based on targeted hypermutation and intelligent selection by direct measurement of a specific function, can easily enough find a functional target, optimizing an existing function.
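
    A toy simulation can make the two mechanisms listed above concrete (a sketch under invented parameters; the affinity function, region size, mutation rate, and population size are all illustrative, not drawn from the immunology literature): restrict mutation to a small variable region, then select the highest-affinity clone each round.

import random

random.seed(1)
BASES = "ACGT"
REGION = 30                      # toy "variable region" length
TARGET = "".join(random.choice(BASES) for _ in range(REGION))  # stand-in optimum

def affinity(seq):
    # Crude proxy: number of positions matching the optimal binding motif.
    return sum(a == b for a, b in zip(seq, TARGET))

def hypermutate(seq, rate=0.05):
    # Mutation confined to the variable region (the whole toy string here).
    return "".join(random.choice(BASES) if random.random() < rate else c
                   for c in seq)

best = "".join(random.choice(BASES) for _ in range(REGION))    # naive receptor
for generation in range(40):
    clones = [hypermutate(best) for _ in range(200)]   # clonal expansion + SHM
    best = max(clones, key=affinity)                   # affinity-based selection
print("final affinity:", affinity(best), "/", REGION)

    Even with these toy numbers, confining mutation to the region and selecting on measured affinity converges on the target in a few dozen rounds, which is the intuition behind calling the combined process "targeted mutation plus intelligent selection".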

    Finally, I would like to add that dFSCI (a metric I often use here) is only a subset of CSI, defined by the kind of objects considered (only digital sequences) and the kind of specification (functional specification). Essentially, the concept is the same.

  9. Johnnyb:

    This appears to be a backup system. A complex of Crp and cAMP is what normally switches on glycerol production. However, this mutational process is for E. coli with a Crp deletion. Therefore, in the absence of Crp/cAMP, the genome is marked in such a way that a backup mutation can provide the necessary promoter.

    This “backup” mutation does not provide a new promoter; it activates the original one. It is interesting, however, that the mutations that relieve the starvation stress arise at high rates only when the mutation would diminish the stress; in this respect the process exhibits the features of Lamarckian evolution.
