The origin of life is a mystery to most. For many, it is attributed to an ancient accident, one that occurred in some primordial swamp and is not yet understood.

To me, the odds are heavily stacked against the idea of an accidental beginning. The two sides of the argument typically fall like this: Christians cling to the idea of a divine creation event, while materialists insist that the origin of life can be explained by some yet-to-be-identified but predictable event, one that will someday be explained and perhaps even duplicated.

Let’s consider just one aspect of a living cell, something that happens in every cell, whether plant or animal: the storage and use of digital information. The cell’s DNA stores digital instructions for the creation of every component within the cell and for the complete replacement of the cell when it grows old. DNA is a chemically implemented digital code that closely mirrors the coding found in modern computing technology. DNA’s encoding scheme is built on an alphabet of four symbols; the chemical materials representing those symbols are abbreviated as A, C, G and T¹.

What I will write here is admittedly a layman’s interpretation of what is happening, but let’s look at how every cell in your body, as well as each cell in the tomato that you ate for lunch, makes use of digital technology.

In digital computing, a binary bit has two states, on and off. It can represent two pieces of information, one represented by the on state (a one) and the other by the off state (a zero). DNA’s symbols, on the other hand, are always present; they have no off state, but each position can be set to one of four different chemical coding materials, A, C, G or T, and therefore each position can represent four distinct pieces of information, the equivalent of two binary bits. In digital computing, we group bits into eight-bit bytes, and a byte is treated by the computer’s processor as a single unit. In DNA, the symbols are grouped into three-letter words, and as we will see, these three-letter words are also used as single units.
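
To put rough numbers on that comparison, here is a small sketch in Python (my own illustration, not anything from the videos linked below; the names are mine). It simply counts possibilities: one DNA letter carries the equivalent of two binary bits, a three-letter codon can take 64 distinct values, and an eight-bit byte can take 256.

```python
import math

# The four possible "settings" of a single DNA position.
DNA_LETTERS = ["A", "C", "G", "T"]

bits_per_letter = math.log2(len(DNA_LETTERS))   # 2.0 binary bits of information
codon_values = len(DNA_LETTERS) ** 3            # 4^3 = 64 possible three-letter codons
byte_values = 2 ** 8                            # 256 possible eight-bit bytes

print(f"one DNA letter = {bits_per_letter:.0f} binary bits of information")
print(f"one codon      = {codon_values} possible values")
print(f"one 8-bit byte = {byte_values} possible values")
```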

DNA’s three-letter words (codons) are first transcribed into something called “messenger RNA,” or mRNA, which carries the instructions, as a serial stream of symbols, into a molecular machine called a ribosome and, you guessed it, the ribosome reads them as units, three letters at a time. The ribosome’s job is to build complex proteins. Proteins are made from hundreds and sometimes thousands of amino acids. For each three-letter instruction it reads, the ribosome, with the help of “transfer RNA,” or tRNA, adds the appropriate amino acid to the protein under construction.
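
As a minimal sketch of that lookup step, here is a toy translation function in Python. The few codon-to-amino-acid assignments shown are real entries from the standard genetic code, but the function, the tiny table and the example string are my own simplification; the real machinery is chemical, not software, and vastly more involved.

```python
# A small subset of the real genetic code (the full table has 64 codons).
CODON_TABLE = {
    "AUG": "Methionine (start)",
    "UUU": "Phenylalanine",
    "GGC": "Glycine",
    "GAA": "Glutamate",
    "UAA": "STOP",
}

def translate(mrna):
    """Read an mRNA string three letters at a time and collect the matching amino acids."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "unknown")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCGAAUAA"))
# ['Methionine (start)', 'Phenylalanine', 'Glycine', 'Glutamate']
```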

This amounts to a chemically instantiated, intracellular computing and manufacturing process on which each cell’s survival depends. Now let’s consider some timelines. Envision an aboriginal hunter whose tools are limited to a spear and a snare. Perhaps he depends on the occasional wildfire as his source of flame for cooking. To say that he lives “off the grid” is an understatement. Nevertheless, he carries with him more than 37 trillion cells², each with data storage media, a data retrieval system, data transmission capabilities and molecular machines that literally sustain his existence through numerically controlled manufacturing of life-sustaining proteins.

Modern man, on the other hand, developed binary computing, with its associated “onboard programming,” as recently as the 1950s. His initial computers were huge, often filling entire rooms or even buildings and consuming enormous amounts of power. The development of computer-controlled manufacturing roughly paralleled that of the computer itself; both, then, have come of age only since the 1950s. To this day, we have not miniaturized those processes to the point where both the data handling and the manufacturing can be accomplished in a space no larger than the interior of a single living cell.

At the end of this article, I will provide links to two video presentations that helped me appreciate the complexity of this process (and even they greatly simplify it). Once you’ve seen them, I want you to consider only one thing: what are the chances that such an incredibly complex capability came into being through random chance?

I must warn you, there are some strong headwinds opposing the argument for chance. Any sequence of random changes must contend with entropy (the second law of thermodynamics): loosely stated, random changes are far more likely to degrade an item or process than to consistently improve it. As an example, lock up the house that you are living in and come back in 1,000 years. You will probably find only a few remnants of it. Or begin making random changes to Microsoft Office. How often do you think one of your random changes will make Microsoft Office better? How many random changes will it take to encounter a positive outcome, and how will that one outcome offset all of the negative outcomes that precede it? The problem is that, while you wait for random changes to make things better, they will steadily be making things worse. Would you wager your bank account in a casino where the odds against your winning were that great?
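
To make those headwinds concrete, here is a toy Monte Carlo sketch in Python, entirely my own illustration: start with a sequence that mostly matches a “working” version, apply one random single-letter change at a time, and count how often the change helps versus hurts. The sequence length, the ten-percent corruption level and the scoring rule are arbitrary choices made only for the example.

```python
import random

random.seed(1)
ALPHABET = "ACGT"
LENGTH = 300

# Build a "working" reference sequence, then corrupt 10% of its positions.
target = "".join(random.choice(ALPHABET) for _ in range(LENGTH))
current = list(target)
for i in random.sample(range(LENGTH), k=LENGTH // 10):
    current[i] = random.choice([c for c in ALPHABET if c != current[i]])

def score(seq):
    """Count the positions that still match the working version."""
    return sum(a == b for a, b in zip(seq, target))

base_score = score(current)
improved = worsened = unchanged = 0
for _ in range(10_000):
    trial = list(current)
    pos = random.randrange(LENGTH)
    trial[pos] = random.choice(ALPHABET)   # one random single-letter change
    delta = score(trial) - base_score
    if delta > 0:
        improved += 1
    elif delta < 0:
        worsened += 1
    else:
        unchanged += 1

print(f"improved: {improved}  worsened: {worsened}  unchanged: {unchanged}")
```

In a typical run, only a few percent of the random changes improve the match while roughly two-thirds make it worse, which is the kind of imbalance the casino comparison points at.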

As I leave you, I would like you to contemplate this single verse from the Bible, Romans 1:20: “For the invisible things of Him, from the creation of the world, are clearly seen, being understood by the things that (He) made … so that (the unbeliever is) without excuse.” What this is saying, basically, is that we should be able to assess this single sub-function of a living cell and that our common sense should inform us that such things do not organize themselves by accident.

Here are two links that back up what I’m thinking:

1) DNA (stored data), to RNA (data in transit), to proteins:  https://www.youtube.com/watch?v=zwibgNGe4aY

2) Cell division (reproduction), an excellent illustration of the complexity involved: https://www.youtube.com/watch?v=X_tYrnv_o6A

Footnotes:

  1. adenine (A), cytosine (C), guanine (G) and thymine (T)
  2. nationalgeographic.com