A Very Sophisticated Code Optimized to Reduce Errors
There is much to be said about the sophistication of the genetic code.
It is far beyond the reach of any website to do justice to the sophistication of the genetic code; it is the very focus of biological research and an area of knowledge that is expanding exponentially. Research has, however, begun to lift the lid on what appears to be a vast, bottomless chasm of sophisticated information processing.
However, I would like to share some of the basic elements that have deepened my own appreciation for the very foundation of all life... code.
For those who would like a basic introduction to the biological information process, from the information stored in DNA through transcription and translation to a fully folded protein, please view the following link.
Researchers have come to recognise that the basic structure of the code itself is highly optimised to reduce the damaging effect of errors that may occur in the reading and copying process.
For instance, any copying error that causes an individual nucleotide base to be skipped results in what is called a frameshift mutation. So instead of a section reading correctly like this:-
"the man saw his new red car"
If the same "sentence" begins reading one letter later, because of the missed nucleotide base, the resulting code would read:-
"hem ans awh isn ewr edc ar"
This could potentially result in the whole RNA chain being mistranslated. However, because there are three different stop codons (codons are the three-letter "words" used in DNA and RNA), all of which are very close in sequence to the most common codons, such frameshift mutations very quickly produce a stop codon. The stop codon signals the translation machinery to stop translating and aborts the whole process, preventing the damaging effects of that one mistake from going any further. This is just one of a whole host of strategies built into the genetic code to minimise errors.
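The frameshift-to-stop behaviour described above can be sketched in a few lines of Python. The short mRNA sequence below is my own illustrative example, not taken from any real gene: it is constructed so that deleting a single base shifts the reading frame and a stop codon (UAA) appears almost immediately.

```python
STOP_CODONS = {"UAA", "UAG", "UGA"}

def codons_until_stop(mrna):
    """Read an mRNA string in triplets from the start, halting at the
    first stop codon (or when the sequence runs out)."""
    read = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        read.append(codon)
        if codon in STOP_CODONS:
            break
    return read

# Illustrative mRNA: AUG GUA AGC CAU CGA UAA (stop only at the very end)
original = "AUGGUAAGCCAUCGAUAA"

# Simulate a single-base deletion (the G at index 3), shifting the frame
mutated = original[:3] + original[4:]   # "AUGUAAGCCAUCGAUAA"

print(codons_until_stop(original))  # reads all six codons to the final UAA
print(codons_until_stop(mutated))   # frameshift hits a stop codon at once
```

In the mutated sequence the very second codon read is UAA, so translation aborts after one amino acid instead of producing a long, garbled protein.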
How optimised is the genetic code to prevent mutational errors?
It is first important to note the name of the journal, Molecular Biology and Evolution, and thus the obvious objective of the organisation publishing this paper.
The paper is titled "The Early Fixation of an Optimal Genetic Code". It sets out to analyse the efficiency of the (near) universal genetic code using computer analysis techniques. This is what the authors found:-

"Our analysis shows that when the canonical code is tested against a sample of one million random variants using PAM matrix data to measure amino acid dissimilarity, the code appears to be extremely highly optimized."

"Estimates based on PAM data for the restricted set of codes indicate that the canonical code achieves between 96% and 100% optimization relative to the best possible code configuration (fig. 2c). If our definition of biosynthetic restrictions are a good approximation of the possible variation from which the canonical code emerged, then it appears at or very close to a global optimum for error minimization: the best of all possible codes. However, the process by which an adaptive code evolved at present remains unclear, and yet its resolution may be of key importance to our understanding of the amino acid components universal to life."
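The kind of comparison the paper describes can be sketched in miniature. The study used PAM matrix data and a million random variants; as a stand-in, the sketch below scores each code by the mean squared change in Kyte-Doolittle hydropathy (a standard amino-acid property scale) across all single-base substitutions, and compares the canonical code against random codes that keep its block structure but shuffle which amino acid each block encodes. This is a simplified illustration of the method, not a reproduction of the paper's analysis.

```python
import random
from itertools import product

BASES = "UCAG"
# Standard genetic code in UCAG order (first base slowest); '*' = stop
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = ["".join(c) for c in product(BASES, repeat=3)]
CANONICAL = dict(zip(CODONS, AA))

# Kyte-Doolittle hydropathy values, used here in place of PAM data
HYDRO = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
         "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
         "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
         "Y": -1.3, "V": 4.2}

def error_cost(code):
    """Mean squared hydropathy change over every single-base substitution
    (mutations to or from stop codons are ignored)."""
    total, n = 0.0, 0
    for codon in CODONS:
        a = code[codon]
        if a == "*":
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                m = code[codon[:pos] + b + codon[pos + 1:]]
                if m == "*":
                    continue
                total += (HYDRO[a] - HYDRO[m]) ** 2
                n += 1
    return total / n

def shuffled_code(rng):
    """Random variant: permute which amino acid each synonymous codon
    block encodes, keeping block boundaries and stop codons fixed."""
    aas = sorted(set(AA) - {"*"})
    perm = dict(zip(aas, rng.sample(aas, len(aas))))
    return {c: (a if a == "*" else perm[a]) for c, a in CANONICAL.items()}

rng = random.Random(0)
canon = error_cost(CANONICAL)
costs = [error_cost(shuffled_code(rng)) for _ in range(1000)]
frac_better = sum(c < canon for c in costs) / len(costs)
print(f"canonical cost {canon:.2f}; "
      f"{frac_better:.1%} of 1000 random codes score better")
```

Even this toy version shows the pattern the paper reports: only a small fraction of randomly shuffled codes produce a lower error cost than the canonical code.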
So what are the implications of the findings of this paper?
While it needs to be emphasised that this paper is endeavouring to present its findings within an evolutionary framework, the conclusions drawn are very illuminating. Consider this: if the genetic code is nearly universal, it means that we share the same code with plants, fungi and bacteria. If the scientific consensus is correct, it follows that, along with bacteria, we have inherited this code from a common ancestor. LUCA, the Last Universal Common Ancestor, must have existed a very long time ago! Cyanobacteria, which also share our genetic code, have existed for at least 2.4 billion years.
This means that this "extremely highly optimized" code, "the best of all possible codes", existed at the very foundation of life on earth!
Is that what you would expect from an undirected evolutionary process? To be highly optimized to reduce errors right out of the box?
An evolutionary process depends upon having variation to select from. Why would an evolutionary process progressively hone and select a code that minimises and reduces the very fuel needed to drive evolution?
Intelligent human software engineers have produced many different programming languages for different types of applications and no doubt will continue to develop more. Despite great efforts, no universal, best-of-all-possible computer language has been produced. The financial rewards for developing such a code would be enormous. Nobody expects such a highly optimised code to be achieved by a succession of accidental download errors. Anyone who might suggest that such a happy accident is a real possibility knows nothing about coding!
Does the discovery of such a supremely optimised code indicate an undirected natural process or a purposeful design?