DNA is no longer just ACGT: synthetic DNA is here to stay
Posted 23rd November 2018 by Jane Williams
One of the most remarkable things around us is life itself – not just humans, but trees, flowers, insects, animals and even bacteria. All of them share one central molecule that is crucial for their existence.
The Biotechnology Industry: In the beginning
A new era began in 1952, when Alfred Hershey and Martha Chase performed what is known today as the Hershey-Chase blender experiment. Using radioactively labelled protein and DNA, they showed that DNA was the molecule responsible for heredity. In 1953, James Watson and Francis Crick discovered the structure of DNA. Soon after their discovery in April 1953, the world-renowned physicist Niels Bohr wrote to Max Delbrück that "very remarkable things are happening in biology".
Watson and Crick's discovery arguably rivalled Rutherford's discovery of the atomic nucleus in 1911. Rutherford is known as the "father of nuclear physics", and Delbrück played an important role in launching research into molecular biology. Approximately twenty years later, new techniques began to emerge, and DNA was central as the biotechnology industry started to take form. But it took until 2003 before the nucleotide sequence of the human genome was completely resolved.
DNA, sometimes called the most important molecule, encodes our genomic information, which is copied from generation to generation. Remarkably, this molecule contains all the genetic information required to construct the organism. Follow your genetic family tree far enough and you will find that you are related to the squirrels in your garden and even the mould on your bread. Bread itself is typically a product of the baker's yeast Saccharomyces cerevisiae, again an organism distantly related to humans.
All these organisms use the same universal genetic information system to reproduce. The information stored in DNA is transcribed into RNA molecules and translated into proteins, and this cascade of events is called the "central dogma of molecular biology". The proteins, in turn, take care of pretty much all the functions of our cells, from drug metabolism to regulation of our heartbeat. The linear order of nucleotides in DNA specifies the composition of proteins; hence a single change in the DNA sequence can lead to a change in protein structure and function.
DNA is a double-stranded helical polymer, and the twisted ladder-like structure adds stability to the molecule. The actual coding of the information is based on the order of four nucleotide bases in the double helix: adenine (A), guanine (G), thymine (T) and cytosine (C). In the complete DNA helix, adenine always pairs with thymine on the opposite strand. In a similar manner, cytosine only binds with guanine. Thus, changing any of these bases to anything other than the traditional genetic alphabet of A, T, C or G will probably affect the stability and structure of DNA, and could render it a non-functional, unstable ladder-like molecule.
In the twisted helix of DNA, guanine is bound to cytosine with three hydrogen bonds, and adenine to thymine with two. Two rules of complementarity are crucial for DNA structure and function: size complementarity (the large purines A and G pair with the small pyrimidines T and C) and hydrogen-bonding complementarity. There are in fact three different forms of DNA in nature, but the so-called B-DNA is the most common. This molecule is right-handed and has approximately 10 nucleotide pairs, either A-T or G-C, per turn. This construction has been retained through millions of years of evolution, so it is clearly well suited to its purpose.
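The pairing rules above can be sketched as a small lookup table. The following is a toy illustration only: the symbols "X" and "Y" are placeholders I have chosen to stand in for a synthetic base pair, not notation used by any of the research groups discussed below.

```python
# Watson-Crick complementarity as a lookup table, extended with one
# hypothetical synthetic pair "X"/"Y" (placeholder symbols for a UBP).
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G", "X": "Y", "Y": "X"}

def complement(strand: str) -> str:
    """Return the base-by-base complement of a strand."""
    return "".join(PAIRS[base] for base in strand)

def reverse_complement(strand: str) -> str:
    """Return the antiparallel partner strand, read 5' to 3'."""
    return complement(strand)[::-1]

print(reverse_complement("GATXACA"))  # the synthetic X pairs with Y
```

A cell copying such a strand faithfully would need polymerases that recognise the extra pair just as reliably as the table does, which is exactly the problem the groups below set out to solve.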
Why would we want to change it? Perhaps to create new lifeforms, or at least new proteins for producing unique products to be used in medicine. However, manipulations like this carry a risk if the new lifeforms or modifications are released from the laboratory, even through an unforeseen accident. In the worst-case scenario, they could start to propagate in our environment with unknown consequences. To prevent this, the most logical approach is perhaps to modify or "hack" DNA to use nucleotides that are unavailable in nature, also known as unnatural base pairs (UBPs).
Where are we now?
Experiments with UBPs and artificial genetic systems started nearly 30 years ago. In 1990, Steven Benner's research group published a paper describing a new Watson-Crick base pair with an alternative hydrogen-bonding pattern. Soon this field caught the eye of the scientific press, and in 2000 an article titled "Creation's Seventh Day" appeared in the prestigious journal Science, describing the efforts of scientists at the Scripps Research Institute, led by Professor Romesberg.
Fast forward to 2008, when Romesberg's research group published a couple of important progress reports. The first described their search for new unnatural nucleotide pairs, namely d5SICS:dMMO2. Their characterisation of these UBPs suggested that both synthesis and extension were efficient and selective. These properties are crucial, as DNA is used as a template when cells divide and the DNA is copied for new daughter cells. They went on to publish more papers on the subject, showing that DNA containing these synthetic analogues can be amplified using the Polymerase Chain Reaction (PCR), a method which has been one of the most important cornerstones in the development of modern biotechnology.
Soon after, Benner's group, based in Gainesville, Florida, demonstrated successful amplification of DNA sequences containing a six-letter synthetic genetic system which they nicknamed AEGIS. They showed that the synthetic nucleotides are not lost when the DNA is copied and amplified by DNA polymerases. This is an important consideration, as there could have been a natural bias towards the four-letter system, tried and tested by evolution. Furthermore, they also showed that the mutation rate was low, 0.2% per theoretical cycle. This mutation rate is high enough to allow Darwinian evolution but low enough not to cause catastrophic instability in the genome. The novel genetic system appeared to behave like normal genomes found in nature.
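A quick back-of-the-envelope calculation shows what a 0.2% per-cycle mutation rate implies for retention of the synthetic bases over repeated copying. The cycle counts below are illustrative, not figures from the paper.

```python
# If each copying cycle loses a synthetic base with probability 0.002,
# the fraction expected to survive n cycles is (1 - 0.002) ** n.
RATE = 0.002  # reported loss per theoretical cycle

def retained(n_cycles: int, rate: float = RATE) -> float:
    """Expected fraction of synthetic bases surviving n copy cycles."""
    return (1 - rate) ** n_cycles

for n in (10, 30, 100):
    print(f"after {n:3d} cycles: {retained(n):.3f} retained")
```

Even after 100 cycles, over 80% of the synthetic bases are expected to remain, which is consistent with the article's point that the system is stable enough for practical use.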
In 2014, Denis Malyshev from Romesberg's research group published a paper describing the first bacterium to stably propagate an "expanded genetic alphabet", using the unnatural base pair d5SICS:dNaM. These unnatural bases were not removed or repaired by the natural DNA repair pathways present in the cell, which would normally correct pairing errors or abnormalities appearing in the DNA. For example, base oxidation is one of the most frequent insults to DNA, and most oxidised bases are removed by enzymes operating within the base excision repair pathway. Hence, one of the key issues was to verify that the repair machinery was not reverting the DNA containing the novel nucleotides back to the standard, natural four-letter genetic system. In summary, this was an important proof-of-principle paper, but the evidence came from a single sequence context at a single genetic locus. The growth rates of this semi-synthetic organism were reportedly poor, limiting its applicability.
In February 2017, Romesberg's latest publication in Proceedings of the National Academy of Sciences raised a great deal of interest, as his group reported an optimised system that avoided the limitations of their first attempt. They described this system as healthy and more autonomous, capable of storing the increased genetic information indefinitely.
Since the last common ancestor of life on Earth, biological information has been encoded by four nucleotides that form two complementary base pairs. It has been more than 50 years since Alex Rich first proposed the development of UBPs, and 25 years since Steve Benner's lab produced the first viable candidates. Novel synthetically engineered lifeforms hold great promise in biotechnological, medical, environmental and industrial applications. The price of medical and pharmaceutical solutions could be greatly reduced using these novel organisms, partly because of the time savings.
Instead of taking years to develop, these new solutions can take as little as 24 hours to create. However, just like the genetically modified organisms (GMOs) of past decades, they pose a potential threat if their genes escape into the natural environment without control. Ethical questions will also surround semi-synthetic organisms; some critics already say that scientists are playing God. Very strict regulation will be required, but will it prevent possible misuse of this technology?
One of the fears is the potential for creating new biological weapons and pathogens which could end up in the hands of bioterrorists. As these organisms are designed using a computer, some ethics experts are questioning what the difference is between living organisms and machines. For several decades we have already been using mammalian cells to manufacture antibodies and vaccines that save people's lives. For cancer research, we have been growing human cell lines originating from single individuals.
These donors have since died, so are these living cells organisms or machines? They carry the complete genetic information of the donor individuals, and in nature evolution and mutation take place all the time. Whatever the answer, we are entering a new era of biotechnology. If the risks are kept under control, we may have a new, very powerful tool for developing novel proteins for therapeutics, vaccines, biofuels, and even new devices or materials.
Jari Louhelainen is the Associate Professor of Biochemistry at the University of Helsinki. He will give his presentation “Evaluation of PCR/qPCR inhibitors, with effect of various polymerases and extraction methods: not all approaches are created equal” at the 4BIO Summit: Europe.