Politics, Science and Money. Behind the ‘Artificial Life’ Headlines


Following the announcement in May this year of the creation of so-called “artificial life” in the laboratory, Neil Bennet looks at some of the scientific, social and political issues raised

The newspaper headline writers all agreed. “Craig Venter creates synthetic life form”, the Guardian reported on 20th May 2010. “Synthetic cell is a giant leap for science, and could be bigger still for mankind”, the Independent declared the following day. The Times went with “Scientists create artificial life in the laboratory”, while the Telegraph were a little more troubled, going with “Scientist Craig Venter creates life for first time in laboratory sparking debate about ‘playing god’”. The Scottish nationals got in on the act as well, with “Experts hail breakthrough as first synthetic cell is created” and “Scientists play God: Synthetic cells are given life” from the Herald and the Scotsman respectively. The tabloids retreaded some old ground with their overtly dramatic accounts, with the Express announcing that “‘Frankenstein’ lab creates life in a test tube”, and the Sun declaring “‘Frankenstein’ doc creates life”. My personal favourite though was the Daily Mail, who, with their apparent desire for their readers to live in a constant state of fear, went with “Artificial life: have scientists created a monster?” in the first instance, following it up a few days later with “Artificial life created by Craig Venter - but could it wipe out humanity?”


In what is certainly the biggest science story to reach the mainstream media so far this year, the reports that scientist-entrepreneur Craig Venter and his laboratory teams in Rockville, Maryland and San Diego, California have successfully produced “the world’s first artificial life form” have generated a mixed response. Most coverage, following from the promissory discourse of Venter and his colleagues, has hailed it as a crowning achievement of molecular biology and biotechnology, marvelling at the technological opportunities seemingly opening up in front of us: from new medicines to sustainable biofuels, Venter, it seems, is getting set to save us from all the ills of the world. Even Edinburgh-based socialist and science-fiction author Ken MacLeod, writing in the pages of the Guardian shortly after the initial reports, couldn’t contain his excitement over the announcement – declaring that “this is a moment in evolution, the origin of a new kingdom: the Synthetica,[…] supplementing nature’s bacteria, eukarya, and archaea”, adding that the possible applications for the new technology “outstrip our imagination”.


Others, such as the Canadian environmental organisation Action Group on Erosion, Technology and Concentration (ETC Group), have warned against what they describe as potentially disastrous consequences: linking the new development, and the field of synthetic biology in general, to possible applications in the production of biological weapons, and warning of the – extremely doubtful – scenario in which the novel bacteria created in Venter’s lab (or others that might follow) could pose a risk if they escape from the laboratory or are released into the environment. These concerns have led them to call for a moratorium on the entire field.
However, few of the initial commentators questioned what was meant by “artificial life”, or considered in detail what the researchers had and had not achieved. Here I will argue that Venter and his team have not created “artificial life” or a “synthetic cell” as many of the reports suggested, and that the confusion that has led to such claims being made is based on a profoundly ideological reductionist perspective that has dominated much of biology, both in practice and in popular discourse. I will proceed to look at how synthetic biology, including ‘Synthia’, as the new bacterium has been dubbed by some, has been shaped in relation to the needs of capitalism, particularly with respect to intellectual property rights (IPRs) and the commodity form. Finally I will put this discussion into the context of a socialist approach to understanding technology, and the social processes shaping scientific research and technological development in general.

What is ‘Synthia’?

‘Synthia’, or to call it by its proper name as used in the paper in Science reporting its successful creation, “Mycoplasma mycoides JCVI-syn1.0” (the initials taken from the J. Craig Venter Institute, the somewhat vainly designated not-for-profit wing of Venter’s current research empire) is a novel form of M. mycoides subspecies capri, a parasitic bacterium more commonly found causing lung disease in goats.

Its creation is the most recent development in the Minimal Genome Project, a major undertaking of the JCVI over the past decade and a half, costing an estimated US$40 million and receiving significant funding from the US Department of Energy, as well as backing from oil/energy companies such as BP and Exxon, together with venture capital investment. Venter was already a controversial figure due to his role in the Human Genome Project (HGP) in the late ‘90s – he and his former company, Celera Genomics, went head-to-head in a race with the publicly-funded project led by the US National Institutes of Health (NIH), and supported by the Wellcome Trust in the UK, to produce the first draft sequence of the human genome, with Celera originally planning to charge researchers for access to packaged genomic information. The institutional set-up Venter helped pioneer during that period – a private for-profit company set up to fund a not-for-profit research organisation and acquire any intellectual property generated – has been replicated, with the new company Synthetic Genomics backing the work of the JCVI.

In line with the general aims of synthetic biology, Venter and his team set out to build a minimal bacterial genome, containing only the essential genes needed to ensure the organism could survive and replicate. In 1995 they sequenced the genome – the full complement of genetic material in an organism – of M. genitalium, the bacterium with the fewest genes of any then known organism (it has since been surpassed in this regard with the discovery of a new species of archaea in a hydrothermal vent near Iceland in 2002). They proceeded in a series of experiments to disrupt each of the genes of M. genitalium, showing that a little over 100 of its 485 protein-encoding genes could be surplus to requirements, at least when disrupted one at a time (genes with redundant functions can each appear dispensable on their own while being jointly essential). In May 2007 the JCVI filed for a patent in the US for a minimal bacterial genome, having successfully worked out a method for assembling a synthetic version of the M. genitalium genome from chemically-synthesised pieces joined together inside a yeast cell. However, for the next stage of the project – attempting to transplant the genome from yeast back into a bacterial cell – the organism’s extremely slow growth rate led the researchers to drop M. genitalium in favour of using the related species M. mycoides subspecies capri as donor and M. capricolum subspecies capricolum as recipient.

Using the already-known genomic sequence of the M. mycoides subspecies (published genetic and genomic sequences are held on massive online databases for scientists to search and use), overlapping sequences of around 1080 base-pairs (the sequences of A’s, C’s, G’s & T’s that make up the genetic code of DNA) were produced commercially and then assembled in stages inside yeast cells. Four of the sequences that were not involved in coding for proteins were made to contain “watermark” sequences, which in code spell out a website address, the names of some of the people involved in the project – including Venter himself – and some famous quotations (“To live, to err, to fall, to triumph, to recreate life out of life”; “What I cannot build, I cannot understand”; “See things not as they are, but as they might be”).
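
To give a concrete sense of how text can be hidden in DNA, here is a minimal sketch in Python. The cipher the JCVI team actually used is their own and is not reproduced here, so the simple base-4 mapping below is purely illustrative of the principle, not the code in the real watermarks:

    # Toy DNA "watermark" codec: each character's byte value is written as
    # four base-4 digits over the alphabet A, C, G, T (two bits per base).
    # Purely illustrative; NOT the cipher used in JCVI-syn1.0.
    BASES = "ACGT"

    def encode(text):
        dna = []
        for byte in text.encode("ascii"):
            for shift in (6, 4, 2, 0):   # high bits first
                dna.append(BASES[(byte >> shift) & 0b11])
        return "".join(dna)

    def decode(dna):
        chars = []
        for i in range(0, len(dna), 4):  # four bases per character
            byte = 0
            for base in dna[i:i + 4]:
                byte = (byte << 2) | BASES.index(base)
            chars.append(chr(byte))
        return "".join(chars)

    quote = "What I cannot build, I cannot understand"
    assert decode(encode(quote)) == quote

Any real scheme of this kind also has to avoid producing stretches the cell would read as genes or regulatory signals, a constraint the toy version above ignores.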

After resolving a number of technical issues arising from differences in the biology of the organisms used and the imprecision of the procedures, the team eventually managed to synthesise the genome and transfer it from a yeast cell into a cell of M. capricolum which had had its own DNA removed. Testing confirmed that the transplant had indeed succeeded: the M. capricolum cell had been converted into an M. mycoides cell, complete with the extra watermark DNA sequences.

DNA is Not Life

The story outlined above might come as some surprise to anyone who had simply scanned some of the newspaper headlines about the creation of artificial life in a test tube. While the scientists’ accomplishments are undoubtedly impressive, and have the potential to contribute to both basic research and the development of novel biotechnologies, they have not created “artificial life” nor made a “synthetic cell”. They have succeeded in copying and slightly modifying the genome of a particular species of bacteria, assembling it and successfully transplanting it into the living cell of another closely-related species of bacteria. This was made possible by significant advances in DNA sequencing and de novo DNA synthesis technologies in the last two decades, meaning procedures which used to take months and years now take days and weeks, and are much cheaper to perform. Ken Wolfe, Professor of Genome Evolution at Trinity College Dublin, told the Irish Times, “I think [the claims have] been exaggerated. [Venter] has a reputation for showmanship”, adding, “he hasn’t created life, he has mimicked life. It is a technical achievement to synthesise a piece of DNA that size…it was an achievement of scale.”

As leftist biologist and popular science writer Steven Rose reminded us in the letters page of the Guardian soon after the initial reports, inserting a synthesised strand of DNA into a living cell does not equate to creating artificial or synthetic life. DNA is a dead molecule – it does not function independently of everything else that goes on inside a living cell. The commonplace description of DNA as “self-replicating” is false and misleading, seemingly endowing DNA with a special power over all living things – hence such metaphors as the “master molecule”, “blueprints” or indeed computer code (the existence of a planned sequence for the synthesised genome in silico before it was produced in the lab was one of the themes of media coverage of the experiment). It is in this sense that Richard Dawkins famously described living organisms as “lumbering robots” whose sole raison d’être is simply to make sure their genes are passed on to the next generation. But DNA is not “self-replicating”. It is copied by a complex set of cellular machinery, and is useless and lifeless without it. As Rose commented:

“What [Craig Venter’s team] have emphatically not done is “create life”. DNA is a relatively inert molecule unless placed in the environment provided by a living cell. When biologists learn to create cells from scratch, then and only then will they have created life.”

The equating of the creation of functioning DNA with the creation of life has strong ideological roots, particularly in relation to human genetics and evolution. The mythical power attributed to DNA is part of the modern incarnation of an older biological reductionism that continues to attempt to provide biological explanations for social phenomena, through so-called sociobiology, and more recently evolutionary psychology. [For a more detailed discussion of these issues, see my previous article, Biology and Ideology: the Case against Biological Determinism, Frontline 2(8) (December 2008)].

Being “in the DNA” has in popular discourse come to represent something permanent or unchanging, something fixed that helps define and limit who we are and what we can do. So David Cameron has on various occasions described family values, belief in his country, “Atlanticism” (i.e. propping up US imperialism) and support for Israel as being “in his DNA” and that of his party, with the implication that these are ideological commitments that are not going to change by the time the next election or military adventure comes around.

By situating agency in a supposedly immutable molecular realm, the social relations of capitalism can become naturalised through biology. Social problems are reconceptualised as genetic problems, and poverty, inequality and injustice are removed from their social and historical context and given new life as eternal, immutable conditions of human nature. So for example at the first Human Genome Conference, in October 1989, Daniel Koshland – then editor of the prestigious journal Science – was asked why the vast quantities of money being committed to the Human Genome Project should not be given to the homeless instead. He replied, “what these people don’t realise is that the homeless are impaired…Indeed no group will benefit more from the application of human genetics”.

Biology as Engineering

Synthetic biology is a relatively new discipline, the term only having been used to describe a distinct field of the life sciences in the last decade. In that time it has established its own journals and academic societies, courses have been created, conferences have been held and departments formed – including a proposed new centre at the University of Edinburgh. There are now even some synthetic biology textbooks appearing, something historian of science Thomas Kuhn considered an important indicator of the maturation of a new scientific discipline.

Many have described synthetic biology with reference to another relatively new field within the life sciences, systems biology. Systems biology too only appeared within the last decade, and deals primarily with attempts to use computer models to gain an understanding of complex biological systems. According to some, it has in part emerged as a reaction to the reductionism of the molecular biology paradigm that has dominated the biological sciences for much of the last half-century – that is, it marks at least in part a recognition amongst biologists that an exclusive focus on genes as isolated effector molecules cannot alone hope to explain complex biological systems and the different ways in which they interact.
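
As a flavour of what such modelling looks like in practice, here is a deliberately tiny sketch: a toy two-component negative feedback loop with invented parameters, whereas real systems-biology models couple hundreds of such equations. The point it illustrates is that the behaviour belongs to the coupled system, not to either equation alone:

    # A toy systems model: protein y represses production of x (Hill-type
    # term), while x drives production of y, which also decays. The steady
    # state the loop settles to is a property of the coupled pair, not of
    # either equation in isolation. Parameters are arbitrary illustrative
    # values, integrated with a simple Euler step.
    def simulate(steps=50000, dt=0.001):
        x, y = 2.0, 0.0
        for _ in range(steps):
            dx = 5.0 / (1.0 + y ** 4) - x
            dy = x - 0.5 * y
            x, y = x + dx * dt, y + dy * dt
        return x, y

    print(simulate())  # the steady state emerges from the coupling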

Synthetic biology on the other hand can be considered an attempt to work towards the opposite goal – in other words, to actively seek to reduce the complexity of biological systems, often in order to put them to use as novel biotechnologies. One commonplace way to depict synthetic biology, then, has been as the principles of engineering applied to biological and biotechnological systems. Specifically, synthetic biology aims to use the engineering principles of functional abstraction (i.e. functions of specific biological or genetic components can be taken out of their original context and put into others), standardisation and modularity (i.e. different biological or genetic “parts” can be swapped about freely for use in different systems) and an intentional, rational design process to produce useful biological systems. In doing so it takes advantage of recent advances in gene sequencing, computation and genetic engineering techniques.
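
The “parts” abstraction is easiest to see in code. The sketch below is a toy model, assuming a standardised part can be reduced to a named sequence with a role; real assembly standards (the BioBrick restriction-site prefixes and suffixes, for instance) involve considerably more biological bookkeeping:

    # Toy model of standardised, modular genetic "parts": any part of a
    # given role can in principle be swapped for another of the same role.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Part:
        name: str
        role: str       # e.g. "promoter", "rbs", "cds", "terminator"
        sequence: str   # DNA sequence, treated here as an opaque string

    def assemble(*parts):
        """Concatenate parts, in order, into a single device sequence."""
        return "".join(p.sequence for p in parts)

    # Hypothetical parts with placeholder sequences, not real registry entries.
    device = assemble(
        Part("promoter-1", "promoter", "TTGACA"),
        Part("rbs-1", "rbs", "AGGAGG"),
        Part("reporter-1", "cds", "ATG"),
        Part("terminator-1", "terminator", "TAAGGC"),
    )
    print(device)

In the functional-abstraction ideal, a designer works only with names and roles, treating the underlying sequences as implementation details.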

One important aspect of the development of both these fields is their relationship with systems of Intellectual Property Rights (IPRs). Reductionist molecular biology is particularly well-suited to intellectual property regulation: if organisms are perceived as being determined by their genetic components, which are themselves considered complex chemicals, it is a relatively small step to allowing intellectual property claims over living things to be recognised. Indeed this is essentially what occurred in the landmark Diamond v. Chakrabarty ruling in the US in 1980, where General Electric had filed a patent on a genetically-modified bacterium intended to help clean up oil spills. After the application was initially refused on the grounds that living things could not be patented, appeals culminated in the US Supreme Court upholding the patent by a 5-4 vote, the appeals court having already declared that “the fact that microorganisms are alive is without legal significance for purposes of the patent law.” The case formed a precedent that allowed living things to become the property of corporations and research institutions. The same year also saw the introduction of the Bayh-Dole Act in the US, which has since been imitated by other countries around the world, leading to the increased privatisation of scientific research and an increased focus on commercial imperatives in publicly-funded research. As historian of technology David F. Noble describes it,

“What the Bayh-Dole [Act] said was that the universities automatically now own all patent rights on publicly-funded research. What that meant was that universities were now in the patent-holding business and they could license private industry and in that way give them the rights over the results of the research funded by the taxpayer. It was the biggest give-away in American history.”

Systems biology, with its concept of ‘emergent’ phenomena, i.e. the recognition that “the whole is greater than the sum of its parts” (or, in more materialist terms, that at each [ontological] level new interactions and relationships appear between component parts, which cannot be inferred just by considering the properties of the parts alone) is much more difficult to bring into line with IPR regimes. IPR generally requires entities to be bounded and isolable in order to be recognised as commodities and appropriated. As such the recognition of complexity, interrelatedness and context-dependence in the systems approach has meant that for the most part IPR has been quite limited in scope, usually relating to the computer models used to describe systems rather than biological entities themselves. Synthetic biology on the other hand is explicit in its emulation of engineering, and so highly conducive to IPR regimes. Some working in the field have self-consciously embraced an open source approach (drawing on the open source movement in computer software), with so-called “BioBrick™ standard biological parts” being subject to open source agreements promoting sharing of such “parts” and the systems they create from putting them together.

However others, including Venter, have been working actively to obtain patent rights over their synthetic biology creations and techniques. With respect to the “synthetic cell”, British Nobel Prize-winning biologist John Sulston has suggested that the project has as much to do with ownership of the methods involved in synthesising genomes as with the resulting bacterium itself. Speaking at the Science Museum in London in June, he said the work was “clever and pretty”, but was not artificial life:

“What that advance is being used for is an attempt to monopolise, through the patenting system, essentially all the tools for genomic manipulation […] let’s be clear that the tools for manipulating genomes should be in the public domain. This is not just a philosophical point of view: it’s actually the case that monopolistic control of this kind would be bad for science, bad for consumers and bad for business, because it removes the element of competition.”

Sulston also told the BBC:

“I’ve read through some of these patents and the claims are very, very broad indeed….I hope very much [that] these patents won’t be accepted because they would bring genetic engineering under the control of the J Craig Venter Institute. They would have a monopoly on a whole range of techniques.”

Of Commons and Anti-Commons

The situation Sulston describes is what is known as a “tragedy of the anti-commons”, a neologism coined in reference to the “tragedy of the commons” – a widely circulated concept originating in a paper of the same name by Garrett Hardin in a 1968 edition of Science. In that paper Hardin argued that resources held in common (e.g. clean air and water, shared pasture etc.) are necessarily depleted, as each individual using the shared resource acts in self-interest – i.e. free and unrestricted access to commons by individuals leads to over-exploitation, as the benefits of exploitation accrue to the individuals while the costs are shared by all those to whom the resource is available. In more commonplace economic terms we would refer to these costs as negative externalities. Hardin was a neo-Malthusian and eugenicist primarily concerned with what he considered to be the problem of over-population – similar to that other pillar of neo-Malthusianism and conservative environmentalism, Paul Ehrlich, whose book of the same year, The Population Bomb, helped resurrect the anti-poor ideology of Thomas Malthus (Hardin’s second-best-known paper was called “Lifeboat Ethics: the Case against Helping the Poor” (1974)). This resurrection of Malthus, as Monthly Review editor John Bellamy Foster has argued, marked an effort to give ecology and environmentalism a conservative, pro-capitalist character in contrast to the radicalism of the likes of Rachel Carson.

The “tragedy of the commons” model became for some a justification for enclosing commonly held resources, for the extension of private property in order that resources be “properly managed” – its influence can be clearly seen in attempts to introduce emissions trading schemes in international agreements on climate change, for example (this does not in itself mean such attempts are necessarily without merit, though I will avoid discussing this issue here). The problem here is the confusion between commonly-held resources and uncontrolled, open-access resources, together with the assumption of self-interested behaviour – a hallmark of capitalism made out to be a universal feature of human society. Hardin’s model assumes capitalist social relations in a way that precludes social planning – in contrast, for example, to Marx’s vision of communal rights and responsibilities with respect to natural resources, as well as the real existence of community-managed resources for millennia in pre-capitalist and still-existing traditional societies. Furthermore, by laying the blame at the door of population growth and some apparent human propensity to act in a selfish way, Hardin’s thesis neglects the role of capitalist social relations – and the “treadmill of production” – in causing ecological degradation and depleting natural resources. It was also completely ahistorical: as radical economist Michael Perelman points out, “With no evidence whatsoever, except for an obscure nineteenth century pamphlet […] Hardin insisted that the absence of property rights in the English common lands inevitably led to an environmental disaster.” Of course such false justifications mask the naked “class robbery” that the enclosures really represented (see E.P. Thompson, The Making of the English Working Class (1963)).

While the “tragedy of the commons” was supposedly meant to describe a breakdown in coordination and management of a resource due to there being “insufficient” claims to ownership rights over it, a “tragedy of the anti-commons” is the opposite – a situation where the existence of too many rights-holders prevents some desirable purpose from being achieved. This more valid idea has been most successfully applied to patents and other forms of intellectual property. If the creation of a new product necessarily uses different techniques or components that are held under patents by private interests, it might become too difficult or expensive to negotiate a licence agreement with all parties. This means that potential products, even those in great demand (including – though not limited to – those that might fulfil a genuine social need, such as new medical drugs) may not get produced, as the costs associated with the patents are too high. This is a lose-lose-lose situation economically: neither the patent-holders, the potential manufacturer, nor those who would benefit from the product gain anything from the situation the patents have created. A related situation is the holding of broad or “upstream” patent claims (such as those on genes, or on techniques such as those used to culture stem cells) which, while they benefit the patent-holder, who maintains a monopoly covering a large research area, severely inhibit research and innovation in general by excluding non-patent-holders.
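
A toy calculation makes the licence-stacking problem concrete. The numbers below are invented purely for illustration, not drawn from any real case:

    # Royalty stacking in an "anti-commons": each of n independent patent
    # holders asks a modest royalty, but the stacked total can exceed the
    # product's margin, so the product is never made and nobody gains.
    margin = 0.30    # assume 30% of the sale price remains after costs
    royalty = 0.04   # assume each patent holder asks 4% of the sale price

    for n in (3, 5, 8, 12):
        stacked = n * royalty
        status = "viable" if stacked < margin else "blocked"
        print(f"{n} licences: stacked royalties {stacked:.0%} -> {status}")

With eight or more such licences the stacked royalties reach 32% or more of the sale price, exceeding the margin: each individually “modest” claim is rational for its holder, yet together they block the product.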

So for example a study from 2005 found that around one fifth of all the genes in the human genome are currently held under patent. Patent law allows for the ownership of genes that have been isolated from the organism they occur in, the isolation of potentially useful DNA sequences supposedly being closer to an invention than a discovery. A survey from 2003 of leading laboratory directors in the US found that 53% had “decided not to develop or perform a test/service for clinical or research purposes because of a patent”. This issue came to a head in March this year, when a US judge overturned two of the very few human gene patents which have proved to be profitable for their owners. BRCA1 is a gene associated with breast cancer – the most common type of cancer affecting women. Different people have slightly different forms of the gene, which confer different levels of risk for the cancer, so genetic tests are performed to see which form people have. A US company, Myriad Genetics, rather than holding a patent on the tests, held patents on the gene itself (and a second, related gene) – meaning development of different tests using the BRCA1 gene (including improved tests) has been effectively barred. Meanwhile only Myriad could offer the tests, for which they charge over US$3000, well outside the means of those without adequate health insurance coverage. The judge in the case – brought by the American Civil Liberties Union, the Public Patent Foundation and individual patients and medical groups – said that critics consider the idea that isolating a gene should make it patentable “a ‘lawyer’s trick’ that circumvents the prohibition on the direct patenting of the DNA in our bodies but which, in practice, reaches the same result.” Myriad are currently appealing the decision.

The “tragedy of the anti-commons” as it has been formulated is limited to showing the failure of the intellectual property rights system for the purposes of capitalist innovation itself, and only applies in some limited cases. However, anti-capitalists of course have a broader scope for criticising IPRs as an increasingly important part of the system of private ownership and capital accumulation. The real tragedy of the anti-commons – that is, of the enclosing of intellectual commons by private property – is not just to be found in problems of competing or monopolistic rights-holders blocking new innovations, but far more importantly in the restricting of access to and potential uses of existing technologies and knowledge, in the contribution this makes to the polarisation between rich and poor and the expansion of social and economic injustice, and in the directing of research and shaping of technologies according to the priorities of capital rather than the basic needs and wants of people.

Understanding technology

It is a commonplace belief that technology is an autonomous factor in society – that technological change and development occurs because of scientific advance or according to its own internal logic or rationality in order to become better, more efficient or to fulfil a particular universal purpose – that technologies are neutral tools designed simply to extend human abilities, having a one-way impact on social development.

However, such a view is mistaken, as it neglects to consider the ways in which technologies are shaped by the prevailing social institutions in which they are created. Under capitalism, control of technology and technological design is generally in the hands of capitalist enterprise, with the very narrow goal of the pursuit of profit – together with other societal institutions that work to maintain the political and economic order. So under capitalism technology can be designed to de-skill and disempower workers in relation to management; to put people out of work; to spy on us; to shape our consumption habits; and to extend the destructive projection of imperialist military power. Even “supposedly beneficial innovations are often overshadowed by dubious side effects, ranging from stress or disease at the individual level to resource-depletion and pollution at the level of the ecosphere” (Wallis, 2004).

To illustrate with another example: even in areas with seemingly obvious universal benefits, such as medical technologies (e.g. pharmaceuticals), the contradictions at the heart of capitalism shape the research orientations and technological outcomes. Diseases affecting the world’s poorest people are commonly neglected: note for example the infamous “10-90 divide”, whereby 90% of medical research is done for the potential benefit of just 10% of the world’s population. Many of the medical innovations that do make it to market are designed not to improve public health as an end in itself, but to extend the patents held by pharmaceutical companies by slightly modifying existing products. Still more “innovation” in the capitalist sense is seen in the invention and boundary-stretching of disease definitions by pharmaceutical companies (known as “disease-mongering”), in attempts to sell more drugs, including by reframing social problems created by capitalist society as personal, medical issues to be treated with chemicals.

This is certainly not to dismiss the innovation taking place under capitalism altogether, or to ignore the real advances of medical science (and certainly not to lend support to any of the various so-called “alternative medicines”). Rather my intention is simply to note, with Wallis (2004), that for anti-capitalists “innovation” should not be considered an end in itself, and that different technological formations can be judged according to how they have been shaped by the priorities of capitalism. As Wallis points out:

“It is important not to lose sight of the long-term duality between the capitalist aspect and the human aspect of everything that goes on in capitalist society. This duality parallels and in part reflects the distinction drawn by Marx between use-value and exchange-value. Use-value, because it does not readily lend itself to quantitative measurement, was often slighted in political debate (even by socialists), but it has pursued a kind of suppressed existence which is coming back to our attention now that its classic embodiments – air, water, soil, species-diversity – are increasingly threatened. With regard to innovation, the use-value dimension serves to remind us that there is an ongoing basis for creative activity that exists and flourishes despite capitalism and not because of it. This is important in terms of our recognition that while innovation is not necessarily good, it may well be good in some instances. What we can then suggest is that the basis for distinguishing negative from positive innovations is precisely the degree to which they are – or are not – shaped by the priorities of capital.” (Wallis, 2004)

‘Synthia’ in context

With respect to synthetic biology, while we can recognise the ways in which the research agenda and scientific outcomes are being shaped with respect to IPRs – and can be critical of the impacts this might have, as part of a broader critique of IPRs as part of the functioning of present-day monopoly capitalism – this does not imply that any novel technologies produced will necessarily be determined solely by the priorities of capital, or that they might not also have an inherent use-value. Nor does it make sense to consider synthetic biology as if it were a single technology, rather than a broad and diverse techno-scientific enterprise still in its relative infancy.

In many ways, then, it is understandable that Ken MacLeod, as a science fiction writer with a background in biology and computer science, should be excited by the prospects of an “organism…that got its genome not from the direct replication of another organism’s, but from a description of another organism’s, stored in a computer”, and his targets – clerics and ethicists warning about scientists “playing God”, and New Age supporters of vitalism (the belief that there’s some magical quality about living things that distinguishes them from the inorganic world) – are deserving ones, if not exactly socially powerful forces worthy of critique.

However, it is important not to lend uncritical support to any novel innovations derived from altering the genetic make-up of living things without considering the social, economic and political implications – a lesson well known to peasant activists in the Global South fighting against genetically-modified crops that threaten their livelihoods and wrest control of part of the agricultural process from producers. In this respect it is important to note the political-economic context of Venter’s current research programme, the main claims for which have been in the areas of medicine and vaccine production, and in the theoretical possibility of developing genetically engineered microorganisms capable of being put to use in the efficient production of liquid biofuels and other energy-generation projects. On the former, it remains to be seen whether genome-engineering techniques will prove useful in the near future: as Paul Nurse, the Nobel Prize-winning British geneticist, told the BBC, we already have systems for producing biological molecules in microorganisms that are far cheaper and more straightforward to use. More importantly, the development of energy-producing microorganisms is being promoted as a major potential techno-fix solution for the crises of climate change and declining energy reserves. This is the major reason for the substantial backing the work has received from the Department of Energy and the big oil companies.

However, the narrow technology-led response of the US government and big business to the issue of climate change is a dangerous one, illustrating the desperation of the rich and powerful to find a technical ‘magic bullet’ solution that can be adopted without any major social upheaval or changes to the capitalist system of production and consumption. Liquid biofuels and other such technologies are all the more favoured by this approach because, if successful, they could have the potential to rescue the automobile industry and the private transportation system – the hope, in other words, is that a technology might be created that could preserve the lifestyles of the rich in the developed world. However, they remain fantasy solutions, and it could well prove impossible to replace the massive quantities of energy derived from fossil fuels with ambient energy from living things. In the meantime the promotion and adoption of already-existing biofuels – based on agricultural products like corn, palm oil, sugar cane and soybeans, and renamed agrofuels by critics – has had a devastating impact on the world’s poor, contributing significantly to the food price crisis of 2007-8, while doing little if anything to reduce net greenhouse gas emissions.

The belief that a technological magic bullet – whether from synthetic biology or elsewhere – will appear as a deus ex machina is perfectly representative of the techno-optimism of the capitalist response to climate change. By putting blind faith in science and technology to come up with a solution, those in power can avoid committing to policies that would come into conflict with the prevailing market orthodoxy, and that would seriously address the issue of reducing greenhouse gas emissions to prevent environmental catastrophe. While those in power look to the likes of Venter to save the planet, we should instead look to ourselves to help build the worldwide movement for ecological and social justice.