Costs and economics of geoengineering
Climate Geoengineering Governance Working Paper Series: 013.
Published online 09 July 2014
About the Author
Gordon MacKerron (Gordon.Mackerron@sussex.ac.uk) was Director of SPRU from 2008 until the end of 2013 and is now Professor of Science and Technology Policy. His academic career has specialized in the economics and policy issues of electricity and especially nuclear power, in which he has published and broadcast widely. He has frequently been Specialist Adviser or invited witness before House of Commons Select Committee inquiries on energy subjects. From June to December 2001 he was on secondment to the PIU, Cabinet Office, as Deputy leader of the UK Government's Energy Review team.
Professor MacKerron chaired the Energy Panel, DTI/OST Technology Foresight Programme (1995-98). Between 2003 and 2007 he was Chair of the Committee on Radioactive Waste Management, an independent body charged with recommending the best approach to long-term radioactive waste management to the UK Government. He was a member of the Royal Commission on Environmental Pollution from 2009 until its demise in 2011.
An apparently attractive feature of some geoengineering (GE) technologies is their very low anticipated costs. Among methods expected to have extensive potential impacts on the climate, the expectation applies mainly to the use of stratospheric aerosols (SAs), but is also attributed to a lesser degree to the use of space mirrors. Much debate about GE concentrates on aerosols, partly because of the existence of a partial analogue (volcanic eruptions, especially at Mount Pinatubo) but also because of the idea that the cost of aerosol deployment will be extremely low by comparison with climate mitigation technologies. One estimate reported by the Royal Society (2009) suggests that SAs might be around 1,000 times cheaper than average mitigation costs.
One context here is the issue of lock-in (Arthur 1989) – the idea that decisions might be taken to pursue SAs because of their apparent cheapness, and that it would then be difficult to escape from the commitment, even though the real costs might turn out to be much higher than originally expected. The more serious lock-in that might affect SAs is that the ‘termination effect’ could require continued use of SAs because of the potentially catastrophic effects of stopping an SA programme. Nevertheless, to the extent that very low cost estimates might influence decisions to proceed with GE investments, it is still worth examining how far existing estimates are, or are not, realistic, and this is one focus of this paper.
The starting point here is the frequently observed phenomenon of ‘appraisal optimism’ or optimism bias (UK Treasury 2013) – the escalation of the costs of a new technology well beyond original expectations. These escalations have been frequent – virtually pervasive – where technologies are novel and/or the scale of activity has been large (‘megaprojects’), and they have overwhelmed any learning or economy of scale effects that have, in other technologies or products, acted to reduce costs over time. Because all GE technologies are at a very early stage of development, it is impossible to say – even in the most general terms – to what extent the present numbers are under-estimates of eventual cost. But to get some perspective on possible under-estimations of costs, partial historical analogues of other technologies will be employed. Although primary attention here is given to escalations in construction costs of early devices, issues of potential escalations at later stages – for example in the construction of subsequent devices and in operating costs – are also raised.
The paper also considers the distinction between engineering-based costs and the wider issues considered by economic analyses, which take account of wider social costs and offer analysis relevant to resource allocation.
For the purposes of this paper GE will be defined broadly, to include all the carbon dioxide removal (CDR) and solar radiation management (SRM) technologies that were considered by the Royal Society in its 2009 report.
2. Cost vs. Economics
A distinction needs to be drawn between issues of cost and those of economics. There is often confusion here. For example Barrett (2008) entitles a paper ‘The Incredible Economics of Geoengineering’ when he is in fact referring only to some estimates of the direct costs of one geoengineering technology. If his paper were about economics it would need, at a minimum, to have also considered the external (social) costs that might be incurred and to have offered some systematic analysis of resource allocation issues, rather than the superficial assertion that, because it is apparently cheap, geoengineering offers large financial incentives for decision-makers to substitute GE for mitigation. The great bulk of the financial estimates of GE are of the Barrett type – that is, they aim to quantify the direct costs of a given technology, and make no quantified estimate of external (social) costs, nor do they address resource allocation issues such as are raised by discounting and its impact on alternative resource allocations. Only these latter analyses should be referred to as representing ‘economics’ as opposed to ‘costs’.
Various attempts have been made to estimate what the direct costs of various GE technologies might be. Nearly all of these attempts come with caveats about the tentative nature of the results, since the technologies are either not yet implemented (especially true of SRM ideas) or have been implemented on such a small scale (for example, iron fertilisation) that it is difficult to predict how a massive scaling-up would affect costs. In almost all cases these cost estimates take the form of ‘engineering’ forecasts. Methods are of a bottom-up type and are simple to the point of being simplistic. They involve an essentially quantity-survey-based aggregation of all the various inputs (material and labour) that are expected to be needed for a project, sometimes with a relatively small contingency allowance – characteristically 20% – to allow for unforeseen events.
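The bottom-up method just described can be sketched in a few lines of code. All of the line items, quantities and unit costs below are hypothetical, invented purely to illustrate the quantity-survey-plus-contingency structure; the 20% contingency is the characteristic figure mentioned above.

```python
# Illustrative bottom-up ('quantity survey') cost estimate with a flat
# contingency allowance. All figures are hypothetical.
def engineering_estimate(line_items, contingency=0.20):
    """Sum quantity * unit_cost over all inputs, then add a contingency."""
    base = sum(qty * unit_cost for qty, unit_cost in line_items)
    return base * (1 + contingency)

# Hypothetical inputs: (quantity, unit cost in $m)
items = [
    (20, 150),   # e.g. delivery aircraft
    (500, 0.5),  # e.g. tonnes of dispersal material per year
    (1, 200),    # e.g. ground infrastructure
]
print(engineering_estimate(items))  # base of 3,450 plus 20% contingency
```

The weakness identified in this paper is visible in the structure itself: nothing in such a calculation can capture subsequent design change, regulatory pressure or input-price escalation.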
Even in their own terms these engineering-based estimates are generally far from contemporary good practice. NASA for example regularly publishes guides to cost estimating for its own potential projects – some of which are technologically relatively close analogues, and even overlap with, some GE technologies. The most recent comprehensive manual (NASA 2008) runs to 342 pages and provides a sophisticated set of practices designed to minimize risks of optimism. The manual provides advice on a range of ways of improving forecasting accuracy, including construction of systematic scenarios, engaging with analogous technology outturn experience and conducting interviews and Delphi surveys involving knowledgeable but independent analysts.
Most of the GE technology cost estimates have been produced on a shoestring budget (sometimes none) and give an impression of having been formulated, if not quite on the back of an envelope, then by a single individual over a few days. They have therefore lacked any possibility of approaching NASA standards for cost estimating, quite apart from the problem that the technologies themselves are in all cases theoretical constructs and do not have working small-scale prototypes. The inadequacies of current GE cost forecasts are therefore understandable. Nevertheless the numbers that these cost estimates produce have the apparent solidity of quantification and are taken as the best (often only) current guide to costs in much wider contexts. They are sometimes visibly taken more seriously by wider communities than the estimators themselves would take them.
The fundamental problem is the implicit or explicit assumption in the GE cost estimates that early theoretical designs will not be affected by any subsequent influences that might raise costs. These influences might be purely technical, and/or the result of regulatory, social and political pressures that – on the historic record of broadly analogous technologies – virtually always push costs upwards, often very steeply. A further significant problem is that the implementation of virtually all GE options would require a massive scaling up of the technology. This in turn would necessitate a vast increase in the demand for material inputs for these technologies, for example ships or aircraft (Klepper and Rickels 2012). This large increase in demand for particular inputs would most likely raise the prices of these goods relative to the overall price level. This would mean that – even if the technology design remained constant – real costs would escalate. This effect would of course be exacerbated if GE technologies were implemented in response to a perceived climate ‘emergency’ and a rapid response followed.
One approach in economics, as well as in everyday appraisal, is to balance considerations of cost with estimates of benefit – formally, in economics, cost-benefit analysis. In the GE studies reviewed here, benefits are expected to occur with complete certainty (at least implicitly, as no qualifications are provided) whereas the cost estimates are usually hedged with qualifications about uncertainty. Analogous technologies have shown a strong tendency towards optimism in calculations of benefits as well as costs, so that the overall cost-benefit balance in such technologies is skewed not only by under-estimating costs but also by over-estimating benefits. Given that GE estimates are presented in terms of costs alone, with benefits taken as fixed, the only issue is then the cost of acquiring those benefits.
Equally the GE cost estimates pay no quantitative attention to the issue of wider uncertainties. They usually point to a range of unwanted potential side-effects of GE technologies. Examples, especially for SRM, are possibilities of increased drought in some regions, reduced sunlight, and increasing ocean acidification. But these possibilities are not quantified in financial terms. Admittedly such quantification is effectively impossible: it requires knowledge of probabilities, of the extent of damage and a means
of putting a financial cost on the outcome of these processes. But it is clear that even if the probabilities of such side-effects were very low, the cost of the damage could be very high (Klepper and Rickels 2012; Goes et al. 2011). It is therefore misleading to give quantified and apparently often precise estimates of cost, while offering only a generalised admission that there could be large ‘external’ costs from unwanted side effects. The absence of any such quantification of external costs gives the impression that a social cost-benefit balance would show GE as having a massively positive excess of benefit over cost.
More recently, as explored below, some economists have attempted to introduce GE technologies into large-scale integrated climate models, especially the well-known DICE model, the economic components of which are of a traditional optimizing economic growth kind, using standard economic assumptions of rational behavior and based squarely on consequentialist/utilitarian ethical foundations. Whatever view is taken about the overall usefulness of the basis or results of such models, the results to date do provide some interesting further insights into the status of geoengineering economics, generally in the direction of adding to the scepticism about the cheapness of GE which other considerations in this paper suggest.
3. For novel, large-scale technologies, why are early cost estimates likely to be optimistic?
So far the argument has been – mainly by arguing from the example of analogous technologies that will be enumerated and described below – that GE cost estimates will turn out to be serious under-estimates of eventual cost. But the underlying mechanism – fundamentally, gross uncertainty about the eventual costs, benefits and side effects – would not automatically produce systematic under-estimates of cost. It would, all else equal, mean that cost estimates would be wildly wrong, but with no obvious bias in terms of under- or over-estimation: estimates would as likely be too low as too high. There are however two main explanations commonly offered for the empirical observation that, where technologies are novel and large-scale, early estimates are almost without exception large under-estimates of eventual costs.
The first of these explanations is fundamentally psychological and dates back at least as far as Keynes’ well-known reference to ‘animal spirits’ as an explanation of the over-optimistic behavior exhibited by investors and entrepreneurs. This argument has been formalized in more recent times, especially by Kahneman and Tversky (1979) and others, who argue that there are cognitive biases in the way that people process information, leading to a ‘planning fallacy’ and ‘optimism bias’. These insights have been incorporated into the development of a school of behavioural economics, which systematically examines – and largely undermines – the traditional economics assumption of rationality in decision-making. As argued by Flyvbjerg (2006), such inherent optimism may however, at least in principle, be counteracted by the application of external ‘reality checks’ by independent observers.
There is however a second underlying explanation for optimism in cost estimates which is much harder to overturn. It derives from strategic behavior in response to economic incentives. This has been a feature of principal-agent theory through its concern with asymmetry of information between principals (in this context, decision-making or funding bodies) and agents (in this context, potential GE project developers or advocates; Laffont and Tirole 1993).
However the problem goes well beyond information asymmetries and is fundamentally based on the structure of incentives and ‘contracts’ – whether the latter are implicit or explicit. Most large, complex and novel technologies are either implemented by the state, or are contracted out to private interests on a ‘cost-plus’ basis, where private actors are fully insured against risk by being paid all their costs plus a margin. In these situations, the sponsors of new technologies are characteristically a small group of designers, engineers etc. whose expertise makes them the only authoritative source of cost estimates. These groups characteristically benefit from making relatively low cost estimates because apparently low costs maximize the chances of project approval. But these groups do not bear the risks of any subsequent cost overruns, because these are normally borne from public funds (Sadnicki and MacKerron 1997). The asymmetry of incentives here means that there is likely to be a structural bias towards early cost estimates that are systematically lower than probable outturns. Flyvbjerg (2006) provides a range of cases where this phenomenon has been explicit. The underlying problem in correcting this asymmetry is that it is very difficult (i) to challenge the knowledge monopoly of the project promoters and (ii) to correct for the fact that estimators do not bear the costs of overruns (for example by introducing private risk capital into the process).
4. Geoengineering cost estimates
The Royal Society (2009) provides a useful summary of the estimated costs of a range of GE technologies. In all cases where it has proved possible, the original sources of these estimates have been checked and further – more recent – estimates have also been added. Table 1 below reproduces the Royal Society cost figures for SRM technologies and Table 2 adds the Society’s views on CDR options. Note that the Royal Society records quantified cost estimates for SRM technologies (Table 1) but only provides a rough qualitative assessment for CDR options, though some of the original sources do provide quantified cost estimates (for example Zhou and Flynn (2005) and Schuiling and Krijgsman (2006)). For CDR technologies the Society used the more subjective categories of low, medium and high to evaluate costs (Table 2).
Table 1: Cost estimates for SRM in the Royal Society report (columns: Technology; Cost/year per unit of radiative forcing)
Other direct cost estimates have appeared since the Royal Society report. Four examples are given below. Note that the basis on which cost estimates are presented varies, so it is not always possible to compare estimates directly with each other.
• Blackstock et al. (2009), who argue, in an ‘emergency framing’, that ‘modest’ reductions in incoming solar radiation (in practice probably SA) could be achieved at a total cost of $30 bn.
• IMechE (2009), who choose a limited range of potential technologies and suggest that air capture might have an (eventual) direct cost of as little as $30/tonne of carbon dioxide, a much lower figure than quoted elsewhere.
• Robock et al. (2009), who quote annual figures for the costs of injecting 1 Tg of sulphur into the stratosphere of between $375m. (a very low figure indeed) and $30 bn., depending on delivery method.
However Robock et al. also give much more attention to the risks of SA technology – 17 are listed – though, as for other authors, these are not quantified in any way.
• McClellan et al. (2012), who suggest that to deliver 5 Mt of albedo modification material into the stratosphere might cost between $2 bn. and $8 bn. annually. They explicitly ignore issues of risk and effectiveness (i.e. benefits) but are careful to point out that these engineering-based estimates are for “systems that are purely theoretical and are at or beyond the limits of today’s materials and technologies” (McClellan et al 2012, p. 10).
Three main issues arise from Tables 1 and 2 and the other studies quoted above.
First the stand-out result is that cloud albedo and SAs are on all reckonings phenomenally cheap, apparently coming in (Royal Society 2009, and comparably in the other studies) at 1000 times less expensive than conventional mitigation. This suggests that the level of optimism that may be present in these low figures would have to be unprecedentedly large for these options to be as expensive as mitigation in direct cost terms. Thus direct costs could rise by a factor of 100 and there would still be a large advantage for cloud albedo and SA over mitigation. However of these two technologies, only SAs appear to have almost unlimited technical potential.
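The headroom implied by the 1,000:1 ratio can be made explicit with some simple arithmetic. The ratio is the Royal Society (2009) figure; the escalation factors are chosen purely for illustration.

```python
# If SA direct costs are 1/1000 of mitigation costs, how large a cost
# escalation would erase the advantage? Ratio from Royal Society (2009);
# escalation factors chosen for illustration.
mitigation_cost = 1000.0           # arbitrary units
sa_cost = mitigation_cost / 1000   # the reported 1,000:1 ratio

for escalation in (2, 10, 100, 1000):
    escalated = sa_cost * escalation
    print(f"{escalation}x escalation -> SA/mitigation ratio = "
          f"{escalated / mitigation_cost:.3f}")
# Even a 100x escalation leaves SA at 1/10 the direct cost of mitigation;
# only an (unprecedented) 1000x escalation reaches cost parity.
```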
The identification of geoengineering with SAs in much popular writing therefore seems to be based on its apparently low cost, its unlimited technical potential and a certain degree of confidence in the likely physical benefits because of the analogy to volcanic eruptions. However it seems probable that the direct costs of SA or cloud albedo technologies might still rise very substantially (see discussion below about analogous technologies), taking some of the ‘shine’ from their currently perceived status. But what will likely be more important is the possibility that negative side effects (external costs) could easily exceed the effects of these potential increases in direct costs (see discussion below on modeling).
Second, while the characterization of CDR methods in terms of low, medium and high cost is a possible recognition of the uncertainties inherent in these technologies, it is not clear why the uncertainties are treated in the Royal Society report as more acute for CDR than for the sometimes more exotic SRM options. However the report does argue that for apparently feasible and high-physical-potential CDR technologies, direct CDR costs, as presently estimated, are in most cases higher than conventional mitigation. This means that, allowing for the optimism that is probably present in CDR estimates, the chances of CDR technologies becoming competitive with the direct costs of mitigation are small. Indeed it is arguable that several of the CDR technologies are simply new forms of mitigation rather than a genuinely distinct class of GE option. The discussion below on escalations and modeling is therefore effectively most applicable to SRM methods because, on cost grounds, the CDR methods seem uncompetitive (which is not to say that they may not be preferred to SRM on other grounds).
Third, it is clear that for all options, but especially SRM technologies, the state of technological development is exceptionally limited, and in most cases does not yet extend to any kind of even small-scale demonstration. As McClellan et al. say, we are here dealing with ideas formulated in theory rather than any practical devices at any scale. In the well-known characterization of Technology Readiness Levels they qualify as TRL 1: this is relevant to the escalations that analogous technologies have experienced as discussed below.
5. Cost escalation in analogous technologies
There is now quite wide-ranging agreement that engineering-based cost estimates for novel and large-scale technologies are liable to produce highly optimistic results: e.g. Merrow et al (1979) and Merrow (1988); Bacon et al (1996); NASA (2008); and several works by Flyvbjerg (e.g. 2006) and associates. This can be for a mixture of the reasons suggested above: psychological ‘optimism bias’; and structural incentives to underestimate costs. In addition, as also mentioned above, there can be escalations due to a need to change designs for purely technological reasons, and there can also be a relative price effect as scaling up raises input prices and frustrates hopes for economies of scale (either of size as devices get bigger, or of number, due to mass production of multiple devices). It is impossible to separate out these factors in any given ex post evaluation of actual (outturn) costs compared to earlier forecasts, but arguably most or all of these factors will generally play a role and they may interact.
Finding analogous technologies as a guide to potential under-estimation of costs is an established approach in the attempt to correct for cost under-estimation. It is recommended for example by NASA (2008), by Flyvbjerg et al. (2003) and subsequently, and by Kern (2013) in a slightly broader context. The selection of analogues for GE technologies, especially SRM technologies, is however problematic. This is because GE technologies appear to be at an extreme end of a spectrum both of novelty in technological terms, with virtually nothing beyond theoretical calculations yet available, and of scale in terms of potential implementation (for example the 8,100 barges that might be needed to implement ocean downwelling: Zhou and Flynn (2005)). This suggests that the potential cost escalations might be expected to exceed almost any previously observed escalation in analogous technologies.
Candidates for analogues are innovative military technologies, where data are unfortunately not generally reliable or detailed enough to be useable, and various civilian technologies, including novel chemical process plants and nuclear technologies. These latter include nuclear fusion (probably the best analogue in terms of the demanding nature of the technology) and elements of nuclear fission, including breeder and thermal reactors, as well as elements of the ‘back end’ of the nuclear fuel cycle, where both decommissioning and waste management provide data. Large-scale energy projects, especially one-off projects like large hydro-electric schemes might also be relevant.
Unfortunately there are limited numbers of rigorous evaluations of cost outturns relative to early cost estimates for these analogues, probably because the institutional incentives to under-estimate costs also work to discourage dispassionate evaluations of the relation between cost estimates and outturns. Some of these evaluations are presented here. What is common to all of them is that they all show a failure of learning to reduce costs as experience accumulates, even when early uncertainties are reduced and there is an attempt at replication.
5a. RAND studies
RAND did pioneering work for a range of complex civilian technologies in the late 1970s and early 1980s (Merrow et al. 1979; Merrow 1988), mostly to help inform decision-making on military technologies. While all of these exhibited substantial technological novelty, for many of them (for example chemical process and nuclear power plants) such novelty was generally a good deal less than in the case of SRM technologies; and while they were individually large, they were very small by comparison with a large deployment of GE. This suggests that the overruns they experienced are liable to be small relative to those that might occur for GE options in SRM.
The Merrow studies were rigorous and covered a large number of projects – 56 in the 1988 ‘megaproject’ study, including the Itaipu dam and the SASOL synthetic fuel project. The 1979 study is however the most useful. It divided estimates into four periods in the pre-construction estimating process:
• initial (where only concepts have been considered), which may be taken to correspond roughly to TRL 1 as outlined above;
• preliminary (where design work is up to 10% complete);
• budget (where design is 30% to 50% complete); and
• definitive (where design is 90% to 100% complete).
Escalations were found at all stages, including from definitive to actual cost. For present purposes it is the extent of escalation between initial and actual cost that matters because SRM technologies are at best at the ‘initial’ or TRL 1 stage. Aggregating across the escalations between each of these stages, the average cost escalation was 286% between initial estimate and actual cost.
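The way stage-by-stage escalations compound can be illustrated with a short calculation. The four RAND stages are as listed above, but the individual stage factors below are invented for illustration, chosen so that their product roughly matches the reported 286% overall escalation; Merrow et al. do not necessarily decompose the aggregate in this way.

```python
# Stage-to-stage escalation factors compound multiplicatively. The RAND
# stage names are real; the individual factors are hypothetical, chosen so
# that the product approximates the ~286% escalation reported between the
# 'initial' estimate and actual cost.
stage_factors = {
    "initial -> preliminary": 1.70,
    "preliminary -> budget": 1.50,
    "budget -> definitive": 1.26,
    "definitive -> actual": 1.20,
}

overall = 1.0
for stage, factor in stage_factors.items():
    overall *= factor

escalation_pct = (overall - 1) * 100
print(f"overall factor {overall:.2f} = escalation of {escalation_pct:.0f}%")
```

The point of the multiplicative structure is that several individually modest stage escalations are enough to produce a near-quadrupling of the initial estimate.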
5b. The World Bank
The study by Bacon et al (1996) is chosen here, not because the cost overruns were very large, but rather because it illustrates two important results from most studies: first, the very strong tendency towards optimism across a large number of mostly quite conventional though often relatively large-scale technologies; and second, that over time there was no evidence of learning from earlier misestimations that might have been fed back into later attempts to estimate costs more accurately. Thus of 135 power projects overall, the average cost overrun was 21%, but for the more unique hydro projects the overrun averaged 27% (it is not possible to determine where the original estimates might fall on the RAND scale outlined above). Only 16 of the 135 projects came in below budget; these were mostly thermal projects, where replication is in principle more possible than for hydro, and the ‘pessimistic’ margin was usually small. As mentioned above, there was no improvement in the estimation accuracy of later projects compared with earlier ones.
5c. Flyvbjerg and megaprojects
Since the early 2000s, Flyvbjerg and colleagues have been working on megaprojects and their discontents (Flyvbjerg et al 2003). They have concentrated primarily on very large transport-based projects, all of which are substantially less novel technologically than GE technologies, and all of which are significantly smaller in scale than is the case for GE. They have publicized the idea of reference class forecasting, which essentially involves trying to anticipate future cost escalations by matching new projects against historic project analogues and seeing how optimistic these earlier analogue estimates were.
Among the results they provide for escalations in past megaprojects (again with the caveat that it is not clear which of the RAND categories the early estimates corresponded to) are the following. Rail projects have an average overrun of 45%; bridges and tunnels 34%; and roads 20% (Flyvbjerg 2006). These are quite modest overruns compared to those experienced in the RAND studies, but again these projects are, by comparison with GE technologies, quite small and involve little technological novelty. Rail escalations are highest, as might be expected from their generally greater uniqueness. As in the World Bank case there is no evidence of learning or improvement in estimating processes over time, and very few projects exhibit appraisal pessimism.
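Reference class forecasting can be sketched as follows: rather than trusting a project's own estimate, an uplift is applied based on the distribution of overruns in a class of comparable past projects. The function name and the overrun data below are hypothetical, not taken from Flyvbjerg's work; only the general approach of uplifting to a chosen percentile of historical outcomes follows the idea described above.

```python
# A minimal sketch of reference class forecasting: instead of accepting the
# project's own estimate, apply an uplift drawn from the distribution of
# historical overruns in a comparable class of projects. Overrun data are
# hypothetical.
def rcf_uplift(base_estimate, historical_overruns, percentile=0.8):
    """Uplift the estimate to cover `percentile` of past outcomes."""
    ranked = sorted(historical_overruns)
    idx = min(int(percentile * len(ranked)), len(ranked) - 1)
    return base_estimate * (1 + ranked[idx])

# Hypothetical overruns (as fractions) for a reference class of rail projects
overruns = [0.05, 0.10, 0.20, 0.30, 0.45, 0.50, 0.60, 0.80, 1.00, 1.40]
print(rcf_uplift(100.0, overruns))  # 80th percentile overrun doubles the estimate
```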
5d. Nuclear technologies
So far, cases presented only refer to the excess of final construction costs compared to earlier estimates: the nuclear cases below also allow consideration of cost escalation between the final costs of devices built early and later in the same programme, and also escalation in operating costs of a given completed device.
Fusion A promising analogue when considering GE options is nuclear fusion. This is a technologically immensely difficult technology, with challenges that often exceed those found in SRM options. It is also genuinely global in scale, with funding for the current ITER project in France coming from Japan, Russia, the EU and the USA, and very long-lasting in development: work started seriously in the 1950s and commercialization is still at least 40 years away. However the problems of leaning too heavily on fusion experience are twofold: first, we have no idea what the eventual cost will be; and second, data shortages and technological/project discontinuities prevent any kind of consistent look at escalations to date.
However there is one indication of the extent of cost escalation in fusion. When the international parties signed the agreement to build the current research device ITER in 2006, the announced construction cost was 5.2 bn. euros. By 2014 this number had increased to 15 or 16 bn. euros (Brumfiel 2012). Because most of the expenditures on ITER are contributions in kind from the members of the agreement, it is difficult to pin down a precise number for the estimates, and inflation is not taken into account in the 2012 analysis; however expected costs now seem likely to have come close to a tripling compared to 2006. But ITER is far from complete and represents only a single stage in the process of developing a viable commercial technology. This rather short interval nevertheless demonstrates the potential for large overruns in complex, novel and highly demanding technology.
Nuclear reactors Unlike fusion, fission reactors have been built in some number. The most complex and challenging have been fast neutron reactors, often designed to work in ‘breeding’ mode, where initial plutonium fuel, in association with uranium, might breed more plutonium fuel than the original input. Very few of these reactors have been built (the largest is Super-Phenix in France; a prototype and a demonstration reactor were built at Dounreay in the UK) and it is probable that their outturn costs were substantially higher than the early estimates. However it has proved difficult to discover reliable data and so attention turns to conventional thermal reactors, several hundred of which have been built at large scale.
The first example is the Advanced Gas Cooled Reactors (AGRs) in the UK, the first generation of which was built between the 1960s and 1980s. Four double-unit stations were built in England to three different detailed designs and on the basis of a single small prototype. The cost overruns for the four were 30%, 63%, 142% and 158% (MacKerron 1983) and in the last two cases these data were produced before final reactor completion and so the eventual overrun might have been larger. In terms of the RAND categories of estimate stages, the base for these calculations was probably close to the third stage (budget).
The second set of examples relates to the very large and technically successful programme of Pressurised Water Reactors (PWRs) built in France in the 1970s and 1980s. This programme aimed as far as possible at standardization and 55 reactors of similar design were built. This example differs from all those so far presented, because it relates to escalations in real costs after the first units were built. The last reactors built in the French programme (the N4 design) turned out to have real costs 130% higher than the first units (Grubler 2010), even though these later reactors represented technical improvements relative to the early units. As Grubler remarks, this is a clear case of negative learning. It suggests that even when a technology becomes established, later units may cost substantially more than earlier ones. This could be because the relative price of material inputs has risen or because political, social or regulatory factors raise costs.
Further PWR examples concern plants not yet completed. Two PWRs, both of the same EPR design, are currently under construction in Europe, one each in Finland and France. The first reactor, at Olkiluoto in Finland, is currently expected to cost some 8.5 bn. euros, compared to an original (non-inflation-adjusted) turnkey, fixed-price contract of 3.2 bn. euros in 2003 (World Nuclear Association 2014). After allowing for inflation this increase amounts in real terms to around 130%. The final cost could be even higher, as the plant is not yet complete and Areva, who are building the plant, and the Finnish utility consortium customer are now cross-suing each other for several billions of euros. The Flamanville PWR being built in France is also experiencing major escalations in estimated cost, up from the original estimate in 2005 of 3.3 bn. euros to 8 bn. as at 2012 (World Nuclear News 2012). Again after allowing for inflation, this represents an increase of over 100% compared to the initial announcement. Finally, the two PWRs that EDF may build in the UK may, according to EDF (2013), now cost 16 bn. pounds, in RAND terms probably a definitive estimate. This represents an estimated cost 138% more than the top end of the uncertainty range presented by the then-relevant UK Department (Department of Business Enterprise and Regulatory Reform 2008) six years ago in what was probably close to a preliminary estimate.
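The real-terms escalations quoted above can be reproduced with a simple deflation calculation. The following is an illustrative sketch only: the 2% annual inflation rate is an assumption chosen for demonstration, not a figure taken from the paper.

```python
def real_escalation(nominal_original, nominal_current, years, annual_inflation):
    """Percentage real-terms cost escalation: restate the original
    estimate in current prices, then compare with the current estimate."""
    original_in_current_prices = nominal_original * (1 + annual_inflation) ** years
    return (nominal_current / original_in_current_prices - 1) * 100

# Olkiluoto figures from the text (bn euros, 2003 contract vs current
# estimate); the 2% inflation rate is an illustrative assumption.
print(round(real_escalation(3.2, 8.5, 11, 0.02), 1))
```

With a 2% assumed rate the Olkiluoto figures give a real escalation of roughly 114%; the approximately 130% quoted in the text implies a somewhat lower average inflation rate over the period, since a lower deflator raises the real-terms increase.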
The third set of examples concerns the so-called ‘back end’ of the nuclear fuel cycle: those activities that are carried out after nuclear fuel is irradiated, reactors are closed and wastes have to be managed. The UK’s large Thermal Oxide Reprocessing Plant (THORP) had an outturn cost between 230% and 270% above an earlier estimate (probably corresponding to the budgetary category in RAND’s schema) (Berkhout and Walker 1990).
A second set of cases within this third category relates to the UK nuclear programme at the time of attempted privatizations in 1988-89 and 1996. All concern escalations in estimates before projects are completed and without any changes in the technical scope of the projects. They therefore represent a clear case of how estimated costs can change when the structural incentive to under-estimate costs is undermined and the estimators have to make more realistic estimates because they would for the first time have to meet any costs of overruns themselves, rather than passing them on to consumers or taxpayers.
• When the state utility the Central Electricity Generating Board (CEGB) owned the first ‘Magnox’ generation of UK reactors, they could pass on all incurred costs to consumers. But when Government made an attempt to privatise those reactors, the new private owners would have to meet all costs themselves and then sell the electricity into a market whose price they could not determine. The prospective owners therefore re-assessed the estimated costs of decommissioning the Magnoxes. Up until 1988 these had been assessed at an average of 211m. pounds per station; overnight the estimate rose to 599m. pounds without any change to the expected technology to be used (MacKerron 1991).
• Similarly, but more spectacularly, the state-owned back-end company British Nuclear Fuels (BNFL) had previously estimated the costs of decommissioning its own facilities (for reprocessing spent fuel and managing wastes) at 438 m. pounds. Under this system, the great majority of this decommissioning work would be paid for by its customers, primarily the CEGB. Under the proposed new structure after privatization, it would now fall to BNFL to pay for its own decommissioning expenses. Again, overnight, the BNFL estimate rose eleven-fold, from 438 m. to 4.605 bn. pounds (House of Commons Energy Committee 1989). BNFL rationalized this in part by arguing that they now planned a return of their sites to greenfield status, which was not previously the case, but it seems clear that the great bulk of the large increase was due to the changed incentive structure, where all decommissioning costs were now internalized within BNFL.
• The third example concerns the state-owned research agency the Atomic Energy Authority (AEA). In 1996, Government decided to privatise those parts of it that could be attractive to private investors while retaining the liabilities in the public sector. AEA’s previous estimates of its own decommissioning and waste management liabilities, all of which would fall to the UK taxpayer, amounted to some 3.4 bn. pounds. While this structure for the nuclear liabilities would remain the case after the non-nuclear parts
of the AEA were sold off, Government appointed independent advisors to check on the realism of the AEA liability estimate. This led to an immediate change in the liability cost estimate from a range of 3 to 4 billion pounds to 8.2 bn. (NAO 1993). This number was later reduced to 7.2 bn. but a new uncertainty range was added, amounting to 4.9 to 11.1 bn. pounds. These changes were not accompanied by any expectation of change in design or technical scope, and they provide another example of how estimates can change quite radically once the incentive structure is itself revised to internalise costs.
The fifth and final example from nuclear power concerns escalations in operating costs after the technology is in place. Low level radioactive waste in the UK is permanently disposed of at Drigg, a site close to Sellafield in north west England. Originally the low level waste was tipped into unlined trenches and then covered over. Regulatory change meant that in the 1990s it became necessary to encapsulate and package the wastes and place them in concrete vaults before coverage. This raised the charges per cubic metre of disposal by a factor of 20 (Marshall 1994). This was partly due to the need for new capital investment, so the change was not just in annual operating costs, but it does show that regulatory tightening can have major impacts on costs after a technology is established.
5e. Conclusions on analogues
The above examples show first that where technology is complex, novel, large-scale and often embroiled in major political, social and regulatory issues, cost escalations are likely to occur – sometimes to a very high degree – between early estimates of cost and eventual outturns.
Second, they also show clearly in some of the nuclear cases that when the structure of incentives is changed so that costs become internalised in the estimating organization, large amounts of prior ‘strategic optimism’ are revealed, often with very large increases in cost estimates. Third, the French nuclear programme shows that the outturn costs of a new technology may not represent a peak or even a ceiling in relation to the costs of new investments of the same type and in the same country.
Finally, the low-level nuclear waste example shows that after a specific piece of technology is established its operating costs may rise steeply as a result of subsequent regulatory change.

What is the relevance of these analogies to GE technologies, especially SRM and SAs in particular? It is impossible to make any kind of quantitative estimate of the extent to which actual costs of geoengineering technologies might exceed current estimates, because we do not possess knowledge about the ways that (mainly notional) geoengineering technologies would actually develop. But relative to the cases described above, GE technologies are probably at an even earlier stage of development than any of the analogues; they are substantially larger in scale, being potentially multi-national or global; and may be embroiled in more discordant political, social and regulatory processes. This suggests that the scope for outturn costs to rise far beyond current estimates is very large, probably in the multiple hundreds of percent; that later SRM devices may be more expensive than early ones; and that operational costs can also rise even after a single example of a technology is established. Because SRM cost estimates, especially for SAs, are currently so much below competitive mitigation technologies, they may of course still prove less expensive than mitigation options. But the evidence here does suggest that the approach of some studies – which is to assume that SAs are essentially free (Barrett 2008) – is extremely risky, and that SAs will probably have significant, currently unanticipated cost escalations between estimate and outturn and possibly also between first devices and later devices, and in operational costs.1
In relation to nuclear reactors, (Admiral) Rickover (1970) made the following observation in 1953: ‘An academic reactor…almost always has the following characteristics: (1) it is simple (2) it is small (3) it is cheap (4) it is light (5) it can be built very quickly (6) it is very flexible… (7) very little development will be required…. (8) the reactor is in the study phase. It is not being built now. A practical reactor…[has] the following characteristics: (1) it is being built now (2) it is behind schedule (3) it requires an immense amount of development on apparently trivial items (4) it is very expensive (5) it takes a long time to build because of engineering development problems (6) it is large (7) it is heavy (8) it is complicated.’ Readers are invited to make their own speculation on the extent to which such issues may be relevant for GE technologies.
6. Economic studies of geoengineering
Until recently, very little work had been undertaken by economists on geoengineering. In recent years this has started to change to a small extent, and a few studies have appeared using traditional tools such as cost-benefit analysis (including discounting) and modeling of alternative outcomes, in an attempt to aid resource allocation decisions. Two such studies are considered here.
First it is worth briefly noting again the widely-cited paper by Barrett (2008) with its startling title ‘The incredible economics of geoengineering’. ‘Incredible’ has two possible meanings: formally, that something is literally not credible; or colloquially, that something which appears not credible is in fact credible and remarkable. It is clear from reading Barrett’s paper that he is using the word in its colloquial sense. He endorses the idea that SAs will be extremely inexpensive and raises no significant qualifications to this idea. He is also dealing, as in all the other geoengineering cost estimates, with engineering-based cost estimates, not economics.
The first of the two economic studies considered has a direct riposte to Barrett in its title: ‘The real economics of climate engineering’ (Klepper and Rickels 2012). It first considers optimism in the engineering cost estimates of some SRM methods and argues that the cost of space mirrors may be 8 or 9 times higher than reported by the Royal Society (2009). The authors argue that a neglected issue in comparisons between CDR and SRM is that while a CDR approach requires only a single application, SRM methods need to be employed for as long as the excess carbon remains in the atmosphere. They then give attention to the wider, external effects of SRM technologies (which could be favourable or unfavourable, and geographically unequally distributed). They argue that the unfavourable external costs may be very large, but that even so, the direct and indirect costs of SRM will probably still be between 10 and 100 times lower than those of CDR.
The second and most interesting study of all is that of Goes et al (2011), also with a provocative title: ‘The economics (or lack thereof) of aerosol geoengineering’. As the title implies, this paper again considers only SAs and sets out four possible risks:
• the possibility that aerosol injection might be discontinuous, which they argue is a high risk because of the potential for war, breakdown of international agreements etc.
• significant damage to the polar ozone layer
• continuing acidification of the world’s oceans as carbon emissions continue to rise
• modifications to El Niño such that African and Asian summer monsoons are disrupted.
The authors use and modify the integrated assessment model DICE, the economic parts of which involve an optimal economic growth model maximizing the sum of discounted future utilities. They add to this model a more refined climate model with improved representation of abrupt climate change, and represent SAs so as to include both continuous and interrupted future deployment modes. They then investigate four strategies or
scenarios: business as usual (BAU); optimal abatement using abatement only; continuous and discontinuous use of aerosols; and a mixed and variable strategy of abatement and aerosol use. Among their results the most interesting are, in all cases using the assumption (challenged by the earlier analysis in this paper) that SAs are costless:
• BAU would lead to a loss of Gross World Product (GWP) of 3%
• abatement-alone could reduce this loss to 1%
• continuous aerosol use could reduce the loss of Gross World Product to under 1%
• if there were discontinuity in aerosol injection (cessation in the year 2065 in their assumption) the overall loss in GWP would be 6% because of the immediate heating effect that would result.
The mixed abatement/SA strategy allows the use of SAs to be determined by the model, and the result they obtain here is that SA would be postponed until a climate ‘emergency’ (no clear definition is provided) occurred. They argue that this involves a risk transfer to future generations.
Overall, Goes et al. argue that geoengineering represents an economically efficient strategy only if probability of intermittency is very low and the external economic damages caused by aerosol use are very limited. They apply a cost-benefit test allowing for damage caused by SAs and produce a scenario map plotting probability of interruption of SAs and such damage: their conclusion is that SAs fail a conventional cost-benefit test in the great majority of assumed future cases.
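The shape of this result can be illustrated with a deliberately simplified expected-value calculation. The sketch below reuses the GWP-loss figures quoted from the text, but the linear probability weighting is a simplification for illustration, not a reproduction of the authors' DICE-based analysis.

```python
def expected_gwp_loss(p_interrupt, continuous_loss=0.9, interrupted_loss=6.0):
    """Expected loss of Gross World Product (%) under aerosol use, as a
    probability-weighted mix of the continuous-use and interrupted
    outcomes. The outcome values echo the figures quoted in the text;
    the linear mix is an illustrative simplification."""
    return (1 - p_interrupt) * continuous_loss + p_interrupt * interrupted_loss

ABATEMENT_ONLY_LOSS = 1.0  # % of GWP lost under abatement alone, from the text

def aerosols_pass_test(p_interrupt):
    """SAs 'pass' this toy cost-benefit test only if their expected
    GWP loss beats the abatement-only strategy."""
    return expected_gwp_loss(p_interrupt) < ABATEMENT_ONLY_LOSS
```

On these toy numbers, aerosols beat abatement alone only when the interruption probability is below roughly 2% (the break-even solves 0.9 + 5.1p = 1), which echoes the conclusion that SAs are efficient only if intermittency is very unlikely.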
These results cannot be taken as definitive, because the assumptions that have to be made in such modeling and cost-benefit analysis are heroic, to say the least. (There are, for example, methodological concerns about applying cost-benefit analysis where huge, non-marginal changes are being contemplated.) Further, their negative results with respect to SAs apply to a scenario in which geoengineering alone is pursued and mitigation is abandoned. Less stark results follow when they explore the mixed abatement/SA strategy further. Clearly much more work is needed in this very early area. But the conclusions of Goes et al at least serve as a reminder that the economic case for SRM technology, and in particular SAs, is far from open and shut. This contradicts much of the conventional wisdom about the costs and economics of SAs.
The scope of this paper has broadened considerably from its initial intended focus on possible escalations in the construction cost of geoengineering technologies between early estimates and final outturns. It has argued that the quantified presentation of GE construction costs, accompanied by only qualitative presentations of external costs and risks, has given a misleading impression that the total costs of some GE technologies, and especially stratospheric aerosols, are so trivial as to be hardly worth considering. There has been confusion between estimates of cost, based on engineering methods, and economics, where the latter considers wider social costs and offers analysis of resource allocation issues.
By introducing a wide range of historic experience in analogous technologies, the paper has argued that the direct construction costs of SRM technologies might well be very much larger than currently expected. It has also argued that further escalations might occur after devices are constructed: this could include cost increases in later devices of similar generic intent (learning will most likely be counteracted by rising relative price levels and/or regulatory tightening); and operational costs might also rise, again as a result of regulatory intervention. Finally, and even without introducing more realistic assumptions about direct costs, the introduction of economic modeling, with (heroic) attempts to quantify external costs and risks, shows that it is far from impossible to imagine that geoengineering might in the end prove more costly to the world than alternative strategies. Hence any lock-in that might be encouraged by an expectation that – whatever else may be true about geoengineering – it is at least a cheap option, could be a cause for regret.
Arthur, B. (1989) ‘Competing technologies, increasing returns and lock-in by historical events’ Economic Journal 97: 642-665.
Bacon, R., Besant-Jones, J., and Heidarian, J. (1996) Estimating construction costs and schedules: experience with power generating projects in developing countries World Bank Technical Paper 325, Energy Series, Washington D.C.
Barrett, S. (2008) ‘The incredible economics of geoengineering’ Environmental and Resource Economics 39: 45-54
Berkhout, F. and Walker, W. (1990) THORP and the economics of reprocessing SPRU, University of Sussex, November
Blackstock, J. and Kronin, S. (lead authors) (2009) Climate responses to climate emergencies Novim, Santa Barbara, July.
Brumfiel, G. (2012) ‘Fusion project struggles to put the pieces together’ Nature, 26 October.
Department of Business, Enterprise and Regulatory Reform (2008) Meeting the challenge: a White Paper on nuclear power HMSO, January
EDF (2013) Hinkley Point press conference, 21 October
Flyvbjerg, B., Bruzelius, N. and Rothengatter, W. (2003) Megaprojects and risk: anatomy of ambition, Cambridge University Press.
Flyvbjerg, B. (2006) ‘From Nobel Prize to project management: getting risks right’ Project Management Journal 37:3, 5-15.
Goes, M., Tuana, N. and Keller, K. (2011) ‘The economics (or lack thereof) of aerosol geoengineering’ Climatic Change, DOI 10.1007/s10584-010-9961-z
Grubler, A. (2010) ‘The costs of the French scale-up: a case of negative learning by doing’ Energy Policy 38:9, 5174-5188
House of Commons Energy Committee (1989) British Nuclear Fuels plc: Report and Accounts 1987-88, 3rd report, session 1987-88, HC 50, 5 April
I Mech E (2009) Geoengineering: giving us the time to adapt? August
Kahneman, D. and Tversky, A. (1979) ‘Prospect theory: an analysis of decision under risk’ Econometrica, 47:2, March
Kern, F. (2013) ‘Learning from partial historical analogues’ Paper to the workshop on geoengineering: issues of path-dependence and socio-technical lock-in. UCL, London, 25 October
Klepper, G. and Rickels, W. (2012) ‘The real economics of climate engineering’ Economics Research International Vol 2012, Article ID 316564
Laffont, J and Tirole, J. (1993) A theory of incentives in procurement and regulation, MIT Press
MacKerron, G. (1983) ‘Construction costs and times’ ECC/P/2 Proof of Evidence to the Sizewell B Public Inquiry, Electricity Consumers’ Council, July
MacKerron, G. (1991) ‘Decommissioning costs and British nuclear policy’ Energy Journal Special Issue, 12.
Marshall, P. (1994) Nucleonics Week 5 May
McClellan, J. Keith, D. and Apt, J. (2012) ‘Cost analysis of stratospheric albedo modification delivery systems’ Environmental Research Letters 7:3, 1-8.
Merrow, E., Chapel, S. and Worthing, C. (1979) A review of cost estimation in new technologies: implications for chemical process plants R-2481, Rand, Santa Monica, July
Merrow, E. (1988) Understanding the outcomes of megaprojects R-3560, Rand, Santa Monica
NAO (National Audit Office) (1993) The cost of decommissioning nuclear facilities, HMSO, London
NASA (2008) 2008 NASA cost estimating handbook
Rickover (1970) Paper reactors, real reactors, published in AEC Authorizing Legislation: Hearings before the Joint Committee on Atomic Energy
Robock, A., Marquardt, A., Kravitz, B. and Stenchikov, G. (2009) ‘Benefits, risks and costs of stratospheric geoengineering’ Geophysical Research Letters, 36:19
Royal Society (2009) Geoengineering the climate: science, governance and uncertainty RS 1636, September.
Sadnicki, M. and MacKerron, G. (1997) Managing UK nuclear liabilities STEEP Special Report 7, SPRU, University of Sussex, October
Schuiling, R. and Krijsman, P. (2006) ‘Enhanced weathering: an effective and cheap tool to sequester carbon dioxide’ Climatic Change 74, 349-354.
UKAEA (1994) Annual review 1994-95, Harwell, March
UKAEA (1997) Annual review 1996-97 Harwell
UK Treasury (2013) Green book supplementary guidance: optimism bias 21 April
World Nuclear Association (2014) Nuclear power in Finland March
World Nuclear News (2012) ‘Flamanville costs up 2 billion euros’ 4 December
Zhou, S. and Flynn, P. (2005) ‘Geoengineering downwelling ocean currents: a cost assessment’ Climatic Change 71:1-2, 203-220.