Discussion Paper Prepared for the Annual General Meeting of the Canadian Association for the Club of Rome held on 16 June 1999. This paper, along with other contributions to this Meeting, was subsequently published in the Proceedings of the Association, Series 2, No. 2, (Autumn) 1999.
This paper was prepared as a background document for the study of the future of energy through the next century. The main characteristic of the present fossil fuel industry is its great size which permits only a slow rate of change. The importance of determining the sustainable limit of carbon dioxide emissions to the atmosphere is stressed as is the magnitude and timing of the peak in the world production of oil from conventional sources. The possible role of the capture and sequestering of carbon dioxide is examined in the context of the production of Coal Bed Methane to augment the supply of natural gas which has been identified by the National Energy Board as potentially significant in the next decades. Techniques for the generation of atmospherically-neutral electricity and an energy supply suitable for vehicles equipped with fuel cells are then explored under Canadian conditions based on this possibly important new energy source.
This paper was prepared as a background document for this meeting. Reviews of this nature must inevitably go over some old ground and the indulgence of the reader is sought since some of the material to follow repeats what has appeared in previous papers.
Carbon dioxide passes ultimately to the ocean where it finally becomes fixed in solid forms such as carbonates though the transfer steps involved are complex. This passage is complicated by the very large interchange between the biota, the land surface and the air, and there is still a wide (and disputed) gap in the mass balance. But it is certain that more carbon dioxide is being released from the fossil fuels than can be transferred across the air/ocean boundary. The question then becomes: How much can be released to the atmosphere in a sustainable situation?
Based upon current knowledge, constant concentrations in the atmosphere would be achieved if the rate of release from the fossil fuels were reduced to the range of 2.5-2.8 GT C per year. This range will be termed here the sustainable level, although there is still a major proviso. The ocean thermohaline circulation, which drives such currents as the Gulf Stream, may be affected by an increase in greenhouse gas emissions. Predicted increases in rainfall in northern latitudes over the North Atlantic downwelling sites may be expected to reduce by dilution the already small forces which drive the flow. These forces arise from the greater density of the seawater due to a combination of increases in salinity and decreases in temperature. If this effect proves to be significant, it is not known at present what the sustainable level of emissions may be. Increases in greenhouse gas concentrations in the atmosphere may also make surface oscillation effects in the Pacific Ocean, such as the recent El Niño event, more frequent. It may take another five to ten years before this issue is resolved. What can be said with certainty is that the sustainable limit is less than one-half the present levels of emissions and is consequently substantially lower than the target for reductions agreed in the Kyoto Protocol of 1997.
A knowledge of this limit is of great importance in thinking about the role of the fossil fuels in the next century. The important point is that this level is likely to be greater than the emissions from all mobile sources for the foreseeable future. Meeting the needs for transport is already the largest market for oil and an even higher proportion will be consumed in vehicles and planes as the years go by. Moreover, if the production of the fossil fuels were to be limited to control carbon dioxide emissions, it follows that the resource base would last longer. The combination of these two effects is highly relevant to possible energy strategies over the next decades.
The recent major study of the energy field published in 1998 by the International Institute of Applied Systems Analysis (IIASA) in cooperation with the World Energy Council (WEC) entitled Global Energy Perspectives finds that in each of the six scenarios studied, which covered a very wide range of possibilities, the presently identified resources of the fossil fuels are adequate through to the end of the next century.(1) This conclusion does not, however, take into account the issues concerning the production of conventional oil which, because of their importance, are dealt with separately here.
What is Conventional Oil?
By conventional oil is meant oil produced in the normal way in wells. The main characteristic of this category is the low-cost production of the desirable light and medium grades, though there is some output of heavy oil which may be classified as produced by conventional means. To make this distinction clear, heavy oil from the Cold Lake field in Alberta, the jewel in the crown of Imperial Oil, though produced in wells, requires prior steam treatment and for this reason is not classified as conventional production.
The Special Attributes of Oil
There are three attributes of oil which set it apart from the other fossil fuels. First, oil and its main products can meet almost all energy needs one way or another, ranging from the light, energy-dense liquids favoured to power cars to the less costly heavy oil used to generate electricity. Second, oil is cheap to move long distances by either sea or land and thus its price in energy terms can be essentially the same everywhere: the differences observed are mainly the result of policy actions on the part of governments such as variations in taxes. Third, very large resources of oil exist mainly in the Middle East where the technical cost of production is low, perhaps only a few dollars per barrel. Oil and possibly diamonds are unusual in that they are supplied in an inverse cost pattern: high-cost supplies, such as those derived from the oil sands, are in full production while low-cost sources, mainly in the Middle East, are idling. These three attributes lead directly to two main consequences before the peak in world production of conventional oil is reached. First, the owners of oil may assume any fraction of the energy market they choose at any time and at any place by simply lowering prices. Second, the introduction of new oil-displacing technologies on either the supply or demand side simply results in a fall in the price of oil.
There is a long-standing debate as to what the price would be if the resources of oil were better distributed, if the OPEC organization did not exist, and if something approximating an open market existed. Many experts believe the price on American trading markets would be in the range of $US8 -10 per barrel (or even lower) at present as compared to the recent high of about $US19 reached in early May of 1999 before it fell somewhat again. This inverse supply pattern is inherently unstable yet has persisted for over two decades. Nevertheless, it could end tomorrow.
There are also domestic policy issues arising from this upside-down supply pattern. Since traditionally some 85% of domestic production is from Alberta, where the Province in oversimplified terms is the owner of the resource, domestic financial strains become evident when oil prices are high. Under Canadian circumstances, the revenue stream is increased still further because the higher prices also result in greater domestic production than would otherwise be the case. That Province has little difficulty with its fiscal arrangements for the reason that everyone is helping to fund it through the payment of higher oil (indeed all energy) prices than would exist in an open trading market. At the present moderate prices, these strains are manageable, but the situation could become difficult once again at higher levels. This financial tension may be less than in the past since low-cost conventional production in Alberta is past its peak and the national output is becoming geographically more diversified. Oil is now flowing from the Hibernia field off Newfoundland and other nearby fields, such as Terra Nova, are being readied for production in a year or two. The first natural gas from off-shore Nova Scotia is expected in late 1999. There may also be significant oil and gas production from the northern Territories in the future. Nevertheless, a failure to appreciate the nature of this problem could conceivably lead to a repeat of the revenue imbalance that existed in the early 1970s and 80s. It is interesting that the present devolution of power to Scotland does not involve any major transfer of the revenues generated from the off-shore Scottish oil fields (whose output is larger than Alberta's) away from the central government of the U.K.
The Current Surplus in Oil Production Capacity
The effective surplus installed oil production capacity around the world (mainly in the Middle East) is probably about five million barrels per day (mb/d). Both world oil consumption and production have been increasing by an average of about one mb/d per year, with total output in 1998 some 73 mb/d. After another decade, there may still be the same surplus capacity but, with the greater output of some 83 mb/d expected then, a gradual tightening is to be expected in the system. Authors such as Duncan (2) are probably correct in noting that the peak in per capita world oil production (or consumption) was passed some twenty years ago. However, world oil consumption over the past 16 years has been remarkably constant at some 4.43 barrels per capita despite wide price fluctuations, wars and economic cycles.(3) The probable reason for this extraordinary stability in an otherwise quite unpredictable system is that world consumption is divided between two quite different groups of people - about one billion people in the developed world consume a great deal of oil and about five billion in the developing countries consume much less. The rich countries have been gradually increasing the efficiency with which they utilize oil and other forms of energy but the poorer countries are at a stage of expanding infrastructure and are rapidly acquiring more vehicles with the result that they need more oil no matter what the efficiency of its use. A rough balance between the two effects results in constant world per capita consumption.
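The arithmetic behind this constancy is easily checked. The short calculation below is purely illustrative: it takes the two figures quoted above (73 mb/d of output and 4.43 barrels per capita per year) and confirms they imply a world population of about six billion, consistent with the one-billion/five-billion split described.

```python
# Rough consistency check of the per-capita figures quoted in the text.
world_output_mbd = 73.0      # million barrels per day, 1998 (from the text)
per_capita_bbl_yr = 4.43     # barrels per person per year (from the text)

annual_output_mbbl = world_output_mbd * 365                  # million bbl/year
implied_population_billion = annual_output_mbbl / per_capita_bbl_yr / 1000

print(f"Implied world population: {implied_population_billion:.2f} billion")
```

The result, almost exactly six billion, matches the population assumed in the two-group explanation above.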
Why is the Timing of the Peak in World Oil Production Important?
The question of the timing of the peak is important because the energy system will behave differently once the production of conventional oil is in decline. At the present time, before the peak, as already noted, any major attempt to displace oil by encouraging other sources of supply, or even conservation measures on the demand side, merely results in the price of oil falling. This is because of the low technical cost of producing and shipping conventional oil from reservoirs operating below installed capacity, particularly from those in the Middle East. This writer has argued previously that since the Kyoto Protocol was negotiated, it is no longer in OPEC's long-term interest to maintain the price of oil at too high a level.(4) The view of this organization might reasonably be expressed along these lines: `if any oil is to be consumed in the world, we will produce it.' But the recent experience of early 1999 suggests there is some lower price level which will be defended to ensure minimum revenue needs. Given this situation prior to reaching the peak, the price pattern may well take the form of a saw-toothed voltage curve as a result of the following sequence of events. After an agreement to control production is reached, the price may rise sharply during a brief period as it did in early 1999. It may then slowly decline over a longer time as production gradually increases due to less compliance of one kind or another by OPEC members and increasing output outside the OPEC countries, including Canada, which is a major oil trader but only a minor net exporter. Another agreement is then reached and the price responds almost immediately (as was the experience in early 1999) and then tends to fall slowly once more. This cycle may repeat several times over the next few years. 
Though many doubt the power of OPEC to control prices, Robert Mabro, of the Oxford Institute for Energy Studies, described the present situation in the form of a paradox: `OPEC is never stronger than when it is weak and never weaker than when it is strong.' The frequency and amplitude of the saw-toothed cycles may well increase with time and this pattern may prove very difficult to deal with by both industry and governments. The major consolidation now underway among the main players in the oil industry may help alleviate this situation somewhat.
Falling prices have an asymmetric effect on production. In mature oil producing regions such as the U.S., which is well past its peak in conventional output with many wells operating at low output, a fall in prices may cause their closure. U.S. production was 3.3% lower (a net fall of 275,000 barrels/day) in 1998 than in 1997 - a loss greater than what was to be expected on normal decline grounds. Given the continuing increase in production in the Gulf of Mexico, the loss of existing capacity was probably about 400,000 bbls/day and perhaps about another one million bbls/day remain at risk in another period of low prices. It is not clear how much of this lost capacity can be recovered with a return to more normal prices because many wells may have been plugged permanently to comply with environmental regulations. Re-drilling would only be justified at much higher prices. The Western Canada Sedimentary Basin, also past its peak in the production of conventional oil, is now approaching the maturity where such irreversible effects may become noticeable. There was in fact a fall in output by some producers during the recent period of low prices.
After the peak is past, the energy system will turn to other options including non-conventional sources of oil. Their costs will be higher and the pattern of supply will be less flexible. The energy system as a whole will lose resilience. Prices will be both higher and more stable. The development of new supplies or a reduction in demand will no longer tend to drive the price down. Though the supply problem may prove difficult to deal with at that time, it is paradoxical that the pattern of prices in the post-peak era may well be more rational and easier to predict, and that OPEC could lose its influence over prices at a time of higher prices after the peak is past.
The Economic Policy Response to Higher Oil Prices
Oil and energy supplies generally are no longer as important to the overall economy as they were in the 1970s and 80s. Nevertheless, a large increase in the price of oil will have significant inflationary effects. The lesson from the past is that this problem must be dealt with in a sophisticated way. A crude monetarist response may well make the situation worse, as it no doubt did twenty years ago. If the price of oil is increased by some factor external to an economy, such as by a concerted action of OPEC, an attempt to stabilize prices using monetary tools raises the following question: What other prices are to fall to maintain internal stability? Worse, raising interest rates defeats the very object of dealing with a problem resulting from an increase in the price of oil. This is because nearly all other options, whether on the supply or the demand side, require greater front-end investment even at equal life-cycle cost, with the notable exception of the combustion of natural gas, whether by individuals in houses or by utilities and general industry. Fortunately, gas may well be in adequate supply through at least the early decades of the next century. At the time of the last oil crisis, at least two major facilities to produce oil from the oil sands and heavy oils of Alberta were cancelled in large measure due to the high interest rates imposed at that time. What was needed was additional oil supply outside of OPEC. Unfortunately, the policy options chosen to deal with the inflationary situation resulting from higher oil prices had the effect of limiting new supplies from sources requiring heavy investment and thus were perverse. Nowhere was this more clear than in the nuclear field with its high unit investment requirement. Moreover, the higher oil prices had the effect of reducing the growth of the economy (there was in fact a recession during this period) which led in turn to a deceleration in the growth of demand for electricity.
(Electrical demand still remains strongly linked to economic growth at the present time.) The combination of these two factors made it difficult for the nuclear option to serve as a substitute for oil.
How May the Peak in the World Production of Oil be Predicted?
After discovery, the output of oil from a conventional reservoir follows a cycle of development involving steadily rising production to reach a stable plateau, after which there is a long, slow period of decline. If prices are high enough, more expensive enhanced recovery techniques may be justified which extend the life of the reservoir but, in the general case, such measures do not affect either the timing or the magnitude of the peak very greatly. Many reservoirs taken together, such as in a major producing province or for the world as a whole, show a tendency for their joint production to rise to a peak and then fall. This pattern may be represented mathematically by a curve of the parabolic type. M.K. Hubbert (5) and later Duncan and Youngquist (6) have projected oil production by parabolic curves determined in different ways but based upon a common approach. In their work, the past pattern of production over time is used to predict the future. The ultimate recoverable resource is determined from the area of the parabolic curve so defined. This author has also applied parabolic techniques to this problem but on a quite different basis.(7) The method depends upon data determined from geological assessments of the resource base: it does not displace them as with the Hubbert-type parabola. A staged technique was devised to distribute the as yet unproduced oil in parabolic form. This technique is forward-looking in that past production is deducted from the geological assessment of the ultimate recoverable resource to determine the quantity of as yet unproduced oil. This latter value is itself the sum of the known established reserves and the estimate of remaining undiscovered resources. The production-time pattern of the past is irrelevant in this methodology, an important advantage when the world production as a whole is to be projected given the severe dislocation that took place at the time of the oil crisis.
Geological assessments are normally provided in expected ranges. With an estimated ultimate recovery of conventional oil lying somewhere between a median value of 2200 gigabarrels (GB) and a high outside value of 3000 GB, the world peak in production is predicted to lie between 2015 and 2020 by this method. For the peak year not to fall within this period requires heroic assumptions on both the supply and the demand side of the energy system.
From their parabolic studies, Duncan and Youngquist (6) estimate the peak in world production will occur in 2007. Campbell and Laherrère (8) place the peak at before 2010 based upon a `bottom up' assessment of the world's main oil reservoirs. The International Energy Agency limits itself to stating that its experts believe non-OPEC production will peak by 2010.
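The sensitivity of such estimates to the assumed ultimate recovery can be illustrated with a toy symmetric-logistic (Hubbert-type) model. This sketch is not the staged technique described above, nor any of the cited authors' methods; the cumulative production to 1999 (~850 GB) and the annual rate are the writer's own rough assumptions, so its dates need not reproduce the 2015-2020 range.

```python
import math

# Toy Hubbert-type estimate: production follows dQ/dt = k*Q*(1 - Q/U),
# which peaks when cumulative production Q reaches half the ultimate
# recovery U.  Inputs other than U are illustrative assumptions.
def peak_year(ultimate_gb, cum_gb=850.0, rate_gb_yr=26.6, base_year=1999):
    # Solve k from today's rate, then find when Q(t) reaches U/2.
    k = rate_gb_yr / (cum_gb * (1 - cum_gb / ultimate_gb))
    t = math.log((ultimate_gb - cum_gb) / cum_gb) / k
    return base_year + t

for u in (2200, 3000):   # median and high resource estimates from the text
    print(f"U = {u} GB -> peak near {peak_year(u):.0f}")
```

With the median resource figure the symmetric curve lands near the earlier dates of Duncan and Youngquist; with the high figure it approaches 2020. The spread of a decade from a one-third change in the assumed resource shows why the choice of method and resource estimate dominates the debate.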
It would be less than fair to the reader not to refer to the controversy surrounding such estimates. Some economists doubt a peak will be reached at all.(9) Still others have begun to question once again the established theories for the origin of oil especially as it has been reported in the press earlier this year (notably on the front page of The Wall Street Journal) that one reservoir in the Gulf of Mexico is re-charging itself. This writer believes a conventional explanation for this anomaly will be forthcoming.
How Soon Will the Peak in Conventional Oil Production Affect Prices?
With discount rates in the normal ranges, the earliest that prices will be affected by a peak in the production of conventional oil occurring between 2015 and 2020 is about ten years beforehand, that is, some time during the period 2005-2010. Prior to this time, many other influences may affect the price, but not resource depletion itself. Nevertheless, if in fact the price tends to increase regularly earlier than this date, it may be an indication that the peak is coming sooner than estimated here.
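The ten-year horizon follows from simple discounting: a scarcity premium expected only at the peak is worth little today if the peak is far off. The figures below (a 10% discount rate and a $10 per barrel premium) are illustrative assumptions, not values taken from the paper.

```python
# Present value of a future scarcity premium, discounted to today.
# Beyond roughly ten years, the premium shrinks to a few dollars and
# is lost among the other influences on the oil price.
def present_value(premium, years, rate=0.10):
    return premium / (1 + rate) ** years

for years_before_peak in (5, 10, 15, 20):
    pv = present_value(10.0, years_before_peak)
    print(f"{years_before_peak:2d} years out: ${pv:.2f}/bbl of a $10 premium")
```

At ten years out, the premium is already discounted to under $4 per barrel; at twenty years it is near $1.50, too small to register against ordinary market noise.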
What About Canada?
The National Energy Board (NEB) published its new assessment entitled Canadian Energy Supply and Demand to 2025 in June of 1999.(10) Two scenarios were formulated for both the demand and the supply sides. On the supply side, the two scenarios were termed Current Supply Trends and Low-Cost Supply, the latter reflecting expected advances in the technology of the recovery of the fossil fuels. On the demand side, the two scenarios were termed Current Demand Trends and Accelerated Demand Efficiency. The two supply and two demand curves intersected at a total of four market-clearing points but only the two outer values defined the range for the two cases developed further in the Report. Taken together, these scenarios may be thought of as a broad-ranged `business-as-usual' projection with the two cases defined on each side of the band. Estimates for oil and natural gas were prepared for these cases.
For total oil defined as `net available crude supply,' the peak occurs in the same year of 2007 for the two cases. The production that year is predicted to range from 2.77 to 3.15 million barrels per day for the two boundary cases. Production is expected to fall to 2.08 to 2.58 million bbls/day by 2025. In 1998, the corresponding production was 2.2 million barrels per day. Greater Canadian production is anticipated if prices are higher than the range thought probable by the Board. Nevertheless, the peak in production of conventional oil from the Western Canada Sedimentary Basin has already passed. These values are expressed according to the Board's practice which includes pentanes plus and an allowance for the diluent employed to enable the pipelining of the heavy oils but not the three LPGs: these projections are thus not strictly comparable to other statistical sources.
As far as conventional natural gas is concerned from the Western Canada Sedimentary Basin, the two corresponding cases range from a peak of 6.90 trillion cubic feet (TCF) per year in 2008 to 7.88 TCF per year in 2013 which may be compared to a production of 5.66 TCF in 1998. For the high case, there is no peak before 2025 in total Canadian production (including gas from the Scotia Shelf on the East Coast) if gas from non-conventional sources is included but, for the low case, one is reached in 2018 even including this new class of supply.
This non-conventional gas is assumed to be derived from coal-bed methane (CBM) though there are also extensive resources of gas in tight formations not considered in the report. The resource base for CBM is placed by the Canadian Gas Potential Committee in the range of 140-273 TCF mostly in Alberta. This gas would augment the supply from conventional sources and would have the effect of extending the life of the pipeline system with its large sunk capital investment. Nevertheless, it is not obvious Canada has any comparative advantage over the U.S. in this field.
The continuing supply of natural gas will have major implications for the electrical supply system. Generation from gas may be considered at three different levels. Large-scale combined cycle facilities, as will be described below, are the favoured generation option at margin. A continuing expansion of smaller co-generation facilities where suitable thermal loads are available is to be expected. The advent of mini-turbines (and perhaps fuel cells) for application in institutions, factories or even eventually homes may also prove important somewhat later. Operating in cogeneration mode and thus at high overall efficiency, these small turbines, originally designed to propel small missiles of the Cruise type, would supply heat and hot water as well as electricity. Taken together, a major change in the electrical supply system appears inevitable.
It is plain the NEB believes Canada is approaching its peak production of oil within the next decade, and possibly of natural gas within the first quarter of the next century.
Perhaps 4-5% of the domestic natural gas production in the U.S. is now obtained in the form of CBM from coal seams mainly in New Mexico and Alabama. Australia also reports commercial production from this non-conventional source. Nevertheless, most of the production in the U.S. case comes from certain geologically favoured situations (mainly in the San Juan Basin in New Mexico) and was encouraged, in the past at least, with the help of taxation incentives. It is also true the broadly-defined CBM resource base is extensive in the two countries but the technology of recovery is still immature and its costs uncertain. The allowable price of natural gas is constrained by supplies delivered in liquefied form (LNG) by specialized tanker to coastal locations in the U.S. from very large conventional resources occurring mainly in the Middle East and in some other countries around the world. In 1998, some 25.4% of the natural gas flowing across international borders was carried in these tankers. This alternative source to major Canadian markets normally served by overland pipelines restricts the allowable price for gas from CBM operations probably to the $US2.50 - 4.00 per thousand cubic feet range in Alberta. It is not known at this time what fraction of the CBM resource will prove attractive to exploit under this price ceiling.
The new NEB Study is `business-as-usual' in the sense it did not take into account any major policy measures introduced to reduce the emissions of carbon dioxide. Nevertheless, the identification of large quantities of natural gas potentially available from CBM suggests another route to meeting Canada's energy requirements in the next century which will be explored in the following section.
The Generation of Electricity in Gas Turbines with the Sequestering of Carbon Dioxide
The emerging combined-cycle turbine technique for the generation of electricity from natural gas is examined here in some detail because of its several advantages. In the main, these are:
The second option is to combust the natural gas with oxygen under near-stoichiometric conditions. This procedure greatly simplifies the capture of the carbon dioxide because the only other major constituent present in the exhaust gas is water vapour, which may be condensed at low cost. This practice has two main disadvantages. In addition to the need for a conventional oxygen plant, a new design of turbine is required. This is because the composition of the gas expanded in the turbine is rich in carbon dioxide, which has unusual thermodynamic properties in addition to its higher molecular weight. No such turbine has yet been built.
The third option is to reform the natural gas to separate the carbon dioxide in advance, leaving a gas rich in hydrogen to serve as the fuel for the turbine. In current proposals, the natural gas would be blended over a catalyst with a mixture of air and steam chosen so there is no external heat required for this step. The air would be obtained from the turbine compression stage and the steam would be extracted from the steam-cycle of the combined-cycle plant in a closely integrated thermal arrangement. After autoreforming, the gases would be shifted with steam to convert the carbon monoxide present to carbon dioxide, nearly all of which would then be separated by the cheaper physical processes. Physical processes may be employed because the gas is pressurized and the concentration of CO2 is higher than in the dilute gases of the first option. The nitrogen which entered with the air in the autoreforming step results in a final fuel gas containing about 50/50% H2/N2. The presence of the nitrogen increases the average molecular weight of the combustion gases to a more favourable range than would be the case if pure hydrogen were the fuel.
The reduction of NOx emissions to a low level is also a major objective in all three options. It is unclear at the present time which of the three routes will be favoured but it is certain that (a) the additional capital required to modify combined-cycle facilities for the capture of carbon dioxide will about double the unit investment required, (b) the thermal conversion efficiency will be reduced some 15-20% below what it would be otherwise, and (c) the other operating costs, including those arising from the inevitably greater down-time to be expected in a much more complicated facility, will be higher. Overall, adding stages to capture and sequester carbon dioxide might increase the cost of generation some 60-75% but this estimate is preliminary and may prove optimistic. Calculations of this kind are carried out by the Greenhouse Gas R&D Programme of the International Energy Agency on a standardized basis.(13) The need to reduce emissions of CO2 leads paradoxically to greater consumption of the fossil fuels per unit of useful energy output.
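A back-of-envelope levelized-cost calculation shows how these penalties compound. Only the "capital doubles" and "efficiency falls 15-20%" multipliers come from the text; the base cost shares (capital, fuel, operations) and the O&M penalty are assumed values chosen for illustration.

```python
# Rough levelized-cost multiplier for adding CO2 capture to a
# combined-cycle plant.  Base cost shares are assumptions; the capital
# and efficiency multipliers follow the figures quoted in the text.
def cost_increase(eff_loss, capital_share=0.5, fuel_share=0.35, om_share=0.15,
                  capital_mult=2.0, om_mult=1.3):
    new = (capital_share * capital_mult      # unit investment roughly doubles
           + fuel_share / (1 - eff_loss)     # more fuel burned per kWh sent out
           + om_share * om_mult)             # assumed O&M penalty
    return new - 1.0                         # fractional increase over base cost

for loss in (0.15, 0.20):
    print(f"{loss:.0%} efficiency loss -> ~{cost_increase(loss):.0%} cost increase")
```

With these assumptions the increase lands near the lower end of the 60-75% range quoted above; heavier down-time and O&M penalties would push it toward the upper end.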
The Advent of Coal Bed Methane
Because the National Energy Board has identified Coal-Bed Methane (CBM) as a potentially important augmentation of the natural gas production starting in the early decades of the next century, this new category of supply will be explored here in the context of the capture and sequestering of carbon dioxide from natural gas processes. This section will focus on the special variant of this process which may be operated in a carbon-neutral mode. Canada is a co-leader of an international project to assess this possibility in the CBM field organized under the aegis of the Climate Technology Initiative which was negotiated at the time of the 1997 Kyoto Protocol.
In the carbon-neutral version of the CBM process, captured carbon dioxide is used to flush methane from the coal seams in place. One operation of this kind is in service in the San Juan Basin of New Mexico where geological conditions are especially favourable. In principle, two molecules of carbon dioxide are required to substitute for one molecule of methane on the coal surfaces so, after taking into account the usual uncertainties, the sequestered CO2 more than balances the carbon content of the methane produced.(14) The purity requirements for the flushing gas may be less rigorous than in the corresponding case for the enhanced recovery of oil, which would reduce separation costs somewhat. There is the additional advantage in that the CO2 remains more firmly bound underground than in the enhanced oil recovery process. The `natural gas' produced in this way may then be said to be atmospherically-neutral with respect to CO2.
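The carbon balance implied by the 2:1 substitution ratio can be stated in a few lines. Only the ratio comes from the text; the combustion stoichiometry is elementary chemistry.

```python
# Carbon balance for the carbon-neutral CBM loop described above.
co2_injected_per_ch4 = 2   # molecules stored on the coal surface per CH4 freed
co2_emitted_per_ch4 = 1    # CH4 + 2 O2 -> CO2 + 2 H2O on combustion

net_stored = co2_injected_per_ch4 - co2_emitted_per_ch4
print(f"Net CO2 sequestered per CH4 consumed: {net_stored} molecule(s)")
```

Since the injection demand (two molecules) exceeds the one molecule recovered from combusting the methane, the loop is at least neutral and could in principle absorb additional CO2 captured elsewhere.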
Methane recovered in this neutral manner could be linked to gas turbine combined-cycle facilities equipped to separate carbon dioxide in one or other of the generation options already described. The captured gas would then be returned to CBM operations to close the loop. As a result, electricity could be generated from the large resources of methane known in this form with only minor net emissions (if any) of carbon dioxide to the atmosphere.
Operating Vehicles in an Atmospherically-Neutral Way from CBM Gas
The problem of supplying a neutral fuel from CBM gas to power vehicles will be linked here to the capture and sequestering of carbon dioxide in a Canadian context. Hydrogen could be produced from this gas in the conventional underfired reformer/shifting stage/separation process combination and consumed in fuel cells mounted on vehicles. About two-thirds of the carbon must be separated as the dioxide from the reformed and shifted process stream in any case: at present this gas is vented to the atmosphere. It is thus a relatively easy matter to capture this fraction. The remainder of the carbon is associated with the gas combusted in the underfiring of the reformer chamber and more expensive chemical methods must be applied to the separation from this low-pressure stack gas, which contains ~9.5% CO2. Total capture thus involves two distinctly different cost levels. (Alternatively, part of the hydrogen produced could be directed back for combustion in the reformer chamber but this approach also incurs major costs arising from the lower net output of the process.) The captured carbon dioxide would be returned underground to produce more methane as in the electrical case above.
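The two-tier cost structure can be sketched as a weighted average. Only the two-thirds/one-third carbon split comes from the text; the per-tonne capture costs below are hypothetical figures chosen to illustrate how the cheap pressurized stream dominates the blended cost.

```python
# Blended capture cost for the reformer carbon split described above.
process_fraction = 2 / 3   # carbon in the pressurized process stream (text)
stack_fraction = 1 / 3     # carbon in the dilute ~9.5% CO2 stack gas (text)
cheap_cost, expensive_cost = 15.0, 45.0   # assumed $/t CO2: physical vs chemical

blended = process_fraction * cheap_cost + stack_fraction * expensive_cost
print(f"Blended capture cost: ${blended:.0f}/t CO2")
```

Even with chemical scrubbing assumed three times as costly per tonne, the blended figure sits much closer to the cheap physical-separation cost because two-thirds of the carbon arrives in the easy stream.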
The first vehicles equipped with fuel cells based upon a hydrogen gas feed are expected to be in production starting in 2003 by the Honda Company using a fuel cell of Japanese design and by DaimlerChrysler and Ford in 2004 using technology developed by Ballard Power Systems of Burnaby, B.C. The cost of such power trains is presently an order of magnitude greater than that of a conventional internal combustion system and will have to be reduced substantially over the next few years if more than a niche market is to be penetrated.
The difficulty with this approach is that hydrogen is an inconvenient fuel for powering vehicles. This disadvantage is off-set to some degree by the higher energy conversion efficiency characteristic of fuel cells, which may reach twice that of conventional engines or even a little more. The problem is that all the options for the direct use of hydrogen on board vehicles are awkward, whether based upon pressurized tanks, storage in decomposable solid hydrides, or conversion to liquid form, which requires cryogenic equipment. The latter approach may be practical for fueling fleets such as buses or even taxis, and a cryogenic supply system will be employed in forthcoming trials of fuel cell-equipped vehicles in California.
Hydrogen could be produced on board the vehicle itself by reforming more convenient liquid fuels such as methanol, ethanol or even gasoline, and a number of companies are pursuing this possibility. The vehicles to be available in the 2003-4 timeframe may be equipped to reform methanol although this is not entirely clear at the present time. The application of fuel cells of the homogeneous type now under development in California, which operate directly on methanol with no prior reforming needed, may also be attractive if they can be perfected.
The efficiency of energy conversion in fuel cells, already high, may not be a strong function of the scale of operation. Large power plants so equipped may not have much greater conversion efficiency than the cells in individual cars. The possible existence in the future of a large fleet of vehicles with fuel cells installed that spend much of their time stationary in driveways at home or parking lots at work has implications that remain unexplored. Certainly, means would be found to supply houses during long power outages such as occurred during the recent ice storm in eastern Canada, and perhaps for such special cases as the provision of electricity for remote cottages visited only occasionally. Nevertheless, the total installed capacity in the vehicle fleet could reach high levels, representing a sunk capital investment in efficient but idle energy conversion technology. It is conceivable that large quantities of electricity could be supplied from parked vehicles if a suitable fueling system could be devised. In essence, houses could be plugged into cars, not the other way around.
Methanol as a Hydrogen Carrier
Because Canada may have more biomass potentially available per capita than any other nation, the option of producing methanol (or ethanol) from the biomass for use in transportation may become important. When biomass sources of energy are grown sustainably they are atmospherically-neutral with respect to carbon dioxide emissions regardless of their point of application. Methanol derived from the biomass is thus an attractive fuel on this account. The normal process sequence involves gasification (either with oxygen or by re-circulating a solid heat carrier) followed by the synthesis of methanol from the product gases after they are adjusted to the correct range of composition. The gasification processes are, however, as yet immature in this application. Unfortunately, at the most, only about two-thirds of the relatively expensive carbon of the biomass may be recovered in the methanol due to the need to shift the process gases with steam to increase the hydrogen-to-carbon monoxide ratio to about 2.2:1 as required for the synthesis stage.
Hydrogen could be added to the process gas stream instead of the shifting stage to achieve the desired ratio for synthesis. If this hydrogen were to be produced from the neutral CBM recovery process as already described, the methanol synthesized from the biomass and the added hydrogen would also remain atmospherically-neutral with respect to carbon dioxide emissions. The advantage of this technique is a greater recovery of the costly biomass carbon in the methanol which, in turn, leads directly to a second advantage. Since sources such as wood are in general dispersed over a wide area, there is a limit to the economic supply available at any one site before gathering costs become excessive. Adding hydrogen to the process gases allows a larger, more economic scale for the synthesis stage because, with higher carbon recovery, more methanol may be produced from a given supply of wood from a defined catchment area. This process combination would then become an example of a link between two of Canada's strongest assets - its fossil fuel industry and its biomass potential - to allow the output of a liquid fuel for the operation of the vehicle fleet in an atmospherically-neutral way. Though the methanol produced in this manner could be consumed in any suitably modified internal combustion engine, this approach is made all the more attractive by the successful development of the fuel cell which is also a Canadian activity. As already noted, the higher fuel costs might be justified in vehicles so equipped because of their higher energy conversion efficiency.
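The carbon-recovery argument above can be made concrete with the shift arithmetic. In the sketch below, the 2.2:1 target ratio comes from the text, but the 1:1 H2:CO composition assumed for raw biomass syngas is an illustrative round number, not a measured value.

```python
# Carbon recovery in methanol synthesis from biomass syngas.
# Synthesis needs H2:CO of about 2.2:1 (as stated in the text). Raw
# biomass syngas is assumed here, for illustration, to leave the
# gasifier at a 1:1 ratio.
# Route A: the water-gas shift (CO + H2O -> CO2 + H2) sacrifices carbon.
# Route B: add outside hydrogen (e.g. from neutral CBM gas) instead.

target_ratio = 2.2
co, h2 = 1.0, 1.0   # assumed raw syngas composition, mol (1:1)

# Route A: shift x mol of CO so that (h2 + x) / (co - x) == target_ratio
x = (target_ratio * co - h2) / (1 + target_ratio)
carbon_recovery_shift = (co - x) / co

# Route B: add hydrogen and keep all the CO for synthesis
carbon_recovery_h2_added = 1.0

print(f"Shift route keeps {carbon_recovery_shift:.0%} of the biomass carbon")
print(f"Hydrogen-addition route keeps {carbon_recovery_h2_added:.0%}")
# The shift route loses roughly a third of the costly biomass carbon,
# which is the loss the hydrogen-addition route avoids.
```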
There are difficulties with this option which may be characterized as typically Canadian. The CBM resource is largely in Alberta though substantial production may be possible in B.C. and some workable resources may exist in the Atlantic Provinces. The cost of transporting energy in the form of hydrogen through pipelines is some three times greater than that in the form of natural gas. Consequently, it is markedly less costly to send natural gas and not hydrogen the long distance to central Canadian markets but it is unlikely captured carbon dioxide would be pipelined back to Alberta for re-cycling to CBM operations on cost grounds. Other sources of carbon dioxide would have to be found in Alberta to meet this need, possibly by capturing carbon dioxide from existing thermal power facilities burning coal. Though this latter practice would not reduce net emissions in an absolute sense, it would allow approximately twice as much energy to be produced per unit released to the atmosphere since the carbon would be in effect used twice, first in the thermal power plant and then in the CBM extraction stage.
Another choice for Ontario would be to produce hydrogen from the present conventional natural gas supply in a large reforming facility equipped to separate most of the carbon dioxide. The captured dioxide would then be sequestered in suitable local aquifers, if any can be found. The hydrogen so produced would be distributed to several biomass gasification plants through short pipelines to synthesize neutral methanol as described before. The difficulty is that the domestic supply of conventional natural gas may peak by the end of the first quarter of the next century without augmentation from such non-conventional sources as CBM, a process that requires more than an equivalent supply of carbon dioxide for its most desirable mode of operation from the environmental point of view. This option also requires the identification of aquifers for disposal in Ontario where, in contrast to Alberta, little information is available.
The existing natural gas-to-methanol industry may serve as a bridge to this future because some carbon dioxide captured from other industries, such as electric utilities, may be added to the process gas stream of the conventional synthesis facility in most cases to increase the quantity of the alcohol produced from a given quantity of methane. In addition to extending the resources of natural gas in this way, part of the carbon involved does double duty first in the fuel consumed by the utility (usually coal) and then as a component of the methanol. Petro-Canada, Methanex Corporation and Ballard Power Systems have recently announced a pilot project for the supply of methanol to vehicles equipped with fuel cells.
Natural gas may be linked to the combined production of ethanol and electricity when carbon dioxide is captured and sequestered from an energy complex. This latter possibility is of interest since the carbon dioxide produced in the fermentation stage required in any case is easy to capture. Provided a balance is kept between the carbon of the natural gas entering the energy complex and that sequestered from the fermentation stage, both the ethanol and electricity produced within the complex remain atmospherically-neutral. In effect, the neutral carbon from the biomass is switched with the fossil fuel carbon of the natural gas: this is possible because carbon dioxide is still carbon dioxide no matter what its source. The gas turbine in the complex would probably be operated in the co-generation mode so that the exhaust gases would supply the thermal energy required for the distillation stage.
Other Environmental Factors May be More Decisive in Determining How Vehicles will be Propelled
The possibilities explored above are predicated on the assumption that if carbon dioxide emissions are to be limited, reductions from the mobile transportation sector will be necessary. This may not be the most economic approach if emissions from stationary sources can be reduced more cheaply. For this reason, other environmental factors are likely to be decisive in determining how vehicles will be propelled in the future. Zero-emission cars will soon be required in California for air quality reasons and not to control carbon dioxide emissions. In the course of dealing with this problem, electric vehicles equipped with re-chargeable batteries will probably fill a niche market with those equipped with fuel cells offering the best hope for most of the fleet in the longer term. Though the main objective is to reduce other emissions, the desired energy carriers, whether electricity or hydrogen or methanol, could be produced in such a way as to also reduce emissions of carbon dioxide from mobile sources.
The application of geoengineering techniques such as those proposed to increase the rate of transfer of carbon dioxide across the air/ocean interface was not examined in this paper, though the success of such measures might permit the continued direct large-scale use of the fossil fuels in the long term. Nor were the implications of other such options at the world scale considered, such as the beaming of energy from satellites to earth as is now under study in the space community. On the demand side, large new markets for electricity may arise for the widespread desalination of seawater in the coming century. Costs have already been reduced to the $US 2.00 per thousand U.S. gallons range.
The possible application of long-lived technologies with the characteristics of today's seventy-plus-year-old hydro plants, such as advanced photovoltaic systems, raises the question as to how to calculate the cost of energy from such sources in the long run. The consequences of options arising from recent advances in the theory of technology, such as the law of increasing returns and path-determined technology development, were not taken into account. The paradoxes arising from the gradual dematerialization of an increasingly knowledge-based economy will no doubt influence outcomes in the next century as well. Nevertheless, all other options, whether conventional or non-conventional, must be weighed against the great ponderosity of the present energy system, whose huge existing capital stock can only be replaced slowly. This factor may prove to be the main limitation on the introduction of new technology.
No paper on the future of the fossil fuels would be complete without some reference to the possibility of an emergency situation arising in the future perhaps from environmental consequences. The effect of physical shortages has been experienced before in the form of line-ups at the pumps of gas stations at the time of the first oil crisis some twenty years ago and the long interruption of the electrical supply that occurred during the ice storm that hit Eastern Canada in January of 1998. The example of one reactor continuing to be operated at the Chernobyl nuclear power plant in Ukraine, despite vigorous international pressure to close it, is a vivid illustration of the maxim that the most expensive electricity is the power one does not have. Though a physical shortage of one kind or another may not occur in the future, the effect of much higher energy prices on the economy is unclear. We know from the recent conflict in the Balkans what form the movement of peoples in an increasingly crowded world may take under stress. It may well be that the greater mobility of people will prove a major determinant of energy consumption in the next decades.
May, 1999. Statistics updated June 1999.