Methane (CH4)


Methane sources

Methane is emitted to the atmosphere primarily by anaerobic microorganisms that obtain their metabolic energy from carbohydrates by fermentation, yielding methane as a waste product. Such organisms are found in wet soils, especially in swamps and flood-irrigated fields (e.g. rice paddies). They also live in the guts of termites and other wood-eating insects, and in the intestines of most herbivorous animals, especially cattle and sheep, where they assist with the digestion of cellulose.

These sources are anthropogenic to the extent that humans alter natural patterns, for example by deforestation to facilitate cattle grazing, by draining wetlands, or by irrigating drylands. However, because the methane production associated with various land uses and animal husbandry practices is not yet well understood, we have not quantified these changes in methane emissions for the United States over the past century. Our focus here is on methane emissions from fossil-fuel production and use, that is, coal-mining, coking, oil and gas drilling, and natural gas distribution. We discuss these four sources in this order.

Coal-mining

Coal is a porous carbonaceous material in which substantial quantities of gas (mainly methane) are typically found in cracks or adsorbed on internal surfaces. The coal gas contains upward of 90 per cent methane - a typical figure would be 95 per cent. As the coal is broken up in the drilling and mining process and later crushed for use, most of this adsorbed methane is released into the atmosphere. This is the source of methane seepage into coal mines, which has resulted in many explosions and fires.

There is enormous variability between coal seams with regard to adsorbed methane, but for a given rank of coal the major variable is depth. Deeper coals contain more adsorbed gas than surface coals, probably owing to the higher pressure. The US Bureau of Mines has developed an empirical equation relating adsorbed gas volume (in litres/kg, equivalently m³/tonne) to depth:

V = k0(0.096h)^n0 - b(1.8h/100 + 11)

where h is depth in metres, k0 and n0 are parameters depending on the rank of the coal, and b is a function of density; the subtracted term, 1.8h/100 + 11, approximates the ground temperature (°C) at depth h. The rank parameters are related to the ratio of fixed carbon to volatile matter, as shown in figure 6. Adsorptive capacity v. depth is plotted in figure 7.
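As a rough illustration of this relation, the sketch below evaluates the reconstructed equation at several depths. The parameter values k0, n0, and b are illustrative placeholders only, chosen to give outputs in the 10-20 litres/kg range typical of high-rank coals; the actual rank-dependent constants are those of figure 6 and are not reproduced here.

    # Sketch of the Bureau of Mines depth equation (reconstructed form).
    # k0, n0, and b below are placeholder values, not the rank-specific
    # constants of figure 6.
    def adsorbed_gas_volume(h, k0=5.0, n0=0.3, b=0.1):
        """Adsorbed methane V (litres/kg = m3/tonne) at depth h (metres).

        V = k0 * (0.096*h)**n0 - b * (1.8*h/100 + 11)

        The subtracted term approximates ground temperature (deg C) at
        depth h; higher temperature reduces adsorption.
        """
        return k0 * (0.096 * h) ** n0 - b * (1.8 * h / 100 + 11)

    for depth in (200, 500, 1000, 2000):
        print(f"{depth:5d} m: {adsorbed_gas_volume(depth):4.1f} litres/kg")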

From figure 7 we can estimate that the gas content of anthracite (from underground mines) ranges between 15 and 20 litres/kg (or cubic metres per metric ton). Appalachian bituminous coals average 10-15 litres/kg. Data for a number of mines in the Pittsburgh area are shown in table 6. Midwestern and Western strip-mined medium- or low-volatile coals would cluster generally in the range 5-10 litres/kg. The national average for all bituminous and anthracite coals has been estimated to be 6.25 or 7.0 litres/kg (SAI, 1980). Emissions from sub-bituminous coals and lignite are smaller (2.8 litres/kg and 1.4 litres/kg, respectively) and can probably be neglected for our purposes, since the quantities of these fuels mined in the United States are small.

Fig. 7 Adsorptive capacity of coal as a function of rank and depth (Source: US Bureau of Mines)

Table 6 Comparison of estimated and direct determination of methane content of coal
 

Columns: Coalbed; Depth (metres, feet); Direct determination (cm³/g) (a); Estimated range (cm³/g); Difference between direct determination and estimated range (cm³/g)
Anthracite
Tunnel #19 183 600 19 13-18 +1
183 600 14 13-18 0
183 600 13 13-18 0
Peach Mountain #18 213 699 22 14-19 +3
213 699 19 14-19 0
Low-volatile bituminous
Beckley 302 991 13 12-15 0
267 876 14 12-14 0
253 830 15 12-14 +1
236 742 14 11-14 0
Hartshorne 451 1,480 16 13-16 0
395 1,295 18 13-16 +2
174 571 12 11-13 0
169 533 13 11-13 0
148 488 11 10-12 0
77 252 5 8-10 -3
New Castle 651 2,137 17 15-17 0
Pocahontas #3 643 2,110 14 14-17 0
621 2,038 17 14-17 0
529 1,736 11 14-17 -3
494 1,621 12 13-16 -1
484 1,588 16 14-16 0
466 1,529 15 14-16 0
232 761 9 11-14 -2
Pratt 416 1,365 15 14-16 0
Medium-volatile bituminous
Mary Lee 666 2,185 16 13-15 +1
520 1,706 12 13-14 0
519 1,703 14 13-14 0
518 1,700 13 13-14 0
335 1,099 14 12-13 +1
High-volatile bituminous
Pittsburgh 259 850 7 10-11 -3
235 771 6 9-11 -3
206 676 5 9-10 -4
130 427 3 8-9 -5
95 312 2 5-6 -3
Redstone 225 747 4 9-11 -5
Sewell 207 679 9 9-10 0
Sewickley 205 675 5 9-10 -4
Waynesburg 122 402 3 8-9 -5

Source: US Bureau of Mines.

a. 1 cm³/g = 1 litre/kg = 1 m³/tonne.

The earliest coal mines in the United States were in Virginia. However, Pennsylvania and West Virginia soon became dominant sources and remained so until the last quarter of the nineteenth century, when the Illinois coal fields were opened. Western coals were extensively exploited even later. Utah, Wyoming, and Montana have large deposits of low-sulphur, high-ash coals relatively near the surface and accessible to large-scale strip-mining techniques. Meanwhile, some of the older Eastern mines have been largely exhausted (especially in the so-called Pittsburgh seam) and Eastern coals are increasingly from deeper mines. Nevertheless, coking coal used in the steel industry is still obtained mainly from the Appalachian mines.

Two long-term trends are apparent:

1. Eastern coal mines have become continuously deeper, on average, over time, with a corresponding gradual increase in associated methane release per ton of coal output to the present level.
2. Most of the increased total production since the late nineteenth century is due to the opening of shallower mines - mostly strip mines - in the Midwest and Far West. Western coals yield less methane per ton than Appalachian coal, but have increased as a fraction of total output.

Taking these two factors into account, we assume a slight increase in methane emissions from Eastern coals but a constant average for the United States as a whole. These contrary trends result in relatively constant average emission rates, as reflected in our historical emission coefficient estimates (table 7) at the end of this chapter.

Coking

Coke is the solid residue produced from the carbonization of bituminous coal after the volatile matter has been distilled off. The main object of coking is to free the bituminous coal from impurities - water, hydrocarbons, and volatilizable sulphur - leaving in the coke fixed carbon, ash, and the non-volatilizable sulphur. The suitability of a coal for conversion to coke is determined by its "coking" properties. Coke is used primarily as a reducing agent for metal oxides in metallurgical processes and secondarily as a fuel. It was first used in the iron-making process in Great Britain in the early 1750s as a replacement for charcoal. The primary factor driving the change was cost: the cost of producing charcoal pig iron greatly increased in the latter half of the eighteenth century while the cost of coke pig iron fell sharply. By the end of the century, coke pig iron provided some 90 per cent of the total iron industry production in Great Britain (Hyde, 1977).

Table 7 Methane emissions coefficients (metric tons CH4/metric ton of fuel)
                                           1800    1860    1890    1920    1950    1980
Anthracite (a)                              -      0.005   0.006   0.007   0.007   0.007
Appalachian bituminous (underground) (a)   0.005   0.005   0.005   0.005   0.005   0.005
Bituminous, US average                      -       -      0.005   0.005   0.005   0.005
Coking (b)                                  -       -      0.270   0.054   0.030    -
Gas (c)                                     -      0.30    0.25    0.22    0.20     -
Gas distribution (d)                        -       -      0.03    0.02    0.01    0.01

a. Emissions coefficients for coal are calculated on the basis of an assumed density of 0.714 kg/m³ for methane, and gas adsorption of 10 litres/kg for anthracite and Appalachian bituminous coals, and 7 litres/kg for average US bituminous coals.

b. Based on coal used for coking.

c. Based on unaccounted potential production of associated gas.

d. Based on gas marketed.
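The arithmetic behind note (a) can be made explicit; a minimal sketch, assuming only the methane density and adsorption values stated in the note:

    # Conversion behind note (a) of table 7: adsorbed gas volume
    # (litres CH4 per kg coal = m3 per tonne) times methane density
    # (0.714 kg/m3) gives tonnes CH4 per tonne of coal mined.
    CH4_DENSITY = 0.714  # kg/m3, as assumed in note (a)

    def emission_coefficient(adsorption_litres_per_kg):
        m3_per_tonne = adsorption_litres_per_kg  # 1 litre/kg == 1 m3/tonne
        kg_per_tonne = m3_per_tonne * CH4_DENSITY
        return kg_per_tonne / 1000.0             # tonnes CH4 per tonne coal

    print(emission_coefficient(10))  # anthracite/Appalachian: ~0.007
    print(emission_coefficient(7))   # average US bituminous:  ~0.005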

This substitution took place about a century later in the United States because of the greater availability of wood for charcoal manufacturing in the eastern US, as discussed previously. Before the 1830s, almost all pig iron was made with charcoal. In the 1830s, ironmakers began using mineral fuel in the iron-making process, but it was primarily anthracite rather than bituminous coal. By 1854, the first year for which aggregate statistics are available, pig iron made with anthracite constituted 45 per cent of the total pig iron produced in the country, while that made with bituminous coal only furnished 7.5 per cent. By 1880, however, the percentage of pig iron made with bituminous coal and coke had reached 45 per cent, mixed anthracite and coke provided the fuel for 42 per cent, while the remaining 13 per cent was made with charcoal. One state alone, Pennsylvania, provided 84.2 per cent of US coke production in that year. By 1911, bituminous coal and coke provided the reducing agent for 98 per cent of the pig iron manufactured (Temin, 1964; Warren, 1973).

The first method of making coke was copied from that used to prepare charcoal. Coal was heaped in piles or rows on level ground and covered with a layer of coal dust to minimize airflow. Once the process had been started (with the help of wood), the heat drove off the volatile gases, consisting of methane and ethane plus some ammonia and hydrogen sulphide (H2S). These gases burned at the surface of the pile, which provided heat to keep the process going. When the gaseous matter had been used up, the heap was smothered with a coating of dust or duff, then cooled by wetting, leaving a silvery white residue high in carbon. If a higher, drier heat was applied, the hydrogen sulphide gas was driven off but the sulphur remained and combined with the carbon. No attempt was made to capture any of the escaping gases. The time necessary for coking a heap was usually between five and eight days. The coke yield was approximately 59 per cent of the original mass (Binder, 1974). However, no information is available concerning the total amount of coke produced by this crude process.

By the late nineteenth century, coke was produced mainly in the so-called beehive coke oven. Beehive coke was supposedly first made in western Pennsylvania in 1817; coke iron was produced for the first time in 1837, also in western Pennsylvania, from high-quality coking coal from the Connellsville seam (Warren, 1973). Extensive use of coke in the iron-making process, however, did not begin until after the Civil War. In 1855, there were 106 beehive ovens in the country; by 1880 there were 12,372 ovens in 186 plants, and by 1909 the maximum was reached with 103,982 ovens in 579 plants. By 1939, the number of beehive ovens had shrunk to 10,934 in 76 plants. In terms of their distribution, initially almost all of the beehive ovens were in the so-called Pittsburgh coal bed area located in western Pennsylvania and northern West Virginia. As late as 1918, over half the ovens in the country were still in this region (Eavenson, 1942).

Beehive ovens were arranged in single stacks or in banks of single or double rows. Most late-nineteenth-century ovens were charged with coal delivered by a car running on tracks above the ovens. Before charging, the ovens were preheated by a wood and coal fire. After the coal had been charged, the front opening was bricked up, with a 2- or 3-inch opening left at the top.

The coking process proceeded from the top downward, with the required heat for the coking process produced by the burning of the volatile byproducts escaping from the coal. When no more volatile matter was escaping, the coking process was complete. The brickwork was removed from the door, and the coke was cooled with a water spray and then removed from the oven by either hand or mechanical means (Wagner, 1916). The yield of high-class Connellsville coal coked in beehive ovens in 1875 was 63 per cent (Platt, 1875); in 1911 the US Geological Survey reported that the average yield nationally for beehive ovens was 64.7 per cent (Wagner, 1916).

Still, no attempt was made to capture and utilize the valuable by-products resulting from the beehive coking process. The maximum tonnage of coal utilized in the beehive process was 53 million tons in 1910. If it is assumed that 10,000 cubic feet of gas can be produced from each ton of bituminous coal, then a potential of 530 billion cubic feet of gas that could have been utilized for various heating and lighting processes was theoretically available from the beehive ovens. (Only a fraction of this was needed to provide heat for the process.) In addition, it is estimated that 400 million gallons of coal tar, nearly 150 million gallons of light oils, and 600,000 tons of ammonium sulphate - an important fertilizer - were also wasted. Of course, capturing these by-products depended on the availability of a feasible and economical technology as well as on markets for the products (Schurr and Netschert, 1960).
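A quick check of the gas figure above, taking the stated assumptions at face value:

    # 53 million tons of coal coked in beehive ovens in 1910, at an
    # assumed 10,000 cubic feet of gas per ton of bituminous coal.
    coal_tons = 53e6
    gas_per_ton_cf = 10_000
    print(f"{coal_tons * gas_per_ton_cf:.3g} cubic feet")  # 5.3e+11, i.e. 530 billion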

There were some limited attempts to recapture by-products in the years before the Civil War. The so-called Belgian or retort oven resulted in the recovery of some by-products and was utilized primarily for low-volatile or "dry" coals. It had been pioneered by Belgian, German, and French engineers and the technology was gradually applied in the American coal fields. Retort ovens generated a higher coke yield per ton of coal than the beehive ovens (average 70 per cent), produced valuable by-products, and provided for more rapid coking. However, the process was much more expensive than the beehive oven since the coals used had to be crushed, sorted, and cleaned before coking. A number of retort ovens were used by the Cambria Iron and Steel Works at Johnstown, Pennsylvania, in the 1870s. But the extensive adoption of the by-product oven in the United States did not occur until after 1900 (Warren, 1973).

By-product coke ovens constructed during the first decades of the twentieth century were of two types: the horizontal flue construction of Simon-Carves or the vertical flue of Coppee. In both cases the coking chamber consisted of long, narrow, retort-shaped structures built of firebrick and located side by side in order to form a battery of ovens. The retorts were usually about 33 feet long, from 17 to 22 inches wide, and about 6½ feet high. The oven ends were closed by fire-brick-lined iron doors luted with clay to form a complete seal. The heat required for distillation was supplied by burning a portion of the gas evolving from the coal in flues which surrounded the oven. Some types of ovens constructed at the beginning of the century employed the recuperative principle for preheating the air for combustion, but most used a regenerative chamber to conserve heat better. The yield of by-products was determined by the quality and quantity of coke desired (Wagner, 1916).

The use of the by-product oven was originally limited by the lack of developed markets for by-products to offset the higher capital costs of the process. Thus, beehive coke produced from high-quality Connellsville coal long maintained a cost advantage for local iron smelters. Utilization of some of the by-products, especially the high-calorie coke-oven gas, to provide supplementary heat (e.g. for "soaking" pits or rolling mills) in the integrated iron and steel works themselves finally reduced the costs associated with the coking process below that required to produce beehive coke (Meissner, 1913). The result was a large expansion of the use of by-product ovens located at or near integrated steel mills, especially after the First World War (Warren, 1973). Fractional by-product recovery from all coking operations in the United States is shown in figure 8.

Emissions of methane to the atmosphere from by-product ovens can be assumed to be rather low. Emissions from beehive ovens can be presumed to correspond more or less to the methane content of coke-oven gas that was recovered from by-product ovens. The methane content of coking coal (based on average recovery from by-product ovens) can be taken as 27 per cent of the weight of the original coal.

Oil and gas drilling

Natural gas is found in distinct gas reservoirs or is present with crude oil in reservoirs. Two types of gas can be distinguished on the basis of their production characteristics: (1) non-associated gas, which is free gas not in contact with crude oil, or where the production of the gas is not affected by the crude oil production; (2) associated gas (also called "casinghead" gas), which is free gas in contact with crude oil that is significantly affected by the crude oil production; it occurs either as a "gas cap" overlying crude oil in an underground reservoir or as gas dissolved in the oil, held in solution by reservoir pressure (Schurr and Netschert, 1960).

Natural gas was encountered in the United States early in the nineteenth century in the drilling of water wells and brine wells. It was not put to any practical use until 1824, when it was utilized for illumination and other purposes in Fredonia, New York. Systematic exploitation of the resource, however, whether for domestic or industrial purposes, did not occur until after the middle of the century and primarily in connection with drilling for oil (Stockton et al., 1952). Oil was first discovered in sizeable quantities in 1859 in western Pennsylvania, and the oilfields of the Appalachian area in the states of New York, Ohio, Pennsylvania, Indiana, Kentucky, and West Virginia were the first to be developed. Natural gas was "associated" with these oilfields, but "non-associated" gas wells were also regularly discovered in these areas (Henry, 1970). Other nineteenth-century discoveries of oil and natural gas were made in California, Kansas, Arkansas, Louisiana, Texas, and Wyoming.

Fig. 8 By-product recovery from coke in the US (Source: US Bureau of Mines)

Statistics on sources of gas (gas wells and oil wells) have been kept in the United States only since 1935. The fraction attributable to gas wells producing no oil (non-associated gas) has been rising almost continuously since the statistics have been kept. Fitting a logistic curve to the data and extrapolating backward in time suggests that the "non-associated" fraction might have been about 10 per cent in 1860 when oil production began (see figure 9). It is clear from anecdotal evidence that some wells at least were producing gas alone as early as 1870 (Henry, 1970).
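The backcasting procedure just described can be sketched as follows. The data points below are invented placeholders standing in for the post-1935 statistics, since the actual series is not reproduced here; the point is only the mechanics of fitting a logistic curve and reading off its 1860 value.

    # Sketch of the logistic backcast of the non-associated gas share.
    # Years and shares are placeholder values, not the cited US data.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, k, t0):
        return 1.0 / (1.0 + np.exp(-k * (t - t0)))

    years = np.array([1935.0, 1945, 1955, 1965, 1975])  # placeholders
    share = np.array([0.55, 0.62, 0.70, 0.78, 0.84])    # placeholders

    (k, t0), _ = curve_fit(logistic, years, share, p0=(0.03, 1930.0))
    print(f"implied non-associated share in 1860: {logistic(1860, k, t0):.2f}")

With these placeholder inputs the fitted curve implies a share of roughly 0.07-0.10 in 1860, consistent with the 10 per cent figure cited in the text.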

Oilfield gas was first put to use in the areas around the wells, where it lighted the oil derricks and raised steam for the well-pumping engines. This utilization was dependent on the invention of separators or gas traps to separate the oil from the gas. These were first developed in about 1865, with the subsequent invention of a variety of separators (Kiessling et al., 1939). As late as 1950, the largest single class of natural gas consumption was in gas and oilfield operations (18.9 per cent or 1,187 billion cubic feet) (Stockton et al., 1952).

Fig. 9 Natural gas production from oil and gas wells (Source: Nakicenovic and Gruebler, 1987)

After field use, another important industrial use of natural gas that did not require transmission over long distances was the manufacture of carbon black. Carbon black plants were located near the wells, where they took advantage of large volumes of cheap gas not usually tapped by transmission lines. Carbon black manufacture (for ink) began in Cumberland, Md., in 1870 (Henry, 1970). It was widespread in the Appalachian fields during the late nineteenth century, although most plants were gone by 1929 (Thoenen, 1964). In 1950, about 93 per cent of the industry was located in the south-western states of Texas, Louisiana, New Mexico, and Oklahoma. An upsurge in demand for carbon black occurred after 1915, when it was found that adding carbon black to natural latex greatly increased the strength and durability of rubber products such as tyres (Stockton et al., 1952).

Where municipal markets were relatively close, the gas was piped to them. As early as 1873, for instance, several towns in the oil region of New York State, Pennsylvania, Ohio, and West Virginia - including Buffalo, N.Y., and Erie, Pa. - were furnished with natural gas from nearby wells through pipelines (Henry, 1970). The gas was used to light the streets with large, flaring torches, and to light homes and provide fuel for cooking stoves. Even with this use, an excess usually had to be vented from a safety valve (Henry, 1970; Pearse, 1875). Industrial uses also occurred where firms were relatively close to sources, especially in Erie, Pa.

The lack of markets and the limitations of transmission technology meant that huge amounts of gas were wasted (Henry, 1970; Schurr and Netschert, 1960). Waste took place in the form of either venting or flaring. Venting is defined as the escape of gas without burning, while flaring is defined as escape with burning. In many cases, gas that was accidentally ignited burned for some time before being extinguished. In Pennsylvania, for instance, a gas well struck in 1869 burned for about a year (Pearce, 1875); the Cumberland well burned for two years before being utilized for carbon black (Henry, 1970). It was estimated that losses and waste of gas in oilfields in the early part of the twentieth century were as high as 90 per cent of all gas associated with oil production.

Many gas wells were left to "blow," especially because of the expectation that oil would flow when the gas head had gone (Stockton et al., 1952; Williamson and Daum, 1959). West Virginia was an important oil and gas producer at the turn of the century, and in 1903 it was estimated that during the previous five years 500 million cubic feet of gas had been "allowed to escape into the air" each day from the state wells (Thoenen, 1964). In Illinois, in 1939, it was estimated that 95 per cent (134 billion cubic feet) of the gas associated with the new Salem oilfield in the state was flared. From 1937 to 1942 it was estimated that 416 billion cubic feet of gas were flared in Illinois (Murphy, 1948). In other cases, discovery wells in gas fields were capped or plugged and "forgotten." In the case of the early fields, many wells were inadequately plugged (Thoenen, 1964; Prindle, 1981). Losses from such wells cannot be estimated with any accuracy, although the quantity lost was probably quite small by later standards.

The best-known example of waste was in the Texas fields in the 1930s. When the natural gas in an unassociated well is allowed to expand rapidly or is cooled, somewhat less than 10 per cent of the gas condenses into a liquid (natural gasoline) suitable for use in vehicles. The phenomenon had first been observed around the turn of the century in the Appalachian fields, where so-called "drip" or casinghead gasoline was often considered a nuisance. The invention of the internal combustion engine, however, provided a market for such "natural" gasoline, and a number of small gasoline plants were established starting in 1910 in the producing fields. In West Virginia, the utilization of natural gas in the making of casinghead gasoline was viewed as a "great force in the conservation of natural gas" (Thoenen, 1964).

In Texas in the 1920s and 1930s, when markets for natural gas were still quite limited, natural gasoline became a most valuable product. The process used in Texas to produce gasoline from natural gas wells was known as stripping gas, and numerous companies engaged in the practice of marketing the stripped condensate and then venting or releasing the remaining 90 per cent to the atmosphere or flaring it (Prindle, 1981). One historian estimated that in 1934 approximately a billion cubic feet of unassociated gas was stripped and released or flared daily in the Texas Panhandle alone (Prindle, 1981).

The possibilities of recovering and marketing even a small part of the associated gas were small in cases where the rate of production could not be controlled. The gas from these wells was almost invariably flared. Waste was probably most severe in the East Texas oilfields. During the early 1940s it was estimated that one-and-a-half billion cubic feet of casinghead gas was flared each day from the larger fields. Motorists could supposedly drive for hours at night near the Texas fields without ever having to turn on their automobile lights because of the illumination from the casinghead flares (Prindle, 1981).

State legislation was an obvious approach to the conservation of natural gas. At one time or another almost all states involved in petroleum and natural gas production passed conservation laws (Murphy, 1948). Pennsylvania had the first legislation, passed in 1868, and West Virginia had a law in 1891. The West Virginia law, for instance, applied to all wells producing petroleum, natural gas, salt water, or mineral water. In regard to natural gas, owners were required to "shut in and confine" the gas and to plug the well after exhaustion. There was no provision, however, to prevent venting or flaring (Thoenen, 1964). In Texas, an 1899 law prohibited the flaring of unassociated gas. A 1925 law, however, specifically permitted the flaring of associated gas from an oil well (Prindle, 1981). But since it was often difficult to define clearly the difference between a gas and an oil well, enforcement was difficult. It was not until the mid-1930s that the Texas law was successfully enforced through the reclassification of hundreds of oil wells as gas wells and the prohibition of flaring (Prindle, 1981). Still, a few states that had major oil fields, such as Illinois, had no gas and oil conservation legislation as late as the Second World War (Murphy, 1948).

Considerable amounts of natural gas were conserved and utilized by technological developments. Methods of capturing associated natural gas, for instance, were developed and the gas used to run pumps and lights at the works. Other developments, such as the Starke Gas Trap to purify wet gas, occurred over the years, and also led to conservation (White, 1951). An important development was the application of high-pressure compressors to the extraction of gasoline from casinghead natural gas. These compressors made it possible for field operators to develop small gasoline plants on their producing fields. After 1913 the absorption process increasingly replaced the compressor-condensation system as a means of extracting gasoline from both dry and wet natural gases. Between 1911 and 1929 (the peak year), the volume of natural gas liquids produced increased from 3,660,000 gallons to 72,994,000 gallons (Thoenen, 1964).

The most important factor reducing the waste of natural gas has been the development of long-distance pipelines to available markets. The first major attempt was by the Bloomfield & Rochester Gas Light Co. in 1870, which organized the piping of gas 40 km from a well in Bloomfield, N.Y., to the city of Rochester. The gas was pronounced inferior by consumers, however, resulting in the failure of the company. The first successful cast-iron gas transmission pipeline, in 1872, linked a well in Newton, Pa., with nearby Titusville (about 9 km). In 1875, natural gas was piped 27 km from a well near Butler, Pa., to ironworks at Sharpsburg and Allegheny, near Pittsburgh (Pearce, 1875). In 1883, the Chartiers Valley Gas Co. was formed to supply the city of Pittsburgh with gas from wells near Murrysville, about 25 km away. By the following year 500 km of pipelines were in place, supplying natural gas to the city. The field, however, was exhausted in little more than a decade and natural gas was replaced by coal in the city's growing steel industry (Tarr and Lamperes, 1981).

High-pressure technology was first used in 1891 by the Indiana Natural Gas and Oil Company to bring gas 120 miles from northern Indiana gas fields to Chicago. By the 1920s, integrated companies that combined production, transmission, distribution, and storage facilities had been developed in the Appalachian area, in the Midwest, and in California, Oklahoma, and Texas. By 1925, pipelines as much as 300 miles in length had been constructed and were serving 3,500,000 customers in 23 states (Stockton et al., 1952). Most of the interstate movement of natural gas took place in the north-eastern United States, where densely populated urban areas were located near the Appalachian fields. In 1921, 150 billion cubic feet of gas moved interstate, of which approximately 65 per cent was produced in West Virginia and flowed mostly into Pennsylvania and Ohio. Less than 2 per cent of the total interstate movement of gas originated in Texas (Sanders, 1981).

During the late 1920s, important metallurgical advances as well as improvements in welding and compression methods resulted in the possibility of constructing much longer and bigger pipelines. Most critical was the development of continuous butt-welding and of seamless tubes made of steel with greater tensile strength. Also important were improvements in methods of compression, which made it possible to move higher volumes of gas without recompression. By 1934, approximately 150,000 miles of field, transmission, and distribution lines existed in 32 US states, with some transmission lines of as long as 1,200 miles (Sanders, 1981; Stockton et al., 1952; Schurr and Netschert, 1960).

The post-Second World War period saw a great expansion of long-distance pipelines, with 20,657 miles of natural gas lines constructed between 1946 and 1950. Probably most significant was the conversion to natural gas transmission of two long-distance pipelines (the "Big Inch" and "Little Big Inch"), built during the war by the government to transport petroleum. These pipelines were the first connecting the East Texas field through the Midwest to Appalachia and the Atlantic seaboard. By 1949, gas from the Southwest was supplying 60 per cent of the Columbia Gas Company's 2 million customers in Pennsylvania, Ohio, and West Virginia. Markets for gas clearly meant the reduction of waste and increased resource utilization. At the end of 1946, 39 per cent of Texas gas wells were shut in because of the lack of pipelines, but by 1951 this number had been reduced to 25 per cent (Sanders, 1981; Stockton et al., 1952; Schurr and Netschert, 1960).

An important method of dealing with the problem of seasonal peaks in natural gas utilization has been the development of storage facilities. The first successful storage facility was developed in Ontario in 1915 and applied in a field near Buffalo, New York. Large-scale underground storage was initially developed in the Menifee field of eastern Kentucky and its use has spread since then. In 1940, there were only 19 underground storage pools in operation, but by the mid-1950s the number had grown to nearly 200. To a large extent, storage took place in consumer rather than producer states. Storage was especially important for states that had developed a dependence on natural gas through local supplies that had later become depleted. In 1949, for instance, Pennsylvania, Ohio, Michigan, and West Virginia were the leading states in terms of amount of gas stored and withdrawn from storage (Stockton et al., 1952; Schurr and Netschert, 1960).

With regard to methane emissions, two questions now arise:

1. How much natural gas was potentially recoverable from oil (or gas) wells that were opened prior to the development of significant markets for the gas?

2. How much of this gas was vented or flared? (As already explained, flaring converts most methane to harmless carbon dioxide and water vapour.)

As noted previously, the discovery of natural gas was a by-product of petroleum exploration. Gas was not sought independently until 20 years or so ago. Although the gas content of petroleum varies widely from field to field, it is likely that the potential gas output of petroleum wells is, on average, proportional to the petroleum output.

The "proportionality" hypothesis above implies that the gas/oil recovery ratio should, on average, have gradually increased over time approaching a limit as gas increased enough in commercial value to justify its complete recovery. One would also expect the relative quantity of natural gas recovered to increase, relative to oil, as markets for gas developed. A gas pipeline distribution system was an essential precondition for an increasing demand for gas. In actuality, the gas/oil output ratio has increased, on average, since 1882 - when recovery began - but has done so quite unevenly (fig. 10).

Fig. 10 Gas/oil production ratios, US and Northeast US

In the Northeast (mainly Pennsylvania), the gas/oil ratio rose gradually to about 2:1 in the early 1920s, then moved down to a trough in the 1930s, followed by a second, higher peak in the late 1950s and a still higher peak in 1980 of nearly 5:1. In the case of the United States as a whole, the initial peak recovery rate was earlier (c. 1900) and lower (around 0.4), and was followed by a trough in the 1930s and 1940s.

The troughs between the first and second peaks are difficult to explain in terms of the proportionality hypothesis. It is hard to believe that the troughs are accidental or that a physical phenomenon (such as declining pressure) could be responsible. Instead, an economic explanation for the troughs seems to be most plausible. The demand for oil outstripped the demand for gas in the 1920s and early 1930s for two reasons. Demand for petroleum products rose sharply because of the fuel needs of a fast-growing fleet of automobiles and trucks. On the other hand, demand for gas was limited by its lack of availability in urbanized areas, especially in the north-eastern part of the country. During that period the technology of large-scale gas distribution pipelines was still being developed. Moreover, finance was a problem: long-term financing could only be obtained (e.g. from life insurance companies) on the assurance of long-term supply contracts at fixed prices. But gas supplies were regulated (by the Texas Railroad Commission) in order to conserve oil, not to sell gas. This conflict of interest required a number of years to resolve. Until the interstate gas pipeline system was created, only relatively local areas near the wells could be supplied with natural gas. This probably accounts for the lag in gas demand growth.

Figure 11 is constructed by using the US data for associated gas after 1935. For earlier years total (gross) gas production is multiplied by the imputed fraction associated with oil wells taken from figure 9, statistically smoothed to eliminate some of the scatter in early decades. It suggests two things. First, it implies that most of the increase in the gas/oil extraction ratio (after 1935 at least) is attributable to non-associated gas wells. Second, it implies that the ratio of associated gas to oil production peaked around 1900 (at about 0.35, plus or minus 0.10). The dip in apparent associated gas/oil ratio after 1960 coincides with the period of gas scarcity due to low (regulated) prices.

A modified "proportionality" hypothesis seems to fit the facts best, that is, that the (average) potential production of associated gas is roughly constant over time for a given area. The difference between (imputed) output of associated gas and "potential" output of associated gas is unaccounted for. This must have been used on site (e.g. for carbon black), vented or flared. Actually, use of gas to manufacture carbon black is roughly equivalent to flaring, in the sense that combustion is deliberately inefficient to maximize soot (unburned carbon) production. We can assume that unaccounted-for gas was mostly (90 per cent) flared for safety and/or economic reasons or used on site, but that flaring (including gas used in carbon black: production) was only 90 per cent efficient in terms of methane oxidation.

This suggests a total emission factor of 20 per cent, although this estimate must be regarded as somewhat uncertain. The 20 per cent (give or take 10 per cent) of associated gas that is assumed to be vented comes from three sources: (1) blowouts and "gas volcanoes" (occasionally large) and "gushers"; (2) leaks; and (3) small "stripper" wells, for which gas recovery is uneconomic and flaring is unnecessary or impossible.
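The 20 per cent figure follows directly from the assumptions just stated; a worked check:

    # Of unaccounted-for associated gas: 90% flared (or burned for carbon
    # black) at 90% methane-oxidation efficiency, 10% vented outright.
    flared, vented = 0.90, 0.10
    flare_efficiency = 0.90
    escaped = vented + flared * (1 - flare_efficiency)
    print(f"{escaped:.2f}")  # 0.19, i.e. roughly 20 per cent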

Fig. 11 Ratio of associated gas to crude oil, US: raw data compared to model (Sources: (a) data on gas marketed 1882-1889 based on estimates of coal replacement, originally by USGS, cited in Schurr and Netschert, 1960; (b) data 1890-1904 cited by Schurr and Netschert, 1960, and attributed to Minerals Yearbook, but disagreeing with figures in Historical Statistics, also attributed to Minerals Yearbook.)

Natural gas distribution

Methane losses also occur in gas distribution, mostly at the local level. In this connection, we have to point out that the difference between "net" production (after gas used for oil well repressurization) and "marketed" gas is not a loss. In fact, most of this statistical difference is attributable to gas used as fuel to operate compressors in the pipeline system. The actual loss rate is probably of the order of 1 per cent of the quantity marketed, and is almost certainly less than 3 per cent. This might be the biggest loss mechanism for natural gas in the United States at present. However, in past years, venting/flaring losses were certainly dominant, as they are today in most of the rest of the world, for instance in Russia.

Methane emission coefficients

Based on the foregoing data and analysis, our "projected" emission coefficients for methane are summarized in table 7. The coefficient for gas venting, which assumes 20 per cent venting of associated gas as argued above, should be regarded as a plausible value only. Actual losses of methane from this source could be as much as 50 per cent greater, or perhaps as little as half of that. It is, unfortunately, not possible to improve the estimate on the basis of the historical data available to us and summarized in this chapter. A better estimate of the venting/flaring factor would require a more intensive search of archival sources, supplemented by historical research on the technology of oil/gas drilling, pumping, separation, and flaring.
