Pre-industrial societies relied almost entirely on instantaneous flows of solar energy, and their conversion of the inexhaustible stream of solar radiation was so small as to be almost negligible.
Modern civilization depends on extracting enormous stores of energy, depleting finite reserves of fossil fuels. Although reliance on nuclear fission and on renewable energy sources has been increasing, fossil fuels still accounted for 86% of global primary energy in 2015, only four percentage points less than a generation earlier in 1990.
By drawing on these abundant reserves, society has commanded energy flows on an unprecedented scale, ultimately producing a new high-energy, service-oriented economy, but also many worrying consequences. These include the destabilization of the global biosphere, above all the relatively rapid global warming whose many adverse effects are already apparent.
The best compilations of global statistics show that fossil fuel production has grown exponentially since large-scale extraction began in the 19th century.
Between 1810 and 1910, annual coal extraction rose from 10 Mt to 1 Gt, a hundredfold increase; it reached 1.53 Gt in 1950 and 4.7 Gt in 2000, then peaked at 8.25 Gt before falling to 7.9 Gt in 2015.
Crude oil extraction grew from less than 10 Mt in the late 1880s to more than 3 Gt in 1988, an increase of about 300 times; it stood at 3.6 Gt in 2000 and nearly 4.4 Gt in 2015. Natural gas production increased a thousandfold, from less than 2 Gm3 in the late 1880s to 2 Tm3 in 1991, 2.4 Tm3 in 2000, and 3.5 Tm3 in 2015. Over the course of the 20th century, total global extraction of fossil energy increased 14-fold.
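The multipliers quoted above are simply ratios of final to initial annual output; a quick sketch of the arithmetic, using only the figures given in the text:

```python
# Illustrative arithmetic only: growth multipliers implied by the
# production figures quoted in the text (tonnes and cubic metres).
def growth_factor(start, end):
    """Ratio of final annual output to initial annual output."""
    return end / start

coal_1810, coal_1910 = 10e6, 1e9     # 10 Mt -> 1 Gt
oil_1880s, oil_1988 = 10e6, 3e9      # <10 Mt -> >3 Gt
gas_1880s, gas_1991 = 2e9, 2e12      # <2 Gm3 -> 2 Tm3

print(growth_factor(coal_1810, coal_1910))  # 100.0 (the "hundredfold")
print(growth_factor(oil_1880s, oil_1988))   # 300.0 ("about 300 times")
print(growth_factor(gas_1880s, gas_1991))   # 1000.0 ("a thousandfold")
```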
A better way to trace this expansion is to calculate the real increase in useful energy: that is, to express the growth of the heat, light, and motion actually delivered.
The efficiency of early fossil fuel conversions was quite low (incandescent lamps <2%, steam locomotives <5%, thermal power generation <10%, small coal stoves <20%). Improvements in coal-fired boilers and stoves soon raised these efficiencies. Household stoves and industrial and power-plant boilers converted liquid hydrocarbons with higher efficiency still; only the internal combustion engines of gasoline-fueled passenger cars lagged behind. Natural gas is used with high efficiency whether in blast furnaces, boilers, or gas turbines, usually above 90%, and the same is true of the conversion of primary electricity.

In 1900, the weighted average efficiency of global energy use was no higher than 20%; by 2015, the average conversion efficiency of global fossil fuels and primary electricity had reached 50% of total commercial energy input. Statistics from the International Energy Agency show a global primary energy supply of 18.8 Gt of oil equivalent in 2013 and final consumption of 9.3 Gt of oil equivalent. With the total supply of fossil energy growing 14-fold during the 20th century and the efficiency of its use rising steadily, the supply of useful energy increased more than 30-fold compared with 1900. As a result, in rich countries that already relied on fossil fuels as their main energy supply in 1900, each unit of primary energy now provides two or even three times as much useful energy as it did a century ago; in low-income countries, which came to be dominated by modern energy only in the second half of the 20th century, each unit of primary energy typically provides 5-10 times as much useful energy as a century ago.

The rapid growth of energy use has also raised per capita consumption to unprecedented levels. In foraging societies, energy came almost entirely from the food supply, and average annual consumption did not exceed 5-7 GJ per capita.
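The roughly 50% conversion figure follows directly from the two IEA numbers quoted above; a minimal check of the arithmetic:

```python
# Rough check of the IEA figures quoted in the text: useful (final)
# energy as a share of total primary supply, both in Gt of oil
# equivalent for 2013.
primary_supply_gtoe = 18.8
final_consumption_gtoe = 9.3

share = final_consumption_gtoe / primary_supply_gtoe
print(f"{share:.1%}")  # ~49.5%, consistent with the ~50% conversion figure
```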
During Egypt's New Kingdom, annual per capita consumption did not exceed 10-12 GJ; the best estimate for the early Roman Empire is about 18 GJ per capita per year. Early industrial societies easily doubled traditional per capita energy consumption, with most of the increase coming from manufacturing and transportation powered by coal. The European average around 1500 is estimated at about 22 GJ per person per year, and consumption stagnated at 16.6-18.1 GJ through 1800. Annual per capita consumption in the United States rose from less than 70 GJ in 1820 to 150 GJ in 1910. A century later, annual per capita energy consumption in all wealthy European countries exceeded 150 GJ, while the US average surpassed 300 GJ.

As energy consumption grew, its structure also changed. In foraging societies, food was the only source of energy. In the early Roman Empire, food and feed are estimated to have accounted for about 45% of energy supply; in pre-industrial Europe, for 20%-60%. By 1820 their average share was no more than 30%, and by 1900 their share in Britain and Germany was below 10%. By the 1960s, the share of energy supplied by animal feed had fallen to a negligible level, and in the richest societies food energy accounted for no more than 3%, or even less than 2%, of the total. In these rich countries, industry, transportation, and household fuels and electricity became the main categories of energy consumption. In high-income economies, per capita electricity deliveries rose by two orders of magnitude: by 2010, average annual per capita electricity consumption had reached 7 MWh in Western Europe and 13 MWh in the United States. The contrast in the energy flows directly controlled by individuals is equally impressive.
In 1900, a farmer on the North American Great Plains plowed his wheat fields holding the reins of six large horses. Sitting on a steel seat, often covered in dust, he controlled, with considerable physical exertion, bioenergy of no more than 5 kW. A century later, his great-grandson sits high in the comfortable, air-conditioned cab of a tractor, effortlessly controlling a diesel engine of more than 250 kW. In 1900, an engineer operating a 1 MW coal-fired steam locomotive pulled a train at 100 km/h, the best performance that manual stoking could deliver. By 2000, pilots flew Boeing 747s across continents at an altitude of 11 km, propelled at 900 km/h by four gas turbines delivering 120 MW.

The more concentrated the energy, the stronger the required safeguards. Until the 19th century, a coachman on an intercity carriage typically controlled a steady flow of no more than 3 kW (a carriage drawn by four horses) and carried 4-8 passengers. An intercity jet pilot controls 30 MW of jet engines and carries 150-200 passengers. In handling energies that differ by four orders of magnitude (3 kW versus 30 MW), the consequences of a brief distraction or misjudgment obviously differ enormously. One obvious way to control such risks is electronic control. Electronic controls and continuous monitoring, like the now-ubiquitous computers and mobile electronic devices, have become major new sources of electricity demand.

"Coal-based" industrialization

The term "industrial revolution" is both attractive and misleading. The English origins of the concept go back at least to the late 16th century, but full-scale industrial development in Britain did not begin until 1850.
Even then, traditional craftsmen greatly outnumbered the workers operating machines in factories: the 1851 census showed that Britain had more shoemakers than coal miners and more blacksmiths than ironworkers. Distinctive national characteristics produced very different models of industrialization: France emphasized the development of water power, the United States and Russia long relied on wood, and Japan had a tradition of meticulous craftsmanship. Coal and steam were not originally revolutionary factors in industrialization. Gradually, however, they provided thermal and mechanical energy of unprecedented scale and reliability; industrialization then began to broaden and accelerate, eventually becoming synonymous with expanding consumption of fossil energy. Coal mining was not essential for industrial expansion, but it was undoubtedly essential for its acceleration. A comparison of Belgium and the Netherlands illustrates the effect: the highly urbanized Dutch society, with excellent shipping capacity and relatively advanced commerce and finance, eventually fell behind Belgium, which was initially poor but rich in coal, and which became the most industrialized country on the European continent by the mid-19th century. European regions where coal-based economies took off early include the Rhine-Ruhr region, Bohemia and Moravia in the Habsburg Empire, and Prussian and Austrian Silesia. The coal-based model of industrialization appeared repeatedly outside Western and Central Europe: Pennsylvania with its high-quality anthracite and Ohio with its high-quality bituminous coal became early leaders of American industrialization.
In pre-World War I Russia, the discovery of the rich Ukrainian coalfield in the Donets Basin and the development of the Baku oil fields in the 1870s drove the rapid industrial expansion that followed. Japan's Meiji-era modernization likewise benefited from the coal of northern Kyushu. In 1901, only 48 years after Japan opened its doors, the Yawata Steel Works (the predecessor of Nippon Steel) in northern Kyushu fired up its Higashida No. 1 blast furnace, marking the start of production at Japan's first modern integrated steel plant. India's largest commercial empire, the Tata Group, originated with the blast furnace using Bihar coke established by J. Tata at Jamshedpur in 1911.

Once driven by coal and steam power, traditional manufacturers could produce more and better goods at lower cost, an achievement that was a necessary prerequisite for mass consumption. A cheap and reliable supply of mechanical energy allowed processing techniques to grow ever more complex, which in turn made the manufacture of parts, tools, and machines more sophisticated and specialized. New industries powered by coal, coke, and steam supplied goods to domestic and international markets at unprecedented rates. After 1810 came the manufacture of high-pressure boilers and pipes; after 1830, rapidly rising output of rails, locomotives, and freight cars; after 1840, growing production of water turbines and ship propellers; after 1850, vast new markets for steel hulls and submarine telegraph cables. Commercial methods of producing cheap steel, first the Bessemer converter after 1856 and then the Siemens-Martin open-hearth furnace in the 1860s, created still larger markets for manufactured goods, from tableware to railroad tracks, from iron plows to structural beams. Rising fuel inputs and the replacement of tools by machines turned human muscle into a marginal source of energy.
Human labor shifted steadily toward supporting, controlling, and managing production processes. Analysis of a century and a half of censuses and labor force surveys in England and Wales illustrates the trend well. In 1871, about 24% of workers were engaged in "muscle-power" jobs (agriculture, construction, and industry) and only about 1% in "caring" jobs (health and education, child and home care, welfare work). By 2011, "caring" work accounted for 12% and "muscle-power" work for only 8%, and many of today's "muscle-power" jobs (such as cleaning, housekeeping, and routine assembly-line work) have been largely mechanized.

Even as the importance of human labor began to decline, systematic studies of individual tasks and of complete industrial processes showed that labor productivity could still be raised substantially by optimizing, rearranging, and standardizing muscular activity. Frederick Winslow Taylor (1856-1915) pioneered this type of research. Beginning in 1880, he spent 26 years quantifying all the key variables involved in cutting steel, reducing his findings to a set of simple rules of calculation, and summarizing his general conclusions on efficiency management in The Principles of Scientific Management. A century later, that book still guides some of the world's most successful manufacturers of consumer goods.

The power revolution

When the steam engine was eclipsed by electrification, a new era of industrialization arrived. Electricity is a superior form of energy, and not only in comparison with steam power. Only electricity can be delivered instantly and effortlessly, and it can serve every final use (except flight) with great reliability. At the flip of a switch, electricity becomes light, heat, kinetic energy, or chemical energy.
Electric current is easily regulated, permitting unprecedented precision, speed, and process control. In the 20th century, global electricity output grew even faster than fossil fuel extraction, whose annual growth rate was about 3%. In 1900, less than 2% of fuel was converted to electricity; by the end of the 20th century the share had risen to nearly 25%. In addition, new hydroelectric plants (developed on a large scale after World War I) and new nuclear stations further expanded generation. From 1900 to 1935, global electricity supply grew by about 11% per year; annual growth above 9% continued into the early 1970s; for the rest of the century, generation grew at about 3.5% per year, largely the result of slower demand growth and higher conversion efficiencies in high-income economies.

The replacement of water wheels by steam engines had not changed the way mechanical energy was transmitted in industrial production, so the substitution had almost no effect on the overall layout of factories. The space below factory ceilings remained crammed with countershafts connected to mainshafts, transmitting power to individual machines through belts. The first electric motors drove shorter shafts and could power only small groups of machines, but after 1900 independent unit drive quickly became the norm. From 1899 to 1929, the total installed power of machinery in US manufacturing roughly tripled, while the capacity of industrial motors grew nearly 60-fold to provide more than 82% of available power, a share that had been below 5% at the end of the 19th century. Starting in the late 1890s, electric motors thus took only about 30 years to essentially replace steam-driven and directly water-driven machines.
The impact of this efficient, reliable unit drive went far beyond eliminating the clutter of overhead shafting (and the noise and accident risks that inevitably came with it). Removing the drive shafts freed the ceiling, allowing the installation of better lighting and ventilation and making factory layouts more flexible and easier to expand. The efficiency of electric motors and the precise, flexible, independent control of power in a better working environment ultimately raised labor productivity enormously.

Electrification also created a host of specialized industries, beginning with the manufacture of light bulbs, generators, and transmission lines (after 1880), followed by the production of steam turbines and water turbines (after 1890). After 1920 came high-pressure boilers burning pulverized fuel, and a decade later the construction of huge dams using enormous quantities of reinforced concrete. After 1950, air pollution control equipment came into general use, and the first nuclear power plants entered service before 1960. Rising demand for electricity also stimulated geophysical prospecting and the extraction and transport of fuels. A great deal of basic research in material properties, engineering control, and automation was needed to produce better steels, other metals, and alloys, and to improve the reliability and service life of the expensive equipment used to extract, transport, and convert energy.

The availability of reliable, cheap electricity transformed almost every industrial activity. The classic but now outdated (and rather rigid) Ford-style assembly line was developed from the conveyor belt introduced in 1913; the modern, flexible Japanese-style line relies on just-in-time delivery of parts and components and on workers with a clear division of labor.
This system, adopted in Toyota's plants, combines elements of American experience with Japanese practices and original ideas. The Toyota Production System rests on continuous product improvement and a relentless pursuit of quality control; the common thread running through all these practices is the minimization of wasted energy. Cheap electricity also spawned entirely new metallurgical and electrochemical industries. Electrolysis of alumina (Al2O3) dissolved in molten cryolite (Na3AlF6) made it possible to extract and smelt aluminum on a large scale. Since the 1930s, electricity has been indispensable for synthesizing and shaping a growing variety of plastics and, more recently, a new class of composite materials (above all carbon fiber). The energy cost of these materials is about three times that of aluminum, and their biggest commercial use is replacing aluminum alloys in commercial aircraft: the latest Boeing 787 is about 80% composite by volume. While new lightweight materials have displaced steel in many applications, steelmaking itself increasingly uses electric arc furnaces, and new, lighter but stronger steels have found many uses, especially in the automotive industry. Without electricity there would be no large-scale precision machining to tight tolerances, no jet engines, and none of today's commonplace medical diagnostic equipment; there would certainly be no precise electronic controls, let alone computers and the billions of telecommunication devices spread around the world.

The evolution of smelting

Massive flows of energy and materials underlie the process of industrialization; metal remains the quintessential industrial material, and iron, in the form of its many steels, remains the dominant metal. In 2014, steel output was nearly 20 times the combined output of the four leading non-ferrous metals (aluminum, copper, zinc, and lead).
Steel production is still dominated by the smelting of iron ore in blast furnaces, followed by steelmaking in top-blown oxygen converters, together with the melting of recycled scrap in electric arc furnaces. Without larger and more efficient blast furnaces, the huge increase in steel output would have been impossible. Similarly, improvements in steelmaking lay not only in lower energy consumption but also in higher yields. The yield of the early Bessemer converters (iron converted into steel) was at first below 60%, later rising above 70%; open-hearth furnaces eventually reached about 80%. Top-blown oxygen converters, introduced in the 1950s, now achieve yields of up to 95%, and electric arc furnaces as much as 97%. Electric arc furnaces today consume less than 350 kWh per tonne of steel produced, compared with more than 700 kWh in 1950. These gains have been accompanied by lower emissions: between 1960 and 2010, carbon dioxide emissions per tonne of molten iron produced in the United States fell by nearly 50%, and dust emissions by 98%. Continuous casting of molten steel, an innovation that replaced the traditional casting of ingots, reduced energy costs further.

The resulting growth of output has been enormous, exponential even in per capita terms. In 1850, before the modern steel industry began, annual steel output was less than 100 kt, an average of only 75 g per person, all of it made by hand. In 1900, total output was 30 Mt, a global average of 18 kg per capita. By 2000, output was 850 Mt, or 140 kg per capita, and by 2015 global production reached 1,650 Mt, or 225 kg per capita, about 12.5 times the 1900 level.
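The per-capita figures above can be cross-checked directly from the production totals; a quick illustrative sketch using only the numbers quoted in the text:

```python
# Cross-checking the per-capita steel figures quoted in the text.
output_2015_mt = 1650        # total crude steel, Mt
per_capita_2015_kg = 225     # kg per person, 2015
per_capita_1900_kg = 18      # kg per person, 1900

# 1 Mt = 1e9 kg, so total kg / (kg per person) = implied population
implied_population = output_2015_mt * 1e9 / per_capita_2015_kg
print(implied_population / 1e9)                  # ~7.3 billion people
print(per_capita_2015_kg / per_capita_1900_kg)   # 12.5, "about 12.5 times"
```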
It is estimated that in 2013 global steel production required at least 35 EJ of fuel and electricity, just under 7% of the world's total primary energy supply, making the steel industry the world's largest industrial consumer of energy. By comparison, all other industries together consume 23% of the total, transportation 27%, and the residential and service sectors 36%.

By far the most important innovation in non-ferrous metallurgy has been the advance of aluminum smelting. Aluminum was first isolated in 1824, but an economical process for producing it in quantity did not appear until 1886, when Charles M. Hall in the United States and P. L. T. Héroult in France independently invented processes based on the electrolysis of alumina. At the time, extracting aluminum required at least six times as much energy as smelting steel, and even after large-scale electricity generation began, the aluminum industry developed slowly. In the 1880s, the specific electricity consumption of aluminum smelting exceeded 50,000 kWh per tonne; steady improvement of the Hall-Héroult process cut this rate by more than two-thirds by 1990. The expanding use of aluminum was initially driven by aviation, as metal fuselages replaced those of wood and fabric in the late 1920s; demand then soared with the construction of fighters and bombers during World War II. Since 1945, wherever design demands light weight and high strength, aluminum and its alloys have displaced steel, in applications ranging from automobiles to railway hopper cars to spacecraft, though new lightweight steel alloys also compete in these markets.
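The "more than two-thirds" reduction quoted above implies an upper bound on the 1990 smelting rate; a small illustrative calculation (the bound follows from the text's wording, not from a measured 1990 value):

```python
# Upper bound on 1990 aluminum smelting electricity use implied by the
# text: the 1880s rate of ~50,000 kWh/t cut by "more than two-thirds".
early_kwh_per_t = 50_000
reduction = 2 / 3

implied_1990_max = early_kwh_per_t * (1 - reduction)
print(round(implied_1990_max))  # ~16,667 kWh/t at most
```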
Since the 1950s, titanium has replaced aluminum in high-temperature applications (above all supersonic aircraft), although producing titanium is at least three times as energy-intensive as producing aluminum. And although a society fixated on the latest advances in electronics easily overlooks the fundamental importance of mass-produced metals, there is no doubt that modern manufacturing has been transformed by its ever-closer integration with modern electronics. The combination brings unprecedented precision of control and flexibility, greatly widens the choice of feasible designs, and changes how goods are marketed, distributed, and monitored in service. A global comparison shows that in 2005 the services that American manufacturers purchased from outside firms amounted to 30% of the value added of industrial products, a share similar to that in the major EU economies (23%-29%). In 2008, service-related jobs accounted for slightly more than half (53%) of all manufacturing jobs in the United States, 44%-50% in Germany, France, and the United Kingdom, and 32% in Japan.

A "geophysical experiment"

The supply and use of fossil fuels and electricity are the most important human contributions to air pollution and greenhouse gas emissions, as well as major causes of water pollution and of changes in land use. The combustion of any fossil fuel, involving as it does the rapid oxidation of carbon, adds to carbon dioxide emissions. Methane (CH4), a more potent greenhouse gas, is released during the production and transport of natural gas, and fossil fuel combustion also releases small amounts of nitrous oxide (N2O). In the past, coal combustion was the main source of particulate matter and of sulfur and nitrogen oxides (SOx and NOx).
Today, most stationary emissions of these pollutants are controlled by electrostatic precipitators and by desulfurization and NOx-removal processes, yet emissions from coal combustion can still do serious damage to health. Fuels and electricity also cause further pollution and ecosystem degradation indirectly, most obviously through industrial production (above all ferrous metallurgy and chemical syntheses), agricultural chemicals, urbanization, and transportation. These impacts have grown in degree and intensity, and their reach has expanded from local to regional, costs that have forced every major economy to pay increasing attention to environmental management.

By the 1960s, one manifestation of environmental degradation was the acid rain of Central and Western Europe and eastern North America, caused mainly by sulfur and nitrogen oxide emissions from large coal-fired power plants and from automobiles, with effects that at one point covered half a continent. Until the mid-1980s, acid rain was generally regarded as the most pressing environmental problem facing rich countries. A series of measures, generating electricity from low-sulfur coal and sulfur-free natural gas, using cleaner gasoline and diesel and more efficient automobile engines, and installing flue gas desulfurization at major pollution sources, not only stopped further acidification but had reversed it by 1990. Since 1990, the same problem has reappeared in East Asia.

The partial destruction of the ozone layer over Antarctica and the surrounding seas briefly became the leading environmental issue connected with energy use. As early as 1974, scientists had correctly predicted that the concentration of stratospheric ozone, which shields the Earth from excessive ultraviolet radiation, might decline.
The phenomenon was first measured over Antarctica only in 1985. Ozone loss is caused mainly by emissions of chlorofluorocarbons (CFCs, used chiefly as refrigerants). In 1987, nations around the world signed an effective international treaty, the Montreal Protocol, and the replacement of CFCs with less harmful compounds soon eased the concern. The threat to the ozone layer was only the first of several new issues with global consequences; since the late 1980s the critical one has been climate change: anthropogenic greenhouse gases are driving relatively rapid changes, above all tropospheric warming, ocean acidification, and sea level rise. The nature of greenhouse gases and their potential warming effect were already reasonably well understood by the end of the 19th century. The most important anthropogenic gas is carbon dioxide, the end product of the efficient combustion of all fossil and biomass fuels; the degradation of forests (especially in the humid tropics) and of grasslands has been the second-largest source of its emissions. Global anthropogenic carbon dioxide emissions have grown exponentially, keeping pace with rising fossil fuel consumption: from only 54 Mt of carbon per year in 1850 (to convert to carbon dioxide, multiply by 3.667), annual emissions rose to 534 Mt by 1900 and exceeded 9 Gt by 2010 (Boden and Andres 2015). In 1957, Hans Suess and Roger Revelle concluded that human beings are carrying out a large-scale geophysical experiment of a kind that could not have happened in the past and cannot be reproduced in the future: within a few centuries we are returning to the atmosphere and oceans the concentrated organic carbon stored in sedimentary rocks over hundreds of millions of years.
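The 3.667 conversion factor mentioned above is simply the ratio of the molar mass of CO2 to that of carbon; a minimal sketch of the arithmetic:

```python
# The carbon-to-CO2 conversion factor is the molar-mass ratio of
# CO2 (44 g/mol) to carbon (12 g/mol).
CO2_MOLAR_MASS = 44.0  # 12 (C) + 2 * 16 (O)
C_MOLAR_MASS = 12.0

factor = CO2_MOLAR_MASS / C_MOLAR_MASS
print(round(factor, 3))      # 3.667, as quoted in the text
print(round(9 * factor, 1))  # 9 Gt C (2010) -> 33.0 Gt CO2
```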
The first systematic measurements of rising carbon dioxide concentration were organized by Charles Keeling (1928-2005) and begun in 1958 near the summit of Mauna Loa in Hawaii and at the South Pole (Keeling 1998). The Mauna Loa record has become the global benchmark for rising tropospheric CO2 concentration: the annual mean was about 316 ppm (parts per million by volume) in 1959, exceeded 350 ppm in 1988, and reached 398.55 ppm in 2014. The volumes of other greenhouse gases released by human activities are far smaller than those of CO2, but because their molecules absorb much more infrared radiation (over a 20-year horizon, methane absorbs 86 times as much as CO2, and nitrous oxide 268 times as much), together they contribute about 35% of the anthropogenic enhancement of thermal radiation. The current consensus is that to avoid the worst consequences of global warming, the rise in average temperature should be limited to less than 2°C. That would require immediate, substantial cuts in the use of fossil fuels and a rapid transition to a non-carbon energy era, which, while not impossible, will be very difficult, given the dominance of fossil fuels in the global energy system and the enormous energy needs of low-income societies. Renewable generation can meet some large new energy demands, but for chemical feedstocks (ammonia, plastics) and for the smelting of iron ore there is still no economical large-scale alternative.
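The two annual means quoted above also imply an average rate of increase over the period; a quick illustrative calculation (a simple linear average, ignoring the record's year-to-year variation):

```python
# Average annual rise in the Mauna Loa CO2 record between the two
# yearly means quoted in the text (1959 and 2014).
ppm_1959, ppm_2014 = 316.0, 398.55
years = 2014 - 1959

rate = (ppm_2014 - ppm_1959) / years
print(round(rate, 2))  # ~1.5 ppm per year on average
```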