Avgas | Avgas (aviation gasoline, also known as aviation spirit in the UK) is an aviation fuel used in aircraft with spark-ignited internal combustion engines. Avgas is distinguished from conventional gasoline (petrol) used in motor vehicles, which is termed mogas (motor gasoline) in an aviation context. Unlike motor gasoline, which has been formulated without lead since the 1970s to allow the use of catalytic converters for pollution reduction, the most commonly used grades of avgas still contain tetraethyl lead, a toxic lead-containing additive used to aid in lubrication of the engine, increase octane rating, and prevent engine knocking (premature detonation). There are ongoing efforts to reduce or eliminate the use of lead in aviation gasoline.
Kerosene-based jet fuel is formulated to suit the requirements of turbine engines which have no octane requirement and operate over a much wider flight envelope than piston engines. Kerosene is also used by most diesel piston engines developed for aviation use, such as those by SMA Engines, Austro Engine, and Thielert.
== Properties ==
The main petroleum component used in blending avgas is alkylate, which is a mixture of various isooctanes. Some refineries also use reformate. All grades of avgas that meet CAN 2–3, 25-M82 have a density of 6.01 pounds per US gallon (720 g/L) at 15 °C (59 °F). (6 lb/U.S. gal is commonly used in America for weight and balance computation.) Density increases to 6.41 pounds per US gallon (768 g/L) at −40 °C (−40 °F), and decreases by about 0.1% per 1 °C (1.8 °F) increase in temperature.
Avgas has an emission coefficient (or factor) of 18.355 pounds per US gallon (2.1994 kg/L) of CO2, or about 3.07 units of weight of CO2 produced per unit weight of fuel used. Avgas is less volatile than automotive gasoline, with a Reid vapor pressure range of 5.5 to 7 psi versus 8 to 14 psi for automotive fuel. A minimum limit ensures adequate volatility for engine starting. The upper limits are related to atmospheric pressure at sea level (14.7 psi) for motor vehicles and to ambient pressure at 22,000 ft (6.25 psi) for aircraft. The lower avgas volatility reduces the chance of vapor lock in fuel lines at altitudes up to 22,000 ft.
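The density and emission figures above can be cross-checked with a short calculation. This is a rough sketch, not a spec formula: the 0.1%-per-°C rule is linear and slightly underestimates the quoted −40 °C density, and dividing the per-gallon CO2 factor by the nominal 15 °C density gives about 3.05, close to the commonly cited ~3.07 (which implies a slightly lower fuel density).

```python
# Rough avgas property arithmetic based on the figures quoted above.
# The linear temperature rule is an approximation, not a specification.

DENSITY_15C_LB_PER_GAL = 6.01      # density at 15 degC
CO2_LB_PER_GAL = 18.355            # CO2 emission factor per US gallon

def density_lb_per_gal(temp_c: float) -> float:
    """Approximate density using the ~0.1% per degC linear rule."""
    return DENSITY_15C_LB_PER_GAL * (1 - 0.001 * (temp_c - 15.0))

# CO2 produced per unit weight of fuel burned:
co2_per_lb_fuel = CO2_LB_PER_GAL / DENSITY_15C_LB_PER_GAL

print(round(density_lb_per_gal(-40), 2))   # ~6.34 (quoted value: 6.41)
print(round(co2_per_lb_fuel, 2))           # ~3.05 (quoted: about 3.07)
```

The gap between the linear estimate (6.34 lb/gal) and the quoted 6.41 lb/gal at −40 °C shows why the 0.1% rule should only be used for small temperature excursions.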
The particular mixtures in use today are the same as when they were first developed in the 1940s, and were used in airline and military aero engines with high levels of supercharging; notably the Rolls-Royce Merlin engine used in the Spitfire and Hurricane fighters, Mosquito fighter-bomber and Lancaster heavy bomber (the Merlin II and later versions required 100-octane fuel), as well as the liquid-cooled Allison V-1710 engine, and air-cooled radial engines from Pratt & Whitney, Wright, and other manufacturers on both sides of the Atlantic. The high octane ratings were traditionally achieved by the addition of tetraethyllead, a highly toxic substance that was phased out of automotive use in most countries in the late 20th century.
Leaded avgas is currently available in several grades with differing maximum lead concentrations. (Unleaded avgas is also available.) Because tetraethyllead is a toxic additive, the minimum amount needed to bring the fuel to the required octane rating is used; actual concentrations are often lower than the permissible maximum. Historically, many post-WWII developed, low-powered 4- and 6-cylinder piston aircraft engines were designed to use leaded fuels; an unleaded replacement fuel is being developed and certified for these engines. Some reciprocating-engine aircraft still require leaded fuels, but some do not, and some can burn unleaded gasoline if a special oil additive is used.
== Consumption ==
The annual US usage of avgas was 186 million US gallons (700,000 m3) in 2008, and was approximately 0.14% of the motor gasoline consumption. From 1983 through 2008, US usage of avgas declined consistently by approximately 7.5 million US gallons (28,000 m3) each year. As of 2024, the annual US usage of avgas was 180 million US gallons (680,000 m3), most of which contained lead, and 170,000 aircraft in the US used leaded avgas.
In Europe, avgas remains the most common piston-engine fuel. High prices have encouraged efforts to convert to diesel engines burning jet fuel, which is more readily available, less expensive, and has advantages for aviation use.
== Grades ==
Grades of avgas are identified by two numbers associated with its Motor Octane Number (MON). The first number indicates the octane rating of the fuel tested to "aviation lean" standards, which is similar to the anti-knock index or "pump rating" given to automotive gasoline in the US. The second number indicates the octane rating of the fuel tested to the "aviation rich" standard, which tries to simulate a supercharged condition with a rich mixture, elevated temperatures, and a high manifold pressure. For example, 100/130 avgas has an octane rating of 100 at the lean settings usually used for cruising and 130 at the rich settings used for take-off and other full-power conditions.
Antiknock agents such as tetraethyl lead (TEL) help to control detonation and provide lubrication. One gram of TEL contains 640.6 milligrams of lead.
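The 640.6 mg figure is simply the lead mass fraction of the tetraethyllead molecule, Pb(C2H5)4, and can be reproduced from standard atomic masses:

```python
# Lead mass fraction of tetraethyllead, Pb(C2H5)4,
# computed from standard atomic masses (g/mol).
PB, C, H = 207.2, 12.011, 1.008

ethyl = 2 * C + 5 * H              # one C2H5 group
tel_molar_mass = PB + 4 * ethyl    # ~323.4 g/mol
lead_fraction = PB / tel_molar_mass

print(round(lead_fraction * 1000, 1))  # mg of lead per gram of TEL -> 640.6
```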
=== 100LL (blue) ===
100LL (pronounced "one hundred low lead") may contain a maximum of one-half the tetraethyllead allowed in 100/130 (green) avgas.
Some of the lower-powered (100–150 horsepower or 75–112 kilowatts) aviation engines that were developed in the late 1990s are designed to run on unleaded fuel and on 100LL, an example being the Rotax 912.
=== Automotive gasoline ===
Automotive gasoline – known as mogas or autogas among aviators – that does not contain ethanol may be used in certified aircraft that have a Supplemental Type Certificate (STC) for automotive gasoline, as well as in experimental aircraft and ultralight aircraft. Some oxygenates other than ethanol are approved, but these STCs prohibit ethanol-blended gasolines. Gasoline containing ethanol is susceptible to phase separation, which is likely given the altitude and temperature changes light airplanes undergo in ordinary flight. Phase-separated fuel can flood the fuel system with water, which can cause in-flight engine failure, and the remaining fuel may no longer meet octane requirements because the ethanol is lost in the water-absorption process. Further, ethanol can attack materials used in aircraft that pre-date "gasohol" fuels. Most of the applicable aircraft have low-compression engines that were originally certified to run on 80/87 avgas and require only "regular" 87 anti-knock index automotive gasoline. Examples include the popular Cessna 172 Skyhawk and the Piper Cherokee with the 150 hp (110 kW) variant of the Lycoming O-320.
Some aircraft engines were originally certified using 91/96 avgas and have STCs available to run "premium" 91 anti-knock index (AKI) automotive gasoline. Examples include some Cherokees with the 160 hp (120 kW) Lycoming O-320 or 180 hp (130 kW) O-360, and the Cessna 152 with the O-235. The AKI rating of typical automotive fuel might not directly correspond to the 91/96 avgas used to certify engines, as motor vehicle pumps in the US use the "(R + M)/2" averaged octane rating system posted on gas station pumps. Sensitivity is roughly 8–10 points, meaning that a 91 AKI fuel might have a MON as low as 86. The extensive testing process required to obtain an STC for the engine/airframe combination helps ensure that, for eligible aircraft, 91 AKI fuel provides sufficient detonation margin under normal conditions.
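The relationship between AKI, RON, and MON described above can be sketched numerically. The sensitivity value passed in is an assumption within the 8–10 point range quoted, not a measured property of any particular fuel:

```python
# AKI is the (RON + MON) / 2 average posted on US pumps.
# Given an AKI and an assumed sensitivity (RON - MON, typically
# 8-10 points for automotive gasoline), estimate the MON.

def estimate_mon(aki: float, sensitivity: float) -> float:
    # AKI = (RON + MON) / 2 and RON = MON + sensitivity
    # => AKI = MON + sensitivity / 2
    # => MON = AKI - sensitivity / 2
    return aki - sensitivity / 2

print(estimate_mon(91, 10))  # 86.0 -- matches the "as low as 86" figure
print(estimate_mon(91, 8))   # 87.0
```

This is why a 91 AKI pump fuel cannot simply be assumed equivalent to a 91/96 avgas, whose first number is a lean-mixture MON-based rating.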
Automotive gasoline is not a fully viable replacement for avgas in many aircraft, because many high-performance and/or turbocharged airplane engines require 100 octane fuel and modifications are necessary in order to use lower-octane fuel.
Many general aviation aircraft engines were designed to run on 80/87 octane, roughly the standard (as unleaded fuel only, with the "{R+M}/2" 87 octane rating) for North American automobiles today. Direct conversions to run on automotive fuel are fairly common, by supplemental type certificate (STC). However, the alloys used in aviation engine construction are chosen for their durability and synergistic relationship with the protective features of lead, and engine wear in the valves is a potential problem on automotive gasoline conversions.
Fortunately, significant history of engines converted to mogas has shown that very few engine problems are caused by automotive gasoline. A larger problem stems from the higher and wider range of allowable vapor pressures found in automotive gasoline; this can pose some risk to aviation users if fuel system design considerations are not taken into account. Automotive gasoline can vaporize in fuel lines, causing a vapor lock (a bubble in the line) or fuel pump cavitation, thereby starving the engine of fuel. This does not constitute an insurmountable obstacle, but merely requires examination of the fuel system, ensuring adequate shielding from high temperatures and maintaining sufficient pressure in the fuel lines. This is the main reason why both the specific engine model as well as the aircraft in which it is installed must be supplementally certified for the conversion. A good example of this is the Piper Cherokee with high-compression 160 or 180 hp (120 or 130 kW) engines. Only later versions of the airframe with different engine cowling and exhaust arrangements are applicable for the automotive fuel STC, and even then require fuel-system modifications.
Vapor lock typically occurs in fuel systems where a mechanically-driven fuel pump mounted on the engine draws fuel from a tank mounted lower than the pump. The reduced pressure in the line can cause the more volatile components in automotive gasoline to flash into vapor, forming bubbles in the fuel line and interrupting fuel flow. If an electric boost pump is mounted in the fuel tank to push fuel toward the engine, as is common practice in fuel-injected automobiles, the fuel pressure in the lines is maintained above ambient pressure, preventing bubble formation. Likewise, if the fuel tank is mounted above the engine and fuel flows primarily due to gravity, as in a high-wing airplane, vapor lock cannot occur, using either aviation or automotive fuels. Fuel-injected engines in automobiles also usually have a "fuel return" line to send unused fuel back to the tank, which has the benefit of equalizing the fuel's temperature throughout the system, further reducing the chance of vapor lock developing.
In addition to vapor locking potential, automotive gasoline does not have the same quality tracking as aviation gasoline. To help solve this problem, the specification for an aviation fuel known as 82UL was developed as essentially automotive gasoline with additional quality tracking and restrictions on permissible additives. This fuel is not currently in production and no refiners have committed to producing it.
=== Gasohol ===
Rotax allows up to 10% ethanol (similar to E10 fuel for cars) in the fuel for Rotax 912 engines. Light sport aircraft that are specified by the manufacturer to tolerate alcohol in the fuel system can use up to 10% ethanol.
=== Fuel dyes ===
Fuel dyes aid ground crews and pilots in identifying and distinguishing fuel grades; most are specified by ASTM D910 or other standards. Dyes are required in some countries.
== Phase-out of leaded aviation gasoline ==
The 100LL phase-out has been called "one of modern GA's most pressing problems", because 70% of 100LL aviation fuel is used by the 30% of the aircraft in the general aviation fleet that cannot use any of the existing alternatives.
There are three fundamental issues in using unleaded fuels without serious modification of the airframe/engine:
The fuel must have a high enough octane rating (and meet other specifications) to replace leaded fuels,
The engine must be certified to use the fuel, and
The airframe must also be certified to use the fuel.
In February 2008, Teledyne Continental Motors (TCM) announced that it was very concerned about the future availability of 100LL and, as a result, would develop a line of diesel engines.
In a February 2008 interview, TCM president Rhett Ross indicated belief that the aviation industry will be "forced out" of using 100LL in the near future, leaving automotive fuel and jet fuel as the only alternatives. In May 2010, TCM announced that they had licensed development of the SMA SR305 diesel engine.
In November 2008, National Air Transportation Association president Jim Coyne indicated that the environmental impact of aviation is expected to be a big issue over the next few years and will result in the phasing out of 100LL because of its lead content.
By May 2012, the US Federal Aviation Administration (through the FAA Unleaded Avgas Transition rulemaking committee) had put together a plan in conjunction with industry to replace leaded avgas with an unleaded alternative within 11 years. Given the progress already made on 100SF and G100UL, the replacement time might be shorter than that 2023 estimate. Each candidate fuel must meet a checklist of 12 fuel-specification parameters and 4 distribution and storage parameters. The FAA requested a maximum of US$60M to fund the administration of the changeover. In July 2014, nine companies and consortiums submitted proposals to the Piston Aviation Fuels Initiative (PAFI) to assess fuels without tetraethyl lead, with phase one testing performed at the William J. Hughes Technical Center toward an FAA-approved industry replacement by 2018.
In July 2021, the first commercially-produced unleaded avgas, GAMI's G100UL, was approved by the Federal Aviation Administration through a Supplemental Type Certificate.
Lycoming Engines provides a list of engines and fuels that are compatible with unleaded fuel. However, all of their engines require that an oil additive be used when unleaded fuel is used: "When using the unleaded fuels identified in Table 1, Lycoming oil additive P/N LW-16702, or an equivalent finished product such as Aeroshell 15W-50, must be used." Lycoming also notes that the octane rating of the fuel used must also meet the requirements stated in the fuel specification, otherwise engine damage may occur due to detonation.
Prior to 2022, Teledyne Continental Motors (TCM) indicated that leaded avgas is required in their engines, and not unleaded auto fuels: "Current aircraft engines feature valve gear components which are designed for compatibility with the leaded ASTM D910 fuels. In such fuels, the lead acts as a lubricant, coating the contact areas between the valve, guide, and seat. The use of unleaded auto fuels with engines designed for leaded fuels can result in excessive exhaust valve seat wear due to the lack of lead with cylinder performance deteriorating to unacceptable levels in under 10 hours."
In 2022, TCM changed its policy. They have announced a formal application to the FAA to approve the use of UL91 and UL94 in selected engines, stating that "Continental considers 91UL and 94UL fuel as a transitional step in a long-term strategy to reach a more sustainable aviation".
== New unleaded fuel grades ==
=== 91UL (or UL91) ===
Hjelmco Oil first introduced unleaded avgas grades in Europe in 2003, following its success with 80UL. This grade is manufactured to meet ASTM D7547. Many common Lycoming engines are certified to run on it, and Cessna has approved its use in a large number of its piston fleet. This fuel may also be used in any aircraft in Europe or the United Kingdom whose engine is certified to use it, whether or not the airframe is separately certified to do so.
=== 93UL (Ethanol-free 93AKI automotive gasoline) ===
The firm Airworthy AutoGas tested an ethanol-free 93 anti-knock index (AKI) premium auto gas on a Lycoming O-360-A4M in 2013. The fuel is certified under Lycoming Service Instruction 1070 and ASTM D4814.
=== UL94 (formerly 94UL) ===
Unleaded 94 Motor octane fuel (UL94) is essentially 100LL without the lead.
In March 2009, Teledyne Continental Motors (TCM) announced they had tested a 94UL fuel that might be the best replacement for 100LL. This 94UL meets the avgas specification including vapor pressure but has not been completely tested for detonation qualities in all Continental engines or under all conditions. Flight testing has been conducted in an IO-550-B powering a Beechcraft Bonanza and ground testing in Continental O-200, 240, O-470, and O-520 engines. In May 2010, TCM indicated that despite industry skepticism, they are proceeding with 94UL and that certification was expected in mid-2013.
In June 2010, Lycoming Engines indicated their opposition to 94UL. Company general manager Michael Kraft stated that aircraft owners do not realize how much performance would be lost with 94UL and characterized the decision to pursue 94UL as a mistake that could cost the aviation industry billions in lost business. Lycoming believes the industry should be pursuing 100UL instead. The Lycoming position is supported by aircraft type clubs representing owners of aircraft that would be unable to run on lower octane fuel. In June 2010, clubs such as the American Bonanza Society, the Malibu Mirage Owners and Pilots Association, and the Cirrus Owners and Pilots Association collectively formed the Clean 100 Octane Coalition to represent them on this issue and push for unleaded 100 octane avgas.
In November 2015, UL94 was added as a secondary grade of unleaded aviation gasoline to ASTM D7547, which is the specification that governs UL91 unleaded avgas. UL91 is currently being sold in Europe. UL94 meets all of the same specification property limits as 100LL with the exception of a lower motor octane number (94.0 minimum for UL94 vs. 99.6 minimum for 100LL) and a decreased maximum lead content. UL94 is an unleaded fuel, but as with all ASTM International unleaded gasoline specifications, a de minimis amount of unintentionally added lead is permitted.
Since May 2016, UL94, now a product of Swift Fuels, is available for sale at dozens of airports in the United States. Swift Fuels has an agreement for distribution in Europe.
UL94 is not intended to be a full replacement for 100LL, but rather is designed to be a drop-in replacement for aircraft with lower-octane-rated engines, such as those that are approved for operation on Grade 80 avgas (or lower), UL91, or mogas. It is estimated that up to 65% of the fleet of current general aviation piston-engine-powered aircraft can operate on UL94 with no modifications to either the engine or airframe. Some aircraft, however, do require an FAA-approved Supplemental Type Certificate (STC) to be purchased to allow operation on UL94.
UL94 has a minimum Motor octane number (MON, which is the octane rating employed for grading aviation gasoline) of 94.0. 100LL has a minimum MON of 99.6.
AKI is the octane rating used to grade all U.S. automotive gasoline (typical values at the pump can include 87, 89, 91, and 93), and also the 93UL fuel from Airworthy AutoGas.
The minimum AKI of UL94, as sold by Swift Fuels, is 98.0.
Concurrent with the addition of UL94 to ASTM D7547, the FAA published Special Airworthiness Information Bulletin (SAIB) HQ-16-05, which states that "UL94 meets the operating limitations of aircraft and engines approved to operate with grade UL91 avgas," meaning that "Grade UL94 avgas that meets specification D7547 is acceptable to use on those aircraft and engines that are approved to operate with ... grade UL91 avgas that meets specification D7547." In August 2016, the FAA revised SAIB HQ-16-05 to include similar wording regarding the acceptability of using UL94 in aircraft and engines that are approved to operate with avgas that has a minimum Motor octane rating of 80 or lower, including Grade 80/87.
The publication of the SAIB, especially the August 2016 revision, eliminated the need for many of the UL94 STCs being sold by Swift Fuels, as the majority of the aircraft on the STC's Approved Model List are type-certified to use 80-octane or lower avgas.
On April 6, 2017, Lycoming Engines published Service Instruction 1070V, which adds UL94 as an approved grade of fuel for dozens of engine models, 60% of which are carbureted engines. Engines with displacements of 235, 320, 360, and 540 cubic inches make up almost 90% of the models approved for UL94.
=== UL102 (formerly 100SF Swift Fuel) ===
Swift Fuels, LLC, has attained approval to produce fuel for testing at its pilot plant in Indiana. Composed of approximately 85% mesitylene and 15% isopentane, the fuel is reportedly scheduled for extensive testing by the FAA to receive certification under the new ASTM D7719 guideline for unleaded 100LL replacement fuels. The company eventually intends to produce the fuel from renewable biomass feedstocks, and aims to produce something competitive in price with 100LL and currently available alternative fuels. Swift Fuels has suggested that the fuel, formerly referred to as 100SF, will be available for "high performance piston-powered aircraft" before 2020.
John and Mary-Louise Rusek founded Swift Enterprises in 2001 to develop renewable fuels and hydrogen fuel cells. They began testing "Swift 142" in 2006 and patented several alternatives for non-alcohol based fuels which can be derived from biomass fermentation. Over the next several years, the company sought to build a pilot plant to produce enough fuel for larger-scale testing and submitted fuel to the FAA for testing.
In 2008, an article by technology writer and aviation enthusiast Robert X. Cringely attracted popular attention to the fuel, as also did a cross-country Swift-Fueled flight by the AOPA's Dave Hirschman. Swift Enterprises' claims that the fuel could eventually be manufactured much more cheaply than 100LL have been debated in the aviation press.
The FAA found Swift Fuel to have a motor octane number of 104.4, 96.3% of the energy per unit mass and 113% of the energy per unit volume of 100LL, and to meet most of the ASTM D910 standard for leaded aviation fuel. Following tests in two Lycoming engines, the FAA concluded that it performs better than 100LL in detonation testing and would provide a fuel saving of 8% per unit of volume, though it weighs 1 pound per US gallon (120 g/L) more than 100LL. GC–FID testing showed the fuel to be made primarily of two components, one about 85% by weight and the other about 14% by weight. Soon afterward, AVweb reported that Continental had begun the process of certifying several of its engines to use the new fuel.
From 2009 through 2011, 100SF was approved as a test fuel by ASTM International, satisfactorily tested by the FAA, tested by Purdue University, and approved under ASTM specification D7719 for high-octane Grade UL102, allowing the company to test more economically in non-experimental aircraft.
In 2012, Swift Fuels LLC was formed to bring in oil and gas industry experience, scale up production and bring the fuel to market. By November 2013, the company had built its pilot plant and received approval to produce fuel in it. Its most recent patent, approved in 2013, describes methods by which the fuel can be produced from fermentable biomass.
The FAA scheduled UL102 for 2 years of phase 2 testing in its PAFI initiative beginning in the summer of 2016.
=== G100UL ===
In February 2010, General Aviation Modifications Inc. (GAMI) announced that it was in the process of developing a 100LL replacement to be called G100UL ("unleaded"). This fuel is made by blending existing refinery products and yields detonation margins comparable to 100LL. The new fuel is slightly more dense than 100LL, but has a 3.5% higher thermodynamic output. G100UL is compatible with 100LL and can be mixed with it in aircraft tanks for use.
In demonstrations held in July 2010, G100UL performed better than 100LL that just meets the minimum specification and performed as well as average production 100LL.
G100UL was approved by the Federal Aviation Administration by the issuance of a Supplemental Type Certificate at AirVenture in July 2021. The STC was initially only applicable to Lycoming-powered models of the Cessna 172. The company indicated that the retail cost was expected to be 0.60–0.85 US dollars per US gallon higher than 100LL. This was later revised to 1.00 US dollar per US gallon.
In 2022, Paul Bertorelli of AVweb reported that the FAA was dragging its feet on broadly certifying G100UL, delaying approval of the fuel for more engines and spending over $80 million on EAGLE to re-start a search for an unleaded fuel when G100UL had been under evaluation for over 10 years.
In September 2022, in a surprise announcement, the FAA approved an STC for the use of the fuel for all piston-engined aircraft and engine combinations. In February 2023, GAMI began selling supplemental type certificates to allow aircraft owners to use the fuel when it becomes available. In April 2024, GAMI announced that 1 million gallons of G100UL had been produced. Fuel availability in the US was forecast for airports in California, Washington and Oregon by the middle of 2024 and the rest of the country by 2026.
In December 2024, shortly after G100UL was made available at selected airports in California, several concerns regarding material compatibility arose, as users reported fuel leaks, paint staining, and paint stripping.
=== Shell Unleaded 100-Octane Fuel ===
In December 2013, Shell Oil announced that it had developed an unleaded 100-octane fuel and would submit it for FAA testing, with certification expected within two to three years. The fuel is alkylate-based with an additive package of aromatics. No information has yet been published regarding its performance, producibility, or price. Industry analysts have indicated that it will likely cost as much as or more than existing 100LL.
=== UL100E ===
In 2018, LyondellBasell and VP Racing Fuels, an established motorsport fuel manufacturer, announced their intention to participate in developing an unleaded fuel for piston-powered aircraft. As of December 2024, the fuel had reached Phase 4 of testing, according to the FAA.
== Environmental regulation ==
TEL, found in leaded avgas, and its combustion products are potent neurotoxins that have been shown in scientific research to interfere with brain development in children. Children in residences or childcare facilities close to airports with moderate to high piston-engine aircraft traffic are at especially high risk of elevated blood lead levels. The United States Environmental Protection Agency (EPA) has noted that exposure to even very low levels of lead contamination has been conclusively linked to loss of IQ in children's brain-function tests, providing strong motivation to eliminate lead and its compounds from the environment.
While lead concentrations in the air have declined, scientific studies have demonstrated that children's neurological development is harmed by much lower levels of lead exposure than previously understood. Low level lead exposure has been clearly linked to loss of IQ in performance testing. Even an average IQ loss of 1–2 points in children has a meaningful impact for the nation as a whole, as it would result in an increase in children classified as mentally challenged, as well as a proportional decrease in the number of children considered "gifted".
On November 16, 2007, the environmental group Friends of the Earth formally petitioned the EPA, asking them to regulate leaded avgas. The EPA responded with a notice of petition for rulemaking.
The notice of petition stated:
Friends of the Earth has filed a petition with EPA, requesting that EPA find pursuant to section 231 of the Clean Air Act that lead emissions from general aviation aircraft cause or contribute to air pollution that may reasonably be anticipated to endanger public health or welfare and that EPA propose emissions standards for lead from general aviation aircraft. Alternatively, Friends of the Earth requests that EPA commence a study and investigation of the health and environmental impacts of lead emissions from general aviation aircraft, if EPA believes that insufficient information exists to make such a finding. The petition submitted by Friends of the Earth explains their view that lead emissions from general aviation aircraft endanger the public health and welfare, creating a duty for the EPA to propose emission standards.
The public comment period on this petition closed on March 17, 2008.
Under a federal court order to set a new standard by October 15, 2008, the EPA cut the acceptable limits for atmospheric lead from the previous standard of 1.5 μg/m3 to 0.15 μg/m3. This was the first change to the standard since 1978 and represents an order of magnitude reduction over previous levels. The new standard requires the 16,000 remaining USA sources of lead, which include lead smelting, airplane fuels, military installations, mining and metal smelting, iron and steel manufacturing, industrial boilers and process heaters, hazardous waste incineration, and production of batteries, to reduce their emissions by October 2011.
The EPA's own studies have shown that to prevent a measurable decrease in IQ for children deemed most vulnerable, the standard needs to be set much lower, to 0.02 μg/m3. The EPA identified avgas as one of the most "significant sources of lead".
At an EPA public consultation held in June 2008 on the new standards, Andy Cebula, the Aircraft Owners and Pilots Association's executive vice president of government affairs, stated that general aviation plays a valuable role in the USA economy and any changes in lead standards that would change the current composition of avgas would have a "direct impact on the safety of flight and the very future of light aircraft in this country".
In December 2008, AOPA filed formal comments to the new EPA regulations. AOPA has asked the EPA to account for the cost and the safety issues involved with removing lead from avgas. They cited that the aviation sector employs more than 1.3 million people in the US and has an economic direct and indirect effect that "exceeds $150 billion annually". AOPA interprets the new regulations as not affecting general aviation as they are currently written.
In April 2010, the EPA published an Advance Notice of Proposed Rulemaking in the US Federal Register. The EPA indicated: "This action will describe the lead inventory related to use of leaded avgas, air quality and exposure information, additional information the Agency is collecting related to the impact of lead emissions from piston-engine aircraft on air quality and will request comments on this information."
Despite assertions in the media that leaded avgas would be eliminated in the US by 2017 at the latest, the EPA confirmed in July 2010 that there is no phase-out date and that setting one would be an FAA responsibility, as the EPA has no authority over avgas. The FAA administrator stated that regulating lead in avgas is an EPA responsibility, resulting in widespread criticism of both organizations for causing confusion and delaying solutions.
In April 2011 at Sun 'n Fun, Pete Bunce, head of the General Aviation Manufacturers Association (GAMA), and Craig Fuller, president and CEO of the Aircraft Owners and Pilots Association, indicated that they both are confident that leaded avgas will not be eliminated until a suitable replacement is in place. "There is no reason to believe 100 low-lead will become unavailable in the foreseeable future," Fuller stated.
Final results from the EPA's lead-modeling study at the Santa Monica Airport show off-airport levels below the current 150 ng/m3 standard and a possible future 20 ng/m3 level. Fifteen of 17 airports monitored during a year-long EPA study in the US had lead emissions well below the current National Ambient Air Quality Standard (NAAQS) for lead.
== Other uses ==
Avgas is occasionally used in amateur auto racing cars, as its octane rating is higher than that of automotive gasoline, allowing the engines to run at higher compression ratios.
== See also ==
Relative CO2 emission from various fuels
== Notes ==
== References ==
== External links ==
AVGAS |
Aviation (disambiguation) | Aviation refers to activities around mechanical flight.
Aviation may also refer to:
== Music ==
Aviation (album), a 2014 alt rock album and song by Semi Precious Weapons
"Aviation" (song), 2016, by The Last Shadow Puppets
"Aviation", a song on Leo Sayer's 1984 album Have You Ever Been in Love
Aviation, a 1976 instrumental album by R. Stevie Moore
== Other uses ==
Aviation (painting), a 1934 painting by Rufino Tamayo
Aviation (cocktail), an alcoholic drink
Aviation Week & Space Technology (formerly Aviation), a periodical
"Aviation", a Series A episode of the television series QI (2003) |
Aviation accidents and incidents | An aviation accident is an event during aircraft operation that results in serious injury, death, or significant destruction. An aviation incident is any operating event that compromises safety but does not escalate into an aviation accident. Preventing both accidents and incidents is the primary goal of aviation safety.
One of the earliest recorded aviation accidents occurred on May 10, 1785, when a hot air balloon crashed in Tullamore, County Offaly, Ireland. The resulting fire seriously damaged the town, destroying over 130 homes. The first accident involving a powered aircraft occurred on September 17, 1908, when a Wright Model A crashed at Fort Myer, Virginia, USA. The pilot and co-inventor, Orville Wright, was injured, and the passenger, Signal Corps Lieutenant Thomas Selfridge, was killed.
== Definitions ==
According to Annex 13 of the Convention on International Civil Aviation, an aviation accident is an occurrence associated with the operation of an aircraft, which takes place from the time any person boards the aircraft with the intention of flight until all such persons have disembarked, and in which (a) a person is fatally or seriously injured, (b) the aircraft sustains significant damage or structural failure, or (c) the aircraft goes missing or becomes completely inaccessible. Annex 13 defines an aviation incident as an occurrence, other than an accident, associated with the operation of an aircraft that affects or could affect the safety of operation.
A hull loss occurs if an aircraft is damaged beyond repair, is lost, or becomes completely inaccessible.
== History ==
The first aircraft accident in which 200 or more people died occurred on March 3, 1974, when 346 died in the crash of Turkish Airlines Flight 981. As of May 2024, there have been a total of 33 aviation accidents in which 200 or more people have died.
The period from 1958 to 1968 saw tremendous growth in aviation. Improvements in aviation safety and accident investigation procedures were rapidly advancing. In 1963, the Civil Aeronautics Board, under the leadership of then Deputy Director Bobbie R. Allen, established the National Aircraft Accident Investigation School in Oklahoma City.
The ICAO's third accident investigation division meeting, held in Montreal, Canada, in January 1965, laid the foundation for accident investigations throughout the world. The proposals were presented by the Director of the Civil Aeronautics Board Bureau of Safety, Bobbie R. Allen, who headed the U.S. delegation. The U.S. formally adopted the proposals at the White House on December 1, 1965.
The top 10 countries with the highest number of fatal civil airliner accidents from 1945 to 2021 are the United States, Russia, Canada, Brazil, Colombia, United Kingdom, France, Indonesia, Mexico, and India. The United Kingdom is noted to have the highest number of air crashes in Europe, with a total of 110 air crashes within the time period, and Indonesia is the highest in Asia at 104, followed by India at 95.
The highest death toll aboard a single aircraft is the 520 fatalities of the 1985 Japan Air Lines Flight 123 accident. The largest loss of life in a single aviation accident is the 583 fatalities of the 1977 Tenerife airport disaster, in which two Boeing 747s collided. The largest loss of life overall in a collective incident is the 2,996 fatalities in the coordinated terrorist destruction of airplanes and occupied buildings in the September 11 attacks of 2001. The first plane to be hijacked and crashed in that attack, American Airlines Flight 11, was alone responsible for an estimated 1,700 fatalities, making it the single deadliest aviation disaster in history.
=== September 11 attacks ===
The deadliest aviation-related disaster, in terms of fatalities both on board the aircraft and casualties on the ground, was the destruction of the World Trade Center in New York City on September 11, 2001. On that morning, four commercial jet airliners on transcontinental flights from East Coast airports to California were hijacked after takeoff. The four hijacked aircraft were subsequently crashed in a series of four coordinated suicide attacks against major American landmarks by 19 Islamist terrorists affiliated with Al-Qaeda. American Airlines Flight 11 and United Airlines Flight 175, both regularly scheduled domestic transcontinental flights from Boston to Los Angeles, were hijacked by five men each, with the assigned pilot hijacker taking control of the flight, before being intentionally crashed into the North and South Towers of the World Trade Center, respectively, destroying both buildings in less than two hours. The World Trade Center crashes killed 2,753. As the two planes carried a combined total of 157 occupants, the vast majority of fatalities were occupants of the two towers and the emergency personnel responding to the disaster. In addition, 184 were killed by the impact of American Airlines Flight 77, which crashed into the Pentagon in Arlington County, Virginia, causing severe damage and partial destruction to the building's west side. The crash of United Airlines Flight 93 into a field in Somerset County, Pennsylvania, which occurred as passengers attempted to retake control of the aircraft from the hijackers, killed all 40 passengers and crew aboard. This brought the total number of casualties of the September 11 attacks to 2,996 (including the 19 terrorist hijackers). As deliberate terrorist acts, the 9/11 crashes were not classified as accidents but as mass murder. The events were treated by the member nations of NATO as an act of war and terrorism.
The war on terror was subsequently launched in response to the attacks, with NATO invoking its collective defense clause; it eventually led to the death of Al-Qaeda leader Osama bin Laden, who had orchestrated the 9/11 attacks.
=== Tenerife disaster ===
The Tenerife airport disaster on March 27, 1977, remains the accident with the highest number of airliner passenger fatalities. 583 people died when a KLM Boeing 747 attempted to take off and collided with a taxiing Pan Am 747 at Los Rodeos Airport on the Canary Island of Tenerife, Spain. All 234 passengers and 14 crew of the KLM aircraft died, along with 335 of the 396 passengers and crew of the Pan Am aircraft. Pilot error was the primary cause; the KLM captain mistakenly believed he had received air traffic control clearance and initiated takeoff. Other contributing factors were a terrorist incident at Gran Canaria Airport that had caused many flights to be diverted to Los Rodeos, a small airport not well equipped to handle aircraft of such size, dense fog, poor radio phraseology, and controller distraction. The KLM flight crew could not see the Pan Am aircraft on the runway until immediately before the collision. The accident had a lasting influence on the industry, particularly in the area of communication. An increased emphasis was placed on using standardized phraseology in air traffic control (ATC) communication by controllers and pilots alike. "Cockpit Resource Management" has also been incorporated into flight crew training. The captain is no longer considered infallible, and combined crew input is encouraged during aircraft operations.
=== Japan Air Lines Flight 123 ===
The crash of Japan Air Lines Flight 123 on August 12, 1985, has the highest number of fatalities for any single-aircraft accident: 520 people died aboard a Boeing 747. The aircraft experienced explosive decompression due to an improperly repaired aft pressure bulkhead, leading to the destruction of most of its vertical stabilizer and severing all hydraulic lines, rendering the 747 nearly uncontrollable. The pilots kept the plane flying for 32 minutes after the mechanical failure before it crashed into a mountain. All 15 crew members and 505 of the 509 passengers aboard died. Japanese military personnel inaccurately assumed, during a helicopter flyover of the impact site, that there were no survivors, and rescue operations were delayed until the following morning. Medical providers involved in rescue and analysis operations determined that several passengers likely survived the impact and probably would have survived the accident had rescue operations not been delayed. Four passengers survived the accident, meaning they were still alive when discharged from the hospital.
=== Other crashes with death tolls of 200 or more ===
== Safety ==
In over one hundred years of implementation, aviation safety has improved considerably. In modern times, two major manufacturers still produce heavy passenger aircraft for the civilian market: Boeing in the United States, and the European company Airbus. Both of these manufacturers place a huge emphasis on the use of aviation safety equipment, now a billion-dollar industry in its own right; safety is a key selling point for these companies, as they recognize that a poor safety record in the aviation industry is a threat to corporate survival.
Some major safety devices now required in commercial aircraft are:
Evacuation slides, to aid rapid passenger exit from an aircraft in an emergency situation
Advanced avionics, incorporating computerized auto-recovery and alert systems
Turbine engines with improved durability and failure containment mechanisms
Landing gear that can be lowered even after loss of power and hydraulics
Measured per passenger-distance traveled, air travel is the safest form of transportation available; these are the figures the air industry cites when quoting air safety statistics. A typical statement, e.g., by the BBC: "UK airline operations are among the safest anywhere. When compared against all other modes of transport on a fatality per mile basis, air transport is the safest – six times safer than travelling by car and twice as safe as rail."
When measured by fatalities per person transported, however, buses are the safest form of transportation, and the number of air travel fatalities per person is surpassed only by bicycles and motorcycles. This statistic is used by the insurance industry when calculating insurance rates for air travel.
For every billion kilometers traveled, trains have a fatality rate that is 12 times higher than that of air travel, and the fatality rate for automobiles is 62 times greater than for air travel. By contrast, for every billion journeys taken, buses are the safest form of transportation; using this measure, air travel is three times more dangerous than car transportation, and almost 30 times more dangerous than travelling by bus.
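The dependence of the "safest mode" ranking on the chosen metric can be sketched briefly. The relative multipliers below are the ones quoted in this section, normalized so that air travel equals 1.0 in each metric; they are illustrative ratios, not absolute fatality rates:

```python
# Relative fatality rates quoted above, normalized so that air travel = 1.0.
# Per billion kilometers traveled: trains are 12x air, cars are 62x air.
per_distance = {"air": 1.0, "train": 12.0, "car": 62.0}

# Per billion journeys taken: air is 3x more dangerous than car travel
# and almost 30x more dangerous than bus travel.
per_journey = {"air": 1.0, "car": 1.0 / 3, "bus": 1.0 / 30}

# The "safest" mode flips depending on which normalization is used.
safest_by_distance = min(per_distance, key=per_distance.get)  # "air"
safest_by_journey = min(per_journey, key=per_journey.get)     # "bus"
print(safest_by_distance, safest_by_journey)
```

The point is purely illustrative: the same underlying fatality data can crown a different "safest" mode depending on whether exposure is measured in distance or in trips.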
A 2007 study by Popular Mechanics magazine found that passengers sitting at the back of an aeroplane are 40% more likely to survive a crash than those sitting at the front. The article quotes Boeing, the FAA, and a website on aircraft safety, all of which claim that there is no "safest" seat. The study examined 20 crashes, not taking into account the developments in safety after those accidents. However, a flight data recorder is usually mounted in the aircraft's empennage (tail section) where it is more likely to survive a severe crash.
Between 1983 and 2000, the survival rate for people in U.S. plane crashes was greater than 95 percent.
=== Global Aeronautical Distress and Safety System ===
In an effort to prevent incidents such as the disappearance of Malaysia Airlines Flight MH370, a new standard has been issued requiring all commercial aircraft to report their position every 15 minutes to air traffic controllers regardless of the country of origin. Introduced in 2016 by the ICAO, the regulation has no initial requirement for any new aircraft equipment to be fitted. The standard is part of a long-term plan, called the Global Aeronautical Distress and Safety System (GADSS), which will require new aircraft to be equipped with data broadcast systems that are in constant contact with air traffic controllers. The GADSS is similar to the Global Maritime Distress and Safety System (GMDSS) used for maritime safety.
=== Aviation Safety Reporting System ===
The Aviation Safety Reporting System (ASRS) collects voluntarily submitted aviation safety incident/situation reports from pilots, controllers and others. The ASRS uses reports to identify system deficiencies, issue alert messages, and produce two publications, CALLBACK, and ASRS Directline. The collected information is made available to the public, and is used by the FAA, NASA and other organizations working in research and flight safety.
== Statistics ==
=== Bureau of Aircraft Accidents Archives (B3A) ===
The Bureau of Aircraft Accidents Archives (B3A), formerly known as the Aircraft Crashes Record Office (ACRO), a non-government organization based in Geneva, Switzerland, compiles statistics on aviation accidents of aircraft capable of carrying more than six passengers, excluding helicopters, balloons, and combat aircraft. ACRO only considers crashes in which the aircraft has suffered such damage that it is removed from service, which further reduces the statistics for incidents and fatalities compared to some other data. The total fatalities due to aviation accidents since 1970 are 83,772. The total number of incidents is 11,164.
According to ACRO, recent years have been considerably safer for aviation, with fewer than 170 incidents every year between 2009 and 2017, compared to as many as 226 as recently as 1998.
The annual fatalities figure was below 1,000 in ten of the fourteen years between 2007 and 2020; 2017 saw the lowest number of fatalities, 399, since the end of World War II.
2014 included the disappearance of flight MH370 over the Indian Ocean and the shootdown of flight MH17 as part of the war in Donbas. The total number of fatalities in 2014 was 869 more than in 2013.
Deaths and incidents in the world per year according to ACRO and Bureau of Aircraft Accident Archives data, as of January 1, 2019:
(Data have significantly changed since November 2015 after a major upgrade to the death rate and crash rate web pages. This may reflect a change between a static and dynamic web page, where data were made to be automatically updated based on the incidents in their archives.)
=== Annual Aviation Safety Review (EASA) ===
The European Aviation Safety Agency (EASA) is tasked by Article 15(4) of Regulation (EC) No 216/2008 of the European Parliament and of the Council of February 20, 2008, to provide an annual review of aviation safety.
The Annual Safety Review presents statistics on European and worldwide civil aviation safety. Statistics are grouped according to type of operation, for instance, commercial air transport, and aircraft category, such as aeroplanes, helicopters, gliders, etc.
The Agency has access to accident and statistical information collected by the International Civil Aviation Organization (ICAO). Under ICAO Annex 13, on Aircraft Accident and Incident Investigation, states are required to report to ICAO information on accidents and serious incidents involving aircraft with a maximum certificated take-off mass (MTOM) over 2,250 kg. Therefore, most statistics in this review concern aircraft above this mass. In addition to the ICAO data, a request was made to the EASA Member States to obtain light aircraft accident data. Furthermore, data on the operation of aircraft for commercial air transport were obtained from both ICAO and the NLR Air Transport Safety Institute.
== Investigation ==
Annex 13 of the Chicago Convention provides the international Standards and Recommended Practices that form the basis for air accident and incident investigations by signatory countries, as well as for reporting and preventive measures. Investigations under the Annex are specifically focused on preventing accidents, rather than determining liability.
=== Australia ===
In Australia, the Australian Transport Safety Bureau is the federal government body responsible for investigating transport-related accidents and incidents, covering air, sea, and rail travel. Formerly an agency of the Department of Infrastructure, Transport, Regional Development and Local Government, it became a stand-alone agency in 2010 in order to preserve its independence.
=== Brazil ===
In Brazil, the Aeronautical Accidents Investigation and Prevention Center (CENIPA) is a Military Organization of the Brazilian Air Force (FAB) responsible for the prevention of aircraft accidents and the investigation of civil and military aviation occurrences. Formed in 1971, and in accordance with international standards, CENIPA represented a new philosophy: investigations are conducted with the sole purpose of promoting the "prevention of aeronautical accidents".
=== Canada ===
In Canada, the Transportation Safety Board of Canada (TSB), is an independent agency responsible for the advancement of transportation safety through the investigation and reporting of accident and incident occurrences in all prevalent Canadian modes of transportation – marine, air, rail and pipeline.
=== China ===
In China, the Civil Aviation Administration of China (CAAC) is solely responsible for all air investigations and safety inside the country after the split from the former CAAC Airlines.
=== Ethiopia ===
In Ethiopia, the Civil Aviation Accident Prevention and Investigation Bureau of the Ethiopian Civil Aviation Authority (ECAA), which is an agency of the Ministry of Transport and Communications, conducts aircraft accident investigations in Ethiopia or involving Ethiopian aircraft.
=== France ===
In France, the agency responsible for investigation of civilian air crashes is the Bureau d'Enquêtes et d'Analyses pour la Sécurité de l'Aviation Civile (BEA). Its purpose is to establish the circumstances and causes of the accident and to make recommendations for their future avoidance.
=== Germany ===
In Germany, the agency for investigating air crashes is the Federal Bureau of Aircraft Accidents Investigation (BFU). It is an agency of the Federal Ministry of Transport and Digital Infrastructure. The focus of the BFU is to improve safety by determining the causes of accidents and serious incidents and making safety recommendations to prevent recurrence.
=== Hong Kong ===
The Air Accident Investigation Authority (AAIA) is responsible for investigating civil aviation accidents in Hong Kong, as well as those in other territories involving a Hong Kong-registered aircraft. It is led by Darren Straker, Chief Inspector of Accidents, and headquartered at Hong Kong International Airport. AAIA was established in 2018 in response to an ICAO directive instructing that member states maintain air accident investigation authorities that are independent of civil aviation authorities and related entities. Prior to 2018, accident investigation duties were held by the Civil Aviation Department's Flight Standards & Airworthiness Division and Accident Investigation Division.
=== India ===
Until May 30, 2012, the Directorate General of Civil Aviation investigated incidents involving aircraft. Since then, the Aircraft Accident Investigation Bureau has taken over investigation responsibilities.
=== Indonesia ===
In Indonesia, the National Transportation Safety Committee (NTSC; Indonesian: Komite Nasional Keselamatan Transportasi, KNKT) is responsible for the investigation of incidents and accidents, including air accidents. Its aim is the improvement of transportation safety, not just aviation, in Indonesia.
=== Italy ===
Created in 1999 in Italy, the Agenzia Nazionale per la Sicurezza del Volo (ANSV), has two main tasks: conducting technical investigations for civil aviation aircraft accidents and incidents, while issuing safety recommendations as appropriate; and conducting studies and surveys aimed at increasing flight safety. The organization is also responsible for establishing and maintaining the "voluntary reporting system". Although not under the supervision of the Ministry of Infrastructure and Transport, the ANSV is a public authority under the oversight of the Presidency of the Council of Ministers of Italy.
=== Japan ===
The Japan Transport Safety Board (JTSB) investigates aviation accidents and incidents. The Aircraft Accident Investigation Commission investigated aviation accidents and incidents in Japan until October 1, 2001, when it was replaced by the Aircraft and Railway Accidents Investigation Commission (ARAIC), which performed this function until October 1, 2008, when it merged into the JTSB.
=== Malaysia ===
Established in 2016, the Air Accident Investigation Bureau (AAIB) Malaysia is the main investigation body for aircraft accidents and incidents. It is separate from the Civil Aviation Authority of Malaysia (CAAM), the national aviation authority, and the Malaysian Aviation Commission (MAVCOM), which oversees the aviation economy. The AAIB operates from the Ministry of Transport headquarters in Putrajaya, and its black box laboratory is situated at STRIDE, the Ministry of Defence's research institute. AAIB Malaysia is staffed by civilians, a seconded Royal Malaysian Air Force senior officer, and a pool of investigators from the Malaysia Institute of Aviation Technology.
=== Mexico ===
In Mexico, the Directorate General of Civil Aviation (DGAC) investigates aviation accidents.
=== Netherlands ===
In the Netherlands, the Dutch Safety Board (Onderzoeksraad voor Veiligheid) is responsible for the investigation of incidents and accidents, including air accidents. Its aim is the improvement of safety in the Netherlands. Its main focus is on those situations in which civilians are dependent on the government, companies or organizations for their safety. The Board solely investigates when incidents or accidents occur and aims to draw lessons from the results of these investigations. The Safety Board is objective, impartial and independent in its judgment. The Board will always be critical towards all parties concerned.
=== New Zealand ===
In New Zealand, the Transport Accident Investigation Commission (TAIC) is responsible for the investigation of air accidents. "The Commission's purpose, as set out in its Act, is to determine the circumstances and causes of aviation, rail and maritime accidents, and incidents, with a view to avoiding similar occurrences in the future, rather than to ascribe blame to any person." The TAIC investigates in accordance with Annex 13 of the ICAO and specific New Zealand legislation.
=== Poland ===
In Poland, State Commission on Aircraft Accidents Investigation (Polish: Państwowa Komisja Badania Wypadków Lotniczych, PKBWL) is responsible for investigating all civil aviation accidents and incidents occurring in the country. Headquartered in Warsaw, the commission is a division of the Ministry of Infrastructure. As of November 2022, the head of the PKBWL is Bogusław Trela.
=== Russia ===
In Russia, the Interstate Aviation Committee (IAC, or MAK by its Russian initials) is an executive body overseeing the use and management of civil aviation in the Commonwealth of Independent States. Through its Air Accident Investigation Commission, it investigates air accidents in the former USSR area. There have been active discussions about dismantling the committee, and in 2020, Armenia and Russia signed a joint agreement establishing the International Bureau for Investigating Aviation Accidents and Serious Incidents (Russian: Международное бюро по расследованию авиационных происшествий и серьезных инцидентов), designed to replace the committee, act as an upper body for the investigation of aviation incidents, and be subordinate to the Eurasian Union. The new body has been assigned duties to investigate serious accidents and incidents in accordance with the requirements of ICAO documents, ensuring independent investigation of accidents, cooperation and interaction between the parties in investigating aircraft accidents, and the development and use of common rules and procedures for investigating aircraft accidents.
=== Taiwan ===
In Taiwan, the Taiwan Transportation Safety Board (TTSB) is the independent government agency responsible for major transportation accident investigations. The TTSB's predecessor, the Aviation Safety Council (ASC), was established in 1998. The TTSB is under the administration of the Executive Yuan and is independent of the Civil Aviation Administration. It consists of five to seven board members, including a chairman and a vice chairman, appointed by the Premier. The managing director of the TTSB manages the day-to-day functions of the organization, including accident investigations.
=== United Kingdom ===
In the United Kingdom, the agency responsible for investigation of civilian air crashes is the Air Accidents Investigation Branch (AAIB) of the Department for Transport. Its purpose is to establish the circumstances and causes of the accident and to make recommendations for their future avoidance.
=== United States ===
United States civil aviation incidents are investigated by the National Transportation Safety Board (NTSB). NTSB officials piece together evidence from the crash site to determine the likely cause or causes. The NTSB also investigates overseas incidents involving US-registered aircraft, in collaboration with local investigative authorities, especially when there is significant loss of American lives or when the involved aircraft is American-built.
=== Venezuela ===
In Venezuela, the organization tasked with investigating aviation accidents is the Ministry of Aquatic and Air Transport, more specifically its Directorate General for the Prevention and Investigation of Aeronautical Accidents.
== Retirement of flight numbers ==
Out of respect for the deceased and injured, airlines commonly retire the flight number associated with a fatal crash. For example, following the shootdown of Malaysia Airlines Flight 17, the flight number was changed to MH19. Japan Airlines stopped using the flight number 350 after a fatal plane crash in Tokyo Bay. TransAsia Airways retired the flight number 235 and changed it to 2353 after a plane crash in 2015 that left 15 survivors. However, that is not always the case. For example, China Southern Airlines and FedEx Express continued using flight numbers 3456 and 14, respectively, even after China Southern Airlines had a fatal accident in 1997 and a FedEx Express aircraft crashed on landing a month later. Similarly, Japan Airlines and Singapore Airlines continued using flight numbers 516 and 321, respectively, even after the Japan Airlines flight was involved in a collision in 2024 and the Singapore Airlines flight encountered severe turbulence that caused one death a few months later.
== See also ==
== Notes ==
== References ==
== Bibliography ==
== Further reading ==
Statistical Summary of Commercial Jet Airplane Accidents, Worldwide Operations, 1959–2017 (PDF). Aviation Safety, Boeing Commercial Airplanes, 2018. Archived (PDF) from the original on April 12, 2015.
Airbus Industrie. Commercial Aviation Accidents, 1958–2014: A Statistical Analysis. Blagnac Cedex, France: Airbus, 2015. 13 pp.
Bordoni, Antonio. Airlife's Register of Aircraft Accidents: Facts, Statistics and Analysis of Civil Accidents since 1951. Shrewsbury: Airlife, 1997. ISBN 1853109029. 401 pp.
Cullen, S. A. "Injuries in Fatal Aircraft Accidents". NATO. Archived January 2, 2011, at the Wayback Machine.
== External links ==
Aviation Safety Network. Established in 1996; the ASN Safety Database contains descriptions of over 15,800 airliner, military and corporate jet aircraft accidents/incidents since 1921.
Bureau of Aircraft Accidents Archives. Established in 2000; the B3A contains descriptions of over 22,000 airliner, military and corporate jet aircraft accidents since 1918.
National Transportation Safety Board Aviation Accident Synopses – by month
Aviation Statistics Statistical and geospatial analysis of general aviation accidents.
Aviation Accidents App Access the NTSB Aviation Accidents Database and Final Reports from all over the world on your mobile device
Aviation Accidents Map Explore an interactive map showcasing airplane crash sites |
Aviation biofuel | An aviation biofuel (also known as bio-jet fuel, sustainable aviation fuel (SAF), or bio-aviation fuel (BAF)) is a biofuel used to power aircraft. The International Air Transport Association (IATA) considers it a key element in reducing the environmental impact of aviation. Aviation biofuel is used to decarbonize medium and long-haul air travel. These types of travel generate the most emissions and could extend the life of older aircraft types by lowering their carbon footprint. Synthetic paraffinic kerosene (SPK) refers to any non-petroleum-based fuel designed to replace kerosene jet fuel, which is often, but not always, made from biomass.
Biofuels are biomass-derived fuels from plants, animals, or waste; depending on which type of biomass is used, they could lower CO2 emissions by 20–98% compared to conventional jet fuel.
The first test flight using blended biofuel was in 2008, and in 2011, blended fuels with 50% biofuels were allowed on commercial flights. In 2023, SAF production was 600 million liters, representing 0.2% of global jet fuel use. By 2024, SAF production had increased to 1.3 billion liters (1 million tonnes), representing 0.3% of global jet fuel consumption and 11% of global renewable fuel production; this fell short of the 1.9 billion liters initially expected, as major US production facilities delayed their ramp-up until 2025.
Aviation biofuel can be produced from plant or animal sources such as Jatropha, algae, tallows, waste oils, palm oil, Babassu, and Camelina (bio-SPK); from solid biomass using pyrolysis processed with a Fischer–Tropsch process (FT-SPK); with an alcohol-to-jet (ATJ) process from waste fermentation; or from synthetic biology through a solar reactor. Small piston engines can be modified to burn ethanol.
Sustainable biofuels are an alternative to electrofuels. Sustainable aviation fuel is certified as being sustainable by a third-party organisation.
SAF technology faces significant challenges due to feedstock constraints. The oils and fats known as hydrotreated esters and fatty acids (HEFA), crucial for SAF production, are in limited supply as demand increases. Although advanced e-fuels technology, which combines waste CO2 with clean hydrogen, presents a promising solution, it is still under development and comes with high costs. To overcome these issues, SAF developers are exploring more readily available feedstocks such as woody biomass and agricultural and municipal waste, aiming to produce lower-carbon jet fuel more sustainably and efficiently.
== Environmental impact ==
Plants absorb carbon dioxide as they grow; in principle, therefore, plant-based biofuels emit only as much greenhouse gas as the plants previously absorbed. Biofuel production, processing, and transport, however, emit greenhouse gases, reducing the net emissions savings. The biofuels with the greatest emission savings are those derived from photosynthetic algae (98% savings), although the technology is not yet mature, and those from non-food crops and forest residues (91–95% savings).
Jatropha oil, a non-food oil used as a biofuel, lowers CO2 emissions by 50–80% compared to Jet-A1, a kerosene-based fuel. Jatropha, used for biodiesel, can thrive on marginal land where most plants produce low yields. A life cycle assessment on jatropha estimated that biofuels could reduce greenhouse gas emissions by up to 85% if former agro-pastoral land is used, or increase emissions by up to 60% if natural woodland is converted.
Palm oil cultivation is constrained by scarce land resources and its expansion to forestland causes biodiversity loss, along with direct and indirect emissions due to land-use change. Neste Corporation's renewable products include a refining residue of food-grade palm oil, the oily waste skimmed from the palm oil mill's wastewater. Other Neste sources are used cooking oil from deep fryers and animal fats. Neste's sustainable aviation fuel is used by Lufthansa; Air France and KLM announced 2030 SAF targets in 2022 including multi-year purchase contracts totaling over 2.4 million tonnes of SAF from Neste, TotalEnergies, and DG Fuels.
Aviation fuel from wet waste-derived feedstock ("VFA-SAF") provides an additional environmental benefit. Wet waste consists of waste from landfills, sludge from wastewater treatment plants, agricultural waste, greases, and fats. Wet waste can be converted to volatile fatty acids (VFAs), which can then be catalytically upgraded to SAF. Wet waste is a low-cost and plentiful feedstock, with the potential to replace 20% of US fossil jet fuel. This lessens the need to grow crops specifically for fuel, which is itself energy intensive and increases CO2 emissions throughout the life cycle. Wet waste feedstocks for SAF divert waste from landfills; this diversion has the potential to eliminate 17% of US methane emissions across all sectors. VFA-SAF's carbon footprint is up to 165% lower than that of fossil aviation fuel, meaning it can be net carbon-negative when credited with avoided landfill methane. The technology is in its infancy, although start-ups such as Alder Renewables, BioVeritas, and ChainCraft are working to make it a viable solution.
NASA has determined that 50% aviation biofuel mixture can cut particulate emissions caused by air traffic by 50–70%. Biofuels do not contain sulfur compounds and thus do not emit sulfur dioxide.
== History ==
The first flight using blended biofuel took place in 2008, when Virgin Atlantic flew a commercial airliner using feedstocks such as algae. By 2008, airlines representing more than 15% of the industry had formed the Sustainable Aviation Fuel Users Group, with support from NGOs such as the Natural Resources Defense Council and the Roundtable on Sustainable Biofuels, and pledged to develop sustainable biofuels for aviation. That year, Boeing was co-chair of the Algal Biomass Organization, joined by air carriers and biofuel technology developer UOP LLC (Honeywell).
In 2009, the IATA committed to achieving carbon-neutral growth by 2020, and to halve carbon emissions by 2050.
In 2010, Boeing announced a target for biofuels to supply 1% of global aviation fuel by 2015.
By June 2011, the revised Specification for Aviation Turbine Fuel Containing Synthesized Hydrocarbons (ASTM D7566) allowed commercial airlines to blend up to 50% biofuels with conventional jet fuel. The safety and performance of jet fuel used in passenger flights is certified by ASTM International. Biofuels were approved for commercial use after a multi-year technical review from aircraft makers, engine manufacturers and oil companies. Thereafter some airlines experimented with biofuels on commercial flights. As of July 2020, seven annexes to D7566 were published, including various biofuel types:
Fischer-Tropsch Synthetic Paraffinic Kerosene (FT-SPK, 2009)
Hydroprocessed Esters and Fatty Acids Synthetic Paraffinic Kerosene (HEFA-SPK, 2011)
Hydroprocessed Fermented Sugars to Synthetic Isoparaffins (HFS-SIP, 2014)
Fischer-Tropsch Synthetic Paraffinic Kerosene with Aromatics (FT-SPK/A, 2015)
Alcohol to Jet Synthetic Paraffinic Kerosene (ATJ-SPK, 2016)
Catalytic Hydrothermolysis Synthesized Kerosene (CH-SK, or CHJ; 2020).
In December 2011, the FAA awarded US$7.7 million to eight companies to develop drop-in sustainable fuels, especially from alcohols, sugars, biomass, and organic matter such as pyrolysis oils, within its CAAFI and CLEEN programs.
Biofuel provider Solena filed for bankruptcy in 2015.
By 2015, production of fatty acid methyl esters and alkenones from the alga Isochrysis was under research.
By 2016, Thomas Brueck of the Technical University of Munich was forecasting that algaculture could provide 3–5% of jet fuel needs by 2050.
In fall 2016, the International Civil Aviation Organization announced plans for multiple measures including the development and deployment of sustainable aviation fuels.
Dozens of companies received hundreds of millions in venture capital from 2005 to 2012 to extract fuel oil from algae, some promising competitively-priced fuel by 2012 and production of 1 billion US gal (3.8 million m3) by 2012-2014. By 2017 most companies had disappeared or changed their business plans to focus on other markets.
In 2019, SAF accounted for 0.1% of aviation fuel. The International Air Transport Association (IATA) supported its adoption, aiming that year for a 2% share by 2025: 7 million m3 (1.8 billion US gal).
In early 2021, Boeing's CEO Dave Calhoun said drop-in sustainable aviation fuels are "the only answer between now and 2050" to reduce carbon emissions. In May 2021, the International Air Transport Association (IATA) set a goal for the aviation industry to achieve net-zero carbon emissions by 2050 with SAF as the key component.
The 2022 Inflation Reduction Act introduced the Fueling Aviation's Sustainable Transition (FAST) Grant Program, which provides $244.5 million in grants for SAF-related "production, transportation, blending, and storage." In November 2022, sustainable aviation fuels were a topic at COP27.
As of 2023, 90% of biofuel was made from oilseed and sugarcane grown solely for that purpose.
== Production ==
Jet fuel is a mixture of various hydrocarbons. The mixture is restricted by product requirements, for example, freezing point and smoke point. Jet fuels are sometimes classified as kerosene or naphtha-type. Kerosene-type fuels include Jet A, Jet A-1, JP-5 and JP-8. Naphtha-type jet fuels, sometimes referred to as "wide-cut" jet fuel, include Jet B and JP-4.
"Drop-in" biofuels are biofuels that are interchangeable with conventional fuels. Deriving drop-in jet fuel from bio-based sources is ASTM-approved via two routes. ASTM has found it safe to blend up to 50% synthetic paraffinic kerosene (SPK) into regular jet fuel; tests have been done with considerably higher concentrations.
HEFA-SPK
Hydroprocessed Esters and Fatty Acids Synthetic Paraffinic Kerosene (HEFA-SPK) is a specific type of hydrotreated vegetable oil fuel. As of 2020 it was the only mature technology, although by 2024 FT-SPK had been commercialized as well. HEFA-SPK was approved by ASTM for use in 2011. It is produced by the deoxygenation and hydroprocessing of feedstock fatty acids from algae, jatropha, and camelina.
Bio-SPK
This fuel uses oil extracted from plant or animal sources such as jatropha, algae, tallows, waste oils, babassu, and camelina to produce synthetic paraffinic kerosene (bio-SPK) by cracking and hydroprocessing. Using algae to make jet fuel remains an emerging technology. Companies working on algae jet fuel include Solazyme, Honeywell UOP, Solena, Sapphire Energy, Imperium Renewables, and Aquaflow Bionomic Corporation. Universities working on algae jet fuel include Arizona State University and Cranfield University. Major investors in algae-based SPK research are Boeing, Honeywell/UOP, Air New Zealand, Continental Airlines, Japan Airlines, and General Electric.
FT-SPK
Processing solid biomass using pyrolysis can produce oil or gasification to produce a syngas that is processed into FT SPK (Fischer–Tropsch Synthetic Paraffinic Kerosene).
ATJ-SPK
The alcohol-to-jet (ATJ) pathway de-oxygenates and processes alcohols such as ethanol or butanol into jet fuel. Companies such as LanzaTech have created ATJ-SPK from flue gases: ethanol is produced from the CO in the gases using microbes such as Clostridium autoethanogenum. In 2016, LanzaTech demonstrated its technology at pilot scale in New Zealand, using industrial waste gases from the steel industry as a feedstock. Gevo developed technology to retrofit existing ethanol plants to produce isobutanol. Alcohol-to-Jet Synthetic Paraffinic Kerosene (ATJ-SPK) is an approved pathway for delivering bio-based, low-carbon fuel.
=== Future production routes ===
Systems that use synthetic biology to create hydrocarbons are under development:
The SUN-to-LIQUID project is examining Fischer–Tropsch hydrocarbon fuels ("solar kerosene") produced through the use of a solar reactor.
Alder Fuels is proposing to convert lignocellulosic biomass (a common type of waste from forestry and agriculture) into a hydrocarbon-rich "greencrude" via pyrolysis (see: pyrolysis oil). Greencrude can be turned into fuel in refineries like crude oil.
Universal Fuel Technologies is marketing its Flexiforming technology that can use different feedstocks and even the byproducts from existing renewable fuel manufacturing processes to produce SAF.
Arcadia eFuels is constructing a plant that will use renewable electricity to electrolyze water into green hydrogen, combine it with captured CO2 into syngas, and use Power-to-X gas-to-liquid processes to produce SAF. The plant at the port of Vordingborg, Denmark, is expected to begin production in 2028.
=== Piston engines ===
Small piston engines can be modified to burn ethanol. Swift Fuel, a biofuel alternative to avgas, was approved as a test fuel by ASTM International in December 2009.
=== Technical challenges ===
Nitrile-based rubber materials expand in the presence of aromatic compounds found in conventional petroleum fuel. Pure biofuels without petroleum and paraffin-based additives may cause rubber seals and hoses to shrink. Synthetic rubber substitutes that are not adversely affected by biofuels, such as Viton, for seals and hoses are available.
The United States Air Force found harmful bacteria and fungi in their biofueled aircraft, and use pasteurization to disinfect them.
==== Aromatics and cycloalkanes ====
As of May 2025, SAF generally must be blended with fossil fuel, because jet fuel needs cycloalkanes and aromatics, which are generally deficient in SAF, alongside the n-alkanes and isoalkanes that predominate in SAF.
== Economics ==
In 2019 the International Energy Agency forecast that SAF production would grow from 18 to 75 billion litres between 2025 and 2040, representing a 5% to 19% share of aviation fuel. By 2019, fossil jet fuel cost $0.3–0.6 per L to produce given a $50–100 crude oil barrel, while aviation biofuel cost $0.7–1.6 per L, needing a $110–260 crude oil barrel to break even.
As of 2020 aviation biofuel was more expensive than fossil jet kerosene, considering aviation taxation and subsidies at that time.
As of a 2021 analysis, VFA-SAF break-even cost was $2.50/US gal ($0.66/L). This number was generated considering credits and incentives at the time, such as California's LCFS (Low Carbon Fuel Standard) credits and the US Environmental Protection Agency (EPA) Renewable Fuel Standard incentives.
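The unit conversion behind these cost figures can be checked with a short script. This is a sketch: the conversion factor is standard (1 US gallon = 3.78541 L), and the dollar figures are simply the ones quoted in this section.

```python
# Convert the quoted VFA-SAF break-even price from $/US gal to $/L and
# compare the quoted 2019 production-cost ranges ($/L) for fossil jet
# fuel vs. aviation biofuel. All dollar figures come from the text above.

LITERS_PER_US_GAL = 3.78541

def usd_per_gal_to_usd_per_liter(price_per_gal: float) -> float:
    """Convert a $/US-gallon price to $/liter."""
    return price_per_gal / LITERS_PER_US_GAL

vfa_saf_breakeven = usd_per_gal_to_usd_per_liter(2.50)
print(f"VFA-SAF break-even: ${vfa_saf_breakeven:.2f}/L")  # ~$0.66/L

# Quoted 2019 production-cost ranges in $/L: (low, high)
fossil_jet = (0.3, 0.6)
aviation_biofuel = (0.7, 1.6)

# Even the cheapest biofuel exceeded the most expensive fossil jet fuel.
cost_gap = aviation_biofuel[0] - fossil_jet[1]
print(f"Minimum cost gap: ${cost_gap:.2f}/L")
```

This confirms the $2.50/US gal figure and the $0.66/L figure in the text are the same number in different units.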
== Sustainable aviation fuels ==
Sustainable biofuels do not use food crops, prime agricultural land, or fresh water. Sustainable aviation fuel (SAF) is certified by a third party such as the Roundtable on Sustainable Biofuels.
As of 2022, some 450,000 flights had used sustainable fuels as part of the fuel mix, although such fuels were roughly three times more expensive than traditional fossil jet fuel or kerosene. In 2023, SAFs accounted for less than 0.1% of all aviation fuel consumed.
=== Certification ===
A SAF sustainability certification ensures that the product satisfies criteria focused on environmental, social, and economic "triple-bottom-line" considerations. Under many emission regulation schemes, such as the European Union Emissions Trading Scheme (EUTS), a certified SAF product may be exempted from carbon compliance liability costs. This marginally improves SAF's economic competitiveness versus fossil-based fuel.
The first reputable body to launch a sustainable biofuel certification system was the European-based Roundtable on Sustainable Biomaterials (RSB) NGO. Leading airlines and other signatories to the Sustainable Aviation Fuel Users Group (SAFUG) pledged to support RSB as their preferred certification provider.
Some SAF pathways have approved RIN pathways under the United States' Renewable Fuel Standard, which can serve as an implicit certification if the RIN is a Q-RIN.
EU RED II Recast (2018)
Greenhouse gas emissions from sustainable fuels must be lower than those from the fuels they replace: at least 50% for production built before 5 October 2015, 60% after that date and 65% after 2021. Raw materials cannot be sourced from land with high biodiversity or high carbon stocks (i.e. primary and protected forests, biodiversity-rich grasslands, wetlands and peatlands). Other sustainability issues are set out in the Governance Regulation and may be covered voluntarily.
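The date-based savings thresholds described above can be sketched as a small lookup. This is illustrative only: the exact inclusive/exclusive handling at each cutoff date is an assumption, and the function name is mine.

```python
from datetime import date

def required_ghg_saving(start_of_operation: date) -> int:
    """Minimum greenhouse-gas saving (%) required of a fuel-producing
    installation under the EU RED II thresholds described in the text.
    Boundary handling at the cutoff dates is an assumption.
    """
    if start_of_operation < date(2015, 10, 5):
        return 50  # installations operating before 5 October 2015
    if start_of_operation < date(2021, 1, 1):
        return 60  # installations starting after that date
    return 65      # installations starting from 2021 onward

print(required_ghg_saving(date(2014, 6, 1)))   # 50
print(required_ghg_saving(date(2018, 3, 15)))  # 60
print(required_ghg_saving(date(2022, 1, 1)))   # 65
```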
ICAO 'CORSIA'
GHG Reduction - Criterion 1: lifecycle reductions of at least 10% compared to fossil fuel. Carbon Stock - Criterion 1: not produced from biomass obtained from land whose uses changed after 1 January 2008 from primeval forests, wetlands or peatlands, as all these lands have high carbon stocks. Criterion 2: For land use changes after 1 January 2008, (using IPCC land categories), if emissions from direct land use change (DLUC) exceed the default value of the induced land use change (ILUC), the value of the DLUC replaces the default (ILUC) value.
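Criterion 2's substitution rule amounts to using whichever land-use-change emissions value is larger. A minimal sketch of that rule follows; the function name and the g CO2e/MJ units are illustrative, not from the CORSIA documents.

```python
def land_use_change_value(dluc: float, iluc_default: float) -> float:
    """Return the land-use-change emissions value to apply under CORSIA
    Criterion 2 as described in the text: if direct land-use-change
    (DLUC) emissions exceed the default induced land-use-change (ILUC)
    value, the DLUC value replaces the default. Units: g CO2e/MJ
    (illustrative).
    """
    return dluc if dluc > iluc_default else iluc_default

print(land_use_change_value(dluc=30.0, iluc_default=20.0))  # DLUC wins: 30.0
print(land_use_change_value(dluc=10.0, iluc_default=20.0))  # default kept: 20.0
```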
=== Global impact ===
As emissions trading schemes and other carbon compliance regimes emerge, certain biofuels are likely to be exempted ("zero-rated") by governments from compliance due to their closed-loop nature, if they can demonstrate appropriate credentials. For example, in the EUTS, SAFUG's proposal was accepted that only fuels certified as sustainable by the RSB or similar body would be zero-rated. SAFUG was formed by a group of interested airlines in 2008 under the auspices of Boeing Commercial Airplanes. Member airlines represented more than 15% of the industry, and signed a pledge to work towards SAF.
In addition to SAF certification, the integrity of aviation biofuel producers and their products could be assessed by means such as Richard Branson's Carbon War Room, or the Renewable Jet Fuels initiative. The latter works with companies such as LanzaTech, SG Biofuels, AltAir, Solazyme, and Sapphire.
Candelaria Bergero of the University of California's Earth System Science Department and her co-authors stated that the "main challenges to scaling up such sustainable fuel production include technology costs and process efficiencies", and that widespread production could undermine food security and put pressure on land use.
=== Market implementation ===
By 2019, Virgin Australia had fueled more than 700 flights and flown more than one million kilometers, domestic and international, using Gevo's alcohol-to-jet fuel. Virgin Atlantic was working to regularly use fuel derived from the waste gases of steel mills, with LanzaTech. British Airways wanted to convert household waste into jet fuel with Velocys. United Airlines committed to 900 million US gal (3,400,000 m3) of sustainable aviation fuel for 10 years from Fulcrum BioEnergy (of its 4.1 billion US gal (16,000,000 m3) fuel consumption in 2018), after a $30 million investment in 2015.
From 2020, Qantas planned to use a 50/50 blend of SG Preston's biofuel on its Los Angeles-Australia flights. SG Preston also planned to provide fuel to JetBlue over 10 years. At its sites in Singapore, Rotterdam and Porvoo, Finland's Neste expected to improve its renewable fuel production capacity from 2.7 to 3.0 million t (6.0 to 6.6 billion lb) a year by 2020, and to increase its Singapore capacity by 1.3 million t (2.9 billion lb) to reach 4.5 million t (9.9 billion lb) in 2022 by investing €1.4 billion ($1.6 billion).
By 2020, International Airlines Group had invested $400 million to convert waste into sustainable aviation fuel with Velocys.
United Airlines has expanded SAF use across multiple airports worldwide, including Amsterdam in 2022, San Francisco and London in 2023, and Chicago O'Hare and Los Angeles in 2024.
In March 2024, regular use of SAF began in the Northeastern United States at John F. Kennedy International Airport, as part of a new effort by JetBlue. Southwest Airlines began using sustainable jet fuel at Chicago Midway International Airport in October 2024.
== See also ==
Biodiesel
Fossil fuel phase-out
List of emerging technologies
Vegetable oil fuel
== References ==
== Further reading ==
Adam Klauber (Rocky Mountain Institute); Isaac Toussie (Rocky Mountain Institute); Steve Csonka (Commercial Aviation Alternative Fuels Initiative); Barbara Bramble (National Wildlife Federation) (Oct 23, 2017). "Opinion: Biofuels Sustainable, Essential To Aviation's Future". Aviation Week & Space Technology.
"Sustainable Aviation Fuel" (PDF). Gevo. December 2019. Alcohol-to-Jet Synthetic Paraffinic Kerosene Is a Proven Pathway to Deliver a Bio-Based, Low-Carbon Option to Travelers
McKinsey & Company (Nov 2020). Clean Skies for Tomorrow (PDF) (Report). World Economic Forum. Sustainable Aviation Fuels as a Pathway to Net-Zero Aviation
== External links ==
"Sustainable Sky Institute". non-profit think tank/do tank focused on [...] the market transformation of the world's air transport system towards a [...] sustainable long-term future
"Aviation industry reducing its environmental footprint". Aviation: Benefits Beyond Borders. Air Transport Action Group.
"Nordic Initiative for Sustainable Aviation". Archived from the original on 2015-04-02. Retrieved 2015-03-27. Nordic association working to promote and develop a more sustainable aviation industry, with a specific focus on alternative sustainable fuels
"Roundtable on Sustainable Biofuels". The RSB is supporting the development of a sustainable bioeconomy
"International Journal of Sustainable Aviation". Inderscience Publishers.
"Biofuels for aviation". European Commission. 5 September 2023.
Geoff Hunt (22 April 2021). "Why industry needs global standards for sustainable fuel use". Flightglobal.
Avionics (a portmanteau of aviation and electronics) are the electronic systems used on aircraft. Avionic systems include communications, navigation, the display and management of multiple systems, and the hundreds of systems that are fitted to aircraft to perform individual functions. These can be as simple as a searchlight for a police helicopter or as complicated as the tactical system for an airborne early warning platform.
== History ==
The term "avionics" was coined in 1949 by Philip J. Klass, senior editor at Aviation Week & Space Technology magazine, as a portmanteau of "aviation electronics".
Radio communication was first used in aircraft just prior to World War I. The first airborne radios were in zeppelins, but the military sparked development of light radio sets that could be carried by heavier-than-air craft, so that aerial reconnaissance biplanes could report their observations immediately in case they were shot down. The first experimental radio transmission from an airplane was conducted by the U.S. Navy in August 1910. The first aircraft radios transmitted by radiotelegraphy; they required a two-seat aircraft with a second crewman who operated a telegraph key to spell out messages in Morse code. During World War I, the development of the triode vacuum tube (see TM (triode)) made AM voice two-way radio sets possible in 1917, simple enough that the pilot of a single-seat aircraft could use one while flying.
Radar, the central technology used today in aircraft navigation and air traffic control, was developed by several nations, mainly in secret, as an air defense system in the 1930s during the runup to World War II. Many modern avionics have their origins in World War II wartime developments. For example, autopilot systems that are commonplace today began as specialized systems to help bomber planes fly steadily enough to hit precision targets from high altitudes. Britain's 1940 decision to share its radar technology with its U.S. ally, particularly the magnetron vacuum tube, in the famous Tizard Mission, significantly shortened the war. Modern avionics is a substantial portion of military aircraft spending. Aircraft like the F-15E and the now retired F-14 have roughly 20 percent of their budget spent on avionics. Most modern helicopters now have budget splits of 60/40 in favour of avionics.
The civilian market has also seen a growth in the cost of avionics. Flight control systems (fly-by-wire) and new navigation needs brought on by tighter airspaces have pushed up development costs. The major change has been the recent boom in consumer flying. As more people begin to use planes as their primary method of transportation, more elaborate methods of controlling aircraft safely in these highly restricted airspaces have been invented.
=== Modern avionics ===
Avionics plays a heavy role in modernization initiatives like the Federal Aviation Administration's (FAA) Next Generation Air Transportation System project in the United States and the Single European Sky ATM Research (SESAR) initiative in Europe. The Joint Planning and Development Office put forth a roadmap for avionics in six areas:
Published Routes and Procedures – Improved navigation and routing
Negotiated Trajectories – Adding data communications to create preferred routes dynamically
Delegated Separation – Enhanced situational awareness in the air and on the ground
Low Visibility/Ceiling Approach/Departure – Allowing operations with weather constraints with less ground infrastructure
Surface Operations – To increase safety in approach and departure
ATM Efficiencies – Improving the air traffic management (ATM) process
=== Market ===
The Aircraft Electronics Association reported $1.73 billion in avionics sales for the first three quarters of 2017 in business and general aviation, a 4.1% yearly improvement: 73.5% came from North America; forward-fit represented 42.3% while 57.7% were retrofits, as the U.S. deadline of January 1, 2020 for mandatory ADS-B Out approached.
== Aircraft avionics ==
Avionics equipment is typically housed in the cockpit or, in larger aircraft, in an avionics bay under the cockpit or in a movable nosecone; it includes control, monitoring, communication, navigation, weather, and anti-collision systems. The majority of aircraft power their avionics using 14- or 28‑volt DC electrical systems; however, larger, more sophisticated aircraft (such as airliners or military combat aircraft) have AC systems operating at 115 V, 400 Hz. There are several major vendors of flight avionics, including The Boeing Company, Panasonic Avionics Corporation, Honeywell (which now owns Bendix/King), Universal Avionics Systems Corporation, Rockwell Collins (now Collins Aerospace), Thales Group, GE Aviation Systems, Garmin, Raytheon, Parker Hannifin, UTC Aerospace Systems (now Collins Aerospace), Selex ES (now Leonardo), Shadin Avionics, and Avidyne Corporation.
International standards for avionics equipment are prepared by the Airlines Electronic Engineering Committee (AEEC) and published by ARINC.
=== Avionics Installation ===
Avionics installation is a critical aspect of modern aviation, ensuring that aircraft are equipped with the necessary electronic systems for safe and efficient operation. These systems encompass a wide range of functions, including communication, navigation, monitoring, flight control, and weather detection. Avionics installations are performed on all types of aircraft, from small general aviation planes to large commercial jets and military aircraft.
==== Installation Process ====
The installation of avionics requires a combination of technical expertise, precision, and adherence to stringent regulatory standards. The process typically involves:
Planning and Design: Before installation, the avionics shop works closely with the aircraft owner to determine the required systems based on the aircraft type, intended use, and regulatory requirements. Custom instrument panels are often designed to accommodate the new systems.
Wiring and Integration: Avionics systems are integrated into the aircraft's electrical and control systems, with wiring often requiring laser marking for durability and identification. Shops use detailed schematics to ensure correct installation.
Testing and Calibration: After installation, each system must be thoroughly tested and calibrated to ensure proper function. This includes ground testing, flight testing, and system alignment with regulatory standards such as those set by the FAA.
Certification: Once the systems are installed and tested, the avionics shop completes the necessary certifications. In the U.S., this often involves compliance with FAA Part 91.411 and 91.413 for IFR (Instrument Flight Rules) operations, as well as RVSM (Reduced Vertical Separation Minimum) certification.
==== Regulatory Standards ====
Avionics installation is governed by strict regulatory frameworks to ensure the safety and reliability of aircraft systems. In the United States, the Federal Aviation Administration (FAA) sets the standards for avionics installations. These include guidelines for:
System Performance: Avionics systems must meet performance benchmarks as defined by the FAA, ensuring they function correctly in all phases of flight.
Certification: Shops performing installations must be FAA-certified, and their technicians often hold certifications such as the General Radiotelephone Operator License (GROL).
Inspections: Aircraft equipped with newly installed avionics systems must undergo rigorous inspections before being cleared for flight, including both ground and flight tests.
==== Advancements in Avionics Technology ====
The field of avionics has seen rapid technological advancements in recent years, leading to more integrated and automated systems. Key trends include:
Glass Cockpits: Traditional analog gauges are being replaced by fully integrated glass cockpit displays, providing pilots with a centralized view of all flight parameters.
NextGen Technologies: ADS-B and satellite-based navigation are part of the FAA's NextGen initiative, aimed at modernizing air traffic control and improving the efficiency of the national airspace.
Autonomous Systems: Advanced automation systems are paving the way for more autonomous aircraft systems, enhancing safety, efficiency, and reducing pilot workload.
=== Communications ===
Communications connect the flight deck to the ground and the flight deck to the passengers. On‑board communications are provided by public-address systems and aircraft intercoms.
The VHF aviation communication system works on the airband of 118.000 MHz to 136.975 MHz. Each channel is spaced from the adjacent ones by 8.33 kHz in Europe, 25 kHz elsewhere. VHF is also used for line of sight communication such as aircraft-to-aircraft and aircraft-to-ATC. Amplitude modulation (AM) is used, and the conversation is performed in simplex mode. Aircraft communication can also take place using HF (especially for trans-oceanic flights) or satellite communication.
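The channel arithmetic in that band can be checked exactly with exact rational numbers, since "8.33 kHz" spacing is nominally 25/3 kHz. This sketch counts raw channel center frequencies only; the operational 8.33 kHz channel-naming scheme used in Europe assigns designators differently.

```python
from fractions import Fraction

# Airband limits in kHz, from the figures above.
LOW_KHZ = Fraction(118000)
HIGH_KHZ = Fraction(136975)
SPAN_KHZ = HIGH_KHZ - LOW_KHZ  # 18975 kHz

def channel_count(spacing_khz: Fraction) -> int:
    """Number of channel center frequencies that fit in the band,
    inclusive of both ends; the spacing must divide the span exactly."""
    steps = SPAN_KHZ / spacing_khz
    assert steps.denominator == 1, "spacing does not divide the span"
    return int(steps) + 1

print(channel_count(Fraction(25)))     # 760 channels at 25 kHz spacing
print(channel_count(Fraction(25, 3)))  # 2278 centers at nominal 8.33 kHz
```

Using `Fraction` avoids the floating-point error that 8.33 repeating would otherwise introduce.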
=== Navigation ===
Air navigation is the determination of position and direction on or above the surface of the Earth. Avionics can use satellite navigation systems (such as GPS and WAAS), inertial navigation systems (INS), ground-based radio navigation systems (such as VOR or LORAN), or any combination thereof. Older ground-based systems such as VOR or LORAN require a pilot or navigator to plot the intersection of signals on a paper map to determine an aircraft's location; modern systems such as GPS calculate the position automatically and display it to the flight crew on moving map displays.
=== Monitoring ===
The first hints of glass cockpits emerged in the 1970s when flight-worthy cathode-ray tube (CRT) screens began to replace electromechanical displays, gauges and instruments. A "glass" cockpit refers to the use of computer monitors instead of gauges and other analog displays. Aircraft were getting progressively more displays, dials and information dashboards that eventually competed for space and pilot attention. In the 1970s, the average aircraft had more than 100 cockpit instruments and controls.
Glass cockpits started to come into being with the Gulfstream G‑IV private jet in 1985. One of the key challenges in glass cockpits is to balance how much control is automated and how much the pilot should do manually. Generally they try to automate flight operations while keeping the pilot constantly informed.
=== Aircraft flight-control system ===
Aircraft have means of automatically controlling flight. The autopilot was invented by Lawrence Sperry during World War I to fly bomber planes steadily enough to hit accurate targets from 25,000 feet. When it was first adopted by the U.S. military, a Honeywell engineer sat in the back seat with bolt cutters to disconnect the autopilot in case of emergency. Nowadays most commercial planes are equipped with aircraft flight control systems to reduce pilot error and workload at landing or takeoff.
The first simple commercial auto-pilots were used to control heading and altitude and had limited authority on things like thrust and flight control surfaces. In helicopters, auto-stabilization was used in a similar way. The first systems were electromechanical. The advent of fly-by-wire and electro-actuated flight surfaces (rather than the traditional hydraulic) has increased safety. As with displays and instruments, critical devices that were electro-mechanical had a finite life. With safety critical systems, the software is very strictly tested.
=== Fuel Systems ===
The Fuel Quantity Indication System (FQIS) monitors the amount of fuel aboard. Using various sensors, such as capacitance tubes, temperature sensors, densitometers, and level sensors, the FQIS computer calculates the mass of fuel remaining on board.
The Fuel Control and Monitoring System (FCMS) reports fuel remaining on board in a similar manner but, by controlling pumps and valves, also manages fuel transfers around the various tanks:
Refuelling control to upload to a certain total mass of fuel and distribute it automatically.
Transfers during flight to the tanks that feed the engines, e.g. from fuselage to wing tanks
Centre of gravity control transfers from the tail (trim) tanks forward to the wings as fuel is expended
Maintaining fuel in the wing tips (to alleviate wing bending due to lift in flight) & transferring to the main tanks after landing
Controlling fuel jettison during an emergency to reduce the aircraft weight.
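The core FQIS computation described above, combining a level reading with a measured density to obtain a fuel mass, can be sketched as follows. Everything here is illustrative: the calibration table, sensor values, and function names are assumptions, not drawn from any real FQIS.

```python
def volume_from_level(level: float,
                      calibration: list[tuple[float, float]]) -> float:
    """Linearly interpolate tank volume (liters) from a normalized
    level-sensor reading using a (level, volume) calibration table.
    Real tanks have irregular shapes, hence the lookup table."""
    for (l0, v0), (l1, v1) in zip(calibration, calibration[1:]):
        if l0 <= level <= l1:
            return v0 + (v1 - v0) * (level - l0) / (l1 - l0)
    raise ValueError("level outside calibration range")

def fuel_mass_kg(level: float, density_kg_per_l: float,
                 calibration: list[tuple[float, float]]) -> float:
    """Fuel mass = interpolated volume x measured density, roughly how
    an FQIS computer might combine level-sensor and densitometer data."""
    return volume_from_level(level, calibration) * density_kg_per_l

# Illustrative calibration: a linear 1000 L tank.
table = [(0.0, 0.0), (1.0, 1000.0)]
print(fuel_mass_kg(level=0.5, density_kg_per_l=0.8, calibration=table))  # 400.0
```

Working in mass rather than volume matters because fuel density varies with temperature, which is why the FQIS reads densitometers and temperature sensors alongside level sensors.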
=== Collision-avoidance systems ===
To supplement air traffic control, most large transport aircraft and many smaller ones use a traffic alert and collision avoidance system (TCAS), which can detect the location of nearby aircraft, and provide instructions for avoiding a midair collision. Smaller aircraft may use simpler traffic alerting systems such as TPAS, which are passive (they do not actively interrogate the transponders of other aircraft) and do not provide advisories for conflict resolution.
To help avoid controlled flight into terrain (CFIT), aircraft use systems such as ground-proximity warning systems (GPWS), which use radar altimeters as a key element. One of the major weaknesses of GPWS is the lack of "look-ahead" information, because it only provides altitude above terrain "look-down". In order to overcome this weakness, modern aircraft use a terrain awareness warning system (TAWS).
=== Flight recorders ===
Commercial aircraft cockpit data recorders, commonly known as "black boxes", store flight information and audio from the cockpit. They are often recovered from an aircraft after a crash to determine control settings and other parameters during the incident.
=== Weather systems ===
Weather systems such as weather radar (typically ARINC 708 on commercial aircraft) and lightning detectors are important for aircraft flying at night or in instrument meteorological conditions, where it is not possible for pilots to see the weather ahead. Heavy precipitation (as sensed by radar) and lightning activity (as sensed by a lightning detector) are both indications of strong convective activity and severe turbulence, and weather systems allow pilots to deviate around these areas.
Lightning detectors like the Stormscope or Strikefinder have become inexpensive enough that they are practical for light aircraft. In addition to radar and lightning detection, observations and extended radar pictures (such as NEXRAD) are now available through satellite data connections, allowing pilots to see weather conditions far beyond the range of their own in-flight systems. Modern displays allow weather information to be integrated with moving maps, terrain, and traffic onto a single screen, greatly simplifying navigation.
Modern weather systems also include wind shear and turbulence detection and terrain and traffic warning systems. Onboard weather avionics are especially popular in Africa, India, and other regions where air travel is a growing market but ground-based support is less well developed.
=== Aircraft management systems ===
There has been a progression towards centralized control of the multiple complex systems fitted to aircraft, including engine monitoring and management. Health and usage monitoring systems (HUMS) are integrated with aircraft management computers to give maintainers early warnings of parts that will need replacement.
The integrated modular avionics concept proposes an integrated architecture with application software portable across an assembly of common hardware modules. It has been used in fourth generation jet fighters and the latest generation of airliners.
== Mission or tactical avionics ==
Military aircraft have been designed either to deliver a weapon or to be the eyes and ears of other weapon systems. The vast array of sensors available to the military is applied to whatever tactical need arises. As with aircraft management, the bigger sensor platforms (such as the E-3D, JSTARS, ASTOR, Nimrod MRA4, and Merlin HM Mk 1) have mission-management computers.
Police and EMS aircraft also carry sophisticated tactical sensors.
=== Military communications ===
While aircraft communications provide the backbone for safe flight, tactical systems are designed to withstand the rigors of the battlefield. UHF, VHF Tactical (30–88 MHz), and SatCom systems, combined with ECCM methods and cryptography, secure the communications. Data links such as Link 11, Link 16, Link 22, and BOWMAN, as well as JTRS and even TETRA, provide the means of transmitting data such as images and targeting information.
=== Radar ===
Airborne radar was one of the first tactical sensors. Because altitude extends a radar's range, there has been a significant focus on airborne radar technologies. Radars include airborne early warning (AEW), anti-submarine warfare (ASW), and even weather radar (ARINC 708) and ground tracking/proximity radar.
The military uses radar in fast jets to help pilots fly at low level. While the civil market has had weather radar for some time, there are strict rules about using it to navigate the aircraft.
=== Sonar ===
Dipping sonar fitted to a range of military helicopters allows the helicopter to protect shipping assets from submarines or surface threats. Maritime support aircraft can drop active and passive sonar devices (sonobuoys) and these are also used to determine the location of enemy submarines.
=== Electro-optics ===
Electro-optic systems include devices such as the head-up display (HUD), forward-looking infrared (FLIR), infrared search and track, and other passive infrared sensors. These are all used to provide imagery and information to the flight crew. This imagery is used for everything from search and rescue to navigational aids and target acquisition.
=== ESM/DAS ===
Electronic support measures and defensive aids systems are used extensively to gather information about threats or possible threats. They can be used to launch devices (in some cases automatically) to counter direct threats against the aircraft. They are also used to determine the state of a threat and identify it.
=== Aircraft networks ===
The avionics systems in military, commercial and advanced models of civilian aircraft are interconnected using an avionics databus. Common avionics databus protocols, with their primary application, include:
Aircraft Data Network (ADN): Ethernet derivative for Commercial Aircraft
Avionics Full-Duplex Switched Ethernet (AFDX): Specific implementation of ARINC 664 (ADN) for Commercial Aircraft
ARINC 429: Generic Medium-Speed Data Sharing for Private and Commercial Aircraft
ARINC 664: See ADN above
ARINC 629: Commercial Aircraft (Boeing 777)
ARINC 708: Weather Radar for Commercial Aircraft
ARINC 717: Flight Data Recorder for Commercial Aircraft
ARINC 825: CAN bus for commercial aircraft (for example Boeing 787 and Airbus A350)
Commercial Standard Digital Bus
IEEE 1394b: Military Aircraft
MIL-STD-1553: Military Aircraft
MIL-STD-1760: Military Aircraft
TTP – Time-Triggered Protocol: Boeing 787, Airbus A380, Fly-By-Wire Actuation Platforms from Parker Aerospace
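To illustrate how word-oriented buses in the list above differ from modern packet networks, the sketch below packs and unpacks the classic ARINC 429 32-bit word: an 8-bit label, 2-bit source/destination identifier (SDI), 19-bit data field, 2-bit sign/status matrix (SSM), and an odd-parity bit. This is a simplified model: real equipment transmits the label bits in reverse order on the wire and interprets the data field differently per label, and the function names here are ours, not from any library.

```python
# Simplified, illustrative model of an ARINC 429 32-bit word; the field
# layout (label 8 / SDI 2 / data 19 / SSM 2 / parity 1) follows the
# standard, but wire-level label bit reversal and per-label data
# encodings are deliberately ignored.

def odd_parity(word31: int) -> int:
    """Parity bit that gives the full 32-bit word an odd number of 1s."""
    return 1 - (bin(word31).count("1") & 1)

def encode_word(label: int, sdi: int, data: int, ssm: int) -> int:
    """Pack the four fields and append the odd-parity bit (bit 32)."""
    assert 0 <= label < 2**8 and 0 <= sdi < 2**2
    assert 0 <= data < 2**19 and 0 <= ssm < 2**2
    word31 = label | (sdi << 8) | (data << 10) | (ssm << 29)
    return word31 | (odd_parity(word31) << 31)

def decode_word(word: int):
    """Check parity, then unpack (label, sdi, data, ssm)."""
    assert bin(word).count("1") & 1, "parity error"
    return (word & 0xFF,             # label (written in octal by convention)
            (word >> 8) & 0x3,       # SDI
            (word >> 10) & 0x7FFFF,  # data field
            (word >> 29) & 0x3)      # SSM

# Round-trip a word with a hypothetical label (205 octal):
w = encode_word(label=0o205, sdi=0, data=12345, ssm=3)
assert decode_word(w) == (0o205, 0, 12345, 3)
```

The fixed 32-bit word with a built-in parity bit is characteristic of these avionics buses; higher-rate systems such as AFDX instead move variable-length Ethernet frames.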
== See also ==
Astrionics, similar, for spacecraft
ACARS – Aircraft digital message communication system
Acronyms and abbreviations in avionics
ARINC – Aeronautical Radio, Incorporated (1929–2018)
Avionics software
DO-178C – International aeronautics software standard
Emergency locator beacon
Emergency position-indicating radiobeacon station
Flight recorder – Aircraft electronic recording device
Integrated modular avionics
== Notes ==
== Further reading ==
Avionics: Development and Implementation by Cary R. Spitzer (Hardcover – December 15, 2006)
Principles of Avionics, 4th Edition by Albert Helfrick, Len Buckwalter, and Avionics Communications Inc. (Paperback – July 1, 2007)
Avionics Training: Systems, Installation, and Troubleshooting by Len Buckwalter (Paperback – June 30, 2005)
Avionics Made Simple, by Mouhamed Abdulla, Ph.D.; Jaroslav V. Svoboda, Ph.D.; and Luis Rodrigues, Ph.D. (Coursepack – Dec. 2005 – ISBN 978-0-88947-908-1)
== External links ==
Avionics in Commercial Aircraft
Aircraft Electronics Association (AEA)
Pilot's Guide to Avionics
The Avionic Systems Standardisation Committee
Space Shuttle Avionics
Aviation Today Avionics magazine
RAES Avionics homepage |
B-2 | The Northrop B-2 Spirit, also known as the Stealth Bomber, is an American heavy strategic bomber, featuring low-observable stealth technology designed to penetrate dense anti-aircraft defenses. A subsonic flying wing with a crew of two, the plane was designed by Northrop (later Northrop Grumman) as the prime contractor, with Boeing, Hughes, and Vought as principal subcontractors, and was produced from 1988 to 2000. The bomber can drop conventional and thermonuclear weapons, such as up to eighty 500-pound class (230 kg) Mk 82 JDAM GPS-guided bombs, or sixteen 2,400-pound (1,100 kg) B83 nuclear bombs. The B-2 is the only acknowledged in-service aircraft that can carry large air-to-surface standoff weapons in a stealth configuration.
Development began under the Advanced Technology Bomber (ATB) project during the Carter administration, which cancelled the Mach 2-capable B-1A bomber in part because the ATB showed such promise. But development difficulties delayed progress and drove up costs. Ultimately, the program produced 21 B-2s at an average cost of $2.13 billion (~$4.17 billion in 2024), including development, engineering, testing, production, and procurement. Building each aircraft cost an average of US$737 million, while total procurement costs (including production, spare parts, equipment, retrofitting, and software support) averaged $929 million (~$1.11 billion in 2023) per plane. The project's considerable capital and operating costs made it controversial in the U.S. Congress even before the winding down of the Cold War dramatically reduced the desire for a stealth aircraft designed to strike deep in Soviet territory. Consequently, in the late 1980s and 1990s lawmakers shrank the planned purchase of 132 bombers to 21.
The B-2 can perform attack missions at altitudes of up to 50,000 feet (15,000 m); it has an unrefueled range of more than 6,000 nautical miles (6,900 mi; 11,000 km) and can fly more than 10,000 nautical miles (12,000 mi; 19,000 km) with one midair refueling. It entered service in 1997 as the second aircraft designed with advanced stealth technology, after the Lockheed F-117 Nighthawk attack aircraft. Primarily designed as a nuclear bomber, the B-2 was first used in combat to drop conventional, non-nuclear ordnance in the Kosovo War in 1999. It was later used in Iraq, Afghanistan, Libya and Yemen.
The United States Air Force has nineteen B-2s in service as of 2024; one was destroyed in a 2008 crash and another one damaged in a crash in 2022 was retired from service likely on account of the cost and duration of a potential repair. The Air Force plans to operate the B-2s until 2032, when the Northrop Grumman B-21 Raider is to replace them.
== Development ==
=== Origins ===
By the mid-1970s, military aircraft designers had learned of a new method to avoid missiles and interceptors, known today as "stealth". The concept was to build an aircraft with an airframe that deflected or absorbed radar signals so that little was reflected back to the radar unit. An aircraft having radar stealth characteristics would be able to fly nearly undetected and could be attacked only by weapons and systems not relying on radar. Although other detection measures existed, such as human observation, infrared scanners, and acoustic locators, their relatively short detection range or poorly developed technology allowed most aircraft to fly undetected, or at least untracked, especially at night.
In 1974, DARPA requested information from U.S. aviation firms about the largest radar cross-section of an aircraft that would remain effectively invisible to radars. Initially, Northrop and McDonnell Douglas were selected for further development. Lockheed had experience in this field with the development of the Lockheed A-12 and SR-71, which included several stealthy features, notably its canted vertical stabilizers, the use of composite materials in key locations, and the overall surface finish in radar-absorbing paint. A key improvement was the introduction of computer models used to predict the radar reflections from flat surfaces where collected data drove the design of a "faceted" aircraft. Development of the first such designs started in 1975 with the Have Blue, a model Lockheed built to test the concept.
Plans were well advanced by the summer of 1975, when DARPA started the Experimental Survivability Testbed project. Northrop and Lockheed were awarded contracts in the first round of testing. Lockheed received the sole award for the second test round in April 1976, leading to the Have Blue program and eventually the F-117 stealth attack aircraft. Northrop also had a classified technology demonstrator, the Tacit Blue, in development at Area 51 from 1979. In its role as a Battlefield Surveillance Aircraft, Experimental, it developed stealth technology, low observables (LO), fly-by-wire, curved surfaces, composite materials, and electronic intelligence. The stealth technology developed from the program was later incorporated into other operational aircraft designs, including the B-2 stealth bomber.
=== ATB program ===
By 1976, these programs had progressed to a point at which a long-range strategic stealth bomber appeared viable. President Jimmy Carter became aware of these developments during 1977, and it appears to have been one of the major reasons the B-1 was canceled. Further studies were ordered in early 1978, by which point the Have Blue platform had flown and proven the concepts. During the 1980 presidential election campaign, Ronald Reagan repeatedly stated that Carter was weak on defense and used the B-1 as a prime example. In response, on 22 August 1980 the Carter administration publicly disclosed that the United States Department of Defense was working to develop stealth aircraft, including a bomber.
The Advanced Technology Bomber (ATB) program began in 1979. Full development of the black project followed, funded under the code name "Aurora". After the evaluations of the companies' proposals, the ATB competition was narrowed to the Northrop/Boeing and Lockheed/Rockwell teams with each receiving a study contract for further work. Both teams used flying wing designs. The Northrop proposal was code named "Senior Ice", and the Lockheed proposal code named "Senior Peg". Northrop had experience developing flying wing aircraft: the YB-35 and YB-49. The Northrop design was larger and had curved surfaces while the Lockheed design was faceted and included a small tail. In 1979, designer Hal Markarian produced a sketch of the aircraft that bore considerable similarities to the final design. The USAF originally planned to procure 165 ATB bombers.
The Northrop team's ATB design was selected over the Lockheed/Rockwell design on 20 October 1981. The Northrop design received the designation B-2 and the name "Spirit". The bomber's design was changed in the mid-1980s when the mission profile was changed from high-altitude to low-altitude, terrain-following. The redesign delayed the B-2's first flight by two years and added about US$1 billion to the program's cost. By 1989, the U.S. had secretly spent an estimated US$23 billion on research and development for the B-2. MIT engineers and scientists helped assess the mission effectiveness of the aircraft under a five-year classified contract during the 1980s. ATB technology was also fed into the Advanced Tactical Fighter program, which would produce the Lockheed YF-22 and Northrop YF-23, and later the Lockheed Martin F-22. Northrop was the B-2's prime contractor; major subcontractors included Boeing, Hughes Aircraft (now Raytheon), GE, and Vought Aircraft.
=== Secrecy and espionage ===
During its design and development, the Northrop B-2 program was a black project; all program personnel needed a secret clearance. Still, it was less closely held than the Lockheed F-117 program; more people in the federal government knew about the B-2, and more information about the project was available. Both during development and in service, considerable effort has been devoted to maintaining the security of the B-2's design and technologies. Staff working on the B-2 in most, if not all, capacities need a level of special-access clearance and undergo extensive background checks carried out by a special branch of the USAF.
A former Ford automobile assembly plant in Pico Rivera, California, was acquired and heavily rebuilt; the plant's employees were sworn to secrecy. To avoid suspicion, components were typically purchased through front companies, military officials would visit out of uniform, and staff members were routinely subjected to polygraph examinations. Nearly all information on the program was kept from the Government Accountability Office (GAO) and members of Congress until the mid-1980s.
The B-2 was first publicly displayed on 22 November 1988 at United States Air Force Plant 42 in Palmdale, California, where it was assembled. This viewing was heavily restricted, and guests were not allowed to see the rear of the B-2. However, Aviation Week editors found that there were no airspace restrictions above the presentation area and took aerial photographs of the aircraft's secret rear section with suppressed engine exhausts. The B-2's (s/n 82-1066 / AV-1) first public flight was on 17 July 1989 from Palmdale to Edwards Air Force Base.
In 1984, Northrop employee Thomas Patrick Cavanagh was arrested for attempting to sell classified information from the Pico Rivera factory to the Soviet Union. Cavanagh was sentenced to life in prison in 1985 but released on parole in 2001. In October 2005, Noshir Gowadia, a design engineer who worked on the B-2's propulsion system, was arrested for selling classified information to China. Gowadia was convicted and sentenced to 32 years in prison.
=== Program costs and procurement ===
A procurement of 132 aircraft was planned in the mid-1980s but was later reduced to 75. By the early 1990s, the Soviet Union had dissolved, effectively eliminating the Spirit's primary Cold War mission. Under budgetary pressure and Congressional opposition, in his 1992 State of the Union address, President George H. W. Bush announced that B-2 production would be limited to 20 aircraft. In 1996, however, the Clinton administration, though originally committed to ending production of the bombers at 20 aircraft, authorized the conversion of a 21st bomber, a prototype test model, to Block 30 fully operational status at a cost of nearly $500 million (~$897 million in 2023). In 1995, Northrop made a proposal to the USAF to build 20 additional aircraft with a flyaway cost of $566 million each.
The program was the subject of public controversy for its cost to American taxpayers. In 1996, the GAO disclosed that the USAF's B-2 bombers "will be, by far, the costliest bombers to operate on a per aircraft basis", costing over three times as much as the B-1B (US$9.6 million annually) and over four times as much as the B-52H (US$6.8 million annually). In September 1997, each hour of B-2 flight necessitated 119 hours of maintenance. Comparable maintenance needs for the B-52 and the B-1B are 53 and 60 hours, respectively, for each hour of flight. A key reason for this cost is the provision of air-conditioned hangars large enough for the bomber's 172 ft (52 m) wingspan, which are needed to maintain the aircraft's stealth properties, particularly its "low-observable" stealth skins. Maintenance costs are about $3.4 million per month for each aircraft. An August 1995 GAO report disclosed that the B-2 had trouble operating in heavy rain, as rain could damage the aircraft's stealth coating, causing procurement delays until an adequate protective coating could be found. In addition, the B-2's terrain-following/terrain-avoidance radar had difficulty distinguishing rain from other obstacles, rendering the subsystem inoperable during rain. However, a subsequent report in October 1996 noted that the USAF had made some progress in resolving the issues with the radar via software fixes and hoped to have these fixes undergoing tests by the spring of 1997.
The total "military construction" cost related to the program was projected to be US$553.6 million in 1997 dollars. The cost to procure each B-2 was US$737 million in 1997 dollars (equivalent to US$1.3 billion in 2021), based only on a fleet cost of US$15.48 billion. The procurement cost per aircraft, as detailed in GAO reports, which include spare parts and software support, was $929 million per aircraft in 1997 dollars.
The total program cost projected through 2004 was US$44.75 billion in 1997 dollars (equivalent to US$79 billion in 2021). This includes development, procurement, facilities, construction, and spare parts. The total program cost averaged US$2.13 billion per aircraft. The B-2 may cost up to $135,000 per flight hour to operate in 2010, which is about twice that of the B-52 and B-1.
=== Opposition ===
In its consideration of the fiscal year 1990 defense budget, the House Armed Services Committee trimmed $800 million from the B-2 research and development budget, while at the same time staving off a motion to end the project. Opposition in committee and in Congress was broad and bipartisan: Congressmen Ron Dellums (D-CA), John Kasich (R-OH), and John G. Rowland (R-CT) authored the motion to end the project, and Senators Jim Exon (D-NE) and John McCain (R-AZ) also opposed it. Dellums and Kasich, in particular, worked together from 1989 through the early 1990s to limit production to 21 aircraft and were ultimately successful.
The escalating cost of the B-2 program and evidence of flaws in the aircraft's ability to elude detection by radar were among factors that drove opposition to continue the program. At the peak production period specified in 1989, the schedule called for spending US$7 billion to $8 billion per year in 1989 dollars, something Committee Chair Les Aspin (D-WI) said "won't fly financially". In 1990, the Department of Defense accused Northrop of using faulty components in the flight control system; it was also found that redesign work was required to reduce the risk of damage to engine fan blades by bird ingestion.
In time, several prominent members of Congress began to oppose the program's expansion, including Senator John Kerry (D-MA), who cast votes against the B-2 in 1989, 1991, and 1992. By 1992, Bush had called for the cancellation of the B-2 and promised to cut military spending by 30% in the wake of the collapse of the Soviet Union. In October 1995, former Chief of Staff of the United States Air Force, General Mike Ryan, and former chairman of the Joint Chiefs of Staff, General John Shalikashvili, strongly recommended against Congressional action to fund the purchase of any additional B-2s, arguing that to do so would require unacceptable cuts in existing conventional and nuclear-capable aircraft, and that the military had greater priorities in spending a limited budget.
Some B-2 advocates argued that procuring twenty additional aircraft would save money because B-2s would be able to deeply penetrate anti-aircraft defenses and use low-cost, short-range attack weapons rather than expensive standoff weapons. However, in 1995, the Congressional Budget Office (CBO) and its Director of National Security Analysis found that additional B-2s would reduce the cost of expended munitions by less than US$2 billion in 1995 dollars during the first two weeks of a conflict, in which the USAF predicted bombers would make their greatest contribution; this was a small fraction of the US$26.8 billion (in 1995 dollars) life cycle cost that the CBO projected for an additional 20 B-2s.
In 1997, as Ranking Member of the House Armed Services Committee and National Security Committee, Congressman Ron Dellums (D-CA), a long-time opponent of the bomber, cited five independent studies and offered an amendment to that year's defense authorization bill to cap production of the bombers to the existing 21 aircraft; the amendment was narrowly defeated. Nonetheless, Congress did not approve funding for additional B-2s.
=== Further developments ===
Several upgrade packages have been applied to the B-2. In July 2008, the B-2's onboard computing architecture was extensively redesigned; it now incorporates a new integrated processing unit that communicates with systems throughout the aircraft via a newly installed fiber optic network; a new version of the operational flight program software was also developed, with legacy code converted from the JOVIAL programming language to standard C. Updates were also made to the weapon control systems to enable strikes upon moving targets, such as ground vehicles.
On 29 December 2008, USAF officials awarded a US$468 million contract to Northrop Grumman to modernize the B-2 fleet's radars. Changing the radar's frequency was required as the United States Department of Commerce had sold that radio spectrum to another operator. In July 2009, it was reported that the B-2 had successfully passed a major USAF audit. In 2010, it was made public that the Air Force Research Laboratory had developed a new material to be used on the part of the wing trailing edge subject to engine exhaust, replacing existing material that quickly degraded.
In July 2010, political analyst Rebecca Grant speculated that when the B-2 becomes unable to reliably penetrate enemy defenses, the Lockheed Martin F-35 Lightning II may take on its strike/interdiction mission, carrying B61 nuclear bombs as a tactical bomber. However, in March 2012, the Pentagon announced that a $2 billion, 10-year-long modernization of the B-2 fleet was to begin. The main area of improvement would be replacement of outdated avionics and equipment. Modernization efforts have likely continued in secret, as alluded to by a B-2 commander from Whiteman Air Force Base in April 2021, possibly indicating offensive weapons capability against threatening air defenses and aircraft. He stated: "Without getting into specifics, and without getting into things that we frankly just don't discuss in open channels, I will tell you that our current bomber fleet, and this is all of them, we use some pretty innovative ways to integrate modern weapons capabilities to have us both maintain and increase our survivability. And for the B-2 specifically, the expansion of some of our strike capabilities allow us to increase our survivability beyond the fighter escort realm. Now the B-2 fleet is continuing to do that technological advancement, and that's enabled us to expand our strike capabilities, as well. Although we've been around for over 30 years, there's a lot of life left in this platform, and up until the B-21 is well on the scene and doing its job, this aircraft will continue to be at the forefront of our country and our nation's defense... and with these, and continued innovative upgrades, and weapons system capabilities, we will continue to do that until the last jet flies off the ramp into retirement."
It was reported in 2011 that the Pentagon was evaluating an unmanned stealth bomber, characterized as a "mini-B-2", as a potential replacement in the near future. In 2012, USAF Chief of Staff General Norton Schwartz stated that the B-2's 1980s-era stealth technologies would make it less survivable in future contested airspaces, so the USAF was to proceed with the Next-Generation Bomber despite overall budget cuts. Projections in 2012 estimated that the Next-Generation Bomber would have an overall cost of $55 billion.
In 2013, the USAF contracted for the Defensive Management System Modernization (DMS-M) program to replace the antenna system and other electronics to increase the B-2's frequency awareness. The Common Very Low Frequency Receiver upgrade allows the B-2s to use the same very low frequency transmissions as the Ohio-class submarines so as to continue in the nuclear mission until the Mobile User Objective System is fielded. In 2014, the USAF outlined a series of upgrades including nuclear warfighting, a new integrated processing unit, the ability to carry cruise missiles, and threat warning improvements. Due to ongoing software challenges, DMS-M was canceled by 2020, and the existing work was repurposed for cockpit upgrades.
In 1998, a Congressional panel advised the USAF to refocus resources away from continued B-2 production and instead begin development of a new bomber, either a new build or a variant of the B-2. In its 1999 bomber roadmap the USAF eschewed the panel's recommendations, believing its current bomber fleet could be maintained until the 2030s. The service believed that development could begin in 2013, in time to replace aging B-2s, B-1s and B-52s around 2037.
Although the USAF previously planned to operate the B-2 until 2058, the FY 2019 budget moved up its retirement to "no later than 2032". It also moved the retirement of the B-1 to 2036 while extending the B-52's service life into the 2050s, because the B-52 has lower maintenance costs, versatile conventional payload, and the ability to carry nuclear cruise missiles (which the B-1 is treaty-prohibited from doing). The decision to retire the B-2 early was made because the small fleet of 20 is considered too expensive per plane to retain, with its position as a stealth bomber being taken over with the introduction of the B-21 Raider starting in the mid-2020s.
== Design ==
=== Overview ===
The B-2 Spirit was developed to take over the USAF's vital penetration missions, allowing it to travel deep into enemy territory to deploy ordnance, which could include nuclear weapons. The B-2 is a flying wing aircraft, meaning that it has no fuselage or tail. It has significant advantages over previous bombers due to its blend of low-observable technologies with high aerodynamic efficiency and a large payload. Low observability provides greater freedom of action at high altitudes, thus increasing both range and field of view for onboard sensors. The USAF reports its range as approximately 6,000 nautical miles (6,900 mi; 11,000 km). At cruising altitude, the B-2 refuels every six hours, taking on up to 50 short tons (45,000 kg) of fuel at a time.
The development and construction of the B-2 required pioneering use of computer-aided design and manufacturing technologies due to its complex flight characteristics and design requirements to maintain very low visibility to multiple means of detection. The B-2 bears a resemblance to earlier Northrop aircraft; the YB-35 and YB-49 were both flying wing bombers that had been canceled in development in the early 1950s, allegedly for political reasons. The resemblance goes as far as B-2 and YB-49 having the same wingspan. The YB-49 also had a small radar cross-section.
Approximately 80 pilots fly the B-2. Each aircraft has a crew of two, a pilot in the left seat and mission commander in the right, and has provisions for a third crew member if needed. For comparison, the B-1B has a crew of four and the B-52 has a crew of five. The B-2 is highly automated and, unlike most two-seat aircraft, one crew member can sleep in a camp bed, use a toilet, or prepare a hot meal while the other monitors the aircraft. Extensive sleep cycle and fatigue research was conducted to improve crew performance on long sorties. Advanced training is conducted at the USAF Weapons School.
=== Armaments and equipment ===
In the envisaged Cold War scenario, the B-2 was to perform deep-penetrating nuclear strike missions, making use of its stealth to avoid detection and interception throughout the mission. There are two internal bomb bays in which munitions are stored either on a rotary launcher or on two bomb racks; carrying the weapon loadout internally results in less radar visibility than mounting munitions externally. The B-2 is capable of carrying 40,000 lb (18,000 kg) of ordnance. Nuclear ordnance includes the B61 and B83 nuclear bombs; the AGM-129 ACM cruise missile was also intended for use on the B-2 platform.
In light of the dissolution of the Soviet Union, it was decided to equip the B-2 for conventional precision attacks as well as for its strategic nuclear-strike role. The B-2 features a sophisticated GPS-Aided Targeting System (GATS) that uses the aircraft's APQ-181 synthetic aperture radar to map out targets prior to the deployment of GPS-aided bombs (GAMs), later superseded by the Joint Direct Attack Munition (JDAM). In the B-2's original configuration, up to 16 GAMs or JDAMs could be deployed; an upgrade program in 2004 raised the maximum capacity to 80 JDAMs.
The B-2 has various conventional weapons in its arsenal, including Mark 82 and Mark 84 bombs, CBU-87 Combined Effects Munitions, GATOR mines, and the CBU-97 Sensor Fuzed Weapon. In July 2009, Northrop Grumman reported the B-2 was compatible with the equipment necessary to deploy the 30,000 lb (14,000 kg) Massive Ordnance Penetrator (MOP), which is intended to attack reinforced bunkers; up to two MOPs can be carried in the B-2's bomb bays, one per bay, and the B-2 was the only platform compatible with the MOP as of 2012. As of 2011, the AGM-158 JASSM cruise missile was an upcoming standoff munition to be deployed on the B-2 and other platforms. This is to be followed by the Long Range Standoff Weapon, which may give the B-2 standoff nuclear capability for the first time.
=== Avionics and systems ===
To make the B-2 more effective than previous bombers, many advanced and modern avionics systems were integrated into its design; these have been modified and improved following a switch to conventional warfare missions. These systems include the low-probability-of-intercept AN/APQ-181 multi-mode radar; a fully digital navigation system integrated with terrain-following radar and Global Positioning System (GPS) guidance; the NAS-26 astro-inertial navigation system (the first such system was tested on the Northrop SM-62 Snark cruise missile); and a Defensive Management System (DMS) to inform the flight crew of possible threats. The onboard DMS is capable of automatically assessing the detection capabilities of identified threats and indicated targets. The DMS is to be upgraded by 2021 to detect radar emissions from air defenses, allowing the auto-router's mission-planning information to be updated in flight so that a route minimizing exposure to threats can be quickly replanned.
For safety and fault-detection purposes, an on-board test system is linked with the majority of avionics on the B-2 to continuously monitor the performance and status of thousands of components and consumables; it also provides post-mission servicing instructions for ground crews. In 2008, many of the 136 standalone distributed computers on board the B-2, including the primary flight management computer, were being replaced by a single integrated system. The avionics are controlled by 13 EMP-resistant MIL-STD-1750A computers, which are interconnected through 26 MIL-STD-1553B buses; other system elements are connected via optical fiber.
In addition to periodic software upgrades and the introduction of new radar-absorbent materials across the fleet, the B-2 has had several major upgrades to its avionics and combat systems. For battlefield communications, both Link-16 and a high frequency satellite link have been installed, compatibility with various new munitions has been undertaken, and the AN/APQ-181 radar's operational frequency was shifted to avoid interference with other operators' equipment. The upgraded radar's arrays were entirely replaced, making the AN/APQ-181 an active electronically scanned array (AESA) radar. Due to the B-2's composite structure, it is required to stay 40 miles (64 km) away from thunderstorms to avoid static discharge and lightning strikes.
=== Flight controls ===
To address the inherent flight instability of a flying wing aircraft, the B-2 uses a complex quadruplex computer-controlled fly-by-wire flight control system that can automatically manipulate flight surfaces and settings without direct pilot inputs to maintain aircraft stability. The flight computer receives information on external conditions such as the aircraft's current air speed and angle of attack via pitot-static sensing plates, as opposed to traditional pitot tubes which would impair the aircraft's stealth capabilities. The flight actuation system incorporates both hydraulic and electrical servoactuated components, and it was designed with a high level of redundancy and fault-diagnostic capabilities.
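The B-2's actual control laws and voting logic are classified, but the redundancy principle behind a quadruplex fly-by-wire system can be illustrated with a generic mid-value-select voter; all names and figures below are illustrative and not taken from the B-2:

```python
def mid_value_select(channels):
    """Fault-tolerant signal selection: with four redundant channels,
    discard the highest and lowest readings and average the middle two,
    so a single failed channel cannot steer the output."""
    if len(channels) != 4:
        raise ValueError("quadruplex voter expects four channels")
    s = sorted(channels)
    return (s[1] + s[2]) / 2

# Three healthy channels agree near 5.0 degrees; one has failed hard-over.
print(mid_value_select([5.1, 4.9, 5.0, 45.0]))  # ~5.05
```

Schemes of this kind are common in redundant flight control systems because they tolerate any single-channel failure without needing to identify which channel failed.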
Northrop had investigated several means of applying directional control that would compromise the aircraft's radar profile as little as possible, eventually settling on a combination of split brake-rudders and differential thrust. Engine thrust became a key element of the B-2's aerodynamic design process early on; thrust not only affects drag and lift but pitching and rolling motions as well. Four pairs of control surfaces are located along the wing's trailing edge; while most surfaces are used throughout the aircraft's flight envelope, the inner elevons are normally only in use at slow speeds, such as landing. To avoid potential contact damage during takeoff and to provide a nose-down pitching attitude, all of the elevons remain drooped during takeoff until a high enough airspeed has been attained.
=== Stealth ===
The B-2's low-observable, or "stealth", characteristics enable it to penetrate sophisticated anti-aircraft defenses undetected and to attack even heavily defended targets. This stealth comes from a combination of reduced acoustic, infrared, visual, and radar signatures (multi-spectral camouflage) to evade the various systems that could detect and direct attacks against an aircraft. The B-2's stealth reduces the number of supporting aircraft required to provide air cover, Suppression of Enemy Air Defenses, and electronic countermeasures, making the bomber a "force multiplier". As of September 2013, there had been no instances of a missile being launched at a B-2.
To reduce optical visibility during daylight flights, the B-2 is painted in an anti-reflective paint. Its undersides are dark because it flies at high altitudes (50,000 ft (15,000 m)), where a dark grey finish blends well into the sky. It is speculated to have an upward-facing light sensor which alerts the pilot to increase or reduce altitude to match the changing illuminance of the sky. The original design had tanks for a contrail-inhibiting chemical, but this was replaced in production aircraft by a contrail sensor that alerts the crew when they should change altitude. The B-2 is vulnerable to visual interception at ranges of 20 nmi (23 mi; 37 km) or less. Each B-2 is stored in a $5 million specialized air-conditioned hangar to maintain its stealth coating. Every seven years, this coating is carefully washed away with crystallized wheat starch so that the B-2's surfaces can be inspected for dents or scratches.
==== Radar ====
The B-2's clean, low-drag flying wing configuration not only provides exceptional range but also helps reduce its radar profile. Reportedly, the B-2 has a radar cross-section (RCS) of about 0.1 m2 (1.1 sq ft). The bomber does not always fly stealthily; when nearing air defenses, pilots "stealth up" the B-2, a maneuver whose details are secret. The aircraft is stealthy, except briefly when the bomb bay opens. The flying wing design most closely resembles a so-called infinite flat plate (vertical control surfaces dramatically increase RCS), the perfect stealth shape, as it lacks angles to reflect radar waves back toward the emitter (initially, the shape of the Northrop ATB concept was flatter; it gradually gained volume to meet specific military requirements). Without vertical surfaces to reflect radar laterally, the side-aspect radar cross-section is also reduced. Radars operating in a lower frequency band (S or L band) are able to detect and track certain stealth aircraft that have multiple control surfaces, such as canards or vertical stabilizers, when the wavelength exceeds a certain threshold and causes a resonant effect.
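The tactical value of a small RCS follows from the standard radar range equation: for a given radar, detection range scales with the fourth root of the target's cross-section. A quick sketch of the scaling; the 100 m2 reference figure for a conventional bomber is an assumed round number for illustration, not a value from this article:

```python
def relative_detection_range(rcs_target, rcs_reference):
    """Radar range equation: for fixed radar power and sensitivity,
    detection range scales with the fourth root of radar cross-section."""
    return (rcs_target / rcs_reference) ** 0.25

# Reported B-2 RCS of ~0.1 m^2 vs an assumed ~100 m^2 conventional bomber:
# a thousandfold RCS reduction shrinks detection range only to about 18%,
# which is why shaping gains of this magnitude are needed at all.
print(round(relative_detection_range(0.1, 100.0), 3))  # ~0.178
```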
RCS reduction as a result of shape had already been observed on the Royal Air Force's Avro Vulcan strategic bomber, and the USAF's F-117 Nighthawk. The F-117 used flat surfaces (the faceting technique) to control radar returns because, during its development (see Lockheed Have Blue) in the early 1970s, technology only allowed for the simulation of radar reflections on simple, flat surfaces; computing advances in the 1980s made it possible to simulate radar returns on more complex curved surfaces. The B-2 is composed of many curved and rounded surfaces across its exposed airframe to deflect radar beams. This technique, known as continuous curvature, was made possible by those advances in computing, and was first tested on the Northrop Tacit Blue.
==== Infrared ====
Some analysts claim infrared search and track (IRST) systems can be deployed against stealth aircraft, because any aircraft surface heats up due to air friction, and a two-channel IRST can detect the CO2 in engine exhaust (absorption maximum at 4.3 μm) by comparing the difference between the low and high channels.
Burying engines deep inside the fuselage also minimizes the thermal visibility or infrared signature of the exhaust. At the engine intake, cold air from the boundary layer below the main inlet enters the fuselage (boundary layer suction, first tested on the Northrop X-21) and is mixed with hot exhaust air just before the nozzles (similar to the Ryan AQM-91 Firefly). According to the Stefan–Boltzmann law, this results in less energy (thermal radiation in the infrared spectrum) being released and thus a reduced heat signature. The resulting cooler air is conducted over a surface composed of heat-resistant carbon-fiber-reinforced polymer and titanium alloy elements, which disperse the air laterally, to accelerate its cooling. The B-2 lacks afterburners as the hot exhaust would increase the infrared signature; breaking the sound barrier would produce an obvious sonic boom as well as aerodynamic heating of the aircraft skin which would also increase the infrared signature.
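The effect of exhaust mixing can be quantified with the Stefan–Boltzmann law cited above: radiated power grows with the fourth power of absolute temperature, so even moderate cooling cuts infrared emission sharply. The temperatures below are purely illustrative, not B-2 figures:

```python
def radiated_power_ratio(t_cooled_k, t_hot_k):
    """Stefan-Boltzmann law: radiated power per unit area is
    proportional to absolute temperature to the fourth power,
    so the ratio of powers is (T_cooled / T_hot) ** 4."""
    return (t_cooled_k / t_hot_k) ** 4

# Illustrative only: halving exhaust temperature from 900 K to 450 K
# cuts thermal radiation sixteenfold.
print(radiated_power_ratio(450.0, 900.0))  # 0.0625
```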
==== Materials ====
According to the Huygens–Fresnel principle, even a very flat plate would still reflect radar waves, though much less than when a signal is bouncing at a right angle. Additional reduction in its radar signature was achieved by the use of various radar-absorbent materials (RAM) to absorb and neutralize radar beams. The majority of the B-2 is made out of a carbon-graphite composite material that is stronger than steel, lighter than aluminum, and absorbs a significant amount of radar energy.
The B-2 is assembled with unusually tight engineering tolerances to avoid leaks as they could increase its radar signature. Innovations such as alternate high frequency material (AHFM) and automated material application methods were also incorporated to improve the aircraft's radar-absorbent properties and reduce maintenance requirements. In early 2004, Northrop Grumman began applying a newly developed AHFM to operational B-2s. To protect the operational integrity of its sophisticated radar absorbent material and coatings, each B-2 is kept inside a climate-controlled hangar (Extra Large Deployable Aircraft Hangar System) large enough to accommodate its 172-foot (52 m) wingspan.
==== Shelter system ====
B-2s are supported by portable, environmentally-controlled hangars called B-2 Shelter Systems (B2SS). The hangars are built by American Spaceframe Fabricators Inc. and cost approximately US$5 million apiece. The need for specialized hangars arose in 1998 when it was found that B-2s passing through Andersen Air Force Base did not have the climate-controlled environment required for maintenance operations. In 2003, the B2SS program was managed by the Combat Support System Program Office at Eglin Air Force Base. B2SS hangars are known to have been deployed to Naval Support Facility Diego Garcia and RAF Fairford.
== Operational history ==
=== 1990s ===
The first operational aircraft, christened Spirit of Missouri, was delivered to Whiteman Air Force Base, Missouri, where the fleet is based, on 17 December 1993. The B-2 reached initial operational capability (IOC) on 1 January 1997. Depot maintenance for the B-2 is accomplished by USAF contractor support and managed at Oklahoma City Air Logistics Center at Tinker Air Force Base. Originally designed to deliver nuclear weapons, modern usage has shifted towards a flexible role with conventional and nuclear capability.
The B-2's combat debut was in 1999, during the Kosovo War. It was responsible for destroying 33% of selected Serbian bombing targets in the first eight weeks of U.S. involvement in the war. During this war, six B-2s flew non-stop to Yugoslavia from their home base in Missouri and back, totaling 30 hours. Although the bombers accounted for only 50 sorties out of a total of 34,000 NATO sorties, they dropped 11 percent of all bombs. The B-2 was the first aircraft to deploy GPS satellite-guided JDAM "smart bombs" in combat, in Kosovo. The use of JDAMs and precision-guided munitions effectively replaced the controversial tactic of carpet-bombing, which had been harshly criticized for causing indiscriminate civilian casualties in prior conflicts, such as the 1991 Gulf War. On 7 May 1999, a B-2 erroneously dropped five JDAMs on the Chinese Embassy due to an error in targeting instructions, killing three people and injuring 20. By then, the B-2 had dropped 500 bombs in Yugoslavia.
=== 2000s ===
The B-2 bombed ground targets at the beginning of the War in Afghanistan (2001–2021) (Operation Crescent Wind/Operation Enduring Freedom). With aerial refueling support, the B-2 flew one of its longest missions to date from Whiteman Air Force Base in Missouri to Afghanistan and back. B-2s would be stationed in the Middle East as a part of a US military buildup in the region from 2003.
The B-2's combat use preceded a USAF declaration of "full operational capability" in December 2003. The Pentagon's Operational Test and Evaluation 2003 Annual Report noted that the B-2's serviceability for Fiscal Year 2003 was still inadequate, mainly due to the maintainability of the B-2's low observable coatings. The evaluation also noted that the Defensive Avionics suite had shortcomings with "pop-up threats".
During the Iraq War, B-2s operated from Diego Garcia and an undisclosed "forward operating location". Other sorties in Iraq have launched from Whiteman AFB. As of September 2013 the longest combat mission has been 44.3 hours. "Forward operating locations" have been previously designated as Andersen Air Force Base in Guam and RAF Fairford in the United Kingdom, where new climate controlled hangars have been constructed. B-2s have conducted 27 sorties from Whiteman AFB and 22 sorties from a forward operating location, releasing more than 1,500,000 pounds (680,000 kg) of munitions, including 583 JDAM "smart bombs" in 2003.
=== 2010s ===
In response to organizational issues and high-profile mistakes made within the USAF, all of the B-2s, along with the nuclear-capable B-52s and the USAF's intercontinental ballistic missiles (ICBMs), were transferred to the newly formed Air Force Global Strike Command on 1 February 2010.
In March 2011, B-2s were the first U.S. aircraft into action in Operation Odyssey Dawn, the UN-mandated enforcement of the Libyan no-fly zone. Three B-2s dropped 40 bombs on a Libyan airfield in support of the no-fly zone. The B-2s flew directly from the U.S. mainland across the Atlantic Ocean to Libya; each B-2 was refueled by allied tanker aircraft four times during its round-trip mission.
In August 2011, The New Yorker reported that prior to the May 2011 U.S. Special Operations raid into Abbottabad, Pakistan that resulted in the death of Osama bin Laden, U.S. officials had considered an airstrike by one or more B-2s as an alternative; the use of a bunker busting bomb was rejected due to potential damage to nearby civilian buildings. There were also concerns an airstrike would make it difficult to positively identify Bin Laden's remains, making it hard to confirm his death.
On 28 March 2013, two B-2s flew a round trip of 13,000 miles (21,000 km) from Whiteman Air Force base in Missouri to South Korea, dropping dummy ordnance on the Jik Do target range. The mission, part of the annual South Korean–U.S. military exercises, was the first time that B-2s overflew the Korean Peninsula. Tensions between the Koreas were high; North Korea protested against the B-2's participation and made threats of retaliatory nuclear strikes against South Korea and the United States.
On 18 January 2017, two B-2s attacked an ISIS training camp 19 miles (30 km) southwest of Sirte, Libya, killing around 85 militants. The B-2s together dropped 108 500-pound (230 kg) precision-guided Joint Direct Attack Munition (JDAM) bombs. These strikes were followed by an MQ-9 Reaper unmanned aerial vehicle firing Hellfire missiles. Each B-2 flew a 33-hour, round-trip mission from Whiteman Air Force Base, Missouri with four or five (accounts differ) refuelings during the trip.
=== 2020s ===
On 16 October 2024, B-2As carried out strikes on weapons storage facilities in Yemen, including underground facilities owned by the Houthis. Five hardened underground weapons storage locations were struck as part of the campaign against the Houthis for attacking international shipping during the Red Sea crisis. It was believed the strikes also served as a warning to Iran, demonstrating the stealth bomber's ability to destroy targets buried underground. RAAF Base Tindal in the Northern Territory, Australia was used as a staging ground for the strikes.
== Operators ==
United States Air Force (19 aircraft in active inventory)
Air Force Global Strike Command
509th Bomb Wing – Whiteman Air Force Base, Missouri (18 B-2s)
13th Bomb Squadron 2005–present
325th Bomb Squadron 1998–2005
393rd Bomb Squadron 1993–present
394th Combat Training Squadron 1996–2018
Air Combat Command
53rd Wing – Eglin Air Force Base, Florida
72nd Test and Evaluation Squadron (Whiteman AFB, Missouri) 1998–present
57th Wing – Nellis AFB, Nevada
325th Weapons Squadron – Whiteman AFB, Missouri 2005–present
715th Weapons Squadron 2003–2005
Air National Guard
131st Bomb Wing (Associate) – Whiteman AFB, Missouri 2009–present
110th Bomb Squadron
Air Force Materiel Command
412th Test Wing – Edwards Air Force Base, California (has one B-2)
419th Flight Test Squadron 1997–present
420th Flight Test Squadron 1992–present
Air Force Systems Command
6510th Test Wing – Edwards AFB, California 1989–1992
6520th Flight Test Squadron
== Accidents and incidents ==
On 23 February 2008, B-2 "AV-12" Spirit of Kansas crashed on the runway shortly after takeoff from Andersen Air Force Base in Guam. Spirit of Kansas had been operated by the 393rd Bomb Squadron, 509th Bomb Wing, Whiteman Air Force Base, Missouri, and had logged 5,176 flight hours. The two-person crew ejected safely from the aircraft. The aircraft was destroyed, a hull loss valued at US$1.4 billion. After the accident, the USAF took the B-2 fleet off operational status for 53 days, returning on 15 April 2008. The cause of the crash was later determined to be moisture in the aircraft's Port Transducer Units during air data calibration, which distorted the information being sent to the bomber's air data system. As a result, the flight control computers calculated an inaccurate airspeed, and a negative angle of attack, causing the aircraft to pitch upward 30 degrees during takeoff. This was the first crash and loss of a B-2.
In February 2010, a serious incident involving a B-2 occurred at Andersen Air Force Base in Guam. The aircraft involved was AV-11 Spirit of Washington. The aircraft was severely damaged by fire while on the ground and underwent 18 months of repairs to enable it to fly back to the mainland U.S. for more comprehensive repairs. Spirit of Washington was repaired and returned to service in December 2013. At the time of the accident, the USAF had no training to deal with tailpipe fires on the B-2s.
On the night of 13–14 September 2021, B-2 Spirit of Georgia made an emergency landing at Whiteman AFB. The aircraft landed and went off the runway into the grass and came to rest on its left side. The cause was later determined to be faulty landing gear springs and "microcracking" in hydraulic connections on the aircraft. The lock link springs in the left landing gear had likely not been replaced in at least a decade, and produced about 11% less tension than specified. The "microcracking" reduced hydraulic support to the landing gear. These problems allowed the landing gear to fold upon landing. The accident resulted in a minimum of $10.1 million in repair damages, but the final repair cost was still being determined in March 2022.
On 10 December 2022, an in-flight malfunction aboard a B-2 forced an emergency landing at Whiteman AFB. No personnel, including the flight crew, sustained injuries during the incident; there was a post-crash fire that was quickly put out. Subsequently, all B-2s were grounded. On 18 May 2023, Air Force officials lifted the grounding without disclosing any details about what caused the incident or what steps had been taken to return the aircraft to operation. In May 2024, the Air Force announced that the damaged B-2 would be divested, as it had been deemed "uneconomical to repair." Although no cost estimate was provided, the decision was likely influenced by the coming introduction of the B-21 bomber. After the B-2 fire in 2010, it had taken almost four years and over $100 million to return that aircraft to service, an effort justified by the need to preserve one of the few penetrating bombers in the inventory; with the B-21 approaching and the B-2 due for retirement sometime after 2029, USAF leaders likely decided it was not worth the expense to repair an aircraft that would soon be retired.
== Aircraft on display ==
No operational B-2s have been retired by the Air Force to be put on display. B-2s have made occasional appearances on ground display at various air shows.
B-2 test article (s/n AT-1000), the second of two built without engines or instruments and used for static testing, was placed on display in 2004 at the National Museum of the United States Air Force near Dayton, Ohio. The test article passed all structural testing requirements before the airframe failed. The museum's restoration team spent over a year reassembling the fractured airframe. The display airframe is marked to resemble Spirit of Ohio (S/N 82-1070), the B-2 used to test the design's ability to withstand extreme heat and cold. The exhibit features Spirit of Ohio's nose wheel door, with its Fire and Ice artwork, which was painted and signed by the technicians who performed the temperature testing. The restored test aircraft is on display in the museum's "Cold War Gallery".
== Specifications (B-2A Block 30) ==
Data from USAF Fact Sheet, Pace, Spick, Northrop Grumman
General characteristics
Crew: 2: pilot (left seat) and mission commander (right seat)
Length: 69 ft 0 in (21.0 m)
Wingspan: 172 ft 0 in (52.4 m)
Height: 17 ft 0 in (5.18 m)
Wing area: 5,140 sq ft (478 m2)
Empty weight: 158,000 lb (71,700 kg)
Gross weight: 336,500 lb (152,200 kg)
Max takeoff weight: 376,000 lb (170,600 kg)
Fuel capacity: 167,000 pounds (75,750 kg)
Powerplant: 4 × General Electric F118-GE-100 non-afterburning turbofans, 17,300 lbf (77 kN) thrust each
Performance
Maximum speed: 630 mph (1,010 km/h, 550 kn) at 40,000 ft (12,000 m) altitude / Mach 0.95 at sea level
Cruise speed: 560 mph (900 km/h, 487 kn) at 40,000 ft (12,000 m) altitude
Range: 6,900 mi (11,000 km, 6,000 nmi)
Service ceiling: 50,000 ft (15,200 m)
Wing loading: 67.3 lb/sq ft (329 kg/m2)
Thrust/weight: 0.205
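As an arithmetic cross-check, the quoted thrust-to-weight ratio can be recomputed from the listed engine thrust and gross weight (figures taken directly from this spec sheet):

```python
# Four F118-GE-100 engines at 17,300 lbf each, against the listed
# 336,500 lb gross weight.
total_thrust_lbf = 4 * 17_300       # 69,200 lbf total
gross_weight_lb = 336_500
ratio = total_thrust_lbf / gross_weight_lb
print(round(ratio, 3))  # ~0.206, matching the quoted 0.205 to within rounding
```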
Armament
2 internal bays for ordnance and payload with an official limit of 40,000 lb (18,000 kg); maximum estimated limit is 50,000 lb (23,000 kg)
80× 500 lb (230 kg) class bombs (Mk-82, GBU-38) mounted on Bomb Rack Assembly (BRA)
36× 750 lb (340 kg) CBU class bombs on BRA
16× 2,000 lb (910 kg) class bombs (Mk-84, GBU-31) mounted on Rotary Launcher Assembly (RLA)
16× B61 or B83 nuclear bombs on RLA (strategic mission)
Standoff weapon: AGM-154 Joint Standoff Weapon (JSOW) and AGM-158 Joint Air-to-Surface Standoff Missile (JASSM)
2× GBU-57 Massive Ordnance Penetrator
== Individual aircraft ==
== Notable appearances in media ==
== See also ==
Northrop YB-49
Northrop Grumman B-21 Raider
Related lists
List of active United States military aircraft
List of bomber aircraft
List of flying wing aircraft
List of aerospace megaprojects
List of military electronics of the United States
=== Notes ===
== References ==
=== Bibliography ===
"Air Force, Options to Retire or Restructure the Force Would Reduce Planned Spending, NSIAD-96-192." US General Accounting Office, September 1996.
Boyne, Walter J. (2002), Air Warfare: an International Encyclopedia: A-L, Santa Barbara, California: ABC-CLIO, ISBN 978-1-57607-345-2
Chudoba, Bernd (2001), Stability and Control of Conventional and Unconventional Aircraft Configurations: A Generic Approach, Stoughton, Wisconsin: Books on Demand, ISBN 978-3-83112-982-9
Crickmore, Paul and Alison J. Crickmore, "Nighthawk F-117 Stealth Fighter". North Branch, Minnesota: Zenith Imprint, 2003. ISBN 0-76031-512-4.
Croddy, Eric and James J. Wirtz. Weapons of Mass Destruction: An Encyclopedia of Worldwide Policy, Technology, and History, Volume 2. Santa Barbara, California: ABC-CLIO, 2005. ISBN 1-85109-490-3.
Dawson, T.W.G., G.F. Kitchen and G.B. Glider. Measurements of the Radar Echoing Area of the Vulcan by the Optical Simulation Method. Farnborough, Hants, UK: Royal Aircraft Establishment, September 1957 National Archive Catalogue file, AVIA 6/20895
Donald, David, ed. (2003), Black Jets: The Development and Operation of America's Most Secret Warplanes, Norwalk, Connecticut: AIRtime, ISBN 978-1-880588-67-3
Donald, David (2004), The Pocket Guide to Military Aircraft: And the World's Airforces, London: Octopus Publishing Group, ISBN 978-0-681-03185-2
Eden, Paul. "Northrop Grumman B-2 Spirit". Encyclopedia of Modern Military Aircraft. New York: Amber Books, 2004. ISBN 1-904687-84-9.
Evans, Nicholas D. (2004), Military Gadgets: How Advanced Technology is Transforming Today's Battlefield – and Tomorrow's, Upper Saddle River, New Jersey: FT Press, ISBN 978-0-1314-4021-0
Fitzsimons, Bernard, ed. (1978), Illustrated Encyclopedia of 20th Century Weapons and Warfare, vol. 21, London: Phoebus, ISBN 978-0-8393-6175-6
Goodall, James C. "The Northrop B-2A Stealth Bomber." America's Stealth Fighters and Bombers: B-2, F-117, YF-22, and YF-23. St. Paul, Minnesota: MBI Publishing Company, 1992. ISBN 0-87938-609-6.
Griffin, John; Kinnu, James (2007), B-2 Systems Engineering Case Study (PDF), Dayton, Ohio: Air Force Center for Systems Engineering, Air Force Institute of Technology, Wright Patterson Air Force Base, archived from the original (PDF) on 22 August 2009
Moir, Ian; Seabridge, Allan G. (2008), Aircraft Systems: Mechanical, Electrical and Avionics Subsystems Integration, Hoboken, New Jersey: John Wiley & Sons, ISBN 978-0-4700-5996-8
Pace, Steve (1999), B-2 Spirit: The Most Capable War Machine on the Planet, New York: McGraw-Hill, ISBN 978-0-07-134433-3
Pelletier, Alain J. "Towards the Ideal Aircraft: The Life and Times of the Flying Wing, Part Two". Air Enthusiast, No. 65, September–October 1996, pp. 8–19. ISSN 0143-5450.
Richardson, Doug (2001), Stealth Warplanes, London: Salamander Books Ltd, ISBN 978-0-7603-1051-9
Rich, Ben R.; Janos, Leo (1996), Skunk Works: A Personal Memoir of My Years at Lockheed, Boston: Little, Brown & Company, ISBN 978-0-3167-4300-6
Rich, Ben (1994), Skunk Works, New York: Back Bay Books, ISBN 978-0-316-74330-3
Rip, Michael Russell; Hasik, James M. (2002), The Precision Revolution: Gps and the Future of Aerial Warfare, Annapolis, Maryland: Naval Institute Press, ISBN 978-1-5575-0973-4
Siuru, William D. (1993), Future Flight: The Next Generation of Aircraft Technology, New York: McGraw-Hill Professional, ISBN 978-0-8306-4376-9
Sorenson, David S. (1995), The Politics of Strategic Aircraft Modernization, New York: Greenwood, ISBN 978-0-275-95258-7
Spick, Mike (2000), B-2 Spirit, The Great Book of Modern Warplanes, St. Paul, Minnesota: MBI, ISBN 978-0-7603-0893-6
Sweetman, Bill (2005), Lockheed Stealth, North Branch, Minnesota: Zenith Imprint, ISBN 978-0-7603-1940-6
Sweetman, Bill. "Inside the stealth bomber". Zenith Imprint, 1999. ISBN 1610606892.
Tucker, Spencer C (2010), The Encyclopedia of Middle East Wars: The United States in the Persian Gulf, Afghanistan, and Iraq Conflicts, Volume 1, Santa Barbara, California: ABC-CLIO, ISBN 978-1-8510-9947-4
Withington, Thomas (2006), B-1B Lancer Units in Combat, Botley Oxford, UK: Osprey, ISBN 978-1-8417-6992-9
== Further reading ==
Richardson, Doug. Northrop B-2 Spirit (Classic Warplanes). New York: Smithmark Publishers Inc., 1991. ISBN 0-8317-1404-2.
Sweetman, Bill. Inside the Stealth Bomber. St. Paul, Minnesota: MBI Publishing, 1999. ISBN 0-7603-0627-3.
Winchester, Jim, ed. "Northrop B-2 Spirit". Modern Military Aircraft (Aviation Factfile). Rochester, Kent, UK: Grange Books plc, 2004. ISBN 1-84013-640-5.
The World's Great Stealth and Reconnaissance Aircraft. New York: Smithmark, 1991. ISBN 0-8317-9558-1.
== External links ==
B-2 Spirit Stealth Bomber – Northrop Grumman
B-2 Spirit – US Air Force |
B-52 Stratofortress | The Boeing B-52 Stratofortress is an American long-range, subsonic, jet-powered strategic bomber. The B-52 was designed and built by Boeing, which has continued to provide support and upgrades. It has been operated by the United States Air Force (USAF) since 1955 and was flown by NASA from 1959 to 2007. The bomber can carry up to 70,000 pounds (32,000 kg) of weapons and has a typical combat range of around 8,800 miles (14,200 km) without aerial refueling.
After Boeing won the initial contract in June 1946, the aircraft's design evolved from a straight-wing aircraft powered by six turboprop engines to the final prototype YB-52 with eight turbojet engines and swept wings. The B-52 took its maiden flight in April 1952. Built to carry nuclear weapons for Cold War deterrence missions, the B-52 Stratofortress replaced the Convair B-36 Peacemaker. The bombers flew under the Strategic Air Command (SAC) until it was disestablished in 1992 and its aircraft absorbed into the Air Combat Command (ACC); in 2010, all B-52s were transferred to the new Air Force Global Strike Command (AFGSC).
The B-52's official name Stratofortress is rarely used; informally, the aircraft is commonly referred to as the BUFF (Big Ugly Fat Fucker/Fella). Superior performance at high subsonic speeds and relatively low operating costs have kept them in service despite the development of more advanced strategic bombers, such as the Mach-2+ Convair B-58 Hustler, the canceled Mach-3 North American XB-70 Valkyrie, the variable-geometry Rockwell B-1 Lancer, and the stealthy Northrop Grumman B-2 Spirit. A veteran of several wars, the B-52 has dropped only conventional munitions in combat.
As of 2024, the U.S. Air Force has 76 B-52s: 58 operated by active forces (2nd Bomb Wing and 5th Bomb Wing), 18 by reserve forces (307th Bomb Wing), and about 12 in long-term storage at the Davis-Monthan AFB Boneyard. The operational aircraft received upgrades between 2013 and 2015 and are expected to serve into the 2050s.
== Development ==
=== Origins ===
On 23 November 1945, Air Materiel Command (AMC) issued desired performance characteristics for a new strategic bomber "capable of carrying out the strategic mission without dependence upon advanced and intermediate bases controlled by other countries". The aircraft was to have a crew of five plus turret gunners, and a six-man relief crew. It was required to cruise at 300 miles per hour (260 kn; 480 km/h) at 34,000 feet (10,000 m) with a combat radius of 5,000 miles (4,300 nmi; 8,000 km). The armament was to consist of an unspecified number of 20 mm cannons and 10,000 pounds (4,500 kg) of bombs. On 13 February 1946, the USAAF issued bid invitations for these specifications, with Boeing, Consolidated Aircraft, and Glenn L. Martin Company submitting proposals.
On 5 June 1946, Boeing's Model 462, a straight-wing aircraft powered by six Wright T35 turboprops with a gross weight of 360,000 pounds (160,000 kg) and a combat radius of 3,110 miles (2,700 nmi; 5,010 km), was declared the winner. On 28 June 1946, Boeing was issued a letter of contract for US$1.7 million to build a full-scale mockup of the new XB-52 and do preliminary engineering and testing. However, by October 1946, the USAAF began to express concern about the sheer size of the new aircraft and its inability to meet the specified design requirements. In response, Boeing produced the Model 464, a smaller four-engine version with a 230,000-pound (100,000 kg) gross weight, which was briefly deemed acceptable.
Subsequently, in November 1946, the Deputy Chief of Air Staff for Research and Development, General Curtis LeMay, expressed the desire for a cruising speed of 400 miles per hour (350 kn; 640 km/h), to which Boeing responded with a 300,000-pound (140,000 kg) aircraft. In December 1946, Boeing was asked to change its design to a four-engine bomber with a top speed of 400 miles per hour (350 kn; 640 km/h), range of 12,000 miles (10,000 nmi; 19,000 km), and the ability to carry a nuclear weapon; in total, the aircraft could weigh up to 480,000 pounds (220,000 kg). Boeing responded with two models powered by T35 turboprops. The Model 464-16 was a "nuclear only" bomber with a 10,000-pound (4,500 kg) payload, while the Model 464-17 was a general purpose bomber with a 9,000-pound (4,100 kg) payload. Due to the cost associated with purchasing two specialized aircraft, the USAAF selected the Model 464-17 with the understanding that it could be adapted for nuclear strikes.
In June 1947, the military requirements were updated and the Model 464-17 met all of them except for the range. It was becoming obvious to the USAAF that, even with the updated performance, the XB-52 would be obsolete by the time it entered production and would offer little improvement over the Convair B-36 Peacemaker; as a result, the entire project was postponed for six months. During this time, Boeing continued to refine the design, producing the Model 464-29, with a top speed of 455 miles per hour (395 kn; 732 km/h) and a 5,000-mile (8,000 km) range. In September 1947, the Heavy Bombardment Committee was convened to ascertain performance requirements for a nuclear bomber. Formalized on 8 December 1947, these requirements called for a top speed of 500 miles per hour (430 kn; 800 km/h) and an 8,000-mile (7,000 nmi; 13,000 km) range, far beyond the capabilities of the 464-29.
The outright cancellation of the Boeing contract on 11 December 1947 was staved off by a plea from Boeing president William McPherson Allen to the Secretary of the Air Force, Stuart Symington. Allen reasoned that the design was capable of being adapted to new aviation technology and more stringent requirements. In January 1948, Boeing was instructed to thoroughly explore recent technological innovations, including aerial refueling and the flying wing. Noting the stability and control problems Northrop Corporation was experiencing with its YB-35 and YB-49 flying wing bombers, Boeing insisted on a conventional aircraft, and in April 1948 presented a US$30 million (US$393 million today) proposal for design, construction, and testing of two Model 464-35 prototypes. Further revisions during 1948 resulted in an aircraft with a top speed of 513 miles per hour (446 kn; 826 km/h) at 35,000 feet (11,000 m), a range of 6,909 miles (6,004 nmi; 11,119 km), and a 280,000-pound (130,000 kg) gross weight, which included 10,000 pounds (4,500 kg) of bombs and 19,875 US gallons (75,240 L) of fuel.
=== Design effort ===
In May 1948, Air Materiel Command asked Boeing to incorporate the previously discarded, but now more fuel-efficient, jet engine into the design. This resulted in yet another revision: in July 1948, the Model 464-40 substituted Westinghouse J40 turbojets for the turboprops. The USAF project officer who reviewed the Model 464-40 was favorably impressed, especially since he had already been thinking along similar lines. Nevertheless, the government was concerned about the high fuel consumption rate of the jet engines of the day, and directed Boeing to use the turboprop-powered Model 464-35 as the basis for the XB-52. Although he agreed that turbojet propulsion was the future, General Howard A. Craig, Deputy Chief of Staff for Materiel, was not very enthusiastic about a jet-powered B-52, since he felt that the jet engine had not yet progressed sufficiently to permit skipping an intermediate turboprop stage. However, Boeing was encouraged to continue turbojet studies even without any expected commitment to jet propulsion.
On Thursday, 21 October 1948, Boeing engineers George S. Schairer, Art Carlsen, and Vaughn Blumenthal presented the design of a four-engine turboprop bomber to the chief of bomber development, Colonel Pete Warden. Warden was disappointed by the projected aircraft and asked if the Boeing team could produce a proposal for a four-engine turbojet bomber. Joined by Ed Wells, Boeing's vice president of engineering, the engineers worked that night in The Hotel Van Cleve in Dayton, Ohio, redesigning Boeing's proposal as a four-engine turbojet bomber. On Friday, Colonel Warden looked over the information and asked for a better design. Returning to the hotel, the Boeing team was joined by Bob Withington and Maynard Pennell, two top Boeing engineers who were in town on other business.
By late Friday night, they had laid out what was an essentially new airplane. The new design (464-49) built upon the basic layout of the B-47 Stratojet with 35-degree swept wings, eight engines paired in four underwing pods, and bicycle landing gear with wingtip outrigger wheels. A notable feature was the ability to pivot both fore and aft main landing gear up to 20° from the aircraft centerline to increase safety during crosswind landings (allowing the aircraft to "crab" or roll with a sideways slip angle down the runway). After a trip to a hobby shop for supplies, Schairer set to work building a model. The rest of the team focused on weight and performance data. Wells, who was also a skilled artist, completed the aircraft drawings. On Sunday, a stenographer was hired to type a clean copy of the proposal. On Monday, Schairer presented Colonel Warden with a neatly bound 33-page proposal and a 14-inch (36 cm) scale model. The aircraft was projected to exceed all design specifications.
Although the full-size mock-up inspection in April 1949 was generally favorable, range again became a concern, since the J40s and early model J57s had excessive fuel consumption. Despite talk of another revision of specifications or even a full design competition among aircraft manufacturers, General LeMay, now in charge of Strategic Air Command, insisted that performance should not be compromised due to delays in engine development. In a final attempt to increase range, Boeing created the larger 464-67, stating that once in production, the range could be further increased in subsequent modifications. Following several direct interventions by LeMay, Boeing was awarded a production contract for thirteen B-52As and seventeen detachable reconnaissance pods on 14 February 1951. The last major design change—also at General LeMay's insistence—was a switch from the B-47 style tandem seating to a more conventional side-by-side cockpit, which increased the effectiveness of the copilot and reduced crew fatigue. Both XB-52 prototypes featured the original tandem seating arrangement with a framed bubble-type canopy (see above images).
Tex Johnston noted, "The B-52, like the B-47, utilized a flexible wing. I saw the wingtip of the B-52 static test airplane travel 32 feet (9.8 m), from the negative 1-G load position to the positive 4-G load position." The flexible structure allowed "... the wing to flex during gust and maneuvering loads, thus relieving high-stress areas and providing a smoother ride." During a 3.5-G pullup, "The wingtips appeared about 35 degrees above level flight position."
=== Pre-production and production ===
During ground testing on 29 November 1951, the XB-52's pneumatic system failed during a full-pressure test; the resulting explosion severely damaged the trailing edge of the wing, necessitating considerable repairs. The YB-52, the second XB-52 modified with more operational equipment, first flew on 15 April 1952 with "Tex" Johnston as the pilot. A 2-hour, 21-minute proving flight from Boeing Field, near Seattle, Washington, to Larson Air Force Base was undertaken with Boeing test pilot Johnston and USAF Lieutenant Colonel Guy M. Townsend. The XB-52 followed on 2 October 1952. The thorough development, including 670 days in the wind tunnel and 130 days of aerodynamic and aeroelastic testing, paid off with smooth flight testing. Encouraged, the USAF increased its order to 282 B-52s.
Only three of the 13 B-52As ordered were built. All were returned to Boeing and used in its test program. On 9 June 1952, the February 1951 contract was updated to order the aircraft under new specifications. The final 10, the first aircraft to enter active service, were completed as B-52Bs. At the roll-out ceremony on 18 March 1954, Air Force Chief of Staff General Nathan Twining said:
The long rifle was the great weapon of its day. ... today this B-52 is the long rifle of the air age.
The B-52B was followed by progressively improved bomber and reconnaissance variants, culminating in the B-52G and the turbofan-powered B-52H. To allow rapid delivery, production lines were set up both at Boeing's main Seattle factory and at its Wichita facility. More than 5,000 companies were involved in the huge production effort, with 41% of the airframe being built by subcontractors. The prototypes and all B-52A, B and C models (90 aircraft) were built at Seattle. Testing of aircraft built in Seattle caused problems due to jet noise, which led to the establishment of curfews for engine tests. Aircraft were ferried 150 miles (240 km) east on their maiden flights to Larson Air Force Base near Moses Lake, where they were fully tested.
As production of the B-47 came to an end, the Wichita factory was phased in for B-52D production, with Seattle responsible for 101 D-models and Wichita 69. Both plants continued to build the B-52E, with 42 built at Seattle and 58 at Wichita, and the B-52F (44 from Seattle and 45 from Wichita). For the B-52G, Boeing decided in 1957 to transfer all production to Wichita, which freed up Seattle for other tasks, particularly the production of airliners. Production ended in 1962 with the B-52H, with 742 aircraft built, plus the original two prototypes.
=== Upgrades ===
A proposed variant of the B-52H was the EB-52H, which would have consisted of 16 modified and augmented B-52H airframes with additional electronic jamming capabilities. This variant would have restored the airborne jamming capability the USAF lost when it retired the EF-111 Raven. The program was canceled in 2005 after funding for the stand-off jammer was removed; it was revived in 2007 and cut again in early 2009.
In July 2013, the USAF began a fleet-wide technological upgrade of its B-52 bombers called Combat Network Communications Technology (CONECT) to modernize electronics, communications technology, computing, and avionics on the flight deck. CONECT upgrades include software and hardware such as new computer servers, modems, radios, data-links, receivers, and digital workstations for the crew. One update is the AN/ARC-210 Warrior beyond-line-of-sight software programmable radio able to transmit voice, data, and information in-flight between B-52s and ground command and control centers, allowing the transmission and reception of data with updated intelligence, mapping, and targeting information; previous in-flight target changes required copying down coordinates. The ARC-210 allows machine-to-machine transfer of data, useful on long-endurance missions where targets may have moved before the arrival of the B-52. The aircraft will be able to receive information through Link-16. CONECT upgrades will cost US$1.1 billion overall and take several years. Funding has been secured for 30 B-52s; the USAF hopes for 10 CONECT upgrades per year, but the rate has yet to be decided.
Weapons upgrades include the 1760 Internal Weapons Bay Upgrade (IWBU), which pairs a digital interface (MIL-STD-1760) with a rotary launcher to give a 66 percent increase in weapons payload, at an expected cost of roughly US$313 million. The IWBU allows precision-guided missiles and bombs to be deployed from inside the weapons bay; previously, these munitions could be carried only externally, on the wing hardpoints. The upgrade will let the B-52 carry eight JDAM 2,000-pound (910 kg) bombs, AGM-158B JASSM-ER cruise missiles, and ADM-160C MALD-J decoy missiles internally; the first phase will allow a B-52 to carry twenty-four GBU-38 500-pound guided bombs or twenty GBU-31 2,000-pound bombs, with later phases accommodating the JASSM and MALD family of missiles. All 1760 IWBUs should be operational by October 2017. With the upgrade, two bombers will be able to carry 40 weapons, a load that previously required three B-52s carrying 36. Besides increasing the number of guided weapons (such as the Joint Direct Attack Munition) an aircraft can carry, moving smart bombs from the wings into the bay reduces drag, yielding a 15 percent reduction in fuel consumption.
The US Air Force Research Lab is investigating defensive laser weapons for the B-52.
The B-52 is due to receive a range of upgrades alongside the planned engine retrofit, aimed at modernizing the aircraft's sensors and displays. These include a new APG-79B4 active electronically scanned array (AESA) radar replacing the older mechanically scanned array, a streamlined nose, and the deletion of the blisters housing the forward-looking infrared/electro-optical viewing system. In October 2022, Boeing released new images of what the upgrade would look like. The upgrades will also include improved communication systems, new pylons, new cockpit displays, and the deletion of one crew station. The changes will carry the designation B-52J, which is scheduled to reach initial operational capability in 2033.
== Design ==
=== Overview ===
The B-52 shared many technological similarities with the preceding B-47 Stratojet strategic bomber. The two aircraft used the same basic design features, such as swept wings and podded jet engines, and both provided ejection systems for the crew. On the B-52D, the pilots and the electronic countermeasures (ECM) operator ejected upwards, while the lower-deck crew ejected downwards; until the B-52G, the gunner had to jettison the tail gun to bail out. The tail gunner in early-model B-52s sat in the traditional location in the tail, with both visual and radar gun-laying systems; in later models, the gunner was moved to the front of the fuselage, with gun laying carried out by radar alone, much as in the B-58 Hustler's tail gun system.
Structural fatigue was accelerated by at least a factor of eight in a low-altitude flight profile over that of high-altitude flying, requiring costly repairs to extend service life. In the early 1960s, the three-phase High Stress program was launched to counter structural fatigue, enrolling aircraft at 2,000 flying hours. Follow-up programs were conducted, such as a 2,000-hour service life extension to select airframes in 1966–1968, and the extensive Pacer Plank reskinning, completed in 1977. The wet wing introduced on G and H models was even more susceptible to fatigue, experiencing 60% more stress during a flight than the old wing. The wings were modified by 1964 under ECP 1050. This was followed by a fuselage skin and longeron replacement (ECP 1185) in 1966, and the B-52 Stability Augmentation and Flight Control program (ECP 1195) in 1967. Fuel leaks due to deteriorating Marman clamps continued to plague all variants of the B-52. To this end, all aircraft variants were subjected to Blue Band (1957), Hard Shell (1958), and finally QuickClip (1958) programs. The latter fitted safety straps that prevented catastrophic loss of fuel in case of clamp failure. The B-52's service ceiling is officially listed as 50,000 feet (15,000 m), but operational experience shows this is difficult to reach when fully laden with bombs. According to one source: "The optimal altitude for a combat mission was around 43,000 feet (13,000 m), because to exceed that height would rapidly degrade the plane's range."
In September 2006, the B-52 became one of the first US military aircraft to fly using alternative fuel. It took off from Edwards Air Force Base with a 50/50 blend of Fischer–Tropsch process (FT) synthetic fuel and conventional JP-8 jet fuel, which burned in two of the eight engines. On 15 December 2006, a B-52 took off from Edwards with the synthetic fuel powering all eight engines, the first time a USAF aircraft was entirely powered by the blend. The seven-hour flight was considered a success. This program is part of the Department of Defense Assured Fuel Initiative, which aimed to reduce crude oil usage and obtain half of its aviation fuel from alternative sources by 2016. On 8 August 2007, Air Force Secretary Michael Wynne certified the B-52H as fully approved to use the FT blend.
=== Flight controls ===
Because of the B-52's mission parameters, only modest maneuvers would be required with no need for spin recovery. The aircraft has a relatively small, narrow chord rudder, giving it limited yaw control authority. Originally an all-moving vertical stabilizer was to be used but was abandoned because of doubts about hydraulic actuator reliability. Because the aircraft has eight engines, asymmetrical thrust due to the loss of an engine in flight would be minimal and correctable with the narrow rudder. To assist with crosswind takeoffs and landings the main landing gear can be pivoted 20 degrees to either side from neutral. The crew would preset the yaw adjustable crosswind landing gear according to wind observations made on the ground.
Like the rudder, the elevator is also of very narrow chord, and the B-52 suffers from limited elevator control authority. For long-term pitch trim and airspeed changes, the aircraft uses a stabilator (all-moving tail), with the elevator used for small adjustments within a stabilizer setting. The stabilizer is adjustable through 13 degrees of movement (nine up, four down) and is crucial during takeoff and landing due to the large pitch changes induced by flap application.
B-52s prior to the G model had very small ailerons with a span approximately equal to their chord. These "feeler ailerons" provided feedback forces to the pilot's control yoke and fine-tuned the roll axis during delicate maneuvers such as aerial refueling. Conventional outboard flap-type ailerons could not be used because the thin main wing twists: aileron deflection would twist the wing, undermining roll control. Six spoilerons on each wing are responsible for the majority of roll control. The B-52G models eliminated the ailerons altogether and added an extra spoileron to each wing. Partly because of the lack of ailerons, the B-52G and H models were more susceptible to Dutch roll.
=== Avionics ===
Ongoing problems with avionics systems were addressed in the Jolly Well program, completed in 1964, which improved components of the AN/ASQ-38 bombing navigational computer and the terrain computer. The MADREC (Malfunction Detection and Recording) upgrade fitted to most aircraft by 1965 could detect failures in avionics and weapons computer systems and was essential in monitoring the AGM-28 Hound Dog missiles. The electronic countermeasures capability of the B-52 was expanded with Rivet Rambler (1971) and Rivet Ace (1973).
To improve operations at low altitudes, the AN/ASQ-151 Electro-Optical Viewing System (EVS), consisting of a low-light-level television (LLLTV) camera and a forward-looking infrared (FLIR) system mounted in blisters under the nose, was fitted to B-52Gs and Hs between 1972 and 1976. The navigational capabilities of the B-52 were later augmented with the addition of GPS in the 1980s. The IBM AP-101, also used on the Rockwell B-1 Lancer bomber and the Space Shuttle, was the B-52's main computer.
In 2007, the LITENING targeting pod was fitted, which increased the effectiveness of the aircraft in the attack of ground targets with a variety of standoff weapons, using laser guidance, a high-resolution forward-looking infrared sensor (FLIR), and a CCD camera used to obtain target imagery. LITENING pods have been fitted to a wide variety of other US aircraft, such as the McDonnell Douglas F/A-18 Hornet, the General Dynamics F-16 Fighting Falcon and the McDonnell Douglas AV-8B Harrier II.
=== Armament ===
The ability to carry up to 20 AGM-69 SRAM nuclear missiles was added to G and H models, starting in 1971. To further improve its offensive ability, air-launched cruise missiles (ALCMs) were fitted. After testing of both the USAF-backed Boeing AGM-86 Air Launched Cruise Missile and the Navy-backed General Dynamics AGM-109 Tomahawk, the AGM-86B was selected for operation by the B-52 (and ultimately by the B-1 Lancer). A total of 194 B-52Gs and Hs were modified to carry AGM-86s, carrying 12 missiles on underwing pylons, with 82 B-52Hs further modified to carry another eight missiles on a rotary launcher fitted in the bomb bay. To conform with SALT II Treaty requirements that cruise missile-capable aircraft be readily identifiable by reconnaissance satellites, the cruise missile-armed B-52Gs were modified with a distinctive wing root fairing. As all B-52Hs were assumed to be modified, no visual modification of these aircraft was required. In 1990, the stealthy AGM-129 ACM cruise missile entered service; although intended to replace the AGM-86, the high cost and the Cold War's end led to only 450 being produced; unlike the AGM-86, no conventional, non-nuclear version was built. The B-52 was to have been modified to utilize Northrop Grumman's AGM-137 TSSAM weapon; however, the missile was canceled due to development costs.
Those B-52Gs not converted as cruise missile carriers underwent a series of modifications to improve conventional bombing. They were fitted with a new Integrated Conventional Stores Management System (ICSMS) and new underwing pylons that could hold larger bombs and other stores than the original external pylons. Thirty B-52Gs were further modified to carry up to 12 AGM-84 Harpoon anti-ship missiles each, while 12 B-52Gs were fitted to carry the AGM-142 Have Nap stand-off air-to-ground missile. When the B-52G was retired in 1994, an urgent scheme was launched to restore an interim Harpoon and Have Nap capability, with four aircraft modified to carry Harpoon and four to carry Have Nap under the Rapid Eight program.
The Conventional Enhancement Modification (CEM) program gave the B-52H a more comprehensive conventional weapons capability, adding the modified underwing weapon pylons used by conventional-armed B-52Gs, Harpoon and Have Nap, and the capability to carry new-generation weapons including the Joint Direct Attack Munition (JDAM) and Wind Corrected Munitions Dispenser guided bombs, the AGM-154 glide bomb and the AGM-158 JASSM missile. The CEM program also introduced new radios, integrated Global Positioning System into the aircraft's navigation system, and replaced the under-nose FLIR with a more modern unit. Forty-seven B-52Hs were modified under the CEM program by 1996, with 19 more by the end of 1999.
By around 2010, U.S. Strategic Command stopped assigning B61 and B83 nuclear gravity bombs to the B-52, and later listed only the B-2 as tasked with delivering strategic nuclear bombs in budget requests. Nuclear gravity bombs were removed from the B-52's capabilities because the aircraft is no longer considered survivable enough to penetrate modern air defenses; it instead relies on nuclear cruise missiles while expanding its conventional strike role. The 2019 "Safety Rules for U.S. Strategic Bomber Aircraft" manual subsequently confirmed the removal of B61-7 and B83-1 gravity bombs from the B-52H's approved weapons configuration.
Starting in 2016, Boeing is to upgrade the internal rotary launchers to the MIL-STD-1760 interface to enable the internal carriage of smart bombs, which previously could be carried only on the wings.
While the B-1 Lancer has a larger theoretical maximum payload of 75,000 pounds (34,000 kg) compared to the B-52's 70,000 pounds (32,000 kg), the bombers are rarely able to carry their full loads. The most the B-52 carries is a full load of AGM-86Bs totaling 62,660 pounds (28,420 kg). The B-1 has the internal weapons bay space to carry more GBU-31 JDAMs and JASSMs, but the B-52 upgraded with the conventional rotary launcher can carry more of other JDAM variants.
The AGM-183A Air-Launched Rapid Response (ARRW) hypersonic missile and the future Long Range Stand Off (LRSO) nuclear-armed air-launched cruise missile will join the B-52 inventory in the future.
=== Engines ===
The eight engines of the B-52 are paired in pods and suspended by four pylons beneath and forward of the wings' leading edge. The careful arrangement of the pylons also allowed them to work as wing fences and delay the onset of stall. The first two prototypes, XB-52 and YB-52, were both powered by experimental Pratt & Whitney YJ57-P-3 turbojet engines with 8,700 pounds-force (39 kN) of static thrust each.
The B-52A models were equipped with Pratt & Whitney J57-P-1W turbojets, providing a dry thrust of 10,000 pounds-force (44 kN) which could be increased for short periods to 11,000 pounds-force (49 kN) with water injection. The water was carried in a 360-US-gallon (1,400 L) tank in the rear fuselage.
B-52B, C, D and E models were equipped with Pratt & Whitney J57-P-29W, J57-P-29WA, or J57-P-19W series engines, all rated at 10,500 lbf (47 kN). The B-52F and G models were powered by Pratt & Whitney J57-P-43WB turbojets, each rated at 13,750 pounds-force (61.2 kN) static thrust with water injection.
On 9 May 1961, the B-52H began to be delivered to the USAF with cleaner-burning and quieter Pratt & Whitney TF33-P-3 turbofans with a maximum thrust of 17,100 pounds-force (76 kN).
==== Engine retrofit ====
In a study for the USAF in the mid-1970s, Boeing investigated replacing the engines, changing to a new wing, and other improvements to upgrade B-52G/H aircraft as an alternative to the B-1A, then in development.
In 1996, Rolls-Royce and Boeing jointly proposed fitting each B-52 with four leased Rolls-Royce RB211 engines. This would have involved replacing the eight Pratt & Whitney TF33 engines (total thrust 136,000 lbf (600 kN)) with four RB211-535E4 engines (total thrust 172,400 lbf (767 kN)), which would increase range and reduce fuel consumption. However, a USAF analysis in 1997 concluded that Boeing's estimated savings of US$4.7 billion would not be realized and that reengining would instead cost US$1.3 billion more than keeping the existing engines, citing significant up-front procurement and re-tooling expenditure.
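The totals above follow from simple per-engine arithmetic; as an illustrative cross-check (the per-engine figures here are inferred from the quoted totals, not taken from an official source):

```python
# Illustrative cross-check of the quoted total-thrust figures.
# Per-engine ratings are inferred from the totals given in the text.
LBF_TO_KN = 4.4482216152605e-3  # kilonewtons per pound-force

tf33_total_lbf = 8 * 17_000     # eight TF33s -> 136,000 lbf total
rb211_total_lbf = 4 * 43_100    # four RB211-535E4s -> 172,400 lbf total

print(tf33_total_lbf, round(tf33_total_lbf * LBF_TO_KN), "kN")    # 136000 605 kN
print(rb211_total_lbf, round(rb211_total_lbf * LBF_TO_KN), "kN")  # 172400 767 kN
```

The text's 600 kN figure is the 136,000 lbf total rounded to two significant figures.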
The USAF's 1997 rejection of reengining was subsequently disputed in a Defense Science Board (DSB) report in 2003. The DSB urged the USAF to re-engine the aircraft without delay, saying doing so would not only create significant cost savings but also reduce greenhouse gas emissions and increase aircraft range and endurance; these conclusions were in line with those of a separate Congress-funded study conducted in 2003. Criticizing the USAF cost analysis, the DSB found that, among other things, the USAF had failed to account for the cost of aerial refueling: the DSB estimated aerially delivered fuel at $17.50 per US gallon ($4.62/L), whereas the USAF, ignoring the cost of delivering the fuel, had priced it at only $1.20 per US gallon ($0.32/L).
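The per-litre figures quoted above are straightforward unit conversions; a minimal sketch, using the exact definition 1 US gallon = 3.785411784 L:

```python
# Convert the DSB's per-US-gallon fuel cost estimates to per-litre figures.
L_PER_US_GAL = 3.785411784  # litres per US gallon (exact by definition)

for usd_per_gal in (17.50, 1.20):
    usd_per_l = usd_per_gal / L_PER_US_GAL
    print(f"${usd_per_gal:.2f}/US gal = ${usd_per_l:.2f}/L")
# -> $17.50/US gal = $4.62/L
# -> $1.20/US gal = $0.32/L
```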
On 23 April 2020, the USAF released its request for proposals for 608 commercial engines plus spares and support equipment, with the plan to award the contract in May 2021. This Commercial Engine Reengining Program (CERP) saw General Electric propose its CF34-10 and Passport turbofans, Pratt & Whitney its PW800, and the Rolls-Royce BR725 to be designated F130. On 24 September 2021, the USAF selected the Rolls-Royce F130 as the winner and announced plans to purchase 650 engines (608 direct replacements and 42 spares), for US$2.6 billion.
Unlike the previous re-engine proposal, which also involved reducing the number of engines from eight to four, the F130 re-engine program retains eight engines on the B-52. Although four-engine operation would be more efficient, retrofitting the airframe to operate with only four engines would involve additional changes to the aircraft's systems and control surfaces (particularly the rudder), thereby increasing the time, cost, and complexity of the project. B-52Hs upgraded with Rolls-Royce F130 engines will be redesignated as "B-52Js".
=== Costs ===
== Operational history ==
=== Introduction ===
Although the B-52A was the first production variant, these aircraft were used only in testing. The first operational version was the B-52B, which had been developed in parallel with the prototypes since 1951. First flying in December 1954, B-52B, AF Serial Number 52-8711, entered operational service with the 93rd Heavy Bombardment Wing (93rd BW) at Castle Air Force Base, California, on 29 June 1955. The wing became operational on 12 March 1956. The training for B-52 crews consisted of five weeks of ground school and four weeks of flying, accumulating 35 to 50 hours in the air. The new B-52Bs replaced operational B-36s on a one-to-one basis.
Early operations were problematic; in addition to supply problems, there were also technical issues. Ramps and taxiways deteriorated under the aircraft's weight, the fuel system was prone to leaks and icing, and bombing and fire control computers were unreliable. The split-level cockpit presented a temperature control problem – the pilots' cockpit was heated by sunlight while the observer and the navigator on the bottom deck sat on the ice-cold floor. Thus, a comfortable temperature setting for the pilots caused the other crew members to freeze, while a comfortable temperature for the bottom crew caused the pilots to overheat. The J57 engines proved unreliable. Alternator failure caused the first fatal B-52 crash in February 1956; as a result, the fleet was briefly grounded. In July, fuel and hydraulic issues grounded the B-52s again. In response to maintenance issues, the USAF set up "Sky Speed" teams of 50 contractors at each B-52 base to perform maintenance and routine checkups, taking an average of one week per aircraft.
On 21 May 1956, a B-52B (52-13) dropped a Mk-15 nuclear bomb over the Bikini Atoll in a test code-named Cherokee. It was the first air-dropped thermonuclear weapon. The aircraft is now on display at the National Museum of Nuclear Science and History in Albuquerque, New Mexico. From 24 to 25 November 1956, four B-52Bs of the 93rd BW and four B-52Cs of the 42nd BW flew nonstop around the perimeter of North America in Operation Quick Kick, covering 15,530 miles (13,500 nmi; 24,990 km) in 31 hours, 30 minutes. SAC noted the flight time could have been reduced by 5 to 6 hours had the four inflight refuelings been done by fast jet-powered tanker aircraft rather than propeller-driven Boeing KC-97 Stratofreighters. In a demonstration of the B-52's global reach, from 16 to 18 January 1957, three B-52Bs made a non-stop flight around the world during Operation Power Flite, covering 24,325 miles (21,138 nmi; 39,147 km) in 45 hours 19 minutes (536.8 mph or 863.9 km/h) with several in-flight refuelings by KC-97s.
The B-52 set many records over the next few years. On 26 September 1958, a B-52D set a world speed record of 560.705 miles per hour (487.239 kn; 902.367 km/h) over a 10,000 kilometers (6,200 miles; 5,400 nautical miles) closed circuit without a payload. The same day, another B-52D established a world speed record of 597.675 miles per hour (519.365 kn; 961.865 km/h) over a 5,000 kilometers (3,100 miles; 2,700 nautical miles) closed circuit without a payload. On 14 December 1960, a B-52G set a world distance record by flying unrefueled for 10,078.84 miles (8,758.27 nmi; 16,220.32 km); the flight lasted 19 hours 44 minutes (510.75 mph or 821.97 km/h). From 10 to 11 January 1962, a B-52H (60-40) set a world distance record by flying unrefueled, surpassing the prior B-52 record set two years earlier, from Kadena Air Base, Okinawa Prefecture, Japan, to Torrejón Air Base, Spain, which covered 12,532.28 miles (10,890.25 nmi; 20,168.75 km). The flight passed over Seattle, Fort Worth and the Azores.
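The average speeds quoted for these record flights are simply distance divided by elapsed time; a quick illustrative check:

```python
# Average ground speeds implied by the record flights described above.
def avg_mph(miles: float, hours: int, minutes: int) -> float:
    """Average speed in mph for a given distance and elapsed time."""
    return miles / (hours + minutes / 60)

power_flite = avg_mph(24_325, 45, 19)       # 1957 Operation Power Flite
distance_1960 = avg_mph(10_078.84, 19, 44)  # 1960 B-52G distance record

print(round(power_flite, 1))    # 536.8 mph, as quoted
print(round(distance_1960, 2))  # 510.75 mph, as quoted
```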
=== Cold War ===
When the B-52 entered service, the Strategic Air Command (SAC) intended to use it to deter and counteract the Soviet Union's vast and modernizing military. As the Soviet Union increased its nuclear capabilities, destroying or "countering" the forces that would deliver nuclear strikes (bombers, missiles, etc.) became of great strategic importance. The Eisenhower administration endorsed this switch in focus; in 1954 the President expressed a preference for military targets over civilian ones, a principle reinforced in the Single Integrated Operational Plan (SIOP), the plan of action in case of nuclear war breaking out.
Throughout the Cold War, B-52s and other US strategic bombers performed airborne alert patrols under code names such as Head Start, Chrome Dome, Hard Head, Round Robin and Giant Lance. Bombers loitered at high altitude near the borders of the Soviet Union to provide rapid first-strike or retaliation capability in case of nuclear war. These airborne patrols formed one component of the US nuclear deterrent, which acted to prevent the outbreak of a large-scale war between the US and the Soviet Union under the concept of Mutually Assured Destruction.
Due to the late-1950s threat of surface-to-air missiles (SAMs) to high-altitude aircraft, demonstrated in practice by the 1960 U-2 incident, the B-52's intended role was changed to low-level penetration during a foreseen attack on the Soviet Union, as terrain masking provided an effective means of avoiding radar and thus the SAM threat. Aircraft were to fly towards the target at 400–440 mph (640–710 km/h) and deliver their weapons from 400 ft (120 m) or lower. Although never designed for the low-level role, the B-52's flexibility allowed it to outlast several intended successors as the nature of aerial warfare changed. Its large airframe enabled the addition of multiple design improvements, new equipment, and other adaptations over its service life.
In November 1959, to improve the aircraft's combat capabilities in the changing strategic environment, SAC initiated the Big Four modification program (also known as Modification 1000) for all operational B-52s except early B models. The program was completed by 1963. The four modifications were the ability to launch AGM-28 Hound Dog standoff nuclear missiles and ADM-20 Quail decoys, an advanced electronic countermeasures (ECM) suite, and upgrades to perform the all-weather, low-altitude (below 500 feet or 150 m) interdiction mission in the face of advancing Soviet missile-based air defenses.
In the 1960s, there were concerns over the fleet's remaining service life. Several projects intended to succeed the B-52, including the Convair B-58 Hustler and North American XB-70 Valkyrie, had either been canceled or proved disappointing in light of changing requirements, leaving the older B-52 as the main bomber rather than its planned successors. On 19 February 1965, General Curtis E. LeMay testified to Congress that the lack of a follow-on bomber project raised the danger that "the B-52 is going to fall apart on us before we can get a replacement for it." Other aircraft, such as the General Dynamics F-111 Aardvark, later complemented the B-52 in roles for which it was less suited, such as missions involving high-speed, low-level penetration dashes.
=== Vietnam War ===
With the escalating situation in Southeast Asia, 28 B-52Fs were fitted with external racks for 24 750-pound (340 kg) bombs under Project South Bay in June 1964; an additional 46 aircraft received similar modifications under Project Sun Bath. In March 1965, the United States commenced Operation Rolling Thunder. The first combat mission, Operation Arc Light, was flown by B-52Fs on 18 June 1965, when 30 bombers of the 9th and 441st Bombardment Squadrons struck a communist stronghold near Bến Cát in South Vietnam. The first wave of bombers arrived too early at a designated rendezvous point, and while maneuvering to maintain station, two B-52s collided, resulting in the loss of both bombers and eight crewmen. The remaining bombers, minus one more that turned back with mechanical problems, continued towards the target. Twenty-seven Stratofortresses bombed a one-by-two-mile (1.6 by 3.2 km) target box from between 19,000 and 22,000 feet (5,800 and 6,700 m), with a little more than 50% of the bombs falling within the target zone. The force returned to Andersen Air Force Base, except for one bomber with electrical problems that diverted to Clark Air Base; the mission lasted 13 hours. Post-strike assessment by teams of South Vietnamese troops with American advisors found evidence that the Viet Cong had departed the area before the raid, and it was suspected that infiltrators among the South Vietnamese Army troops involved in the post-strike inspection may have tipped off the North.
Beginning in late 1965, a number of B-52Ds underwent Big Belly modifications to increase bomb capacity for carpet bombing. While the external payload remained 24 500-pound (230 kg) or 750-pound (340 kg) bombs, the internal capacity increased from 27 to 84 bombs of the 500 lb (230 kg) size, or from 27 to 42 of the 750 lb (340 kg) size. The modification created enough capacity for a total of 60,000 pounds (27,000 kg) using 108 bombs. Thus modified, B-52Ds could carry 22,000 pounds (10,000 kg) more than B-52Fs. Designed to replace B-52Fs, modified B-52Ds entered combat in April 1966 flying from Andersen Air Force Base, Guam. Each bombing mission lasted 10 to 12 hours and included an aerial refueling by KC-135 Stratotankers. In spring 1967, B-52s began flying from U-Tapao Airfield in Thailand so that refueling was not required.
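The Big Belly totals are internally consistent: 84 internal plus 24 external bombs give the 108-bomb, 60,000-pound figure quoted above. A minimal worked check, assuming the maximum mixed loadout described in the text (500 lb bombs internally, 750 lb bombs externally):

```python
# Worked check of the Big Belly B-52D payload figures quoted above,
# assuming the maximum mixed loadout described in the text:
# 84 internal 500 lb bombs plus 24 external 750 lb bombs.
internal_bombs, internal_lb = 84, 500
external_bombs, external_lb = 24, 750

total_bombs = internal_bombs + external_bombs
total_weight_lb = internal_bombs * internal_lb + external_bombs * external_lb

print(total_bombs)      # 108 bombs
print(total_weight_lb)  # 60000 lb, matching the quoted capacity
```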
B-52s were employed during the Battle of Ia Drang in November 1965, notable as the aircraft's first use in a tactical support role.
On 22 November 1972, a B-52D (55-110) from U-Tapao was hit by a SAM while on a raid over Vinh. The crew was forced to abandon the damaged aircraft over Thailand. This was the first B-52 destroyed by hostile fire.
The zenith of B-52 attacks in Vietnam was Operation Linebacker II (also known as the Christmas bombings), conducted from 18 to 29 December 1972, which consisted of waves of B-52s (mostly D models, but some Gs without jamming equipment and with a smaller bomb load). Over 12 days, B-52s flew 729 sorties and dropped 15,237 tons of bombs on Hanoi, Haiphong, and other targets in North Vietnam. Originally 42 B-52s were committed to the war; however, numbers were frequently twice this figure. During Operation Linebacker II, fifteen B-52s were shot down, five were heavily damaged (one crashed in Laos), and five suffered medium damage. A total of 25 crewmen were killed in these losses. During the war, 31 B-52s were lost, including ten shot down over North Vietnam.
==== Air-to-air combat ====
During the Vietnam War, B-52D tail gunners were credited with shooting down two MiG-21 "Fishbeds". On 18 December 1972, tail gunner Staff Sergeant Samuel O. Turner's B-52 had just completed a bomb run for Operation Linebacker II and was turning away when a Vietnam People's Air Force (VPAF) MiG-21 approached. The MiG and the B-52 locked onto each other. When the fighter drew within range, Turner fired his quad (four guns on one mounting) .50-caliber (12.7 mm) machine guns. The MiG exploded aft of the bomber, as confirmed by Master Sergeant Louis E. Le Blanc, the tail gunner in a nearby Stratofortress. Turner received a Silver Star for his actions. His B-52, tail number 56-676, is preserved on display with air-to-air kill markings at Fairchild Air Force Base in Spokane, Washington.
On 24 December 1972, during the same bombing campaign, the B-52 Diamond Lil was headed to bomb the Thái Nguyên railroad yards when tail gunner Airman First Class Albert E. Moore spotted a fast-approaching MiG-21. Moore opened fire with his quad .50 caliber guns at 4,000 yd (3,700 m), and kept shooting until the fighter disappeared from his scope. Technical Sergeant Clarence W. Chute, a tail gunner aboard another Stratofortress, watched the MiG catch fire and fall away; this was not confirmed by the VPAF. Diamond Lil is preserved on display at the United States Air Force Academy in Colorado. Moore was the last bomber gunner believed to have shot down an enemy aircraft with machine guns in aerial combat.
The two B-52 tail gunner kills were never confirmed by the VPAF, which admitted the loss of only three MiGs during the campaign, all to F-4s. Vietnamese sources have, however, attributed a third air-to-air victory to a B-52: a MiG-21 shot down on 16 April 1972. These victories make the B-52 the largest aircraft credited with air-to-air kills. The last Arc Light mission without fighter escort took place on 15 August 1973, as US military action in Southeast Asia wound down.
=== Post-Vietnam War service ===
B-52Bs reached the end of their structural service life by the mid-1960s, and all were retired by June 1966, followed by the last of the B-52Cs on 29 September 1971. The exception was NASA's B-52B "008", which was eventually retired in 2004 at Edwards Air Force Base, California. Another surviving B model, "52-005", is on display at the Wings Over the Rockies Air and Space Museum in Denver, Colorado.
A few time-expired E models were retired in 1967 and 1968, but the bulk (82) were retired between May 1969 and March 1970. Most F models were likewise retired between 1967 and 1973, though 23 survived as trainers until late 1978. The fleet of D models served much longer; 80 were extensively overhauled under the Pacer Plank program during the mid-1970s, with the skinning on the lower wing and fuselage replaced and various structural components renewed. The D fleet stayed largely intact until late 1978, when 37 Ds that had not been upgraded were retired; the remainder followed between 1982 and 1983.
The remaining G and H models were used for nuclear standby ("alert") duty as part of the United States' nuclear triad; the combination of nuclear-armed land-based missiles, submarine-based missiles, and manned bombers. The B-1, intended to supplant the B-52, replaced only the older models and the supersonic FB-111. In 1991, B-52s ceased continuous 24-hour SAC alert duty.
After Vietnam, the lessons of operating in a hostile air-defense environment were applied: B-52s were modernized with new weapons, equipment, and both offensive and defensive avionics. This, together with the use of low-level tactics, marked a major shift in the B-52's utility. The upgrades were:
Supersonic short-range nuclear missiles: G and H models were modified to carry up to 20 SRAM missiles, replacing existing gravity bombs. Eight SRAMs were carried internally on a special rotary launcher and 12 SRAMs were mounted on two wing pylons. With SRAM, the B-52s could strike heavily defended targets without entering the terminal defenses.
New countermeasures: Phase VI ECM modification was the sixth major ECM program for the B-52. It improved the aircraft's self-protection capability in the dense Soviet air defense environment. The new equipment expanded signal coverage, improved threat warnings, provided new countermeasures techniques, and increased the quantity of expendables. The power requirements of Phase VI ECM also consumed most of the excess electrical capacity on the B-52G.
B-52Gs and Hs were also modified with an electro-optical viewing system (EVS) that made low-level operations and terrain avoidance much easier and safer. The EVS contained a low-light-level television (LLTV) camera and a forward-looking infrared (FLIR) camera to display the information needed for penetration at lower altitudes.
Subsonic-cruise unarmed decoy: SCUD resembled the B-52 on radar. As an active decoy, it carried ECM and other devices, and it had a range of several hundred miles. Although SCUD was never deployed operationally, the concept was developed further, becoming the air-launched cruise missile (ALCM-A).
These modifications increased weight by nearly 24,000 pounds (11,000 kg) and decreased operational range by 8–11%. This was considered acceptable for the increase in capabilities.
After the fall of the Soviet Union, all B-52Gs remaining in service were destroyed in accordance with the terms of the Strategic Arms Reduction Treaty (START). The Aerospace Maintenance and Regeneration Center (AMARC) cut the 365 aircraft into pieces. Russia verified the completion of the destruction task via satellite imagery and first-person inspection at the AMARC facility.
=== Gulf War and later ===
B-52 strikes were an important part of Operation Desert Storm. Starting on 16 January 1991, a flight of B-52Gs flew from Barksdale Air Force Base, Louisiana, refueled in the air en route, struck targets in Iraq, and returned home – a journey of 35 hours and 14,000 miles (23,000 km) round trip. It set a record for the longest-distance combat mission, breaking the record previously held by an RAF Vulcan bomber in 1982, a mission that had relied on forward refueling. Those seven B-52s flew the first combat sorties of Operation Desert Storm, firing 35 AGM-86C CALCM standoff missiles and destroying 85–95 percent of their targets. B-52Gs operating from King Abdullah Air Base at Jeddah, Saudi Arabia; RAF Fairford in the United Kingdom; Morón Air Base, Spain; and the island of Diego Garcia in the British Indian Ocean Territory flew bombing missions over Iraq, initially at low altitude. After the first three nights, the B-52s moved to high-altitude missions, which reduced their effectiveness and psychological impact compared to the low-altitude role initially played.
The conventional strikes were carried out by three bombers, which dropped up to 153 of the 750 lb (340 kg) M117 bombs over an area of 1.5 by 1 mi (2.4 by 1.6 km). The bombings demoralized the defending Iraqi troops, many of whom surrendered in the wake of the strikes. In 1999, the science and technology magazine Popular Mechanics described the B-52's role in the conflict: "The Buff's value was made clear during the Gulf War and Desert Fox. The B-52 turned out the lights in Baghdad." During Operation Desert Storm, B-52s flew about 1,620 sorties and delivered 40% of the weapons dropped by coalition forces.
During the conflict, several claims of Iraqi air-to-air successes were made, including one by Iraqi pilot Khudai Hijab, who allegedly fired a Vympel R-27R missile from his MiG-29 and damaged a B-52G on the opening night of the Gulf War. The USAF disputes this claim, stating the bomber was actually hit by friendly fire: an AGM-88 High-speed Anti-Radiation Missile (HARM) that homed in on the fire-control radar of the B-52's tail gun. The jet was subsequently nicknamed In HARM's Way. Shortly after this incident, General George Lee Butler announced that the gunner position on B-52 crews would be eliminated and the gun turrets permanently deactivated, commencing on 1 October 1991.
Since the mid-1990s, the B-52H has been the only variant remaining in military service; it is currently stationed at:
Minot Air Force Base, North Dakota – 5th Bomb Wing
Barksdale Air Force Base, Louisiana – 2nd Bomb Wing (active Air Force) and 307th Bomb Wing (Air Force Reserve Command)
One B-52H is assigned to Edwards Air Force Base and is used by Air Force Materiel Command at the USAF Flight Test Center.
One additional B-52H has been used by NASA at Dryden Flight Research Center (now Armstrong Flight Research Center), California as part of the Heavy-lift Airborne Launch program.
From 2 to 3 September 1996, two B-52Hs conducted a mission as part of Operation Desert Strike. The B-52s struck Baghdad power stations and communications facilities with 13 AGM-86C conventional air-launched cruise missiles (CALCM) during a 34-hour, 16,000 mi (26,000 km) round trip mission from Andersen Air Force Base, Guam, the longest distance ever flown for a combat mission.
On 24 March 1999, when Operation Allied Force began, B-52 bombers bombarded Serb targets throughout the Federal Republic of Yugoslavia, including during the Battle of Kosare.
The B-52 contributed to Operation Enduring Freedom in 2001 (Afghanistan/Southwest Asia), providing the ability to loiter high above the battlefield and provide Close Air Support (CAS) through the use of precision-guided munitions, a mission which previously would have been restricted to fighter and ground attack aircraft. In late 2001, ten B-52s dropped a third of the bomb tonnage in Afghanistan. B-52s also played a role in Operation Iraqi Freedom, which commenced on 20 March 2003 (Iraq/Southwest Asia). On the night of 21 March 2003, B-52Hs launched at least 100 AGM-86C CALCMs at targets within Iraq.
=== B-52 and maritime operations ===
The B-52 can be employed in ocean surveillance, anti-ship, and mine-laying operations. For example, a pair of B-52s can monitor 140,000 square miles (360,000 square kilometers) of ocean surface in two hours. During the 2018 BALTOPS exercise, B-52s conducted mine-laying missions off the coast of Sweden, simulating a counter-amphibious-invasion mission in the Baltic.
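The two surveillance-coverage figures are the same quantity in different units; a quick conversion check, using only the exact statute-mile definition and the figure in the text:

```python
# Unit-conversion check for the ocean-surveillance coverage quoted above:
# 140,000 square miles expressed in square kilometers.
MI_TO_KM = 1.609344  # exact length of the statute mile in kilometers

area_sq_mi = 140_000
area_sq_km = area_sq_mi * MI_TO_KM ** 2
print(round(area_sq_km))  # ~362,598, consistent with the rounded 360,000 km²
```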
In the 1970s, the U.S. Navy worried that combined attacks from Soviet bombers, submarines, and warships could overwhelm its defenses and sink its aircraft carriers. After the Falklands War, US planners feared the damage that could be created by 200-mile (170 nmi; 320 km)-range missiles carried by Tupolev Tu-22M "Backfire" bombers and 250-mile (220 nmi; 400 km)-range missiles carried by Soviet surface ships. New US Navy maritime strategy in the early 1980s called for the aggressive use of carriers and surface action groups against the Soviet navy. To help protect the carrier battle groups, some B-52Gs were modified to fire Harpoon anti-ship missiles. These bombers were based in Guam and Maine in the later 1970s to support both the Atlantic and Pacific fleets. In case of war, B-52s would coordinate with tanker support and surveillance aircraft. B-52Gs could strike Soviet Navy targets on the flanks of the US carrier battle groups, leaving them free to concentrate on offensive strikes against Soviet surface combatants. Mines laid by B-52s could establish minefields in significant enemy chokepoints (mainly the Kuril Islands and the GIUK gap). These minefields would force the Soviet fleet to disperse, making individual ships more vulnerable to Harpoon attacks.
From the 1980s, B-52Hs were modified to use a wide range of cruise missiles, laser- and satellite-guided bombs, and unguided munitions. B-52 bomber crews honed sea-skimming flight profiles that would allow them to penetrate stiff enemy defenses and attack Soviet ships.
Recent expansion and modernization of China's People's Liberation Army Navy has caused the USAF to re-implement strategies for finding and attacking ships. The B-52 fleet has been certified to use the Quickstrike family of naval mines fitted with JDAM-ER guided wing kits. This weapon provides the ability to lay minefields over wide areas in a single pass, with extreme accuracy, at a range of over 40 miles (35 nmi; 64 km). In addition, to enhance B-52 maritime patrol and strike performance, an AN/ASQ-236 Dragon's Eye underwing pod has been certified for use on B-52H bombers. Dragon's Eye contains an advanced electronically scanned array radar that allows B-52s to quickly scan vast areas of the Pacific Ocean. This radar complements the Litening infrared targeting pod already used by B-52s for inspecting ships. In 2019, Boeing selected the Raytheon AN/APG-82(V)1 radar to replace the aircraft's mechanically scanned AN/APQ-166 attack radar.
=== 21st century service ===
In August 2007, a B-52H ferrying AGM-129 ACM cruise missiles from Minot Air Force Base to Barksdale Air Force Base for dismantling was mistakenly loaded with six missiles with their nuclear warheads. The weapons did not leave USAF custody and were secured at Barksdale.
Four of 18 B-52Hs from Barksdale Air Force Base were retired and were in the "boneyard" of 309th AMARG at Davis-Monthan Air Force Base as of 8 September 2008.
In February 2015, B-52H 61-0007, nicknamed Ghost Rider, became the first stored B-52 to return to service after six years in storage at Davis-Monthan Air Force Base.
In May 2019, a second aircraft was resurrected from long-term storage in Davis-Monthan. The B-52, nicknamed "Wise Guy", had been at AMARG since 2008. It flew to Barksdale Air Force Base on 13 May 2019. It was completed in four months by a team of 13–20 maintainers from the 307th Maintenance Squadron.
B-52s are periodically refurbished at USAF maintenance depots such as Tinker Air Force Base, Oklahoma. Even while the USAF works on the new Long Range Strike Bomber, it intends to keep the B-52H in service until 2050, which is 95 years after the B-52 first entered service (and will be about 88 years after the last B-52H was delivered to the U.S. Air Force), an unprecedented length of service for any aircraft, civilian or military.
The USAF continues to rely on the B-52 because it remains an effective and economical heavy bomber in the absence of sophisticated air defenses, particularly in the type of missions that have been conducted since the end of the Cold War against nations with limited defensive capabilities. The B-52 has also continued in service because there has been no reliable replacement. The B-52 has the capacity to "loiter" for extended periods, and can deliver precision standoff and direct fire munitions from a distance, in addition to direct bombing. It has been a valuable asset in supporting ground operations during conflicts such as Operation Iraqi Freedom. The B-52 had the highest mission capable rate of the three types of heavy bombers operated by the USAF in the 2000–2001 period. The B-1 averaged a 53.7% ready rate, the B-2 Spirit achieved 30.3%, and the B-52 averaged 80.5%. The B-52's US$72,000 cost per hour of flight is more than the B-1B's US$63,000 cost per hour, but less than the B-2's US$135,000 per hour.
The Long Range Strike Bomber program is intended to yield a stealthy successor for the B-52 and B-1 that would begin service in the 2020s; it is intended to produce 80 to 100 aircraft. Two competitors, Northrop Grumman and a joint team of Boeing and Lockheed Martin, submitted proposals in 2014; Northrop Grumman was awarded a contract in October 2015.
On 12 November 2015, B-52s began freedom-of-navigation operations in the South China Sea in response to Chinese artificial islands in the region. Chinese forces, claiming jurisdiction within a 12-mile exclusion zone around the islands, ordered the bombers to leave the area, but the bombers refused, as the US does not recognize the claimed jurisdiction. On 10 January 2016, a B-52 overflew parts of South Korea, escorted by South Korean F-15Ks and US F-16s, in response to a claimed hydrogen bomb test by North Korea.
On 9 April 2016, an undisclosed number of B-52s arrived at Al Udeid Air Base in Qatar as part of Operation Inherent Resolve, the military intervention against ISIL. The B-52s took over heavy bombing duties from the B-1 Lancers that rotated out of the region in January 2016. In April 2016, B-52s also arrived in Afghanistan to take part in the war there, beginning operations in July and demonstrating the type's flexibility and precision in close air support missions.
According to a statement by the U.S. military, an undisclosed number of B-52s participated in the U.S. strikes on pro-government forces in eastern Syria on 7 February 2018. A number of B-52s were deployed in airstrikes against the Taliban during the 2021 Taliban offensive. In 2022, the US Air Force used a B-52 as a platform to test a Hypersonic Air-breathing Weapon Concept (HAWC) missile. In late October 2022, ABC News reported that the USAF intended to deploy six B-52s at RAAF Tindal in Australia in the near future, which would include building facilities to handle the aircraft.
On 3 November 2024, CENTCOM confirmed an undisclosed number of B-52s from Minot Air Force Base's 5th Bomb Wing arrived in the Middle East. On 8 December 2024, CENTCOM announced that B-52s, alongside undisclosed numbers of F-15E fighter aircraft and A-10 attack aircraft, had participated in a number of airstrikes against over 75 Islamic State targets within Syria, following the ousting of the al-Assad government in the country in the days prior.
== Variants ==
The B-52 went through several design changes and variants over its 10 years of production.
XB-52
YB-52
B-52A
NB-52A
B-52B/RB-52B
NB-52B
B-52C
RB-52C
B-52D
B-52E
JB-52E
NB-52E
B-52F
B-52G
B-52H
B-52J
== Operators ==
United States
The United States Air Force operates 72 of the original 744 B-52 aircraft, as of 2022.
Air Combat Command
53rd Wing – Eglin Air Force Base, Florida
49th Test and Evaluation Squadron (Barksdale)
57th Wing – Nellis Air Force Base, Nevada
340th Weapons Squadron (Barksdale)
Air Force Global Strike Command
2d Bomb Wing – Barksdale Air Force Base, Louisiana
11th Bomb Squadron
20th Bomb Squadron
96th Bomb Squadron
5th Bomb Wing – Minot Air Force Base, North Dakota
23d Bomb Squadron
69th Bomb Squadron
Air Force Materiel Command
412th Test Wing – Edwards Air Force Base, California
419th Flight Test Squadron
Air Force Reserve Command
307th Bomb Wing – Barksdale Air Force Base, Louisiana
93d Bomb Squadron
343d Bomb Squadron
NASA
Dryden Flight Research Center
1 modified ex-USAF NB-52B (52-8) "Mothership" Launch Aircraft operated from 1966 to 2004. It was then put on display at the North entrance to Edwards Air Force Base.
1 modified ex-USAF B-52H (61-25) Heavy Lift Launch Aircraft operated from 2001 to 2008. On 9 May 2008, that aircraft was flown for the last time to Sheppard Air Force Base, Texas, where it became a GB-52H maintenance trainer, never to fly again.
== Notable accidents ==
List of incidents resulting in loss of life, severe injuries, or loss of aircraft.
In 1956, there were three crashes in eight months, all at Castle Air Force Base.
A fourth crash occurred 42 days later on 10 January 1957 in New Brunswick, Canada.
On 29 March 1957, a B-52C (54-2676) retained by Boeing and used for tests as a JB-52C crashed during a Boeing test flight from Wichita, Kansas. Two of the four crew on board were killed.
On 11 February 1958, B-52D (56-0610) crashed short of the runway at Ellsworth AFB, South Dakota, due to total loss of power during final approach. Two of the eight crewmembers on board were killed in addition to three ground personnel. The crash was determined to be from frozen fuel lines that clogged fuel filters. It was previously unknown that jet fuel absorbs water vapor from the atmosphere. After this accident, over two hundred previous aircraft losses listed as "cause unknown" were attributed to frozen fuel lines.
On 8 September 1958, two B-52Ds (56‑0661 and 56‑0681) from the 92d Bombardment Wing collided in midair near Fairchild AFB. All thirteen crew members on the two aircraft were killed.
On 23 June 1959, B-52D (56‑0591), nicknamed "Tommy's Tigator", operating out of Larson AFB, crashed in the Ochoco National Forest near Burns, Oregon. The aircraft was being operated by Boeing personnel during a test flight and crashed after a turbulence-induced failure of the horizontal stabilizer at low altitude. All five Boeing personnel aboard were killed.
On 15 October 1959, B-52F (57‑0036) from the 4228th Strategic Wing at Columbus AFB, Mississippi, carrying two nuclear weapons collided in midair with a KC-135 tanker (57-1513) near Hardinsburg, Kentucky during a mid-air refueling. Four of the eight crew members on the bomber and all four crew on the tanker were killed. One of the nuclear bombs was damaged by fire, but both weapons were recovered.
On 15 December 1960, B-52D (55‑0098) from the 4170th Strategic Wing collided with a KC-135 during mid-air refueling; the tanker's refueling boom pierced the skin of the B-52's wing. Upon landing at Larson AFB, the starboard wing failed and the aircraft caught fire during the landing roll, damaging the runway. All crew members were evacuated. The KC-135 landed at Fairchild AFB.
On 19 January 1961, B-52B (53‑0390), call sign "Felon 22", from the 95th Bombardment Wing out of Biggs AFB, El Paso, Texas, crashed just north of Monticello, Utah, after a turbulence-induced structural failure at altitude in which the tail snapped off. Only the copilot, who ejected, survived; the other seven crewmen died.
On 24 January 1961, B-52G (58‑0187) from the 4241st Strategic Wing broke up in midair and crashed on approach to Seymour Johnson AFB near Goldsboro, North Carolina, dropping two nuclear bombs in the process without detonation. The aircraft suffered a fuel leak at altitude due to fatigue failure of the starboard wing. A loss of control resulted when the flaps were applied during the emergency approach to Seymour Johnson AFB. Three of the eight crew members were killed.
On 14 March 1961, B-52F (57‑0166) of the 4134th Strategic Wing operating out of Mather AFB, California, carrying two nuclear weapons experienced an uncontrolled decompression, necessitating a descent to 10,000 feet (3,000 m) to lower the cabin altitude. Due to increased fuel consumption at the lower altitude and being unable to rendezvous with a tanker in time, the aircraft ran out of fuel. The crew ejected safely, while the now-unmanned bomber crashed 15 miles (24 km) west of Yuba City, California.
On 7 April 1961, B-52B (53‑0380), nicknamed "Ciudad Juarez", from the 95th Bombardment Wing out of Biggs AFB was accidentally shot down by an AIM-9 Sidewinder launched from an F-100A Super Sabre (53-1662) of the New Mexico Air National Guard during a practice intercept. The missile struck an engine pylon on the B-52, resulting in separation of the wing. The aircraft crashed on Mount Taylor, New Mexico, with three of the eight crew on board killed. An electrical fault in the firing circuit caused the inadvertent launch of the missile.
On 24 January 1963, B-52C (53-0406) with nine crew members on board lost its vertical stabilizer due to buffeting stresses during turbulence at low altitude and crashed on Elephant Mountain in Piscataquis County, Maine, United States, six miles (9.7 km) from Greenville. Of the 9-man crew, only the pilot and the navigator survived the accident.
On 13 January 1964, the vertical stabilizer broke off B-52D (55‑0060), callsign "Buzz 14", causing a crash on Savage Mountain in western Maryland. Excessive turbulence resulted in structural failure in a winter storm. The two MK53 nuclear bombs being ferried were found "relatively intact". Four of the crew of five ejected but two of them died due to exposure from the winter cold.
On 18 June 1965, two B-52Fs (57‑0047 and 57‑0179) collided in mid-air during a refueling maneuver at 33,000 feet (10,000 m) above the South China Sea. The head-on collision took place just northwest of Luzon, Philippines, in the night sky above Super Typhoon Dinah, a category 5 storm with maximum winds of 185 mph (298 km/h) and waves reported as high as 70 feet (21 m). Both aircraft were from the 441st Bombardment Squadron of the 7th Bombardment Wing, Carswell AFB, Texas, and assigned to the 3960th Strategic Wing operating out of Andersen AFB, Guam. Eight of the twelve crew members aboard the two aircraft were killed. The rescue of the four crew members who had managed to eject, only to parachute into one of the largest typhoons of the 20th century, remains one of the most remarkable survival stories in aviation history. The collision occurred during the B-52's first combat mission: the two jets were part of the 30-plane inaugural Operation Arc Light deployment against a military target about 25 miles (40 km) northwest of Saigon, South Vietnam.
On 17 January 1966, a fatal collision occurred between a B-52G (58‑0256) from 68th Bombardment Wing out of Seymour Johnson AFB and a KC-135 Stratotanker (61-0273) over Palomares, Almería, Spain, killing all four on the tanker and three of the seven on the B-52G. The two unexploded B-28 FI 1.45-megaton-range nuclear bombs on the B-52 were eventually recovered; the conventional explosives of two more bombs detonated on impact, with serious dispersion of both plutonium and uranium, but without triggering a nuclear explosion. After the crash, 1,400 metric tons (3,100,000 lb) of contaminated soil was sent to the United States. In 2006, an agreement was made between the United States and Spain to investigate and clean the pollution still remaining as a result of the accident.
On 16 October 1984, a B-52G out of Fairchild AFB, Spokane, Washington, crashed on Hunts Mesa, in the Monument Valley Navajo Tribal Park. Five of the seven crew members were able to eject and survived the crash. Sergeant David Felix and Colonel William Ivy were killed.
On 24 June 1994, B-52H Czar 52 (61-0026) crashed at Fairchild AFB, Washington, during practice for an airshow. All four crew members died in the accident.
On 21 July 2008, a B-52H, Raidr 21 (60-0053), deployed from Barksdale AFB, Louisiana, to Andersen AFB, Guam, crashed approximately 25 miles (40 km) off the coast of Guam. All six crew members were killed (five standard crew members and a flight surgeon).
== Aircraft on display ==
== Specifications (B-52H) ==
Data from Knaack, USAF fact sheet, Quest for Performance

General characteristics
Crew: 5 (pilot, copilot, weapon systems officer, navigator, electronic warfare officer)
Length: 159 ft 4 in (48.5 m)
Wingspan: 185 ft 0 in (56.4 m)
Height: 40 ft 8 in (12.4 m)
Wing area: 4,000 sq ft (370 m2)
Airfoil: NACA 63A219.3 mod root, NACA 65A209.5 tip
Empty weight: 185,000 lb (83,250 kg)
Gross weight: 265,000 lb (120,000 kg)
Max takeoff weight: 488,000 lb (221,323 kg)
Fuel capacity: 312,197 lb (141,610 kg), 47,975 U.S. gal (181,610 L)
Zero-lift drag coefficient: 0.0119 (estimated)
Drag area: 47.60 sq ft (4.42 m2)
Aspect ratio: 8.56
Powerplant: 8 × Pratt & Whitney TF33-P-3/103 turbofans, 17,000 lbf (76 kN) thrust each
Performance
Maximum speed: 650 mph (1,050 km/h, 560 kn)
Cruise speed: 509 mph (819 km/h, 442 kn)
Combat range: 8,800 mi (14,200 km, 7,600 nmi)
Ferry range: 10,145 mi (16,327 km, 8,816 nmi)
Service ceiling: 50,000 ft (15,000 m)
Rate of climb: 6,270 ft/min (31.85 m/s)
Wing loading: 120 lb/sq ft (586 kg/m2)
Thrust/weight: 0.31
Lift-to-drag ratio: 21.5 (estimated)
Armament
Guns: 1× 20 mm (0.787 in) M61 Vulcan cannon originally mounted in a remote-controlled tail turret on the H-model, removed in 1991 from all operational aircraft.
Bombs: Approximately 70,000 pounds (32,000 kg) mixed ordnance; bombs, mines, missiles, in various configurations.
Avionics
Electro-optical viewing system that uses platinum silicide forward looking infrared and high resolution low-light-level television sensors
ADR-8 chaff rocket (1965–1970)
LITENING Advanced Targeting System
Sniper Advanced Targeting Pod
IBM AP-101 computer
AN/ALE-20 – Infrared flare dispenser (12 systems installed)
AN/ALE-24 – chaff dispenser (8 systems installed)
AN/ALQ-122 – Motorola multiple false target generator
AN/ALQ-153 – Northrop Grumman tail missile approach warning system
AN/ALQ-155 – Northrop Grumman jammer power management system
AN/ALQ-172(V) – ITT Inc. electronic countermeasures system
AN/ALR-20A – Radar warning system
AN/ALR-46 – Northrop Grumman digital radar warning receiver (RWR)
AN/ALT-32 – Noise jammer
== Notable appearances in media ==
A B-52 carrying nuclear weapons was a key part of Stanley Kubrick's 1964 black comedy film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb. A 1960s hairstyle, the beehive, is also called a B-52 for its resemblance to the aircraft's distinctive nose. The popular band the B-52's was subsequently named after this hairstyle.
== See also ==
BRANE – airborne computer built by IBM for the B-52
James Lore Murray
Related development
Conroy Virtus
Aircraft of comparable role, configuration, and era
V Force bombers:
Avro Vulcan
Handley Page Victor
Vickers Valiant
Convair YB-60
Myasishchev M-4
Tupolev Tu-95
Related lists
Accidents and incidents involving the B-52
List of active United States military aircraft
List of bomber aircraft
List of military electronics of the United States
== Notes ==
== References ==
=== Bibliography ===
== External links ==
USAF B-52 Fact Sheet
B-52 Stratofortress history on fas.org
B-52 profile on AerospaceWeb.org
B-52 Stratofortress Association website
"Boeing B-52 – the Strategic Stratofortress" a 1957 Flight article by Bill Gunston
"Special Feature: The B-52 Stratofortress" (PDF). Air & Space Power Journal. 35 (3): 4–15. Fall 2021. |
Balloon (aeronautics) | In aeronautics, a balloon is an unpowered aerostat, which remains aloft or floats due to its buoyancy. A balloon may be free, moving with the wind, or tethered to a fixed point. It is distinct from an airship, which is a powered aerostat that can propel itself through the air in a controlled manner.
Many balloons have a basket, gondola, or capsule suspended beneath the main envelope for carrying people or equipment (including cameras and telescopes, and flight-control mechanisms).
== Aerostation ==
Aerostation is an obsolete term for ballooning and for the construction, operation, and navigation of lighter-than-air vehicles. Tiberius Cavallo's The History and Practice of Aerostation was published in 1785, and other books on the subject followed, including one by Monck Mason. The dramatist Frederick Pilon wrote a play with Aerostation as its title.
== Principles ==
A balloon is conceptually the simplest of all flying machines. The balloon is a fabric envelope filled with a gas that is lighter than the surrounding atmosphere. As the entire balloon is less dense than its surroundings, it rises, taking along with it a basket, attached underneath, which carries passengers or payload. Although a balloon has no propulsion system, a degree of directional control is possible by making the balloon rise or sink in altitude to find favorable wind directions.
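The buoyancy argument above can be put in numbers: a balloon's payload capacity is the weight of displaced air, minus the weight of the lifting gas and of the envelope and basket. The figures in this sketch are illustrative assumptions, not from the article.

```python
# Payload a balloon can just lift, in kg: buoyancy minus structure mass.
# Net lift follows F = V * (rho_air - rho_gas) * g; working in mass units
# (kg of lift) lets us drop g. All numbers below are illustrative.

def net_lift_kg(volume_m3, rho_air, rho_gas, structure_kg):
    """Spare lifting capacity in kg (negative => the balloon cannot lift off)."""
    buoyant_kg = volume_m3 * (rho_air - rho_gas)  # displaced air minus gas mass
    return buoyant_kg - structure_kg

# A 2,800 m^3 envelope of hot air (~0.95 kg/m^3, an assumed figure) in
# sea-level air (~1.225 kg/m^3), with 250 kg of envelope, burner, and basket:
print(round(net_lift_kg(2800, 1.225, 0.95, 250), 1))  # remaining payload, kg
```

The same function shows why a larger envelope or a lighter gas raises capacity: lift scales linearly with both volume and the density difference.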
There are three main types of balloons:
The hot air balloon or Montgolfière obtains its buoyancy by heating the air inside the balloon; it has become the most common type.
The gas balloon or Charlière is inflated with a gas of lower molecular weight than the ambient atmosphere; most gas balloons operate with the internal pressure of the gas the same as the pressure of the surrounding atmosphere; a superpressure balloon can operate with the lifting gas at pressure that exceeds that of the surrounding air, with the objective of limiting or eliminating the loss of gas from day-time heating; gas balloons are filled with gases such as:
hydrogen – originally used extensively but, since the Hindenburg disaster, is now seldom used due to its high flammability;
coal gas – although giving around half the lift of hydrogen, extensively used during the nineteenth and early twentieth century, since it was cheaper than hydrogen and readily available;
helium – used today for all airships and most manned gas balloons;
other gases have included ammonia and methane, but these have poor lifting capacity and other safety defects and have never been widely used.
The Rozière type has both heated and unheated lifting gases in separate gasbags. This type of balloon is sometimes used for long-distance record flights, such as the recent circumnavigations, but is not otherwise in use.
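The relative merits of the lifting gases listed above follow directly from molar mass: at the same temperature and pressure, lift per cubic metre is proportional to the difference between the molar mass of air and that of the gas. A rough ideal-gas comparison (the temperature and pressure chosen here are assumptions for illustration):

```python
# Lift per m^3 at 0 degC and 1 atm, via the ideal gas law: rho = M * P / (R * T).
# Ideal-gas sketch; real coal gas is a variable mixture, so it is omitted here.
R, T, P = 8.314, 273.15, 101325.0   # J/(mol K), K, Pa

def density(molar_kg):
    """Gas density in kg/m^3 from molar mass in kg/mol."""
    return molar_kg * P / (R * T)

rho_air = density(0.02897)           # mean molar mass of dry air
for name, molar in [("hydrogen", 0.00202), ("helium", 0.0040),
                    ("methane", 0.01604), ("ammonia", 0.01703)]:
    print(name, round(rho_air - density(molar), 2))  # lift in kg per m^3
```

The output illustrates the article's points: helium gives only a few percent less lift than hydrogen, while ammonia and methane lift less than half as much per cubic metre.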
Both the hot air, or Montgolfière, balloon and the gas balloon are still in common use. Montgolfière balloons are relatively inexpensive, as they do not require high-grade materials for their envelopes, and they are popular for balloonist sport activity.
=== Hot air balloons ===
The first balloon to carry passengers used hot air to obtain buoyancy and was built by the brothers Joseph and Étienne Montgolfier in Annonay, France, in 1783. The first passenger flight took place on 19 September 1783, carrying a sheep, a duck, and a rooster.
The first tethered manned balloon flight was by a larger Montgolfier balloon, probably on 15 October 1783. The first free balloon flight was by the same Montgolfier balloon on 21 November 1783.
When heated, air expands, so a given volume of space contains less air. This makes it lighter and, if its lifting power is greater than the weight of the balloon containing it, it will lift the balloon upwards. A hot air balloon can only stay up while it has fuel for its burner, to keep the air hot enough.
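The effect described above can be quantified with the ideal-gas relation: at constant pressure, density varies inversely with absolute temperature. A minimal sketch, using assumed temperatures:

```python
# At constant pressure, air density scales as rho = rho0 * T0 / T (kelvin).
def hot_air_density(rho_ambient, t_ambient_c, t_inside_c):
    """Density of envelope air heated to t_inside_c, in kg/m^3."""
    return rho_ambient * (t_ambient_c + 273.15) / (t_inside_c + 273.15)

rho_out = 1.225                                   # sea-level air at 15 degC
rho_in = hot_air_density(rho_out, 15.0, 100.0)    # envelope heated to ~100 degC
print(round(rho_in, 3))             # density of the heated air, kg/m^3
print(round(rho_out - rho_in, 3))   # lift per cubic metre of envelope, kg
```

A quarter of a kilogram of lift per cubic metre is why hot air envelopes must be so large, and why lift fades as the air cools once the burner stops.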
The Montgolfiers' early hot air balloons used a solid-fuel brazier, which proved less practical than the hydrogen balloons that followed almost immediately, and hot air ballooning soon died out.
In the 1950s, the convenience and low cost of bottled gas burners led to a revival of hot air ballooning for sport and leisure.
The height of a hot air balloon is controlled by turning the burner up or down as needed. A gas balloon, by contrast, often carries ballast weights that can be dropped if the balloon gets too low, and some lifting gas must be vented through a valve in order to land.
=== Gas balloons ===
A man-carrying balloon using the light gas hydrogen for buoyancy was made by Professor Jacques Charles and flown less than a month after the Montgolfier flight, on 1 December 1783. Gas balloons have greater lift for a given volume, so they do not need to be so large, and they can also stay up for much longer than hot air balloons, so gas balloons dominated ballooning for the next 200 years. In the 19th century, it was common to use manufactured town gas (coal gas) to fill balloons; this was not as light as pure hydrogen, having about half the lifting power, but it was much cheaper and readily available.
Light gas balloons are predominant in scientific applications, as they are capable of reaching much higher altitudes for much longer periods of time. They are generally filled with helium. Although hydrogen has more lifting power, it is explosive in an atmosphere rich in oxygen. With a few exceptions, scientific balloon missions are unmanned.
There are two types of light-gas balloons: zero-pressure and superpressure. Zero-pressure balloons are the traditional form of light-gas balloon. They are partially inflated with the light gas before launch, with the gas pressure the same both inside and outside the balloon. As the zero-pressure balloon rises, its gas expands to maintain the zero pressure difference, and the balloon's envelope swells.
At night, the gas in a zero-pressure balloon cools and contracts, causing the balloon to sink. A zero-pressure balloon can only maintain altitude by releasing gas when it goes too high, where the expanding gas can threaten to rupture the envelope, or releasing ballast when it sinks too low. Loss of gas and ballast limits the endurance of zero-pressure balloons to a few days.
A superpressure balloon, in contrast, has a tough and inelastic envelope that is filled with light gas to pressure higher than that of the external atmosphere, and then sealed. The superpressure balloon cannot change size greatly, and so maintains a generally constant volume. The superpressure balloon maintains an altitude of constant density in the atmosphere, and can maintain flight until gas leakage gradually brings it down.
Superpressure balloons offer flight endurance of months, rather than days. In fact, in typical operation an Earth-based superpressure balloon mission is ended by a command from ground control to open the envelope, rather than by natural leakage of gas.
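Because a superpressure balloon keeps a fixed volume, it settles at the altitude where atmospheric density equals its total mass divided by that volume. The sketch below uses an isothermal-atmosphere model with an assumed 8 km density scale height; the balloon figures are likewise illustrative, not from any real mission.

```python
import math

# A fixed-volume superpressure balloon floats where air density equals its
# system density (total mass / volume). Isothermal-atmosphere sketch with
# assumed sea-level density and scale height.
RHO0, H = 1.225, 8000.0    # sea-level density (kg/m^3), scale height (m)

def float_altitude(total_mass_kg, volume_m3):
    """Altitude in metres where rho(z) = RHO0 * exp(-z / H) matches the system."""
    rho_float = total_mass_kg / volume_m3
    return H * math.log(RHO0 / rho_float)

# A hypothetical 50 kg system with a 1,000 m^3 envelope:
print(round(float_altitude(50.0, 1000.0)))  # float altitude in metres
```

Since the float level is tied to density rather than to gas temperature, day-night heating changes pressure inside the sealed envelope but barely moves the balloon, which is what gives superpressure designs their long, stable flights.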
High-altitude balloons are used as high-flying vessels to carry scientific instruments (like weather balloons) or to reach near-space altitudes to take footage or photos of the Earth. These balloons can fly over 100,000 feet (30.5 km) into the air and are designed to burst at a set altitude, where a parachute deploys to carry the payload safely back to Earth.
Cluster ballooning uses many smaller gas-filled balloons for flight.
=== Combination balloons ===
Early hot air balloons could not stay up for very long because they used a lot of fuel, while early hydrogen balloons were difficult to take higher or lower as desired because the aeronaut could only vent the gas or drop off ballast a limited number of times. Pilâtre de Rozier realised that for a long-distance flight such as crossing the English Channel, the aeronaut would need to make use of the differing wind directions at different altitudes. It would be essential therefore to have good control of altitude while still able to stay up for a long time. He developed a combination balloon having two gas bags, the Rozier balloon. The upper one held hydrogen and provided most of the steady lift. The lower one held hot air and could be quickly heated or cooled to provide the varying lift for good altitude control.
In 1785 Pilâtre de Rozier took off in an attempt to fly across the Channel, but shortly into the flight the hydrogen gas bag caught fire and de Rozier did not survive the ensuing accident. This earned de Rozier the title "The First to Fly and the First to Die".
Not until the 1980s was technology developed to allow safe operation of the Rozier type, for example by using non-flammable helium as the lifting gas; several designs have since successfully undertaken long-distance flights.
=== Tethering and kite balloons ===
As an alternative to free flight, a balloon may be tethered to allow reliable take off and landing at the same location. Some of the earliest balloon flights were tethered for safety, and since then balloons have been tethered for many purposes, including military observation and aerial barrage, meteorological and commercial uses.
The natural spherical shape of a balloon is unstable in high winds. Tethered balloons for use in windy conditions are often stabilised by aerodynamic shaping and connecting to the tether by a halter arrangement. These are called kite balloons.
A kite balloon is distinct from a kytoon, which obtains a portion of its lift aerodynamically.
== History ==
=== Antecedents ===
Unmanned hot air balloons are mentioned in Chinese history. Zhuge Liang of the Shu Han kingdom, in the Three Kingdoms era (220–280 AD) used airborne lanterns for military signaling. These lanterns are known as Kongming lanterns (Kǒngmíng dēng 孔明灯). The Mongolian army learned of the Kongming lantern from the Chinese and used it in Battle of Legnica during the Mongol invasion of Poland.
In 1709 the Brazilian-Portuguese cleric Bartolomeu de Gusmão made a balloon filled with heated air rise inside a room in Lisbon. On 8 August 1709, in Lisbon, Gusmão managed to lift a small paper balloon about four metres with hot air, in front of King John V and the Portuguese court. He also claimed to have built a balloon named Passarola (Big bird) and to have attempted to lift himself from Saint George Castle in Lisbon, landing about one kilometre away. However, the claim of this feat remains uncertain: although there is a record of the flight in the sources used by the FAI, its exact distance and conditions are not confirmed.
=== The first modern balloons ===
Following Henry Cavendish's 1766 work on hydrogen, Joseph Black proposed that a balloon filled with hydrogen would be able to rise in the air.
The first recorded manned flight was made in a hot air balloon built by the Montgolfier brothers on 21 November 1783. The flight started in Paris and reached a height of 500 feet or so. The pilots, Jean-François Pilâtre de Rozier and François Laurent d'Arlandes, covered about 5.5 miles (8.9 km) in 25 minutes.
On 1 December 1783, Professor Jacques Charles and the Robert brothers made the first gas balloon flight, also from Paris. Their hydrogen-filled balloon flew to almost 2,000 feet (600 m), stayed aloft for over 2 hours and covered a distance of 27 miles (43 km), landing in the small town of Nesles-la-Vallée.
The first Italian balloon ascent was made by Count Paolo Andreani and two other passengers in a balloon designed and constructed by the three Gerli brothers, on 25 February 1784. A public demonstration occurred in Brugherio a few days later, on 13 March 1784, when the vehicle flew to a height of 1,537 metres (5,043 ft) and a distance of 8 kilometres (5.0 mi). On 28 March Andreani received a standing ovation at La Scala, and later a medal from Joseph II, Holy Roman Emperor.
De Rozier, together with Joseph Proust, took part in a further flight on 23 June 1784, in a modified version of the Montgolfiers' first balloon, christened La Marie-Antoinette after the Queen. They took off in front of the King of France and King Gustav III of Sweden. The balloon flew north at an altitude of approximately 3,000 metres, above the clouds, travelling 52 km in 45 minutes before cold and turbulence forced them to descend past Luzarches, between Coye and Orry-la-Ville, near the Chantilly forest.
The first balloon ascent in Britain was made by James Tytler on 25 August 1784 at Edinburgh, Scotland, in a hot air balloon.
The first aviation disaster occurred in May 1785, when a balloon crashed in the town of Tullamore, County Offaly, Ireland, starting a fire that burned down about 100 houses. To this day, the town shield depicts a phoenix rising from the ashes.
Jean-Pierre Blanchard went on to make the first manned flight of a balloon in America on 9 January 1793, after touring Europe to set the record for the first balloon flight in countries including the Austrian Netherlands, Germany, the Netherlands and Poland. His hydrogen filled balloon took off from a prison yard in Philadelphia, Pennsylvania. The flight reached 5,800 feet (1,770 m) and landed in Gloucester County, New Jersey. President George Washington was among the guests observing the takeoff. Sophie Blanchard, married to Jean-Pierre, was the first woman to pilot her own balloon and the first woman to adopt ballooning as a career.
On 29 September 1804, Abraham Hopman became the first Dutchman to make a successful balloon flight in the Netherlands.
Gas balloons became the most common type from the 1790s until the 1960s. The French military observation balloon L'Intrépide of 1795 is the oldest preserved aircraft in Europe; it is on display in the Heeresgeschichtliches Museum in Vienna. Jules Verne wrote a short, non-fiction story, published in 1852, about being stranded aboard a hydrogen balloon.
The earliest successful balloon flight recorded in Australia was by William Dean in 1858. His balloon was gas-filled and travelled 30 km with two people aboard. On 5 January 1870, T. Gale made an ascent from the Domain in Sydney. His balloon was 17 metres in length by 31 metres in circumference, and his ascent, with him seated on the netting, carried him about a mile before he landed in Glebe.
Henri Giffard also developed a tethered balloon for passengers in 1878 in the Tuileries Garden in Paris. The first tethered balloon in modern times was made in France at Chantilly Castle in 1994 by Aerophile SA.
Ballooning developed as a leisure activity. It was given a significant boost when Charles Green discovered that readily available coal gas, then coming into urban use, gave about half the lifting power of hydrogen, which had to be specially manufactured. In 1836 Green made a long-distance flight of almost 500 miles from London, England, to Weilburg, Germany.
=== Military use ===
The first military use of a balloon was at the Battle of Fleurus in 1794, when L'Entreprenant was used by the French Aerostatic Corps to watch the movements of the enemy. On 2 April 1794, an aeronauts corps was created in the French army; however, given the logistical problems linked with the production of hydrogen on the battlefield (it required constructing ovens and pouring water on white-hot iron), the corps was disbanded in 1799.
The first major use of balloons in the military occurred during the American Civil War with the Union Army Balloon Corps established in 1861.
During the Paraguayan War (1864–70), observation balloons were used by the Brazilian Army.
Balloons were used by the British Royal Engineers in 1885 for reconnaissance and observation purposes during the Bechuanaland Expedition and the Sudan Expedition. Although experiments in Britain had been conducted as early as 1863, a School of Ballooning was not established at Chatham, Medway, Kent until 1888. During the Anglo-Boer War (1899–1902), use was made of observation balloons. An 11,500-cubic-foot (330 m3) balloon was kept inflated for 22 days and marched 165 miles into the Transvaal with the British forces.
Hydrogen-filled balloons were widely used during World War I (1914–1918) to detect enemy troop movements and to direct artillery fire. Observers phoned their reports to officers on the ground who then relayed the information to those who needed it. Balloons were frequently targets of opposing aircraft. Planes assigned to attack enemy balloons were often equipped with incendiary bullets, for the purpose of igniting the hydrogen.
The Aeronaut Badge was established by the United States Army in World War I to denote service members who were qualified balloon pilots. Observation balloons were retained well after the Great War, being used in the Russo-Finnish Wars: the Winter War of 1939–40 and the Continuation War of 1941–44.
During World War II the Japanese launched thousands of hydrogen "fire balloons" against the United States and Canada. In Operation Outward the British used balloons to carry incendiaries to Nazi Germany. During 2018, incendiary balloons and kites were launched from Gaza at Israel, burning some 12,000 dunams (3,000 acres) in Israel.
Large helium balloons are used by the South Korean government and private activists advocating freedom in North Korea. They float hundreds of kilometers across the border carrying news from the outside world, illegal radios, foreign currency and gifts of personal hygiene supplies. A North Korean military official has described it as "psychological warfare" and threatened to attack South Korea if their release continued.
=== Hot air returns ===
Ed Yost redesigned the hot air balloon in the late 1950s using rip-stop nylon fabrics and high-powered propane burners to create the modern hot air balloon. His first flight of such a balloon, lasting 25 minutes and covering 3 miles (5 km), occurred on 22 October 1960 in Bruning, Nebraska. Yost's improved design for hot air balloons triggered the modern sport balloon movement. Today, hot air balloons are much more common than gas balloons.
In the late 1970s the British hot air balloonist Julian Nott constructed a hot air balloon using technologies he believed would have been available to the Nazca culture of Peru some 1,500 to 2,000 years earlier, and demonstrated that it could fly, repeating the feat in 2003. Nott has speculated that the Nazca might have used such balloons as a tool for designing the Nazca Lines. He also pioneered the use of hybrid energy, in which solar power is a significant heat source, and in 1981 he crossed the English Channel.
== Modern ballooning ==
In 2012, the Red Bull Stratos balloon took Felix Baumgartner to 128,100 ft. for a freefall jump from the stratosphere.
=== Sports ===
=== Commercial ===
Tethered gas balloons have been installed as amusement rides in Paris since 1999, in Berlin since 2000, in Disneyland Paris since 2005, in the San Diego Wild Animal Park since 2005, in Walt Disney World in Orlando since 2009, and in Singapore (the DHL Balloon) since 2006. Modern tethered gas balloons are made by Aerophile SAS.
Hot air balloons used in sport flying are sometimes made in special designs to advertise a company or product, such as the Chubb fire extinguisher illustrated.
=== Astronautics ===
==== Balloon satellites ====
A balloon in space uses internal gas pressure only to maintain its shape.
The Echo satellite was a balloon satellite launched into Earth orbit in 1960 and used for passive relay of radio communication. PAGEOS was launched in 1966 for worldwide satellite triangulation, allowing for greater precision in the calculation of different locations on the planet's surface.
==== Planetary probes ====
In 1985, the Soviet space probes Vega 1 and Vega 2 each released a balloon carrying scientific experiments into the atmosphere of Venus. The balloons transmitted signals to Earth for two days.
== Ballooning records ==
On 19 October 1910, Alan Hawley and Augustus Post landed in the wilderness of Quebec, Canada after traveling for 48 hours and 1,887.6 kilometers (1,173 mi) from St. Louis during the Gordon Bennett International Balloon Race, setting a distance record that held for more than 20 years. It took the men a week to hike out of the woods, during which time search parties had been mobilized and many had taken the pair for dead.
On 13 December 1913 through 17 December 1913 Hugo Kaulen stayed aloft for 87 hours. His record lasted until 1976.
On 27 May 1931, Auguste Piccard and Paul Kipfer became the first to reach the stratosphere in a balloon.
On 31 August 1933, Alexander Dahl took the first picture of the Earth's curvature in an open hydrogen gas balloon.
The helium-filled Explorer II balloon, piloted by US Army Air Corps officers Capt. Orvil A. Anderson, Maj. William E. Kepner and Capt. Albert W. Stevens, reached a new record height of 22,066 m (72,395 ft) on 11 November 1935. This followed the same crew's previous near-fatal plunge in July 1934 in a predecessor craft, Explorer, after its canopy ruptured, as it transpired, just 190 m (624 ft) short of the then-current altitude record of 22,000 m (72,178 ft) set by the Soviet balloon Osoaviakhim-1.
In 1976, Ed Yost set 13 aviation world records for distance traveled and time aloft in his attempt to cross the Atlantic Ocean solo by balloon (3,938 km in 107 hours 37 minutes).
In 1978, Ben Abruzzo, Maxie Anderson, and Larry Newman became the first to cross the Atlantic Ocean by balloon, in the helium-filled Double Eagle II.
The absolute altitude record for manned balloon flight stood for over fifty years at 34,668 m (113,739 ft), set on 4 May 1961 by Malcolm Ross and Victor Prather in the Strato-Lab V balloon launched from the deck of the USS Antietam in the Gulf of Mexico.
That record was broken at 38,960.5 m (127,823 ft) by Felix Baumgartner in the Red Bull Stratos balloon launched from Roswell, New Mexico on 14 October 2012.
The current record altitude for a manned balloon, 41,419.0 m (135,889 ft), was set by Alan Eustace on 24 October 2014 as part of the StratEx Space Dive project.
On 1 March 1999 Bertrand Piccard and Brian Jones set off in the balloon Breitling Orbiter 3 from Château d'Oex in Switzerland on the first non-stop balloon circumnavigation around the globe. They landed in Egypt after a 40,814 km (25,361 mi) flight lasting 19 days, 21 hours and 55 minutes.
The altitude record for an unmanned balloon is 53.0 kilometres (173,882 ft), in the mesosphere, reached by a balloon with a volume of 60,000 cubic metres launched by JAXA on 25 May 2002 from Iwate Prefecture, Japan. This is the greatest height ever attained by an atmospheric vehicle; only rockets, rocket planes, and ballistic projectiles have flown higher.
In 2015, pilots Leonid Tiukhtyaev and Troy Bradley arrived safely in Baja California, Mexico, after a journey of 10,711 km. The two men, from Russia and the United States respectively, started in Japan and flew their helium balloon, Two Eagles, across the Pacific, reaching Mexico after 160 hours aloft and setting new distance and duration records for gas balloons.
== See also ==
== Notes ==
== References ==
== External links ==
History and legacy of Professor Thaddeus Lowe – American Union Army Balloonist by his Great Great Grandson
Review of Falling Upwards: How We Took to the Air, Richard Holmes's History of Ballooning, Scarlett Baron, Oxonian Review, June 2013
The Early Years of Sport Ballooning
Hot Air Balloon Simulator – learn the dynamics of a hot air balloon on the Internet-based simulator.
Stratocat – historical compilation project on the use of stratospheric balloons in scientific research, the military field, and aerospace activity.
Royal Engineers Museum Royal Engineers and Aeronautics
Royal Engineers Museum Archived 18 January 2007 at the Wayback Machine Early British Military Ballooning (1863)
Balloon fabrics made of Goldbeater's skins by Chollet, L. Technical Section of Aeronautics. December 1922 Archived 2 September 2009 at the Wayback Machine
The principle of a balloon flight – VIDEO |
Bartholomeu Lourenço de Gusmão | Bartolomeu Lourenço de Gusmão (December 1685 – 18 November 1724) was a Portuguese priest and naturalist from Colonial Brazil who was a pioneer of lighter-than-air aerostat design, being among the first scholars at that time to understand the operational principles of the hot air balloon and to build a functional prototype of such device. He is also one of the main characters in Nobel Prize-winning José Saramago's Baltasar and Blimunda.
== Early life ==
Gusmão was born at Santos, then part of the Portuguese colony of Brazil.
He began his novitiate in the Society of Jesus at Bahia when he was about fifteen years old, but left the order in 1701. He went to Portugal and found a patron at Lisbon in the person of the Marquis of Abrantes. He completed his course of study at the University of Coimbra, devoting his attention principally to philology and mathematics, but received the title of Doctor of Canon Law (related to Theology). He is said to have had a remarkable memory and a great command of languages.
== Airship ==
In 1709, he presented a petition to King João V of Portugal, seeking royal favour for his invention of an airship, in which he expressed the greatest confidence. The contents of this petition have been preserved, together with a picture and description of his airship. Developing the ideas of Francesco Lana de Terzi, S.J., Gusmão wanted to spread a huge sail over a boat-like body like the cover of a transport wagon; the boat itself was to contain tubes through which, when there was no wind, air would be blown into the sail by means of bellows. The vessel was to be propelled by the agency of magnets which were to be encased in two hollow metal balls. The public test of the machine, which was set for 24 June 1709, did not take place.
It is known that Gusmão was working on this principle at the public exhibition he gave before the Court on 8 August 1709, in the hall of the Casa da Índia in Lisbon, when he propelled a small balloon to the roof using combustion from a flame. The king rewarded the inventor by appointing him to a professorship at Coimbra and made him a canon. He was also one of the fifty selected as members of the Academia Real de História, founded in 1720; and in 1722 he was made chaplain to the Court. Gusmão also busied himself with other inventions, but in the meantime continued his work on his airship schemes, the idea for which he is said to have conceived while a novice at Bahia. His designs included a ship to sail in the air consisting of a triangular gas-filled pyramid, but he died without making progress.
== Persecution ==
One account of Gusmão's work suggests that the Portuguese Inquisition forbade him to continue his aeronautic investigations and persecuted him because of them, but this is probably a later invention. It dates, however, from at least the end of the 18th century, as the following article in the London Daily Universal Register (later The Times) of 20 October 1786, makes clear:
By accounts from Lisbon we are assured, that in consequence of the experiments made there with the Montgolfier balloon, the literati of Portugal had been incited to make numerous researches on the subject; in consequence of which they pretend that the honour of the invention is due to Portugal. They say that in 1720, a Brazilian Jesuit, named Bartholomew Gusmao, possessed of abilities, imagination, and address, by permission of John V. fabricated a balloon in a place contiguous to the Royal Palace, and one day, in presence of their Majesties, and an immense crowd of spectators, raised himself, by means of a fire lighted in the machine, as high as the cornice of the building; but through the negligence and want of experience of those who held the cords, the machine took an oblique direction, and, touching the cornice, burst and fell.
The balloon was in the form of a bird with a tail and wings. The inventor proposed to make new experiments, but, chagrined at the raillery of the common people, who called him wizzard, and terrified by the Inquisition, he took the advice of his friends, burned his manuscripts, disguised himself, and fled to Spain, where he soon after died in an hospital.
They add, that several learned men, French and English, who had been at Lisbon to verify the fact, had made enquiries at the Carmelite monastery, where Gusmao had a brother, who had preserved some of his manuscripts on the manner of constructing aerostatic machines. Various living persons affirm that they were present at the Jesuit's experiments, and that he received the surname of Voador, or Flying-man.
Contemporary documents do attest that information was laid before the Inquisition against Gusmão, but on quite another charge. The inventor fled to Spain and fell ill of a fever, of which he died in Toledo. He wrote: Manifesto summário para os que ignoram poderse navegar pelo elemento do ar (Short Manifesto for those who are unaware that it is possible to sail through the element of air, 1709); and Vários modos de esgotar sem gente as naus que fazem água (Several ways of draining, without people, ships that leak water, 1710); some of his sermons also have been printed.
== Legacy ==
In 1936, the Bartolomeu de Gusmão Airport was built in Rio de Janeiro, Brazil, by the Luftschiffbau Zeppelin to operate with the rigid airships Graf Zeppelin and Hindenburg. In 1941, it was taken over by the Brazilian Air Force and renamed Santa Cruz Air Force Base. Presently, the airport serving Araraquara is named Bartolomeu de Gusmão Airport.
== In popular culture ==
Passarola Rising by Azhar Abidi
Baltasar and Blimunda by José Saramago
== See also ==
List of firsts in aviation
Adelir Antônio de Carli, aka Padre Baloeiro, a Brazilian priest who died during an attempt at cluster ballooning in 2008
List of Catholic clergy scientists
== References ==
Daily Universal Register (The Times), Friday, Oct 20, 1786; p. 2; Iss. 560; col C
Gusmao, Bartolomeu de. Reproduction fac-similé d'un dessin à la plume de sa description et de la pétition adressée au Jean V. (de Portugal) en langue latine et en écriture contemporaine (1709) retrouvés récemment dans les archives du Vatican du célèbre aéronef de Bartholomeu Lourenco de Gusmão "l'homme volant" portugais, né au Brésil (1685–1724) précurseur des navigateurs aériens et premier inventeur des aérostats. 1917 (Lausanne : Impr. Réunies S.A.) in French and Latin.
Bartolomeu Lourenço de Gusmão 1685–1724 "Translated from the article which appeared on the Bartolomeu Lourenço de Gusmão page of the Brazilian Air Force website." |
Beechcraft | Beechcraft is an American brand of civil aviation and military aircraft owned by Textron Aviation since 2014, headquartered in Wichita, Kansas. Originally, it was a brand of Beech Aircraft Corporation, an American manufacturer of general aviation, commercial, and military aircraft, ranging from light single-engined aircraft to twin-engined turboprop transports, business jets, and military trainers. Beech later became a division of Raytheon and then Hawker Beechcraft before a bankruptcy sale turned its assets over to Textron (parent company of Beech's historical cross-town Wichita rival, Cessna Aircraft Company). It remains a brand of Textron Aviation.
== History ==
Beech Aircraft Company was founded in Wichita, Kansas, in 1932 by Walter Beech as president, his wife Olive Ann Beech as secretary, Ted A. Wells as vice president of engineering, K. K. Shaul as treasurer, and investor C. G. Yankey as vice president. The company began operations in an idle Cessna factory. With designer Ted Wells, they developed the first aircraft under the Beechcraft name, the classic Beechcraft Model 17 Staggerwing, which first flew in November 1932. Over 750 Staggerwings were built, with 352 manufactured for the United States Army Air Forces and 67 for the United States Navy during World War II.
Beechcraft was not Beech's first company: he had previously formed Travel Air in 1924, and the design numbers used at Beechcraft continued the sequence started at Travel Air and carried on at Curtiss-Wright after Travel Air was absorbed into the much larger company in 1929. Beech became president of Curtiss-Wright's airplane division and vice president of sales, but was dissatisfied with being so far removed from aircraft production. He quit to form Beechcraft, using the original Travel Air facilities and employing many of the same people. Model numbers prior to 11/11000 were built under the "Travel Air" name, while Curtiss-Wright built the CW-12, 14, 15, and 16 as well as previous successful Travel Air models (mostly the Model 4).
In 1942 Beech won its first Army-Navy "E" Award production award and became one of the elite five percent of war contracting firms in the country to win five straight awards for production efficiency, mostly for the production of the Beechcraft Model 18 which remains in widespread use worldwide. Beechcraft ranked 69th among United States corporations in the value of World War II military production contracts.
After the war, the Staggerwing was replaced by the revolutionary Beechcraft Bonanza with a distinctive V-tail. Perhaps the best known Beech aircraft, the single-engined Bonanza has been manufactured in various models since 1947. The Bonanza has had the longest production run of any airplane, past or present, in the world. Other important Beech aircraft are the King Air and Super King Air line of twin-engined turboprops, in production since 1964, the Baron, a twin-engined variant of the Bonanza, and the Beechcraft Model 18, originally a business transport and commuter airliner from the late 1930s through the 1960s, which remains in active service as a cargo transport.
In 1950, Olive Ann Beech was installed as president and CEO of the company, after the sudden death of her husband from a heart attack on November 29 of that year. She continued as CEO until Beech was purchased by Raytheon Company on February 8, 1980. Ted Wells had been replaced as chief engineer by Herbert Rawdon, who remained at the post until his retirement in the early 1960s.
Throughout much of the mid-to-late 20th century, Beechcraft was considered one of the "Big Three" in the field of general aviation manufacturing, along with Cessna and Piper Aircraft.
In 1973, Beechcraft founded the Beechcraft Heritage Museum to host its historical aircraft.
In 1994, Raytheon merged Beechcraft with the Hawker product line it had acquired in 1993 from British Aerospace, forming Raytheon Aircraft Company. In 2002, the Beechcraft brand was revived to again designate the Wichita-produced aircraft. In 2006, Raytheon sold Raytheon Aircraft to Goldman Sachs creating Hawker Beechcraft. Since its inception Beechcraft has resided in Wichita, Kansas, also the home of chief competitor Cessna, the birthplace of Learjet and of Stearman, whose trainers were used in large numbers during WW II.
The entry into bankruptcy of Hawker Beechcraft on May 3, 2012, ended with its emergence on February 16, 2013, as a new entity, Beechcraft Corporation, with the Hawker Beechcraft name being retired. The new and much smaller company produced the King Air line of aircraft, the T-6 and AT-6 military trainer/attack aircraft, and the piston-powered single-engined Bonanza and twin-engined Baron. The jet line was discontinued, but the new company continued to support the aircraft already produced with parts, plus engineering and airworthiness documentation.
By October 2013, the company, now financially turned around, was up for sale.
On December 26, 2013, Textron agreed to purchase Beechcraft, including the discontinued Hawker jet line, for $1.4 billion. The sale was concluded in the first half of 2014, with government approval. Textron CEO Scott Donnelly indicated that Beechcraft and Cessna would be combined to form a new light aircraft manufacturing concern, Textron Aviation, that would result in US$65M–$85M in annual savings over keeping the companies separate. Textron has kept both the Beechcraft and Cessna names as separate brands.
== Products ==
As of July 2019, Textron Aviation was producing the following models under the Beechcraft brand name:
Beechcraft Bonanza series – single-engined piston general aviation aircraft
Beechcraft Baron – twin-engined piston utility aircraft
Beechcraft Denali
(Super) King Air
C-12 Huron (military version)
Beechcraft T-6 Texan II/CT-156 Harvard II – single-engined turboprop military trainer, based on Pilatus PC-9
== Facilities ==
Beech Factory Airport – houses Beechcraft's head office, manufacturing facility, and runway for test flights
== See also ==
Beech Aircraft Corp. v. Rainey
== References ==
=== Notes ===
=== Bibliography ===
== External links ==
Beechcraft website
Beechcraft Heritage Museum
Aerofiles – Beechcraft model information
Aircraft-Info.net – Beechcraft |
Bell OH-58 Kiowa | The Bell OH-58 Kiowa is a family of single-engine single-rotor military helicopters used for observation, utility, and direct fire support. It was produced by the American manufacturer Bell Helicopter and is closely related to the Model 206A JetRanger civilian helicopter.
The OH-58 was originally developed during the early 1960s as the D-250 for the Light Observation Helicopter (LOH). While the rival Hughes OH-6 Cayuse was picked over Bell's submission in May 1965, the company refined its design to create the Model 206A, a variant of which it successfully submitted to the reopened LOH competition two years later. The initial model, designated by the service as the OH-58A, was introduced in May 1969. Successive models followed, often with uprated engines, enhanced protection systems, and other improvements, culminating in the OH-58F. Additional improvements, such as the OH-58X, were proposed but not pursued.
During the 1970s, the US Army became interested in pursuing an advanced scout helicopter, for which the OH-58 would be further developed, evaluated, and ultimately procured as the OH-58D Kiowa Warrior. The OH-58D is equipped to perform armed reconnaissance missions and to provide fire support to friendly ground forces; it is equipped with a distinctive Mast Mounted Sight (MMS) containing various sensors for target acquisition and laser designation. Another visible feature present on most OH-58s are knife-like extensions above and below the cockpit that form part of the passive wire strike protection system. The early-build OH-58s were equipped with a two-bladed main rotor, while the OH-58D and newer variants have a four-bladed rotor.
The OH-58 was primarily produced for the United States Army, and deployed in the Vietnam War two months after its entry to service. The US Army made extensive use of various OH-58 models across numerous war zones over the decades, seeing active combat during the Gulf War, the invasion of Panama, and the War in Afghanistan among others. In 2017, the US Army withdrew its remaining OH-58s, using alternative rotorcraft such as the Boeing AH-64 Apache and unmanned aerial vehicles (UAVs), to fill the role. The OH-58 has been exported to Austria, Canada, Croatia, the Dominican Republic, Taiwan, Saudi Arabia, and Greece. It has also been produced under license in Australia.
== Development ==
=== Light Observation Helicopter (LOH) ===
On 14 October 1960, the United States Navy approached 25 helicopter manufacturers to request on behalf of the Army the submission of proposals for a Light Observation Helicopter (LOH). Bell Helicopter was one of the manufacturers approached, and chose to enter the competition along with 12 other manufacturers, including Hiller Aircraft and Hughes Tool Co., Aircraft Division. Bell's design was internally referred to as the D-250, and would be officially designated as the YHO-4. On 19 May 1961, Bell and Hiller were announced as winners of the design competition.
Bell developed the D-250 design into the Model 206 (the HO-4 designation was changed to YOH-4A in 1962) and produced five prototype aircraft for the Army's test and evaluation phase. On 8 December 1962, the first prototype performed its maiden flight. The YOH-4A was also called the Ugly Duckling in comparison to other contending aircraft. After a fly-off of the Bell, Hughes, and Fairchild-Hiller prototypes, the Hughes OH-6 Cayuse was selected in May 1965.
When the YOH-4A was rejected by the Army, Bell set about solving the problem of marketing the aircraft. In addition to the image problem, the helicopter lacked cargo space and only provided cramped quarters for the planned three passengers in the back. The solution was a fuselage redesigned to be sleeker and more aesthetically pleasing, adding 16 cubic feet (0.45 cubic metres) of cargo space in the process. The redesigned aircraft was designated the Model 206A, and Bell President Edwin J. Ducayet named it the JetRanger, denoting an evolution from the popular Model 47J Ranger.
In 1967, the Army reopened the LOH competition for bids because Hughes Tool Co. Aircraft Division could not meet the contractual production demands. Bell resubmitted for the program using the Bell 206A. Fairchild-Hiller failed to resubmit their bid with the YOH-5A, which they had successfully marketed as the FH-1100. In the end, Bell underbid Hughes to win the contract and the Bell 206A was designated as the OH-58A. Following the U.S. Army's naming convention for helicopters, the OH-58A was named Kiowa in honor of the Native American tribe.
=== Advanced Scout Helicopter ===
In the 1970s, the U.S. Army began evaluating the need to improve the capabilities of their scout aircraft. Anticipating the AH-64A's replacement of the venerable AH-1, the Army began shopping the idea of an Aerial Scout Program to stimulate the development of advanced technological capabilities for night vision and precision navigation equipment. The stated goals of the program included prototypes that would: ...possess an extended target acquisition range capability by means of a long-range stabilized optical subsystem for the observer, improved position location through use of a computerized navigation system, improved survivability by reducing aural, visual, radar, and infrared signatures, and an improved flight performance capability derived from a larger engine to provide compatibility with attack helicopters.
During March 1974, the Army created a special task force at Fort Knox to develop the system requirements; by the following year, the task force had devised the requirements for an Advanced Scout Helicopter (ASH) program. The requirements were formulated around a rotorcraft capable of performing in day, night, and adverse weather, and compatible with all advanced weapons systems planned for development and fielding into the 1980s. The program was approved by the System Acquisition Review Council and the Army prepared for competitive development to begin the next year. However, as the Army tried to get the program off the ground, Congress declined to provide funding in the fiscal year 1977 budget and the ASH Project Manager's Office (PM-ASH) was closed on 30 September 1976.
While no development occurred for some years, the program survived as a requirement without funding. On 30 November 1979, the decision was made to defer development of an advanced scout helicopter in favor of modifying existing airframes in inventory as a near term scout helicopter (NTSH) option. The development of a mast-mounted sight would be the primary focus to improve the ability to perform reconnaissance, surveillance, and target acquisition missions while remaining hidden behind trees and terrain. Both the UH-1 and the OH-58 were evaluated as NTSH candidates, but the UH-1 was dropped from consideration due to its larger size and ease of detection. The OH-58, on the other hand, demonstrated a dramatic reduction in detectability with a Mast-Mounted Sight (MMS).
On 10 July 1980, the Army decided that the NTSH would be a competitive modification program based on developments in the commercial helicopter sector, particularly Hughes Helicopters' Hughes 500D, which had made major improvements over the OH-6.
=== Army Helicopter Improvement Program (AHIP) ===
The Army's decision to acquire the NTSH resulted in the "Army Helicopter Improvement Program (AHIP)". Both Bell Helicopter and Hughes Helicopters redesigned their scout aircraft to compete for the contract. Bell offered a more robust version of the OH-58 in their Model 406, and Hughes offered an upgraded version of the OH-6. On 21 September 1981, Bell Helicopter Textron was awarded a development contract. On 6 October 1983, the first prototype performed its maiden flight, and the aircraft entered service two years later as the OH-58D.
Initially intended for attack, cavalry, and artillery roles, the Army only approved a low initial production level and confined the OH-58D's role to field artillery observation. The Army also directed that a follow-on test be conducted to further evaluate it due to perceived deficiencies. On 1 April 1986, the Army formed a task force at Fort Rucker, Alabama, to remedy deficiencies in the AHIP. During 1988, the Army had planned to discontinue the OH-58D and focus on the LHX; however, Congress approved $138 million to expand the program, calling for the AHIP to operate with the Apache as a hunter/killer team; the AHIP would locate targets and the Apache would destroy them in a throwback to the traditional OH-58/AH-1 relationship.
The Secretary of the Army directed instead that the aircraft's armament systems be upgraded, based on experience with Task Force 118's performance operating armed OH-58Ds in the Persian Gulf in support of Operation Prime Chance, and that the type be used primarily for scouting and armed reconnaissance. The armed aircraft would be known as the OH-58D Kiowa Warrior, denoting its new armed configuration. Beginning with the 202nd aircraft (s/n 89-0112) in May 1991, all remaining OH-58Ds were produced in the Kiowa Warrior configuration. During January 1992, Bell received its first retrofit contract to convert all remaining OH-58Ds to the Kiowa Warrior configuration.
=== Production ===
Overall, 2,325 OH-58s were produced, with an additional 56 Bell 206B-1s also built. Production of new airframes for the A and B models ended in 1977, and for the D model in 2000. Conversions of early models to the D standard continued afterward.
== Design ==
The Bell OH-58 Kiowa is a family of single-engine single-rotor military helicopters principally used for observation, utility, and direct fire support. The primary role of the original OH-58A was to identify targets for other platforms, such as the Bell AH-1 Cobra attack helicopter and ground artillery; it lacked any armaments and weighed 1,451 kg (3,200 lb) when fully loaded, being able to carry a small amount of cargo or up to two passengers. While initial examples were reliant on the crew to conduct observations, later models were furnished with sophisticated sensors to precisely determine a target's location. Payload capacity was also increased considerably on later-build rotorcraft; the OH-58D Kiowa Warrior was designed to carry a maximum load of 2,495 kg (5,500 lb), 72% more capacity than the original version.
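The 72% figure above can be cross-checked with a short calculation from the two weights quoted in this section (a verification sketch only; the `percent_increase` helper is illustrative, not drawn from any source):

```python
def percent_increase(old_kg: float, new_kg: float) -> float:
    """Return the percentage increase from old_kg to new_kg."""
    return (new_kg - old_kg) / old_kg * 100

# OH-58A fully loaded weight (1,451 kg) vs. OH-58D Kiowa Warrior
# maximum load (2,495 kg), using the figures quoted in this section.
increase = percent_increase(1451, 2495)
print(f"{increase:.0f}%")  # prints "72%", matching the stated figure
```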
Early Kiowas were fitted with a flexible twin-bladed main rotor; starting with the OH-58D, a four-bladed rigid main rotor was used. This rotor was entirely composed of composite materials; the OH-58D was the first US Army rotorcraft to incorporate an all-composite main rotor hub. Later models were outfitted as light gunships, equipped with various armaments such as Stinger air-to-air missiles, a .50-caliber machine gun, podded 70 mm Hydra rockets, and AGM-114 Hellfire air-to-ground missiles. Other areas of improvement were the avionics and the cockpit: new navigation and communication systems were installed along with new and larger flight instrumentation, while all light sources were redesigned for compatibility with Night Vision Goggles (NVG). Later versions were outfitted with a glass cockpit, which retained conventional instrumentation as a fallback measure.
The OH-58D introduced perhaps the most distinctive feature of the Kiowa family: the Mast Mounted Sight (MMS), which resembles a beach ball perched above the rotor system. The MMS, built by Ball Aerospace & Technologies, has a gyro-stabilized platform containing a television system (TVS), a thermal imaging system (TIS), and a laser rangefinder/designator (LRF/D). These features gave the OH-58D the additional mission capability of target acquisition and laser designation by day or night and in limited-visibility and adverse weather. In combination with the 1553 databus, with which the OH-58D was the first US Army helicopter to be fielded, target data from the sensors could be passed directly to precision-guided weapons.
The MMS was developed by the McDonnell Douglas Corp. in Huntington Beach, CA. Production took place primarily at facilities in Monrovia, CA. As a result of a merger with Boeing, and a later sale of the business unit, the program is currently owned and managed by DRS Technologies, with engineering support based in Cypress, CA, and production support taking place in Melbourne, FL. On the OH-58F, the MMS was removed, its functions having been replaced by the AAS-53 Common Sensor Payload, which is mounted on the chin.
One distinctive feature of operational OH-58s is the pair of knife-like extensions above and below the cockpit that form part of the passive wire strike protection system; it protects 90% of the frontal area of the helicopter from wire strikes that can be encountered at low altitudes by directing wires to the upper or lower blades before they can entangle the rotor blade or landing skids. The OH-58 was the first helicopter to test this system, after which the system was adopted by the US Army for the OH-58 and most of their other helicopters. Various other defensive and survivability measures were incorporated, such as ballistic floor armor, a missile warning system, crashworthy seats, and infrared suppression systems for the engine exhaust.
== Operational history ==
In May 1969, the first OH-58A Kiowa was officially received at a ceremony held at Bell Helicopter's Fort Worth plant, officiated by Major General John Norton, commanding general of the Army Aviation Materiel Command (AMCOM). Two months later, on 17 August 1969, production OH-58A helicopters arrived in South Vietnam for the first time; their deployment was accompanied by a New Equipment Training Team (NETT) comprising personnel from both the US Army and Bell Helicopters. Although the Kiowa production contract had replaced the LOH contract with Hughes, the OH-58A did not automatically replace the OH-6A in operations; subsequently, the Kiowa and the Cayuse would continue operating in the same theater until the end of the conflict.
=== Vietnam War ===
On 27 March 1970, an OH-58A Kiowa (s/n 68-16785) was shot down over South Vietnam, one of the first OH-58A losses of the war. The pilot, Warrant Officer Ralph Quick Jr., was flying Lieutenant Colonel Joseph Benoski Jr. as an artillery spotter. After completing a battle damage assessment for a previous fire mission, the aircraft was damaged by .51 inch (13 mm) machine gun fire and crashed, killing both crew members. Approximately 45 OH-58A helicopters were destroyed during the Vietnam War due to combat losses and accidents. One of the last combat losses in the theatre was of an OH-58A (s/n 68-16888) from A Troop, 3-17th Cavalry, flown by First Lieutenant Thomas Knuckey. On 27 May 1971, Lieutenant Knuckey was also flying a battle damage assessment mission when his aircraft came under machine gun fire and exploded. Knuckey and his observer, Sergeant Philip Taylor, both died in the explosion.
=== Operation Prime Chance ===
During early 1988, it was decided that armed OH-58D (AHIP) helicopters from the 118th Aviation Task Force would be phased in to replace the SEABAT (AH-6/MH-6) teams of Task Force 160th carrying out Operation Prime Chance, the escort of oil tankers during the Iran–Iraq War. On 24 February 1988, two AHIP helicopters reported to the Mobile Sea Base Wimbrown VII, and the helicopter team (the "SEABAT" team, after their callsign) stationed on the barge returned to the United States. For the next few months, the AHIP helicopters on the Wimbrown VII shared patrol duties with the SEABAT team on the Hercules. Coordination proved difficult; despite frequent requests from TF-160, the SEABAT team on the Hercules was not replaced by an AHIP detachment until June 1988. The OH-58D helicopter crews involved in the operation received deck landing and underwater survival training from the Navy.
In November 1988, the number of OH-58D helicopters that supported Task Force 118 was reduced. However, the rotorcraft continued to operate from the Navy's Mobile Sea Base Hercules, the frigate Underwood, and the destroyer Conolly. OH-58D operations primarily entailed reconnaissance flights at night, and depending on maintenance requirements and ship scheduling, Army helicopters usually rotated from the mobile sea base and other combatant ships to a land base every seven to fourteen days. On 18 September 1989, an OH-58D crashed during night gunnery practice and sank, but with no loss of personnel. When the Mobile Sea Base Hercules was deactivated in September 1989, all but five OH-58D helicopters redeployed to the continental United States.
=== Gulf War ===
During Operation Desert Shield (the build-up to Operation Desert Storm), U.S. Army OH-58Ds would exercise alongside USMC AH-1Ws and assist with targeting and laser spotting. While the tactic proved effective in exercises, there is little evidence that it was used in combat, likely due to a lack of OH-58Ds.
During Operation Desert Storm, 130 deployed OH-58D helicopters worked alongside the other Army attack helicopters, 145 AH-1 Cobras and 277 AH-64 Apaches, and participated in a wide variety of critical missions in support of ground forces. During Operation Desert Shield and Operation Desert Storm, the Kiowas collectively flew nearly 9,000 hours with a 92 percent fully mission capable rate. The Kiowa Warrior had the lowest ratio of maintenance hours to flight hours of any combat helicopter in the war.
Army attack helicopters also worked jointly with close air support and support aircraft such as the USAF A-10As, F-16A/Cs, EF-111As, EC-130H Compass Call, F-4G Phantom II "Wild Weasel", and E-8 Joint STARS.
=== RAID ===
In 1989, Congress mandated that the Army National Guard would take part in the country's War on Drugs, enabling them to aid federal, state and local law enforcement agencies with "special congressional entitlements". In response, the Army National Guard Bureau created the Reconnaissance and Aerial Interdiction Detachments (RAID) in 1992, consisting of aviation units in 31 states with 76 specially modified OH-58A helicopters to assume the reconnaissance/interdiction role in the fight against illegal drugs. During 1994, 24 states conducted more than 1,200 aerial counterdrug reconnaissance and interdiction missions, many of them at night. Eventually, the program was expanded to cover 32 states and consisted of 116 aircraft, including dedicated training aircraft at the Western Army Aviation Training Site (WAATS) in Marana, Arizona.
The RAID program's mission has now been expanded to include the war against terrorism and supporting U.S. Border Patrol activities in support of homeland defense. The National Guard RAID units' Area of Operation (AO) is the only one in the Department of Defense that is wholly contained within the borders of the United States.
=== Operation Just Cause and action in the 1990s ===
During Operation Just Cause in 1989, a team consisting of an OH-58 and an AH-1 were part of the Aviation Task Force during the securing of Fort Amador in Panama. The OH-58 was fired upon by Panama Defense Force soldiers and crashed 100 yards (90 m) away, in the Bay of Panama. The pilot was rescued, but the co-pilot was killed in action.
On 17 December 1994, Army Chief Warrant Officers (CWO) David Hilemon and Bobby Hall left Camp Page, South Korea on a routine training mission along the Demilitarized Zone (DMZ). Their flight was intended to be to a point known as Checkpoint 84, south of the DMZ "no-fly zone", but the OH-58C Kiowa strayed nearly four miles (6 km) into the Kangwon Province, inside North Korean airspace, due to errors in navigating the snow-covered, rugged terrain. The helicopter was shot down by North Korean troops and CWO Hilemon was killed. CWO Hall was held captive and the North Korean government insisted that the crew had been spying. Five days of negotiations resulted in the North Koreans turning over Hilemon's body to U.S. authorities. The negotiations failed to secure Hall's immediate release. After 13 days in captivity, Hall was freed on 30 December, uninjured.
=== Afghanistan and Iraq ===
The U.S. Army employed the OH-58D during Operation Iraqi Freedom in Iraq and Operation Enduring Freedom in Afghanistan. Through a combination of combat losses and accidents, over 35 airframes were lost, resulting in the deaths of 35 pilots. Their presence was also anecdotally credited with saving lives, having been used to rescue wounded despite their small size. In Iraq, OH-58Ds reportedly flew 72 hours per month, while in Afghanistan, the type flew 80 hours per month. During April 2013, Bell stated that the OH-58 had collectively accumulated 820,000 combat hours and achieved a 90% mission capable rate.
=== Retirement ===
The U.S. Army's first attempt to replace the OH-58 was the RAH-66 Comanche of the Light Helicopter Experimental program, which was canceled in 2004. Airframe age and losses led to the Armed Reconnaissance Helicopter program and the Bell ARH-70, which was canceled in 2008 due to cost overruns. The third replacement effort was the Armed Aerial Scout program. Due to uncertainty in the AAS program and fiscal restraints, the OH-58F's planned retirement was extended from 2025 to 2036. The Kiowa's scout role was supplemented by tactical unmanned aerial vehicles, the two platforms often acting in conjunction to provide reconnaissance while exposing crews to less risk. The OH-58F had the ability to control UAVs directly to perform scout missions more safely. In 2011, the Kiowa was scheduled to be replaced by the light version of the Future Vertical Lift aircraft in the 2030s.
In December 2013, the U.S. Army had 338 Kiowas in its active-duty force and 30 in the Army National Guard. The Army considered retiring the Kiowa as part of a wider restructuring to cut costs and reduce the variety of helicopters operated. The Analysis of Alternatives for the AAS program found that operating the Kiowa alongside RQ-7 Shadow UAVs was the most affordable and capable solution, while the AH-64E Apache Guardian was the most capable immediate solution. One proposal was to transfer all Army National Guard and Army Reserve AH-64s to the active Army for use as scouts to divest the OH-58. The Apache costs 50 percent more than the Kiowa to operate and maintain; studies note that had it been used in place of the Kiowa in Iraq and Afghanistan, total operating costs would have risen by $4 billion, but also saved $1 billion per year in operating and sustainment costs. UH-60 Black Hawks would transfer from the active Army to reserve and Guard units. The aim was to retire older helicopters and retain those with the best capabilities to save money. Retiring the Kiowa would fund Apache upgrades.
The Army placed 26 out of 335 OH-58Ds in non-flyable storage during 2014. In anticipation of divestment, the Army looked to see if other military branches, government agencies, and foreign customers had interest in buying the type. The Kiowas were considered to be well priced for foreign countries with limited resources; Bell had not yet agreed to support them if sold overseas. Media expected OH-58s to go to foreign militaries rather than civil operators due to high operating cost. By 2015, the Army had divested 33 OH-58Ds. By January 2016, the Army had divested all but two OH-58D squadrons. In June 2016, members of 1st Squadron, 17th Cavalry Regiment, 82nd Combat Aviation Brigade, arrived in South Korea as part of the Kiowa's last deployment in U.S. Army service; during the following year, the unit reequipped with AH-64s. In January 2017, the last Kiowa Warriors performed their final live-fire maneuver before retirement.
Ex-U.S. Army OH-58Ds were made available through Excess Defense Article and foreign military sales (FMS) programs. In November 2014, Croatia sent a letter of intent for the acquisition of 16 OH-58Ds. In 2016, Croatia and Tunisia became the first nations to request the helicopters, ordering 16 and 24, respectively. Croatia received the first batch of 5 OH-58Ds at the Zadar-Zemunik air base on 30 June 2016. In early 2018, Greece was granted 70 OH-58Ds via an FMS arrangement; the type was initially stationed at the Hellenic Army Aviation air base at Stefanovikio.
In March 2020, the U.S. Army selected the Bell 360 Invictus and Sikorsky Raider X as part of the Future Attack Reconnaissance Aircraft (FARA) program to fill the capability gap left by the retirement of the OH-58. On 9 July 2020, the US Army retired its last OH-58Cs from active service at Fort Polk. In February 2024, FARA was cancelled; by this point, three attempts to replace the OH-58 had been abandoned at a cost in excess of $9 billion. The armed scout role has been filled by the AH-64 and the unarmed RQ-7 Shadow UAV; this combination reportedly accomplishes 80% of the scouting mission while providing greater firepower, durability, and speed.
== Variants ==
=== OH-58A ===
The OH-58A Kiowa is a four-place observation helicopter. It has two-place pilot seating, although the controls in the left seat are designed to be removed to carry a passenger up front. During its Vietnam development, it was fitted with the M134 Minigun, a 7.62 mm electrically operated machine gun.
In 1971, the Australian Army leased eight OH-58As in Vietnam for eight months. The Australian Government then procured the OH-58A for the Australian Army and Royal Australian Navy as the CAC CA-32. Licence-produced in Australia by the Commonwealth Aircraft Corporation, the CA-32 was the equivalent of the 206B-1 (uprated engine and longer rotor blades). The first twelve of 56 were built in the U.S., then partially disassembled and shipped to Australia, where they were reassembled. Helicopters in the naval fleet were retired in 2000.
A total of 74 OH-58As were delivered to the Canadian Armed Forces as COH-58A and later redesignated CH-136 Kiowa. As many as 12 surplus Kiowas were sold to the Dominican Republic Air Force, and others sold privately in Australia.
In 1978, OH-58As began to be converted to the same engine and dynamic components as the OH-58C. In 1992, 76 OH-58As were modified with another engine upgrade, a thermal imaging system, a communications package for law enforcement, enhanced navigational equipment, and high skid gear as part of the Army National Guard's (ARNG) Counter-Drug RAID program. The U.S. Army retired its last OH-58A in November 2017.
=== OH-58B ===
The OH-58B was an export version for the Austrian Air Force. Austria plans to replace the OH-58B by the end of 2030.
=== OH-58C ===
Equipped with a more powerful engine, the OH-58C was intended to resolve the Kiowa's power shortcomings. In addition to the improved engine, it had unique IR suppression systems mounted on its exhaust. Early OH-58Cs had flat-panel windscreens intended to reduce glint from the sun, which could reveal the aircraft's location to enemies. The windscreens had the drawback of limiting the crew's forward view, a previous strength of the original design.
The aircraft was also equipped with a larger instrument panel, roughly one-third bigger than the OH-58A panel, which held larger flight instruments. The panel also had Night Vision Goggle (NVG) compatible cockpit lighting. The OH-58C was also the first U.S. Army scout helicopter to be equipped with the AN/APR-39 radar warning receiver, which alerted the crew to active anti-aircraft radar systems nearby. Some OH-58Cs were armed with two AIM-92 Stingers and are sometimes referred to as OH-58C/S, the "S" referring to the Stinger addition. Called Air-To-Air Stinger (ATAS), the weapon system was intended to provide an air defense capability.
The OH-58C was the final Kiowa variant in service with the U.S. Army, serving in its final years as a training aircraft. On 9 July 2020, the US Army retired the last OH-58Cs from service.
=== OH-58D ===
The OH-58D (Bell Model 406) was the result of the Army Helicopter Improvement Program (AHIP). An upgraded transmission and engine gave extra power, needed for nap-of-the-earth flight profiles, and a four-bladed main rotor made it quieter than the two-bladed OH-58C. The OH-58D introduced the distinctive Mast-Mounted Sight (MMS) above the main rotor, and a mixed glass cockpit with traditional instruments as "standby" for emergencies.
The Bell 406CS "Combat Scout" was based on the OH-58D (sometimes referred to as the MH-58D). Fifteen aircraft were sold to Saudi Arabia. A roof-mounted Saab HeliTOW sight system was fitted in place of the MMS. The 406CS also had detachable weapon hardpoints on each side.
The AH-58D was an OH-58D version operated by Task Force 118 (4th Squadron, 17th Cavalry) and modified with armament in support of Operation Prime Chance. The weapons and fire control systems would become the basis for the Kiowa Warrior. AH-58D is not an official DOD aircraft designation, but is used by the Army in reference to these aircraft.
The Kiowa Warrior, sometimes referred to by its acronym KW, is the armed version of the OH-58D. A key difference between the Kiowa Warrior and original AHIP aircraft is a universal weapons pylon found mounted on both sides of the fuselage, capable of carrying combinations of AGM-114 Hellfire missiles, air-to-air Stinger (ATAS) missiles, 7-shot 2.75 inches (70 mm) Hydra-70 rocket pods, and an M296 0.50 in (12.7 mm) caliber machine gun. The performance standard of aerial gunnery from an OH-58D is to achieve at least one hit out of 70 shots fired at a wheeled vehicle 800 to 1,200 m (2,600 to 3,900 ft) away. The Kiowa Warrior also includes improvements in available power, navigation, communication, survivability, and deployability.
=== OH-58E ===
The OH-58E was one of 13 design candidates in the Advanced Scout Helicopter study of 1980. The study's conclusion was to launch the Army Helicopter Improvement Program (AHIP) in 1981, centered on the OH-58D instead.
=== OH-58F ===
The OH-58F is an OH-58D upgrade. The Cockpit and Sensor Upgrade Program (CASUP) adds a nose-mounted targeting and surveillance system in place of the MMS. The AAS-53 Common Sensor Payload has an infrared camera, a color electro-optical camera, and an image intensifier; through weight and drag reductions, flight performance increased by 1–2%. Cockpit upgrades include the Control and Display Subsystem version 5, more storage and processing power, three color multi-function displays, and dual-independent advanced moving maps. It has Level 2 Manned-Unmanned (L2MUM) teaming and the Force Battle Command Brigade and Below (FBCB2) display screen, and can be updated to Blue Force Tracker 2. Survivability enhancements include ballistic floor armor and the Common Missile Warning System. It has greater situational awareness, digital inter-cockpit communications, provision for future HELLFIRE upgrades, a redesigned wiring harness, Health and Usage Monitoring (HUMS), and enhanced weapons functionality via a 1760 digital interface. The OH-58F is powered by a Rolls-Royce 250-C30R3 engine rated at 650 shp (480 kW); it has a dual-channel, full-authority digital engine control that operates at required power levels in all environments. Rolls-Royce proposed engine tweaks to raise output by 12%.
In October 2012, the first OH-58F was finished. Unlike most military projects, the Army designed and built the new variant itself, which lowered development costs. It weighed 3,590 lb (1,630 kg), 53 lb (24 kg) below the target weight and about 200 lb (91 kg) lighter than the OH-58D; the weight savings are attributed to more efficient wiring and a lighter sensor. Manufacture of the first production aircraft began in January 2013, and the OH-58F's first flight occurred on 26 April 2013; the aircraft was handed over to the Army by year's end. Low-rate production was to start in March 2015, with the first operational squadron fully equipped by 2016. The Army was to buy 368 OH-58Fs, with older OH-58 variants to be remanufactured; due to battle damage and combat attrition, the total would be about 321 aircraft.
The Army chose to retire the Kiowa and end the CASUP upgrades. CASUP and SLEP upgrades were estimated to cost $3 billion and $7 billion respectively. The OH-58D could do 20 percent of armed aerial scout mission requirements, the OH-58F upgrade raised that to 50 percent. Replacing the Kiowa with Apaches and UAVs in scout roles met 80 percent of requirements. In early 2014, Bell received a stop-work order for the Kiowa CASUP program.
=== OH-58F Block II ===
On 14 April 2011, Bell performed the successful first flight of the OH-58F Block II variant, its entry in the Armed Aerial Scout (AAS) program. It built on the improvements of the F-model, adding features such as the Honeywell HTS900 turboshaft engine, the transmission and main rotors of the Bell 407, and the tail and tail rotor of the Bell 427. Bell started flight demonstrations in October 2012. Bell hoped the Army would choose its service life extension models instead of the AAS program: the OH-58F was an "obsolescence upgrade", while the Block II was seen as the performance upgrade. This gave the Army financial flexibility, with the option of upgrading the Kiowa to the OH-58F and later continuing to the Block II when sufficient funds were available. In late 2012, the Army recommended that the AAS program proceed, but in light of sequestration budget cuts it decided that the $16 billion cost of buying new armed scout helicopters was too great and ended the program in late 2013.
=== Others ===
The OH-58X was a modification of the fourth development OH-58D (s/n 69-16322) with partial stealth features and a chin-mounted McDonnell Douglas Electronics Systems turret housing a night-piloting system, including a Kodak FLIR with a 30-degree field of view. Avionics systems were consolidated and moved to the nose, making room for a passenger seat in the rear. No aircraft were produced.
== Operators ==
=== Current operators ===
Austria
Austrian Air Force
Croatia
Croatian Air Force
Dominican Republic
Dominican Air Force
Greece
Hellenic Army
Iraq
Iraqi Army
Saudi Arabia
Royal Saudi Land Forces
Taiwan (Republic of China)
Republic of China Army
Tunisia
Tunisian Air Force
United States
United States Navy – 3 OH-58s in 2024
=== Former operators ===
Australia
Australian Army
Canada
Canadian Armed Forces
Spain
Spanish Army
Spanish Air and Space Force
Turkey
Turkish Army
United States
United States Army
== Aircraft on display ==
68-16940 – International Airport in Palm Springs, California. Transformed into a sculpture.
69-16112 – OH-58A – Pima Air and Space Museum in Tucson, Arizona
69-16123 – Kansas Museum of Military History in Augusta, Kansas
69-16153 – MAPS Air Museum in North Canton, Ohio
69-16338 – Point Alpha Museum in Hesse, Germany
70-15423 – OH-58A – Zephyrhills Museum of Military History in Zephyrhills, Florida
71-20475 – Veterans Memorial Museum, Huntsville, Alabama, United States
71-20869 – National Air Force Museum of Canada, Trenton, Ontario, Canada CH-136
71-20920 – Polish Aviation Museum, Kraków, Poland – CH-136
72-21256 – The Aviation Museum of Kentucky in Lexington, Kentucky
93-0976 – OH-58D – Pima Air and Space Museum in Tucson, Arizona
95-0015 – OH-58D – (not on public display as of 2024) Pima Air and Space Museum in Tucson, Arizona
== Specifications (OH-58D) ==
Data from Jane's All the World's Aircraft 1996–97 and U.S. Army Aircraft Since 1947
General characteristics
Crew: 2 pilots
Length: 42 ft 2 in (12.85 m)
Height: 12 ft 10 in (3.93 m)
Empty weight: 3,829 lb (1,737 kg)
Gross weight: 5,500 lb (2,495 kg)
Powerplant: 1 × Rolls-Royce T703-AD-700A turboshaft, 650 hp (485 kW)
Main rotor diameter: 35 ft 0 in (10.67 m)
Main rotor area: 962.11 sq ft (89.42 m2)
Performance
Maximum speed: 149 mph (240 km/h, 129 kn)
Cruise speed: 127 mph (204 km/h, 110 kn)
Range: 161 mi (260 km, 140 nmi)
Endurance: Two hours
Service ceiling: 15,000 ft (4,575 m)
Armament
Hardpoints: Two pylons, with provisions to carry combinations of:
Rockets: 1x LAU-68 rocket launcher with seven 70 mm (2.75 in) Hydra 70 rockets
Missiles: 2x AGM-114 Hellfire missiles
Other: 1x .50 cal (12.7 mm) M3P (or M296) heavy machine gun
== See also ==
Related development
Bell YOH-4
Bell 206
Bell 400
Bell 407
Bell ARH-70
Aircraft of comparable role, configuration, and era
Hughes OH-6 Cayuse
MBB Bo 105
Cicaré CH-14
Mil Mi-36
Changhe Z-11
Aérospatiale Gazelle
Related lists
List of military aircraft of the United States
== References ==
=== Footnotes ===
=== Citations ===
=== Bibliography ===
Elliot, Bryn (March–April 1997). "Bears in the Air: The US Air Police Perspective". Air Enthusiast. No. 68. pp. 46–51. ISSN 0143-5450.
Holley, Charles, and Mike Sloniker. Primer of the Helicopter War. Grapevine, Tex: Nissi Publ, 1997. ISBN 0-944372-11-2.
Spenser, Jay P. "Bell Helicopter". Whirlybirds, A History of the U.S. Helicopter Pioneers. University of Washington Press, 1998. ISBN 0-295-98058-3.
World Aircraft Information Files. London: Brightstar Publishing. File 424, Sheet 2.
This article incorporates public domain material from websites or documents of the United States Army Center of Military History.
== External links ==
OH-58 Kiowa Warrior and OH-58D fact sheets on Army.mil
OH-58D armament systems page on Army.mil
Kiowa Warrior Mast-Mounted Sight (MMS) Sensor Suite on northropgrumman.com
Berlin (bur-LIN; German: [bɛʁˈliːn]) is the capital and largest city of Germany, by both area and population. With 3.7 million inhabitants, it has the highest population within its city limits of any city in the European Union. The city is also one of the states of Germany, being the third smallest state in the country by area. Berlin is surrounded by the state of Brandenburg, and Brandenburg's capital Potsdam is nearby. The urban area of Berlin has a population of over 4.6 million and is therefore the most populous urban area in Germany. The Berlin-Brandenburg capital region has around 6.2 million inhabitants and is Germany's second-largest metropolitan region after the Rhine-Ruhr region, as well as the fifth-biggest metropolitan region by GDP in the European Union.
Berlin was built along the banks of the Spree river, which flows into the Havel in the western borough of Spandau. The city includes lakes in the western and southeastern boroughs, the largest of which is Müggelsee. About one-third of the city's area is composed of forests, parks and gardens, rivers, canals, and lakes.
First documented in the 13th century and at the crossing of two important historic trade routes, Berlin was designated the capital of the Margraviate of Brandenburg (1417–1701), Kingdom of Prussia (1701–1918), German Empire (1871–1918), Weimar Republic (1919–1933), and Nazi Germany (1933–1945). Berlin served as a scientific, artistic, and philosophical hub during the Age of Enlightenment, Neoclassicism, and the German revolutions of 1848–1849. During the Gründerzeit, an industrialization-induced economic boom triggered a rapid population increase in Berlin. 1920s Berlin was the third-largest city in the world by population. After World War II and following Berlin's occupation, the city was split into West Berlin and East Berlin, divided by the Berlin Wall. East Berlin was declared the capital of East Germany, while Bonn became the West German capital. Following German reunification in 1990, Berlin once again became the capital of all of Germany. Due to its geographic location and history, Berlin has been called "the heart of Europe".
Berlin is a global city of culture, politics, media and science. Its economy is based on high tech and the service sector, encompassing a diverse range of creative industries, startup companies, research facilities, and media corporations. Berlin serves as a continental hub for air and rail traffic and has a complex public transportation network. Tourism in Berlin makes the city a popular global destination. Significant industries include information technology, the healthcare industry, biomedical engineering, biotechnology, the automotive industry, and electronics.
Berlin is home to several universities, such as the Humboldt University of Berlin, Technische Universität Berlin, the Berlin University of the Arts and the Free University of Berlin. The Berlin Zoological Garden is the most visited zoo in Europe. Babelsberg Studio is the world's first large-scale movie studio complex, and there are many films set in Berlin. Berlin is home to three World Heritage Sites: Museum Island, the Palaces and Parks of Potsdam and Berlin, and the Berlin Modernism Housing Estates. Other landmarks include the Brandenburg Gate, the Reichstag building, Potsdamer Platz, the Memorial to the Murdered Jews of Europe, and the Berlin Wall Memorial. Berlin has numerous museums, galleries, and libraries.
== History ==
=== Etymology ===
Berlin lies in northeastern Germany, in an area formerly settled by Slavs, which thus still exhibits many (Germanised) Slavic-derived placenames today (see below). The word Berlin also has its roots in the language of the West Slavs, and may be related to the Old Polabian stem berl-/birl- ("swamp").
Of Berlin's twelve boroughs, five bear a Slavic-derived name—Pankow, Steglitz-Zehlendorf, Marzahn-Hellersdorf, Treptow-Köpenick and Spandau; furthermore, across the city's 96 neighbourhoods, there are 22 which bear a Slavic-rooted name—Altglienicke, Alt-Treptow, Britz, Buch, Buckow, Gatow, Karow, Kladow, Köpenick, Lankwitz, Lübars, Malchow, Marzahn, Pankow, Prenzlauer Berg, Rudow, Schmöckwitz, Spandau, Stadtrandsiedlung Malchow, Steglitz, Tegel and Zehlendorf.
=== Prehistory of Berlin ===
The earliest human settlements in the area of modern Berlin are dated to around 60,000 BC. A deer mask, dated to 9,000 BC, is attributed to the Maglemosian culture. Around 2,000 BC, dense human settlements along the Spree and Havel rivers gave rise to the Lusatian culture. Starting around 500 BC, Germanic tribes settled in a number of villages in the higher-lying areas of today's Berlin. After the Semnones left around 200 AD, the Burgundians followed. In the 7th century, Slavic tribes, later known as the Hevelli and Sprevane, reached the region.
=== 12th century to 16th century ===
In the 12th century the region came under German rule as part of the Margraviate of Brandenburg, founded by Albert the Bear in 1157. Early evidence of medieval settlement in the area of today's Berlin comes from remnants of a house foundation dated to 1270–1290, found in excavations in Berlin Mitte. The first written records of towns in the area of present-day Berlin date from the late 12th century. Spandau is first mentioned in 1197 and Köpenick in 1209. 1237 is considered the founding date of the city. The two towns over time formed close economic and social ties and profited from the staple right on two important trade routes, one known as the Via Imperii and the other reaching from Bruges to Novgorod. In 1307 the two towns formed an alliance with a common external policy, their internal administrations still being separated.
In 1326 the territory of Berlin was raided by pagan Lithuanians during the Raid on Brandenburg.
Members of the Hohenzollern family ruled in Berlin until 1918, first as electors of Brandenburg, then as kings of Prussia, and eventually as German emperors. In 1443, Frederick II Irontooth started the construction of a new royal palace in the twin city Berlin-Cölln. The protests of the town citizens against the building culminated in 1448, in the "Berlin Indignation" (German: Berliner Unwille). Officially, the Berlin-Cölln palace became permanent residence of the Brandenburg electors of the Hohenzollerns from 1486, when John Cicero came to power. Berlin-Cölln, however, had to give up its status as a free Hanseatic League city. In 1539, the electors and the city officially became Lutheran.
=== 17th to 19th centuries ===
The Thirty Years' War between 1618 and 1648 devastated Berlin. One third of its houses were damaged or destroyed, and the city lost half of its population. Frederick William, known as the "Great Elector", who had succeeded his father George William as ruler in 1640, initiated a policy of promoting immigration and religious tolerance. With the Edict of Potsdam in 1685, Frederick William offered asylum to the French Huguenots.
By 1700, approximately 30 percent of Berlin's residents were French, because of the Huguenot immigration. Many other immigrants came from Bohemia, Poland, and Salzburg.
Since 1618, the Margraviate of Brandenburg had been in personal union with the Duchy of Prussia. In 1701, the dual state formed the Kingdom of Prussia, as Frederick III, Elector of Brandenburg, crowned himself King Frederick I in Prussia. Berlin became the capital of the new kingdom, replacing Königsberg. This was a successful attempt to centralise the capital in the far-flung state, and the city began to grow in earnest. In 1709, Berlin merged with the four cities of Cölln, Friedrichswerder, Friedrichstadt and Dorotheenstadt under the name Berlin, "Haupt- und Residenzstadt Berlin".
In 1740, Frederick II, known as Frederick the Great (1740–1786), came to power. Under the rule of Frederick II, Berlin became a center of the Enlightenment, but also, was briefly occupied during the Seven Years' War by the Russian army. Following France's victory in the War of the Fourth Coalition, Napoleon Bonaparte marched into Berlin in 1806, but granted self-government to the city. In 1815, the city became part of the new Province of Brandenburg.
The Industrial Revolution transformed Berlin during the 19th century; the city's economy and population expanded dramatically, and it became the main railway hub and economic center of Germany. Additional suburbs soon developed and increased the area and population of Berlin. In 1861, neighboring suburbs including Wedding and Moabit were incorporated into Berlin. In 1871, Berlin became capital of the newly founded German Empire. In 1881, it became a city district separate from Brandenburg.
=== 20th to 21st centuries ===
In the early 20th century, Berlin had become a fertile ground for the German Expressionist movement. New artistic styles were invented in fields such as architecture, painting, and cinema. At the end of World War I in 1918, a republic was proclaimed by Philipp Scheidemann at the Reichstag building. In 1920, the Greater Berlin Act incorporated dozens of suburban cities, villages, and estates around Berlin into an expanded city. The act increased the area of Berlin from 66 to 883 km2 (25 to 341 sq mi). The population almost doubled, and Berlin had a population of around four million. During the Weimar era, Berlin underwent political unrest due to economic uncertainties but also became a renowned center of the Roaring Twenties. The metropolis experienced its heyday as a major world capital and was known for its leadership roles in science, technology, arts, the humanities, city planning, film, higher education, government, and industries. Albert Einstein rose to public prominence during his years in Berlin, being awarded the Nobel Prize for Physics in 1921.
In 1933, Adolf Hitler and the Nazi Party came to power. Hitler was inspired by the architecture he had experienced in Vienna, and he wished for a German Empire with a capital city that had a monumental ensemble. The National Socialist regime embarked on monumental construction projects in Berlin as a way to express their power and authority through architecture. Adolf Hitler and Albert Speer developed architectural concepts for the conversion of the city into World Capital Germania; these were never implemented.
NSDAP rule diminished Berlin's Jewish community from 160,000 (one-third of all Jews in the country) to about 80,000 due to emigration between 1933 and 1939. After Kristallnacht in 1938, thousands of the city's Jews were imprisoned in the nearby Sachsenhausen concentration camp. Starting in early 1943, many were deported to ghettos like Łódź, and to concentration and extermination camps such as Auschwitz.
Berlin hosted the 1936 Summer Olympics for which the Olympic stadium was built.
During World War II, Berlin was the location of multiple Nazi prisons, forced labour camps, 17 subcamps of the Sachsenhausen concentration camp for men and women, including teenagers, of various nationalities, including Polish, Jewish, French, Belgian, Czechoslovak, Russian, Ukrainian, Romani, Dutch, Greek, Norwegian, Spanish, Luxembourgish, German, Austrian, Italian, Yugoslavian, Bulgarian, Hungarian, a camp for Sinti and Romani people (see Romani Holocaust), and the Stalag III-D prisoner-of-war camp for Allied POWs of various nationalities.
During World War II, large parts of Berlin were destroyed during the 1943–45 Allied air raids and the 1945 Battle of Berlin. The Allies dropped 67,607 tons of bombs on the city, destroying 6,427 acres of the built-up area. Around 125,000 civilians were killed. After the end of World War II in Europe in May 1945, Berlin received large numbers of refugees from the Eastern provinces. The victorious powers divided the city into four sectors, analogous to Allied-occupied Germany; the sectors of the Western Allies (the United States, the United Kingdom, and France) formed West Berlin, while the Soviet sector formed East Berlin.
All four Allies of World War II shared administrative responsibilities for Berlin. However, in 1948, when the Western Allies extended the currency reform in the Western zones of Germany to the three western sectors of Berlin, the Soviet Union imposed the Berlin Blockade on the access routes to and from West Berlin, which lay entirely inside Soviet-controlled territory. The Berlin airlift, conducted by the three western Allies, overcame this blockade by supplying food and other supplies to the city from June 1948 to May 1949. In 1949, the Federal Republic of Germany was founded in West Germany and eventually included all of the American, British and French zones, excluding those three countries' zones in Berlin, while the Marxist–Leninist German Democratic Republic was proclaimed in East Germany. West Berlin officially remained an occupied city, but it was politically aligned with the Federal Republic of Germany despite its geographic isolation. Airline service to West Berlin was granted only to American, British and French airlines.
The founding of the two German states increased Cold War tensions. West Berlin was surrounded by East German territory, and East Germany proclaimed the Eastern part as its capital, a move the western powers did not recognize. East Berlin included most of the city's historic center. The West German government established itself in Bonn. In 1961, East Germany began to build the Berlin Wall around West Berlin, and events escalated to a tank standoff at Checkpoint Charlie. West Berlin was now de facto a part of West Germany with a unique legal status, while East Berlin was de facto a part of East Germany. John F. Kennedy gave his "Ich bin ein Berliner" speech on 26 June 1963, in front of the Schöneberg city hall, located in the city's western part, underlining the US support for West Berlin. Berlin was completely divided. Although it was possible for Westerners to pass to the other side through strictly controlled checkpoints, for most Easterners, travel to West Berlin or West Germany was prohibited by the government of East Germany. In 1971, a Four-Power Agreement guaranteed access to and from West Berlin by car or train through East Germany.
In 1989, with the end of the Cold War and pressure from the East German population, the Berlin Wall fell on 9 November and was subsequently mostly demolished. Today, the East Side Gallery preserves a large portion of the wall. On 3 October 1990, the two parts of Germany were reunified as the Federal Republic of Germany, and Berlin again became a reunified city. After the fall of the Berlin Wall, the city experienced significant urban development, and the Wall's former course still impacts urban planning decisions.
Walter Momper, the mayor of West Berlin, served as the first mayor of the reunified city in the interim. City-wide elections in December 1990 resulted in the first "all Berlin" mayor taking office in January 1991, with the separate offices of mayor in East and West Berlin expiring by that time; Eberhard Diepgen (a former mayor of West Berlin) became the first elected mayor of a reunited Berlin. On 18 June 1994, soldiers from the United States, France and Britain marched in a parade as part of the ceremonies marking the withdrawal of Allied occupation troops from the reunified city (the last Russian troops departed on 31 August, while the final departure of Western Allied forces was on 8 September 1994). On 20 June 1991, the Bundestag (German parliament) voted to move the seat of the German capital from Bonn to Berlin, a move completed in 1999, during the chancellorship of Gerhard Schröder.
Berlin's 2001 administrative reform merged several boroughs, reducing their number from 23 to 12.
In 2006, the FIFA World Cup Final was held in Berlin.
Construction of the "Berlin Wall Trail" (Berliner Mauerweg) began in 2002 and was completed in 2006.
In a 2016 terrorist attack linked to ISIL, a truck was deliberately driven into a Christmas market next to the Kaiser Wilhelm Memorial Church, leaving 13 people dead and 55 others injured.
In 2018, more than 200,000 protestors took to the streets in Berlin with demonstrations of solidarity against racism, in response to the emergence of far-right politics in Germany.
Berlin Brandenburg Airport (BER) opened in 2020, nine years later than planned, with Terminal 1 coming into service at the end of October and flights to and from Tegel Airport ending in November. Due to the fall in passenger numbers resulting from the COVID-19 pandemic, plans were announced to close BER's Terminal 5, the former Schönefeld Airport, beginning in March 2021. The connecting link of U-Bahn line U5 from Alexanderplatz to Hauptbahnhof, along with the new stations Rotes Rathaus and Unter den Linden, opened on 4 December 2020; the Museumsinsel U-Bahn station followed in 2021, completing all new works on the U5.
A partial opening by the end of 2020 of the Humboldt Forum museum, housed in the reconstructed Berlin Palace, was postponed until March 2021. On 16 September 2022, the eastern wing, the last section of the museum, opened, completing the Humboldt Forum. It is currently Germany's most expensive cultural project.
=== Berlin-Brandenburg fusion attempt ===
The legal basis for a combined state of Berlin and Brandenburg differs from that of other state fusion proposals. Normally, Article 29 of the Basic Law stipulates that a state fusion requires a federal law. However, a clause added to the Basic Law in 1994, Article 118a, allows Berlin and Brandenburg to unify without federal approval, requiring only a referendum and ratification by both state parliaments.
In 1996, there was an unsuccessful attempt to unify the states of Berlin and Brandenburg. The two states share a common history, dialect and culture, and as of 2020 over 225,000 residents of Brandenburg commuted to Berlin. The fusion had near-unanimous support from a broad coalition of both state governments, political parties, media, business associations, trade unions and churches. Though Berlin voted in favor by a small margin, largely based on support in former West Berlin, Brandenburg voters disapproved of the fusion by a large margin. It failed largely because Brandenburg voters did not want to take on Berlin's large and growing public debt and feared losing identity and influence to the capital.
== Geography ==
=== Topography ===
Berlin is in northeastern Germany, in an area of low-lying marshy woodlands with a mainly flat topography, part of the vast Northern European Plain which stretches all the way from northern France to western Russia. The Berliner Urstromtal (an ice age glacial valley), between the low Barnim Plateau to the north and the Teltow plateau to the south, was formed by meltwater flowing from ice sheets at the end of the last Weichselian glaciation. The Spree follows this valley now. In Spandau, a borough in the west of Berlin, the Spree empties into the river Havel, which flows from north to south through western Berlin. The course of the Havel is more like a chain of lakes, the largest being the Tegeler See and the Großer Wannsee. A series of lakes also feeds into the upper Spree, which flows through the Großer Müggelsee in eastern Berlin.
Substantial parts of present-day Berlin extend onto the low plateaus on both sides of the Spree Valley. Large parts of the boroughs Reinickendorf and Pankow lie on the Barnim Plateau, while most of the boroughs of Charlottenburg-Wilmersdorf, Steglitz-Zehlendorf, Tempelhof-Schöneberg, and Neukölln lie on the Teltow Plateau.
The borough of Spandau lies partly within the Berlin Glacial Valley and partly on the Nauen Plain, which stretches to the west of Berlin. Since 2015, the Arkenberge hills in Pankow, at 122 meters (400 ft) elevation, have been the highest point in Berlin. Through the continued disposal of construction debris, they surpassed Teufelsberg (120.1 m or 394 ft), which was itself piled up from the rubble of Second World War ruins. The Müggelberge, at 114.7 meters (376 ft) elevation, is the highest natural point, and the lowest is the Spektesee in Spandau, at 28.1 meters (92 ft) elevation.
=== Climate ===
Berlin has an oceanic climate (Köppen: Cfb) bordering on a humid continental climate (Dfb). This type of climate features mild to very warm summer temperatures and cold, though not very severe, winters. Annual precipitation is modest.
Frosts are common in winter, and there are larger temperature differences between seasons than is typical for many oceanic climates. Summers are warm and sometimes humid with average high temperatures of 22–25 °C (72–77 °F) and lows of 12–14 °C (54–57 °F). Winters are cold with average high temperatures of 3 °C (37 °F) and lows of −2 to 0 °C (28 to 32 °F). Spring and autumn are generally chilly to mild. Berlin's built-up area creates a microclimate, with heat stored by the city's buildings and pavement. Temperatures can be 4 °C (7 °F) higher in the city than in the surrounding areas. Annual precipitation is 570 millimeters (22 in) with moderate rainfall throughout the year. Snowfall mainly occurs from December through March. The hottest month in Berlin was July 1757, with a mean temperature of 23.9 °C (75.0 °F) and the coldest was January 1709, with a mean temperature of −13.2 °C (8.2 °F). The wettest month on record was July 1907, with 230 millimeters (9.1 in) of rainfall, whereas the driest were October 1866, November 1902, October 1908 and September 1928, all with 1 millimeter (0.039 in) of rainfall.
== Cityscape and architecture ==
=== Cityscape ===
Berlin's history has left the city with a polycentric metropolitan area and an eclectic mix of architecture. The city's appearance today was predominantly shaped by the German history of the 20th century. 17% of Berlin's buildings date from the Gründerzeit era or earlier, and nearly 25% date from the 1920s and 1930s, when Berlin played a part in the origin of modern architecture.
After the city was devastated by the bombing of Berlin in World War II, many of the buildings that had survived in both East and West were demolished during the postwar period. Since reunification, many important heritage structures have been reconstructed, including the Forum Fridericianum along with the Berlin State Opera, Charlottenburg Palace, Gendarmenmarkt, Alte Kommandantur, and the City Palace.
The tallest buildings in Berlin are spread across the urban area, with clusters at Potsdamer Platz, City West, and Alexanderplatz.
Over one-third of the city's area consists of green space and open space, with the Großer Tiergarten, one of the largest and most popular parks in Berlin, located in the centre of the city.
=== Architecture ===
The Fernsehturm (TV tower) at Alexanderplatz in Mitte is among the tallest structures in the European Union at 368 m (1,207 ft). Built in 1969, it is visible throughout most of the central districts of Berlin. The city can be viewed from its 204-meter-high (669 ft) observation floor. Starting here, the Karl-Marx-Allee heads east, an avenue lined by monumental residential buildings, designed in the Socialist Classicism style. Adjacent to this area is the Rotes Rathaus (City Hall), with its distinctive red-brick architecture. In front of it is the Neptunbrunnen, a fountain featuring a mythological group of Tritons, personifications of the four main Prussian rivers, and Neptune on top of it. Nearby is the Nikolaiviertel, the reconstructed oldest settlement area in the city.
The Brandenburg Gate is an iconic landmark of Berlin and Germany; it stands as a symbol of eventful European history and of unity and peace. The Reichstag building is the traditional seat of the German Parliament. It was remodeled by British architect Norman Foster in the 1990s and features a glass dome over the session area, which allows free public access to the parliamentary proceedings and magnificent views of the city.
The East Side Gallery is an open-air exhibition of art painted directly on the last existing portions of the Berlin Wall. It is the largest remaining evidence of the city's historical division.
The Gendarmenmarkt is a neoclassical square in Berlin, the name of which derives from the headquarters of the famous Gens d'armes regiment located here in the 18th century. Two similarly designed cathedrals border it, the Französischer Dom with its observation platform and the Deutscher Dom. The Konzerthaus (Concert Hall), home of the Berlin Symphony Orchestra, stands between the two cathedrals.
The Museum Island in the River Spree houses five museums built from 1830 to 1930 and is a UNESCO World Heritage site. Restoration and construction of a main entrance to all museums (James Simon Gallery), as well as reconstruction of the Berlin Palace (Stadtschloss), were completed. Also on the island, next to the Lustgarten and palace, is Berlin Cathedral, Emperor William II's ambitious attempt to create a Protestant counterpart to St. Peter's Basilica in Rome. A large crypt houses the remains of members of the earlier Prussian royal family. St. Hedwig's Cathedral is Berlin's Roman Catholic cathedral.
Unter den Linden is a tree-lined east–west avenue from the Brandenburg Gate to the Berlin Palace, and was once Berlin's premier promenade. Many Classical buildings line the street, and part of Humboldt University is there. Friedrichstraße was Berlin's legendary street during the Golden Twenties. It combines 20th-century traditions with the modern architecture of today's Berlin.
Potsdamer Platz is an entire quarter built from scratch after the Wall came down. To the west of Potsdamer Platz is the Kulturforum, which houses the Gemäldegalerie, and is flanked by the Neue Nationalgalerie and the Berliner Philharmonie. The Memorial to the Murdered Jews of Europe, a Holocaust memorial, is to the north.
The area around Hackescher Markt is home to fashionable culture, with countless clothing outlets, clubs, bars, and galleries. This includes the Hackesche Höfe, a conglomeration of buildings around several courtyards, reconstructed around 1996. The nearby New Synagogue is the center of Jewish culture.
The Straße des 17. Juni, connecting the Brandenburg Gate and Ernst-Reuter-Platz, serves as the central east–west axis. Its name commemorates the uprisings in East Berlin of 17 June 1953. Approximately halfway from the Brandenburg Gate is the Großer Stern, a circular traffic island on which the Siegessäule (Victory Column) is situated. This monument, built to commemorate Prussia's victories, was relocated in 1938–39 from its previous position in front of the Reichstag.
The Kurfürstendamm is home to some of Berlin's most luxurious stores, with the Kaiser Wilhelm Memorial Church at its eastern end on Breitscheidplatz. The church was destroyed in the Second World War and left in ruins. Nearby on Tauentzienstraße is KaDeWe, claimed to be continental Europe's largest department store. The Rathaus Schöneberg, where John F. Kennedy made his famous "Ich bin ein Berliner!" speech, is in Tempelhof-Schöneberg.
West of the center, Bellevue Palace is the residence of the German President. Charlottenburg Palace, which was burnt out in the Second World War, is the largest historical palace in Berlin.
The Funkturm Berlin is a 150-meter-tall (490 ft) lattice radio tower in the fairground area, built between 1924 and 1926. It is the only observation tower which stands on insulators and has a restaurant 55 m (180 ft) and an observation deck 126 m (413 ft) above ground, which is reachable by a windowed elevator.
The Oberbaumbrücke over the Spree river is Berlin's most iconic bridge, connecting the now-combined boroughs of Friedrichshain and Kreuzberg. It carries vehicles, pedestrians, and the U1 Berlin U-Bahn line. The bridge, completed in 1896 in the Brick Gothic style with an upper deck for the U-Bahn, replaced an earlier wooden bridge. The center portion was demolished in 1945 to stop the Red Army from crossing. After the war, the repaired bridge served as a checkpoint and border crossing between the Soviet and American sectors, and later between East and West Berlin. In the mid-1950s, it was closed to vehicles, and after the construction of the Berlin Wall in 1961, pedestrian traffic was heavily restricted. Following German reunification, the center portion was reconstructed with a steel frame, and U-Bahn service resumed in 1995.
== Demographics ==
At the end of 2023 the city-state of Berlin had 3.66 million registered inhabitants, in an area of 891.3 km2 (344.1 sq mi). Berlin is the most populous city proper in the European Union. In 2021, the urban area of Berlin had a population of over 4.6 million inhabitants. As of 2019, the functional urban area was home to about 5.2 million people. The entire Berlin-Brandenburg capital region has a population of more than 6 million in an area of 30,546 km2 (11,794 sq mi).
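The population and area figures above imply rough population densities; a back-of-the-envelope check (illustrative only, using the quoted figures, with the capital region population taken at its 6 million lower bound):

```python
# Rough density check using the figures quoted above (illustrative only).
city_population = 3_660_000    # registered inhabitants, end of 2023
city_area_km2 = 891.3          # km^2

region_population = 6_000_000  # "more than 6 million" (lower bound)
region_area_km2 = 30_546       # km^2

city_density = city_population / city_area_km2
region_density = region_population / region_area_km2

print(f"City proper: ~{city_density:,.0f} inhabitants/km^2")      # ~4,106
print(f"Capital region: ~{region_density:,.0f} inhabitants/km^2")  # ~196
```

The roughly twenty-fold gap between the two densities reflects how much of the capital region is rural Brandenburg.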
In 2014, the city-state Berlin had 37,368 live births (+6.6%), a record number since 1991. The number of deaths was 32,314. Almost 2 million households were counted in the city, of which 54 percent were inhabited by a single person. More than 337,000 families with children under the age of 18 lived in Berlin. In 2014, the German capital registered a migration surplus of approximately 40,000 people.
=== Nationalities ===
National and international migration into the city has a long history. In 1685, after the revocation of the Edict of Nantes in France, the city responded with the Edict of Potsdam, which guaranteed religious freedom and tax-free status to French Huguenot refugees for ten years. The Greater Berlin Act in 1920 incorporated many suburbs and surrounding cities of Berlin. It formed most of the territory that comprises modern Berlin and increased the population from 1.9 million to 4 million.
Active immigration and asylum politics in West Berlin triggered waves of immigration in the 1960s and 1970s. Berlin is home to at least 180,000 Turkish and Turkish German residents, making it the largest Turkish community outside of Turkey. In the 1990s the Aussiedlergesetze enabled immigration to Germany of some residents from the former Soviet Union. Today ethnic Germans from countries of the former Soviet Union make up the largest portion of the Russian-speaking community. The last decade has seen an influx from various Western countries and some African regions. A portion of the African immigrants have settled in the Afrikanisches Viertel. Young Germans, EU-Europeans, and Israelis have also settled in the city.
In December 2019 there were 777,345 registered residents of foreign nationality and another 542,975 German citizens with a "migration background" (Migrationshintergrund), meaning they or one of their parents immigrated to Germany after 1955. Foreign residents of Berlin originate from about 190 countries, and in 2017, 48 percent of residents under the age of 15 had a migration background. In 2009, Berlin was estimated to have 100,000 to 250,000 unregistered inhabitants. Boroughs with a significant migrant or foreign-born population are Mitte, Neukölln, and Friedrichshain-Kreuzberg. The number of Arabic speakers in Berlin could be higher than 150,000, and there are at least 40,000 Berliners with Syrian citizenship, the third-largest foreign nationality after Turkish and Polish citizens. The 2015 refugee crisis made Berlin Europe's capital of Arab culture. Berlin is also among the German cities that received the largest number of refugees after the 2022 Russian invasion of Ukraine: as of November 2022, an estimated 85,000 Ukrainian refugees were registered in Berlin, making it the most popular destination for Ukrainian refugees in Germany.
Berlin has a vibrant expatriate community that includes, among others, precarious immigrants, seasonal workers, and refugees. As a result, the city sustains a broad variety of English speakers, and speaking a particular variety of English can carry prestige and cultural capital in Berlin.
=== Languages ===
German is the official and predominant spoken language in Berlin. It is a West Germanic language that derives most of its vocabulary from the Germanic branch of the Indo-European language family. German is one of 24 languages of the European Union, and one of the three working languages of the European Commission.
Berlinerisch or Berlinisch is, strictly speaking, not a dialect. It is spoken in Berlin and the surrounding metropolitan area and originates from a Brandenburgish variant. It is now regarded more as a sociolect, largely through increased immigration and a trend among the educated population to speak standard German in everyday life.
The most commonly spoken foreign languages in Berlin are Turkish, Polish, English, Persian, Arabic, Italian, Bulgarian, Russian, Romanian, Kurdish, Serbo-Croatian, French, Spanish and Vietnamese. Turkish, Arabic, Kurdish, and Serbo-Croatian are heard more often in the western part due to the large Middle Eastern and former-Yugoslavian communities. Polish, English, Russian, and Vietnamese have more native speakers in East Berlin.
=== Religion ===
According to the 2011 census, approximately 37 percent of the population reported being members of a legally recognized church or religious organization. The remainder either did not belong to such an organization or provided no information.
The largest religious denomination recorded in 2010 was the Protestant regional church body, the Evangelical Church of Berlin-Brandenburg-Silesian Upper Lusatia (EKBO), a united church. EKBO is a member of the Protestant Church in Germany (EKD) and of the Union of Protestant Churches in the EKD (UEK). According to the EKBO, its membership accounted for 18.7 percent of the local population, while the Roman Catholic Church had 9.1 percent of residents registered as its members. About 2.7% of the population identify with other Christian denominations (mostly Eastern Orthodox, but also various Protestants). According to the Berlin residents register, in 2018 14.9 percent were members of the Evangelical Church, and 8.5 percent were members of the Catholic Church. The government keeps a register of members of these churches for tax purposes, because it collects church tax on their behalf. It does not keep records in this way for members of other religious organizations, which may collect their own church tax.
In 2009, approximately 249,000 Muslims were reported by the Office of Statistics to be members of mosques and Islamic religious organizations in Berlin, while in 2016, the newspaper Der Tagesspiegel estimated that about 350,000 Muslims observed Ramadan in Berlin. In 2019, about 437,000 registered residents, 11.6% of the total, reported having a migration background from one of the Member states of the Organization of Islamic Cooperation. Between 1992 and 2011 the Muslim population almost doubled.
About 0.9% of Berliners belong to other religions. Of the estimated population of 30,000–45,000 Jewish residents, approximately 12,000 are registered members of religious organizations.
Berlin is the seat of the Roman Catholic archbishop of Berlin and EKBO's elected chairperson is titled the bishop of EKBO. Furthermore, Berlin is the seat of many Orthodox cathedrals, such as the Cathedral of St. Boris the Baptist, one of the two seats of the Bulgarian Orthodox Diocese of Western and Central Europe, and the Resurrection of Christ Cathedral of the Diocese of Berlin (Patriarchate of Moscow).
The faithful of the different religions and denominations maintain many places of worship in Berlin. The Independent Evangelical Lutheran Church has eight parishes of different sizes in Berlin. There are 36 Baptist congregations (within Union of Evangelical Free Church Congregations in Germany), 29 New Apostolic Churches, 15 United Methodist churches, eight Free Evangelical Congregations, four Churches of Christ, Scientist (1st, 2nd, 3rd, and 11th), six congregations of the Church of Jesus Christ of Latter-day Saints, an Old Catholic church, and an Anglican church in Berlin. Berlin has more than 80 mosques, ten synagogues, and two Buddhist as well as four Hindu temples.
== Government and politics ==
=== German federal city state ===
Since the German reunification on 3 October 1990, Berlin has been one of the three city-states of Germany among the present 16 federal states of Germany. The Abgeordnetenhaus von Berlin (House of Representatives) functions as the city and state parliament, which has 141 seats. Berlin's executive body is the Senate of Berlin (Senat von Berlin). The Senate consists of the Governing Mayor of Berlin (Regierender Bürgermeister), and up to ten senators holding ministerial positions, two of them holding the title of "Mayor" (Bürgermeister) as deputy to the Governing Mayor.
The total annual budget of Berlin in 2015 exceeded €24.5 billion ($30.0 billion), including a budget surplus of €205 million ($240 million). The city-state of Berlin owns extensive assets, including administrative and government buildings, real estate companies, and stakes in the Olympic Stadium, swimming pools, housing companies, and numerous public enterprises and subsidiary companies. The state also runs a real estate portal to advertise commercial spaces and land suitable for redevelopment.
The Social Democratic Party (SPD) and The Left (Die Linke) took control of the city government after the 2001 state election and won another term in the 2006 state election. From the 2016 state election until the 2023 state election, there was a coalition between the Social Democratic Party, the Greens and the Left Party. Since April 2023, the government has been formed by a coalition between the Christian Democrats and the Social Democrats.
The Governing Mayor is simultaneously Lord Mayor of the City of Berlin (Oberbürgermeister der Stadt) and Minister President of the State of Berlin (Ministerpräsident des Bundeslandes). The office of the Governing Mayor is in the Rotes Rathaus (Red City Hall). Since 2023, this office has been held by Kai Wegner of the Christian Democrats. He is the first conservative mayor in Berlin in more than two decades.
=== Boroughs ===
Berlin is subdivided into 12 boroughs or districts (Bezirke). Each borough has several subdistricts or neighborhoods (Ortsteile), which have roots in much older municipalities that predate the formation of Greater Berlin on 1 October 1920. These subdistricts became urbanized and incorporated into the city later on. Many residents strongly identify with their neighborhoods, colloquially called Kiez. At present, Berlin consists of 96 subdistricts, which are commonly made up of several smaller residential areas or quarters.
Each borough is governed by a borough council (Bezirksamt) consisting of five councilors (Bezirksstadträte) including the borough's mayor (Bezirksbürgermeister). The council is elected by the borough assembly (Bezirksverordnetenversammlung). However, the individual boroughs are not independent municipalities, but subordinate to the Senate of Berlin. The borough's mayors make up the council of mayors (Rat der Bürgermeister), which is led by the city's Governing Mayor and advises the Senate. The neighborhoods have no local government bodies.
=== City partnerships ===
Berlin maintains official partnerships with 17 cities. Town twinning between West Berlin and other cities began in 1967 with its sister city Los Angeles, California. East Berlin's partnerships were canceled at the time of German reunification.
=== Capital city ===
Berlin is the capital of the Federal Republic of Germany. The President of Germany, whose functions are mainly ceremonial under the German constitution, has their official residence in Bellevue Palace. Berlin is the seat of the German Chancellor (Prime Minister), housed in the Chancellery building, the Bundeskanzleramt. Facing the Chancellery is the Bundestag, the German Parliament, housed in the renovated Reichstag building since the government's relocation to Berlin in 1998. The Bundesrat ("federal council", performing the function of an upper house) is the representation of the 16 constituent states (Länder) of Germany and has its seat at the former Prussian House of Lords. The total annual federal budget managed by the German government exceeded €310 ($375) billion in 2013.
The relocation of the federal government and Bundestag to Berlin was mostly completed in 1999. However, some ministries, as well as some minor departments, stayed in the federal city Bonn, the former capital of West Germany. Discussions about moving the remaining ministries and departments to Berlin continue.
The Federal Foreign Office and the ministries and departments of Defense, Justice and Consumer Protection, Finance, Interior, Economic Affairs and Energy, Labor and Social Affairs, Family Affairs, Senior Citizens, Women and Youth, Environment, Nature Conservation and Nuclear Safety, Food and Agriculture, Economic Cooperation and Development, Health, Transport and Digital Infrastructure and Education and Research are based in the capital.
=== Embassies ===
Berlin hosts a total of 158 foreign embassies as well as the headquarters of many think tanks, trade unions, nonprofit organizations, lobbying groups, and professional associations. Official visits and diplomatic consultations among governmental representatives and national leaders are frequent in contemporary Berlin.
== Economy ==
In 2018, the GDP of Berlin totaled €147 billion, an increase of 3.1% over the previous year. Berlin's economy is dominated by the service sector, with around 84% of all companies doing business in services. In 2015, the total labor force in Berlin was 1.85 million. The unemployment rate reached a 24-year low in November 2015 and stood at 10.0%. From 2012 to 2015 Berlin, as a German state, had the highest annual employment growth rate. Around 130,000 jobs were added in this period.
Important economic sectors in Berlin include life sciences, transportation, information and communication technologies, media and music, advertising and design, biotechnology, environmental services, construction, e-commerce, retail, hotel business, and medical engineering.
Research and development have economic significance for the city. Several major corporations like Volkswagen, Pfizer, and SAP operate innovation laboratories in the city.
The Science and Business Park in Adlershof is the largest technology park in Germany measured by revenue. Within the eurozone, Berlin has become a center for business relocation and international investments.
=== Companies ===
Many German and international companies have business or service centers in the city. For several years Berlin has been recognized as a major center of business founders. In 2015, Berlin generated the most venture capital for young startup companies in Europe.
Among the 10 largest employers in Berlin are the city-state of Berlin, Deutsche Bahn (the largest railway company in the world), the hospital providers Charité and Vivantes, the Federal Government of Germany, the local public transport provider BVG, Siemens, and Deutsche Telekom.
Siemens, a Global 500 and DAX-listed company, is partly headquartered in Berlin. Other DAX-listed companies headquartered in Berlin are the property company Deutsche Wohnen and the online food delivery service Delivery Hero. The national railway operator Deutsche Bahn, Europe's largest digital publisher Axel Springer, and the MDAX-listed firms Zalando and HelloFresh also have their main headquarters in the city. Among the largest international corporations with their German or European headquarters in Berlin are Bombardier Transportation, Securing Energy for Europe, Coca-Cola, Pfizer, Sony, and TotalEnergies.
As of 2023, Sparkassen-Finanzgruppe, a network of public banks that together form the largest financial services group in Germany and in all of Europe, is headquartered in Berlin. The Bundesverband der Deutschen Volksbanken und Raiffeisenbanken also has its headquarters in Berlin, managing around 1.2 trillion euros. The three largest banks in the capital are Deutsche Kreditbank, Landesbank Berlin, and Berlin Hyp.
Mercedes-Benz Group manufactures cars, and BMW builds motorcycles in Berlin. In 2022, American electric car manufacturer Tesla opened its first European Gigafactory outside the city borders in Grünheide (Mark), Brandenburg. The Pharmaceuticals division of Bayer and Berlin Chemie are major pharmaceutical companies in the city.
=== Tourism and conventions ===
Berlin had 788 hotels with 134,399 beds in 2014. The city recorded 28.7 million overnight hotel stays and 11.9 million hotel guests in 2014. Tourism figures have more than doubled within the last ten years, and Berlin has become the third-most-visited city destination in Europe. Some of the most visited places in Berlin include: Potsdamer Platz, Brandenburger Tor, the Berlin Wall, Alexanderplatz, Museumsinsel, Fernsehturm, the East-Side Gallery, Schloss-Charlottenburg, Zoologischer Garten, Siegessäule, Gedenkstätte Berliner Mauer, Mauerpark, Botanical Garden, Französischer Dom, Deutscher Dom and Holocaust-Mahnmal. The largest visitor groups are from Germany, the United Kingdom, the Netherlands, Italy, Spain and the United States.
According to figures from the International Congress and Convention Association in 2015, Berlin became the leading organizer of conferences globally, hosting 195 international meetings. Some of these congress events take place on venues such as CityCube Berlin or the Berlin Congress Center (bcc).
The Messe Berlin (also known as Berlin ExpoCenter City) is the main convention organizing company in the city. Its main exhibition area covers more than 160,000 square meters (1,722,226 sq ft). Several large-scale trade fairs like the consumer electronics trade fair IFA, where the first practical audio tape recorder and the first completely electronic television system were first introduced to the public, the ILA Berlin Air Show, the Berlin Fashion Week (including the Premium Berlin and the Panorama Berlin), the Green Week, the Fruit Logistica, the transport fair InnoTrans, the tourism fair ITB and the adult entertainment and erotic fair Venus are held annually in the city, attracting a significant number of business visitors.
=== Creative industries ===
The creative arts and entertainment business is an important part of Berlin's economy. The sector comprises music, film, advertising, architecture, art, design, fashion, performing arts, publishing, R&D, software, TV, radio, and video games.
In 2014, around 30,500 creative companies operated in the Berlin-Brandenburg metropolitan region, predominantly SMEs. Generating revenue of €15.6 billion, 6% of all private-sector sales, the culture industry grew from 2009 to 2014 at an average rate of 5.5% per year.
Berlin is an important European and German film industry hub. It is home to more than 1,000 film and television production companies and 270 movie theaters, and around 300 national and international co-productions are filmed in the region every year. The historic Babelsberg Studios and the production company UFA are adjacent to Berlin in Potsdam. The city is also home of the German Film Academy (Deutsche Filmakademie), founded in 2003, and the European Film Academy, founded in 1988.
=== Media ===
Berlin is home to many magazine, newspaper, book, and scientific/academic publishers and their associated service industries. In addition, around 20 news agencies, more than 90 regional daily newspapers and their websites, as well as the Berlin offices of more than 22 national publications such as Der Spiegel, and Die Zeit reinforce the capital's position as Germany's epicenter for influential debate. Therefore, many international journalists, bloggers, and writers live and work in the city.
Berlin is a central location for several international and regional television and radio stations. The public broadcaster RBB has its headquarters in Berlin, as do the commercial broadcasters MTV Europe and Welt. German international public broadcaster Deutsche Welle has its TV production unit in Berlin, and most national German broadcasters have a studio in the city, including ZDF and RTL.
Berlin has Germany's largest number of daily newspapers, with numerous local broadsheets (Berliner Morgenpost, Berliner Zeitung, Der Tagesspiegel), and three major tabloids, as well as national dailies of varying sizes, each with a different political affiliation, such as Die Welt, Neues Deutschland, and Die Tageszeitung. The Berliner, a monthly magazine, is Berlin's English-language periodical, and La Gazette de Berlin is a French-language newspaper.
Berlin is also home to major German-language publishing houses, including Walter de Gruyter, Springer, the Ullstein Verlagsgruppe (publishing group), Suhrkamp, and Cornelsen, each of which publishes books, periodicals, and multimedia products.
== Quality of life ==
According to Mercer, Berlin ranked number 19 in the Quality of Living City Ranking in 2024.
Also in 2024, according to Monocle, Berlin occupied the position of the 17th-most-livable city in the world. The Economist Intelligence Unit ranked Berlin number 21 among all global cities for livability. In 2019 Berlin was also number 8 on the Global Power City Index. In the same year Berlin was honored for having the best future prospects of all cities in Germany.
== Transport ==
=== Roads ===
Berlin's transport infrastructure provides a diverse range of urban mobility.
A total of 979 bridges cross 197 km (122 miles) of inner-city waterways. Berlin's roads total 5,422 km (3,369 miles), of which 77 km (48 miles) are motorways (known as Autobahn). In 2013, only 1.344 million motor vehicles were registered in the city. With 377 cars per 1,000 residents in 2013 (compared with 570 per 1,000 in Germany as a whole), Berlin has one of the lowest rates of car ownership among Western global cities.
=== Cycling ===
Berlin is well known for its highly developed bicycle lane system. It is estimated that Berlin has 710 bicycles per 1,000 residents. Around 500,000 daily bike riders accounted for 13 percent of total traffic in 2010.
Cyclists in Berlin have access to 620 km of bicycle paths including approximately 150 km of mandatory bicycle paths, 190 km of off-road bicycle routes, 60 km of bicycle lanes on roads, 70 km of shared bus lanes which are also open to cyclists, 100 km of combined pedestrian/bike paths and 50 km of marked bicycle lanes on roadside pavements or sidewalks. Riders are allowed to carry their bicycles on Regionalbahn (RE), S-Bahn and U-Bahn trains, on trams, and on night buses if a bike ticket is purchased.
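The quoted category lengths are consistent with the stated 620 km total; a quick arithmetic check (figures exactly as quoted above, illustrative only):

```python
# Berlin bicycle infrastructure by category (km), as quoted above.
paths_km = {
    "mandatory bicycle paths": 150,
    "off-road bicycle routes": 190,
    "bicycle lanes on roads": 60,
    "shared bus lanes open to cyclists": 70,
    "combined pedestrian/bike paths": 100,
    "marked lanes on sidewalks": 50,
}

total = sum(paths_km.values())
print(total)  # 620, matching the stated total
```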
=== Taxicabs ===
Taxicabs in Berlin are yellow or beige. In 2024, around 8,000 taxicabs were in service. As in most of Europe, app-based ride-hailing and cab-sharing services are available but limited.
=== Rail ===
Long-distance rail lines connect Berlin directly with all of the major cities of Germany, while the regional rail lines of the Verkehrsverbund Berlin-Brandenburg provide access to Brandenburg and the Baltic Sea. The Berlin Hauptbahnhof (Berlin Central Station) is the largest grade-separated railway station in Europe. Deutsche Bahn runs high-speed Intercity-Express (ICE) trains to domestic destinations, including Hamburg, Munich, Cologne, Stuttgart, and Frankfurt am Main.
=== Water transport ===
The Spree and Havel rivers cross Berlin, but there are no frequent passenger connections to and from the city by water. Berlin's largest harbour, the Westhafen, is located in the district of Moabit. It is a transhipment and storage site for inland shipping of growing importance.
=== Intercity buses ===
There is an increasing number of intercity bus services. Berlin has more than 10 bus stations running services to destinations throughout Germany and Europe, connected via the central intercity bus station, the Zentraler Omnibusbahnhof Berlin.
=== Urban public transport ===
The Berliner Verkehrsbetriebe (BVG) and the German state-owned Deutsche Bahn (DB) manage several extensive urban public transport systems.
Public transport in Berlin has a long and complicated history because of the 20th-century division of the city, during which movement between the two halves was largely severed. Since 1989, the transport network has been developed extensively; however, it still retains early-20th-century traits, such as the U1 line.
=== Airports ===
Berlin is served by one commercial international airport: Berlin Brandenburg Airport (BER), located just outside Berlin's south-eastern border in the state of Brandenburg. Construction began in 2006, with the intention of replacing Tegel Airport (TXL) and Schönefeld Airport (SXF) as the single commercial airport of Berlin. Originally set to open in 2012, it opened for commercial operations in October 2020 after extensive delays and cost overruns. The planned initial capacity of around 27 million passengers per year is to be expanded to bring the terminal capacity to approximately 55 million per year by 2040.
Before the opening of the BER in Brandenburg, Berlin was served by Tegel Airport and Schönefeld Airport. Tegel Airport was within the city limits, and Schönefeld Airport was located at the same site as BER. Both airports together handled 29.5 million passengers in 2015. In 2014, 67 airlines served 163 destinations in 50 countries from Berlin. Tegel Airport was a focus city for Lufthansa and Eurowings while Schönefeld served as an important destination for airlines like Germania, easyJet and Ryanair. Until 2008, Berlin was also served by the smaller Tempelhof Airport, which functioned as a city airport, with a convenient location near the city center, allowing for quick transit times between the central business district and the airport. The airport grounds have since been turned into a city park.
== Rohrpost ==
From 1865 to 1976, Berlin operated an expansive pneumatic postal network, reaching a maximum length of 400 kilometers (roughly 250 miles) by 1940. The system was divided into two distinct networks after 1949. The West Berlin system remained in public use until 1963, and continued to be utilized for government correspondence until 1972. Conversely, the East Berlin system, which incorporated the Hauptelegraphenamt—the central hub of the operation—remained functional until 1976.
== Energy ==
Berlin's two largest energy providers for private households are the Swedish firm Vattenfall and the Berlin-based company GASAG. Both offer electric power and natural gas supply. Some of the city's electric energy is imported from nearby power plants in southern Brandenburg.
As of 2015 the five largest power plants measured by capacity are the Heizkraftwerk Reuter West, the Heizkraftwerk Lichterfelde, the Heizkraftwerk Mitte, the Heizkraftwerk Wilmersdorf, and the Heizkraftwerk Charlottenburg. All of these power stations generate electricity and useful heat at the same time to facilitate buffering during load peaks.
In 1993 the power grid connections in the Berlin-Brandenburg capital region were renewed. In most of the inner districts of Berlin power lines are underground cables; only a 380 kV and a 110 kV line, which run from Reuter substation to the urban Autobahn, use overhead lines. The Berlin 380-kV electric line is the backbone of the city's energy grid.
== Health ==
Berlin has a long history of discoveries in medicine and innovations in medical technology. The modern history of medicine has been significantly influenced by scientists from Berlin. Rudolf Virchow was the founder of cellular pathology, while Robert Koch developed vaccines for anthrax, cholera, and tuberculosis. For his life's work Koch is seen as one of the founders of modern medicine.
The Charité complex (Universitätsklinik Charité) is the largest university hospital in Europe, tracing back its origins to the year 1710. More than half of all German Nobel Prize winners in Physiology or Medicine, including Emil von Behring, Robert Koch and Paul Ehrlich, have worked at the Charité. The Charité is spread over four campuses and comprises around 3,000 beds, 15,500 staff, 8,000 students, and more than 60 operating theaters, and it has a turnover of two billion euros annually.
== Telecommunication ==
Since 2017, the digital television standard in Berlin and Germany is DVB-T2. This system transmits compressed digital audio, digital video and other data in an MPEG transport stream.
Berlin has installed several hundred free public Wireless LAN sites across the capital since 2016. The wireless networks are concentrated mostly in central districts; 650 hotspots (325 indoor and 325 outdoor access points) are installed.
The UMTS (3G) and LTE (4G) networks of the three major cellular operators Vodafone, Telekom Deutschland and O2 enable the use of mobile broadband applications citywide.
== Education and research ==
As of 2014, Berlin had 878 schools, teaching 340,658 students in 13,727 classes and 56,787 trainees in businesses and elsewhere. The city has a 6-year primary education program. After completing primary school, students continue to the Sekundarschule (a comprehensive school) or Gymnasium (college preparatory school). Berlin has a special bilingual school program in the Europaschule, in which children are taught the curriculum in German and a foreign language, starting in primary school and continuing in high school.
The Französisches Gymnasium Berlin, which was founded in 1689 to teach the children of Huguenot refugees, offers instruction in German and French. The John F. Kennedy School, a bilingual German–American public school in Zehlendorf, is particularly popular with children of diplomats and the English-speaking expatriate community. 82 Gymnasien teach Latin and 8 teach Classical Greek.
=== Higher education ===
The Berlin-Brandenburg capital region is one of the most prolific centers of higher education and research in Germany and Europe. Historically, 67 Nobel Prize winners have been affiliated with the Berlin-based universities.
The city has four public research universities and more than 30 private, professional, and technical colleges (Hochschulen), offering a wide range of disciplines. A record 175,651 students were enrolled in the winter term of 2015/16, around 18% of whom had an international background.
The three largest universities combined have approximately 103,000 enrolled students: the Freie Universität Berlin (Free University of Berlin, FU Berlin) with about 33,000 students, the Humboldt-Universität zu Berlin (HU Berlin) with 35,000, and the Technische Universität Berlin (TU Berlin) with 35,000. The Charité Medical School has around 8,000 students. The FU, the HU, the TU, and the Charité make up the Berlin University Alliance, which has received funding from the Excellence Strategy program of the German government. The Universität der Künste (UdK) has about 4,000 students, and ESMT Berlin is one of only four business schools in Germany with triple accreditation. The Hertie School, a private public policy school located in Mitte, has more than 900 students and doctoral students. The Berlin School of Economics and Law has an enrollment of about 11,000 students, the Berlin University of Applied Sciences and Technology about 12,000, and the Hochschule für Technik und Wirtschaft (University of Applied Sciences for Engineering and Economics) about 14,000.
=== Research ===
The city has a high density of internationally renowned research institutions, such as the Fraunhofer Society, the DLR Institute for Planetary Research, the Leibniz Association, the Helmholtz Association, and the Max Planck Society, which are independent of, or only loosely connected to its universities. In 2012, around 65,000 professional scientists were working in research and development in the city.
Berlin is one of the knowledge and innovation communities (KIC) of the European Institute of Innovation and Technology (EIT). The KIC is based at the Center for Entrepreneurship at TU Berlin and focuses on the development of IT industries. It partners with major multinational companies such as Siemens, Deutsche Telekom, and SAP.
One of Europe's successful research, business and technology clusters is based at WISTA in Berlin-Adlershof, with more than 1,000 affiliated firms, university departments and scientific institutions.
In addition to the university-affiliated libraries, the Staatsbibliothek zu Berlin is a major research library. Its two main locations are on Potsdamer Straße and on Unter den Linden. There are also 86 public libraries in the city. ResearchGate, a global social networking site for scientists, is based in Berlin.
== Culture ==
Berlin is known for its numerous cultural institutions, many of which enjoy an international reputation. The diversity and vivacity of the metropolis have fostered a trendsetting atmosphere, and an innovative music, dance and art scene has developed in the 21st century.
Young people, international artists and entrepreneurs continue to settle in the city, making Berlin a popular entertainment center worldwide.
The city's expanding cultural significance was underscored by Universal Music Group's decision to move its headquarters to the banks of the River Spree. In 2005, Berlin was named "City of Design" by UNESCO and has been part of the Creative Cities Network ever since.
Many German and international films were shot in Berlin, including M, One, Two, Three, Cabaret, Christiane F., Possession, Octopussy, Wings of Desire, Run Lola Run, The Bourne Trilogy, Good Bye, Lenin!, The Lives of Others, Inglourious Basterds, Hanna, Unknown and Bridge of Spies.
=== Galleries and museums ===
As of 2011 Berlin is home to 138 museums and more than 400 art galleries. The ensemble on the Museum Island is a UNESCO World Heritage Site and is in the northern part of the Spree Island between the Spree and the Kupfergraben. As early as 1841 it was designated a "district dedicated to art and antiquities" by a royal decree. Subsequently, the Altes Museum was built in the Lustgarten. The Neues Museum, which displays the bust of Queen Nefertiti, Alte Nationalgalerie, Pergamon Museum, and Bode Museum were built there.
Apart from the Museum Island, there are many additional museums in the city. The Gemäldegalerie (Painting Gallery) focuses on the paintings of the "old masters" from the 13th to the 18th centuries, while the Neue Nationalgalerie (New National Gallery, built by Ludwig Mies van der Rohe) specializes in 20th-century European painting. The Hamburger Bahnhof, in Moabit, exhibits a major collection of modern and contemporary art. The expanded Deutsches Historisches Museum reopened in the Zeughaus with an overview of German history spanning more than a millennium. The Bauhaus Archive is a museum of 20th-century design from the famous Bauhaus school. Museum Berggruen houses the collection of noted 20th century collector Heinz Berggruen, and features an extensive assortment of works by Picasso, Matisse, Cézanne, and Giacometti, among others. The Kupferstichkabinett Berlin (Museum of Prints and Drawings) is part of the Staatlichen Museen zu Berlin (Berlin State Museums) and the Kulturforum at Potsdamer Platz in the Tiergarten district of Berlin's Mitte district. It is the largest museum of the graphic arts in Germany and at the same time one of the four most important collections of its kind in the world. The collection includes Friedrich Gilly's design for the monument to Frederick II of Prussia.
The Jewish Museum has a standing exhibition on two millennia of German-Jewish history. The German Museum of Technology in Kreuzberg has a large collection of historical technical artifacts. The Museum für Naturkunde (Berlin's natural history museum), near Berlin Hauptbahnhof, has the largest mounted dinosaur in the world (a Giraffatitan skeleton). A well-preserved specimen of Tyrannosaurus rex and the early bird Archaeopteryx are on display as well.
In Dahlem, there are several museums of world art and culture, such as the Museum of Asian Art, the Ethnological Museum, the Museum of European Cultures, as well as the Allied Museum. The Brücke Museum features one of the largest collections of works by artists of the early 20th-century expressionist movement. In Lichtenberg, on the grounds of the former East German Ministry for State Security, is the Stasi Museum. The site of Checkpoint Charlie, one of the most renowned crossing points of the Berlin Wall, is still preserved. A private museum venture exhibits comprehensive documentation of detailed plans and strategies devised by people who tried to flee from the East.
The Beate Uhse Erotic Museum claimed to be the largest erotic museum in the world until it closed in 2014.
The cityscape of Berlin displays large quantities of urban street art. It has become a significant part of the city's cultural heritage and has its roots in the graffiti scene of Kreuzberg of the 1980s. The Berlin Wall itself has become one of the largest open-air canvasses in the world. The leftover stretch along the Spree river in Friedrichshain remains as the East Side Gallery. Berlin today is consistently rated as an important world city for street art culture.
Berlin has a rich contemporary art gallery scene. Mitte is home to the KW Institute for Contemporary Art, KOW, and Sprüth Magers, while Kreuzberg hosts galleries such as Blain Southern, Esther Schipper, Future Gallery, and König Galerie.
=== Nightlife and festivals ===
Berlin's nightlife has been celebrated as one of the most diverse and vibrant of its kind. In the 1970s and 80s, the SO36 in Kreuzberg was a center for punk music and culture. The SOUND and the Dschungel gained notoriety. Throughout the 1990s, people in their 20s from all over the world, particularly those in Western and Central Europe, made Berlin's club scene a premier nightlife venue. After the fall of the Berlin Wall in 1989, many historic buildings in Mitte, the former city center of East Berlin, were illegally occupied and re-built by young squatters and became a fertile ground for underground and counterculture gatherings. The central boroughs are home to many nightclubs, including the Tresor and the Berghain. The KitKatClub and several other locations are known for their sexually uninhibited parties.
Clubs are not required to close at a fixed time during the weekends, and many parties last well into the morning or even all weekend. Several venues have become a popular stage for the Neo-Burlesque scene.
Berlin has a long history of gay culture, and is an important birthplace of the LGBT rights movement. Same-sex bars and dance halls operated freely as early as the 1880s, and the first gay magazine, Der Eigene, started in 1896. By the 1920s, gays and lesbians had an unprecedented visibility. Today, in addition to a positive atmosphere in the wider club scene, the city again has a huge number of queer clubs and festivals. The most famous and largest are Berlin Pride, the Christopher Street Day, the Lesbian and Gay City Festival in Berlin-Schöneberg, and the Kreuzberg Pride.
The annual Berlin International Film Festival (Berlinale) with around 500,000 admissions is considered to be the largest publicly attended film festival in the world. The Karneval der Kulturen (Carnival of Cultures), a multi-ethnic street parade, is celebrated every Pentecost weekend. Berlin is also well known for the cultural festival Berliner Festspiele, which includes the jazz festival JazzFest Berlin, and Young Euro Classic, the largest international festival of youth orchestras in the world. Several technology and media art festivals and conferences are held in the city, including Transmediale and Chaos Communication Congress. The annual Berlin Festival focuses on indie rock, electronic music and synthpop and is part of the International Berlin Music Week. Every year Berlin hosts one of the largest New Year's Eve celebrations in the world, attended by well over a million people. The focal point is the Brandenburg Gate, where midnight fireworks are centered, but various private fireworks displays take place throughout the entire city. Partygoers in Germany often toast the New Year with a glass of sparkling wine.
=== Performing arts ===
Berlin is home to 44 theaters and stages. The Deutsches Theater in Mitte was built in 1849–50 and has operated almost continuously since then. The Volksbühne at Rosa-Luxemburg-Platz was built in 1913–14, though the company had been founded in 1890. The Berliner Ensemble, famous for performing the works of Bertolt Brecht, was established in 1949. The Schaubühne was founded in 1962 and moved to the building of the former Universum Cinema on Kurfürstendamm in 1981. With a seating capacity of 1,895 and a stage floor of 2,854 square meters (30,720 sq ft), the Friedrichstadt-Palast in Berlin Mitte is the largest show palace in Europe. For Berlin's independent dance and theatre scene, venues such as the Sophiensäle in Mitte and the three houses of the Hebbel am Ufer (HAU) in Kreuzberg are important. Most productions there are also accessible to an English-speaking audience. Some of the dance and theatre groups that also work internationally (Gob Squad, Rimini Protokoll) are based there, as well as festivals such as the international festival Dance in August.
Berlin has three major opera houses: the Deutsche Oper, the Berlin State Opera, and the Komische Oper. The Berlin State Opera on Unter den Linden opened in 1742 and is the oldest of the three. Its musical director is Daniel Barenboim. The Komische Oper has traditionally specialized in operettas and is also at Unter den Linden. The Deutsche Oper opened in 1912 in Charlottenburg.
The city's main venues for musical theater performances are the Theater am Potsdamer Platz and the Theater des Westens (built in 1895). Contemporary dance can be seen at the Radialsystem V. The Tempodrom hosts concerts and circus-inspired entertainment, and also houses a multi-sensory spa experience. The Admiralspalast in Mitte has a vibrant program of variety and music events.
There are seven symphony orchestras in Berlin. The Berlin Philharmonic Orchestra is one of the preeminent orchestras in the world; it is housed in the Berliner Philharmonie near Potsdamer Platz on a street named for the orchestra's longest-serving conductor, Herbert von Karajan. Simon Rattle was its principal conductor from 1999 to 2018, a position now held by Kirill Petrenko. The Konzerthausorchester Berlin was founded in 1952 as the orchestra for East Berlin. Christoph Eschenbach is its principal conductor. The Haus der Kulturen der Welt presents exhibitions dealing with intercultural issues and stages world music and conferences. The Kookaburra and the Quatsch Comedy Club are known for satire and comedy shows. In 2018, the New York Times described Berlin as "arguably the world capital of underground electronic music".
=== Cuisine ===
The cuisine and culinary offerings of Berlin vary greatly. 23 restaurants in Berlin have been awarded one or more Michelin stars in the Michelin Guide of 2021, which ranks the city at the top for the number of restaurants having this distinction in Germany. Berlin is well known for its offerings of vegetarian and vegan cuisine and is home to an innovative entrepreneurial food scene promoting cosmopolitan flavors, local and sustainable ingredients, pop-up street food markets, supper clubs, as well as food festivals, such as Berlin Food Week.
Many local foods originated from north German culinary traditions and include rustic and hearty dishes with pork, goose, fish, peas, beans, cucumbers, or potatoes. Typical Berliner fare includes popular street food like the Currywurst (which gained popularity with postwar construction workers rebuilding the city), Buletten, and the Berliner donut, known in Berlin as Pfannkuchen (German: [ˈp͡fanˌkuːxn̩]). German bakeries offering a variety of breads and pastries are widespread. One of Europe's largest delicatessen markets is found at the KaDeWe, and among the world's largest chocolate stores is Rausch.
Berlin is also home to a diverse gastronomy scene reflecting the immigrant history of the city. Turkish and Arab immigrants brought their culinary traditions to the city, such as lahmajoun and falafel, which have become common fast-food staples. The modern fast-food version of the doner kebab sandwich, which evolved in Berlin in the 1970s, has since become a favorite dish in Germany and elsewhere in the world. Chinese, Vietnamese, Thai, Indian, Korean, and Japanese restaurants, as well as Spanish tapas bars and Italian and Greek cuisine, can be found in many parts of the city.
=== Recreation ===
Zoologischer Garten Berlin, the older of two zoos in the city, was founded in 1844. It is the most visited zoo in Europe and presents the most diverse range of species in the world. It was the home of the captive-born celebrity polar bear Knut. The city's other zoo, Tierpark Friedrichsfelde, was founded in 1955.
Berlin's Botanischer Garten includes the Botanic Museum Berlin. With an area of 43 hectares (110 acres) and around 22,000 different plant species, it is one of the largest and most diverse collections of botanical life in the world. Other gardens in the city include the Britzer Garten, and the Gärten der Welt (Gardens of the World) in Marzahn.
The Tiergarten park in Mitte, with landscape design by Peter Joseph Lenné, is one of Berlin's largest and most popular parks. In Kreuzberg, the Viktoriapark provides a viewing point over the southern part of inner-city Berlin. Treptower Park, beside the Spree in Treptow, features a large Soviet War Memorial. The Volkspark in Friedrichshain, which opened in 1848, is the oldest park in the city, with monuments, a summer outdoor cinema and several sports areas. Tempelhofer Feld, the site of the former city airport, is the world's largest inner-city open space.
Potsdam lies on the southwestern periphery of Berlin. The city was a residence of the Prussian kings and the German Kaiser until 1918. The area around Potsdam, in particular Sanssouci, is known for a series of interconnected lakes and cultural landmarks. The Palaces and Parks of Potsdam and Berlin together form the largest World Heritage Site in Germany.
Berlin is also well known for its numerous cafés, street musicians, beach bars along the Spree River, flea markets, boutique shops and pop-up stores, which are a source for recreation and leisure.
== Sports ==
Berlin has established a high profile as a host city of major international sporting events. The city hosted the 1936 Summer Olympics and the 2006 FIFA World Cup final. The World Athletics Championships were held at the Olympiastadion in 2009 and 2025. The city hosted the Euroleague Final Four basketball competition in 2009 and 2016, was one of the hosts of FIBA EuroBasket 2015, and in 2015 was the venue for the UEFA Champions League Final. The city bid to host the 2000 Summer Olympics but lost to Sydney.
Berlin hosted the 2023 Special Olympics World Summer Games, the first time Germany had hosted the Special Olympics World Games.
The annual Berlin Marathon – a course on which many of the fastest marathon times in history have been run – and the ISTAF athletics meeting are well-established sporting events in the city. The Mellowpark in Köpenick is one of the biggest skate and BMX parks in Europe. A fan fest at the Brandenburg Gate, which attracts several hundred thousand spectators, has become popular during international football competitions, such as the UEFA European Championship.
Friedrich Ludwig Jahn, who is often hailed as the "father of modern gymnastics", invented the horizontal bar, parallel bars, rings, and the vault around 1811 in Berlin. Jahn's Turners movement, first realized at Volkspark Hasenheide, was the origin of modern sports clubs. In 2013, around 600,000 Berliners were registered in one of the more than 2,300 sport and fitness clubs. The city of Berlin operates more than 60 public indoor and outdoor swimming pools. Berlin is the largest Olympic training center in Germany, with around 500 top athletes (15% of all German top athletes) based there. Forty-seven elite athletes from Berlin participated in the 2012 Summer Olympics, achieving seven gold, twelve silver, and three bronze medals.
Several professional clubs representing the most important spectator team sports in Germany are based in Berlin. The oldest and most popular first-division team is the football club Hertha BSC, which represented Berlin as a founding member of the Bundesliga in 1963. Other professional team sport clubs are based in the city as well.
== See also ==
List of fiction set in Berlin
List of honorary citizens of Berlin
List of people from Berlin
List of songs about Berlin
== References ==
=== Citations ===
=== Sources ===
== External links ==
berlin.de – official website
Geographic data related to Berlin at OpenStreetMap |
Bibcode (identifier) | The bibcode (also known as the refcode) is a compact identifier used by several astronomical data systems to uniquely specify literature references.
== Adoption ==
The Bibliographic Reference Code (refcode) was originally developed to be used in SIMBAD and the NASA/IPAC Extragalactic Database (NED), but it became a de facto standard and is now used more widely, for example, by the NASA Astrophysics Data System, which coined and prefers the term "bibcode".
== Format ==
The code has a fixed length of 19 characters and has the form
YYYYJJJJJVVVVMPPPPA
where YYYY is the four-digit year of the reference and JJJJJ is a code indicating where the reference was published. In the case of a journal reference, VVVV is the volume number, M indicates the section of the journal where the reference was published (e.g., L for a letters section), PPPP gives the starting page number, and A is the first letter of the last name of the first author. Periods (.) are used to fill unused fields and to pad fields out to their fixed length if too short; padding is done on the right for the publication code and on the left for the volume number and page number. Page numbers greater than 9999 are continued in the M column. The 6-digit article ID numbers (in lieu of page numbers) used by the Physical Review publications since the late 1990s are treated as follows: The first two digits of the article ID, corresponding to the issue number, are converted to a lower-case letter (01 = a, etc.) and inserted into column M. The remaining four digits are used in the page field.
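The padding rules above can be sketched with a small helper function (a hypothetical illustration of the layout, not part of any official bibcode library):

```python
def make_bibcode(year, journal, volume, section, page, author_initial):
    """Assemble a 19-character bibcode; unused positions are filled with periods."""
    code = (
        f"{year:04d}"               # YYYY: four-digit year
        + journal.ljust(5, ".")     # JJJJJ: publication code, padded on the right
        + str(volume).rjust(4, ".") # VVVV: volume number, padded on the left
        + (section or ".")          # M: section letter, e.g. "L" for a letters section
        + str(page).rjust(4, ".")   # PPPP: starting page, padded on the left
        + author_initial            # A: first letter of the first author's surname
    )
    assert len(code) == 19, "a bibcode always has exactly 19 characters"
    return code

# A 1924 paper in volume 84 of MNRAS starting on page 308, first author "E...":
print(make_bibcode(1924, "MNRAS", 84, "", 308, "E"))  # 1924MNRAS..84..308E
```

Note how the journal code "MNRAS" already fills its five-character field, while the volume and page numbers are left-padded with periods.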
== Examples ==
An example bibcode is 1924MNRAS..84..308E, denoting a 1924 paper in volume 84 of Monthly Notices of the Royal Astronomical Society (MNRAS), starting on page 308, with a first author whose surname begins with E.
== See also ==
Digital object identifier
== References == |
Bicycle | A bicycle, also called a pedal cycle, bike, push-bike or cycle, is a human-powered or motor-assisted, pedal-driven, single-track vehicle, with two wheels attached to a frame, one behind the other. A bicycle rider is called a cyclist, or bicyclist.
Bicycles were introduced in the 19th century in Europe. By the early 21st century there were more than 1 billion bicycles. There are many more bicycles than cars. Bicycles are the principal means of transport in many regions. They also provide a popular form of recreation, and have been adapted for use as children's toys. Bicycles are used for fitness, military and police applications, courier services, bicycle racing, and artistic cycling.
The basic shape and configuration of a typical upright or "safety" bicycle, has changed little since the first chain-driven model was developed around 1885. However, many details have been improved, especially since the advent of modern materials and computer-aided design. These have allowed for a proliferation of specialized designs for many types of cycling. In the 21st century, electric bicycles have become popular.
The bicycle's invention has had an enormous effect on society, both in terms of culture and of advancing modern industrial methods. Several components that played a key role in the development of the automobile were initially invented for use in the bicycle, including ball bearings, pneumatic tires, chain-driven sprockets, and tension-spoked wheels.
== Etymology ==
The word bicycle first appeared in English print in The Daily News in 1868, to describe "Bysicles and trysicles" on the "Champs Elysées and Bois de Boulogne". The word was first used in 1847 in a French publication to describe an unidentified two-wheeled vehicle, possibly a carriage. The design of the bicycle was an advance on the velocipede, although the words were used with some degree of overlap for a time.
Other words for bicycle include "bike", "pushbike", "pedal cycle", or "cycle". In Unicode, the code point for "bicycle" is U+1F6B2. The HTML entity &#x1F6B2; produces 🚲.
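As a quick illustration, the character can be produced directly from its numeric code point, for example in Python:

```python
# Build the "bicycle" character (U+1F6B2) from its code point.
bike = chr(0x1F6B2)
print(bike)                  # 🚲
print(hex(ord(bike)))        # 0x1f6b2
print(bike.encode("utf-8"))  # b'\xf0\x9f\x9a\xb2' (its four-byte UTF-8 encoding)
```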
Although bike and cycle are used interchangeably to refer mostly to two types of two-wheelers, the terms still vary across the world. In India, for example, a cycle refers only to a pedal-powered two-wheeler, whereas the term bike describes a two-wheeler driven by an internal combustion engine or electric motor, i.e. a motorcycle or motorbike.
== History ==
The "dandy horse", also called Draisienne or Laufmaschine ("running machine"), was the first human means of transport to use only two wheels in tandem and was invented by the German Baron Karl von Drais. It is regarded as the first bicycle and von Drais is seen as the "father of the bicycle", but it did not have pedals. Von Drais introduced it to the public in Mannheim in 1817 and in Paris in 1818. Its rider sat astride a wooden frame supported by two in-line wheels and pushed the vehicle along with his or her feet while steering the front wheel.
The first mechanically propelled, two-wheeled vehicle may have been built by Kirkpatrick MacMillan, a Scottish blacksmith, in 1839, although the claim is often disputed. He is also associated with the first recorded instance of a cycling traffic offense, when a Glasgow newspaper in 1842 reported an accident in which an anonymous "gentleman from Dumfries-shire... bestride a velocipede... of ingenious design" knocked over a little girl in Glasgow and was fined five shillings (equivalent to £30 in 2023).
In the early 1860s, Frenchmen Pierre Michaux and Pierre Lallement took bicycle design in a new direction by adding a mechanical crank drive with pedals on an enlarged front wheel (the velocipede). This design was the first bicycle to be mass-produced. Another French inventor named Douglas Grasso had a failed prototype of Pierre Lallement's bicycle several years earlier. Several inventions followed using rear-wheel drive, the best known being the rod-driven velocipede by Scotsman Thomas McCall in 1869. In that same year, bicycle wheels with wire spokes were patented by Eugène Meyer of Paris. The French vélocipède, made of iron and wood, developed into the "penny-farthing" (historically known as an "ordinary bicycle", a retronym, since there was then no other kind). It featured a tubular steel frame on which were mounted wire-spoked wheels with solid rubber tires. These bicycles were difficult to ride due to their high seat and poor weight distribution. In 1868 Rowley Turner, a sales agent of the Coventry Sewing Machine Company (which soon became the Coventry Machinists Company), brought a Michaux cycle to Coventry, England. His uncle, Josiah Turner, and business partner James Starley, used this as a basis for the 'Coventry Model' in what became Britain's first cycle factory.
The dwarf ordinary addressed some of these faults by reducing the front wheel diameter and setting the seat further back. This, in turn, required gearing—effected in a variety of ways—to efficiently use pedal power. Having to both pedal and steer via the front wheel remained a problem. Englishman J.K. Starley (nephew of James Starley), J.H. Lawson, and Shergold solved this problem by introducing the chain drive (originated by the unsuccessful "bicyclette" of Englishman Henry Lawson), connecting the frame-mounted cranks to the rear wheel. These models were known as safety bicycles, dwarf safeties, or upright bicycles for their lower seat height and better weight distribution, although without pneumatic tires the ride of the smaller-wheeled bicycle would be much rougher than that of the larger-wheeled variety. Starley's 1885 Rover, manufactured in Coventry, is usually described as the first recognizably modern bicycle. Soon the seat tube was added, which created the modern bike's double-triangle diamond frame.
Further innovations increased comfort and ushered in a second bicycle craze, the 1890s Golden Age of Bicycles. In 1888, Scotsman John Boyd Dunlop introduced the first practical pneumatic tire, which soon became universal. Willie Hume demonstrated the supremacy of Dunlop's tires in 1889, winning the first-ever races held on them in Ireland and then England. Soon after, the rear freewheel was developed, enabling the rider to coast. This refinement led to the 1890s invention of coaster brakes. Dérailleur gears and hand-operated Bowden cable-pull brakes were also developed during these years, but were only slowly adopted by casual riders.
The Svea Velocipede, with a vertical pedal arrangement and locking hubs, was introduced in 1892 by the Swedish engineers Fredrik Ljungström and Birger Ljungström. It attracted attention at the World's Fair and was produced in a few thousand units.
In the 1870s many cycling clubs flourished. They were popular in a time when there were no cars on the market and the principal mode of transportation was horse-drawn vehicles. Among the earliest clubs was The Bicycle Touring Club, which has operated since 1878. By the turn of the century, cycling clubs flourished on both sides of the Atlantic, and touring and racing became widely popular. The Raleigh Bicycle Company was founded in Nottingham, England in 1888. It became the biggest bicycle manufacturing company in the world, making over two million bikes per year.
Bicycles and horse buggies were the two mainstays of private transportation just prior to the automobile, and the grading of smooth roads in the late 19th century was stimulated by the widespread advertising, production, and use of these devices. More than 1 billion bicycles have been manufactured worldwide as of the early 21st century. Bicycles are the most common vehicle of any kind in the world, and the most numerous model of any kind of vehicle, whether human-powered or motor vehicle, is the Chinese Flying Pigeon, with numbers exceeding 500 million. The next most numerous vehicle, the Honda Super Cub motorcycle, has had more than 100 million units made, while the most-produced car, the Toyota Corolla, has reached 44 million and counting.
== Uses ==
Bicycles are used for transportation, bicycle commuting, and utility cycling. They are also used professionally by mail carriers, paramedics, police, messengers, and general delivery services. Military uses of bicycles include communications, reconnaissance, troop movement, supply of provisions, and patrol, such as in bicycle infantries.
They are also used for recreational purposes, including bicycle touring, mountain biking, physical fitness, and play. Bicycle sports include racing, BMX racing, track racing, criterium, roller racing, sportives and time trials. Major multi-stage professional events are the Giro d'Italia, the Tour de France, the Vuelta a España, the Tour de Pologne, and the Volta a Portugal. They are also used for entertainment and pleasure in other ways, such as in organised mass rides, artistic cycling and freestyle BMX.
== Technical aspects ==
The bicycle has undergone continual adaptation and improvement since its inception. These innovations have continued with the advent of modern materials and computer-aided design, allowing for a proliferation of specialized bicycle types, improved bicycle safety, and riding comfort.
=== Types ===
Bicycles can be categorized in many different ways: by function, by number of riders, by general construction, by gearing or by means of propulsion. The more common types include utility bicycles, mountain bicycles, racing bicycles, touring bicycles, hybrid bicycles, cruiser bicycles, and BMX bikes. Less common are tandems, low riders, tall bikes, fixed gear, folding models, amphibious bicycles, cargo bikes, recumbents and electric bicycles.
Unicycles, tricycles and quadracycles are not strictly bicycles, as they have respectively one, three and four wheels, but are often referred to informally as "bikes" or "cycles".
=== Dynamics ===
A bicycle stays upright while moving forward by being steered so as to keep its center of mass over the wheels. This steering is usually provided by the rider, but under certain conditions may be provided by the bicycle itself.
The combined center of mass of a bicycle and its rider must lean into a turn to successfully navigate it. This lean is induced by a method known as countersteering, which can be performed by the rider turning the handlebars directly with the hands or indirectly by leaning the bicycle.
Short-wheelbase or tall bicycles, when braking, can generate enough stopping force at the front wheel to flip longitudinally. The act of purposefully using this force to lift the rear wheel and balance on the front without tipping over is a trick known as a stoppie, endo, or front wheelie.
=== Performance ===
The bicycle is extraordinarily efficient in both biological and mechanical terms. The bicycle is the most efficient human-powered means of transportation in terms of energy a person must expend to travel a given distance. From a mechanical viewpoint, up to 99% of the energy delivered by the rider into the pedals is transmitted to the wheels, although the use of gearing mechanisms may reduce this by 10–15%. In terms of the ratio of cargo weight a bicycle can carry to total weight, it is also an efficient means of cargo transportation.
A human traveling on a bicycle at low to medium speeds of around 16–24 km/h (10–15 mph) uses only the power required to walk. Air drag, which is proportional to the square of speed, requires dramatically higher power outputs as speeds increase. If the rider is sitting upright, the rider's body creates about 75% of the total drag of the bicycle/rider combination. Drag can be reduced by seating the rider in a more aerodynamically streamlined position. Drag can also be reduced by covering the bicycle with an aerodynamic fairing. The fastest recorded unpaced speed on a flat surface is 144.18 km/h (89.59 mph).
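Because drag force grows with the square of speed, the power needed to overcome it grows with the cube, which is why small speed gains demand disproportionately large efforts. A minimal sketch of this scaling, using assumed round-number values for air density and the rider's drag area rather than measured data:

```python
# Illustrative sketch: aerodynamic drag force ~ v^2, and power = force * speed,
# so drag power ~ v^3. The constants below are assumed round numbers for an
# upright rider, not measurements of any particular bicycle.

RHO = 1.2   # air density in kg/m^3 (sea level, approximate)
CDA = 0.5   # drag coefficient times frontal area in m^2 (assumed)

def drag_power(speed_ms):
    """Watts needed to overcome aerodynamic drag at a given speed (m/s)."""
    drag_force = 0.5 * RHO * CDA * speed_ms ** 2  # newtons
    return drag_force * speed_ms                  # watts

for kmh in (16, 24, 32):
    v = kmh / 3.6  # convert km/h to m/s
    print(f"{kmh} km/h: {drag_power(v):.0f} W")
```

Under these assumptions, doubling speed multiplies the drag power by eight, which is why streamlined riding positions and fairings pay off so strongly at high speed.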
In addition, the carbon dioxide generated in the production and transportation of the food required by the bicyclist, per mile traveled, is less than 1⁄10 that generated by energy efficient motorcars.
== Parts ==
=== Frame ===
The great majority of modern bicycles have a frame with upright seating that looks much like the first chain-driven bike. These upright bicycles almost always feature the diamond frame, a truss consisting of two triangles: the front triangle and the rear triangle. The front triangle consists of the head tube, top tube, down tube, and seat tube. The head tube contains the headset, the set of bearings that allows the fork to turn smoothly for steering and balance. The top tube connects the head tube to the seat tube at the top, and the down tube connects the head tube to the bottom bracket. The rear triangle consists of the seat tube and paired chain stays and seat stays. The chain stays run parallel to the chain, connecting the bottom bracket to the rear dropout, where the axle for the rear wheel is held. The seat stays connect the top of the seat tube (at or near the same point as the top tube) to the rear fork ends.
Historically, women's bicycle frames had a top tube that connected in the middle of the seat tube instead of the top, resulting in a lower standover height at the expense of compromised structural integrity, since this places a strong bending load in the seat tube, and bicycle frame members are typically weak in bending. This design, referred to as a step-through frame or as an open frame, allows the rider to mount and dismount in a dignified way while wearing a skirt or dress. While some women's bicycles continue to use this frame style, there is also a variation, the mixte, which splits the top tube laterally into two thinner top tubes that bypass the seat tube on each side and connect to the rear fork ends. The ease of stepping through is also appreciated by those with limited flexibility or other joint problems. Because of their persistent image as "women's" bicycles, step-through frames are not common in larger sizes.
Step-throughs were popular partly for practical reasons and partly for social mores of the day. For most of the history of bicycles' popularity women have worn long skirts, and the lower frame accommodated these better than the top-tube. Furthermore, it was considered "unladylike" for women to open their legs to mount and dismount—in more conservative times women who rode bicycles at all were vilified as immoral or immodest. These practices were akin to the older practice of riding horse sidesaddle.
Another style is the recumbent bicycle. These are inherently more aerodynamic than upright versions, as the rider may lean back onto a support and operate pedals that are on about the same level as the seat. The world's fastest bicycle is a recumbent bicycle, but this type was banned from competition in 1934 by the Union Cycliste Internationale.
Historically, materials used in bicycles have followed a similar pattern as in aircraft, the goal being high strength and low weight. Since the late 1930s alloy steels have been used for frame and fork tubes in higher quality machines. By the 1980s aluminum welding techniques had improved to the point that aluminum tube could safely be used in place of steel. Since then aluminum alloy frames and other components have become popular due to their light weight, and most mid-range bikes are now principally aluminum alloy of some kind. More expensive bikes use carbon fibre due to its significantly lighter weight and profiling ability, allowing designers to make a bike both stiff and compliant by manipulating the lay-up. Virtually all professional racing bicycles now use carbon fibre frames, as they have the best strength to weight ratio. A typical modern carbon fiber frame can weigh less than 1 kilogram (2.2 lb).
Other exotic frame materials include titanium and advanced alloys. Bamboo, a natural composite material with high strength-to-weight ratio and stiffness, has been used for bicycles since 1894. Recent versions use bamboo for the primary frame with glued metal connections and parts, priced as exotic models.
=== Drivetrain and gearing ===
The drivetrain begins with the pedals, which rotate the cranks, which turn on the bottom bracket axle. Most bicycles use a chain to transmit power to the rear wheel. A very small number of bicycles use a shaft drive to transmit power, or special belts. Hydraulic bicycle transmissions have been built, but they are currently inefficient and complex.
Since cyclists' legs are most efficient over a narrow range of pedaling speeds, or cadence, a variable gear ratio helps a cyclist to maintain an optimum pedalling speed while covering varied terrain. Some, mainly utility, bicycles use hub gears with between 3 and 14 ratios, but most use the generally more efficient dérailleur system, by which the chain is moved between different cogs called chainrings and sprockets to select a ratio. A dérailleur system normally has two dérailleurs, or mechs, one at the front to select the chainring and another at the back to select the sprocket. Most bikes have two or three chainrings, and from 5 to 12 sprockets on the back, with the number of theoretical gears calculated by multiplying front by back. In reality, many gears overlap or require the chain to run diagonally, so the number of usable gears is fewer.
An alternative to chaindrive is to use a synchronous belt. These are toothed and work much the same as a chain—popular with commuters and long distance cyclists they require little maintenance. They cannot be shifted across a cassette of sprockets, and are used either as single speed or with a hub gear.
Different gears and ranges of gears are appropriate for different people and styles of cycling. Multi-speed bicycles allow gear selection to suit the circumstances: a cyclist could use a high gear when cycling downhill, a medium gear when cycling on a flat road, and a low gear when cycling uphill. In a lower gear every turn of the pedals leads to fewer rotations of the rear wheel. This allows the energy required to move the same distance to be distributed over more pedal turns, reducing fatigue when riding uphill, with a heavy load, or against strong winds. A higher gear allows a cyclist to make fewer pedal turns to maintain a given speed, but with more effort per turn of the pedals.
With a chain drive transmission, a chainring attached to a crank drives the chain, which in turn rotates the rear wheel via the rear sprocket(s) (cassette or freewheel). There are four gearing options: a two-speed hub gear integrated with the chainring; up to three chainrings; up to 12 rear sprockets; and a hub gear built into the rear wheel (3-speed to 14-speed). The most common options are either a rear hub gear or multiple chainrings combined with multiple sprockets (other combinations are possible but less common).
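The theoretical gear count and the overlap between ratios described above can be illustrated with a short sketch. The tooth counts below are hypothetical examples of a 2x10 drivetrain, not any specific product:

```python
# Sketch of derailleur gearing: gear ratio = chainring teeth / sprocket teeth.
# Tooth counts are hypothetical examples of a compact 2x10 road drivetrain.
chainrings = [34, 50]
sprockets = [11, 12, 13, 14, 16, 18, 20, 22, 25, 28]

# Theoretical gears = chainrings x sprockets; collect all ratios, sorted.
ratios = sorted(
    (front / rear, front, rear)
    for front in chainrings
    for rear in sprockets
)

print(f"theoretical gears: {len(ratios)}")
print(f"ratio range: {ratios[0][0]:.2f} to {ratios[-1][0]:.2f}")

# Many combinations nearly duplicate one another, so usable gears are fewer:
# count ratios that differ from the next-lower one by more than ~5%.
distinct = 1
for (r_prev, *_), (r, *_) in zip(ratios, ratios[1:]):
    if r / r_prev > 1.05:
        distinct += 1
print(f"distinct ratios (more than 5% apart): {distinct}")
```

With these assumed tooth counts, the 20 theoretical gears collapse to 16 meaningfully distinct ratios, showing why the usable gear count is lower than the simple front-times-back multiplication suggests.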
=== Steering ===
The handlebars connect to the stem, which connects to the fork, which connects to the front wheel, and the whole assembly connects to the bike and rotates about the steering axis via the headset bearings. Three styles of handlebar are common. Upright handlebars, the norm in Europe and elsewhere until the 1970s, curve gently back toward the rider, offering a natural grip and comfortable upright position. Drop handlebars "drop" as they curve forward and down, offering the cyclist best braking power from a more aerodynamic "crouched" position, as well as more upright positions in which the hands grip the brake lever mounts, the forward curves, or the upper flat sections for increasingly upright postures. Mountain bikes generally feature a 'straight handlebar' or 'riser bar' with varying degrees of backward sweep and upward rise, as well as wider widths which can provide better handling due to increased leverage against the wheel.
=== Seating ===
Saddles also vary with rider preference, from the cushioned ones favored by short-distance riders to narrower saddles which allow more room for leg swings. Comfort depends on riding position. With comfort bikes and hybrids, cyclists sit high over the seat, their weight directed down onto the saddle, such that a wider and more cushioned saddle is preferable. For racing bikes where the rider is bent over, weight is more evenly distributed between the handlebars and saddle, the hips are flexed, and a narrower and harder saddle is more efficient. Differing saddle designs exist for male and female cyclists, accommodating the genders' differing anatomies and sit bone width measurements, although bikes typically are sold with saddles most appropriate for men. Suspension seat posts and seat springs provide comfort by absorbing shock but can add to the overall weight of the bicycle.
A recumbent bicycle has a reclined chair-like seat that some riders find more comfortable than a saddle, especially riders who suffer from certain types of seat, back, neck, shoulder, or wrist pain. Recumbent bicycles may have either under-seat or over-seat steering.
=== Brakes ===
Bicycle brakes may be rim brakes, in which friction pads are compressed against the wheel rims; hub brakes, in which the mechanism is contained within the wheel hub; or disc brakes, in which pads act on a rotor attached to the hub. Most road bicycles use rim brakes, but some use disc brakes. Disc brakes are more common for mountain bikes, tandems and recumbent bicycles than on other types of bicycles, due to their increased power, coupled with an increased weight and complexity.
With hand-operated brakes, force is applied to brake levers mounted on the handlebars and transmitted via Bowden cables or hydraulic lines to the friction pads, which apply pressure to the braking surface, causing friction which slows the bicycle down. A rear hub brake may be either hand-operated or pedal-actuated, as in the back pedal coaster brakes which were popular in North America until the 1960s.
Track bicycles do not have brakes, because all riders ride in the same direction around a track which does not necessitate sharp deceleration. Track riders are still able to slow down because all track bicycles are fixed-gear, meaning that there is no freewheel. Without a freewheel, coasting is impossible, so when the rear wheel is moving, the cranks are moving. To slow down, the rider applies resistance to the pedals, acting as a braking system which can be as effective as a conventional rear wheel brake, but not as effective as a front wheel brake.
=== Suspension ===
Bicycle suspension refers to the system or systems used to suspend the rider and all or part of the bicycle. This serves two purposes: to keep the wheels in continuous contact with the ground, improving control, and to isolate the rider and luggage from jarring due to rough surfaces, improving comfort.
Bicycle suspensions are used primarily on mountain bicycles, but are also common on hybrid bicycles, as they can help deal with problematic vibration from poor surfaces. Suspension is especially important on recumbent bicycles, since while an upright bicycle rider can stand on the pedals to achieve some of the benefits of suspension, a recumbent rider cannot.
Basic mountain bicycles and hybrids usually have front suspension only, whilst more sophisticated ones also have rear suspension. Road bicycles tend to have no suspension.
=== Wheels and tires ===
The wheel axle fits into fork ends in the frame and fork. A pair of wheels may be called a wheelset, especially in the context of ready-built "off the shelf", performance-oriented wheels.
Tires vary enormously depending on their intended purpose. Road bicycles use tires 18 to 25 millimeters wide, most often completely smooth, or slick, and inflated to high pressure to roll fast on smooth surfaces. Off-road tires are usually between 38 and 64 mm (1.5 and 2.5 in) wide, and have treads for gripping in muddy conditions or metal studs for ice.
=== Groupset ===
Groupset generally refers to all of the components that make up a bicycle excluding the bicycle frame, fork, stem, wheels, tires, and rider contact points, such as the saddle and handlebars.
=== Accessories ===
Some components, which are often optional accessories on sports bicycles, are standard features on utility bicycles to enhance their usefulness, comfort, safety and visibility. Fenders with spoilers (mudflaps) protect the cyclist and moving parts from spray when riding through wet areas. In some countries (e.g. Germany, UK), fenders are called mudguards. The chainguards protect clothes from oil on the chain while preventing clothing from being caught between the chain and crankset teeth. Kick stands keep bicycles upright when parked, and bike locks deter theft. Front-mounted baskets, front or rear luggage carriers or racks, and panniers mounted above either or both wheels can be used to carry equipment or cargo. Pegs can be fastened to one, or both of the wheel hubs to either help the rider perform certain tricks, or allow a place for extra riders to stand, or rest. Parents sometimes add rear-mounted child seats, an auxiliary saddle fitted to the crossbar, or both to transport children. Bicycles can also be fitted with a hitch to tow a trailer for carrying cargo, a child, or both.
Toe-clips and toestraps and clipless pedals help keep the foot locked in the proper pedal position and enable cyclists to pull and push the pedals. Technical accessories include cyclocomputers for measuring speed, distance, heart rate, GPS data etc. Other accessories include lights, reflectors, mirrors, racks, trailers, bags, water bottles and cages, and bells. Bicycle lights, reflectors, and helmets are required by law in some geographic regions depending on the legal code. It is more common to see bicycles with bottle generators, dynamos, lights, fenders, racks and bells in Europe. Bicyclists also have specialized form-fitting and high-visibility clothing.
Children's bicycles may be outfitted with cosmetic enhancements such as bike horns, streamers, and spoke beads. Training wheels are sometimes used when learning to ride, but a dedicated balance bike teaches independent riding more effectively.
Bicycle helmets can reduce injury in the event of a collision or accident, and a suitable helmet is legally required of riders in many jurisdictions. Helmets may be classified as an accessory or as an item of clothing.
Bike trainers are used to enable cyclists to cycle while the bike remains stationary. They are frequently used to warm up before races or indoors when riding conditions are unfavorable.
=== Standards ===
A number of formal and industry standards exist for bicycle components to help make spare parts exchangeable and to maintain a minimum product safety.
The International Organization for Standardization (ISO) has a special technical committee for cycles, TC149, that has the scope of "Standardization in the field of cycles, their components and accessories with particular reference to terminology, testing methods and requirements for performance and safety, and interchangeability".
The European Committee for Standardization (CEN) also has a specific Technical Committee, TC333, that defines European standards for cycles. Their mandate states that EN cycle standards shall harmonize with ISO standards. Some CEN cycle standards were developed before ISO published their standards, leading to strong European influences in this area. European cycle standards tend to describe minimum safety requirements, while ISO standards have historically harmonized parts geometry.
== Maintenance and repair ==
Like all devices with mechanical moving parts, bicycles require a certain amount of regular maintenance and replacement of worn parts. A bicycle is relatively simple compared with a car, so some cyclists choose to do at least part of the maintenance themselves. Some components are easy to handle using relatively simple tools, while other components may require specialist manufacturer-dependent tools.
Many bicycle components are available at several different price/quality points; manufacturers generally try to keep all components on any particular bike at about the same quality level, though at the very cheap end of the market there may be some skimping on less obvious components (e.g. bottom bracket).
There are several hundred assisted-service Community Bicycle Organizations worldwide. At a Community Bicycle Organization, laypeople bring in bicycles needing repair or maintenance; volunteers teach them how to do the required steps.
Full service is available from bicycle mechanics at a local bike shop.
In areas where it is available, some cyclists purchase roadside assistance from companies such as the Better World Club or the American Automobile Association.
=== Maintenance ===
The most basic maintenance item is keeping the tires correctly inflated; this can make a noticeable difference as to how the bike feels to ride. Bicycle tires usually have a marking on the sidewall indicating the pressure appropriate for that tire. Bicycles use much higher pressures than cars: car tires are normally in the range of 30 to 40 pounds per square inch (210 to 280 kPa), whereas bicycle tires are normally in the range of 60 to 100 pounds per square inch (410 to 690 kPa).
Another basic maintenance item is regular lubrication of the chain and pivot points for derailleurs and brake components. Most of the bearings on a modern bike are sealed and grease-filled and require little or no attention; such bearings will usually last for 10,000 miles (16,000 km) or more. The crank bearings require periodic maintenance, which involves removing, cleaning and repacking with the correct grease.
The chain and the brake blocks are the components which wear out most quickly, so these need to be checked from time to time, typically every 500 miles (800 km) or so. Most local bike shops will do such checks for free. Note that when a chain becomes badly worn it will also wear out the rear cogs/cassette and eventually the chain ring(s), so replacing a chain when only moderately worn will prolong the life of other components.
Over the longer term, tires do wear out, after 2,000 to 5,000 miles (3,200 to 8,000 km); a rash of punctures is often the most visible sign of a worn tire.
=== Repair ===
Very few bicycle components can actually be repaired; replacement of the failing component is the normal practice.
The most common roadside problem is a puncture of the tire's inner tube. A patch kit may be employed to fix the puncture or the tube can be replaced, though the latter solution comes at a greater cost and waste of material. Some brands of tires are much more puncture-resistant than others, often incorporating one or more layers of Kevlar; the downside of such tires is that they may be heavier and/or more difficult to fit and remove.
=== Tools ===
There are specialized bicycle tools for use both in the shop and at the roadside. Many cyclists carry tool kits. These may include a tire patch kit (which, in turn, may contain any combination of a hand pump or CO2 pump, tire levers, spare tubes, self-adhesive patches, or tube-patching material, an adhesive, a piece of sandpaper or a metal grater (for roughening the tube surface to be patched) and sometimes even a block of French chalk), wrenches, hex keys, screwdrivers, and a chain tool. Special, thin wrenches are often required for maintaining various screw-fastened parts, specifically, the frequently lubricated ball-bearing "cones". There are also cycling-specific multi-tools that combine many of these implements into a single compact device. More specialized bicycle components may require more complex tools, including proprietary tools specific for a given manufacturer.
== Social and historical aspects ==
The bicycle has had a considerable effect on human society, in both the cultural and industrial realms.
=== In daily life ===
Around the turn of the 20th century, bicycles reduced crowding in inner-city tenements by allowing workers to commute from more spacious dwellings in the suburbs. They also reduced dependence on horses. Bicycles allowed people to travel for leisure into the country, since bicycles were three times as energy efficient as walking and three to four times as fast.
In built-up cities around the world, urban planning uses cycling infrastructure like bikeways to reduce traffic congestion and air pollution. A number of cities around the world have implemented schemes known as bicycle sharing systems or community bicycle programs. The first of these was the White Bicycle plan in Amsterdam in 1965. It was followed by yellow bicycles in La Rochelle and green bicycles in Cambridge. These initiatives complement public transport systems and offer an alternative to motorized traffic to help reduce congestion and pollution. In Europe, especially in the Netherlands and parts of Germany and Denmark, bicycle commuting is common. In Copenhagen, a cyclists' organization runs a Cycling Embassy that promotes biking for commuting and sightseeing. The United Kingdom has a tax break scheme (IR 176) that allows employees to buy a new bicycle tax free to use for commuting.
In the Netherlands all train stations offer free bicycle parking, or a more secure parking place for a small fee, with the larger stations also offering bicycle repair shops. Cycling is so popular that parking capacity may be exceeded; in some places, such as Delft, it usually is. In Trondheim, Norway, the Trampe bicycle lift has been developed to encourage cyclists by giving assistance on a steep hill. Buses in many cities have bicycle carriers mounted on the front.
There are towns in some countries where bicycle culture has been an integral part of the landscape for generations, even without much official support. That is the case in Ílhavo, Portugal.
In cities where bicycles are not integrated into the public transportation system, commuters often use bicycles as elements of a mixed-mode commute, where the bike is used to travel to and from train stations or other forms of rapid transit. Some students who commute several miles drive a car from home to a campus parking lot, then ride a bicycle to class. Folding bicycles are useful in these scenarios, as they are less cumbersome when carried aboard. Los Angeles removed a small amount of seating on some trains to make more room for bicycles and wheelchairs.
Some US companies, notably in the tech sector, are developing both innovative cycle designs and cycle-friendliness in the workplace. Foursquare, whose CEO Dennis Crowley "pedaled to pitch meetings ... [when he] was raising money from venture capitalists" on a two-wheeler, chose a new location for its New York headquarters "based on where biking would be easy". Parking in the office was also integral to HQ planning. Mitchell Moss, who runs the Rudin Center for Transportation Policy & Management at New York University, said in 2012: "Biking has become the mode of choice for the educated high tech worker".
Bicycles offer an important mode of transport in many developing countries. Until recently, bicycles were a staple of everyday life throughout Asian countries, serving as the most frequently used method of transport for commuting to work, school, shopping, and life in general. In Europe, bicycles are also commonly used. In addition, they offer a degree of exercise that helps keep individuals healthy.
Bicycles are also celebrated in the visual arts. An example of this is the Bicycle Film Festival, a film festival hosted all around the world.
=== Poverty alleviation ===
=== Female emancipation ===
The safety bicycle gave women unprecedented mobility, contributing to their emancipation in Western nations. As bicycles became safer and cheaper, more women had access to the personal freedom that bicycles embodied, and so the bicycle came to symbolize the New Woman of the late 19th century, especially in Britain and the United States. The bicycle craze in the 1890s also led to a movement for so-called rational dress, which helped liberate women from corsets and ankle-length skirts and other restrictive garments, substituting the then-shocking bloomers.
The bicycle was recognized by 19th-century feminists and suffragists as a "freedom machine" for women. American Susan B. Anthony said in a New York World interview on 2 February 1896: "I think it has done more to emancipate woman than any one thing in the world. I rejoice every time I see a woman ride by on a wheel. It gives her a feeling of self-reliance and independence the moment she takes her seat; and away she goes, the picture of untrammelled womanhood." In 1895 Frances Willard, the tightly laced president of the Woman's Christian Temperance Union, wrote A Wheel Within a Wheel: How I Learned to Ride the Bicycle, with Some Reflections by the Way, a 75-page illustrated memoir praising "Gladys", her bicycle, for its "gladdening effect" on her health and political optimism. Willard used a cycling metaphor to urge other suffragists to action.
In 1985, Georgena Terry started the first women-specific bicycle company. Her designs featured frame geometry and wheel sizes chosen to better fit women, with shorter top tubes and more suitable reach.
=== Economic implications ===
Bicycle manufacturing proved to be a training ground for other industries and led to the development of advanced metalworking techniques, both for the frames themselves and for special components such as ball bearings, washers, and sprockets. These techniques later enabled skilled metalworkers and mechanics to develop the components used in early automobiles and aircraft.
Wilbur and Orville Wright, a pair of businessmen, ran the Wright Cycle Company which designed, manufactured and sold their bicycles during the bike boom of the 1890s.
Bicycle makers also pioneered the industrial practices later adopted by other manufacturers: mechanization and mass production (later copied by Ford and General Motors), vertical integration (also later copied by Ford), aggressive advertising (as much as 10% of all advertising in U.S. periodicals in 1898 was placed by bicycle makers), and lobbying for better roads (which had the side benefit of serving as advertising and of improving sales by providing more places to ride), all first practiced by Pope. In addition, bicycle makers adopted the annual model change (later derided as planned obsolescence, and usually credited to General Motors), which proved very successful.
Early bicycles were an example of conspicuous consumption, being adopted by the fashionable elites. In addition, by serving as a platform for accessories, which could ultimately cost more than the bicycle itself, the bicycle paved the way for the likes of the Barbie doll.
Bicycles helped create, or enhance, new kinds of businesses, such as bicycle messengers, traveling seamstresses, riding academies, and racing rinks. Their board tracks were later adapted to early motorcycle and automobile racing. There were a variety of new inventions, such as spoke tighteners, and specialized lights, socks and shoes, and even cameras, such as the Eastman Company's Poco. Probably the best known and most widely used of these inventions, adopted well beyond cycling, is Charles Bennett's Bike Web, which came to be called the jock strap.
They also presaged a move away from public transit that would explode with the introduction of the automobile.
J. K. Starley's company became the Rover Cycle Company Ltd. in the late 1890s, and then renamed the Rover Company when it started making cars. Morris Motors Limited (in Oxford) and Škoda also began in the bicycle business, as did the Wright brothers. Alistair Craig, whose company eventually emerged to become the engine manufacturers Ailsa Craig, also started from manufacturing bicycles, in Glasgow in March 1885.
In general, U.S. and European cycle manufacturers used to assemble cycles from their own frames and components made by other companies, although very large companies (such as Raleigh) used to make almost every part of a bicycle (including bottom brackets, axles, etc.) In recent years, those bicycle makers have greatly changed their methods of production. Now, almost none of them produce their own frames.
Many newer or smaller companies only design and market their products; the actual production is done by Asian companies. For example, some 60% of the world's bicycles are now being made in China. Despite this shift in production, as nations such as China and India become more wealthy, their own use of bicycles has declined due to the increasing affordability of cars and motorcycles. One of the major reasons for the proliferation of Chinese-made bicycles in foreign markets is the lower cost of labor in China.
Amid the European financial crisis of that time, bicycle sales in Italy (1.75 million) surpassed new car sales in 2011.
=== Environmental impact ===
One of the profound economic implications of bicycle use is that it liberates the user from motor fuel consumption (Ballantine, 1972). The bicycle is an inexpensive, fast, healthy and environmentally friendly mode of transport. Ivan Illich stated that bicycle use extended the usable physical environment for people, while alternatives such as cars and motorways degraded and confined people's environment and mobility. Currently, two billion bicycles are in use around the world. Children, students, professionals, laborers, civil servants and seniors are pedaling around their communities. They all experience the freedom and the natural opportunity for exercise that the bicycle easily provides. The bicycle also has the lowest carbon intensity of any mode of travel.
=== Manufacturing ===
The global bicycle market was worth $61 billion in 2011. As of 2009, 130 million bicycles were sold every year globally, 66% of them made in China.
=== Legal requirements ===
Early in its development, as with automobiles, there were restrictions on the operation of bicycles. Along with advertising, and to gain free publicity, Albert A. Pope litigated on behalf of cyclists.
The 1968 Vienna Convention on Road Traffic of the United Nations considers a bicycle to be a vehicle, and a person controlling a bicycle (whether actually riding or not) is considered an operator or driver. The traffic codes of many countries reflect these definitions and demand that a bicycle satisfy certain legal requirements before it can be used on public roads. In many jurisdictions, it is an offense to use a bicycle that is not in a roadworthy condition.
In some countries, bicycles must have functioning front and rear lights when ridden after dark.
Some countries require child and/or adult cyclists to wear helmets, as this may protect riders from head trauma. Countries which require adult cyclists to wear helmets include Spain, New Zealand and Australia. Mandatory helmet wearing is one of the most controversial topics in the cycling world. Proponents argue that it reduces head injuries and is therefore an acceptable requirement, while opponents argue that by making cycling seem more dangerous and cumbersome, it reduces the number of cyclists on the streets, creating an overall negative health effect: fewer people cycle for their own health, and the remaining cyclists become more exposed through a reversed safety-in-numbers effect.
=== Theft ===
Bicycles are popular targets for theft, due to their value and ease of resale. The number of bicycles stolen annually is difficult to quantify, as a large number of these crimes go unreported. In a Montreal survey published in the International Journal of Sustainable Transportation, around 50% of participants had experienced a bicycle theft at some point as active cyclists. Most bicycles have serial numbers that can be recorded to verify identity in case of theft.
== See also ==
== Notes ==
== References ==
=== Citations ===
=== Sources ===
General
== Further reading ==
Glaskin, Max (2013). Cycling Science: How Rider and Machine Work Together. Chicago: University of Chicago Press. ISBN 978-0-226-92187-7.
== External links ==
A History of Bicycles and Other Cycles at the Canada Science and Technology Museum |
Bicycle transportation planning and engineering | Bicycle transportation planning and engineering are the disciplines related to transportation engineering and transportation planning concerning bicycles as a mode of transport and the concomitant study, design and implementation of cycling infrastructure. It includes the study and design of dedicated transport facilities for cyclists (e.g. cyclist-only paths) as well as mixed-mode environments (i.e. where cyclists share roads and paths with vehicular and foot traffic) and how both of these examples can be made to work safely. In jurisdictions such as the United States it is often practiced in conjunction with planning for pedestrians as a part of active transportation planning.
== Networks, signage and maps ==
Most national cycling route networks have long-distance named routes, rather like highways. However, the international numbered-node cycle network has a modular design that enables arbitrary routes using simple signage. Both aim to minimize map use with plentiful signs.
Cycle networks of routes can be developed in co-ordination with cycle maps. Co-ordination can be local or national (the numbered-node cycle network has national co-ordination in some countries, and local co-ordination in others).
== Bikeways ==
Some examples of the types of bikeways under the purview of bicycle transportation engineers include partially segregated in-road infrastructure such as bike lanes and buffered bike lanes; physically segregated in-road infrastructure such as cycle tracks; bike paths with their own right-of-way; and shared facilities such as bicycle boulevards, shared lane markings, advisory bike lanes, road shoulders, wide outside lanes, shared street schemes, and any roadways with legal access for cycling.
=== In roadway ===
NACTO guidelines state "desired width for a cycle track should be 5 feet (1.5 m). In areas with high bicyclist volumes or uphill sections, the desired width should be 7 feet (2.1 m)".
CROW standard width for one way cycle paths in the Netherlands is a minimum of 2.5 m (8′). For bidirectional use the minimum is 3.5 m (11′).
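The metric and imperial figures quoted in the NACTO and CROW guidance above can be cross-checked with a simple unit conversion (the helper below is a throwaway illustration, not taken from either design manual):

```python
FEET_PER_METRE = 3.28084  # exact enough for design-width comparisons

def m_to_ft(metres):
    """Convert a width in metres to feet."""
    return metres * FEET_PER_METRE

# NACTO desired cycle-track width: 1.5 m is about 4.9 ft (quoted as 5 ft).
# CROW one-way minimum: 2.5 m is about 8.2 ft (quoted as 8 ft).
print(round(m_to_ft(1.5), 1), round(m_to_ft(2.5), 1))
```

This confirms the rounded imperial equivalents given in the text.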
==== Unsegregated ====
Bicycle boulevard
Shared bus and cycle lane
==== Partially segregated ====
Cycle lane
Shared lane marking
==== Segregated ====
Cycle track
===== Barriers =====
Options for barriers are soft-hit posts, raised curb or traffic barriers.
=== Off road ===
Bike path
Rail trail
=== Bike freeway ===
Bike freeway
== Intersections and signals ==
Bicycle transportation engineers also are involved in improving intersections/junctions and traffic lights.
Advanced stop lines are one example of road markings on mixed mode shared space as cycling infrastructure.
== Other infrastructure ==
Other measures include road diets, curb extensions, and improving the road surface, as well as building bicycle parking such as bicycle locks, bicycle stands and lockers.
== Legislation ==
In California, new bikeway design standards were last adopted in 1976. Those designs were adapted by the American Association of State Highway and Transportation Officials (AASHTO) to become the AASHTO Guide for Bicycle Facilities, which is followed throughout the USA.
== See also ==
Bikeway controversies
Bicycle Master Plan
Bikeway safety
Cyclability
CROW Design Manual for Bicycle Traffic
Bicycle mobility
Sustainable transport
Outline of cycling
Cycling infrastructure
Mikael Colville-Andersen
== References == |
Boat | A boat is a watercraft of a large range of types and sizes, but generally smaller than a ship, which is distinguished by its larger size or capacity, its shape, or its ability to carry boats.
Small boats are typically used on inland waterways such as rivers and lakes, or in protected coastal areas. However, some boats (such as whaleboats) were intended for offshore use. In modern naval terms, a boat is a vessel small enough to be carried aboard a ship.
Boats vary in proportion and construction methods with their intended purpose, available materials, or local traditions. Canoes have been used since prehistoric times and remain in use throughout the world for transportation, fishing, and sport. Fishing boats vary widely in style partly to match local conditions. Pleasure craft used in recreational boating include ski boats, pontoon boats, and sailboats. House boats may be used for vacationing or long-term residence. Lighters are used to move cargo to and from large ships unable to get close to shore. Lifeboats have rescue and safety functions.
Boats can be propelled by manpower (e.g. rowboats and paddle boats), wind (e.g. sailboats), and inboard/outboard motors (including gasoline, diesel, and electric).
== History ==
=== Differentiation from other prehistoric watercraft ===
The earliest watercraft are considered to have been rafts. These would have been used for voyages such as the settlement of Australia sometime between 50,000 and 60,000 years ago.
A boat differs from a raft by obtaining its buoyancy by having most of its structure exclude water with a waterproof layer, e.g. the planks of a wooden hull, the hide covering (or tarred canvas) of a currach. In contrast, a raft is buoyant because it joins components that are themselves buoyant, for example, logs, bamboo poles, bundles of reeds, floats (such as inflated hides, sealed pottery containers or, in a modern context, empty oil drums). The key difference between a raft and a boat is that the former is a "flow through" structure, with waves able to pass up through it. Consequently, except for short river crossings, a raft is not a practical means of transport in colder regions of the world as the users would be at risk of hypothermia. Today that climatic limitation restricts rafts to between 40° north and 40° south, with, in the past, similar boundaries that have moved as the world's climate has varied.: 11
=== Types ===
The earliest boats may have been either dugouts or hide boats.: 11 The oldest recovered boat in the world, the Pesse canoe, found in the Netherlands, is a dugout made from the hollowed tree trunk of a Pinus sylvestris that was constructed somewhere between 8200 and 7600 BC. This canoe is exhibited in the Drents Museum in Assen, Netherlands. Other very old dugout boats have also been recovered. Hide boats, made from covering a framework with animal skins, could be as old as logboats, but such a structure is much less likely to survive in an archaeological context.: 63
Plank-built boats are considered, in most cases, to have developed from the logboat. There are examples of logboats that have been expanded: by deforming the hull under the influence of heat, by raising up the sides with added planks, or by splitting down the middle and adding a central plank to make it wider. (Some of these methods have been in quite recent use – there is no simple developmental sequence). The earliest known plank-built boats are from the Nile, dating to the third millennium BC. Outside Egypt, the next earliest are from England. The Ferriby boats are dated to the early part of the second millennium BC and the end of the third millennium.: 63, 66–67 Plank-built boats require a level of woodworking technology that was first available in the Neolithic with more complex versions only becoming achievable in the Bronze Age.: 59
== Types ==
Boats can be categorized by their means of propulsion. These divide into:
Unpowered. This involves drifting with the tide or a river current.
Powered by the crew-members on board, using oars, paddles or a punting pole or quant.
Powered by sail.
Towed – either by humans or animals from a river or canal bank (or in very shallow water, by walking on the sea or river bed) or by another vessel.
Powered by machinery, such as internal combustion engines, steam engines or by batteries and an electric motor.
Any one vessel may use more than one of these methods at different times or in combination.: 33
A number of large vessels are usually referred to as boats. Submarines are a prime example. Other types of large vessels which are traditionally called boats include Great Lakes freighters, riverboats, and ferryboats. Though large enough to carry their own boats and heavy cargo, these vessels are designed for operation on inland or protected coastal waters.
== Terminology ==
The hull is the main, and in some cases only, structural component of a boat. It provides both capacity and buoyancy. The keel is a boat's "backbone", a lengthwise structural member to which the perpendicular frames are fixed. On some boats, a deck covers the hull, in part or whole. While a ship often has several decks, a boat is unlikely to have more than one. Above the deck are often lifelines connected to stanchions, bulwarks perhaps topped by gunnels, or some combination of the two. A cabin may protrude above the deck forward, aft, along the centerline, or cover much of the length of the boat. Vertical structures dividing the internal spaces are known as bulkheads.
The forward end of a boat is called the bow, the aft end the stern. Facing forward the right side is referred to as starboard and the left side as port.
== Building materials ==
Until the mid-19th century, most boats were made of natural materials, primarily wood, although bark and animal skins were also used. Early boats include the birch bark canoe, the animal hide-covered kayak and coracle and the dugout canoe made from a single log.
By the mid-19th century, some boats had been built with iron or steel frames but still planked in wood. In 1855 ferro-cement boat construction was patented by the French, who coined the name "ferciment". This is a system by which a steel or iron wire framework is built in the shape of a boat's hull and covered over with cement. Reinforced with bulkheads and other internal structures it is strong but heavy, easily repaired, and, if sealed properly, will not leak or corrode.
As the forests of Britain and Europe continued to be over-harvested to supply the keels of larger wooden boats, and the Bessemer process (patented in 1855) cheapened the cost of steel, steel ships and boats began to be more common. By the 1930s boats built entirely of steel from frames to plating were seen replacing wooden boats in many industrial uses and fishing fleets. Private recreational boats of steel remain uncommon. In 1895 WH Mullins produced steel boats of galvanized iron and by 1930 became the world's largest producer of pleasure boats.
Mullins also offered boats in aluminum from 1895 through 1899 and once again in the 1920s, but it was not until the mid-20th century that aluminum gained widespread popularity. Though much more expensive than steel, aluminum alloys exist that do not corrode in salt water, allowing a similar load carrying capacity to steel at much less weight.
Around the mid-1960s, boats made of fiberglass (aka "glass fiber") became popular, especially for recreational boats. Fiberglass is also known as "GRP" (glass-reinforced plastic) in the UK, and "FRP" (for fiber-reinforced plastic) in the US. Fiberglass boats are strong and do not rust, corrode, or rot. Instead, they are susceptible to structural degradation from sunlight and extremes in temperature over their lifespan. Fiberglass structures can be made stiffer with sandwich panels, where the fiberglass encloses a lightweight core such as balsa or foam.
Cold molding is a modern construction method, using wood as the structural component. In one cold molding process, very thin strips of wood are layered over a form. Each layer is coated with resin, followed by another directionally alternating layer laid on top. Subsequent layers may be stapled or otherwise mechanically fastened to the previous, or weighted or vacuum bagged to provide compression and stabilization until the resin sets. An alternative process uses thin sheets of plywood shaped over a disposable male mold, and coated with epoxy.
== Propulsion ==
The most common means of boat propulsion are as follows:
Engine
Inboard motor
Stern drive (Inboard/outboard)
Outboard motor
Paddle wheel
Water jet (jetboat, personal water craft)
Fan (hovercraft, air boat)
Man (rowing, paddling, setting pole etc.)
Wind (sailing)
== Buoyancy ==
A boat displaces its weight in water, regardless of whether it is made of wood, steel, fiberglass, or even concrete. If weight is added to the boat, the volume of the hull drawn below the waterline will increase to keep the balance above and below the surface equal. Boats have a natural or designed level of buoyancy. Exceeding it will cause the boat first to ride lower in the water, second to take on water more readily than when properly loaded, and ultimately, if overloaded by any combination of structure, cargo, and water, sink.
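The displacement principle can be illustrated with a short worked example for an idealized box-shaped hull; the hull dimensions and masses below are illustrative assumptions, not figures for any particular boat:

```python
# Archimedes' principle: a floating boat displaces water equal to its own
# weight. For an idealized box-shaped hull, the submerged volume is simply
# length * beam * draft, so the draft follows directly from the total mass.

FRESHWATER_DENSITY = 1000.0  # kg per cubic metre

def draft(total_mass_kg, length_m, beam_m, water_density=FRESHWATER_DENSITY):
    """Depth to which a box-shaped hull must sink to displace its weight."""
    displaced_volume = total_mass_kg / water_density  # cubic metres
    return displaced_volume / (length_m * beam_m)     # metres

# A hypothetical 4 m x 1.5 m dinghy weighing 200 kg empty:
empty_draft = draft(200, 4.0, 1.5)
loaded_draft = draft(300, 4.0, 1.5)  # with a 100 kg load aboard
print(empty_draft, loaded_draft)      # the loaded boat rides lower
```

The same calculation also shows why the line notes that sea water matters: using a higher salt-water density gives a slightly smaller draft for the same load, which is the effect the Plimsoll line accounts for in brackish areas.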
As commercial vessels must be correctly loaded to be safe, and as the sea becomes less buoyant in brackish areas such as the Baltic, the Plimsoll line was introduced to prevent overloading.
== European Union classification ==
Since 1998, all new leisure boats and barges built in Europe between 2.5 m and 24 m must comply with the EU's Recreational Craft Directive (RCD). The Directive establishes four categories that specify the allowable wind and wave conditions for vessels in each class:
Class A - the boat may safely navigate any waters.
Class B - the boat is limited to offshore navigation. (Winds up to Force 8 & waves up to 4 metres)
Class C - the boat is limited to inshore (coastal) navigation. (Winds up to Force 6 & waves up to 2 metres)
Class D - the boat is limited to rivers, canals and small lakes. (Winds up to Force 4 & waves up to 0.5 metres)
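The four design categories above amount to a simple lookup from conditions to permitted classes. A minimal sketch, using the wind-force and wave-height limits quoted above (the function and data structure are illustrative, not part of the Directive itself):

```python
# RCD design categories with their maximum Beaufort wind force and
# significant wave height (metres), as quoted in the class list above.
# Class A has no stated upper limit.
RCD_CATEGORIES = {
    "A": (None, None),  # any waters
    "B": (8, 4.0),      # offshore
    "C": (6, 2.0),      # inshore (coastal)
    "D": (4, 0.5),      # rivers, canals and small lakes
}

def suitable_categories(wind_force, wave_height_m):
    """Return the design categories rated for the given conditions."""
    suitable = []
    for category, (max_wind, max_wave) in RCD_CATEGORIES.items():
        if max_wind is None or (wind_force <= max_wind
                                and wave_height_m <= max_wave):
            suitable.append(category)
    return suitable

# Force 5 winds with 1.5 m waves are within the ratings of Classes A, B and C.
print(suitable_categories(5, 1.5))
```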
Europe is the main producer of recreational boats, with Poland the second-largest producer in the world. European brands are known all over the world; indeed, these are the brands that created the RCD and set the standard for shipyards worldwide.
== See also ==
== References ==
== External links ==
University of Washington Libraries Digital Collections – Freshwater and Marine Image Bank (enter search term "vessels" for images of boats and vessels) |
Bombardier Aerospace | Bombardier Aviation, a division of Bombardier Inc., is headquartered in Dorval, Quebec, Canada. The company currently produces the Global and Challenger series of business jets.
At its peak, Bombardier operated manufacturing plants in 27 countries and employed over 70,000 workers. However, under financial pressure, it significantly reduced its workforce and divested its entire commercial aircraft portfolio including the Q-Series regional turboprop, CRJ-Series of regional jets, and the C-Series narrowbody jet.
== History ==
=== Early activities ===
Bombardier acquired the state-owned Canadair from the government of Canada in 1986 and restored it to profitability. Canadair had been nationalized in 1976.
In 1989, Bombardier acquired the near-bankrupt Short Brothers aircraft manufacturing company in Belfast, Northern Ireland. This was followed in 1990 by the acquisition of the bankrupt American company Learjet, a manufacturer of business jets headquartered in Wichita, Kansas. A third major aircraft manufacturer acquisition was the money-losing Boeing subsidiary, de Havilland Aircraft of Canada based in Toronto, Ontario, acquired in 1992. Canadair, Learjet and Short Brothers cost US$215 million to acquire and produced revenues of US$1.3 billion in 1990. The sales of Canadair commuter jets and airborne surveillance systems, Learjet business aircraft and Short Brothers C-23 Sherpa cargo planes were growing at that time.
The aerospace company accounted for over half of Bombardier Inc.'s revenue. By the start of the 2010s, its most popular aircraft included its Dash 8 Series 400, CRJ100/200/440, and CRJ700/900/1000 lines of regional airliners although the company was devoting most of its Research and Development budget to the newer CSeries. It also manufactured the Bombardier 415 amphibious water-bomber (in Dorval and North Bay), and the Global Express and the Challenger lines of business jets.
The CSeries, which Bombardier offered in several size versions, initially competed with the Airbus A318 and Airbus A319; the Boeing 737 Next Generation 737-600 and 737-700 models; and the Embraer 195. Bombardier claimed the CSeries would burn 20% less fuel per trip than these competitors, which would make it still about 8% more fuel efficient than the Boeing 737 MAX, which was introduced in 2017.
In 2008, the launch customer for the CSeries, Lufthansa, signed a letter of intent for up to 60 aircraft and 30 options. The Montreal manufacturing complex was redeveloped by Ghafari Associates to incorporate lean manufacturing for the CSeries.
=== 2010–2016 ===
On 24 March 2011, Shanghai-based Commercial Aircraft Corporation of China (COMAC) and Bombardier Inc. signed a framework agreement for a long-term strategic cooperation on commercial aircraft. The intention was to break the near-duopoly of Airbus and Boeing. Aircraft covered by the programme included the Bombardier CRJ-series, CSeries and Q-series; and the Comac ARJ21 and Comac C919. In January 2012, the firm began manufacturing simple structures, such as flight controls for the CRJ series, from its first facility in Africa, near Casablanca, Morocco. On 30 September 2013, it broke ground on its permanent facility, due to open late 2014. In October, a joint development deal between Bombardier and a South Korean consortium consisting of Korea Aerospace Industries and Korean Air Lines was revealed, to develop a 90-seater turboprop regional airliner, targeting a 2019 launch date.
In November 2012, Bombardier signed the largest deal in its history with Swiss business jet operator VistaJet for 56 Global series jets for a total value of $3.1 billion, including an option for an additional 86 jets, for a total transaction value of $7.8 billion. In April 2013, Canada's Porter Airlines placed a conditional order for 12 CSeries aircraft, with options for another 18; this was conditional on jets being allowed to use Billy Bishop Toronto City Airport off downtown Toronto. In 2015, the Canadian Government announced that it would not approve the marine expansion of the runway required for the use of jets at the airport and the proposal was shelved.
In January 2014, 1,700 employees were cut from Bombardier Aerospace due to a 19 percent drop in orders in 2013. In July of that year, Bombardier reorganized itself in response to underperformance; President Guy Hachey retired and Bombardier Aerospace was split into three divisions: business aircraft; commercial aircraft and aerostructures; and engineering services, while 1,800 jobs were cut. In its 2014-year end statement, Bombardier Aerospace reported its employee count had reduced by 3,700, delivered 290 aircraft and held orders for 282 more; and also claimed "strong long-term potential". On 29 October 2015, Bombardier announced a US$4.9-billion third-quarter loss and $3.2 billion writedown on the CSeries. It also cancelled its Learjet 85 program, taking another US$1.2-billion writedown and cancelling 64 outstanding orders. The firm's debt reached approximately $9 billion, largely due to the CSeries, which had not recorded a single firm order since September 2014. Bombardier shares fell 17.4 per cent on that day. By 21 December 2015, the firm had 243 firm orders for the CSeries; a US$2.5 billion cash infusion – $1 billion from the provincial government plus a $1.5 billion investment from the Caisse de dépôt et placement du Québec – was keeping the parent company adequately funded.
On 17 February 2016, Bombardier announced its 2015 profits were $138 million before taking a $5.4 billion write-down; it also announced 7,000 jobs would be cut. After a long and expensive development, costing US$5.4 billion to date, including a US$3.2 billion writeoff, the small (110–125 seat) CS100 version of the CSeries received initial type certification from Transport Canada on 18 December 2015. At the time, the company had 243 firm orders and letters of intent and commitment for another 360. Most orders were for the CS300 model. The first CS100 was expected to be flying by mid-2016 in Lufthansa colours. "Certification is a great thing, but 2016 is going to be critical for orders," analyst Chris Murray, a Managing Director with Alta Corp, told Bloomberg Business. Fred Cromer, president of Bombardier's commercial aircraft unit, hinted on 21 December 2015 that price cuts or other incentives may be offered to bolster sales (list price for the CS100 was US$71.8 million and for the CS300 US$82 million).
Intending to boost profit margins, Bombardier announced on 12 January 2016 that it would cancel deals with third party sales agent Tag Aeronautics, as well as cancelling 24 firm and 30 optional orders, aiming to later resell these aircraft without a sales agency fee. The CSeries was adversely hit by production delays and stiff competition in 2016. On 20 January, United Continental Holdings Inc. announced that it had ordered 40 Boeing 737-700s instead. Air Canada announced it would buy up to 75 CS300s, a larger variant, on 17 February 2016; prior to this, there had been no CSeries orders since 2014. The CSeries program was forecast to have positive cash flow after delivering approximately 200 aircraft. David Tyerman, an analyst with Canaccord Genuity, commented on the difficulty of winning orders and questioned how profitable the next CSeries order will be. According to Bjorn Fehrm of the aviation consulting firm Leeham Company, the first 15 CSeries built in 2016 each cost $60 million to make, but would sell for only $30 million.
Bombardier held negotiation with Delta Air Lines, the latter placing an order in April 2016 for 75 CS100 models with an option for 50 additional aircraft. At full list price, the deal would total US$5.6 billion; sources claimed that Delta had received a significant discount. Air Canada firmed up its tentative order for 45 CS300s with an option for another 30 in June 2016; it was reportedly valued at $3.8 billion, increasing to $6.3 billion if the option was exercised (based on the aircraft's list price). The next day, Bombardier delivered the first CSeries jet to Swiss International Air Lines, the first operator to start flying them.
=== Government subsidy controversies ===
==== Embraer ====
Brazil and Canada engaged in an international, adjudicated trade dispute over government subsidies to domestic aircraft manufacturers in the late 1990s and early 2000s. The World Trade Organization decided that Brazil ran an illegal subsidy program, Proex, benefiting Brazilian manufacturer Embraer from at least 1999–2000; and that Canada illegally subsidized its indigenous regional airliner industry.
In late September 2017, the World Trade Organization announced that it would consider Brazil's complaint filed in February, including allegations that the Canadian government unfairly subsidized the CSeries. Embraer claimed that the subsidies are an "unsustainable practice that distorts the entire global market, harming competitors at the expense of Canadian taxpayers."
==== Boeing ====
On 28 April 2016, Bombardier Aerospace, a division of Bombardier Inc., recorded a firm order from Delta Air Lines for 75 CSeries CS100s plus 50 options. On 27 April 2017, The Boeing Company filed a petition for dumping them at $19.6m each, below their $33.2m production cost. On the same day, both Bombardier and the government of Canada rejected Boeing's claim, vowing to mount a "vigorous defence".
On 9 June 2017, the US International Trade Commission (USITC) found that the US industry could be threatened and should be protected. On 26 September, after lobbying by Boeing, the US Department of Commerce (DoC) alleged subsidies of 220% and intended to collect deposits accordingly, plus a preliminary 80% anti-dumping duty, resulting in a duty of 300%. The DoC announced its final ruling, a total duty of 292%, on 20 December, hailing it as an affirmation of the "America First" policy. In October, with financial issues already mounting, Bombardier was indirectly forced by the US government tariffs to relinquish 50.01% of its stake in the CSeries program to Airbus for a symbolic CAD$1, and would produce CSeries aircraft in the United States.
On 10 January 2018, Canada formally filed a complaint at the World Trade Organization (WTO) against the United States over the affair. On 26 January, the four USITC commissioners unanimously reversed their earlier claims, finding that US industry is no longer threatened and no duty orders will be issued, overturning the imposed duties. The Commission public report was made available by February 2018. On March 22, Boeing declined to appeal the ruling.
==== 2015–2017 government assistance ====
On 29 October 2015, the Quebec government announced that it would invest US$1 billion (roughly CAD$1.3 billion) to protect jobs and the CSeries, the province buying a 49.5% interest in the limited partnership controlling the CSeries program. Bombardier had reportedly asked Ottawa for a repayable loan of $350 million, while the province expected the federal government to match its $1 billion loan in return for a near 50 percent stake in the CSeries program. Debts from the project had forced Bombardier to raise cash and seek aid in order to stay afloat. Both provincial and federal contributions came via repayable loans; independent economist Mark Milke claimed it is questionable whether they would be repaid, calling the bailout loans "corporate welfare" in The Globe and Mail.
Days after his swearing-in, on 10 November 2015, Prime Minister Justin Trudeau stated Bombardier must make a "strong business case" for federal aid, agreeing that the firm exemplified important high-value manufacturing, but stated that such aid would be shaped by Canadians' best interests, not on "emotion, politics or symbols". In April 2016, the federal government offered an aid package to Bombardier without disclosing the amount or conditions imposed; it reportedly rejected the offer. An unnamed source advised Reuters that negotiations were still underway. On 14 April 2016, Bombardier shares were at a six-month high over rumors that Delta had ordered CSeries jets. The firm continued to request a $1 billion aid package from the federal government.
In May 2016, the federal government reportedly offered a $1 billion aid package (in addition to the $1 billion subsidy offered by the Government of Quebec) with the condition of Bombardier ending its dual-class share structure, which enables the Bombardier and Beaudoin families to control it despite a minority ownership. According to Bloomberg, the talks reached a standstill over this condition. The federal plan also recommended that the firm issue new shares to gain $1 billion in additional funding. The Toronto Star predicted that the government would bail out the firm, as bankruptcy would lead to the loss of some 70,000 jobs as well as significant exports, which had totaled $34.2 billion in the previous five years. In May 2016, Federal Finance Minister Bill Morneau said the aerospace sector is "critically important". In February 2017, the federal government agreed to provide $372.5 million in interest-free repayable loans, to be issued in instalments over the following four years; one third was intended for the CSeries while the rest went to the Global 7000 business jet.
=== Airbus partnership ===
On 16 October 2017, Bombardier and Airbus announced a partnership on the CSeries program to expand in an estimated market of more than 6,000 new 100-150 seat aircraft over 20 years; in July 2018, Airbus acquired a 50.01% majority stake in the holding company for the program, Bombardier keeping 31% and Investissement Québec 19%. Under this deal, the CSeries is now marketed as the Airbus A220. Access to Airbus's supply chain expertise was intended to save production costs while the headquarters and primary assembly line remain in Québec, with a second assembly line at the Airbus Mobile factory in Alabama, US. Airbus did not pay for its share, nor did it assume any debt. Airbus insisted that it had no plan to buy Bombardier's stake in the program, remaining strategic partners after 2025; clauses allowed it to buy out Quebec's share in 2023 and Bombardier's seven years after the deal closes, though production is required to remain in Quebec until at least 2041. Bombardier CEO Alain Bellemare said the deal would raise sales: "It brings certainty to the future of the program so it increases the level of confidence that the aircraft is there to stay. Combining the CSeries with Airbus's global scale ... will take the CSeries program to new heights".
=== Divestment ===
On 8 November 2018, Viking Air parent Longview Aviation Capital Corp. acquired the Q400 program and the de Havilland brand from Bombardier. Viking had already bought the discontinued de Havilland Canada aircraft type certificates in 2006. At that point, Q400 sales trailed those of rival ATR. Bombardier announced the sale was for $300 million and expected $250 million annual savings. The Q400 deal closed on 3 June 2019; the new holding company, De Havilland Aircraft of Canada Limited, inherited an order book of 51 Q400s. Also in late 2018, Bombardier sold its business jet training program to CAE Inc. for $645 million and announced 5,000 job cuts over 18 months across its 70,000 employees worldwide: 500 in Ontario, 2,500 in Quebec and 2,000 outside Canada.
Bombardier shifted focus from commercial to business aircraft, anticipating business jet shipments to increase from 135 in 2018 to 150-155 in 2019, and forecast revenues of $16.5 billion in 2018, rising to over $20 billion in 2020 with a free cash flow of $0.75-1 billion, mostly via the large Global 7500. Business Aircraft revenues were expected to increase from $5 billion for 2018 to $6.25 billion in 2019 and $8.5 billion in 2020 with 180 deliveries, including aftermarket within the 4,700 fleet doubling from the 28% captured in 2015. Aerostructures & Engineering Services were expected to grow from $2 billion in 2018 to $2.25 billion in 2020. Airliner revenues were expected to shrink from $1.7 billion to $1.4 billion in 2019, halving losses to $125 million, with deliveries flat at 35 CRJs and Q400s; it was to be profitable with CRJs only in 2020.
On 2 May 2019, Bombardier's aerospace division was renamed Bombardier Aviation following the divestment of the CSeries and Q400 programmes. On 25 June 2019, Bombardier agreed with Mitsubishi Heavy Industries to sell the CRJ program; the deal was expected to close in early 2020, subject to regulatory approval. Mitsubishi would gain Bombardier's global expertise in engineering, certification, customer relations and support, boosting its SpaceJet (formerly MRJ) programme and enabling its production in North America. The deal included two service centres in Canada and two in the US, as well as the CRJ's type certificates. Bombardier retained the Mirabel assembly facility and would produce the CRJ on behalf of Mitsubishi until the current order backlog was complete. In early May 2020, Mitsubishi confirmed that all conditions had been met. The transaction closed on 1 June. Bombardier's CRJ-related service and support activities were transferred to a new Montreal-based company, MHI RJ Aviation Group.
On 31 October 2019, Bombardier announced the sale of its aerostructures activities and aftermarket services operations in Northern Ireland and Morocco, and its aerostructures maintenance, repair and overhaul (MRO) facility in Dallas, to Spirit AeroSystems. The sale was expected to close in the first half of 2020 subject to regulatory approval. In September 2020 Spirit said "there can be no assurances" that conditions would be met by the 31 October deadline. A last-minute amendment reduced the amount of the cash consideration and adjusted the overall valuation, enabling the parties to set a closing date of 30 October.
On 12 February 2020, Bombardier sold its share in Airbus Canada Limited Partnership, the holding company for the A220 programme, for $591 million; Airbus now has a 75% share, with the remaining 25% owned by Investissement Québec. This sale marked Bombardier's "strategic exit" from the commercial aviation sector.
Despite rumours that its business jet activities might be sold to Textron, parent company of Cessna, Beechcraft and Bell Helicopters, on 17 February it emerged that Bombardier had instead agreed to sell its rail division to Alstom and would focus exclusively on business aviation.
== Aircraft ==
=== Current ===
=== Divested ===
=== Out of production ===
Aircraft whose production ended while at Bombardier, rather than after divestment
== Facilities ==
Bombardier Aviation has several facilities.
=== Facilities history ===
Bombardier Aerospace once had manufacturing, engineering and services facilities in 27 countries. Its production facilities were located in Canada, the United States, and Mexico.
On 3 May 2018, Bombardier announced the sale of its Toronto Downsview facility where it manufactures the Global business jet family and the Q400 regional turboprops, for $635 million, leased back for three to five years to maintain Q400 production, while leasing a 38-acre (15 ha) site at Toronto Pearson International Airport to open a final assembly plant for the Global business jets.
On 2 May 2019, Bombardier announced that all of its aerospace assets would be consolidated into a "single, streamlined and fully integrated business", resulting in the sale of its operations in Belfast and Morocco.
=== Current facilities ===
==== Canada ====
Montreal Trudeau International Airport — Headquarters. Challenger 300 and 650 final assembly and flight test. Global family interior completion.
Saint-Laurent, Quebec — Product Development Centre. Cockpit and aft fuselage manufacturing facility.
Toronto Pearson International Airport — Global family final assembly
==== Mexico ====
Querétaro — Aircraft component manufacturing facility for Challenger 605 and Global 6000/7000.
==== United States ====
Wichita, Kansas — Facility used for the US headquarters of Bombardier as well as an expanded service center, flight testing and engineering facility and home to the Bombardier Defense group.
=== Former facilities ===
==== Canada ====
Downsview Airport — Dash 8 final assembly and flight test.
Montréal Mirabel International Airport — CRJ series and A220 family final assembly and flight test. In July 2018 Airbus took over the Mirabel facility as part of its takeover of the A220 family program, whilst Bombardier continued to manufacture CRJ series aircraft at the Mirabel facility until the order backlog was completed in December 2020.
North Bay Airport — Final assembly and flight test site for the Bombardier CL-415 until production ended in 2015. After a lull in 2002, the plant was restarted by contractor Vortex Aerospace Services in 2005. While production has ended, Vortex continues to provide training for the CL-415 at the facility.
==== Morocco ====
Casablanca (BP 197 Zone Franche Midparc next to Mohammed V International Airport) — Flight controls for CRJ series aircraft. In November 2020, Bombardier sold its Casablanca operations to Spirit AeroSystems.
==== United Kingdom ====
Belfast, Northern Ireland — Former Short Brothers plant across from Victoria Park near George Best Belfast City Airport. Aircraft fuselage, engine nacelle, wing manufacturing and assembly facility. In November 2020, Bombardier sold its Belfast operations to Spirit AeroSystems.
== Production ==
Bombardier Aerospace fiscal or calendar year delivery of regional, business and amphibious aircraft:
== Gallery ==
== See also ==
Viking Air – Canadian manufacturer that purchased the type certificates from Bombardier for all discontinued de Havilland Canada designs
== Notes ==
== References ==
Commercial Aircraft and Airline Markings by Christopher Chant.
== External links ==
Official website
Gregory Polek (26 June 2018). "Bombardier Sees Return to Prominence for its Regional Lineup". AIN online.
Jon Hemmerdinger (27 June 2018). "ANALYSIS: Bombardier returns to regional aircraft roots". Flightglobal. |
Bomber | A bomber is a military combat aircraft that utilizes air-to-ground weaponry to drop bombs, launch torpedoes, or deploy air-launched cruise missiles.
There are two major classifications of bomber: strategic and tactical. Strategic bombing is done by heavy bombers primarily designed for long-range bombing missions against strategic targets to diminish the enemy's ability to wage war by limiting access to resources through crippling infrastructure, reducing industrial output, or inflicting massive civilian casualties to an extent deemed to force surrender. Tactical bombing is aimed at countering enemy military activity and in supporting offensive operations, and is typically assigned to smaller aircraft operating at shorter ranges, typically near the troops on the ground or against enemy shipping.
Bombs were first dropped from an aircraft during the Italo-Turkish War, with the first major deployments coming in the First World War and Second World War by all major airforces, damaging cities, towns, and rural areas. The first bomber planes in history were the Italian Caproni Ca 30 and British Bristol T.B.8, both of 1913. Some bombers were decorated with nose art or victory markings.
During WWII, with engine power as a major limitation combined with the desire for accuracy and other operational factors, bomber designs tended to be tailored to specific roles. Early in the Cold War, however, bombers were the only means of carrying nuclear weapons to enemy targets, and held the role of deterrence.
With the advent of guided air-to-air missiles, bombers needed to avoid interception. High-speed and high-altitude flying became a means of evading detection and attack. With the advent of ICBMs the role of the bomber was brought to a more tactical focus in close air support roles, and a focus on stealth technology for strategic bombers.
== Classification ==
=== Strategic ===
Strategic bombing is done by heavy bombers primarily designed for long-range bombing missions against strategic targets such as supply bases, bridges, factories, shipyards, and cities themselves, to diminish the enemy's ability to wage war by limiting access to resources through crippling infrastructure or reducing industrial output. Current examples include the strategic nuclear-armed bombers: B-2 Spirit, B-52 Stratofortress, Tupolev Tu-95 'Bear', Tupolev Tu-22M 'Backfire' and Tupolev Tu-160 "Blackjack"; historically notable examples are the: Gotha G.IV, Avro Lancaster, Heinkel He 111, Junkers Ju 88, Boeing B-17 Flying Fortress, Consolidated B-24 Liberator, Boeing B-29 Superfortress, and Tupolev Tu-16 'Badger'.
=== Tactical ===
Tactical bombing, aimed at countering enemy military activity and in supporting offensive operations, is typically assigned to smaller aircraft operating at shorter ranges, typically near the troops on the ground or against enemy shipping. This role is filled by tactical bomber class, which crosses and blurs with various other aircraft categories: light bombers, medium bombers, dive bombers, interdictors, fighter-bombers, attack aircraft, multirole combat aircraft, and others.
Current examples: Xian JH-7, Dassault Mirage 2000D, and the Panavia Tornado IDS
Historical examples: Ilyushin Il-2 Shturmovik, Junkers Ju 87 Stuka, Republic P-47 Thunderbolt, Hawker Typhoon and Mikoyan MiG-27.
== History ==
The first use of an air-dropped bomb (actually four hand grenades specially manufactured by the Italian naval arsenal) was carried out by Italian Second Lieutenant Giulio Gavotti on 1 November 1911 during the Italo-Turkish war in Libya, although his plane was not designed for the task of bombing, and his improvised attacks on Ottoman positions had little impact. These picric acid-filled steel spheres were nicknamed "ballerinas" for the fluttering fabric ribbons attached to them. The Turks carried out the first ever anti-airplane operation in history during the same war: although lacking anti-aircraft weapons, they were the first to shoot down an airplane by rifle fire. The first aircraft to crash in a war was that of Lieutenant Piero Manzini, shot down on 25 August 1912.
=== Early bombers ===
On 16 October 1912, Bulgarian observer Prodan Tarakchiev dropped two of those bombs on the Turkish railway station of Karağaç (near the besieged Edirne) from an Albatros F.2 aircraft piloted by Radul Milkov, during the First Balkan War. This is deemed to be the first use of an aircraft as a bomber.
The first heavier-than-air aircraft purposely designed for bombing were the Italian Caproni Ca 30 and British Bristol T.B.8, both of 1913. The Bristol T.B.8 was an early British single-engined biplane built by the Bristol Aeroplane Company. They were fitted with a prismatic bombsight in the front cockpit and a cylindrical bomb carrier in the lower forward fuselage capable of carrying twelve 10 lb (4.5 kg) bombs, which could be dropped singly or as a salvo as required.
The aircraft was purchased for use both by the Royal Naval Air Service and the Royal Flying Corps (RFC), and three T.B.8s, that were being displayed in Paris during December 1913 fitted with bombing equipment, were sent to France following the outbreak of war. Under the command of Charles Rumney Samson, a bombing attack on German gun batteries at Middelkerke, Belgium was executed on 25 November 1914.
The dirigible, or airship, was developed in the early 20th century. Early airships were prone to disaster, but slowly the airship became more dependable, with a more rigid structure and stronger skin. Prior to the outbreak of war, Zeppelins, a larger and more streamlined form of airship designed by German Count Ferdinand von Zeppelin, were outfitted to carry bombs to attack targets at long range. These were the first long-range strategic bombers. Although the German air arm was strong, with a total of 123 airships by the end of the war, they were vulnerable to attack and engine failure, as well as navigational issues. Across all 51 raids, German airships inflicted little damage, with 557 Britons killed and 1,358 injured. The German Navy lost 53 of its 73 airships, and the German Army lost 26 of its 50 ships.
The Caproni Ca 30 was built by Gianni Caproni in Italy. It was a twin-boom biplane with three 67 kW (80 hp) Gnome rotary engines and first flew in October 1914. Test flights revealed power to be insufficient and the engine layout unworkable, and Caproni soon adopted a more conventional approach installing three 81 kW (110 hp) Fiat A.10s. The improved design was bought by the Italian Army and it was delivered in quantity from August 1915.
While mainly used as a trainer, Avro 504s were also briefly used as bombers at the start of the First World War by the Royal Naval Air Service (RNAS) when they were used for raids on the German airship sheds.
=== Strategic bombing ===
Bombing raids and interdiction operations were mainly carried out by French and British forces during the war, as the German air arm was forced to concentrate its resources on a defensive strategy. Notably, bombing campaigns formed a part of the British offensive at the Battle of Neuve Chapelle in 1915, with Royal Flying Corps squadrons attacking German railway stations in an attempt to hinder the logistical supply of the German army. The early, improvised attempts at bombing that characterized the early part of the war slowly gave way to a more organized and systematic approach to strategic and tactical bombing, pioneered by various air power strategists of the Entente, especially Major Hugh Trenchard; he was the first to advocate that there should be "... sustained [strategic bombing] attacks with a view to interrupting the enemy's railway communications ... in conjunction with the main operations of the Allied Armies."
When the war started, bombing was very crude (hand-held bombs were thrown over the side), yet by the end of the war long-range bombers equipped with complex mechanical bombing computers were being built, designed to carry large loads to destroy enemy industrial targets. The most important bombers used in World War I were the French Breguet 14, British de Havilland DH-4, German Albatros C.III and Russian Sikorsky Ilya Muromets. The Sikorsky Ilya Muromets was the first four-engine bomber to equip a dedicated strategic bombing unit during World War I. This heavy bomber was unrivaled in the early stages of the war, as the Central Powers had no comparable aircraft until much later.
Long range bombing raids were carried out at night by multi-engine biplanes such as the Gotha G.IV (whose name was synonymous with all multi-engine German bombers) and later the Handley Page Type O; the majority of bombing was done by single-engined biplanes with one or two crew members flying short distances to attack enemy lines and immediate hinterland. As the effectiveness of a bomber was dependent on the weight and accuracy of its bomb load, ever larger bombers were developed starting in World War I, while considerable money was spent developing suitable bombsights.
=== World War II ===
With engine power as a major limitation, combined with the desire for accuracy and other operational factors, bomber designs tended to be tailored to specific roles. By the start of the war this included:
dive bomber – specially strengthened for vertical diving attacks for greater accuracy
light bomber, medium bomber and heavy bomber – subjective definitions based on size and/or payload capacity
torpedo bomber – specialized aircraft armed with torpedoes
ground attack aircraft – aircraft used against targets on a battlefield such as troop or tank concentrations
night bomber – specially equipped to operate at night when opposing defences are limited
maritime patrol – long range bombers that were used against enemy shipping, particularly submarines
fighter-bomber – a modified fighter aircraft used as a light bomber
Bombers of this era were not intended to attack other aircraft although most were fitted with defensive weapons. World War II saw the beginning of the widespread use of high speed bombers which began to minimize defensive weaponry in order to attain higher speed. Some smaller designs were used as the basis for night fighters. A number of fighters, such as the Hawker Hurricane were used as ground attack aircraft, replacing earlier conventional light bombers that proved unable to defend themselves while carrying a useful bomb load.
=== Cold War ===
At the start of the Cold War, bombers were the only means of carrying nuclear weapons to enemy targets, and had the role of deterrence. With the advent of guided air-to-air missiles, bombers needed to avoid interception. High-speed and high-altitude flying became a means of evading detection and attack. Designs such as the English Electric Canberra could fly faster or higher than contemporary fighters. When surface-to-air missiles became capable of hitting high-flying bombers, bombers were flown at low altitudes to evade radar detection and interception.
Once "stand off" nuclear weapon designs were developed, bombers did not need to pass over the target to make an attack; they could fire and turn away to escape the blast. Nuclear strike aircraft were generally finished in bare metal or anti-flash white to minimize absorption of thermal radiation from the flash of a nuclear explosion. The need to drop conventional bombs remained in conflicts with non-nuclear powers, such as the Vietnam War or Malayan Emergency.
The development of large strategic bombers stagnated in the later part of the Cold War because of spiraling costs and the development of the Intercontinental ballistic missile (ICBM) – which was felt to have similar deterrent value while being impossible to intercept. Because of this, the United States Air Force XB-70 Valkyrie program was cancelled in the early 1960s; the later B-1B Lancer and B-2 Spirit aircraft entered service only after protracted political and development problems. Their high cost meant that few were built and the 1950s-designed B-52s are projected to remain in use until the 2040s. Similarly, the Soviet Union used the intermediate-range Tu-22M 'Backfire' in the 1970s, but their Mach 3 bomber project stalled. The Mach 2 Tu-160 'Blackjack' was built only in tiny numbers, leaving the 1950s Tupolev Tu-16 and Tu-95 'Bear' heavy bombers to continue being used into the 21st century.
The British strategic bombing force largely came to an end when the V bombers were phased out, the last of which left service in 1983. The French Mirage IV bomber version was retired in 1996, although the Mirage 2000N and the Rafale have taken on this role. The only other nation that fields strategic bombing forces is China, which has a number of Xian H-6s.
=== Modern era ===
Currently, only the United States Air Force, the Russian Aerospace Forces' Long-Range Aviation command, and China's People's Liberation Army Air Force operate strategic heavy bombers. Other air forces have transitioned away from dedicated bombers in favor of multirole combat aircraft.
At present, these air forces are each developing stealth replacements for their legacy bomber fleets, the USAF with the Northrop Grumman B-21, the Russian Aerospace Forces with the PAK DA, and the PLAAF with the Xian H-20. As of 2021, the B-21 is expected to enter service by 2026–2027. The B-21 would be capable of loitering near target areas for extended periods of time.
== Other uses ==
Occasionally, military aircraft have been used to bomb ice jams with limited success as part of an effort to clear them. In 2018, the Swedish Air Force dropped bombs on a forest fire, snuffing out flames with the aid of the blast waves. The fires had been raging in an area contaminated with unexploded ordnance, rendering them difficult to extinguish for firefighters.
== See also ==
Aerial bombing of cities
Air interdiction
Assembly ship
Carpet bombing
Fighter aircraft
List of bomber aircraft
Offensive counter air
Strategic bomber
== References ==
== External links == |
Buoyancy | Buoyancy (), or upthrust, is the force exerted by a fluid opposing the weight of a partially or fully immersed object (which may also be a parcel of fluid). In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid. Thus, the pressure at the bottom of a column of fluid is greater than at the top of the column. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object. The pressure difference results in a net upward force on the object. The magnitude of the force is proportional to the pressure difference, and (as explained by Archimedes' principle) is equivalent to the weight of the fluid that would otherwise occupy the submerged volume of the object, i.e. the displaced fluid.
For this reason, an object with average density greater than the surrounding fluid tends to sink because its weight is greater than the weight of the fluid it displaces. If the object is less dense, buoyancy can keep the object afloat. This can occur only in a non-inertial reference frame, which either has a gravitational field or is accelerating due to a force other than gravity defining a "downward" direction.
Buoyancy also applies to fluid mixtures, and is the most common driving force of convection currents. In these cases, the mathematical modelling is altered to apply to continua, but the principles remain the same. Examples of buoyancy driven flows include the spontaneous separation of air and water or oil and water.
Buoyancy is a function of the force of gravity or other source of acceleration on objects of different densities, and for that reason is considered an apparent force, in the same way that centrifugal force is an apparent force as a function of inertia. Buoyancy can exist without gravity in the presence of an inertial reference frame, but without an apparent "downward" direction of gravity or other source of acceleration, buoyancy does not exist.
The center of buoyancy of an object is the center of gravity of the displaced volume of fluid.
== Archimedes' principle ==
Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 BC. For objects, floating and sunken, and in gases as well as liquids (i.e. a fluid), Archimedes' principle may be stated thus in terms of forces:
Any object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object
—with the clarifications that for a sunken object the volume of displaced fluid is the volume of the object, and for a floating object on a liquid, the weight of the displaced liquid is the weight of the object.
Mathematically:
{\displaystyle \mathbf {F_{B}} =-\mathbf {F_{g}} =-\rho V{\textbf {g}}}
where {\displaystyle \mathbf {g} } is the local gravitational acceleration, {\displaystyle \rho } the density of the fluid, and {\displaystyle V} the displaced volume; the negative sign arises since the buoyant force acts in the direction opposite to the object's weight. Archimedes' principle does not consider the surface tension (capillarity) acting on the body, but this additional force modifies only the amount of fluid displaced and the spatial distribution of the displacement, so the principle {\displaystyle \mathbf {F_{B}} =-\mathbf {F_{g}} } remains valid.
The density of an object is defined as its mass per unit volume:
{\displaystyle \rho ={\frac {M}{V}}}
If an object is fully submerged, the displaced volume is simply the volume of the object, and requiring the net vertical force on the object to be zero gives:
{\displaystyle F_{\text{net}}=0=F_{B}-F_{g}=\rho _{\text{fluid}}Vg-\rho _{\text{obj}}Vg\implies \rho _{\text{fluid}}=\rho _{\text{obj}}}
This implies that objects of greater density than the fluid will sink, and objects of lesser density will float.
For example, wood dropped into water floats because it is less dense than the water it displaces.
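The density comparison above translates directly into code. The following is an illustrative sketch (not part of the article); the density values are rough assumed figures:

```python
# Sketch of Archimedes' principle: the buoyant force on a fully submerged
# volume is F_B = rho_fluid * V * g, and an object sinks or floats according
# to how its density compares with the fluid's.

G = 9.81  # gravitational acceleration, m/s^2

def buoyant_force(rho_fluid: float, volume: float, g: float = G) -> float:
    """Magnitude (N) of the buoyant force on a submerged volume (m^3)."""
    return rho_fluid * volume * g

def sink_or_float(rho_obj: float, rho_fluid: float) -> str:
    """Predict behaviour from the density comparison derived above."""
    if rho_obj > rho_fluid:
        return "sinks"
    if rho_obj < rho_fluid:
        return "floats"
    return "neutrally buoyant"

# Wood (~700 kg/m^3) in fresh water (~1000 kg/m^3) floats;
# one litre of displaced water supplies 9.81 N of upthrust.
print(sink_or_float(700.0, 1000.0))   # floats
print(buoyant_force(1000.0, 0.001))   # 9.81
```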
== Applications ==
A common application of Archimedes' principle is hydrostatic weighing. Suppose we measure, with a force probe, the tension in a line suspending a mass. When the mass is fully submerged in the fluid and the net force is zero:
{\displaystyle F_{\text{net}}=0=F_{B}+F_{T}-F_{g}=\rho _{\text{fluid}}Vg+F_{T}-mg}
{\displaystyle \implies V={\frac {mg-F_{T}}{\rho _{\text{fluid}}g}}}
Substituting this volume into the definition of density gives:
{\displaystyle \rho _{\text{obj}}={\frac {m}{V}}={\frac {mg\rho _{\text{fluid}}}{mg-F_{T}}}}
Thus, the density of the immersed object relative to the density of the fluid can easily be calculated without measuring any volumes. Writing the object's weight as F_g and the measured tension as the apparent weight F_app, the ratio of densities is:
{\displaystyle {\frac {\rho _{\text{obj}}}{\rho _{\text{fluid}}}}={\frac {F_{g}}{F_{g}-F_{\text{app}}}}\,}
This formula is also used for example in describing the measuring principle of a dasymeter.
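As a hedged illustration (the numerical values below are assumed, not from the article), the hydrostatic-weighing relation derived above can be checked numerically:

```python
# Hydrostatic weighing: recover an object's density from its mass and the
# line tension F_T measured while it hangs fully submerged, using
# rho_obj = m*g*rho_fluid / (m*g - F_T) -- no volume measurement needed.

def density_from_weighing(mass: float, tension: float,
                          rho_fluid: float, g: float = 9.81) -> float:
    weight = mass * g                              # F_g = m*g, the dry weight
    return weight * rho_fluid / (weight - tension)

# A 1.0 kg object reads 7.3575 N of tension in water (1000 kg/m^3):
# apparent weight loss of 2.4525 N implies a density of 4000 kg/m^3.
print(round(density_from_weighing(1.0, 7.3575, 1000.0), 6))  # 4000.0
```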
== Forces and equilibrium ==
The equation to calculate the pressure inside a fluid in equilibrium is:
{\displaystyle \mathbf {f} +\operatorname {div} \,\sigma =0}
where f is the force density exerted by some outer field on the fluid, and σ is the Cauchy stress tensor. In this case the stress tensor is proportional to the identity tensor:
{\displaystyle \sigma _{ij}=-p\delta _{ij}.\,}
Here δij is the Kronecker delta. Using this the above equation becomes:
{\displaystyle \mathbf {f} =\nabla p.\,}
Assuming the outer force field is conservative, that is it can be written as the negative gradient of some scalar valued function:
{\displaystyle \mathbf {f} =-\nabla \Phi .\,}
Then:
{\displaystyle \nabla (p+\Phi )=0\Longrightarrow p+\Phi ={\text{constant}}.\,}
Therefore, the shape of the open surface of a fluid equals the equipotential plane of the applied outer conservative force field. Let the z-axis point downward. In this case the field is gravity, so Φ = −ρfgz where g is the gravitational acceleration, ρf is the mass density of the fluid. Taking the pressure as zero at the surface, where z is zero, the constant will be zero, so the pressure inside the fluid, when it is subject to gravity, is
{\displaystyle p=\rho _{f}gz.\,}
So pressure increases with depth below the surface of a liquid, as z denotes the distance from the surface of the liquid into it. Any object with a non-zero vertical depth will have different pressures on its top and bottom, with the pressure on the bottom being greater. This difference in pressure causes the upward buoyancy force.
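The linear pressure–depth relation just derived, and the resulting top–bottom pressure difference, can be sketched numerically (a minimal illustration with assumed values):

```python
# p = rho_f * g * z: gauge pressure grows linearly with depth z, so the
# bottom face of a submerged block feels more pressure than its top face;
# that imbalance is the source of the buoyant force.

RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def gauge_pressure(depth: float, rho_fluid: float = RHO_WATER,
                   g: float = G) -> float:
    """Pressure (Pa) from the fluid column alone; zero at the surface."""
    return rho_fluid * g * depth

# Pressure difference across a 0.2 m tall block whose top sits 1.0 m deep:
dp = gauge_pressure(1.2) - gauge_pressure(1.0)
print(round(dp, 6))  # 1962.0
```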
The buoyancy force exerted on a body can now be calculated easily, since the internal pressure of the fluid is known. The force exerted on the body can be calculated by integrating the stress tensor over the surface of the body which is in contact with the fluid:
{\displaystyle \mathbf {B} =\oint \sigma \,d\mathbf {A} .}
The surface integral can be transformed into a volume integral with the help of the Gauss theorem:
{\displaystyle \mathbf {B} =\int \operatorname {div} \sigma \,dV=-\int \mathbf {f} \,dV=-\rho _{f}\mathbf {g} \int \,dV=-\rho _{f}\mathbf {g} V}
where V is the measure of the volume in contact with the fluid, that is the volume of the submerged part of the body, since the fluid does not exert force on the part of the body which is outside of it.
The magnitude of the buoyancy force may be appreciated from the following argument. Consider any object of arbitrary shape and volume V surrounded by a liquid. The force the liquid exerts on an object within the liquid is equal to the weight of the liquid with a volume equal to that of the object. This force acts in the direction opposite to the gravitational force, and has magnitude:
{\displaystyle B=\rho _{f}V_{\text{disp}}\,g,\,}
where ρf is the density of the fluid, Vdisp is the volume of the displaced body of liquid, and g is the gravitational acceleration at the location in question.
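As a quick numeric illustration of B = ρfVdispg (the values below are illustrative, not from the text):

```python
# Buoyant force on a fully submerged 0.05 m^3 object in fresh water,
# B = rho_f * V_disp * g. All values are illustrative.
RHO_F = 1000.0   # kg/m^3, fresh water
G = 9.81         # m/s^2
V_DISP = 0.05    # m^3, displaced volume

B = RHO_F * V_DISP * G
print(B)  # approximately 490.5 N, the weight of 50 kg of displaced water
```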
If this volume of liquid is replaced by a solid body of exactly the same shape, the force the liquid exerts on it must be exactly the same as above. In other words, the "buoyancy force" on a submerged body is directed in the opposite direction to gravity and is equal in magnitude to
{\displaystyle B=\rho _{f}Vg.\,}
Though the above derivation of Archimedes' principle is correct, a recent paper by the Brazilian physicist Fabio M. S. Lima presents a more general approach for evaluating the buoyant force exerted by any fluid (even non-homogeneous) on a body of arbitrary shape. Notably, this method predicts that the buoyant force exerted on a rectangular block touching the bottom of a container points downward, and this downward buoyant force has been confirmed experimentally.
For this to be a situation of fluid statics, such that Archimedes' principle is applicable, the net force on the object, which is the sum of the buoyancy force and the object's weight, must be zero:
{\displaystyle F_{\text{net}}=0=mg-\rho _{f}V_{\text{disp}}g\,}
If the buoyancy of an (unrestrained and unpowered) object exceeds its weight, it tends to rise. An object whose weight exceeds its buoyancy tends to sink. Calculation of the upwards force on a submerged object during its accelerating period cannot be done by Archimedes' principle alone; it is necessary to consider the dynamics of an object involving buoyancy. Once it fully sinks to the floor of the fluid or rises to the surface and settles, Archimedes' principle can be applied alone. For a floating object, only the submerged volume displaces water. For a sunken object, the entire volume displaces water, and there will be an additional force of reaction from the solid floor.
In order for Archimedes' principle to be used alone, the object in question must be in equilibrium (the sum of the forces on the object must be zero), therefore:
{\displaystyle mg=\rho _{f}V_{\text{disp}}g,\,}
and therefore
{\displaystyle m=\rho _{f}V_{\text{disp}}.\,}
showing that the depth to which a floating object will sink, and the volume of fluid it will displace, are independent of the gravitational field and hence of geographic location.
(Note: If the fluid in question is seawater, it will not have the same density (ρ) at every location, since the density depends on temperature and salinity. For this reason, a ship may display a Plimsoll line.)
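The relation m = ρfVdisp implies that the submerged fraction of a floating body's volume equals the ratio of its density to the fluid's, unaffected by the value of g. A brief sketch (densities are illustrative):

```python
# Submerged volume fraction of a floating body:
# m = rho_f * V_disp  =>  V_disp / V = rho_object / rho_fluid.
def submerged_fraction(rho_object, rho_fluid):
    """Fraction of a floating body's volume below the surface."""
    if rho_object >= rho_fluid:
        raise ValueError("object does not float")
    return rho_object / rho_fluid

# Ice (about 917 kg/m^3) floating in fresh water (1000 kg/m^3):
print(submerged_fraction(917.0, 1000.0))  # about 92% submerged
```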
It can be the case that forces other than just buoyancy and gravity come into play. This is the case if the object is restrained or if the object sinks to the solid floor. An object which tends to float requires a tension restraint force T in order to remain fully submerged. An object which tends to sink will eventually have a normal force of constraint N exerted upon it by the solid floor. The constraint force can be tension in a spring scale measuring its weight in the fluid, and is how apparent weight is defined.
If the object would otherwise float, the tension to restrain it fully submerged is:
{\displaystyle T=\rho _{f}Vg-mg.\,}
When a sinking object settles on the solid floor, it experiences a normal force of:
{\displaystyle N=mg-\rho _{f}Vg.\,}
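The two restraint forces above can be sketched numerically; the masses and volumes below are illustrative:

```python
# Restraint forces on a fully submerged body in water (illustrative values).
RHO_F = 1000.0  # kg/m^3, water
G = 9.81        # m/s^2

def tension_to_submerge(mass, volume, rho_f=RHO_F, g=G):
    """T = rho_f*V*g - m*g for a body that would otherwise float (T > 0)."""
    return rho_f * volume * g - mass * g

def normal_force_on_floor(mass, volume, rho_f=RHO_F, g=G):
    """N = m*g - rho_f*V*g for a body resting on the solid floor (N > 0)."""
    return mass * g - rho_f * volume * g

# A 5 kg, 0.01 m^3 buoy held underwater needs about 49 N of tension:
print(tension_to_submerge(5.0, 0.01))
# A 30 kg, 0.01 m^3 anchor presses on the seabed with about 196 N:
print(normal_force_on_floor(30.0, 0.01))
```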
The buoyancy force on an object can also be found from its apparent weights: weigh the object in air and again while fully immersed in the fluid (both in newtons). Then:
Buoyancy force = weight of object in air − weight of object immersed in fluid
The result is likewise measured in newtons.
Air's density is very small compared to most solids and liquids. For this reason, the weight of an object in air is approximately the same as its true weight in a vacuum. The buoyancy of air is neglected for most objects during a measurement in air because the error is usually insignificant (typically less than 0.1% except for objects of very low average density such as a balloon or light foam).
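Combining the apparent-weight formula with B = ρfVg gives a simple way to estimate an object's density from two weighings. A sketch, with illustrative sample weights (and neglecting the buoyancy of air, as noted above):

```python
# Estimate density from weight in air and apparent weight in water:
# buoyancy = W_air - W_water = rho_fluid * V * g, and m*g = W_air,
# so rho_object = m/V = W_air / (W_air - W_water) * rho_fluid.
RHO_WATER = 1000.0  # kg/m^3

def density_from_weights(w_air_n, w_water_n, rho_fluid=RHO_WATER):
    """Object density in kg/m^3 from weights in newtons."""
    buoyancy = w_air_n - w_water_n
    return w_air_n / buoyancy * rho_fluid

# An object weighing 19.3 N in air and 17.3 N immersed in water:
print(density_from_weights(19.3, 17.3))  # about 9650 kg/m^3
```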
=== Simplified model ===
A simplified explanation for the integration of the pressure over the contact area may be stated as follows:
Consider a cube immersed in a fluid with the upper surface horizontal.
The sides are identical in area, and have the same depth distribution, therefore they also have the same pressure distribution, and consequently the same total force resulting from hydrostatic pressure, exerted perpendicular to the plane of the surface of each side.
There are two pairs of opposing sides, therefore the resultant horizontal forces balance in both orthogonal directions, and the resultant force is zero.
The upward force on the cube is the pressure on the bottom surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal bottom surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the bottom surface.
Similarly, the downward force on the cube is the pressure on the top surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal top surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the top surface.
As this is a cube, the top and bottom surfaces are identical in shape and area, and the pressure difference between the top and bottom of the cube is directly proportional to the depth difference, and the resultant force difference is exactly equal to the weight of the fluid that would occupy the volume of the cube in its absence.
This means that the resultant upward force on the cube is equal to the weight of the fluid that would fit into the volume of the cube, and the downward force on the cube is its weight, in the absence of external forces.
This analogy is valid for variations in the size of the cube.
If two cubes are placed alongside each other with a face of each in contact, the pressures and resultant forces on the sides or parts thereof in contact are balanced and may be disregarded, as the contact surfaces are equal in shape, size and pressure distribution, therefore the buoyancy of two cubes in contact is the sum of the buoyancies of each cube. This analogy can be extended to an arbitrary number of cubes.
An object of any shape can be approximated as a group of cubes in contact with each other, and as the size of the cube is decreased, the precision of the approximation increases. The limiting case for infinitely small cubes is the exact equivalence.
Angled surfaces do not nullify the analogy as the resultant force can be split into orthogonal components and each dealt with in the same way.
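The cube argument can be checked numerically: the pressure difference between the top and bottom faces, multiplied by the face area, reproduces ρfgV exactly. The dimensions below are illustrative:

```python
# Verify B = rho_f * g * V for a cube by differencing the pressure forces
# on its top and bottom faces (the side faces cancel by symmetry).
RHO = 1000.0     # kg/m^3, water
G = 9.81         # m/s^2
side = 0.2       # m, cube edge length (illustrative)
top_depth = 1.0  # m, depth of the cube's upper face

area = side ** 2
p_top = RHO * G * top_depth               # pressure on the top face
p_bottom = RHO * G * (top_depth + side)   # pressure on the bottom face
net_up = (p_bottom - p_top) * area        # resultant of the pressure field

archimedes = RHO * G * side ** 3          # rho_f * g * V
assert abs(net_up - archimedes) < 1e-9    # the two agree
print(net_up)  # about 78.5 N, independent of top_depth
```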
=== Static stability ===
A floating object is stable if it tends to restore itself to an equilibrium position after a small displacement. For example, floating objects will generally have vertical stability, as if the object is pushed down slightly, this will create a greater buoyancy force, which, unbalanced by the weight force, will push the object back up.
Rotational stability is of great importance to floating vessels. Given a small angular displacement, the vessel may return to its original position (stable), move away from its original position (unstable), or remain where it is (neutral).
Rotational stability depends on the relative lines of action of forces on an object. The upward buoyancy force on an object acts through the center of buoyancy, being the centroid of the displaced volume of fluid. The weight force on the object acts through its center of gravity. A buoyant object will be stable if the center of gravity is beneath the center of buoyancy because any angular displacement will then produce a 'righting moment'.
The stability of a buoyant object at the surface is more complex, and it may remain stable even if the center of gravity is above the center of buoyancy, provided that when disturbed from the equilibrium position, the center of buoyancy moves further to the same side that the center of gravity moves, thus providing a positive righting moment. If this occurs, the floating object is said to have a positive metacentric height. This situation is typically valid for a range of heel angles, beyond which the center of buoyancy does not move enough to provide a positive righting moment, and the object becomes unstable. It is possible to shift from positive to negative or vice versa more than once during a heeling disturbance, and many shapes are stable in more than one position.
== Fluids and objects ==
As a submarine expels water from its buoyancy tanks, it rises because its volume is constant (the volume of water it displaces if it is fully submerged) while its mass is decreased.
=== Compressible objects ===
As a floating object rises or falls, the forces external to it change and, as all objects are compressible to some extent or another, so does the object's volume. Buoyancy depends on volume and so an object's buoyancy reduces if it is compressed and increases if it expands.
If an object at equilibrium has a compressibility less than that of the surrounding fluid, the object's equilibrium is stable and it remains at rest. If, however, its compressibility is greater, its equilibrium is then unstable, and it rises and expands on the slightest upward perturbation, or falls and compresses on the slightest downward perturbation.
==== Submarines ====
Submarines rise and dive by filling large ballast tanks with seawater. To dive, the tanks are opened to allow air to exhaust out the top of the tanks, while the water flows in from the bottom. Once the weight has been balanced so the overall density of the submarine is equal to the water around it, it has neutral buoyancy and will remain at that depth. Most military submarines operate with a slightly negative buoyancy and maintain depth by using the "lift" of the stabilizers with forward motion.
==== Balloons ====
The height to which a balloon rises tends to be stable. As a balloon rises, the reduced atmospheric pressure lets it expand, but the balloon does not expand as much as the surrounding air thins, so its average density decreases more slowly than that of the surrounding air and the weight of the air it displaces falls. A rising balloon stops rising when it and the displaced air are equal in weight. Similarly, a sinking balloon tends to stop sinking.
==== Divers ====
Underwater divers are a common example of the problem of unstable buoyancy due to compressibility. The diver typically wears an exposure suit which relies on gas-filled spaces for insulation, and may also wear a buoyancy compensator, which is a variable volume buoyancy bag which is inflated to increase buoyancy and deflated to decrease buoyancy. The desired condition is usually neutral buoyancy when the diver is swimming in mid-water, and this condition is unstable, so the diver is constantly making fine adjustments by control of lung volume, and has to adjust the contents of the buoyancy compensator if the depth varies.
== Density ==
If the weight of an object is less than the weight of the displaced fluid when fully submerged, then the object has an average density that is less than the fluid and when fully submerged will experience a buoyancy force greater than its own weight. If the fluid has a surface, such as water in a lake or the sea, the object will float and settle at a level where it displaces the same weight of fluid as the weight of the object. If the object is immersed in the fluid, such as a submerged submarine or air in a balloon, it will tend to rise.
If the object has exactly the same density as the fluid, then its buoyancy equals its weight. It will remain submerged in the fluid, but it will neither sink nor float, although a disturbance in either direction will cause it to drift away from its position.
An object with a higher average density than the fluid will never experience more buoyancy than weight and it will sink.
A ship will float even though it may be made of steel (which is much denser than water), because it encloses a volume of air (which is much less dense than water), and the resulting shape has an average density less than that of the water.
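This can be illustrated with a crude numeric sketch of a hollow steel box; the dimensions and densities below are illustrative:

```python
# Why a steel "ship" floats: the average density of a thin steel shell
# enclosing air is far below that of water. Illustrative numbers only.
RHO_STEEL = 7850.0  # kg/m^3
RHO_WATER = 1000.0  # kg/m^3

# A 10 m x 10 m x 10 m box with 0.02 m thick steel walls:
outer_volume = 10.0 ** 3
steel_volume = outer_volume - (10.0 - 2 * 0.02) ** 3  # shell material only
mass = RHO_STEEL * steel_volume                        # air mass neglected
avg_density = mass / outer_volume

print(avg_density)  # well under 1000 kg/m^3, so the box floats
assert avg_density < RHO_WATER
```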
== See also ==
== References ==
== External links ==
Falling in Water
W. H. Besant (1889) Elementary Hydrostatics from Google Books.
NASA's definition of buoyancy |
Bus | A bus (contracted from omnibus, with variants multibus, motorbus, autobus, etc.) is a motor vehicle that carries significantly more passengers than an average car or van, but fewer than the average rail transport. It is most commonly used in public transport, but is also in use for charter purposes, or through private ownership. Although the average bus carries between 30 and 100 passengers, some buses have a capacity of up to 300 passengers. The most common type is the single-deck rigid bus, with double-decker and articulated buses carrying larger loads, and midibuses and minibuses carrying smaller loads. Coaches are used for longer-distance services. Many types of buses, such as city transit buses and inter-city coaches, charge a fare. Other types, such as elementary or secondary school buses or shuttle buses within a post-secondary education campus, are free. In many jurisdictions, bus drivers require a special large vehicle licence above and beyond a regular driving license.
Buses may be used for scheduled bus transport, scheduled coach transport, school transport, private hire, or tourism; promotional buses may be used for political campaigns and others are privately operated for a wide range of purposes, including rock and pop band tour vehicles.
Horse-drawn buses were used from the 1820s, followed by steam buses in the 1830s, and electric trolleybuses in 1882. The first internal combustion engine buses, or motor buses, were used in 1895. Recently, interest has been growing in hybrid electric buses, fuel cell buses, and electric buses, as well as buses powered by compressed natural gas or biodiesel. As of the 2010s, bus manufacturing is increasingly globalised, with the same designs appearing around the world.
== Name ==
The word bus is a shortened form of the Latin adjectival form omnibus ("for all"), the dative plural of omnis/omne ("all"). The theoretical full name is in French voiture omnibus ("vehicle for all"). The name originates from a mass-transport service started in 1823 by a French corn-mill owner named Stanislas Baudry in Richebourg, a suburb of Nantes. A by-product of his mill was hot water, and thus next to it he established a spa business. In order to encourage customers he started a horse-drawn transport service from the city centre of Nantes to his establishment. The first vehicles stopped in front of the shop of a hatter named Omnés, which displayed a large sign inscribed "Omnes Omnibus", a pun on his Latin-sounding surname, omnes being the masculine and feminine nominative, vocative and accusative form of the Latin adjective omnis/-e ("all"), combined with omnibus, the dative plural form meaning "for all", thus giving his shop the name "Omnés for all", or "everything for everyone".
His transport scheme was a huge success, although not as he had intended as most of his passengers did not visit his spa. He turned the transport service into his principal lucrative business venture and closed the mill and spa. Nantes citizens soon gave the nickname "omnibus" to the vehicle. Having invented the successful concept Baudry moved to Paris and launched the first omnibus service there in April 1828. A similar service was introduced in Manchester in 1824 and in London in 1829.
== History ==
=== Steam buses ===
Regular intercity bus services by steam-powered buses were pioneered in England in the 1830s by Walter Hancock and by associates of Sir Goldsworthy Gurney, among others, running reliable services over road conditions which were too hazardous for horse-drawn transportation.
The first mechanically propelled omnibus appeared on the streets of London on 22 April 1833. Steam carriages were much less likely to overturn, they travelled faster than horse-drawn carriages, they were much cheaper to run, and caused much less damage to the road surface due to their wide tyres.
However, the heavy road tolls imposed by the turnpike trusts discouraged steam road vehicles and left the way clear for the horse bus companies, and from 1861 onwards, harsh legislation virtually eliminated mechanically propelled vehicles from the roads of Great Britain for 30 years, the Locomotive Act 1861 imposing restrictive speed limits on "road locomotives" of 5 mph (8.0 km/h) in towns and cities, and 10 mph (16 km/h) in the country.
=== Trolleybuses ===
In parallel to the development of the bus was the invention of the electric trolleybus, typically fed through trolley poles by overhead wires. The Siemens brothers, William in England and Ernst Werner in Germany, collaborated on the development of the trolleybus concept. Sir William first proposed the idea in an article to the Journal of the Society of Arts in 1881 as an "...arrangement by which an ordinary omnibus...would have a suspender thrown at intervals from one side of the street to the other, and two wires hanging from these suspenders; allowing contact rollers to run on these two wires, the current could be conveyed to the tram-car, and back again to the dynamo machine at the station, without the necessity of running upon rails at all."
The first such vehicle, the Electromote, was made by his brother Ernst Werner von Siemens and presented to the public in 1882 in Halensee, Germany. Although this experimental vehicle fulfilled all the technical criteria of a typical trolleybus, it was dismantled in the same year after the demonstration.
Max Schiemann opened a passenger-carrying trolleybus in 1901 near Dresden, in Germany. Although this system operated only until 1904, Schiemann had developed what is now the standard trolleybus current collection system. In the early days, a few other methods of current collection were used. Leeds and Bradford became the first cities to put trolleybuses into service in Great Britain on 20 June 1911.
=== Motor buses ===
In Siegerland, Germany, two passenger bus lines ran briefly, but unprofitably, in 1895 using a six-passenger motor carriage developed from the 1893 Benz Viktoria. Another commercial bus line using the same model Benz omnibuses ran for a short time in 1898 in the rural area around Llandudno, Wales.
Germany's Daimler Motors Corporation also produced one of the earliest motor-bus models in 1898, selling a double-decker bus to the Motor Traction Company which was first used on the streets of London on 23 April 1898. The vehicle had a maximum speed of 18 km/h (11.2 mph) and accommodated up to 20 passengers, in an enclosed area below and on an open-air platform above. With the success and popularity of this bus, DMG expanded production, selling more buses to companies in London and, in 1899, to Stockholm and Speyer. Daimler Motors Corporation also entered into a partnership with the British company Milnes and developed a new double-decker in 1902 that became the market standard.
The first mass-produced bus model was the B-type double-decker bus, designed by Frank Searle and operated by the London General Omnibus Company—it entered service in 1910, and almost 3,000 had been built by the end of the decade. Hundreds of them saw military service on the Western Front during the First World War.
The Yellow Coach Manufacturing Company, which rapidly became a major manufacturer of buses in the US, was founded in Chicago in 1923 by John D. Hertz. General Motors purchased a majority stake in 1925 and changed its name to the Yellow Truck and Coach Manufacturing Company. GM purchased the balance of the shares in 1943 to form the GM Truck and Coach Division.
Models expanded in the 20th century, leading to the widespread introduction of the contemporary recognizable form of full-sized buses from the 1950s. The AEC Routemaster, developed in the 1950s, was a pioneering design and remains an icon of London to this day. The innovative design used lightweight aluminium and techniques developed in aircraft production during World War II. As well as a novel weight-saving integral design, it also introduced for the first time on a bus independent front suspension, power steering, a fully automatic gearbox, and power-hydraulic braking.
==== Gallery ====
== Types ==
Formats include the single-decker bus, double-decker bus (both usually with a rigid chassis) and articulated bus (or 'bendy-bus'), the prevalence of which varies from country to country. High-capacity bi-articulated buses are also manufactured, as are passenger-carrying trailers, either towed behind a rigid bus (a bus trailer) or hauled by a truck (a trailer bus). Smaller midibuses have a lower capacity, and open-top buses are typically used for leisure purposes. In many new fleets, particularly in local transit systems, a shift to low-floor buses is occurring, primarily for easier accessibility. Coaches are designed for longer-distance travel and are typically fitted with individual high-backed reclining seats, seat belts, toilets, and audio-visual entertainment systems, and can operate at higher speeds with more capacity for luggage. Coaches may be single- or double-deckers, articulated, and often include a separate luggage compartment under the passenger floor. Guided buses are fitted with technology to allow them to run in designated guideways, allowing controlled alignment at bus stops and less space taken up by guided lanes than conventional roads or bus lanes.
Bus manufacturing may be by a single company (an integral manufacturer), or by one manufacturer's building a bus body over a chassis produced by another manufacturer.
== Design ==
=== Accessibility ===
During most of the 20th century, transit buses were almost exclusively high-floor vehicles, and they used wheelchair lifts if they provided accessibility at all. (In the U.S., only in 1993 did accessibility become a requirement in all new buses, under the federal Americans with Disabilities Act of 1990.) However, they are now increasingly of low-floor design and optionally also 'kneel' air suspension and have ramps to provide access for wheelchair users and people with baby carriages, sometimes as electrically or hydraulically extended under-floor constructs for level access. Prior to more general use of such technology, these wheelchair users could only use specialist para-transit mobility buses.
Accessible vehicles also have wider entrances and interior gangways and space for wheelchairs. Interior fittings and destination displays may also be designed to be usable by the visually impaired. Coaches generally still use wheelchair lifts instead of low-floor designs. In some countries, vehicles are required to have these features by disability discrimination laws.
=== Configuration ===
Buses were initially configured with an engine in the front and an entrance at the rear. With the transition to one-person operation, many manufacturers moved to mid- or rear-engined designs, with a single door at the front or multiple doors. The move to the low-floor design has all but eliminated the mid-engined design, although some coaches still have mid-mounted engines. Front-engined buses still persist for niche markets such as American school buses, some minibuses, and buses in less developed countries, which may be derived from truck chassis, rather than purpose-built bus designs. Most buses have two axles, while articulated buses have three.
=== Guidance ===
Guided buses are fitted with technology to allow them to run in designated guideways, allowing the controlled alignment at bus stops and less space taken up by guided lanes than conventional roads or bus lanes. Guidance can be mechanical, optical, or electromagnetic. Extensions of the guided technology include the Guided Light Transit and Translohr systems, although these are more often termed 'rubber-tyred trams' as they have limited or no mobility away from their guideways.
=== Liveries ===
Transit buses are normally painted to identify the operator or a route, function, or to demarcate low-cost or premium service buses. Liveries may be painted onto the vehicle, applied using adhesive vinyl technologies, or using decals. Vehicles often also carry bus advertising on part or all of their visible surfaces, acting as mobile billboards. Campaign buses may be decorated with key campaign messages; these can be to promote an event or initiative.
=== Propulsion ===
The most common power source since the 1920s has been the diesel engine. Early buses, known as trolleybuses, were powered by electricity supplied from overhead lines. Nowadays, electric buses often carry their own battery, which is sometimes recharged on stops/stations to keep the size of the battery small/lightweight. Currently, interest exists in hybrid electric buses, fuel cell buses, electric buses, and ones powered by compressed natural gas or biodiesel. Gyrobuses, which are powered by the momentum stored by a flywheel, were tried in the 1940s.
=== Dimensions ===
United Kingdom and European Union:
Maximum Length: Single rear axle 13.5 meters (44 ft 3+1⁄2 in). Twin rear axle 15 meters (49 ft 2+1⁄2 in).
Maximum Width: 2.55 meters (8 ft 4+3⁄8 in)
United States, Canada and Mexico:
Maximum Length: None
Maximum Width: 2.6 meters (8 ft 6+3⁄8 in)
== Manufacture ==
Early bus manufacturing grew out of carriage coach building, and later out of automobile or truck manufacturers. Early buses were merely a bus body fitted to a truck chassis. This body+chassis approach has continued with modern specialist manufacturers, although there also exist integral designs such as the Leyland National where the two are practically inseparable. Specialist builders also exist and concentrate on building buses for special uses or modifying standard buses into specialised products.
Integral designs have the advantages that they have been well-tested for strength and stability, and also are off-the-shelf. However, two incentives cause use of the chassis+body model. First, it allows the buyer and manufacturer both to shop for the best deal for their needs, rather than having to settle on one fixed design—the buyer can choose the body and the chassis separately. Second, over the lifetime of a vehicle (in constant service and heavy traffic), it will likely get minor damage now and again, and being able easily to replace a body panel or window etc. can vastly increase its service life and save the cost and inconvenience of removing it from service.
As with the rest of the automotive industry, into the 20th century, bus manufacturing increasingly became globalized, with manufacturers producing buses far from their intended market to exploit labour and material cost advantages. A typical city bus costs almost US$450,000.
== Uses ==
=== Public transport ===
Transit buses, used on public transport bus services, have utilitarian fittings designed for efficient movement of large numbers of people, and often have multiple doors. Coaches are used for longer-distance routes. High-capacity bus rapid transit services may use the bi-articulated bus or tram-style buses such as the Wright StreetCar and the Irisbus Civis.
Buses and coach services often operate to a predetermined published public transport timetable defining the route and the timing, but smaller vehicles may be used on more flexible demand responsive transport services.
=== Tourism ===
Buses play a major part in the tourism industry. Tour buses around the world allow tourists to view local attractions or scenery. These are often open-top buses, but can also be regular buses or coaches.
In local sightseeing, City Sightseeing is the largest operator of local tour buses, operating on a franchised basis all over the world. Specialist tour buses are also often owned and operated by safari parks and other theme parks or resorts. Longer-distance tours are also carried out by bus, either on a turn up and go basis or through a tour operator, and usually allow disembarkation from the bus to allow touring of sites of interest on foot. These may be day trips or longer excursions incorporating hotel stays. Tour buses often carry a tour guide, although the driver or a recorded audio commentary may also perform this function. The tour operator may be a subsidiary of a company that operates buses and coaches for other uses or an independent company that charters buses or coaches. Commuter transport operators may also use their coaches to conduct tours within the target city between the morning and evening commuter transport journey.
Buses and coaches are also a common component of the wider package holiday industry, providing private airport transfers (in addition to general airport buses) and organised tours and day trips for holidaymakers on the package.
Tour buses can also be hired as chartered buses by groups for sightseeing at popular holiday destinations. These private tour buses may offer specific stops, such as all the historical sights, or allow the customers to choose their own itineraries. Tour buses come with professional and informed staff and insurance, and maintain state governed safety standards. Some provide other facilities like entertainment units, luxurious reclining seats, large scenic windows, and even lavatories.
Public long-distance coach networks are also often used as a low-cost method of travel by students or young people travelling the world. Some companies such as Topdeck Travel were set up specifically to use buses to drive the hippie trail or travel to places such as North Africa.
In many tourist or travel destinations, a bus is part of the tourist attraction, such as the North American tourist trolleys, London's AEC Routemaster heritage routes, or the customised buses of Malta, Asia, and the Americas. Another example of tourist stops is the homes of celebrities, such as tours based near Hollywood. There are several such services between 6000 and 7000 Hollywood Boulevard in Los Angeles.
=== Student transport ===
In some countries, particularly the US and Canada, buses used to transport schoolchildren have evolved into a specific design with specified mandatory features. American states have also adopted laws regarding motorist conduct around school buses, including large fines and possibly prison for passing a stopped school bus in the process of loading or offloading children passengers. These school buses may have school bus yellow livery and crossing guards. Other countries may mandate the use of seat belts. As a minimum, many countries require a bus carrying students to display a sign, and may also adopt yellow liveries. Student transport often uses older buses cascaded from service use, retrofitted with more seats or seatbelts. Student transport may be operated by local authorities or private contractors. Schools may also own and operate their own buses for other transport needs, such as class field trips or transport to associated sports, music, or other school events.
=== Private charter ===
Due to the costs involved in owning, operating, and driving buses and coaches, much bus and coach use comes from the private hire of vehicles from charter bus companies, either for a day or two or on a longer contract basis, where the charter company provides the vehicles and qualified drivers.
Charter bus operators may be completely independent businesses, or charter hire may be a subsidiary business of a public transport operator that maintains a separate fleet or uses surplus buses, coaches, and dual-purpose coach-seated buses. Many private taxicab companies also operate larger minibus vehicles to cater for group fares.

Companies, private groups, and social clubs may hire buses or coaches as a cost-effective method of transporting a group to an event or site, such as a group meeting, racing event, or organised recreational activity such as a summer camp. Schools often hire charter bus services on a regular basis to transport children to and from their homes, and education institutes use chartered buses for transport to conventions, exhibitions, and field trips. Entertainment or event companies may also hire temporary shuttle buses for transport at events such as festivals or conferences. Party buses are used by companies in a similar manner to limousine hire, for luxury private transport to social events or as a touring experience. Sleeper buses are used by bands or other organisations that tour between entertainment venues and require mobile rest and recreation facilities. Some couples hire preserved buses for their wedding transport instead of the traditional car, and buses are often hired for parades or processions. Victory parades are often held for triumphant sports teams, who tour their home town or city in an open-top bus. Sports teams may also contract out their transport to a team bus for travel to away games, to a competition, or to a final event; these buses are often specially decorated in a livery matching the team colours.

Private companies often contract out private shuttle bus services for transport of their customers or patrons, such as hotels, amusement parks, university campuses, or private airport transfer services. This shuttle usage can be as transport between locations, or to and from parking lots.
High specification luxury coaches are often chartered by companies for executive or VIP transport. Charter buses may also be used in tourism and for promotion (See Tourism and Promotion sections).
=== Private ownership ===
Many organisations, including the police and not-for-profit, social, or charitable groups with a regular need for group transport, may find it practical or cost-effective to own and operate a bus for their own needs. These are often minibuses for practical, tax, and driver-licensing reasons, although they can also be full-size buses. Cadet or scout groups or other youth organizations may also own buses. Companies such as railroads, construction contractors, and agricultural firms may own buses to transport employees to and from remote job sites. Specific charities may exist to fund and operate bus transport, usually using specially modified mobility buses or otherwise accessible buses (See Accessibility section). Some use their contributions to buy vehicles and provide volunteer drivers.
Airport operators make use of special airside airport buses for crew and passenger transport in the secure airside parts of an airport. Some public authorities, police forces, and military forces make use of armoured buses where there is a special need to provide increased passenger protection. The United States Secret Service acquired two in 2010 for transporting dignitaries needing special protection. Police departments make use of police buses for a variety of reasons, such as prisoner transport, officer transport, temporary detention facilities, and as command and control vehicles. Some fire departments also use a converted bus as a command post while those in cold climates might retain a bus as a heated shelter at fire scenes. Many are drawn from retired school or service buses.
=== Promotion ===
Buses are often used for advertising, political campaigning, public information campaigns, public relations, or promotional purposes. These may take the form of temporary charter hire of service buses, or the temporary or permanent conversion and operation of buses, usually of second-hand buses. Extreme examples include converting the bus with displays and decorations or awnings and fittings. Interiors may be fitted out for exhibition or information purposes with special equipment or audio visual devices.
Bus advertising takes many forms, often as interior and exterior adverts and all-over advertising liveries. The practice often extends into the exclusive private hire and use of a bus to promote a brand or product, appearing at large public events, or touring busy streets. The bus is sometimes staffed by promotions personnel, giving out free gifts. Campaign buses are often specially decorated for a political campaign or other social awareness information campaign, designed to bring a specific message to different areas, or used to transport campaign personnel to local areas/meetings. Exhibition buses are often sent to public events such as fairs and festivals for purposes such as recruitment campaigns, for example by private companies or the armed forces. Complex urban planning proposals may be organised into a mobile exhibition bus for the purposes of public consultation.
=== Goods transport ===
In some sparsely populated areas, it is common to use brucks, buses with a cargo area to transport both passengers and cargo at the same time. They are especially common in the Nordic countries.
== Around the world ==
Historically, the types and features of buses have developed according to local needs. Buses were fitted with technology appropriate to the local climate or passenger needs, such as air conditioning in Asia, or cycle mounts on North American buses. The bus types in use around the world where there was little mass production were often sourced secondhand from other countries, such as the Malta bus, and buses in use in Africa. Other countries such as Cuba required novel solutions to import restrictions, with the creation of the "camellos" (camel bus), a specially manufactured trailer bus.
After the Second World War, manufacturers in Europe and the Far East, such as Mercedes-Benz and Mitsubishi Fuso, expanded into other continents, influencing the use of buses previously served by local types. Use of buses around the world has also been influenced by colonial associations or political alliances between countries. Several of the Commonwealth nations followed the British lead and sourced buses from British manufacturers, leading to a prevalence of double-decker buses. Several Eastern Bloc countries adopted trolleybus systems, and their manufacturers, such as Trolza, exported trolleybuses to other friendly states. In the 1930s, Italy designed the world's only triple-decker bus, which served the busy route between Rome and Tivoli and could carry eighty-eight passengers. It was unique not only in being a triple-decker but also in having a separate smoking compartment on the third level.
The buses to be found in countries around the world often reflect the quality of the local road network, with high-floor resilient truck-based designs prevalent in several less developed countries where buses are subject to tough operating conditions. Population density also has a major impact: dense urbanisation such as in Japan and the Far East has led to the adoption of high-capacity long multi-axle buses, often double-deckers, while South America and China are implementing large numbers of articulated buses for bus rapid transit schemes.
=== Bus expositions ===
Euro Bus Expo is a trade show held biennially at the UK's National Exhibition Centre in Birmingham. As the official show of the Confederation of Passenger Transport, the UK's trade association for the bus, coach, and light rail industry, the three-day event showcases the latest vehicles and product and service innovations across the industry to visitors from Europe and beyond.
Busworld Kortrijk in Kortrijk, Belgium, is the leading bus trade fair in Europe. It is also held biennially.
== Use of retired buses ==
Most public or private buses and coaches, once they have reached the end of their service with one or more operators, are sent to the wrecking yard to be broken up for scrap and spare parts. Buses that are no longer economical to keep running in revenue service are often converted for uses other than revenue-earning transport. Much like old cars and trucks, buses often pass through a dealership where they can be bought privately or at auction.
Bus operators often find it economical to convert retired buses to use as permanent training buses for driver training, rather than taking a regular service bus out of use. Some large operators have also converted retired buses into tow bus vehicles, to act as tow trucks. With the outsourcing of maintenance staff and facilities, the increase in company health and safety regulations, and the increasing curb weights of buses, many operators now contract their towing needs to a professional vehicle recovery company.
Some buses that have reached the end of their service that are still in good condition are sent for export to other countries.
Some retired buses have been converted to static or mobile cafés, often using historic buses as a tourist attraction. There are also catering buses: buses converted into a mobile canteen and break room. These are commonly seen at external filming locations to feed the cast and crew, and at other large events to feed staff. Another use is as an emergency vehicle, such as a high-capacity ambulance bus or a mobile command centre.
Some organisations adapt and operate playbuses or learning buses to provide a playground or learning environments to children who might not have access to proper play areas. An ex-London AEC Routemaster bus has been converted to a mobile theatre and catwalk fashion show.
Some buses meet a destructive end by being entered in banger races or at demolition derbies. A larger number of old retired buses have also been converted into mobile holiday homes and campers.
=== Bus preservation ===
Rather than being scrapped or converted for other uses, sometimes retired buses are saved for preservation. This can be done by individuals, volunteer preservation groups or charitable trusts, museums, or sometimes by the operators themselves as part of a heritage fleet. These buses often need to be restored to their original condition and will have their livery and other details such as internal notices and rollsigns restored to be authentic to a specific time in the bus's history. Some buses that undergo preservation are rescued from a state of great disrepair, but others enter preservation with very little wrong with them. As with other historic vehicles, many preserved buses either in a working or static state form part of the collections of transport museums. Additionally, some buses are preserved so they can appear alongside other period vehicles in television and film. Working buses will often be exhibited at rallies and events, and they are also used as charter buses. While many preserved buses are quite old or even vintage, in some cases relatively new examples of a bus type can enter restoration while in-service examples remain in use by other operators. This often happens when a change in design or operating practice, such as the switch to one-person operation or low-floor technology, renders some buses redundant while still relatively new.
== Modification as railway vehicles ==
== See also ==
== References ==
== Bibliography ==
== External links ==
American Bus Association (Archived 7 February 2011 at the Wayback Machine)
C-130 Hercules

The Lockheed C-130 Hercules is an American four-engine turboprop military transport aircraft designed and built by Lockheed (now Lockheed Martin). Capable of using unprepared runways for takeoffs and landings, the C-130 was originally designed as a troop, medevac, and cargo transport aircraft. The versatile airframe has found uses in other roles, including as a gunship (AC-130), for airborne assault, search and rescue, scientific research support, weather reconnaissance, aerial refueling, maritime patrol, and aerial firefighting. It is now the main tactical airlifter for many military forces worldwide. More than 40 variants of the Hercules, including civilian versions marketed as the Lockheed L-100, operate in more than 60 nations.
The C-130 entered service with the U.S. in 1956, followed by Australia and many other nations. During its years of service, the Hercules has participated in numerous military, civilian and humanitarian aid operations. In 2007, the transport became the fifth aircraft to mark 50 years of continuous service with its original primary customer, which for the C-130 is the United States Air Force (USAF). The C-130 is the longest continuously produced military aircraft, having achieved 70 years of production in 2024. The updated Lockheed Martin C-130J Super Hercules remains in production as of 2024.
== Design and development ==
=== Background and requirements ===
The Korean War showed that World War II-era piston-engine transports—Fairchild C-119 Flying Boxcars, Douglas C-47 Skytrains and Curtiss C-46 Commandos—were no longer adequate. On 2 February 1951, the United States Air Force issued a General Operating Requirement (GOR) for a new transport to Boeing, Douglas, Fairchild, Lockheed, Martin, Chase Aircraft, North American, Northrop, and Airlifts Inc.
The new transport would have a capacity of 92 passengers, 72 combat troops or 64 paratroopers in a cargo compartment that was approximately 41 ft (12 m) long, 9 ft (2.7 m) high, and 10 ft (3.0 m) wide. Unlike transports derived from passenger airliners, it was to be designed specifically as a combat transport with loading from a hinged loading ramp at the rear of the fuselage. A notable advance for large aircraft was the introduction of a turboprop powerplant, the Allison T56 which was developed for the C-130. It gave the aircraft greater range than a turbojet engine as it used less fuel. Turboprop engines also produced much more power for their weight than piston engines. However, the turboprop configuration chosen for the T56, with the propeller connected to the compressor, had the potential to cause structural failure of the aircraft if an engine failed. Safety devices had to be incorporated to reduce the excessive drag from a windmilling propeller.
=== Design phase ===
The Hercules resembles a larger, four-engine version of the Fairchild C-123 Provider with a similar wing and cargo ramp layout. The C-123 had evolved from the Chase XCG-20 Avitruc first flown in 1950. The Boeing C-97 Stratofreighter had rear ramps, which made it possible to drive vehicles onto the airplane (also possible with the forward ramp on a C-124). The ramp on the Hercules was also used to airdrop cargo, which included a low-altitude parachute-extraction system for Sheridan tanks and even dropping large improvised "daisy cutter" bombs. The new Lockheed cargo plane had a range of 1,100 nmi (1,270 mi; 2,040 km) and it could operate from short and unprepared strips.
Fairchild, North American, Martin, and Northrop declined to participate. The remaining five companies tendered a total of ten designs: Lockheed two, Boeing one, Chase three, Douglas three, and Airlifts Inc. one. The contest was a close affair between the lighter of the two Lockheed (preliminary project designation L-206) proposals and a four-turboprop Douglas design.
The Lockheed design team was led by Willis Hawkins, starting with a 130-page proposal for the Lockheed L-206. Hall Hibbard, Lockheed vice president and chief engineer, saw the proposal and directed it to Kelly Johnson, who did not care for the low-speed, unarmed aircraft, and remarked, "If you sign that letter, you will destroy the Lockheed Company." Both Hibbard and Johnson signed the proposal and the company won the contract for the now-designated Model 82 on 2 July 1951.
The first flight of the YC-130 prototype was made on 23 August 1954 from the Lockheed plant in Burbank, California. The aircraft, serial number 53-3397, was the second prototype, but the first of the two to fly. The YC-130 was piloted by Stanley Beltz and Roy Wimmer on its 61-minute flight to Edwards Air Force Base; Jack Real and Dick Stanton served as flight engineers. Kelly Johnson flew chase in a Lockheed P2V Neptune.
After the two prototypes were completed, production began in Marietta, Georgia, where over 2,300 C-130s have been built through 2009.
The initial production model, the C-130A, was powered by Allison T56-A-9 turboprops with three-blade propellers and originally equipped with the blunt nose of the prototypes. Deliveries began in December 1956, continuing until the introduction of the C-130B model in 1959. Some A-models were equipped with skis and re-designated C-130D. As the C-130A became operational with Tactical Air Command (TAC), the C-130's lack of range became apparent and additional fuel capacity was added with wing pylon-mounted tanks outboard of the engines; this added 6,000 pounds (2,700 kg) of fuel capacity for a total capacity of 40,000 pounds (18,000 kg).
=== Improved versions ===
The C-130B model was developed to complement the A-models that had previously been delivered, and incorporated new features, particularly increased fuel capacity in the form of auxiliary tanks built into the center wing section and an AC electrical system. Four-bladed Hamilton Standard propellers replaced the Aero Products' three-blade propellers that distinguished the earlier A-models. The C-130B had ailerons operated by hydraulic pressure that was increased from 2,050 to 3,000 psi (14.1 to 20.7 MPa), as well as uprated engines and four-blade propellers that were standard until the J-model.
The B model was originally intended to have "blown controls", a system that blows high-pressure air over the control surfaces to improve their effectiveness during slow flight. It was tested on an NC-130B prototype aircraft with a pair of T-56 turbines providing high-pressure air through a duct system to the control surfaces and flaps during landing. This greatly reduced landing speed to just 63 knots and cut landing distance in half. The system never entered service because it did not improve takeoff performance by the same margin, making the landing performance pointless if the aircraft could not also take off from where it had landed.
An electronic reconnaissance variant of the C-130B was designated C-130B-II. A total of 13 aircraft were converted. The C-130B-II was distinguished by its false external wing fuel tanks, which were disguised signals intelligence (SIGINT) receiver antennas. These pods were slightly larger than the standard wing tanks found on other C-130Bs. Most aircraft featured a swept blade antenna on the upper fuselage, as well as extra wire antennas between the vertical fin and upper fuselage not found on other C-130s. Radio call numbers on the tail of these aircraft were regularly changed to confuse observers and disguise their true mission.
The extended-range C-130E model entered service in 1962 after it was developed as an interim long-range transport for the Military Air Transport Service. Essentially a B-model, it received the new designation as a result of the installation of 1,360 US gallon (5,100 litre) Sargent Fletcher external fuel tanks under each wing's midsection and more powerful Allison T56-A-7A turboprops. The hydraulic boost pressure to the ailerons was reduced back to 2,050 psi (14.1 MPa) as a consequence of the external tanks' weight in the middle of the wingspan. The E model also featured structural improvements, avionics upgrades, and a higher gross weight. Australia took delivery of 12 C-130E Hercules during 1966–67 to supplement the 12 C-130A models already in service with the RAAF. Sweden and Spain fly the TP-84T version of the C-130E, fitted for aerial refueling.
The KC-130 tankers, originally C-130Fs procured for the US Marine Corps (USMC) in 1958 (under the designation GV-1), are equipped with a removable 3,600 US gallon (14,000 L) stainless steel fuel tank carried inside the cargo compartment. The two wing-mounted hose-and-drogue aerial refueling pods each transfer up to 300 US gallons per minute (1,100 L/min) to two aircraft simultaneously, allowing for rapid cycle times of multiple-receiver formations (a typical tanker formation of four aircraft can be refueled in less than 30 minutes). The US Navy's C-130G has increased structural strength allowing higher gross weight operation.
=== Further developments ===
The C-130H model has updated Allison T56-A-15 turboprops, a redesigned outer wing, updated avionics, and other minor improvements. Later H models had a new, fatigue-life-improved, center wing that was retrofitted to many earlier H-models. For structural reasons, some models are required to land with reduced amounts of fuel when carrying heavy cargo, reducing usable range.
The H model remains in widespread use with the United States Air Force (USAF) and many foreign air forces. Initial deliveries began in 1964 (to the RNZAF), and the type remained in production until 1996. An improved C-130H was introduced in 1974, with Australia purchasing 12 of the type in 1978 to replace the original 12 C-130A models, which had first entered Royal Australian Air Force (RAAF) service in 1958. The U.S. Coast Guard employs the HC-130H for long-range search and rescue, drug interdiction, illegal migrant patrols, homeland security, and logistics.
C-130H models produced from 1992 to 1996 were designated as C-130H3 by the USAF, with the "3" denoting the third variation in design for the H series. Improvements included ring laser gyros for the INUs, GPS receivers, a partial glass cockpit (ADI and HSI instruments), a more capable APN-241 color radar, night vision device compatible instrument lighting, and an integrated radar and missile warning system. The electrical system upgrade included Generator Control Units (GCU) and Bus Switching units (BSU) to provide stable power to the more sensitive upgraded components.
The equivalent model for export to the UK is the C-130K, known by the Royal Air Force (RAF) as the Hercules C.1. The C-130H-30 (Hercules C.3 in RAF service) is a stretched version of the original Hercules, achieved by inserting a 100 in (2.5 m) plug aft of the cockpit and an 80 in (2.0 m) plug at the rear of the fuselage. A single C-130K was purchased by the Met Office for use by its Meteorological Research Flight, where it was classified as the Hercules W.2. This aircraft was heavily modified, with its most prominent feature being the long red and white striped atmospheric probe on the nose and the move of the weather radar into a pod above the forward fuselage. This aircraft, named Snoopy, was withdrawn in 2001 and was then modified by Marshall of Cambridge Aerospace as a flight testbed for the A400M turbine engine, the TP400. The C-130K is used by the RAF Falcons for parachute drops. Three C-130Ks (Hercules C Mk.1P) were upgraded and sold to the Austrian Air Force in 2002.
=== Enhanced models ===
The MC-130E Combat Talon was developed for the USAF during the Vietnam War to support special operations missions in Southeast Asia, and led to both the MC-130H Combat Talon II as well as a family of other special missions aircraft. 37 of the earliest models currently operating with the Air Force Special Operations Command (AFSOC) are scheduled to be replaced by new-production MC-130J versions. The EC-130 Commando Solo is another special missions variant within AFSOC, albeit operated solely by an AFSOC-gained wing in the Pennsylvania Air National Guard; it is a psychological operations/information operations (PSYOP/IO) platform equipped as an aerial radio and television station able to transmit messaging over commercial frequencies. Other versions of the EC-130, most notably the EC-130H Compass Call, are also special variants but are assigned to Air Combat Command (ACC). The AC-130 gunship was first developed during the Vietnam War to provide close air support and other ground-attack duties.
The HC-130 is a family of long-range search and rescue variants used by the USAF and the U.S. Coast Guard. Equipped for the deep deployment of Pararescuemen (PJs), survival equipment, and (in the case of USAF versions) aerial refueling of combat rescue helicopters, HC-130s are usually the on-scene command aircraft for combat SAR missions (USAF only) and non-combat SAR (USAF and USCG). Early USAF versions were also equipped with the Fulton surface-to-air recovery system, designed to pull a person off the ground using a wire strung from a helium balloon. The John Wayne movie The Green Berets features its use. The Fulton system was later removed when aerial refueling of helicopters proved safer and more versatile. The movie The Perfect Storm depicts a real-life SAR mission involving aerial refueling of a New York Air National Guard HH-60G by a New York Air National Guard HC-130P.
The C-130R and C-130T are U.S. Navy and USMC models, both equipped with underwing external fuel tanks. The USN C-130T is similar but has additional avionics improvements. In both models, aircraft are equipped with Allison T56-A-16 engines. The USMC versions are designated KC-130R or KC-130T when equipped with underwing refueling pods and pylons and are fully night vision system compatible.
The RC-130 is a reconnaissance version developed during the Cold War. Sometimes called "ferret" aircraft, these planes were initially retrofitted standard C-130s.
The Lockheed L-100 (L-382) is a civilian variant, equivalent to a C-130E model without military equipment. The L-100 also has two stretched versions.
=== Next generation ===
In the 1970s, Lockheed proposed a C-130 variant with turbofan engines rather than turboprops, but the U.S. Air Force preferred the takeoff performance of the existing aircraft. In the 1980s, the C-130 was intended to be replaced by the Advanced Medium STOL Transport project. The project was canceled and the C-130 has remained in production.
Building on lessons learned, Lockheed Martin modified a commercial variant of the C-130 into a High Technology Test Bed (HTTB). This test aircraft set numerous short takeoff and landing performance records and significantly expanded the database for future derivatives of the C-130. Modifications made to the HTTB included extended-chord ailerons, a long-chord rudder, fast-acting double-slotted trailing edge flaps, a high-camber wing leading edge extension, a larger dorsal fin, the addition of three spoiler panels to each wing upper surface, a long-stroke main and nose landing gear system, and a change in the flight controls from direct mechanical linkages assisted by hydraulic boost to fully powered controls, in which the mechanical linkages from the flight station controls operated only the hydraulic control valves of the appropriate boost unit.
The HTTB first flew on 19 June 1984, with civil registration of N130X. After demonstrating many new technologies, some of which were applied to the C-130J, the HTTB was lost in a fatal accident on 3 February 1993, at Dobbins Air Reserve Base, in Marietta, Georgia. The crash was attributed to disengagement of the rudder fly-by-wire flight control system, resulting in a total loss of rudder control capability while conducting ground minimum control speed tests (Vmcg). The disengagement was a result of the inadequate design of the rudder's integrated actuator package by its manufacturer; the operator's insufficient system safety review failed to consider the consequences of the inadequate design to all operating regimes. A factor that contributed to the accident was the flight crew's lack of engineering flight test training.
In the 1990s, the improved C-130J Super Hercules was developed by Lockheed (later Lockheed Martin). This model is the newest version and the only model in production. Externally similar to the classic Hercules in general appearance, the J model has new turboprop engines, six-bladed propellers, digital avionics, and other new systems.
=== Upgrades and changes ===
In 2000, Boeing was awarded a US$1.4 billion contract to develop an Avionics Modernization Program (AMP) kit for the C-130. The program was beset with delays and cost overruns until a project restructuring in 2007. In September 2009, it was reported that the planned AMP upgrade to the older C-130s would be dropped to provide more funds for the F-35, CV-22, and airborne tanker replacement programs. However, in June 2010, the Department of Defense approved funding for the initial production of the AMP upgrade kits. Under the terms of this agreement, the USAF has cleared Boeing to begin low-rate initial production (LRIP) for the C-130 AMP. A total of 198 aircraft are expected to receive the AMP upgrade. The current cost per aircraft is US$14 million, although Boeing expects that this price will drop to US$7 million for the 69th aircraft.
In the 2000s, Lockheed Martin and the U.S. Air Force began outfitting and retrofitting C-130s with the eight-blade UTC Aerospace Systems NP2000 propellers. An engine enhancement program saving fuel and providing lower temperatures in the T56 engine has been approved, and the US Air Force expects to save $2 billion (~$2.58 billion in 2023) and extend the fleet life.
In 2021, the Air Force Research Laboratory demonstrated the Rapid Dragon system, which transforms the C-130 into a strike platform capable of launching 12 JASSM-ER cruise missiles with 500 kg warheads from a standoff distance of 925 km (575 mi). Anticipated future improvements include support for JDAM-ER, mine laying, and drone dispersal, as well as improved standoff range when the 1,900 km (1,200 mi) JASSM-XR becomes available in 2024.
=== Replacement ===
In October 2010, the U.S. Air Force released a capability request for information (CRFI) for the development of a new airlifter to replace the C-130. The new aircraft was to carry a 190% greater payload and assume the mission of mounted vertical maneuver (MVM). The greater payload and mission would enable it to carry medium-weight armored vehicles and unload them at locations without long runways. Various options were under consideration, including new or upgraded fixed-wing designs, rotorcraft, tiltrotors, or even an airship. The C-130 fleet of around 450 planes would be replaced by only 250 aircraft. The Air Force had attempted to replace the C-130 in the 1970s through the Advanced Medium STOL Transport project, which resulted in the C-17 Globemaster III that instead replaced the C-141 Starlifter.
The Air Force Research Laboratory funded Lockheed Martin and Boeing demonstrators for the Speed Agile concept, which had the goal of making a STOL aircraft that could take off and land at speeds as low as 70 kn (130 km/h; 81 mph) on airfields less than 2,000 ft (610 m) long and cruise at Mach 0.8-plus. Boeing's design used upper-surface blowing from embedded engines on the inboard wing and blown flaps for circulation control on the outboard wing. Lockheed's design also used blown flaps outboard, but inboard used patented reversing ejector nozzles.
Boeing's design completed over 2,000 hours of wind tunnel tests in late 2009. It was a 5 percent-scale model of a narrow body design with a 55,000 lb (25,000 kg) payload. When the AFRL increased the payload requirement to 65,000 lb (29,000 kg), they tested a 5 percent-scale model of a widebody design with a 303,000 lb (137,000 kg) take-off gross weight and an "A400M-size" 158 in (4.0 m) wide cargo box. It would be powered by four IAE V2533 turbofans.
In August 2011, the AFRL released pictures of the Lockheed Speed Agile concept demonstrator. A 23% scale model went through wind tunnel tests to demonstrate its hybrid powered lift, which combined a low drag airframe with simple mechanical assembly to reduce weight and improve aerodynamics. The model had four engines, including two Williams FJ44 turbofans. On 26 March 2013, Boeing was granted a patent for its swept-wing powered lift aircraft.
In January 2014, Air Mobility Command, Air Force Materiel Command and the Air Force Research Lab were in the early stages of defining requirements for the C-X next generation airlifter program to replace both the C-130 and C-17. The aircraft would be produced from the early 2030s to the 2040s.
== Operational history ==
=== Military ===
The first production batch of C-130A aircraft was delivered beginning in 1956 to the 463d Troop Carrier Wing at Ardmore AFB, Oklahoma, and the 314th Troop Carrier Wing at Sewart AFB, Tennessee. Six additional squadrons were assigned to the 322d Air Division in Europe and the 315th Air Division in the Far East. Additional aircraft were modified for electronics intelligence work and assigned to Rhein-Main Air Base, Germany, while modified RC-130As were assigned to the Military Air Transport Service (MATS) photo-mapping division. The C-130A entered service with the U.S. Air Force in December 1956.
In 1958, a U.S. reconnaissance C-130A-II of the 7406th Support Squadron was shot down over Armenia by four Soviet MiG-17s along the Turkish-Armenian border during a routine mission.
Australia became the first non-American operator of the Hercules with 12 examples being delivered from late 1958. The Royal Canadian Air Force became another early user with the delivery of four B-models (Canadian designation CC-130 Mk I) in October / November 1960.
In 1963, a Hercules achieved and still holds the record for the largest and heaviest aircraft to land on an aircraft carrier. During October and November that year, a USMC KC-130F (BuNo 149798), loaned to the U.S. Naval Air Test Center, made 29 touch-and-go landings, 21 unarrested full-stop landings and 21 unassisted take-offs aboard USS Forrestal at a number of different weights. The pilot, Lieutenant (later Rear Admiral) James H. Flatley III, USN, was awarded the Distinguished Flying Cross for his role in this test series. The tests were highly successful, but the aircraft was not deployed this way. Flatley denied that the C-130 was tested for carrier onboard delivery (COD) operations, or for delivering nuclear weapons. He said that the intention was to support the Lockheed U-2, also being tested on carriers. The Hercules used in the test, most recently in service with Marine Aerial Refueler Squadron 352 (VMGR-352) until 2005, is now part of the collection of the National Museum of Naval Aviation at NAS Pensacola, Florida.
In 1964, C-130 crews from the 6315th Operations Group at Naha Air Base, Okinawa commenced forward air control (FAC; "Flare") missions over the Ho Chi Minh Trail in Laos supporting USAF strike aircraft. In April 1965 the mission was expanded to North Vietnam where C-130 crews led formations of Martin B-57 Canberra bombers on night reconnaissance/strike missions against communist supply routes leading to South Vietnam. In early 1966 Project Blind Bat/Lamplighter was established at Ubon Royal Thai Air Force Base, Thailand. After the move to Ubon, the mission became a four-engine FAC mission with the C-130 crew searching for targets and then calling in strike aircraft. Another little-known C-130 mission flown by Naha-based crews was Operation Commando Scarf (or Operation Commando Lava), which involved the delivery of chemicals onto sections of the Ho Chi Minh Trail in Laos that were designed to produce mud and landslides in hopes of making the truck routes impassable.
In November 1964, on the other side of the globe, C-130Es from the 464th Troop Carrier Wing but loaned to 322d Air Division in France, took part in Operation Dragon Rouge, one of the most dramatic missions in history in the former Belgian Congo. After communist Simba rebels took white residents of the city of Stanleyville hostage, the U.S. and Belgium developed a joint rescue mission that used the C-130s to drop, air-land, and air-lift a force of Belgian paratroopers to rescue the hostages. Two missions were flown, one over Stanleyville and another over Paulis during Thanksgiving week. The headline-making mission resulted in the first award of the prestigious MacKay Trophy to C-130 crews.
In the Indo-Pakistani War of 1965, the No. 6 Transport Squadron of the Pakistan Air Force modified its C-130Bs for use as bombers, carrying up to 20,000 pounds (9,100 kg) of bombs on pallets. These improvised bombers were used to hit Indian targets such as bridges, heavy artillery positions, tank formations, and troop concentrations, though they were not particularly successful.
In October 1968, C-130Bs from the 463rd Tactical Airlift Wing dropped a pair of M-121 10,000-pound (4,500 kg) bombs that had been developed for the massive Convair B-36 Peacemaker bomber but had never been used. The U.S. Army and U.S. Air Force resurrected the huge weapons as a means of clearing landing zones for helicopters, and in early 1969 the 463rd commenced Commando Vault missions. Although the stated purpose of Commando Vault was to clear LZs, they were also used on enemy base camps and other targets.
During the late 1960s, the U.S. was eager to get information on Chinese nuclear capabilities. After the failure of the Black Cat Squadron to plant operating sensor pods near the Lop Nur Nuclear Weapons Test Base using a U-2, the CIA developed a plan, named Heavy Tea, to deploy two battery-powered sensor pallets near the base. To deploy the pallets, a Black Bat Squadron crew was trained in the U.S. to fly the C-130 Hercules. The crew of 12, led by Col Sun Pei Zhen, took off from Takhli Royal Thai Air Force Base in an unmarked U.S. Air Force C-130E on 17 May 1969. Flying for six and a half hours at low altitude in the dark, they arrived over the target and the sensor pallets were dropped by parachute near Anxi in Gansu province. After another six and a half hours of low-altitude flight, they arrived back at Takhli. The sensors worked and uploaded data to a U.S. intelligence satellite for six months before their batteries failed. The Chinese conducted two nuclear tests, on 22 September 1969 and 29 September 1969, during the operating life of the sensor pallets. Another mission to the area was planned as Operation Golden Whip, but it was called off in 1970. It is most likely that the aircraft used on this mission was either C-130E serial number 64-0506 or 64-0507 (cn 382-3990 and 382-3991). These two aircraft were delivered to Air America in 1964. After being returned to the U.S. Air Force sometime between 1966 and 1970, they were assigned the serial numbers of C-130s that had been destroyed in accidents. 64-0506 is now flying as 62-1843, a C-130E that crashed in Vietnam on 20 December 1965, and 64-0507 is now flying as 63-7785, a C-130E that had crashed in Vietnam on 17 June 1966.
The A-model continued in service through the Vietnam War, where the aircraft assigned to the four squadrons at Naha AB, Okinawa, and one at Tachikawa Air Base, Japan performed yeoman's service, including operating highly classified special operations missions such as the BLIND BAT FAC/Flare mission and Fact Sheet leaflet mission over Laos and North Vietnam. The A-model was also provided to the Republic of Vietnam Air Force as part of the Vietnamization program at the end of the war, and equipped three squadrons based at Tan Son Nhut Air Base. The last operator in the world is the Honduran Air Force, which is still flying one of five A model Hercules (FAH 558, c/n 3042) as of October 2009. As the Vietnam War wound down, the 463rd Troop Carrier/Tactical Airlift Wing B-models and A-models of the 374th Tactical Airlift Wing were transferred back to the United States where most were assigned to Air Force Reserve and Air National Guard units.
Another prominent role for the B model was with the United States Marine Corps, where Hercules initially designated as GV-1s replaced C-119s. After Air Force C-130Ds proved the type's usefulness in Antarctica, the U.S. Navy purchased several B-models equipped with skis that were designated as LC-130s. C-130B-II electronic reconnaissance aircraft were operated under the SUN VALLEY program name primarily from Yokota Air Base, Japan. All reverted to standard C-130B cargo aircraft after their replacement in the reconnaissance role by other aircraft.
The C-130 was also used in the 1976 Entebbe raid in which Israeli commando forces performed a surprise operation to rescue 103 passengers of an airliner hijacked by Palestinian and German terrorists at Entebbe Airport, Uganda. The rescue force—200 soldiers, jeeps, and a black Mercedes-Benz (intended to resemble Ugandan Dictator Idi Amin's vehicle of state)—was flown over 2,200 nmi (4,074 km; 2,532 mi) almost entirely at an altitude of less than 100 ft (30 m) from Israel to Entebbe by four Israeli Air Force (IAF) Hercules aircraft without mid-air refueling (on the way back, the aircraft refueled in Nairobi, Kenya).
During the Falklands War (Spanish: Guerra de las Malvinas) of 1982, Argentine Air Force C-130s undertook dangerous re-supply night flights as blockade runners to the Argentine garrison on the Falkland Islands. They also performed daylight maritime survey flights. One was shot down by a Royal Navy Sea Harrier using AIM-9 Sidewinders and cannon. The crew of seven were killed. Argentina also operated two KC-130 tankers during the war, and these refueled both the Douglas A-4 Skyhawks and Navy Dassault-Breguet Super Étendards; some C-130s were modified to operate as bombers with bomb-racks under their wings. The British also used RAF C-130s to support their logistical operations.
During the Gulf War of 1991 (Operation Desert Storm), the C-130 Hercules was used operationally by the U.S. Air Force, U.S. Navy, and U.S. Marine Corps, along with the air forces of Australia, New Zealand, Saudi Arabia, South Korea, and the UK. The MC-130 Combat Talon variant also made the first attacks using the largest conventional bombs in the world, the BLU-82 "Daisy Cutter" and GBU-43/B "Massive Ordnance Air Blast" (MOAB) bomb. Daisy Cutters were used primarily to clear landing zones and to eliminate mine fields. The weight and size of the weapons make it impossible or impractical to load them on conventional bombers. The GBU-43/B MOAB is a successor to the BLU-82 and can perform the same function, as well as perform strike functions against hardened targets in a low air threat environment.
Since 1992, two successive C-130 aircraft named Fat Albert have served as the support aircraft for the U.S. Navy Blue Angels flight demonstration team. Fat Albert I was a TC-130G (151891) a former U.S. Navy TACAMO aircraft serving with Fleet Air Reconnaissance Squadron Three (VQ-3) before being transferred to the BLUES, while Fat Albert II is a C-130T (164763). Although Fat Albert supports a Navy squadron, it is operated by the U.S. Marine Corps (USMC) and its crew consists solely of USMC personnel. At some air shows featuring the team, Fat Albert takes part, performing flyovers. Until 2009, it also demonstrated its rocket-assisted takeoff (RATO) capabilities; these ended due to dwindling supplies of rockets.
The AC-130 also holds the record for the longest sustained flight by a C-130. From 22 to 24 October 1997, two AC-130U gunships flew 36 hours nonstop from Hurlburt Field, Florida to Daegu International Airport, South Korea, being refueled seven times by KC-135 tanker aircraft. This record flight beat the previous record longest flight by over 10 hours and the two gunships took on 410,000 lb (190,000 kg) of fuel. The gunship has been used in every major U.S. combat operation since Vietnam, except for Operation El Dorado Canyon, the 1986 attack on Libya.
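For a rough sense of scale, the record-flight figures above imply an average fuel burn per gunship. The sketch below is illustrative arithmetic only; actual fuel burn varies with weight, altitude, and speed:

```python
# Back-of-the-envelope check of the 1997 record flight:
# two AC-130U gunships, 36 hours nonstop, 410,000 lb of fuel taken on in total.
total_fuel_lb = 410_000
aircraft = 2
hours = 36

burn_per_aircraft = total_fuel_lb / aircraft / hours  # lb per hour
print(f"average burn: {burn_per_aircraft:.0f} lb/h per aircraft")  # ≈ 5694 lb/h
```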
During the invasion of Afghanistan in 2001 and the ongoing support of the International Security Assistance Force (Operation Enduring Freedom), the C-130 Hercules has been used operationally by Australia, Belgium, Canada, Denmark, France, Italy, the Netherlands, New Zealand, Norway, Portugal, Romania, South Korea, Spain, the UK, and the United States.
During the 2003 invasion of Iraq (Operation Iraqi Freedom), the C-130 Hercules was used operationally by Australia, the UK, and the United States. After the initial invasion, C-130 operators as part of the Multinational force in Iraq used their C-130s to support their forces in Iraq.
Since 2004, the Pakistan Air Force has employed C-130s in the War in North-West Pakistan. Some variants had forward looking infrared (FLIR Systems Star Safire III EO/IR) sensor balls, to enable close tracking of militants.
In 2017, France and Germany announced plans to establish a joint air transport squadron at Evreux Air Base, France, comprising ten C-130J aircraft, six of which would be operated by Germany. Initial operational capability was expected in 2021, with full operational capability scheduled for 2024.
The Argentine Air Force has five C-130H aircraft that are part of a US-funded security assistance donation. The US has been leasing the aircraft to the Argentine Air Force through the Georgia Air National Guard since June 2023.
=== Deepwater Horizon Oil Spill ===
For almost two decades, the USAF 910th Airlift Wing's 757th Airlift Squadron and the U.S. Coast Guard have participated in oil spill cleanup exercises to ensure the U.S. military has a capable response in the event of a national emergency. The 757th Airlift Squadron operates the DOD's only fixed-wing aerial spray system, which was certified by the EPA to disperse pesticides on DOD property; in 2010 it was used to spread oil dispersants onto the Deepwater Horizon oil spill in the Gulf of Mexico.
During the 5-week mission, the aircrews flew 92 sorties and sprayed approximately 30,000 acres with nearly 149,000 gallons of oil dispersant to break up the oil. The Deepwater Horizon mission was the first time the US used the oil dispersing capability of the 910th Airlift Wing—its only large area, fixed-wing aerial spray program—in an actual spill of national significance. The Air Force Reserve Command announced that the 910th Airlift Wing had been selected as a recipient of the Air Force Outstanding Unit Award for its achievements from 28 April 2010 through 4 June 2010.
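As a rough cross-check of the figures above (92 sorties, roughly 30,000 acres, nearly 149,000 gallons), the implied application rates work out as follows. This is illustrative back-of-the-envelope arithmetic only, not an official specification of the spray system:

```python
# Implied average rates from the Deepwater Horizon spray mission figures.
gallons_dispersant = 149_000
acres_treated = 30_000
sorties = 92

print(f"{gallons_dispersant / acres_treated:.1f} gal per acre")  # ≈ 5.0 gal/acre
print(f"{acres_treated / sorties:.0f} acres per sortie")         # ≈ 326 acres/sortie
```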
=== Hurricane Harvey (2017) ===
C-130s temporarily based at Kelly Field conducted mosquito control aerial spray applications over areas of eastern Texas devastated by Hurricane Harvey. This special mission treated more than 2.3 million acres at the direction of Federal Emergency Management Agency (FEMA) and the Texas Department of State Health Services (DSHS) to assist in recovery efforts by helping contain the significant increase in pest insects caused by large amounts of standing, stagnant water. The 910th Airlift Wing operates the Department of Defense's only aerial spray capability to control pest insect populations, eliminate undesired and invasive vegetation, and disperse oil spills in large bodies of water.
The aerial spray flight is now also able to operate at night using night vision goggles (NVGs), which increases its best-case spray capacity from approximately 60 thousand acres per day to approximately 190 thousand acres per day. Spray missions are normally conducted at dusk and during nighttime hours, when pest insects are most active, the U.S. Air Force Reserve reports.
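Taken at face value, the quoted capacity figures correspond to roughly a threefold increase; a quick check of the ratio:

```python
# Capacity figures quoted for the aerial spray flight (approximate best-case).
day_only_acres = 60_000    # daytime-only operations
with_nvg_acres = 190_000   # with night operations added

print(f"capacity increase: {with_nvg_acres / day_only_acres:.1f}x")  # ≈ 3.2x
```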
=== Aerial firefighting ===
In the early 1970s, Congress authorized the Modular Airborne Firefighting System (MAFFS), a joint operation between the U.S. Forest Service and the Department of Defense. MAFFS is a roll-on/roll-off device that allows C-130s to be temporarily converted into 3,000-gallon airtankers for fighting wildfires when demand exceeds the supply of privately contracted and publicly available airtankers.
In the late 1980s, 22 retired USAF C-130As were removed from storage and transferred to the U.S. Forest Service, which then transferred them to six private companies to be converted into airtankers. One of these C-130s crashed in June 2002 while operating near Walker, California. The crash was attributed to wing separation caused by fatigue stress cracking and contributed to the grounding of the entire large airtanker fleet. After an extensive review, the US Forest Service and the Bureau of Land Management declined to renew the leases on nine C-130As over concerns about the age of the aircraft, which had been in service since the 1950s, and their ability to handle the forces generated by aerial firefighting.
More recently, an updated Retardant Aerial Delivery System known as RADS XL was developed by Coulson Aviation USA. That system consists of a C-130H/Q retrofitted with an in-floor discharge system, combined with a removable 3,500- or 4,000-gallon water tank. The combined system is FAA certified. On 23 January 2020, Coulson's Tanker 134, an EC-130Q registered N134CG, crashed during aerial firefighting operations in New South Wales, Australia, killing all three crew members. The aircraft had taken off out of RAAF Base Richmond and was supporting firefighting operations during Australia's 2019–20 fire season.
== Variants ==
Significant military variants of the C-130 include:
C-130A
Initial production model with four Allison T56-A-11/9 turboprop engines. 219 were ordered and deliveries to the USAF began in December 1956.
C-130B
Variant with four Allison T56-A-7 engines. 134 were ordered and entered USAF service in May 1959.
C-130E
Same engines as the C-130B, but with two 1,290 U.S. gal (4,900 L; 1,070 imp gal) external fuel tanks and an increased maximum takeoff weight. Introduced in August 1962; 389 were ordered.
C-130F/G
Variants procured by the U.S. Navy for Marine Corps refueling missions, and other support/transport operations.
C-130H
Similar to the C-130E but with more powerful Allison T56-A-15 turboprop engines. Introduced in June 1964; 308 were ordered.
C-130K
Designation for RAF Hercules C1/W2/C3 aircraft (C-130Js in RAF service are the Hercules C.4 and Hercules C.5)
C-130T
Improved variants procured by the U.S. Navy for Marine Corps refueling, and other support/transport operations.
C-130A-II Dreamboat
Early version Electronic Intelligence/Signals Intelligence (ELINT/SIGINT) aircraft
C-130J Super Hercules
Tactical airlifter, with new engines, avionics, and updated systems
C-130B BLC
A one-off conversion of C-130B 58-0712, modified with a double Allison YT56 gas generator pod under each outer wing to provide bleed air for all the control surfaces and flaps.
AC-130A/E/H/J/U/W
Gunship variants
C-130D/D-6
Ski-equipped version for snow and ice operations United States Air Force / Air National Guard
CC-130E/H/J Hercules
Designation for Canadian Armed Forces / Royal Canadian Air Force Hercules aircraft. The U.S. Air Force used the CC-130J designation to differentiate the standard C-130J variant from the "stretched" C-130J (company designation C-130J-30). CC-130H(T) is the Canadian tanker variant of the KC-130H.
C-130M
Designation used by the Brazilian Air Force for locally modified C-130H aircraft.
DC-130A/E/H
USAF and USN Drone control
E-130J
Future USN TACAMO aircraft
EC-130
EC-130E/J Commando Solo – USAF / Air National Guard psychological operations version
EC-130E Airborne Battlefield Command and Control Center (ABCCC) – USAF procedural air-to-ground attack control, also provided NRT threat updates
EC-130E Rivet Rider – Airborne psychological warfare aircraft
EC-130H Compass Call – Electronic warfare and electronic attack.
EC-130Q – USN TACAMO aircraft
EC-130V – Airborne early warning and control (AEW&C) variant used by USCG for counter-narcotics missions
GC-130
Permanently grounded instructional airframes
HC-130
HC-130B/E/H – Early model combat search and rescue
HC-130P/N Combat King – USAF aerial refueling tanker and combat search and rescue
HC-130J Combat King II – Next generation combat search and rescue tanker
HC-130H/J – USCG long-range surveillance and search and rescue, USAFR Aerial Spray & Airlift
JC-130
Temporary conversion for flight test operations; used to recover drones and spy satellite film capsules.
KC-130F/R/T/J
United States Marine Corps aerial refueling tanker and tactical airlifter
LC-130F/H/R
USAF / Air National Guard – Ski-equipped version for Arctic and Antarctic support operations; LC-130F and R previously operated by USN
MC-130
MC-130E/H Combat Talon I/II – Special operations infiltration/extraction variant
MC-130W Combat Spear/Dragon Spear – Special operations tanker/gunship
MC-130P Combat Shadow – Special operations tanker – all operational aircraft converted to HC-130P standard
MC-130J Commando II (formerly Combat Shadow II) – Special operations tanker Air Force Special Operations Command
YMC-130H – Modified aircraft under Operation Credible Sport for second Iran hostage crisis rescue attempt
NC-130
Permanent conversion for flight test operations
PC-130/C-130-MP
Maritime patrol
RC-130A/S
Surveillance aircraft for reconnaissance
SC-130J Sea Herc
Proposed maritime patrol version of the C-130J, designed for coastal surveillance and anti-submarine warfare.
TC-130
Aircrew training
VC-130H
VIP transport
WC-130A/B/E/H/J
Weather reconnaissance ("Hurricane Hunter") version for USAF / Air Force Reserve Command's 53d Weather Reconnaissance Squadron in support of the National Weather Service's National Hurricane Center
C-130(EM/BM) Erciyes
Turkey's Erciyes modernization program covers the avionics of the C-130B/E variants. Under the program, the aircraft is equipped with a digital cockpit (four-color multifunction displays (MFD) with moving-map capability), two central display units (CDU), and two multifunction central control computers compatible with international navigational requirements, as well as a multifunction mission computer with high operational capability, a flight management system (FMS), Link-16, a ground mission planning unit compatible with the Air Force Information System, and display and lighting systems compatible with night vision goggles. Other components are also included, such as GPS, an anti-collision system, air radar, advanced military and civilian navigation systems, covert lighting for night military missions, a cockpit voice recorder, communication systems, advanced automated flight systems (military and civilian), systems enabling operation in the military network, a digital moving map, and ground mission planning systems.
B.L.8
(Thai: บ.ล.๘) Royal Thai Armed Forces designation for the C-130H.
B.L.8A
(Thai: บ.ล.๘ก) Royal Thai Armed Forces designation for the C-130H-30.
TP 84
Swedish Air Force designation for the C-130H
== Operators ==
Former operators
== Accidents ==
The C-130 Hercules has had a low accident rate overall. The Royal Air Force has recorded an accident rate of about one aircraft loss per 250,000 flying hours over the last 40 years, placing it behind only the Vickers VC10 and Lockheed TriStar, which suffered no flying losses. As of 1989, USAF C-130A/B/E models had an overall attrition rate of 5%, compared with 1–2% for commercial airliners in the U.S. (according to the NTSB), 10% for B-52 bombers, and 20% for fighters (F-4, F-111), trainers (T-37, T-38), and helicopters (H-3).
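The quoted RAF figure can be restated per 100,000 flying hours, a common basis for comparing military accident rates; the conversion below simply re-expresses the number given in the text:

```python
# RAF C-130 figure from the text: about one aircraft loss per 250,000 flying hours.
losses = 1
flying_hours = 250_000

rate_per_100k = losses / flying_hours * 100_000
print(f"{rate_per_100k:.1f} losses per 100,000 flying hours")  # 0.4
```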
== Aircraft on display ==
=== Argentina ===
C-130B FAA TC-60, ex-USAF 61-0964, received in February 1992; on display at the Museo Nacional de Aeronáutica since September 2011.
=== Australia ===
C-130A RAAF A97-214 used by 36 Squadron from early 1959, withdrawn from use late 1978. Stored at RAAF Museum, RAAF Base Williams, Point Cook. Airframe scrapped in February 2022. Cockpit section preserved and gifted to National Vietnam Veterans Museum, Phillip Island.
C-130E RAAF A97-160 used by 37 Squadron from August 1966, withdrawn from use November 2000; to RAAF Museum, 14 November 2000, cocooned as of September 2005.
C-130H A97-011 delivered in October 1978, withdrawn from use December 2012 to RAAF Museum, Point Cook where it is currently on display.
=== Belgium ===
C-130H Belgian Air Component, tail number CH13, in service from 2009 until May 2021, is on display at Beauvechain Air Base at the First Wing Historical Center.
=== Brazil ===
C-130H Brazilian Air Force FAB-2453 has been on display at the Museu Aeroespacial in Rio de Janeiro since 2014.
=== Canada ===
CC-130E RCAF 10313 (later 130313) is on display at the National Air Force Museum of Canada, CFB Trenton
CC-130E RCAF 10307 (later 130307) is on display in the Reserve Hangar at the Canada Aviation and Space Museum, Ottawa, Ontario
CC-130E RCAF 130328 is on display at the Greenwood Aviation Museum, CFB Greenwood
=== Colombia ===
C-130B FAC 1010 (serial number 3521) moved on 14 January 2016 to the Colombian Aerospace Museum in Tocancipá, Cundinamarca, for static display.
C-130B FAC1011 (serial number 3585, ex 59–1535) preserved at the Colombian Air and Space Museum within CATAM AFB, Bogotá.
=== Indonesia ===
C-130B Indonesian Air Force A-1301 was preserved at Sulaeman Airstrip, Bandung, and occasionally used for Paskhas training. The airplane was relocated to the Air Force Museum in Yogyakarta in 2017.
=== New Zealand ===
C-130H(NZ) Royal New Zealand Air Force aircraft NZ7001 was retired to the Air Force Museum, making its final delivery flight into Wigram on 19 February 2025 after 60 years of service.
=== Norway ===
C-130H Royal Norwegian Air Force 953 was retired on 10 June 2007 and moved to the Air Force museum at Oslo Gardermoen in May 2008.
=== Philippines ===
L-100-20 4512 Philippine Air Force on display at Mactan Air Base aircraft park.
=== Poland ===
C-130E number 1503 (serial number 70-1272), formerly operated by Polish Air Force and stationed at 33rd Air Base, retired on 30 July 2024. It is currently on display at the Polish Air Force Museum in Dęblin.
=== Saudi Arabia ===
C-130H RSAF 460 was operated by 4 Squadron, Royal Saudi Air Force, from December 1974 until January 1987. It was damaged in a fire at Jeddah in December 1989 and restored for ground training by August 1993. Moved to the Royal Saudi Air Force Museum in November 2002 and restored for ground display using a tail from another C-130H.
=== United Kingdom ===
Hercules C3 XV202 that served with the Royal Air Force from 1967 to 2011, is on display at the Royal Air Force Museum Cosford.
=== United States ===
GC-130A, AF Ser. No. 55-037 used by the 773 TCS, 483 TCW, 315 AD, 374 TCW, 815 TAS, 35 TAS, 109 TAS, belly-landed at Duluth, Minnesota, April 1973, repaired; 167 TAS, 180 TAS, to Chanute Technical Training Center as GC-130A, May 1984; now displayed at Museum of Missouri Military History, Missouri National Guard Ike Skelton Training Center, Jefferson City, Missouri. Previously displayed at Octave Chanute Aerospace Museum, (former) Chanute AFB, Rantoul, Illinois until museum closed.
C-130A, AF Ser. No. 56-0518 used by the 314 TCW, 315 AD, 41 ATS, 328 TAS; to Republic of Vietnam Air Force 435 Transport Squadron, November 1972; holds the C-130 record for taking off with the most personnel on board, during the evacuation of SVN, 29 April 1975, with 452 people. Returned to the USAF, 185 TAS, 105 TAS; flown to Little Rock AFB on 28 June 1989 and converted to a static display at the LRAFB Visitor Center, Arkansas, by September 1989.
C-130A, AF Ser. No. 57-0453 was operated from 1958 to 1991, last duty with 155th TAS, 164th TAG, Tennessee Air National Guard, Memphis International Airport/ANGB, Tennessee, 1976–1991, named "Nite Train to Memphis"; to AMARC in December 1991, then sent to Texas for modification into a replica of C-130A-II Dreamboat aircraft, AF Ser. No. 56-0528, shot down by Soviet fighters in Soviet airspace near Yerevan, Armenia on 2 September 1958, while on ELINT mission with loss of all crew, displayed in National Vigilance Park, National Security Agency grounds, Fort George Meade, Maryland.
C-130B, AF Ser. No. 59-0528 was operated by 145th Airlift Wing, North Carolina Air National Guard; placed on static display at Charlotte Air National Guard Base, North Carolina in 2010.
C-130D, AF Ser. No. 57-0490 used by the 61st TCS, 17th TCS, 139th TAS with skis, July 1975 – April 1983; to MASDC, 1984–1985, GC-130D ground trainer, Chanute AFB, Illinois, 1986–1990; When Chanute AFB closed in September 1993, it moved to the Octave Chanute Aerospace Museum (former Chanute AFB), Rantoul, Illinois. In July 1994, it moved to the Empire State Aerosciences Museum, Schenectady County Airport, New York, until placed on the gate at Stratton Air National Guard Base in October 1994.
NC-130B, AF Ser. No. 57-0526 was the second B model manufactured, initially delivered as JC-130B; assigned to 6515th Organizational Maintenance Squadron for flight testing at Edwards AFB, California on 29 November 1960; turned over to 6593rd Test Squadron's Operating Location No. 1 at Edwards AFB and spent the next seven years supporting the Corona Program; "J" status and prefix removed from aircraft in October 1967; transferred to 6593rd Test Squadron at Hickam AFB, Hawaii and modified for mid-air retrieval of satellites; acquired by 6514th Test Squadron at Hill AFB, Utah in January 1987 and used as electronic testbed and cargo transport; aircraft retired in January 1994 with more than 11,000 flight hours and moved to the Hill Aerospace Museum at Hill AFB.
C-130E, AF Ser. No. 62-1787, on display at the National Museum of the United States Air Force, Wright-Patterson AFB, Ohio, was flown to the museum on 18 August 2011. One of the greatest feats of heroism during the Vietnam War involved this C-130E, call sign "Spare 617". The C-130E attempted to airdrop ammunition to surrounded South Vietnamese forces at An Loc, Vietnam. Approaching the drop zone, Spare 617 received heavy enemy ground fire that damaged two engines, ruptured a bleed air duct in the cargo compartment, and set the ammunition on fire. Flight engineer TSgt Sanders was killed, and navigator 1st Lt Lenz and co-pilot 1st Lt Hering were both wounded. Despite receiving severe burns from hot air escaping from the damaged bleed air duct, loadmaster TSgt Shaub extinguished a fire in the cargo compartment and successfully jettisoned the cargo pallets, which exploded in mid-air. Despite losing a third engine on final approach, pilot Capt Caldwell landed Spare 617 safely. For their actions, Caldwell and Shaub received the Air Force Cross, the U.S. Air Force's second highest award for valor. TSgt Shaub also received the William H. Pitsenbarger Award for Heroism from the Air Force Sergeants Association.
KC-130F, USN/USMC BuNo 149798 used in tests in October–November 1963 by the U.S. Navy for unarrested landings and unassisted take-offs from the carrier USS Forrestal (CV-59), it remains the record holder for largest aircraft to operate from a carrier flight deck, and carried the name "Look Ma, No Hook" during the tests. Retired to the National Museum of Naval Aviation, NAS Pensacola, Florida in May 2003.
C-130G, USN/USMC BuNo 151891; modified to EC-130G, 1966, then testbed for EC-130Q TACAMO in 1981, then changed to TC-130G and used by Fleet Air Reconnaissance Squadron Three (VQ-3) for flight proficiency (bounce bird). In early 1991 it was transferred to AMARC at Davis-Monthan AFB, Tucson, Arizona. In May 1991 it was assigned as the U.S. Navy's Blue Angels USMC support aircraft, serving as "Fat Albert Airlines" from 1991 to 2002. Retired to the National Museum of Naval Aviation at NAS Pensacola, Florida in November 2002, where it remains on outside static display in Blue Angels colors.
C-130E, AF Ser. No. 64-0525 was on display at the 82nd Airborne Division War Memorial Museum at Fort Bragg, North Carolina. The aircraft was the last assigned to the 43rd AW at Pope AFB, North Carolina before retirement from the USAF.
C-130E-LM, AF Ser. No. 64-0533 – Taken in December 1964 by 314th Troop Carrier Wing, Sewart AFB, TN. Last assigned to 37th Airlift Squadron, Rhein-Main AB, Germany. Transferred to Elmendorf AFB for display, May 2004. Marked as 53-2453.
C-130E, AF Ser. No. 69-6579 operated by the 61st TAS, 314th TAW, 50th AS, 61st AS; at Dyess AFB as maintenance trainer as GC-130E, March 1998; to Dyess AFB Linear Air Park, January 2004.
MC-130E Combat Talon I, AF Ser. No. 64-0567, unofficially known as "Wild Thing". It transported captured Panamanian dictator Manuel Noriega in 1989 during Operation Just Cause and participated in Operation Eagle Claw, the unsuccessful attempt to rescue U.S. hostages from Iran in 1980. Wild Thing was also the first fixed-wing aircraft to employ night-vision goggles. On display at Hurlburt Field, in Florida.
C-130E, AF Ser. No. 69-6580 operated by the 61st TAS, 314th TAW, 317th TAW, 314th TAW, 317th TAW, 40th AS, 41st AS, 43rd AW, retired after center wing cracks were detected in April 2002; to the Air Mobility Command Museum, Dover AFB, Delaware on 2 February 2004.
C-130E, AF Ser. No. 70-1269 was used by the 43rd AW and is on display at the Pope Air Park, Pope AFB, North Carolina as of 2006.
C-130H, AF Ser. No. 74-1686, used by the 463rd TAW; one of three C-130H airframes modified to YMC-130H configuration for Operation Credible Sport, the aborted second attempt to rescue the U.S. hostages in Iran, with rocket packages blistered onto the fuselage in 1980; these were removed after the mission was canceled. Subsequent duty with the 4950th Test Wing, then donated to the Museum of Aviation at Robins AFB, Georgia, in March 1988.
C-130H, AF Ser. No. 88-4401 operated by the Ohio 179th Airlift Wing has been retired and is on display at the MAPS Air Museum in Canton, Ohio.
== Specifications (C-130H) ==
Data from USAF C-130 Hercules fact sheet, International Directory of Military Aircraft, Complete Encyclopedia of World Aircraft, and Encyclopedia of Modern Military Aircraft.
General characteristics
Crew: 5 (2 pilots, CSO/navigator, flight engineer and loadmaster)
Capacity: 42,000 lb (19,000 kg) payload
C-130E/H/J cargo hold: length, 40 ft (12.19 m); width, 9 ft 11 in (3.02 m); height, 9 ft (2.74 m). Rear ramp: length, 123 in (3.12 m); width, 119 in (3.02 m)
C-130J-30 cargo hold: length, 55 ft (16.76 m); width, 9 ft 11 in (3.02 m); height, 9 ft (2.74 m). Rear ramp: length, 123 in (3.12 m); width, 119 in (3.02 m)
92 passengers or
64 airborne troops or
74 litter patients with 5 medical crew or
6 pallets or
2–3 Humvees or
2 M113 armored personnel carriers or
1 CAESAR self-propelled howitzer
Length: 97 ft 9 in (29.79 m)
Wingspan: 132 ft 7 in (40.41 m)
Height: 38 ft 3 in (11.66 m)
Wing area: 1,745 sq ft (162.1 m2)
Airfoil: root: NACA 64A318; tip: NACA 64A412
Empty weight: 75,800 lb (34,382 kg)
Max takeoff weight: 155,000 lb (70,307 kg)
Powerplant: 4 × Allison T56-A-15 turboprop engines, 4,590 shp (3,420 kW) each
Propellers: 4-bladed Hamilton Standard 54H60 constant-speed fully feathering reversible propellers, 13 ft 6 in (4.11 m) diameter
Performance
Maximum speed: 320 kn (370 mph, 590 km/h) at 20,000 ft (6,100 m)
Cruise speed: 292 kn (336 mph, 541 km/h)
Range: 2,050 nmi (2,360 mi, 3,800 km)
Ferry range: 3,995 nmi (4,597 mi, 7,399 km)
Service ceiling: 33,000 ft (10,000 m) empty
23,000 ft (7,000 m) with 42,000 lb (19,000 kg) payload
Rate of climb: 1,830 ft/min (9.3 m/s)
Takeoff distance: 3,586 ft (1,093 m) at 155,000 lb (70,307 kg) max gross weight;
1,400 ft (427 m) at 80,000 lb (36,287 kg) gross weight
Avionics
Westinghouse Electronic Systems AN/APN-241 weather and navigational radar
== See also ==
Related development
Lockheed AC-130
Lockheed DC-130
Lockheed EC-130
Lockheed EC-130H Compass Call
Lockheed HC-130
Lockheed L-100 Hercules
Lockheed LC-130
Lockheed Martin C-130J Super Hercules
Lockheed Martin KC-130
Lockheed MC-130
Lockheed RC-130
Lockheed WC-130
Aircraft of comparable role, configuration, and era
Antonov An-12
Armstrong Whitworth AW.660 Argosy
Blackburn Beverley
Shaanxi Y-8
Kawasaki C-1
Short Belfast
Transall C-160
Related lists
List of accidents and incidents involving the Lockheed C-130 Hercules
List of non-carrier aircraft flown from aircraft carriers
List of United States military aerial refueling aircraft
List of military electronics of the United States
== Notes ==
== References ==
== Sources ==
Borman, Martin W. Lockheed C-130 Hercules. Marlborough, UK: Crowood Press, 1999. ISBN 978-1-86126-205-9.
Diehl, Alan E., PhD, Former Senior USAF Safety Scientist. Silent Knights: Blowing the Whistle on Military Accidents and Their Cover-ups. Dulles, Virginia: Brassey's Inc., 2002. ISBN 1-57488-544-8.
Donald, David, ed. "Lockheed C-130 Hercules". The Complete Encyclopedia of World Aircraft. New York: Barnes & Noble Books, 1997. ISBN 0-7607-0592-5.
Eden, Paul. "Lockheed C-130 Hercules". Encyclopedia of Modern Military Aircraft. London: Amber Books, 2004. ISBN 1-904687-84-9.
Frawley, Gerard. The International Directory of Military Aircraft, 2002/03. Fyshwick, ACT, Australia: Aerospace Publications Pty Ltd, 2002. ISBN 1-875671-55-2.
Olausson, Lars. Lockheed Hercules Production List 1954–2011. Såtenäs, Sweden: Self-published, 27th Edition March 2009. No ISBN.
Olausson, Lars (March 2010). Lockheed Hercules Production List 1954–2012 (28th ed.). Såtenäs, Sweden: Self-published.
"Pentagon Over the Islands: The Thirty-Year History of Indonesian Military Aviation". Air Enthusiast Quarterly (2): 154–162. n.d. ISSN 0143-5450.
Reed, Chris. Lockheed C-130 Hercules and Its Variants. Atglen, Pennsylvania: Schiffer Publishing, 1999. ISBN 978-0-7643-0722-5.
== External links ==
Lockheed Martin official C-130 page
U.S. Air Force C-130 fact sheet
C-130 U.S. Navy fact file, and C-130E Hercules Fact Sheet, National Museum of the Air Force site
C-130hercules.net
C-130 page on amcmuseum.org
"Herculean Transport" a 1954 Flight article
C-130 takes off and lands on a Carrier USS Forrestal on YouTube
Newsreel footage from 1955 of blunt nose Hercules prototype (1955) from British Pathé (Record No:63598) at YouTube
The short film Staff Film Report 66-12A (1966) is available for free viewing and download at the Internet Archive.
RNZAF C-130H (NZ) Hercules NZ7001 retired and on display at the Air Force Museum of New Zealand
== C-17 Globemaster III ==
The McDonnell Douglas/Boeing C-17 Globemaster III is a large military transport aircraft developed for the United States Air Force (USAF) from the 1980s to the early 1990s by McDonnell Douglas. The C-17 carries forward the name of two previous piston-engined military cargo aircraft, the Douglas C-74 Globemaster and the Douglas C-124 Globemaster II.
The C-17 is based upon the YC-15, a smaller prototype airlifter designed during the 1970s. It was designed to replace the Lockheed C-141 Starlifter, and also fulfill some of the duties of the Lockheed C-5 Galaxy. The redesigned airlifter differs from the YC-15 in that it is larger and has swept wings and more powerful engines. Development was protracted by a series of design issues, causing the company to incur a loss of nearly US$1.5 billion on the program's development phase. On 15 September 1991, roughly one year behind schedule, the first C-17 performed its maiden flight. The C-17 formally entered USAF service on 17 January 1995. Boeing, which merged with McDonnell Douglas in 1997, continued to manufacture the C-17 for almost two decades. The final C-17 was completed at the Long Beach, California, plant and flown on 29 November 2015.
The C-17 commonly performs tactical and strategic airlift missions, transporting troops and cargo throughout the world; additional roles include medical evacuation and airdrop duties. The transport is in service with the USAF along with the air forces of India, the United Kingdom, Australia, Canada, Qatar, the United Arab Emirates, Kuwait, and the Europe-based multilateral organization Heavy Airlift Wing.
The type played a key logistical role during both Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom in Iraq, as well as in providing humanitarian aid in the aftermath of various natural disasters, including the 2010 Haiti earthquake, the 2011 Sindh floods and the 2023 Turkey-Syria earthquake.
== Development ==
=== Background and design phase ===
In the 1970s, the U.S. Air Force began looking for a replacement for its Lockheed C-130 Hercules tactical cargo aircraft. The Advanced Medium STOL Transport (AMST) competition was held, with Boeing proposing the YC-14, and McDonnell Douglas proposing the YC-15. Though both entrants exceeded specified requirements, the AMST competition was canceled before a winner was selected. The USAF started the C-X program in November 1979 to develop a larger AMST with longer range to augment its strategic airlift.
By 1980, the USAF had a large fleet of aging C-141 Starlifter cargo aircraft. Compounding matters, increased strategic airlift capabilities were needed to fulfill its rapid-deployment airlift requirements. The USAF set mission requirements and released a request for proposals (RFP) for C-X in October 1980. McDonnell Douglas chose to develop a new aircraft based on the YC-15. Boeing bid an enlarged three-engine version of its AMST YC-14. Lockheed submitted both a C-5-based design and an enlarged C-141 design. On 28 August 1981, McDonnell Douglas was chosen to build its proposal, then designated C-17. Compared to the YC-15, the new aircraft differed in having swept wings, increased size, and more powerful engines. This would allow it to perform the work done by the C-141, and to fulfill some of the duties of the Lockheed C-5 Galaxy, freeing the C-5 fleet for outsize cargo.
Alternative proposals were pursued to fill airlift needs after the C-X contest. These included lengthening C-141As into C-141Bs, ordering more C-5s, continuing purchases of KC-10s, and expanding the Civil Reserve Air Fleet. Limited budgets reduced program funding, requiring a delay of four years. During this time, contracts were awarded for preliminary design work and for the completion of engine certification. In December 1985, a full-scale development contract was awarded, under Program Manager Bob Clepper. At this time, first flight was planned for 1990. The USAF had formed a requirement for 210 aircraft.
Development problems and limited funding caused delays in the late 1980s. Criticisms were made of the developing aircraft, and questions were raised about more cost-effective alternatives during this time. In April 1990, Secretary of Defense Dick Cheney reduced the order from 210 to 120 aircraft. The maiden flight of the C-17 took place on 15 September 1991 from McDonnell Douglas's plant in Long Beach, California, about a year behind schedule. The first aircraft (T-1) and five more production models (P-1 to P-5) participated in extensive flight testing and evaluation at Edwards Air Force Base. Two complete airframes were built for static and repeated-load testing.
=== Development difficulties ===
A static test of the C-17 wing in October 1992 resulted in failure at 128% of design limit load, below the 150% requirement. Both wings buckled from rear to front, and failures occurred in stringers, spars, and ribs. Some $100 million was spent to redesign the wing structure; the wing failed at 145% during a second test in September 1993. A review of the test data, however, showed that the wing had not been loaded correctly and did indeed meet the requirement. The C-17 received the "Globemaster III" name in early 1993. In late 1993, the Department of Defense (DoD) gave the contractor two years to solve production issues and cost overruns or face the contract's termination after the delivery of the 40th aircraft. By accepting the 1993 terms, McDonnell Douglas incurred a loss of nearly US$1.5 billion on the program's development phase.
In March 1994, the Non-Developmental Airlift Aircraft (NDAA) program was established to procure a transport aircraft using commercial practices as a possible alternative or supplement to the C-17. Initial material solutions considered included buying a modified Boeing 747-400, restarting the C-5 production line, extending the C-141 service life, and continuing C-17 production. The field eventually narrowed to the Boeing 747-400 (provisionally named the C-33), the Lockheed Martin C-5D, and the McDonnell Douglas C-17. The NDAA program was initiated after the C-17 program was temporarily capped at a 40-aircraft buy (in December 1993) pending further evaluation of C-17 cost and performance and an assessment of commercial airlift alternatives.
In April 1994, the program remained over budget and did not meet weight, fuel burn, payload, and range specifications. It failed several key criteria during airworthiness evaluation tests. Problems were found with the mission software, landing gear, and other areas. In May 1994, it was proposed to cut production to as few as 32 aircraft; these cuts were later rescinded. A July 1994 Government Accountability Office (GAO) report revealed that USAF and DoD studies from 1986 and 1991 stated the C-17 could use 6,400 more runways outside the U.S. than the C-5, but these studies had considered only runway dimensions, not runway strength or load classification numbers (LCN). The C-5 has a lower LCN, but the USAF classifies both aircraft in the same broad load classification group. When runway dimensions and load ratings were both considered, the C-17's worldwide runway advantage over the C-5 shrank from 6,400 to 911 airfields. The report also cited "current military doctrine that does not reflect the use of small, austere airfields", meaning the C-17's short-field capability was not considered.
A January 1995 GAO report stated that the USAF originally planned to order 210 C-17s at a cost of $41.8 billion, and that the 120 aircraft on order were to cost $39.5 billion based on a 1992 estimate. In March 1994, the U.S. Army decided it did not need the 60,000 lb (27,000 kg) low-altitude parachute-extraction system delivery with the C-17 and that the C-130's 42,000 lb (19,000 kg) capability was sufficient. C-17 testing was limited to this lower weight. Airflow issues prevented the C-17 from meeting airdrop requirements. A February 1997 GAO report revealed that a C-17 with a full payload could not land on 3,000 ft (914 m) wet runways; simulations suggested a distance of 5,000 ft (1,500 m) was required. In March 1997, the YC-15 was retrieved from AMARC and made flightworthy again for further flight tests in support of the C-17 program.
By September 1995, most of the prior issues were reportedly resolved and the C-17 was meeting all performance and reliability targets. The first USAF squadron was declared operational in January 1995.
=== Production and deliveries ===
In 1996, the DoD ordered another 80 aircraft for a total of 120. In 1997, McDonnell Douglas merged with domestic competitor Boeing. In April 1999, Boeing offered to cut the C-17's unit price if the USAF bought 60 more; in August 2002, the order was increased to 180 aircraft. In 2007, 190 C-17s were on order for the USAF. On 6 February 2009, Boeing was awarded a $2.95 billion contract for 15 additional C-17s, increasing the total USAF fleet to 205 and extending production from August 2009 to August 2010. On 6 April 2009, U.S. Secretary of Defense Robert Gates stated that there would be no more C-17s ordered beyond the 205 planned. However, on 12 June 2009, the House Armed Services Air and Land Forces Subcommittee added a further 17 C-17s.
Debate arose over follow-on C-17 orders; the USAF requested a line shutdown while Congress called for further production. In FY2007, the USAF requested $1.6 billion (~$2.27 billion in 2023) in response to "excessive combat use" of the C-17 fleet. In 2008, USAF General Arthur Lichte, Commander of Air Mobility Command, indicated before a House of Representatives subcommittee on air and land forces a need to extend production by another 15 aircraft, increasing the total to 205, and said that C-17 production might continue to satisfy airlift requirements. The USAF ultimately decided to cap its C-17 fleet at 223 aircraft; the final delivery was on 12 September 2013.
In 2010, Boeing reduced the production rate from a high of 16 aircraft per year to 10, due to dwindling orders and to extend the production line's life while additional orders were sought. The workforce was reduced by about 1,100 through 2012, and a second shift at the Long Beach plant was eliminated. By April 2011, 230 production C-17s had been delivered, including 210 to the USAF. The C-17 prototype "T-1" was retired in 2012 after use as a testbed by the USAF. In January 2010, the USAF announced the end of Boeing's performance-based logistics contracts to maintain the type. On 19 June 2012, the USAF ordered its 224th and final C-17 to replace one that crashed in Alaska in July 2010.
In September 2013, Boeing announced that C-17 production was starting to close down. In October 2014, the main wing spar of the 279th and last aircraft was completed; this C-17 was delivered in 2015, after which Boeing closed the Long Beach plant. Production of spare components was to continue until at least 2017. The C-17 is projected to be in service for several decades. In February 2014, Boeing was engaged in sales talks with "five or six" countries for the remaining 15 C-17s; thus Boeing decided to build ten aircraft without confirmed buyers in anticipation of future purchases.
In May 2015, The Wall Street Journal reported that Boeing expected to book a charge of under $100 million and cut 3,000 positions associated with the C-17 program; it also suggested that Airbus's lower-cost A400M Atlas had taken international sales away from the C-17.
== Design ==
The C-17 Globemaster III is a strategic transport aircraft, able to airlift cargo close to a battle area. The size and weight of U.S. mechanized firepower and equipment have grown in recent decades from increased air mobility requirements, particularly for large or heavy non-palletized outsize cargo. It has a length of 174 feet (53 m) and a wingspan of 169 feet 10 inches (51.77 m), and uses about 8% composite materials, mostly in secondary structure and control surfaces. The aircraft features an anhedral wing configuration, providing pitch and roll stability to the aircraft. The aircraft's stability is furthered by its T-tail design, raising the center of pressure even higher above the center of mass. Drag is also lowered, as the horizontal stabilizer is far removed from the vortices generated by the two wings of the aircraft.
The C-17 is powered by four Pratt & Whitney F117-PW-100 turbofan engines, which are based on the commercial Pratt & Whitney PW2040 used on the Boeing 757. Each engine is rated at 40,400 lbf (180 kN) of thrust. The engine's thrust reversers direct engine exhaust air upwards and forward, reducing the chances of foreign object damage by ingestion of runway debris, and providing enough reverse thrust to back up the aircraft while taxiing. The thrust reversers can also be used in flight at idle-reverse for added drag in maximum-rate descents. In vortex surfing tests performed by two C-17s, up to 10% fuel savings were reported.
For cargo operations the C-17 requires a crew of three: pilot, copilot, and loadmaster. The cargo compartment is 88 feet (27 m) long by 18 feet (5.5 m) wide by 12 feet 4 inches (3.76 m) high. The cargo floor has rollers for palletized cargo but it can be flipped to provide a flat floor suitable for vehicles and other rolling stock. Cargo is loaded through a large aft ramp that accommodates rolling stock, such as a 69-ton (63-metric ton) M1 Abrams main battle tank, other armored vehicles, trucks, and trailers, along with palletized cargo.
Maximum payload of the C-17 is 170,900 pounds (77,500 kg; 85.5 short tons), and its maximum takeoff weight is 585,000 pounds (265,000 kg). With a payload of 160,000 pounds (73,000 kg) and an initial cruise altitude of 28,000 ft (8,500 m), the C-17 has an unrefueled range of about 2,400 nautical miles (4,400 kilometres) on the first 71 aircraft, and 2,800 nautical miles (5,200 kilometres) on all subsequent extended-range models that include a sealed center wing bay as a fuel tank. Boeing informally calls these aircraft the C-17 ER. The C-17's cruise speed is about 450 knots (830 km/h) (Mach 0.74). It is designed to airdrop 102 paratroopers and their equipment. According to Boeing the maximum unloaded range is 6,230 nautical miles (11,540 km).
The C-17 is designed to operate from runways as short as 3,500 ft (1,067 m) and as narrow as 90 ft (27 m). The C-17 can also operate from unpaved, unimproved runways, although with a higher risk of damage to the aircraft. The thrust reversers can be used to move the aircraft backwards and reverse direction on narrow taxiways using a three-point (or more) turn. The aircraft is designed for 20 man-hours of maintenance per flight hour and a 74% mission availability rate.
== Operational history ==
=== United States Air Force ===
The first production C-17 was delivered to Charleston Air Force Base, South Carolina, on 14 July 1993. The first C-17 unit, the 17th Airlift Squadron, became operationally ready on 17 January 1995. The type has broken 22 records for oversized payloads. The C-17 was awarded U.S. aviation's most prestigious award, the Collier Trophy, in 1994. A Congressional report on operations in Kosovo and Operation Allied Force noted that "One of the great success stories...was the performance of the Air Force's C-17A." It flew half of the strategic airlift missions in the operation; the type could use small airfields, easing operations, and rapid turnaround times led to efficient utilization.
C-17s delivered military supplies during Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom in Iraq, as well as humanitarian aid in the aftermath of the 2010 Haiti earthquake and the 2011 Sindh floods, delivering thousands of food rations and tons of medical and emergency supplies. On 26 March 2003, 15 USAF C-17s participated in the biggest combat airdrop since the United States invasion of Panama in December 1989: the night-time airdrop of 1,000 paratroopers from the 173rd Airborne Brigade over Bashur, Iraq. These airdrops were followed by C-17s ferrying M1 Abrams tanks, M2 Bradleys, M113s and artillery. USAF C-17s have also assisted allies in their airlift needs, such as carrying Canadian vehicles to Afghanistan in 2003 and supporting the Australian-led military deployment to East Timor in 2006. In 2006, USAF C-17s flew 15 Canadian Leopard C2 tanks from Kyrgyzstan into Kandahar in support of NATO's Afghanistan mission. In 2013, five USAF C-17s supported French operations in Mali, operating with other nations' C-17s (the RAF, NATO and RCAF each deployed a single C-17).
Flight crews have nicknamed the aircraft "the Moose", because during ground refueling, the pressure relief vents make a sound like the call of a female moose in heat.
Since 1999, C-17s have flown annually to Antarctica on Operation Deep Freeze in support of the US Antarctic Research Program, replacing the C-141s used in prior years. The initial flight was flown by the USAF 62nd Airlift Wing. The C-17s fly round trip between Christchurch Airport and McMurdo Station around October each year and take 5 hours to fly each way. In 2006, the C-17 flew its first Antarctic airdrop mission, delivering 70,000 pounds of supplies. Further air drops occurred during subsequent years.
A C-17 accompanies the President of the United States on domestic and foreign trips, consultations, and meetings. It is used to transport the presidential limousine, Marine One, and security detachments. On several occasions, a C-17 has been used to transport the President himself, using the Air Force One call sign while doing so.
==== Rapid Dragon missile launcher testing ====
In 2015, as part of a missile-defense test at Wake Island, simulated medium-range ballistic missiles were launched from C-17s against THAAD missile defense systems and the USS John Paul Jones (DDG-53). In early 2020, palletized munitions ("Combat Expendable Platforms") were tested from C-17s and C-130Js with results the USAF considered positive.
In 2021, the Air Force Research Laboratory further developed the concept into tests of the Rapid Dragon system, which transforms the C-17 into a cruise missile arsenal ship capable of mass-launching 45 JASSM-ERs with 500 kg warheads from a standoff distance of 925 km (575 mi). Anticipated improvements included support for the JDAM-ER, mine laying, and drone dispersal, as well as improved standoff range once the 1,900 km (1,200 mi) JASSM-XR entered full production, with large inventories expected in 2024.
==== Evacuation of Afghanistan ====
On 15 August 2021, USAF C-17 02-1109 from the 62nd Airlift Wing and 446th Airlift Wing at Joint Base Lewis-McChord departed Hamid Karzai International Airport in Kabul, Afghanistan, while crowds of people trying to escape the 2021 Taliban offensive ran alongside the aircraft. The C-17 lifted off with people holding on to the outside, and at least two died after falling from the aircraft. An unknown number may have been crushed by the retracting landing gear; human remains were later found in the landing-gear stowage. Also that day, C-17 01-0186 from the 816th Expeditionary Airlift Squadron at Al Udeid Air Base transported 823 Afghan citizens from Hamid Karzai International Airport on a single flight, setting a new record for the type; the previous record was over 670 people during a 2013 typhoon evacuation from Tacloban, Philippines.
=== Royal Air Force ===
On 13 January 2013, the RAF deployed two C-17s from RAF Brize Norton to the French Évreux Air Base, transporting French armored vehicles to the Malian capital of Bamako during the French intervention in Mali. In June 2015, an RAF C-17 was used to medically evacuate four victims of the 2015 Sousse attacks from Tunisia. On 13 September 2022, C-17 ZZ177 carried the body of Queen Elizabeth II from Edinburgh Airport to RAF Northolt in London. She had been lying in state at St Giles' Cathedral in Edinburgh, Scotland.
=== Royal Canadian Air Force ===
The Canadian Armed Forces had a long-standing need for strategic airlift for military and humanitarian operations around the world. It had followed a pattern similar to that of the German Air Force in leasing Antonovs and Ilyushins for many requirements, including deploying the Disaster Assistance Response Team (DART) to tsunami-stricken Sri Lanka in 2005; the Canadian Forces had relied entirely on leased An-124 Ruslans for a Canadian Army deployment to Haiti in 2003. A combination of leased Ruslans, Ilyushins and USAF C-17s was also used to move heavy equipment to Afghanistan. In 2002, the Canadian Forces Future Strategic Airlifter Project began to study alternatives, including long-term leasing arrangements.
On 14 April 2010, a Canadian CC-177 landed for the first time at CFS Alert, the world's most northerly airport. Canadian Globemasters have been deployed in support of numerous missions worldwide, including Operation Hestia after the 2010 Haiti earthquake, providing airlift as part of Operation Mobile and support to the Canadian mission in Afghanistan. After Typhoon Haiyan hit the Philippines in 2013, CC-177s established an air bridge between the two nations, deploying Canada's DART and delivering humanitarian supplies and equipment. In 2014, they supported Operation Reassurance and Operation Impact.
=== Strategic Airlift Capability program ===
At the 2006 Farnborough Airshow, a number of NATO member nations signed a letter of intent to jointly purchase and operate several C-17s within the Strategic Airlift Capability (SAC). The purchase was for two C-17s, and a third was contributed by the U.S. On 14 July 2009, Boeing delivered the first C-17 for the SAC program with the second and third C-17s delivered in September and October 2009. SAC members are Bulgaria, Estonia, Finland, Hungary, Lithuania, the Netherlands, Norway, Poland, Romania, Slovenia, Sweden and the U.S. as of 2024.
The SAC C-17s are based at Pápa Air Base, Hungary. The Heavy Airlift Wing is hosted by Hungary, which acts as the flag nation. The aircraft are crewed in a similar fashion to the NATO E-3 AWACS aircraft: the flight crews are multinational, but each mission is assigned to an individual member nation based on the SAC's annual flight-hour share agreement. The NATO Airlift Management Programme Office (NAMPO), part of the NATO Support Agency (NSPA), provides management and support for the Heavy Airlift Wing. In September 2014, Boeing stated that the three C-17s supporting SAC missions had achieved a readiness rate of nearly 94 percent over the previous five years and had supported over 1,000 missions.
=== Indian Air Force ===
The C-17 provides the IAF with strategic airlift, the ability to deploy special forces, and the ability to operate in diverse terrain – from Himalayan air bases in North India at 13,000 ft (4,000 m) to Indian Ocean bases in South India. The C-17s are based at Hindon Air Force Station and are operated by No. 81 Squadron IAF Skylords. The first C-17 was delivered in January 2013 for testing and training; it was officially accepted on 11 June 2013. The second C-17 was delivered on 23 July 2013 and put into service immediately. IAF Chief of Air Staff Norman AK Browne called it "a major component in the IAF's modernization drive" while taking delivery of the aircraft at Boeing's Long Beach factory. On 2 September 2013, the Skylords squadron, with three C-17s, officially entered IAF service.
The Skylords regularly fly missions within India, such as to high-altitude bases at Leh and Thoise. The IAF first used the C-17 to transport an infantry battalion's equipment to Port Blair on Andaman Islands on 1 July 2013. Foreign deployments to date include Tajikistan in August 2013, and Rwanda to support Indian peacekeepers. One C-17 was used for transporting relief materials during Cyclone Phailin.
The sixth aircraft was received in July 2014. In June 2017, the U.S. Department of State approved the potential sale of one C-17 to India under a proposed $366 million (~$448 million in 2023) U.S. Foreign Military Sale. This aircraft, the last C-17 produced, increased the IAF's fleet to 11 C-17s. In March 2018, a contract was awarded for completion by 22 August 2019. On 26 August 2019, Boeing delivered the 11th C-17 Globemaster III to the Indian Air Force.
On 7 February 2023, an IAF C-17 delivered humanitarian aid packages for earthquake victims in Turkey and Syria, taking a detour around Pakistan's airspace in the aftermath of the 2021 Taliban takeover of Afghanistan.
An IAF C-17 executed a precision airdrop of two Combat Rubberised Raiding Craft, along with a platoon of eight MARCOS commandos, in an operation to rescue MV Ruen, a Maltese-flagged cargo ship hijacked by Somali pirates in December 2023 and subsequently used as a mothership for piracy. The mission was conducted on 16 March 2024 as a 10-hour round trip to an area 2,600 km from the Indian coast. In a joint operation with Indian Navy assets, including the P-8I Neptune maritime patrol aircraft, SeaGuardian drones, the destroyer INS Kolkata and the patrol vessel INS Subhadra, the airdropped MARCOS commandos boarded the hijacked ship, rescued 17 sailors and disarmed 35 pirates.
=== Qatar ===
Boeing delivered Qatar's first C-17 on 11 August 2009 and the second on 10 September 2009 for the Qatar Emiri Air Force. Qatar received its third C-17 in 2012, and its fourth on 10 December 2012. In June 2013, The New York Times reported that Qatar was allegedly using its C-17s to ship weapons from Libya to the Syrian opposition during the civil war via Turkey. On 15 June 2015, it was announced at the Paris Air Show that Qatar had agreed to order four additional C-17s from the five remaining "white tail" C-17s, doubling Qatar's C-17 fleet. One Qatari C-17 bears the civilian markings of government-owned Qatar Airways, although the airplane is owned and operated by the Qatar Emiri Air Force. The head of Qatar's airlift selection committee, Ahmed Al-Malki, said the paint scheme was "to build awareness of Qatar's participation in operations around the world."
== Variants ==
C-17A: Initial military airlifter version.
C-17A "ER": Unofficial name for C-17As with extended range due to the addition of the center wing tank. This upgrade was incorporated in production beginning in 2001 with Block 13 aircraft.
Block 16: This software/hardware upgrade incorporated the improved Onboard Inert Gas-Generating System (OBIGGS II), a new weather radar, an improved stabilizer strut system and other avionics changes.
Block 21: Adds ADS-B capability, IFF modification, communication/navigation upgrades and improved flight management.
C-17B: A proposed tactical airlifter version with double-slotted flaps, an additional main landing gear on the center fuselage, more powerful engines, and other systems for shorter landing and take-off distances. Boeing offered the C-17B to the U.S. military in 2007 for carrying the Army's Future Combat Systems (FCS) vehicles and other equipment.
KC-17: Proposed tanker variant of the C-17.
MD-17: Proposed variant for US airlines participating in the Civil Reserve Air Fleet; redesignated BC-17X after the 1997 Boeing–McDonnell Douglas merger.
== Operators ==
Australia
Royal Australian Air Force – 8 C-17A ERs in service as of January 2018.
No. 36 Squadron
Canada
Royal Canadian Air Force – 5 CC-177 (C-17A ER) aircraft in use as of January 2025.
429 Transport Squadron, CFB Trenton
India
Indian Air Force – 11 C-17s as of August 2019.
No. 81 Squadron (Skylords), Hindon AFS
Kuwait
Kuwait Air Force – 2 C-17s as of January 2018
Europe
The multi-nation Strategic Airlift Capability Heavy Airlift Wing – 3 C-17s in service as of January 2018, including 1 C-17 contributed by the USAF; based at Pápa Air Base, Hungary.
Qatar
Qatar Emiri Air Force – 8 C-17As in use as of January 2018.
United Arab Emirates
United Arab Emirates Air Force – 8 C-17As in operation as of January 2018
United Kingdom
Royal Air Force – 8 C-17A ERs in use as of May 2021
No. 24 Squadron, RAF Brize Norton
No. 99 Squadron, RAF Brize Norton
United States
United States Air Force – 222 C-17s in service as of January 2018 (157 Active, 47 Air National Guard, 18 Air Force Reserve)
60th Air Mobility Wing – Travis Air Force Base, California
21st Airlift Squadron
62d Airlift Wing – McChord AFB, Washington
4th Airlift Squadron
7th Airlift Squadron
8th Airlift Squadron
10th Airlift Squadron – (2003–2016)
305th Air Mobility Wing – McGuire Air Force Base, New Jersey
6th Airlift Squadron
385th Air Expeditionary Group – Al Udeid Air Base, Qatar
816th Expeditionary Airlift Squadron
436th Airlift Wing – Dover Air Force Base, Delaware
3d Airlift Squadron
437th Airlift Wing – Charleston Air Force Base, South Carolina
14th Airlift Squadron
15th Airlift Squadron
16th Airlift Squadron
17th Airlift Squadron – (1993–2015)
3d Wing – Elmendorf Air Force Base, Alaska
517th Airlift Squadron (Associate)
15th Wing – Hickam Air Force Base, Hawaii
535th Airlift Squadron
97th Air Mobility Wing – Altus AFB, Oklahoma
58th Airlift Squadron
412th Test Wing – Edwards AFB, California
418th Flight Test Squadron
Air Force Reserve
315th Airlift Wing (Associate) – Charleston AFB, South Carolina
300th Airlift Squadron
317th Airlift Squadron
701st Airlift Squadron
349th Air Mobility Wing (Associate) – Travis AFB, California
301st Airlift Squadron
445th Airlift Wing – Wright-Patterson AFB, Ohio
89th Airlift Squadron
446th Airlift Wing (Associate) – McChord AFB, Washington
97th Airlift Squadron
313th Airlift Squadron
728th Airlift Squadron
452d Air Mobility Wing – March ARB, California
729th Airlift Squadron
507th Air Refueling Wing – Tinker AFB, Oklahoma
730th Air Mobility Training Squadron (Altus AFB)
512th Airlift Wing (Associate) – Dover AFB, Delaware
326th Airlift Squadron
514th Air Mobility Wing (Associate) – McGuire AFB, New Jersey
732d Airlift Squadron
911th Airlift Wing – Pittsburgh Air Reserve Station, Pennsylvania
758th Airlift Squadron
Air National Guard
105th Airlift Wing – Stewart ANGB, New York
137th Airlift Squadron
145th Airlift Wing – Charlotte Air National Guard Base, North Carolina
156th Airlift Squadron
154th Wing – Hickam AFB, Hawaii
204th Airlift Squadron (Associate)
164th Airlift Wing – Memphis ANGB, Tennessee
155th Airlift Squadron
167th Airlift Wing – Shepherd Field ANGB, West Virginia
167th Airlift Squadron
172d Airlift Wing – Allen C. Thompson Field ANGB, Mississippi
183d Airlift Squadron
176th Wing – Elmendorf AFB, Alaska
144th Airlift Squadron
== Accidents and notable incidents ==
On 10 September 1998, a USAF C-17 (AF Serial No. 96-0006) delivered Keiko the orca to Vestmannaeyjar, Iceland, landing on a 3,800-foot (1,200 m) runway, and suffered a landing gear failure on landing. There were no injuries, but the landing gear sustained major damage.
On 10 December 2003, a USAF C-17 (AF Serial No. 98-0057) was hit by a surface-to-air missile after take-off from Baghdad, Iraq. One engine was disabled and the aircraft returned for a safe landing. It was repaired and returned to service.
On 6 August 2005, a USAF C-17 (AF Serial No. 01-0196) ran off the runway at Bagram Air Base in Afghanistan while attempting to land, destroying its nose and main landing gear. After two months of work to make it flightworthy, a test pilot flew the aircraft to Boeing's Long Beach facility, as the temporary repairs imposed performance limitations. It returned to service in October 2006 following repairs.
On 30 January 2009, a USAF C-17 (AF Serial No. 96-0002 – "Spirit of the Air Force") made a gear-up landing at Bagram Air Base. It was ferried from Bagram AB, making several stops along the way, to Boeing's Long Beach plant for extensive repairs. The USAF Aircraft Accident Investigation Board concluded the cause was the crew's failure to follow the pre-landing checklist and lower the landing gear.
On 28 July 2010, a USAF C-17 (AF Serial No. 00-0173 – "Spirit of the Aleutians") crashed at Elmendorf Air Force Base, Alaska, while practicing for the 2010 Arctic Thunder Air Show, killing all four aboard. It crashed near a railroad, disrupting rail operations. A military investigation found pilot error caused a stall. This is the C-17's only fatal crash and the only hull loss accident.
On 23 January 2012, a USAF C-17 (AF Serial No. 07-7189), assigned to the 437th Airlift Wing, Joint Base Charleston, South Carolina, landed on runway 34R at Forward Operating Base Shank, Afghanistan. The crew did not realize that the required stopping distance exceeded the runway's length and thus were unable to stop; the aircraft came to rest approximately 700 feet from the runway's end on an embankment, causing major structural damage but no injuries. After nine months of repairs to make it airworthy, the C-17 flew to Long Beach and returned to service at a reported cost of $69.4 million.
On 20 July 2012, a USAF C-17 of the 305th Air Mobility Wing, flying from McGuire AFB, New Jersey, to MacDill Air Force Base in Tampa, Florida, mistakenly landed at nearby Peter O. Knight Airport, a small municipal field without a control tower, with Gen. Jim Mattis, then commander of CENTCOM, on board. After a few hours, the Globemaster took off from the airport's 3,580-foot (1,090 m) runway without incident and made the short trip to MacDill AFB. The mistaken landing followed an extended duration flight from Europe to Southwest Asia to embark military passengers before returning to the U.S. The USAF investigation attributed the incident to fatigue leading to pilot error, as both airfields' main runways share the same magnetic heading and are only four miles apart along the shore of Tampa Bay.
On 9 April 2021, USAF C-17 10-0223 suffered a fire in its undercarriage after landing at Charleston AFB following a flight from RAF Mildenhall, UK. The fire spread to the fuselage before it was extinguished.
== Specifications (C-17A) ==
Data from Brassey's World Aircraft & Systems Directory, U.S. Air Force fact sheet, and Boeing
General characteristics
Crew: 3 (2 pilots, 1 loadmaster)
Capacity: 170,900 lb (77,519 kg) of cargo, distributed over a maximum of 18 463L master pallets or a mix of palletized cargo and vehicles
102 paratroopers or
134 troops with palletized and sidewall seats or
54 troops with sidewall seats only (allows 13 cargo pallets) or
36 litter and 54 ambulatory patients and medical attendants or
Cargo, such as one M1 Abrams tank, two Bradley armored vehicles, or three Stryker armored vehicles
Length: 174 ft (53 m)
Wingspan: 169 ft 9.6 in (51.755 m)
Height: 55 ft 1 in (16.79 m)
Wing area: 3,800 sq ft (350 m2)
Aspect ratio: 7.165
Airfoil: root: DLBA 142; tip: DLBA 147
Empty weight: 282,500 lb (128,140 kg)
Max takeoff weight: 585,000 lb (265,352 kg)
Fuel capacity: 35,546 US gal (29,598 imp gal; 134,560 L)
Powerplant: 4 × Pratt & Whitney PW2000 turbofan engines, 40,440 lbf (179.9 kN) thrust each (US military designation: F117-PW-100)
Performance
Cruise speed: 450 kn (520 mph, 830 km/h) (Mach 0.74–0.79)
Range: 2,420 nmi (2,780 mi, 4,480 km) with 157,000 lb (71,214 kg) payload
Ferry range: 6,230 nmi (7,170 mi, 11,540 km)
Service ceiling: 45,000 ft (14,000 m)
Wing loading: 150 lb/sq ft (730 kg/m2)
Thrust/weight: 0.277 (minimum)
Takeoff run at MTOW: 8,200 ft (2,499 m)
Takeoff run at 395,000 lb (179,169 kg): 3,000 ft (914 m)
Landing distance: 3,500 ft (1,067 m)
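The listed thrust-to-weight minimum follows directly from the engine thrust and maximum takeoff weight given above; a minimal arithmetic sketch (variable names are illustrative, values taken from the specifications above):

```python
# Check the listed C-17A thrust/weight minimum from the figures above.
thrust_per_engine_lbf = 40_440   # Pratt & Whitney F117-PW-100, per engine
num_engines = 4
mtow_lb = 585_000                # maximum takeoff weight

total_thrust_lbf = thrust_per_engine_lbf * num_engines  # 161,760 lbf total
thrust_to_weight = total_thrust_lbf / mtow_lb
print(round(thrust_to_weight, 3))  # 0.277, matching the listed minimum
```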
Avionics
AlliedSignal AN/APS-133(V) weather and mapping radar
== See also ==
Related development
McDonnell Douglas YC-15
Aircraft of comparable role, configuration, and era
Ilyushin Il-76
Xi'an Y-20
Related lists
List of active Canadian military aircraft
List of active United Kingdom military aircraft
List of active United States military aircraft
== References ==
=== Bibliography ===
== External links ==
Official website
USAF C-17 fact sheet
Globemaster (C-17) – Royal Air Force
RCAF CC-177 Globemaster III page Archived 21 August 2021 at the Wayback Machine
Full C-17 production list, including manufacturer serial numbers (c/n)
Tour of the manufacturing line on California's Gold |
Cable transport | Cable transport is a broad class of transport modes that have cables. They transport passengers and goods, often in vehicles called cable cars. The cable may be driven or passive, and items may be moved by pulling, sliding, sailing, or by drives within the object being moved on cableways. The use of pulleys and balancing of loads moving up and down are common elements of cable transport. They are often used in mountainous areas where cable haulage can overcome large differences in elevation.
== Common modes of cable transport ==
=== Aerial transport ===
Forms of cable transport in which one or more cables are strung between supports of various forms and cars are suspended from these cables.
=== Cable railways ===
Forms of cable transport where cars on rails are hauled by cables. The rails are usually steeply inclined and usually at ground level.
Cable car
Funicular
=== Other ===
Other forms of cable-hauled transport.
Cable ferry
Surface lift
Elevator
== History ==
Rope-drawn transport dates back to 250 BC, as evidenced by illustrations of aerial ropeway transportation systems in South China.
=== Early aerial tramways ===
The first recorded mechanical ropeway design was by the Venetian Fausto Veranzio, who designed a bi-cable passenger ropeway in 1616. The industry generally considers the Dutchman Adam Wybe to have built the first operational system, in 1644. The technology, further developed by the people of the Alpine regions of Europe, progressed and expanded with the advent of wire rope and electric drive.
The first use of wire rope for aerial tramways is disputed. American inventor Peter Cooper is one early claimant, said to have constructed an aerial tramway using wire rope in Baltimore in 1832 to move landfill materials. Though there is only partial evidence for the claimed 1832 tramway, Cooper was involved in many such tramways built in the 1850s, and in 1853 he built a two-mile-long tramway to transport iron ore to his blast furnaces at Ringwood, New Jersey.
World War I motivated extensive use of military tramways for warfare between Italy and Austria.
During the Industrial Revolution, new forms of cable-hauled transportation were created, including the use of steel cable to allow for greater load support and larger systems. Aerial tramways were first used for commercial passenger haulage in the 1900s.
=== The first cable railways ===
The earliest form of cable railway was the gravity incline, which in its simplest form consists of two parallel tracks laid on a steep gradient, with a single rope wound around a winding drum connecting the trains of wagons on the two tracks. Loaded wagons at the top of the incline are lowered down, their weight hauling empty wagons up from the bottom. The winding drum has a brake to control the rate of travel. The first use of a gravity incline is not recorded, but the Llandegai Tramway at Bangor in North Wales, opened in 1798, is one of the earliest examples using iron rails.
The first cable-hauled street railway was the London and Blackwall Railway, built in 1840, which used fibre to grip the haulage rope. This caused a series of technical and safety issues, which led to the adoption of steam locomotives by 1848.
The first funicular railway was opened in Lyon in 1862.
The Westside and Yonkers Patent Railway Company developed a cable-hauled elevated railway. This 3½-mile-long line was proposed in 1866 and opened in 1868. It operated as a cable railway until 1871, when it was converted to steam locomotive haulage.
The next development of the cable car came in California. Andrew Hallidie, a Scottish émigré and manufacturer of steel cables, gave San Francisco the first effective and commercially successful route, opening the Clay Street Hill Railroad on August 2, 1873, using steel cables. The system featured a human-operated grip that could start and stop the car safely. The cable allowed multiple independent cars to run on one line, and Hallidie's concept was soon extended to multiple lines in San Francisco.
The first cable railway outside the United Kingdom and the United States was the Roslyn Tramway, which opened in 1881 in Dunedin, New Zealand. America remained the country that made the greatest use of cable railways; by 1890 more than 500 miles of cable-hauled track had been laid, carrying over 1,000,000 passengers per year. In that same year, however, electric tramways surpassed cable-hauled tramways in mileage, efficiency and speed.
=== Early ski lifts ===
The first surface lift was built by the German Robert Winterhalder in Schollach/Eisenbach, Hochschwarzwald, and began operating on February 14, 1908.
A steam-powered toboggan tow, 950 feet (290 m) in length, was built in Truckee, California, in 1910. The first skier-specific tow in North America was apparently installed in 1933 by Alec Foster at Shawbridge in the Laurentians outside Montreal, Quebec.
The modern J-bar and T-bar mechanism was invented in 1934 by the Swiss engineer Ernst Constam, with the first lift installed in Davos, Switzerland.
The first chairlift was developed by James Curran in 1936. William Averell Harriman, co-owner of the Union Pacific Railroad, owned America's first ski resort, Sun Valley, Idaho, and asked his design office to tackle the problem of lifting skiers to the top of the resort. Curran, a Union Pacific bridge designer, adapted a cable hoist he had designed for loading bananas in Honduras to create the first ski lift.
=== More recent developments ===
More recent developments are classified by the type of track their designs are based upon. Several projects have been initiated in New Zealand and Chicago. Growing concern about pollution is encouraging a shift from cars back to cable transport, given its advantages. For many years, however, cable systems were a niche form of transportation, used primarily in conditions difficult for cars (such as ski slopes, where they serve as lifts). Now that cable transport projects are on the increase, their social effects are becoming more significant.
In 2018, the highest 3S cable car was inaugurated in Zermatt, Switzerland, after more than two years of construction. Called the "Matterhorn Glacier Ride", it allows passengers to reach the top of the Klein Matterhorn (3,883 m).
== Urban cableways ==
Urban cable transport encompasses all transport systems that use cables to pull vehicles around cities. The gondola or cableway type is spreading relatively quickly in South American urban environments, following the success of Medellín's Metrocable around 2004. As of 2023, 33 aerial cable cars (ACC) transit lines were inaugurated in Latin America and the Caribbean (LAC), the majority after 2010. There are also three recent installations in Algeria in Tlemcen, Skikda and Constantine. Urban cable cars are a relatively old concept – the Bastille cable car in Grenoble, France went into service in 1934 and the Sugarloaf Cable Cars in Rio de Janeiro in 1912 and 1913, although the purpose of these installations is not to transport city dwellers in the narrow sense, but to serve as a tourist site. The majority of installations were built to overcome specific geographical challenges, such as for river crossings, access to islands, major urban breaks (motorway, railway, large industrial site) or steep gradients.
Urban cable cars often have advantages such as relatively small environmental footprints, less air pollution, more rapid and less complex construction than other transport infrastructure, lower cost, low noise pollution, faster transport, a comfortable and pleasant view, a pull factor for tourists, very little space needed at ground level, and lower energy cost per passenger. In Medellín, connecting high-up neighborhoods with the rest of the city through its Metrocable system has helped reduce the city's crime rates. Disadvantages can include vulnerability to poor weather and slower speeds, depending on the case.
According to a World Bank study, they typically reach speeds of between 10 and 20 km/h and can carry up to 2,000 people per hour in each direction. One line in Bolivia's capital La Paz carries up to 65,000 people every 24 hours; as of 2019 it was 16 km long, and a one-way ticket cost around $0.42. There are installations planned all over the world that do not necessarily address specific geographical problems – according to The Gondola Project, these include Tasmania, Gothenburg (Sweden), Mombasa (Kenya) and Chicago.
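The reported La Paz ridership is consistent with the World Bank capacity figure; a rough arithmetic sketch (variable names are illustrative, figures taken from the text above):

```python
# Theoretical daily maximum for a line carrying 2,000 people/hour per direction.
capacity_per_hour_per_direction = 2_000
hours_per_day = 24
directions = 2

daily_maximum = capacity_per_hour_per_direction * hours_per_day * directions
print(daily_maximum)  # 96000; so the reported 65,000 riders per day is within capacity
```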
== Social effects ==
=== Comparison with other transport types ===
When compared to trains and cars, the volume of people to be transported over time and the start-up cost of the project must be considered. In areas with extensive road networks, personal vehicles offer greater flexibility and range. Remote places such as mountainous regions and ski slopes may be difficult to link with roads, making cable transport a much easier approach. A cable transport system may also require less invasive changes to the local environment.
The use of cable transport is not limited to such rural locations as ski resorts; it can also be used in urban development areas. Urban uses include funicular railways, gondola lifts, and aerial tramways.
== Safety ==
According to a study by the technical inspection association TÜV SÜD, for every 100 million hours of travel, there are on average 25 deaths due to car accidents, 16 due to plane accidents and only two due to cable car accidents, most of which are due to passenger behaviour.
=== Accidents ===
A cable car accident in Cavalese, Italy, on 9 March 1976 is considered the worst aerial lift accident in history. The car fell 200 meters down a mountainside, crashing through a grassy meadow before coming to a halt. The tragedy killed 43 people, and four lift officials were jailed on charges relating to the accident.
On April 15, 1978, a cable car at Squaw Valley Ski Resort in California came off one of its cables, dropping 75 feet (23 m) and bouncing violently back up into a cable, which sheared through the car. Four people were killed and 31 injured.
The Singapore cable car crash of 29 January 1983 occurred when a drilling rig passed beneath the cable car system linking the Singapore mainland with Sentosa island. The derrick of the drilling rig aboard the ship MV Eniwetok struck the cables, causing two of the gondolas to fall into the sea below. There were 7 fatalities.
On February 3, 1998, twenty people died in Cavalese, Italy, when a United States Marine Corps EA-6B Prowler, flying too low against regulations, cut a cable supporting a gondola of an aerial tramway. Those killed, 19 passengers and one operator, were eight Germans, five Belgians, three Italians, two Poles, one Austrian, and one Dutch national. The United States refused to have the four Marines tried under Italian law and later court-martialed two of them on lesser charges in the United States.
The Kaprun disaster was a fire that occurred in an ascending train in the tunnel of the Gletscherbahn Kaprun 2 funicular in Kaprun, Austria, on 11 November 2000. The disaster claimed the lives of 155 people, leaving 12 survivors (10 Germans and two Austrians) from the burning train. It is one of the worst cable car accidents in history.
A cable car derailed and crashed to the ground in the Nevis Range, near Fort William, Scotland, on 13 July 2006, seriously injuring all five passengers. Another car on the same line also slid back down when the crash happened. Following the incident, 50 people were left stranded at the station while staff and rescuers helped the passengers of the crashed car.
On Wednesday 25 July 2012, passengers on the London cable car were stuck 90 meters in the air when a power failure caused the gondolas to stop over the River Thames. The fault occurred at 11:45 am and lasted about 30 minutes. No passengers were injured, but this was the first problem to hit London's new cable car link.
== References ==
== Further reading ==
"Cable cars: Danger in the skies". BBC News. June 1, 1999. Retrieved July 9, 2018.
== External links ==
Melbourne's cable trams on YouTube
San Francisco's Cable Cars & Motor Cars; 1900-1940s – with 1906 Earthquake on YouTube |
Canada | Canada is a country in North America. Its ten provinces and three territories extend from the Atlantic Ocean to the Pacific Ocean and northward into the Arctic Ocean, making it the world's second-largest country by total area, with the world's longest coastline. Its border with the United States is the world's longest international land border. The country is characterized by a wide range of both meteorologic and geological regions. With a population of over 41 million people, it has widely varying population densities, with the majority residing in urban areas and large areas of the country being sparsely populated. Canada's capital is Ottawa and its three largest metropolitan areas are Toronto, Montreal, and Vancouver.
Indigenous peoples have continuously inhabited what is now Canada for thousands of years. Beginning in the 16th century, British and French expeditions explored and later settled along the Atlantic coast. As a consequence of various armed conflicts, France ceded nearly all of its colonies in North America in 1763. In 1867, with the union of three British North American colonies through Confederation, Canada was formed as a federal dominion of four provinces. This began an accretion of provinces and territories resulting in the displacement of Indigenous populations, and a process of increasing autonomy from the United Kingdom. This increased sovereignty was highlighted by the Statute of Westminster, 1931, and culminated in the Canada Act 1982, which severed the vestiges of legal dependence on the Parliament of the United Kingdom.
Canada is a parliamentary democracy and a constitutional monarchy in the Westminster tradition. The country's head of government is the prime minister, who holds office by virtue of their ability to command the confidence of the elected House of Commons and is appointed by the governor general, representing the monarch of Canada, the ceremonial head of state. The country is a Commonwealth realm and is officially bilingual (English and French) in the federal jurisdiction. It is very highly ranked in international measurements of government transparency, quality of life, economic competitiveness, innovation, education and human rights. It is one of the world's most ethnically diverse and multicultural nations, the product of large-scale immigration. Canada's long and complex relationship with the United States has had a significant impact on its history, economy, and culture.
A developed country, Canada has a high nominal per capita income globally and its advanced economy ranks among the largest in the world by nominal GDP, relying chiefly upon its abundant natural resources and well-developed international trade networks. Recognized as a middle power, Canada's support for multilateralism and internationalism has been closely related to its foreign relations policies of peacekeeping and aid for developing countries. Canada promotes its domestically shared values through participation in multiple international organizations and forums.
== Etymology ==
While a variety of theories have been postulated for the etymological origins of Canada, the name is now accepted as coming from the St. Lawrence Iroquoian word kanata, meaning "village" or "settlement". In 1535, Indigenous inhabitants of the present-day Quebec City region used the word to direct French explorer Jacques Cartier to the village of Stadacona. Cartier later used the word Canada to refer not only to that particular village but to the entire area subject to Donnacona (the chief at Stadacona); by 1545, European books and maps had begun referring to this small region along the Saint Lawrence River as Canada.
From the 16th to the early 18th century, Canada referred to the part of New France that lay along the Saint Lawrence River. Following the British conquest of New France, this area was known as the British Province of Quebec from 1763 to 1791. In 1791, the area became two British colonies called Upper Canada and Lower Canada. These two colonies were collectively referred to as the Canadas until their union as the Province of Canada in 1841.
Upon Confederation in 1867, Canada was adopted as the legal name for the new country at the London Conference and the word dominion was conferred as the country's title. By the 1950s, the term Dominion of Canada was no longer used by the United Kingdom, which considered Canada a "realm of the Commonwealth".
The Canada Act 1982, which brought the Constitution of Canada fully under Canadian control, referred only to Canada. Later that year, the name of the national holiday was changed from Dominion Day to Canada Day.
== History ==
=== Indigenous peoples ===
The first inhabitants of North America are generally hypothesized to have migrated from Siberia by way of the Bering land bridge and arrived at least 14,000 years ago. The Paleo-Indian archeological sites at Old Crow Flats and Bluefish Caves are two of the oldest sites of human habitation in Canada. The characteristics of Indigenous societies included permanent settlements, agriculture, complex societal hierarchies, and trading networks. Some of these cultures had collapsed by the time European explorers arrived in the late 15th and early 16th centuries and have only been discovered through archeological investigations. Indigenous peoples in present-day Canada include the First Nations, Inuit, and Métis, the last being a people of mixed descent who originated in the mid-17th century, when First Nations people married European settlers and their offspring subsequently developed their own identity.
The Indigenous population at the time of the first European settlements is estimated to have been between 200,000 and two million, with a figure of 500,000 accepted by Canada's Royal Commission on Aboriginal Peoples. As a consequence of European colonization, the Indigenous population declined by forty to eighty percent. The decline is attributed to several causes, including the transfer of European diseases, to which they had no natural immunity, conflicts over the fur trade, conflicts with the colonial authorities and settlers, and the loss of Indigenous lands to settlers and the subsequent collapse of several nations' self-sufficiency.
Although not without conflict, European Canadians' early interactions with First Nations and Inuit populations were relatively peaceful. First Nations and Métis peoples played a critical part in the development of European colonies in Canada, particularly for their role in assisting European coureurs des bois and voyageurs in their explorations of the continent during the North American fur trade. These early European interactions with First Nations would change from friendship and peace treaties to the dispossession of Indigenous lands through treaties. From the late 18th century, European Canadians forced Indigenous peoples to assimilate into a western Canadian society. Settler colonialism reached a climax in the late 19th and early 20th centuries. A period of redress began with the formation of a reconciliation commission by the Government of Canada in 2008. This included acknowledgment of cultural genocide, settlement agreements, and betterment of racial discrimination issues, such as addressing the plight of missing and murdered Indigenous women.
=== European colonization ===
It is believed that the first documented European to explore the east coast of Canada was Norse explorer Leif Erikson. In approximately 1000 AD, the Norse built a small short-lived encampment that was occupied sporadically for perhaps 20 years at L'Anse aux Meadows on the northern tip of Newfoundland. No further European exploration occurred until 1497, when seafarer John Cabot explored and claimed Canada's Atlantic coast in the name of Henry VII of England. In 1534, French explorer Jacques Cartier explored the Gulf of Saint Lawrence where, on July 24, he planted a 10-metre (33 ft) cross bearing the words, "long live the King of France", and took possession of the territory New France in the name of King Francis I. The early 16th century saw European mariners with navigational techniques pioneered by the Basque and Portuguese establish seasonal whaling and fishing outposts along the Atlantic coast. In general, early settlements during the Age of Discovery appear to have been short-lived due to a combination of the harsh climate, problems with navigating trade routes and competing outputs in Scandinavia.
In 1583, Sir Humphrey Gilbert, by the royal prerogative of Queen Elizabeth I, founded St John's, Newfoundland, as the first North American English seasonal camp. In 1600, the French established their first seasonal trading post at Tadoussac along the Saint Lawrence. French explorer Samuel de Champlain arrived in 1603 and established the first permanent year-round European settlements at Port Royal (in 1605) and Quebec City (in 1608). Among the colonists of New France, Canadiens extensively settled the Saint Lawrence River valley and Acadians settled the present-day Maritimes, while fur traders and Catholic missionaries explored the Great Lakes, Hudson Bay, and the Mississippi watershed to Louisiana. The Beaver Wars broke out in the mid-17th century over control of the North American fur trade.
The English established additional settlements in Newfoundland in 1610 along with settlements in the Thirteen Colonies to the south. A series of four wars erupted in colonial North America between 1689 and 1763; the later wars of the period constituted the North American theatre of the Seven Years' War. Mainland Nova Scotia came under British rule with the 1713 Treaty of Utrecht and Canada and most of New France came under British rule in 1763 after the Seven Years' War.
=== British North America ===
The Royal Proclamation of 1763 established First Nation treaty rights, created the Province of Quebec out of New France, and annexed Cape Breton Island to Nova Scotia. St John's Island (now Prince Edward Island) became a separate colony in 1769. To avert conflict in Quebec, the British Parliament passed the Quebec Act 1774, expanding Quebec's territory to the Great Lakes and Ohio Valley. More importantly, the Quebec Act afforded Quebec special autonomy and rights of self-administration at a time when the Thirteen Colonies were increasingly agitating against British rule. It re-established the French language, Catholic faith, and French civil law there, staving off the growth of an independence movement in contrast to the Thirteen Colonies. The Proclamation and the Quebec Act in turn angered many residents of the Thirteen Colonies, further fuelling anti-British sentiment in the years prior to the American Revolution.
After the successful American War of Independence, the 1783 Treaty of Paris recognized the independence of the newly formed United States and set the terms of peace, ceding British North American territories south of the Great Lakes and east of the Mississippi River to the new country. The war also caused a large out-migration of Loyalists, the settlers who had fought against American independence. Many moved to Canada, particularly Atlantic Canada, where their arrival changed the demographic distribution of the existing territories. New Brunswick was in turn split from Nova Scotia as part of a reorganization of Loyalist settlements in the Maritimes, which led to the incorporation of Saint John, New Brunswick, as Canada's first city. To accommodate the influx of English-speaking Loyalists in Central Canada, the Constitutional Act of 1791 divided the province of Canada into French-speaking Lower Canada (later Quebec) and English-speaking Upper Canada (later Ontario), granting each its own elected legislative assembly.
The Canadas were the main front in the War of 1812 between the United States and the United Kingdom. Peace came in 1815; no boundaries were changed. Immigration resumed at a higher level, with over 960,000 arrivals from Britain between 1815 and 1850. New arrivals included refugees escaping the Great Irish Famine as well as Gaelic-speaking Scots displaced by the Highland Clearances. Infectious diseases killed between 25 and 33 percent of Europeans who immigrated to Canada before 1891.
The desire for responsible government resulted in the abortive Rebellions of 1837. The Durham Report subsequently recommended responsible government and the assimilation of French Canadians into English culture. The Act of Union 1840 merged the Canadas into a united Province of Canada and responsible government was established for all provinces of British North America east of Lake Superior by 1855. The signing of the Oregon Treaty by Britain and the United States in 1846 ended the Oregon boundary dispute, extending the border westward along the 49th parallel. This paved the way for British colonies on Vancouver Island (1849) and in British Columbia (1858). The Anglo-Russian Treaty of Saint Petersburg (1825) established the border along the Pacific coast, but, even after the US Alaska Purchase of 1867, disputes continued about the exact demarcation of the Alaska–Yukon and Alaska–British Columbia border.
=== Confederation and expansion ===
Following three constitutional conferences, the British North America Act, 1867 officially proclaimed Canadian Confederation on July 1, 1867, initially with four provinces: Ontario, Quebec, Nova Scotia, and New Brunswick. Canada assumed control of Rupert's Land and the North-Western Territory to form the Northwest Territories, where the Métis' grievances ignited the Red River Rebellion and the creation of the province of Manitoba in July 1870. British Columbia and Vancouver Island (which had been united in 1866) joined the confederation in 1871 on the promise of a transcontinental railway extending to Victoria in the province within 10 years, while Prince Edward Island joined in 1873. In 1898, during the Klondike Gold Rush in the Northwest Territories, Parliament created the Yukon Territory. Alberta and Saskatchewan became provinces in 1905. Between 1871 and 1896, almost one quarter of the Canadian population emigrated south to the US.
To open the West and encourage European immigration, the Government of Canada sponsored the construction of three transcontinental railways (including the Canadian Pacific Railway), passed the Dominion Lands Act to regulate settlement, and established the North-West Mounted Police to assert authority over the territory. This period of westward expansion and nation building resulted in the displacement of many Indigenous peoples of the Canadian Prairies to "Indian reserves", clearing the way for ethnic European block settlements. It also saw the collapse of the plains bison in western Canada and the introduction of European cattle farms and wheat fields that came to dominate the land. The Indigenous peoples suffered widespread famine and disease due to the loss of the bison and their traditional hunting lands. The federal government did provide emergency relief, on the condition that Indigenous peoples move to the reserves. During this time, Canada introduced the Indian Act, extending its control over the First Nations to education, government, and legal rights.
=== Early 20th century ===
Because Britain still maintained control of Canada's foreign affairs under the British North America Act, 1867, its declaration of war in 1914 automatically brought Canada into the First World War. Volunteers sent to the Western Front later became part of the Canadian Corps, which played a substantial role in the Battle of Vimy Ridge and other major engagements of the war. The Conscription Crisis of 1917 erupted when the Unionist Cabinet's proposal to augment the military's dwindling number of active members with conscription was met with vehement objections from French-speaking Quebecers. In 1919, Canada joined the League of Nations independently of Britain, and the Statute of Westminster, 1931, affirmed Canada's independence.
The Great Depression of the early 1930s brought an economic downturn and hardship across Canada. In response, the Co-operative Commonwealth Federation (CCF) in Saskatchewan, under Tommy Douglas, pioneered many elements of a welfare state in the 1940s and 1950s. On the advice of Prime Minister William Lyon Mackenzie King, war with Germany was declared effective September 10, 1939, by King George VI, seven days after the United Kingdom. The delay underscored Canada's independence.
The first Canadian Army units arrived in Britain in December 1939. In all, over a million Canadians served in the armed forces during the Second World War. Canadian troops played important roles in many key battles of the war, including the failed 1942 Dieppe Raid, the Allied invasion of Italy, the Normandy landings, the Battle of Normandy, and the Battle of the Scheldt in 1944. Canada provided asylum for the Dutch monarchy while that country was occupied and is credited by the Netherlands for major contributions to its liberation from Nazi Germany. Despite another conscription crisis in Quebec in 1944, Canada finished the war with a large army and strong economy.
=== Contemporary era ===
The financial crisis of the Great Depression led the Dominion of Newfoundland to relinquish responsible government in 1934 and become a Crown colony ruled by a British governor. After two referendums, Newfoundlanders voted to join Canada in 1949 as a province.
Canada's post-war economic growth, combined with the policies of successive Liberal governments, led to the emergence of a new Canadian identity, marked by the adoption of the maple leaf flag in 1965, the implementation of official bilingualism (English and French) in 1969, and the institution of official multiculturalism in 1971. Social democratic programs were also instituted, such as Medicare, the Canada Pension Plan, and Canada Student Loans, though provincial governments, particularly Quebec and Alberta, opposed many of these as incursions into their jurisdictions.
Finally, another series of constitutional conferences resulted in the Canada Act 1982, the patriation of Canada's constitution from the United Kingdom, concurrent with the creation of the Canadian Charter of Rights and Freedoms. Canada had established complete sovereignty as an independent country under its own monarchy. In 1999, Nunavut became Canada's third territory after a series of negotiations with the federal government.
At the same time, Quebec underwent profound social and economic changes through the Quiet Revolution of the 1960s, giving birth to a secular nationalist movement. The radical Front de libération du Québec (FLQ) ignited the October Crisis with a series of bombings and kidnappings in 1970, and the sovereigntist Parti Québécois was elected in 1976, organizing an unsuccessful referendum on sovereignty-association in 1980. Attempts to accommodate Quebec nationalism constitutionally through the Meech Lake Accord failed in 1990. This led to the formation of the Bloc Québécois in Quebec and the invigoration of the Reform Party of Canada in the West. A second referendum followed in 1995, in which sovereignty was rejected by a slimmer margin of 50.6 to 49.4 percent. In 1997, the Supreme Court ruled unilateral secession by a province would be unconstitutional, and the Clarity Act was passed by Parliament, outlining the terms of a negotiated departure from Confederation.
In addition to the issues of Quebec sovereignty, a number of crises shook Canadian society in the late 1980s and early 1990s. These included the explosion of Air India Flight 182 in 1985, the largest mass murder in Canadian history; the École Polytechnique massacre in 1989, a university shooting targeting female students; and the Oka Crisis of 1990, the first of a number of violent confrontations between provincial governments and Indigenous groups. Canada joined the Gulf War in 1990 and was active in several peacekeeping missions in the 1990s, including operations in the Balkans during and after the Yugoslav Wars, and in Somalia, resulting in an incident that has been described as "the darkest era in the history of the Canadian military". Canada sent troops to Afghanistan in 2001, resulting in the largest number of Canadian deaths for any single military mission since the Korean War in the early 1950s.
In 2011, Canadian forces participated in the NATO-led intervention into the Libyan Civil War and also became involved in battling the Islamic State insurgency in Iraq in the mid-2010s. The country celebrated its sesquicentennial in 2017; three years later, the COVID-19 pandemic in Canada began on January 27, 2020, causing widespread social and economic disruption. In 2021, the announcement of the discovery of possible gravesites of over 200 Indigenous children near a former Canadian Indian residential school, part of a system that aimed to assimilate Indigenous children for most of the 20th century, refocused media and public attention on the cultural genocide carried out against Canada's Indigenous peoples. A trade war involving the United States, Canada, and Mexico began on February 1, 2025, when U.S. president Donald Trump signed orders imposing tariffs on goods from the two countries entering the United States.
== Geography ==
By total area (including its waters), Canada is the second-largest country. By land area alone, Canada ranks fourth, due to having the world's largest area of fresh water lakes. Stretching from the Atlantic Ocean in the east, along the Arctic Ocean to the north, and to the Pacific Ocean in the west, the country encompasses 9,984,670 square kilometres (3,855,100 sq mi) of territory. Canada also has vast maritime terrain, with the world's longest coastline of 243,042 kilometres (151,019 mi). In addition to sharing the world's largest land border with the United States—spanning 8,891 kilometres (5,525 mi)—Canada shares a land border with Greenland (and hence the Kingdom of Denmark) to the northeast, on Hans Island, and a maritime boundary with France's overseas collectivity of Saint Pierre and Miquelon to the southeast. Canada is also home to the world's northernmost settlement, Canadian Forces Station Alert, on the northern tip of Ellesmere Island—latitude 82.5°N—which lies 817 kilometres (508 mi) from the North Pole. In latitude, Canada's most northerly point of land is Cape Columbia in Nunavut at 83°6′41″N, with its southern extreme at Middle Island in Lake Erie at 41°40′53″N. In longitude, Canada's land extends from Cape Spear, Newfoundland, at 52°37'W, to Mount St. Elias, Yukon Territory, at 141°W.
Canada can be divided into seven physiographic regions: the Canadian Shield, the interior plains, the Great Lakes–St. Lawrence Lowlands, the Appalachian region, the Western Cordillera, Hudson Bay Lowlands, and the Arctic Archipelago. Boreal forests prevail throughout the country, ice is prominent in northern Arctic regions and through the Rocky Mountains, and the relatively flat Canadian Prairies in the southwest facilitate productive agriculture. The Great Lakes feed the St. Lawrence River (in the southeast) where the lowlands host much of Canada's economic output. Canada has over 2,000,000 lakes—563 of which are larger than 100 square kilometres (39 sq mi)—containing much of the world's fresh water. There are also fresh-water glaciers in the Canadian Rockies, the Coast Mountains, and the Arctic Cordillera. Canada is geologically active, having many earthquakes and potentially active volcanoes.
=== Climate ===
Average winter and summer high temperatures across Canada vary from region to region. Winters can be harsh in many parts of the country, particularly in the interior and Prairie provinces, which experience a continental climate, where daily average temperatures are near −15 °C (5 °F), but can drop below −40 °C (−40 °F) with severe wind chills. In non-coastal regions, snow can cover the ground for almost six months of the year, while in parts of the north snow can persist year-round. Coastal British Columbia has a temperate climate, with a mild and rainy winter. On the east and west coasts, average high temperatures are generally in the low 20s °C (70s °F), while between the coasts, the average summer high temperature ranges from 25 to 30 °C (77 to 86 °F), with temperatures in some interior locations occasionally exceeding 40 °C (104 °F).
Much of Northern Canada is covered by ice and permafrost. The future of the permafrost is uncertain because the Arctic has been warming at three times the global average as a result of climate change in Canada. Canada's annual average temperature over land has risen by 1.7 °C (3.1 °F), with changes ranging from 1.1 to 2.3 °C (2.0 to 4.1 °F) in various regions, since 1948. The rate of warming has been higher across the North and in the Prairies. In the southern regions of Canada, air pollution from both Canada and the United States—caused by metal smelting, burning coal to power utilities, and vehicle emissions—has resulted in acid rain, which has severely impacted waterways, forest growth, and agricultural productivity. Canada is one of the largest greenhouse gas emitters globally, with emissions having risen by 16.5 percent between 1990 and 2022.
=== Biodiversity ===
Canada is divided into 15 terrestrial and five marine ecozones. These ecozones encompass over 80,000 classified species of Canadian wildlife, with an equal number yet to be formally recognized or discovered. Although Canada has a low percentage of endemic species compared to other countries, human activities, invasive species, and environmental issues have put more than 800 species at risk of being lost. About 65 percent of Canada's resident species are considered "Secure". Over half of Canada's landscape is intact and relatively free of human development. The boreal forest of Canada is considered to be the largest intact forest on Earth, with approximately 3,000,000 square kilometres (1,200,000 sq mi) undisturbed by roads, cities or industry. Since the end of the last glacial period, Canada has consisted of eight distinct forest regions.
Approximately 12.1 percent of the nation's landmass and freshwater are conservation areas, including 11.4 percent designated as protected areas. Approximately 13.8 percent of its territorial waters are conserved, including 8.9 percent designated as protected areas. Canada's first national park, Banff National Park, established in 1885, spans 6,641 square kilometres (2,564 sq mi). Canada's oldest provincial park, Algonquin Provincial Park, established in 1893, covers an area of 7,653.45 square kilometres (2,955.01 sq mi). Lake Superior National Marine Conservation Area is the world's largest freshwater protected area, spanning roughly 10,000 square kilometres (3,900 sq mi). Canada's largest national wildlife region is the Scott Islands Marine National Wildlife Area, which spans 11,570.65 square kilometres (4,467.45 sq mi).
== Government and politics ==
Canada is described as a "full democracy", with a tradition of liberalism, and an egalitarian, moderate political ideology. An emphasis on social justice has been a distinguishing element of Canada's political culture. Peace, order, and good government, alongside an Implied Bill of Rights, are founding principles of Canadian federalism.
At the federal level, Canada has been dominated by two relatively centrist parties practising "brokerage politics": the centre-left leaning Liberal Party of Canada and the centre-right leaning Conservative Party of Canada (or its predecessors). The historically predominant Liberals position themselves at the centre of the political scale. Five parties had representatives elected to Parliament in the 2025 election—the Liberals, who formed a minority government; the Conservatives, who became the Official Opposition; the Bloc Québécois; the New Democratic Party (occupying the left); and the Green Party. Far-right and far-left politics have never been a prominent force in Canadian society.
Canada has a parliamentary system within the context of a constitutional monarchy—the monarchy of Canada being the foundation of the executive, legislative, and judicial branches. The reigning monarch is also monarch of 14 other sovereign Commonwealth countries and Canada's 10 provinces. The monarch appoints a representative, the governor general, on the advice of the prime minister, to carry out most of their ceremonial royal duties.
The monarchy is the source of sovereignty and authority in Canada. However, while the governor general or monarch may exercise their power without ministerial advice in rare crisis situations, the use of the executive powers (or royal prerogative) is otherwise directed by the Cabinet, a committee of ministers of the Crown responsible to the elected House of Commons and chosen and headed by the prime minister, the head of government. To ensure the stability of government, the governor general will usually appoint as prime minister the person who is the current leader of the political party that can obtain the confidence of a majority of members in the House. The Prime Minister's Office (PMO) is one of the most powerful institutions in government, initiating most legislation for parliamentary approval and selecting for appointment by the Crown the governor general, lieutenant governors, senators, federal court judges, and heads of Crown corporations and government agencies. The leader of the party with the second-most seats usually becomes the leader of the Official Opposition and is part of an adversarial parliamentary system intended to keep the government in check.
The Parliament of Canada passes all federal statute laws. It comprises the monarch, the House of Commons, and the Senate. While Canada inherited the British concept of parliamentary supremacy, with the enactment of the Constitution Act, 1982, this was later all but completely superseded by the American notion of the supremacy of the law.
Each of the 343 members of Parliament in the House of Commons is elected by simple plurality in an electoral district or riding. The Constitution Act, 1982, requires that no more than five years pass between elections, although the Canada Elections Act limits this to four years with a "fixed" election date in October; general elections still must be called by the governor general and can be triggered by either the advice of the prime minister or a lost confidence vote in the House. The 105 members of the Senate, whose seats are apportioned on a regional basis, serve until age 75.
Canadian federalism divides government responsibilities between the federal government and the 10 provinces. Provincial legislatures are unicameral and operate in parliamentary fashion similar to the House of Commons. Canada's three territories also have legislatures, but these are not sovereign, have fewer constitutional responsibilities than the provinces, and differ structurally from their provincial counterparts.
=== Law ===
The Constitution of Canada is the supreme law of the country and consists of written text and unwritten conventions. The Constitution Act, 1867 (known as the British North America Act, 1867 prior to 1982), affirmed governance based on parliamentary precedent and divided powers between the federal and provincial governments. The Statute of Westminster, 1931, granted full autonomy, and the Constitution Act, 1982, ended all legislative ties to Britain, as well as adding a constitutional amending formula and the Canadian Charter of Rights and Freedoms. The Charter guarantees basic rights and freedoms that usually cannot be overridden by any government; a notwithstanding clause allows Parliament and the provincial legislatures to override certain sections of the Charter for a period of five years.
Canada's judiciary interprets laws and has the power to strike down acts of Parliament that violate the constitution. The Supreme Court of Canada is the highest court, final arbiter, and has been led since 2017 by Richard Wagner, the Chief Justice of Canada. The governor general appoints the court's nine members on the advice of the prime minister and minister of justice. The federal Cabinet also appoints justices to superior courts in the provincial and territorial jurisdictions.
Common law prevails everywhere except Quebec, where civil law predominates. Criminal law is solely a federal responsibility and is uniform throughout Canada. Law enforcement, including criminal courts, is officially a provincial responsibility, conducted by provincial and municipal police forces. In most rural and some urban areas, policing responsibilities are contracted to the federal Royal Canadian Mounted Police.
Canadian Aboriginal law provides certain constitutionally recognized rights to land and traditional practices for Indigenous groups in Canada. Various treaties and case laws were established to mediate relations between Europeans and many Indigenous peoples. The role of Aboriginal law and the rights they support were reaffirmed by section 35 of the Constitution Act, 1982. These rights may include provision of services, such as healthcare through the Indian Health Transfer Policy, and exemption from taxation.
=== Provinces and territories ===
Canada is a federation composed of 10 federated states, called provinces, and three federal territories. These may be grouped into four main regions: Western Canada, Central Canada, Atlantic Canada, and Northern Canada (Eastern Canada refers to Central Canada and Atlantic Canada together). Provinces and territories have responsibility for social programs such as healthcare, education, and welfare, as well as administration of justice (but not criminal law). Although the provinces collect more revenue than the federal government, equalization payments are made by the federal government to ensure reasonably uniform standards of services and taxation are kept between the richer and poorer provinces.
The major difference between a Canadian province and a territory is that provinces receive their sovereignty from the Crown and their power and authority from the Constitution Act, 1867, whereas territorial governments have powers delegated to them by the Parliament of Canada, and their commissioners represent the King in his federal Council, rather than the monarch directly. The powers flowing from the Constitution Act, 1867, are divided between the federal government and the provincial governments to exercise exclusively, and any changes to that arrangement require a constitutional amendment, while changes to the roles and powers of the territories may be performed unilaterally by the Parliament of Canada.
=== Foreign relations ===
Canada is recognized as a middle power for its role in global affairs with a tendency to pursue multilateral and international solutions. Canada is known for its commitment to international peace and security, as well as being a mediator in conflicts, and for providing aid to developing countries.
Canada and the United States have a long and complex relationship; historically close allies, they co-operate regularly on military campaigns and humanitarian efforts. Canada also maintains historic and traditional ties to the United Kingdom and to France, along with both countries' former colonies through its membership in the Commonwealth of Nations and the Organisation internationale de la Francophonie. Canada is noted for having a positive relationship with the Netherlands, owing, in part, to its contribution to the Dutch liberation during the Second World War. Canada has diplomatic and consular offices in over 270 locations in approximately 180 foreign countries.
Canada promotes its domestically shared values through participating in multiple international organizations. Canada was a founding member of the United Nations (UN) in 1945 and formed the North American Aerospace Defense Command together with the United States in 1958. The country has membership in the World Trade Organization, the Five Eyes, the G7, and the Organisation for Economic Co-operation and Development (OECD). The country was a founding member of the Asia-Pacific Economic Cooperation forum (APEC) in 1989 and joined the Organization of American States (OAS) in 1990. Canada ratified the Universal Declaration of Human Rights (UDHR) in 1948, and seven principal UN human rights conventions and covenants since then.
=== Military and peacekeeping ===
Alongside many domestic obligations, more than 3,000 Canadian Armed Forces (CAF) personnel are deployed in multiple foreign military operations. The Canadian unified forces comprise the Royal Canadian Navy, Canadian Army, and Royal Canadian Air Force. The nation employs a professional, volunteer force of approximately 68,000 active personnel and 27,000 reserve personnel—increasing to 71,500 and 30,000 respectively under "Strong, Secure, Engaged"—with a sub-component of approximately 5,000 Canadian Rangers. In 2022, Canada's military expenditure totalled approximately $26.9 billion, or around 1.2 percent of the country's gross domestic product (GDP) – placing it 14th for military expenditure by country.
Canada's role in developing peacekeeping and its participation in major peacekeeping initiatives during the 20th century have contributed significantly to its positive global image. Peacekeeping is deeply embedded in Canadian culture and a distinguishing feature that Canadians feel sets their foreign policy apart from that of the United States. Canada has long been reluctant to participate in military operations that are not sanctioned by the United Nations, such as the Vietnam War or the 2003 invasion of Iraq. Since the 21st century, Canadian direct participation in UN peacekeeping efforts has greatly declined. The large decrease was a result of Canada directing its participation to UN-sanctioned military operations through the North Atlantic Treaty Organization, rather than directly through the UN. The change to participation via NATO has resulted in a shift towards more militarized and deadly missions rather than traditional peacekeeping duties.
== Economy ==
Canada has a highly developed mixed-market economy, with the world's ninth-largest nominal GDP as of 2023, at approximately US$2.221 trillion. It is one of the world's largest trading nations, with a highly globalized economy. In 2021, Canadian trade in goods and services reached $2.016 trillion. Canada's exports totalled over $637 billion, while its imported goods were worth over $631 billion, of which approximately $391 billion originated from the United States. In 2018, Canada had a trade deficit in goods of $22 billion and a trade deficit in services of $25 billion. The Toronto Stock Exchange is the ninth-largest stock exchange in the world by market capitalization, listing over 1,500 companies with a combined market capitalization of over US$2 trillion.
The Bank of Canada is the central bank of the country. The minister of finance and minister of innovation, science, and industry use data from Statistics Canada to enable financial planning and develop economic policy. Canada has a strong cooperative banking sector, with the world's highest per-capita membership in credit unions. It ranks 14th in the 2023 Corruption Perceptions Index and "is widely regarded as among the least corrupt countries of the world". It ranks high in the Global Competitiveness Report (19th in 2024). Canada's economy ranks above most Western nations on the Heritage Foundation's Index of Economic Freedom and experiences a relatively low level of income disparity. The country's average household disposable income per capita is "well above" the OECD average. Canada ranks among the lowest of the most developed countries for housing affordability and foreign direct investment.
Since the early 20th century, the growth of Canada's manufacturing, mining, and service sectors has transformed the nation from a largely rural economy to an urbanized, industrial one. The Canadian economy is dominated by the service industry, which employs about three-quarters of the country's workforce. Canada has an unusually important primary sector, of which the forestry and petroleum industries are the most prominent components. Many towns in northern Canada, where agriculture is difficult, are sustained by nearby mines or sources of timber.
Canada's economic integration with the United States has increased significantly since the Second World War. The Canada – United States Free Trade Agreement (FTA) of 1988 eliminated tariffs between the two countries, while the North American Free Trade Agreement (NAFTA) expanded the free-trade zone to include Mexico in 1994 (later replaced by the Canada–United States–Mexico Agreement). As of 2023, Canada is a signatory to 15 free trade agreements with 51 different countries.
Canada is one of the few developed nations that are net exporters of energy. Atlantic Canada possesses vast offshore deposits of natural gas, and Alberta hosts the fourth-largest oil reserves in the world. The vast Athabasca oil sands and other oil reserves give Canada 13 percent of global oil reserves, constituting the world's third- or fourth-largest. Canada is additionally one of the world's largest suppliers of agricultural products; the Canadian Prairies region is one of the most important global producers of wheat, canola, and other grains. Canada's main exports are zinc, uranium, gold, nickel, platinoids, aluminum, steel, iron ore, coking coal, lead, copper, molybdenum, cobalt, and cadmium. Canada has a sizeable manufacturing sector centred in southern Ontario and Quebec, with automobiles and aeronautics representing particularly important industries. The country's fishing industry is also a key contributor to the economy.
=== Science and technology ===
In 2020, Canada spent approximately $41.9 billion on domestic research and development, with supplementary estimates for 2022 at $43.2 billion. As of 2023, the country has produced 15 Nobel laureates in physics, chemistry, and medicine. The country ranks seventh in the worldwide share of articles published in scientific journals, according to the Nature Index, and is home to the headquarters of a number of global technology firms. Canada has one of the highest levels of Internet access in the world, with over 33 million users, equivalent to around 94 percent of its total population.
Canada's achievements in science and technology include the creation of the modern alkaline battery, the discovery of insulin, the development of the polio vaccine, and discoveries about the interior structure of the atomic nucleus. Other major Canadian scientific contributions include the artificial cardiac pacemaker, mapping the visual cortex, the development of the electron microscope, plate tectonics, deep learning, multi-touch technology, and the identification of the first black hole, Cygnus X-1. Canada has a long history of discovery in genetics, which include stem cells, site-directed mutagenesis, T-cell receptor, and the identification of the genes that cause Fanconi anemia, cystic fibrosis, and early-onset Alzheimer's disease, among numerous other diseases.
The Canadian Space Agency runs an active space program focused on deep-space, planetary, and aviation research, along with rockets and satellites. Canada launched its first satellite, Alouette 1, in 1962. It contributes to the International Space Station and is known for its robotic tools, such as multiple Canadarms. Canada has initiated many long-term projects, including the Radarsat satellite series and the Black Brant rocket series.
== Demographics ==
The 2021 Canadian census enumerated a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. It is estimated that Canada's population surpassed 40,000,000 in 2023. The main drivers of population growth are immigration and, to a lesser extent, natural growth. Canada has one of the highest per-capita immigration rates in the world, driven mainly by economic policy and family reunification. A record 405,000 immigrants were admitted in 2021. Canada leads the world in refugee resettlement; it resettled more than 47,600 in 2022. New immigrants settle mostly in major urban areas, such as Toronto, Montreal, and Vancouver.
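The cited 5.2 percent increase can be reproduced with simple arithmetic. A minimal sketch follows; note the 2016 census total used below (35,151,728, from Statistics Canada) is an assumption not stated in this section.

```python
# Sketch: check the reported 2016->2021 census growth rate.
pop_2021 = 36_991_981
pop_2016 = 35_151_728  # assumed 2016 census count (not given in the text)

growth_pct = (pop_2021 - pop_2016) / pop_2016 * 100
print(f"Growth 2016-2021: {growth_pct:.1f}%")  # about 5.2%
```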
Canada's population density, at 4.2 inhabitants per square kilometre (11/sq mi), is among the lowest in the world, with approximately 95 percent of the population residing south of the 55th parallel north. About 80 percent of the population lives within 150 kilometres (93 mi) of the border with the contiguous United States. Canada is highly urbanized, with over 80 percent of the population living in urban centres. The majority of Canadians (over 70 percent) live below the 49th parallel, with 50 percent of Canadians living south of 45°42′ (45.7 degrees) north. The most densely populated part of the country is the Quebec City–Windsor Corridor in Southern Quebec and Southern Ontario along the Great Lakes and the St. Lawrence River.
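As a quick sanity check on the unit conversion in the density figure above, 4.2 people per square kilometre works out to roughly 11 per square mile. A minimal sketch, using the standard square-kilometres-per-square-mile factor:

```python
# Convert population density from per-km2 to per-square-mile.
KM2_PER_SQ_MI = 2.589988  # square kilometres in one square mile

density_per_km2 = 4.2
density_per_sq_mi = density_per_km2 * KM2_PER_SQ_MI
print(f"{density_per_sq_mi:.1f} people per sq mi")  # about 10.9, i.e. ~11
```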
The majority of Canadians (81.1 percent) live in family households, 12.1 percent report living alone, and 6.8 percent live with other relatives or unrelated persons. Fifty-one percent of households are couples with or without children, 8.7 percent are single-parent households, 2.9 percent are multigenerational households, and 29.3 percent are single-person households.
=== Ethnicity ===
Respondents in the 2021 Canadian census self-reported over 450 "ethnic or cultural origins". The major panethnic groups chosen were: European (52.5 percent), North American (22.9 percent), Asian (19.3 percent), North American Indigenous (6.1 percent), African (3.8 percent), Latin, Central and South American (2.5 percent), Caribbean (2.1 percent), Oceanian (0.3 percent), and other (6 percent). Over 60 percent of Canadians reported a single origin, and 36 percent reported multiple ethnic origins; the overall total therefore exceeds 100 percent.
The country's ten largest self-reported ethnic or cultural origins in 2021 were Canadian (accounting for 15.6 percent of the population), followed by English (14.7 percent), Irish (12.1 percent), Scottish (12.1 percent), French (11.0 percent), German (8.1 percent), Chinese (4.7 percent), Italian (4.3 percent), Indian (3.7 percent), and Ukrainian (3.5 percent).
Of the 36.3 million people enumerated in 2021, approximately 24.5 million reported being "White", representing 67.4 percent of the population. The Indigenous population, representing 5 percent or 1.8 million people, grew by 9.4 percent from 2016 to 2021, compared with 5.3 percent growth for the non-Indigenous population. More than one in four Canadians (26.5 percent of the population) belonged to a non-White, non-Indigenous visible minority, the largest of which in 2021 were South Asian (2.6 million people; 7.1 percent), Chinese (1.7 million; 4.7 percent), Black (1.5 million; 4.3 percent), Filipino (960,000; 2.6 percent), Arab (690,000; 1.9 percent), Latin American (580,000; 1.6 percent), Southeast Asian (390,000; 1.1 percent), West Asian (360,000; 1.0 percent), Korean (220,000; 0.6 percent), and Japanese (99,000; 0.3 percent).
Between 2011 and 2016, the visible minority population rose by 18.4 percent. In 1961, about 300,000 people, less than two percent of Canada's population, were members of visible minority groups. The 2021 census indicated that 8.3 million people, or almost one-quarter (23.0 percent) of the population, reported being or having been a landed immigrant or permanent resident in Canada, surpassing the previous record of 22.3 percent set in the 1921 census. In 2021, India, China, and the Philippines were the top three countries of origin for immigrants moving to Canada.
=== Languages ===
A multitude of languages are used by Canadians, with English and French (the official languages) being the mother tongues of approximately 54 percent and 19 percent of Canadians, respectively. Canada's official bilingualism policies give citizens the right to receive federal government services in either English or French, with official-language minorities guaranteed their own schools in all provinces and territories.
Quebec's 1974 Official Language Act established French as the only official language of the province. Although more than 82 percent of French-speaking Canadians live in Quebec, there are substantial Francophone populations in New Brunswick, Alberta, and Manitoba, with Ontario having the largest French-speaking population outside Quebec. New Brunswick, the only officially bilingual province, has an Acadian French minority constituting 33 percent of the population. There are also clusters of Acadians in southwestern Nova Scotia, on Cape Breton Island, and in central and western Prince Edward Island.
Other provinces have no official languages as such, but French is used as a language of instruction, in courts, and for other government services, in addition to English. Manitoba, Ontario, and Quebec allow for both English and French to be spoken in the provincial legislatures and laws are enacted in both languages. In Ontario, French has some legal status, but is not fully co-official. There are 11 Indigenous language groups, composed of more than 65 distinct languages and dialects. Several Indigenous languages have official status in the Northwest Territories. Inuktitut is the majority language in Nunavut and is one of three official languages in the territory.
As of the 2021 census, just over 7.8 million Canadians listed a non-official language as their first language. Some of the most common non-official first languages include Mandarin (679,255 first-language speakers), Punjabi (666,585), Cantonese (553,380), Spanish (538,870), Arabic (508,410), Tagalog (461,150), Italian (319,505), German (272,865), and Tamil (237,890). The country is also home to many sign languages, some of which are Indigenous. American Sign Language (ASL) is used across the country due to the prevalence of ASL in primary and secondary schools. Quebec Sign Language (LSQ) is used primarily in Quebec.
=== Religion ===
Canada is religiously diverse, encompassing a wide range of beliefs and customs. The Constitution of Canada refers to God; however, Canada has no official church and the government is officially committed to religious pluralism. Freedom of religion in Canada is a constitutionally protected right.
Rates of religious adherence have steadily decreased since the 1970s. With Christianity in decline after having once been central and integral to Canadian culture and daily life, Canada has become a post-Christian, secular state. Although the majority of Canadians consider religion to be unimportant in their daily lives, they still believe in God. The practice of religion is generally considered a private matter.
According to the 2021 census, Christianity is the largest religion in Canada; Roman Catholics, at 29.9 percent of the population, form the largest group of adherents. Christians overall represent 53.3 percent of the population, followed by people reporting irreligion or no religion at 34.6 percent. Other faiths include Islam (4.9 percent), Hinduism (2.3 percent), Sikhism (2.1 percent), Buddhism (1.0 percent), Judaism (0.9 percent), and Indigenous spirituality (0.2 percent). Canada has the second-largest national Sikh population, behind India.
== Health ==
Healthcare in Canada is delivered through the provincial and territorial systems of publicly funded health care, informally called Medicare. It is guided by the provisions of the Canada Health Act of 1984 and is universal. Universal access to publicly funded health services "is often considered by Canadians as a fundamental value that ensures national healthcare insurance for everyone wherever they live in the country". Around 30 percent of Canadians' healthcare is paid for through the private sector. This mostly pays for services not covered or partially covered by Medicare, such as prescription drugs, dentistry and optometry. Approximately 65 to 75 percent of Canadians have some form of supplementary health insurance; many receive it through their employers or access secondary social service programs.
In common with many other developed countries, Canada is experiencing an increase in healthcare expenditures due to a demographic shift toward an older population, with more retirees and fewer people of working age. In 2021, the average age in Canada was 41.9 years. Life expectancy is 81.1 years. A 2016 report by the chief public health officer found that 88 percent of Canadians, one of the highest proportions of the population among G7 countries, indicated that they "had good or very good health". Eighty percent of Canadian adults self-report having at least one major risk factor for chronic disease: smoking, physical inactivity, unhealthy eating or excessive alcohol use. Canada has one of the highest rates of adult obesity among OECD countries, contributing to approximately 2.7 million cases of diabetes. Four chronic diseases—cancer (leading cause of death), cardiovascular diseases, respiratory diseases, and diabetes—account for 65 percent of deaths in Canada. There are approximately 8 million people aged 15 and older with one or more disabilities in Canada.
In 2021, the Canadian Institute for Health Information reported that healthcare spending reached $308 billion, or 12.7 percent of Canada's GDP for that year. In 2022, Canada's per-capita health expenditure ranked 12th among health-care systems in the OECD. Canada has performed close to, or above, the average on the majority of OECD health indicators since the early 2000s, ranking above the average on OECD indicators for wait times and access to care, with average scores for quality of care and use of resources. The Commonwealth Fund's 2021 report comparing the healthcare systems of the 11 most developed countries ranked Canada second-to-last. Identified weaknesses were a comparatively higher infant mortality rate, the prevalence of chronic conditions, long wait times, poor availability of after-hours care, and a lack of prescription drug and dental coverage. Growing problems in Canada's health system include shortages of healthcare professionals and hospital capacity.
== Education ==
Education in Canada is for the most part provided publicly, funded and overseen by federal, provincial, and local governments. Education falls within provincial jurisdiction, and a province's curriculum is overseen by its government. Education in Canada is generally divided into primary education, followed by secondary and post-secondary education. Education in both English and French is available in most places across Canada. Canada has a large number of universities, almost all of which are publicly funded. Established in 1663, Université Laval is the oldest post-secondary institution in Canada. The nation's three top-ranking universities are the University of Toronto, McGill, and the University of British Columbia. The largest university is the University of Toronto, which has over 85,000 students.
According to a 2022 report by the OECD, Canada is one of the most educated countries in the world; the country ranks first worldwide in the percentage of adults having tertiary education, with over 56 percent of Canadian adults having attained at least an undergraduate college or university degree. Canada spends an average of 5.3 percent of its GDP on education. The country invests heavily in tertiary education (more than US$20,000 per student). As of 2022, 89 percent of adults aged 25 to 64 have earned the equivalent of a high-school degree, compared to an OECD average of 75 percent.
Schooling is mandatory from ages 5–7 until ages 16–18, depending on the jurisdiction, contributing to an adult literacy rate of 99 percent. Just over 60,000 children were homeschooled in the country as of 2016. Canada is a well-performing OECD country in reading literacy, mathematics, and science, with the average student scoring 523.7 in 2015, compared with the OECD average of 493.
== Culture ==
Historically, Canada has been influenced by British, French, and Indigenous cultures and traditions. During the 20th century, Canadians of African, Caribbean, and Asian origin have added to the Canadian identity and its culture.
Canada's culture draws influences from its broad range of constituent nationalities, and policies that promote a just society are constitutionally protected. Since the 1960s, Canada has emphasized human rights and inclusiveness for all its people. The official state policy of multiculturalism is often cited as one of Canada's significant accomplishments and a key distinguishing element of Canadian identity. In Quebec, cultural identity is strong and there is a French Canadian culture that is distinct from English Canadian culture. As a whole, Canada is in theory a cultural mosaic of regional ethnic subcultures.
Canada's approach to governance, emphasizing multiculturalism based on selective immigration, social integration, and the suppression of far-right politics, has wide public support. Government policies such as publicly funded health care, higher taxation to redistribute wealth, the outlawing of capital punishment, strong efforts to eliminate poverty, strict gun control, a social liberal attitude toward women's rights (like pregnancy termination) and LGBT rights, and legalized euthanasia and cannabis use are indicators of Canada's political and cultural values. Canadians also identify with the country's foreign aid policies, peacekeeping roles, the national park system, and the Canadian Charter of Rights and Freedoms.
=== Symbols ===
Themes of nature, pioneers, trappers, and traders played an important part in the early development of Canadian symbolism. Modern symbols emphasize the country's geography, cold climate, lifestyles, and the Canadianization of traditional European and Indigenous symbols. The use of the maple leaf as a Canadian symbol dates to the early 18th century. The maple leaf is depicted on Canada's current and previous flags and on the Arms of Canada. Canada's official tartan, known as the "maple leaf tartan", reflects the colours of the maple leaf through the seasons—green in the spring, gold in the early autumn, red at the first frost, and brown after falling. The Arms of Canada are closely modelled after those of the United Kingdom, with French and distinctive Canadian elements replacing or added to those derived from the British version.
Other prominent symbols include the national motto, "A mari usque ad mare" ("From Sea to Sea"), the sports of ice hockey and lacrosse, the beaver, Canada goose, common loon, Canadian horse, the Royal Canadian Mounted Police, the Canadian Rockies, and, more recently, the totem pole and Inuksuk. Canadian beer, maple syrup, tuques, canoes, Nanaimo bars, butter tarts, and poutine are defined as uniquely Canadian. Canadian coins feature many of these symbols: the loon on the $1 coin, the Arms of Canada on the 50¢ piece, and the beaver on the nickel. An image of the monarch appears on $20 bank notes and the obverse of coins.
=== Literature ===
Canadian literature is often divided into French- and English-language literatures, which are rooted in the literary traditions of France and Britain, respectively. The earliest Canadian narratives were of travel and exploration. This progressed into three major themes of historical Canadian literature: nature, frontier life, and Canada's position within the world, all of which tie into the garrison mentality. In recent decades, Canada's literature has been strongly influenced by immigrants from around the world. By the 1990s, Canadian literature was viewed as some of the world's best.
Numerous Canadian authors have accumulated international literary awards, including novelist, poet, and literary critic Margaret Atwood, who received two Booker Prizes; Alice Munro, who received a Nobel Prize in Literature; and Booker Prize recipient Michael Ondaatje, who wrote the novel The English Patient, which was adapted as a film of the same name that won the Academy Award for Best Picture. L. M. Montgomery produced a series of children's novels beginning in 1908 with Anne of Green Gables.
=== Media ===
Canada's media is highly autonomous, uncensored, diverse, and very regionalized. The Broadcasting Act declares "the system should serve to safeguard, enrich, and strengthen the cultural, political, social, and economic fabric of Canada". Canada has a well-developed media sector, but its cultural output—particularly in English films, television shows, and magazines—is often overshadowed by imports from the United States. As a result, the preservation of a distinctly Canadian culture is supported by federal government programs, laws, and institutions such as the Canadian Broadcasting Corporation (CBC), the National Film Board of Canada (NFB), and the Canadian Radio-television and Telecommunications Commission (CRTC).
Canadian mass media, both print and digital, and in both official languages, is largely dominated by a "handful of corporations". The largest of these corporations is the country's national public broadcaster, the Canadian Broadcasting Corporation, which also plays a significant role in producing domestic cultural content, operating its own radio and TV networks in both English and French. In addition to the CBC, some provincial governments offer their own public educational TV broadcast services as well, such as TVOntario and Télé-Québec.
Non-news media content in Canada, including film and television, is influenced both by local creators as well as by imports from the United States, the United Kingdom, Australia, and France. In an effort to reduce the amount of foreign-made media, government interventions in television broadcasting can include both regulation of content and public financing. Canadian tax laws limit foreign competition in magazine advertising.
=== Visual arts ===
Art in Canada is marked by thousands of years of habitation by Indigenous peoples, and, in later times, artists have combined British, French, Indigenous, and American artistic traditions, at times embracing European styles while working to promote nationalism. The nature of Canadian art reflects these diverse origins, as artists have taken their traditions and adapted these influences to reflect the reality of their lives in Canada.
Modern painting in Canada has been greatly influenced by several major movements that have emerged over the years. One of the most prominent is the Group of Seven, founded in 1920, which aimed to capture the Canadian wilderness in its artwork. Associated with the group was Emily Carr, known for her landscapes and portrayals of the Indigenous peoples of the Pacific Northwest Coast. The mid-20th century saw the rise of abstract art in Canada, with artists like Jean-Paul Riopelle and Paul-Émile Borduas. The 1960s and 1970s saw the emergence of conceptual art, with artists such as Michael Snow and Ian Carr-Harris. This era also saw the emergence of Indigenous artists like Norval Morrisseau, who combined traditional Indigenous techniques with modern art styles. In more recent years, contemporary art has seen a revival of figurative art, with artists such as Kent Monkman and Shuvinai Ashoona.
=== Music ===
Canadian music reflects a variety of regional scenes. Canada has developed a vast music infrastructure that includes church halls, chamber halls, conservatories, academies, performing arts centres, record companies, radio stations, and television music video channels. Government support programs, such as the Canada Music Fund, assist a wide range of musicians and entrepreneurs who create, produce and market original and diverse Canadian music. As a result of its cultural importance, as well as government initiatives and regulations, the Canadian music industry is one of the largest in the world, producing internationally renowned composers, musicians, and ensembles. Music broadcasting in the country is regulated by the CRTC. The Canadian Academy of Recording Arts and Sciences presents Canada's music industry awards, the Juno Awards. The Canadian Music Hall of Fame honours Canadian musicians for their lifetime achievements.
Patriotic music in Canada dates back over 200 years. The earliest work of patriotic music in Canada, "The Bold Canadian", was written in 1812. "The Maple Leaf Forever", written in 1866, was a popular patriotic song throughout English Canada and, for many years, served as an unofficial national anthem. "O Canada" also served as an unofficial national anthem for much of the 20th century and was adopted as the country's official anthem in 1980.
=== Sports ===
Canada's official national sports are ice hockey and lacrosse. Other major professional sports include curling, basketball, baseball, soccer, and football. Great achievements in Canadian sports are recognized by numerous "Halls of Fame" and museums, such as Canada's Sports Hall of Fame.
Canada shares several major professional sports leagues with the United States. Canadian teams in these leagues include seven franchises in the National Hockey League, three Major League Soccer teams, and one team in each of Major League Baseball and the National Basketball Association. Other popular professional competitions include the Canadian Football League, National Lacrosse League, the Canadian Premier League, and the curling tournaments hosted by Curling Canada. Canadians identified hockey as their preferred sport for viewing, followed by soccer and then basketball.
In terms of participation, swimming was the most commonly reported sport by over one-third (35 percent) of Canadians in 2023. This was closely followed by cycling (33 percent) and running (27 percent). The popularity of specific sports varies; in general, the Canadian-born population was more likely to have participated in winter sports such as ice hockey (the most popular young adult team sport), skating, skiing and snowboarding, compared with immigrants, who were more likely to have played soccer (the most popular youth team sport), tennis or basketball. Sports such as golf, volleyball, badminton, bowling, and martial arts are also widely enjoyed at the youth and amateur levels.
Canada has enjoyed success both at the Winter Olympics and at the Summer Olympics—particularly the Winter Games as a "winter sports nation"—and has hosted high-profile international sporting events such as the 1976 Summer Olympics, the 1988 Winter Olympics, the 2010 Winter Olympics, the 2015 FIFA Women's World Cup, the 2015 Pan American Games and 2015 Parapan American Games. The country is scheduled to co-host the 2026 FIFA World Cup alongside Mexico and the United States.
== See also ==
Index of Canada-related articles
List of Canada-related topics by provinces and territories
Outline of Canada
== Notes ==
== References ==
This article incorporates content that is under an open licence from Statistics Canada.
== Further reading ==
== External links ==
Overviews
Canada from UCB Libraries GovPubs
Canada profile from the OECD
Key Development Forecasts for Canada from International Futures
Government
Official website of the Government of Canada
Official website of the Governor General of Canada
Official website of the Prime Ministers of Canada
Travel
Canada's official website for travel and tourism
Car

A car, or an automobile, is a motor vehicle with wheels. Most definitions of cars state that they run primarily on roads, seat one to eight people, have four wheels, and mainly transport people rather than cargo. There are around one billion cars in use worldwide.
The French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle in 1769, while the Swiss inventor François Isaac de Rivaz designed and constructed the first internal combustion-powered automobile in 1808. The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when the German inventor Carl Benz patented his Benz Patent-Motorwagen. Commercial cars became widely available during the 20th century. The 1901 Oldsmobile Curved Dash and the 1908 Ford Model T, both American cars, are widely considered the first mass-produced and mass-affordable cars, respectively. Cars were rapidly adopted in the US, where they replaced horse-drawn carriages. In Europe and other parts of the world, demand for automobiles did not increase until after World War II. In the 21st century, car usage is still increasing rapidly, especially in China, India, and other newly industrialised countries.
Cars have controls for driving, parking, passenger comfort, and a variety of lamps. Over the decades, additional features and controls have been added to vehicles, making them progressively more complex. These include rear-reversing cameras, air conditioning, navigation systems, and in-car entertainment. Most cars in use in the early 2020s are propelled by an internal combustion engine, fueled by the combustion of fossil fuels. Electric cars, which were invented early in the history of the car, became commercially available in the 2000s and are predicted to cost less to buy than petrol-driven cars before 2025. The transition from fossil fuel-powered cars to electric cars features prominently in most climate change mitigation scenarios, such as Project Drawdown's 100 actionable solutions for climate change.
There are costs and benefits to car use. The costs to the individual include acquiring the vehicle, interest payments (if the car is financed), repairs and maintenance, fuel, depreciation, driving time, parking fees, taxes, and insurance. The costs to society include resources used to produce cars and fuel, maintaining roads, land-use, road congestion, air pollution, noise pollution, public health, and disposing of the vehicle at the end of its life. Traffic collisions are the largest cause of injury-related deaths worldwide. Personal benefits include on-demand transportation, mobility, independence, and convenience. Societal benefits include economic benefits, such as job and wealth creation from the automotive industry, transportation provision, societal well-being from leisure and travel opportunities. People's ability to move flexibly from place to place has far-reaching implications for the nature of societies.
== Etymology ==
The English word car is believed to originate from Latin carrus/carrum "wheeled vehicle" or (via Old North French) Middle English carre "two-wheeled cart", both of which in turn derive from Gaulish karros "chariot". It originally referred to any wheeled horse-drawn vehicle, such as a cart, carriage, or wagon. The word also occurs in other Celtic languages.
"Motor car", attested from 1895, is the usual formal term in British English. "Autocar", a variant likewise attested from 1895 and literally meaning "self-propelled car", is now considered archaic. "Horseless carriage" is attested from 1895.
"Automobile", a classical compound derived from Ancient Greek autós (αὐτός) "self" and Latin mobilis "movable", entered English from French and was first adopted by the Automobile Club of Great Britain in 1897. It fell out of favour in Britain and is now used chiefly in North America, where the abbreviated form "auto" commonly appears as an adjective in compound formations like "auto industry" and "auto mechanic".
== History ==
In 1649, Hans Hautsch of Nuremberg built a clockwork-driven carriage. The first steam-powered vehicle was designed by Ferdinand Verbiest, a Flemish member of a Jesuit mission in China around 1672. It was a 65-centimetre-long (26 in) scale-model toy for the Kangxi Emperor that was unable to carry a driver or a passenger. It is not known with certainty if Verbiest's model was successfully built or run.
Nicolas-Joseph Cugnot is widely credited with building the first full-scale, self-propelled mechanical vehicle in about 1769; he created a steam-powered tricycle. He also constructed two steam tractors for the French Army, one of which is preserved in the French National Conservatory of Arts and Crafts. His inventions were limited by problems with water supply and maintaining steam pressure. In 1801, Richard Trevithick built and demonstrated his Puffing Devil road locomotive, believed by many to be the first demonstration of a steam-powered road vehicle. It was unable to maintain sufficient steam pressure for long periods and was of little practical use.
The development of external combustion (steam) engines is detailed as part of the history of the car but often treated separately from the development of cars in their modern understanding. A variety of steam-powered road vehicles were used during the first part of the 19th century, including steam cars, steam buses, phaetons, and steam rollers. In the United Kingdom, sentiment against them led to the Locomotive Acts of 1865.
In 1807, Nicéphore Niépce and his brother Claude created what was probably the world's first internal combustion engine (which they called a Pyréolophore), but installed it in a boat on the river Saone in France. Coincidentally, in 1807, the Swiss inventor François Isaac de Rivaz designed his own "de Rivaz internal combustion engine", and used it to develop the world's first vehicle to be powered by such an engine. The Niépces' Pyréolophore was fuelled by a mixture of Lycopodium powder (dried spores of the Lycopodium plant), finely crushed coal dust and resin that were mixed with oil, whereas de Rivaz used a mixture of hydrogen and oxygen. Neither design was successful, as was the case with others, such as Samuel Brown, Samuel Morey, and Etienne Lenoir, who each built vehicles (usually adapted carriages or carts) powered by internal combustion engines.
In November 1881, French inventor Gustave Trouvé demonstrated a three-wheeled car powered by electricity at the International Exposition of Electricity. Although several other German engineers (including Gottlieb Daimler, Wilhelm Maybach, and Siegfried Marcus) were working on cars at about the same time, the year 1886 is regarded as the birth year of the modern car—a practical, marketable automobile for everyday use—when the German Carl Benz patented his Benz Patent-Motorwagen; he is generally acknowledged as the inventor of the car.
In 1879, Benz was granted a patent for his first engine, which had been designed in 1878. Many of his other inventions made the use of the internal combustion engine feasible for powering a vehicle. His first Motorwagen was built in 1885 in Mannheim, Germany. He was awarded the patent for its invention as of his application on 29 January 1886 (under the auspices of his major company, Benz & Cie., which was founded in 1883). Benz began promotion of the vehicle on 3 July 1886, and about 25 Benz vehicles were sold between 1888 and 1893, when his first four-wheeler was introduced along with a cheaper model. Both were powered by four-stroke engines of his own design. Emile Roger of France, already producing Benz engines under license, now added the Benz car to his line of products. Because France was more open to the early cars, initially more were built and sold in France through Roger than Benz sold in Germany. In August 1888, Bertha Benz, the wife and business partner of Carl Benz, undertook the first road trip by car, to prove the road-worthiness of her husband's invention.
In 1896, Benz designed and patented the first internal-combustion flat engine, called boxermotor. During the last years of the 19th century, Benz was the largest car company in the world with 572 units produced in 1899 and, because of its size, Benz & Cie., became a joint-stock company. The first motor car in central Europe and one of the first factory-made cars in the world, was produced by Czech company Nesselsdorfer Wagenbau (later renamed to Tatra) in 1897, the Präsident automobil.
Daimler and Maybach founded Daimler Motoren Gesellschaft (DMG) in Cannstatt in 1890, and sold their first car in 1892 under the brand name Daimler. It was a horse-drawn stagecoach built by another manufacturer, which they retrofitted with an engine of their design. By 1895, about 30 vehicles had been built by Daimler and Maybach, either at the Daimler works or in the Hotel Hermann, where they set up shop after disputes with their backers. Benz, Maybach, and the Daimler team seem to have been unaware of each other's early work. They never worked together; by the time of the merger of the two companies, Daimler and Maybach were no longer part of DMG. Daimler died in 1900 and later that year, Maybach designed an engine named Daimler-Mercedes that was placed in a specially ordered model built to specifications set by Emil Jellinek. A small number of these vehicles were produced for Jellinek to race and market in his country. Two years later, in 1902, a new model DMG car was produced and the model was named Mercedes after the Maybach engine, which generated 35 hp. Maybach quit DMG shortly thereafter and opened a business of his own. Rights to the Daimler brand name were sold to other manufacturers.
In 1890, Émile Levassor and Armand Peugeot of France began producing vehicles with Daimler engines, and so laid the foundation of the automotive industry in France. In 1891, Auguste Doriot and his Peugeot colleague Louis Rigoulot completed the longest trip by a petrol-driven vehicle when their self-designed and built Daimler powered Peugeot Type 3 completed 2,100 kilometres (1,300 mi) from Valentigney to Paris and Brest and back again. They were attached to the first Paris–Brest–Paris bicycle race, but finished six days after the winning cyclist, Charles Terront.
The first design for an American car with a petrol internal combustion engine was made in 1877 by George Selden of Rochester, New York. Selden applied for a patent for a car in 1879, but the patent application expired because the vehicle was never built. After a delay of 16 years and a series of attachments to his application, on 5 November 1895, Selden was granted a US patent (U.S. patent 549,160) for a two-stroke car engine, which hindered, more than encouraged, development of cars in the United States. His patent was challenged by Henry Ford and others, and overturned in 1911.
In 1893, the first running, petrol-driven American car was built and road-tested by the Duryea brothers of Springfield, Massachusetts. The first public run of the Duryea Motor Wagon took place on 21 September 1893, on Taylor Street in Metro Center Springfield. Studebaker, a subsidiary of a long-established wagon and coach manufacturer, started to build cars in 1897, and commenced sales of electric vehicles in 1902 and petrol vehicles in 1904.
In Britain, there had been several attempts to build steam cars with varying degrees of success, with Thomas Rickett even attempting a production run in 1860. Santler from Malvern is recognised by the Veteran Car Club of Great Britain as having made the first petrol-driven car in the country in 1894, followed by Frederick William Lanchester in 1895, but these were both one-offs. The first production vehicles in Great Britain came from the Daimler Company, a company founded by Harry J. Lawson in 1896, after purchasing the right to use the name of the engines. Lawson's company made its first car in 1897, and they bore the name Daimler.
In 1892, German engineer Rudolf Diesel was granted a patent for a "New Rational Combustion Engine". In 1897, he built the first diesel engine. Steam-, electric-, and petrol-driven vehicles competed for a few decades, with petrol internal combustion engines achieving dominance in the 1910s. Although various pistonless rotary engine designs have attempted to compete with the conventional piston and crankshaft design, only Mazda's version of the Wankel engine has had more than very limited success. All in all, it is estimated that over 100,000 patents created the modern automobile and motorcycle.
=== Mass production ===
Large-scale, production-line manufacturing of affordable cars was started by Ransom Olds in 1901 at his Oldsmobile factory in Lansing, Michigan, and based upon stationary assembly line techniques pioneered by Marc Isambard Brunel at the Portsmouth Block Mills, England, in 1802. The assembly line style of mass production and interchangeable parts had been pioneered in the US by Thomas Blanchard in 1821, at the Springfield Armory in Springfield, Massachusetts. This concept was greatly expanded by Henry Ford, beginning in 1913 with the world's first moving assembly line for cars at the Highland Park Ford Plant.
As a result, Ford's cars came off the line at 15-minute intervals, much faster than previous methods, increasing productivity eightfold while using less manpower (from 12.5 man-hours to 1 hour 33 minutes). It was so successful that paint became a bottleneck. Only Japan black would dry fast enough, forcing the company to drop the variety of colours available before 1913 until fast-drying Duco lacquer was developed in 1926. This is the source of Ford's apocryphal remark, "any color as long as it's black". In 1914, an assembly line worker could buy a Model T with four months' pay.
Ford's complex safety procedures—especially assigning each worker to a specific location instead of allowing them to roam about—dramatically reduced the rate of injury. The combination of high wages and high efficiency is called "Fordism" and was copied by most major industries. The efficiency gains from the assembly line also coincided with the economic rise of the US. The assembly line forced workers to work at a certain pace with very repetitive motions which led to more output per worker while other countries were using less productive methods.
In the automotive industry, its success was dominant, and it quickly spread worldwide, seeing the founding of Ford France and Ford Britain in 1911, Ford Denmark in 1923, and Ford Germany in 1925; in 1921, Citroën was the first native European manufacturer to adopt the production method. Soon, companies had to have assembly lines or risk going bankrupt; by 1930, 250 companies which did not had disappeared.
Development of automotive technology was rapid, due in part to the hundreds of small manufacturers competing to gain the world's attention. Key developments included electric ignition and the electric self-starter (both by Charles Kettering, for the Cadillac Motor Company in 1910–1911), independent suspension, and four-wheel brakes.
Since the 1920s, nearly all cars have been mass-produced to meet market needs, so marketing plans often have heavily influenced car design. It was Alfred P. Sloan who established the idea of different makes of cars produced by one company, called the General Motors Companion Make Program, so that buyers could "move up" as their fortunes improved.
Reflecting the rapid pace of change, makes shared parts with one another so larger production volume resulted in lower costs for each price range. For example, in the 1930s, LaSalles, sold by Cadillac, used cheaper mechanical parts made by Oldsmobile; in the 1950s, Chevrolet shared bonnet, doors, roof, and windows with Pontiac; by the 1990s, corporate powertrains and shared platforms (with interchangeable brakes, suspension, and other parts) were common. Even so, only major makers could afford high costs, and even companies with decades of production, such as Apperson, Cole, Dorris, Haynes, or Premier, could not manage: of some two hundred American car makers in existence in 1920, only 43 survived in 1930, and with the Great Depression, by 1940, only 17 of those were left.
In Europe, much the same would happen. Morris set up its production line at Cowley in 1924, and soon outsold Ford, while beginning in 1923 to follow Ford's practice of vertical integration, buying Hotchkiss' British subsidiary (engines), Wrigley (gearboxes), and Osberton (radiators), for instance, as well as competitors, such as Wolseley: in 1925, Morris had 41 per cent of total British car production. Most British small-car assemblers, from Abbey to Xtra, had gone under. Citroën did the same in France, coming to cars in 1919; between them and other cheap cars in reply such as Renault's 10CV and Peugeot's 5CV, they produced 550,000 cars in 1925, and Mors, Hurtu, and others could not compete. Germany's first mass-manufactured car, the Opel 4PS Laubfrosch (Tree Frog), came off the line at Rüsselsheim in 1924, soon making Opel the top car builder in Germany, with 37.5 per cent of the market.
In Japan, car production was very limited before World War II. Only a handful of companies were producing vehicles in limited numbers, and these were small, three-wheeled vehicles for commercial use, like those of Daihatsu, or were the result of partnering with European companies, like Isuzu building the Wolseley A-9 in 1922. Mitsubishi also partnered with Fiat and built the Mitsubishi Model A based on a Fiat vehicle. Toyota, Nissan, Suzuki, Mazda, and Honda began as companies producing non-automotive products before the war, switching to car production during the 1950s. Kiichiro Toyoda's decision to take Toyoda Loom Works into automobile manufacturing would create what would eventually become Toyota Motor Corporation, the largest automobile manufacturer in the world. Subaru, meanwhile, was formed from a conglomerate of six companies that banded together as Fuji Heavy Industries, as a result of having been broken up under keiretsu legislation.
== Components and design ==
=== Propulsion and fuels ===
==== Fossil fuels ====
Most cars in use in the mid-2020s run on petrol burnt in an internal combustion engine (ICE). Some cities ban older, more polluting petrol-driven cars, and some countries plan to ban sales in the future. However, some environmental groups say this phase-out of fossil fuel vehicles must be brought forward to limit climate change. Production of petrol-fuelled cars peaked in 2017.
Other hydrocarbon fossil fuels also burnt by deflagration (rather than detonation) in ICE cars include diesel, autogas, and CNG. Removal of fossil fuel subsidies, concerns about oil dependence, tightening environmental laws and restrictions on greenhouse gas emissions are propelling work on alternative power systems for cars. This includes hybrid vehicles, plug-in electric vehicles and hydrogen vehicles. Out of all cars sold in 2021, nine per cent were electric, and by the end of that year there were more than 16 million electric cars on the world's roads. Despite rapid growth, less than five per cent of cars on the world's roads were fully electric and plug-in hybrid cars by the end of 2024. Cars for racing or speed records have sometimes employed jet or rocket engines, but these are impractical for common use. Oil consumption has increased rapidly in the 20th and 21st centuries because there are more cars; the 1980s oil glut even fuelled the sales of low-economy vehicles in OECD countries. The BRIC countries are adding to this consumption.
==== Batteries ====
In almost all hybrid (even mild hybrid) and pure electric cars, regenerative braking recovers some of the energy that would otherwise be wasted as heat in the friction brakes and returns it to a battery. Although all cars must have friction brakes (front disc brakes and either disc or drum rear brakes) for emergency stops, regenerative braking improves efficiency, particularly in city driving.
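As an illustrative sketch (not a figure from this article), the energy available to regenerative braking can be estimated from the standard kinetic energy formula E = ½mv²; the mass, speed, and recovery efficiency below are hypothetical values chosen for the example:

```python
def recoverable_energy_kj(mass_kg: float, speed_kmh: float, efficiency: float) -> float:
    """Estimate energy (kJ) returned to the battery when braking to a stop.

    efficiency is the assumed round-trip fraction of kinetic energy that
    regenerative braking actually recovers; the rest is lost to friction
    brakes, drivetrain losses, and battery charging losses.
    """
    speed_ms = speed_kmh / 3.6                       # convert km/h to m/s
    kinetic_energy = 0.5 * mass_kg * speed_ms ** 2   # E = 1/2 m v^2, in joules
    return kinetic_energy * efficiency / 1000        # recovered energy in kJ

# Hypothetical example: a 1,500 kg car braking to a stop from 50 km/h,
# assuming 60% of the kinetic energy is recovered.
print(round(recoverable_energy_kj(1500, 50, 0.6), 1))  # about 86.8 kJ
```

Repeated over many stops in city driving, this recovered energy is why regenerative braking helps most in stop-and-go traffic and least in steady highway cruising.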
=== User interface ===
Cars are equipped with controls used for driving, passenger comfort, and safety, normally operated by a combination of hands and feet, and occasionally by voice on 21st-century cars. These controls include a steering wheel, pedals for operating the brakes and controlling the car's speed (and, in a manual transmission car, a clutch pedal), a shift lever or stick for changing gears, and a number of buttons and dials for turning on lights, ventilation, and other functions. Modern cars' controls are now standardised, such as the location of the accelerator and brake, but this was not always the case. Controls are evolving in response to new technologies, for example, the electric car and the integration of mobile communications.
Some of the original controls are no longer required. For example, all cars once had controls for the choke valve, clutch, ignition timing, and a crank instead of an electric starter. However, new controls have also been added to vehicles, making them more complex. These include air conditioning, navigation systems, and in-car entertainment. Another trend is the replacement of physical knobs and switches by secondary controls with touchscreen controls such as BMW's iDrive and Ford's MyFord Touch. Another change is that while early cars' pedals were physically linked to the brake mechanism and throttle, in the early 2020s, cars have increasingly replaced these physical linkages with electronic controls.
=== Electronics and interior ===
Cars are typically equipped with interior lighting which can be toggled manually or set to light up automatically when doors are open, an entertainment system which originated from car radios, side windows which can be lowered or raised electrically (manually on earlier cars), and one or more auxiliary power outlets for supplying portable appliances such as mobile phones, portable fridges, power inverters, and electrical air pumps from the on-board electrical system. More costly upper-class and luxury cars are equipped earlier with features such as massage seats and collision avoidance systems.
Dedicated automotive fuses and circuit breakers prevent damage from electrical overload.
=== Lighting ===
Cars are typically fitted with multiple types of lights. These include headlights, which are used to illuminate the way ahead and make the car visible to other users, so that the vehicle can be used at night; in some jurisdictions, daytime running lights; red brake lights to indicate when the brakes are applied; amber turn signal lights to indicate the turn intentions of the driver; white-coloured reverse lights to illuminate the area behind the car (and indicate that the driver will be or is reversing); and on some vehicles, additional lights (e.g., side marker lights) to increase the visibility of the car. Interior lights on the ceiling of the car are usually fitted for the driver and passengers. Some vehicles also have a boot light and, more rarely, an engine compartment light.
=== Weight and size ===
During the late 20th and early 21st century, cars increased in weight due to batteries, modern steel safety cages, anti-lock brakes, airbags, and "more-powerful—if more efficient—engines" and, as of 2019, typically weigh between 1 and 3 tonnes (1.1 and 3.3 short tons; 0.98 and 2.95 long tons). Heavier cars are safer for the driver from a crash perspective, but more dangerous for other vehicles and road users. The weight of a car influences fuel consumption and performance, with more weight resulting in increased fuel consumption and decreased performance. The Wuling Hongguang Mini EV, a typical city car, weighs about 700 kilograms (1,500 lb). Heavier cars include SUVs and extended-length SUVs like the Suburban. Cars have also become wider.
Some places tax heavier cars more: as well as improving pedestrian safety this can encourage manufacturers to use materials such as recycled aluminium instead of steel. It has been suggested that one benefit of subsidising charging infrastructure is that cars can use lighter batteries.
=== Seating and body style ===
Most cars are designed to carry multiple occupants, often with four or five seats. Cars with five seats typically seat two passengers in the front and three in the rear. Full-size cars and large sport utility vehicles can often carry six, seven, or more occupants depending on the arrangement of the seats. On the other hand, sports cars are most often designed with only two seats. Utility vehicles like pickup trucks, combine seating with extra cargo or utility functionality. The differing needs for passenger capacity and their luggage or cargo space has resulted in the availability of a large variety of body styles to meet individual consumer requirements that include, among others, the sedan/saloon, hatchback, station wagon/estate, coupe, and minivan.
== Safety ==
Traffic collisions are the largest cause of injury-related deaths worldwide. Mary Ward became one of the first documented car fatalities in 1869 in Parsonstown, Ireland, and Henry Bliss one of the US's first pedestrian car casualties in 1899 in New York City. There are now standard tests for safety in new cars, such as the Euro and US NCAP tests, and insurance-industry-backed tests by the Insurance Institute for Highway Safety (IIHS). However, not all such tests consider the safety of people outside the car, such as drivers of other cars, pedestrians and cyclists.
== Costs and benefits ==
The costs of car usage, which may include the cost of acquiring the vehicle, repairs and auto maintenance, fuel, depreciation, driving time, parking fees, taxes, and insurance, are weighed against the cost of the alternatives and the value of the benefits—perceived and real—of vehicle usage. The benefits may include on-demand transportation, mobility, independence, convenience, and emergency power. During the 1920s, cars had another benefit: "[c]ouples finally had a way to head off on unchaperoned dates, plus they had a private space to snuggle up close at the end of the night."
Similarly, the costs to society of car use may include maintaining roads, land use, air pollution, noise pollution, road congestion, public health, health care, and disposing of the vehicle at the end of its life, and can be balanced against the value of the benefits to society that car use generates. Societal benefits may include economic benefits, such as job and wealth creation from car production and maintenance, transportation provision, societal wellbeing derived from leisure and travel opportunities, and revenue generation from taxation. The ability of humans to move flexibly from place to place has far-reaching implications for the nature of societies.
== Environmental effects ==
Car production and use have a large number of environmental impacts: they cause local air pollution and plastic pollution and contribute to greenhouse gas emissions and climate change. Cars and vans caused 10% of energy-related carbon dioxide emissions in 2022. As of 2023, electric cars produce about half the lifetime emissions of diesel and petrol cars. This is set to improve as countries produce more of their electricity from low-carbon sources. Cars consume almost a quarter of world oil production as of 2019. Cities planned around cars are often less dense, which leads to further emissions, as they are less walkable, for instance. A growing demand for large SUVs is driving up emissions from cars.
Cars are a major cause of air pollution, which stems from exhaust gas in diesel and petrol cars and from dust from brakes, tyres, and road wear. Larger cars pollute more. Heavy metals and microplastics (from tyres) are also released into the environment, during production, use and at the end of life. Mining related to car manufacturing and oil spills both cause water pollution.
Animals and plants are often negatively affected by cars via habitat destruction and fragmentation from the road network and pollution. Animals are also killed every year on roads by cars, referred to as roadkill. More recent road developments are including significant environmental mitigation in their designs, such as green bridges (designed to allow wildlife crossings) and creating wildlife corridors.
Governments use fiscal policies, such as road tax, to discourage the purchase and use of more polluting cars, and vehicle emission standards ban the sale of new highly polluting cars. Many countries plan to stop selling fossil-fuelled cars altogether between 2025 and 2050. Various cities have implemented low-emission zones banning old fossil fuel vehicles, and Amsterdam is planning to ban fossil fuel cars completely. Some cities make it easier for people to choose other forms of transport, such as cycling. Many Chinese cities limit licensing of fossil fuel cars.
== Social issues ==
Mass production of personal motor vehicles in the United States and other developed countries with extensive territories such as Australia, Argentina, and France vastly increased individual and group mobility and greatly increased and expanded economic development in urban, suburban, exurban and rural areas. Growth in the popularity of cars and commuting has led to traffic congestion. Moscow, Istanbul, Bogotá, Mexico City and São Paulo were the world's most congested cities in 2018 according to INRIX, a data analytics company.
=== Access to cars ===
In the United States, the transport divide and car dependency resulting from domination of car-based transport systems presents barriers to employment in low-income neighbourhoods, with many low-income individuals and families forced to run cars they cannot afford in order to maintain their income. Dependency on automobiles by African Americans may result in exposure to the hazards of driving while black and other types of racial discrimination related to buying, financing and insuring them.
=== Health impact ===
Air pollution from cars increases the risk of lung cancer and heart disease. It can also harm pregnancies: more children are born too early or with lower birth weight. Children are especially vulnerable to air pollution, as their bodies are still developing, and air pollution in children is linked to the development of asthma, childhood cancer, and neurocognitive issues such as autism. The growth in popularity of the car allowed cities to sprawl, encouraging more travel by car, resulting in inactivity and obesity, which in turn can lead to increased risk of a variety of diseases. When places are designed around cars, children have fewer opportunities to go places by themselves and lose opportunities to become more independent.
== Emerging car technologies ==
Although intensive development of conventional battery electric vehicles is continuing into the 2020s, other car propulsion technologies that are under development include wireless charging, hydrogen cars, and hydrogen/electric hybrids. Research into alternative forms of power includes using ammonia instead of hydrogen in fuel cells.
New materials which may replace steel car bodies include aluminium, fiberglass, carbon fiber, biocomposites, and carbon nanotubes. Telematics technology is allowing more and more people to share cars, on a pay-as-you-go basis, through car share and carpool schemes. Communication is also evolving due to connected car systems. Open-source cars are not widespread.
=== Autonomous car ===
Fully autonomous vehicles, also known as driverless cars, already exist as robotaxis but have a long way to go before they are in general use.
=== Car sharing ===
Car-share arrangements and carpooling are increasingly popular in the US and Europe. For example, in the US, some car-sharing services experienced double-digit growth in revenue and membership between 2006 and 2007. Services like car sharing allow residents to "share" a vehicle rather than own a car in already congested neighbourhoods.
== Industry ==
The automotive industry designs, develops, manufactures, markets, and sells the world's motor vehicles, more than three-quarters of which are cars. In 2020, there were 56 million cars manufactured worldwide, down from 67 million the previous year. The automotive industry in China produces by far the most (20 million in 2020), followed by Japan (seven million), then Germany, South Korea and India. The largest market is China, followed by the US.
Around the world, there are about a billion cars on the road; they burn over a trillion litres (0.26×10^12 US gal; 0.22×10^12 imp gal) of petrol and diesel fuel yearly, consuming about 50 exajoules (14,000 TWh) of energy. The numbers of cars are increasing rapidly in China and India. In the opinion of some, urban transport systems based around the car have proved unsustainable, consuming excessive energy, affecting the health of populations, and delivering a declining level of service despite increasing investment. Many of these negative effects fall disproportionately on those social groups who are also least likely to own and drive cars. The sustainable transport movement focuses on solutions to these problems. The car industry is also facing increasing competition from the public transport sector, as some people re-evaluate their private vehicle usage. In July 2021, the European Commission introduced the "Fit for 55" legislation package, outlining crucial directives for the automotive sector's future. According to this package, by 2035, all newly sold cars in the European market must be Zero-emissions vehicles.
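As a rough consistency check on the two energy figures quoted above (a sketch, not a source calculation), one terawatt-hour equals 3.6×10¹⁵ joules, so 14,000 TWh is indeed about 50 exajoules:

```python
J_PER_TWH = 3.6e15  # 1 TWh = 10^12 W x 3,600 s
J_PER_EJ = 1e18     # 1 exajoule = 10^18 joules

def twh_to_ej(twh: float) -> float:
    """Convert terawatt-hours to exajoules."""
    return twh * J_PER_TWH / J_PER_EJ

# The article's 14,000 TWh figure converts to roughly 50 EJ.
print(round(twh_to_ej(14000), 1))  # 50.4
```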
== Alternatives ==
Established alternatives for some aspects of car use include public transport such as buses, trolleybuses, trains, subways, tramways, and light rail, as well as cycling and walking. Bicycle sharing systems have been established in China and many European cities, including Copenhagen and Amsterdam. Similar programmes have been developed in large US cities. Additional individual modes of transport, such as personal rapid transit, could serve as an alternative to cars if they prove to be socially accepted. A study that assessed the costs and benefits of introducing Low Traffic Neighbourhoods in London found the benefits exceed the costs by approximately 100 times over the first 20 years, with the difference growing over time.
== See also ==
== Notes ==
== References ==
== Further reading ==
Berger, Michael L. (2001). The automobile in American history and culture: a reference guide. US: Bloomsbury Publishing. ISBN 9780313016066.
Brinkley, Douglas (2003). Wheels for the world: Henry Ford, his company, and a century of progress, 1903-2003. Viking. ISBN 9780670031818.
Cole, John; Cole, Francis (2013). A Geography of the European Union. London: Routledge. p. 110. ISBN 9781317835585. – Number of cars in use (in millions) in various European countries in 1973 and 1992
Halberstam, David (1986). The Reckoning. New York: Morrow. ISBN 0-688-04838-2.
Kay, Jane Holtz (1997). Asphalt nation : how the automobile took over America, and how we can take it back. New York: Crown. ISBN 0-517-58702-5.
Margolius, Ivan (2020). "What is an automobile?". The Automobile. 37 (11): 48–52. ISSN 0955-1328.
Sachs, Wolfgang (1992). For love of the automobile: looking back into the history of our desires. Berkeley: University of California Press. ISBN 0-520-06878-5.
Wilkins, Mira; Hill, Frank Ernest (1964). American Business Abroad: Ford on Six Continents.
Williams, Heathcote (1991). Autogeddon. New York: Arcade. ISBN 1-55970-176-5.
Latin America: Economic Growth Trends. US: Agency for International Development, Office of Statistics and Reports. 1972. p. 11. – Number of motor vehicles registered in Latin America in 1970
World Motor Vehicle Production and Registration. US: Business and Defense Services Administration, Transportation Equipment Division. p. 3. – Number of registered passenger cars in various countries in 1959-60 and 1969–70
== External links ==
Media related to Automobiles at Wikimedia Commons
Fédération Internationale de l'Automobile
Forum for the Automobile and Society
Transportation Statistics Annual Report 1996: Transportation and the Environment by Fletcher, Wendell; Sedor, Joanne; p. 219 (contains figures on vehicle registrations in various countries in 1970 and 1992)
Car costs

A car's internal costs are all the costs consumers pay to own and operate a car. Normally these expenditures are divided into fixed or standing costs and variable or running costs. Fixed costs are those which do not depend on the distance traveled by the vehicle and which the owner must pay to keep the vehicle ready for use on the road, like insurance or road taxes. Variable or running costs are those that depend on the use of the car, like fuel or tolls.
Compared to other popular modes of passenger transportation, especially buses or trains, the car has a relatively high cost per passenger-distance traveled. For the average car owner, depreciation constitutes about half the cost of running a car. The typical motorist underestimates this fixed cost by a significant margin.
The IRS considers that the average US automobile has a total cost of US$0.58 per mile, around €0.32 per km. According to the American Automobile Association, the average driver of the average sedan spends a total of approximately US$8,700 per year, or about US$720 per month, to own and operate their vehicle.
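The per-mile and per-kilometre figures above can be reconciled with a simple unit conversion; the exchange rate used below (roughly €0.89 per US$) is an assumption for illustration, not a figure from the source:

```python
KM_PER_MILE = 1.609344  # exact definition of the international mile

def usd_per_mile_to_eur_per_km(usd_per_mile: float, eur_per_usd: float) -> float:
    """Convert a cost in US$ per mile to euros per kilometre."""
    return usd_per_mile / KM_PER_MILE * eur_per_usd

# IRS figure of US$0.58/mile at an assumed exchange rate of 0.89 EUR/USD.
print(round(usd_per_mile_to_eur_per_km(0.58, 0.89), 2))  # 0.32
```

At that assumed rate the US$0.58-per-mile figure works out to about €0.32 per km, matching the quoted equivalence.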
== Fixed costs ==
=== Car acquisition ===
The car itself has an acquisition cost, which can be reduced by buying a used car. A used car, however, may have hidden defects or problems, or may be close to falling out of regulatory compliance. Acquisition is the most visible and immediate cost, but it can be spread out under a loan scheme.
==== Loan costs ====
Car finance comprises the different financial products which allow someone to acquire a car with any arrangement other than a single lump payment. When used, and for the purpose of assessing the private financial costs, one must consider only the interest paid by the car owner, as part of the amount the owner pays each month for the finance is already embedded in the depreciation costs.
=== Depreciation ===
The yearly depreciation of a car is the amount its value decreases every year. Normally a car's value is correlated with the price it has on the market, but on average a car has a depreciation around 15–20% per year. Depending on market conditions, cars may depreciate 10–30% the first year. Since 2021, however, cars have appreciated significantly the first year due to shortages of cars and up to five-year wait times for new cars.
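A sketch of how the quoted 15–20% annual rate compounds over a car's early years; the purchase price below is hypothetical, and the rate is simply the midpoint of the quoted range:

```python
def value_after(price: float, annual_rate: float, years: int) -> float:
    """Resale value after compound depreciation at a constant annual rate."""
    return price * (1 - annual_rate) ** years

# Hypothetical example: a car bought for 30,000 depreciating 17.5% per year
# (midpoint of the 15-20% range quoted above).
for year in range(1, 6):
    print(year, round(value_after(30000, 0.175, year)))
```

Under these assumptions the car loses over 60% of its value in five years, which is why depreciation typically dominates the total cost of ownership.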
=== Car taxes ===
Car taxes, road taxes, vehicle taxes or Vehicle Excise Duty are the amount of money car owners pay the government to allow the car to operate within that region or state. These taxes serve to maintain the road infrastructure or to compensate the negative externalities caused by the motor vehicles. These taxes may depend on engine displacement, vehicle weight, miles traveled, CO2 emissions, or the car value.
=== Insurance ===
Insurance serves to provide financial protection against physical damage and/or bodily injury resulting from traffic collisions, and against liability that could also arise therefrom.
=== Inspection ===
Vehicle inspection is a procedure mandated by law in which a vehicle is inspected to ensure that it conforms to regulations governing safety, emissions, or both.
=== Driving license ===
A driving license is often required to drive a car.
=== Cost of capital ===
The cost of capital, applied to the purchase of a car, is the return the owner could have earned by investing the purchase money elsewhere: the rate of return that capital could be expected to earn in an alternative investment of equivalent risk. Since depreciation is already counted as a separate cost item, the cost of capital of owning a car is the income the owner forgoes on the money spent on it, for example the interest from a standard deposit account.
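The idea reduces to a one-line calculation; the capital amount and the 3% deposit rate below are illustrative assumptions:

```python
# Forgone yearly income on the capital tied up in the car.
def forgone_interest(capital, annual_rate):
    """Income the owner gives up by not depositing the capital elsewhere."""
    return capital * annual_rate

print(forgone_interest(20000, 0.03))  # about 600 per year at a 3% rate
```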
== Running costs ==
=== Fuel ===
The fuel costs depend basically on four factors, namely the distance travelled by the car, the price paid for the fuel, the energy efficiency of the car and the type of driving. In Western countries, this cost normally is the second highest after depreciation.
=== Maintenance ===
Car maintenance may be long term or short term. This cost can be very irregular and somewhat unpredictable, but it tends to increase with the age of the car. This item includes car parts that must be replaced after a certain period of time (for example, every two years) or after a specific number of travelled kilometres or miles, such as tires or filters.
=== Repairs and improvements ===
Repair costs are largely unpredictable because they depend on the number and severity of car collisions (repairing dents, for example) and on the replacement of malfunctioning parts. This item may also include parts bought to improve the performance or aesthetics of the vehicle.
=== Parking ===
Parking costs include all the money the user pays to park the car. This normally applies to parking lots at offices, public buildings, shopping centres or downtown, but also to public space (typically in the inner part of a city) metered by parking meters. This cost can be relatively predictable if the user has, for example, a monthly contract with a parking lot company or rents a private parking space.
=== Tolls ===
A toll road, also known as a turnpike or tollway, is a public or private roadway for which a fee (or toll) is assessed for passage. Normally this applies to motorways, bridges and tunnels, but it may also apply, as in some cities such as London or Stockholm, to gaining access to the city centre. This cost can be predictable if the user passes the tolled roadway a defined number of times per month.
=== Fines ===
A traffic fine or traffic ticket is a notice issued by a law enforcement official accusing a motorist of violating traffic laws. Traffic tickets generally come in two forms: citing a moving violation, such as exceeding the speed limit, or a non-moving violation, such as a parking violation. These tickets almost always entail the payment of a fine.
=== Car washes ===
The cost of car washing varies with how often users clean their car and with the price of each cleaning. In many European countries, washing a car in a driveway may be illegal.
== See also ==
Economics of car use
Effects of the car on societies
Externalities of automobiles
Personal finance
== References ==
== External links ==
Automobile costs calculator for Australia
Automobile costs calculator for Canada
Automobile costs calculator for Ireland
Automobile costs calculator for the United Kingdom
Automobile costs calculator for the United States |
Carbon dioxide | Carbon dioxide is a chemical compound with the chemical formula CO2. It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in a gas state at room temperature and at normally-encountered concentrations it is odorless. As the source of carbon in the carbon cycle, atmospheric CO2 is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater.
It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.042% (as of May 2022) having risen from pre-industrial levels of 280 ppm or about 0.028%. Burning fossil fuels is the main cause of these increased CO2 concentrations, which are the primary cause of climate change.
Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological features. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and CO2 is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. CO2 is released from organic materials when they decay or combust, such as in forest fires. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (HCO−3), which causes ocean acidification as atmospheric CO2 levels increase.
Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess CO2 emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result in the CO2 being released back into the atmosphere. CO2, or the carbon it holds, is eventually sequestered (stored for the long term) in rocks and organic deposits like coal, petroleum and natural gas.
Nearly all CO2 produced by humans goes into the atmosphere. Less than 1% of CO2 produced annually is put to commercial use, mostly in the fertilizer industry and in the oil and gas industry for enhanced oil recovery. Other commercial applications include food and beverage production, metal fabrication, cooling, fire suppression and stimulating plant growth in greenhouses.
== Chemical and physical properties ==
=== Structure, bonding and molecular vibrations ===
The symmetry of a carbon dioxide molecule is linear and centrosymmetric at its equilibrium geometry. The length of the carbon–oxygen bond in carbon dioxide is 116.3 pm, noticeably shorter than the roughly 140 pm length of a typical single C–O bond, and shorter than most other C–O multiply bonded functional groups such as carbonyls. Since it is centrosymmetric, the molecule has no electric dipole moment.
As a linear triatomic molecule, CO2 has four vibrational modes as shown in the diagram. In the symmetric and the antisymmetric stretching modes, the atoms move along the axis of the molecule. There are two bending modes, which are degenerate, meaning that they have the same frequency and same energy, because of the symmetry of the molecule. When a molecule touches a surface or touches another molecule, the two bending modes can differ in frequency because the interaction is different for the two modes. Some of the vibrational modes are observed in the infrared (IR) spectrum: the antisymmetric stretching mode at wavenumber 2349 cm−1 (wavelength 4.25 μm) and the degenerate pair of bending modes at 667 cm−1 (wavelength 15.0 μm). The symmetric stretching mode does not create an electric dipole so is not observed in IR spectroscopy, but it is detected in Raman spectroscopy at 1388 cm−1 (wavelength 7.20 μm), with a Fermi resonance doublet at 1285 cm−1.
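The wavelengths quoted above follow from the wavenumbers via λ(μm) = 10⁴ / ν̃(cm⁻¹); a quick consistency check:

```python
# Convert the three quoted wavenumbers to wavelengths in micrometres.
def wavenumber_to_um(nu_per_cm):
    """Wavelength in micrometres for a wavenumber in cm^-1."""
    return 1.0e4 / nu_per_cm

for nu in (2349, 667, 1388):
    print(f"{nu} cm^-1 -> {wavenumber_to_um(nu):.2f} um")
```

Each result lands within about 0.01 μm of the figure quoted in the text.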
In the gas phase, carbon dioxide molecules undergo significant vibrational motions and do not keep a fixed structure. However, in a Coulomb explosion imaging experiment, an instantaneous image of the molecular structure can be deduced. Such an experiment has been performed for carbon dioxide. The result of this experiment, and the conclusion of theoretical calculations based on an ab initio potential energy surface of the molecule, is that none of the molecules in the gas phase are ever exactly linear. This counter-intuitive result is trivially due to the fact that the nuclear motion volume element vanishes for linear geometries. This is so for all molecules except diatomic molecules.
=== In aqueous solution ===
Carbon dioxide is soluble in water, in which it reversibly forms H2CO3 (carbonic acid), which is a weak acid, because its ionization in water is incomplete.
CO2 + H2O ⇌ H2CO3
The hydration equilibrium constant of carbonic acid is, at 25 °C:
{\displaystyle K_{\mathrm {h} }={\frac {{\ce {[H2CO3]}}}{{\ce {[CO2_{(aq)}]}}}}=1.70\times 10^{-3}}
Hence, the majority of the carbon dioxide is not converted into carbonic acid, but remains as CO2 molecules, not affecting the pH.
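From the hydration constant above, the fraction of dissolved CO2 present as carbonic acid is Kh / (1 + Kh), which confirms that it stays well below 1%:

```python
# Fraction of dissolved CO2 converted to H2CO3 at equilibrium (25 degC).
Kh = 1.70e-3
fraction_h2co3 = Kh / (1 + Kh)
print(f"{fraction_h2co3:.2%}")  # ~0.17%
```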
The relative concentrations of CO2, H2CO3, and the deprotonated forms HCO−3 (bicarbonate) and CO2−3 (carbonate) depend on the pH. As shown in a Bjerrum plot, in neutral or slightly alkaline water (pH > 6.5), the bicarbonate form predominates (>50%), becoming the most prevalent (>95%) at the pH of seawater. In very alkaline water (pH > 10.4), the predominant (>50%) form is carbonate. The oceans, being mildly alkaline with typical pH = 8.2–8.5, contain about 120 mg of bicarbonate per liter.
Being diprotic, carbonic acid has two acid dissociation constants, the first one for the dissociation into the bicarbonate (also called hydrogen carbonate) ion (HCO−3):
H2CO3 ⇌ HCO−3 + H+
Ka1 = 2.5 × 10−4 mol/L; pKa1 = 3.6 at 25 °C.
This is the true first acid dissociation constant, defined as
{\displaystyle K_{\mathrm {a1} }={\frac {{\ce {[HCO3- ][H+]}}}{{\ce {[H2CO3]}}}}}
where the denominator includes only covalently bound H2CO3 and does not include hydrated CO2(aq). The much smaller and often-quoted value near 4.16 × 10−7 (or pKa1 = 6.38) is an apparent value calculated on the (incorrect) assumption that all dissolved CO2 is present as carbonic acid, so that
{\displaystyle K_{\mathrm {a1} }{\rm {(apparent)}}={\frac {{\ce {[HCO3- ][H+]}}}{{\ce {[H2CO3] + [CO2_{(aq)}]}}}}}
Since most of the dissolved CO2 remains as CO2 molecules, Ka1(apparent) has a much larger denominator and a much smaller value than the true Ka1.
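Because [H2CO3] = Kh·[CO2(aq)], the apparent constant relates to the true one as Ka1(apparent) = Ka1·Kh / (1 + Kh). A check with the quoted constants lands near the often-quoted 4.16 × 10⁻⁷, though not exactly on it, since the published values are not perfectly mutually consistent:

```python
# Derive the apparent first dissociation constant from the true Ka1 and Kh.
KA1_TRUE = 2.5e-4  # true first dissociation constant
KH = 1.70e-3       # hydration equilibrium constant

ka1_apparent = KA1_TRUE * KH / (1 + KH)
print(f"{ka1_apparent:.2e}")  # ~4.2e-7
```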
The bicarbonate ion is an amphoteric species that can act as an acid or as a base, depending on pH of the solution. At high pH, it dissociates significantly into the carbonate ion (CO2−3):
HCO−3 ⇌ CO2−3 + H+
Ka2 = 4.69 × 10−11 mol/L; pKa2 = 10.329
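With both dissociation constants in hand, the Bjerrum-plot behaviour described earlier can be sketched numerically. This is an ideal-solution approximation using the apparent Ka1 and the Ka2 quoted in this section; real seawater requires activity corrections:

```python
# Mole fractions of the carbonate species as a function of pH.
KA1 = 4.16e-7   # apparent first dissociation constant
KA2 = 4.69e-11  # second dissociation constant

def carbonate_fractions(pH):
    """Return fractions of (CO2(aq)+H2CO3, HCO3-, CO3 2-) at a given pH."""
    h = 10.0 ** -pH
    denom = h * h + h * KA1 + KA1 * KA2
    return h * h / denom, h * KA1 / denom, KA1 * KA2 / denom

co2, hco3, co3 = carbonate_fractions(8.2)  # typical seawater pH
print(f"CO2 {co2:.1%}, HCO3- {hco3:.1%}, CO3-- {co3:.1%}")
```

At pH 8.2 the bicarbonate fraction comes out above 95%, matching the statement above.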
In organisms, carbonic acid production is catalysed by the enzyme known as carbonic anhydrase.
In addition to altering its acidity, the presence of carbon dioxide in water also affects its electrical properties. When carbon dioxide dissolves in desalinated water, the electrical conductivity increases significantly from below 1 μS/cm to nearly 30 μS/cm. When heated, the water gradually loses the conductivity induced by the dissolved CO2, especially noticeably as temperatures exceed 30 °C.
The temperature dependence of the electrical conductivity of fully deionized water without CO2 saturation is comparatively low in relation to these data.
=== Chemical reactions ===
CO2 is a potent electrophile having an electrophilic reactivity that is comparable to benzaldehyde or strongly electrophilic α,β-unsaturated carbonyl compounds. However, unlike electrophiles of similar reactivity, the reactions of nucleophiles with CO2 are thermodynamically less favored and are often found to be highly reversible. The reversible reaction of carbon dioxide with amines to make carbamates is used in CO2 scrubbers and has been suggested as a possible starting point for carbon capture and storage by amine gas treating.
Only very strong nucleophiles, like the carbanions provided by Grignard reagents and organolithium compounds react with CO2 to give carboxylates:
MR + CO2 → RCO2M
where M = Li or MgBr and R = alkyl or aryl.
In metal carbon dioxide complexes, CO2 serves as a ligand, which can facilitate the conversion of CO2 to other chemicals.
The reduction of CO2 to CO is ordinarily a difficult and slow reaction:
CO2 + 2 e− + 2 H+ → CO + H2O
The redox potential for this reaction near pH 7 is about −0.53 V versus the standard hydrogen electrode. The nickel-containing enzyme carbon monoxide dehydrogenase catalyses this process.
Photoautotrophs (i.e. plants and cyanobacteria) use the energy contained in sunlight to photosynthesize simple sugars from CO2 absorbed from the air and water:
n CO2 + n H2O → (CH2O)n + n O2
=== Physical properties ===
Carbon dioxide is colorless. At low concentrations, the gas is odorless; however, at sufficiently high concentrations, it has a sharp, acidic odor. At standard temperature and pressure, the density of carbon dioxide is around 1.98 kg/m3, about 1.53 times that of air.
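The quoted density is close to the ideal-gas estimate at 0 °C and 1 atm (one common definition of STP); the small shortfall of the ideal-gas value reflects CO2's non-ideality near its sublimation point. The dry-air molar mass of ~28.96 g/mol is the usual average:

```python
# Ideal-gas density of CO2 at 0 degC, 1 atm, and its molar-mass ratio to air.
R = 8.314462             # J/(mol K)
M_CO2 = 44.01e-3         # kg/mol
M_AIR = 28.96e-3         # kg/mol (standard dry-air average)
T, P = 273.15, 101325.0  # K, Pa

rho = P * M_CO2 / (R * T)
print(f"{rho:.2f} kg/m^3, {M_CO2 / M_AIR:.2f}x air by molar mass")
```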
Carbon dioxide has no liquid state at pressures below 0.51795(10) MPa (5.11177(99) atm). At a pressure of 1 atm (0.101325 MPa), the gas deposits directly to a solid at temperatures below 194.6855(30) K (−78.4645(30) °C) and the solid sublimes directly to a gas above this temperature. In its solid state, carbon dioxide is commonly called dry ice.
Liquid carbon dioxide forms only at pressures above 0.51795(10) MPa (5.11177(99) atm); the triple point of carbon dioxide is 216.592(3) K (−56.558(3) °C) at 0.51795(10) MPa (5.11177(99) atm) (see phase diagram). The critical point is 304.128(15) K (30.978(15) °C) at 7.3773(30) MPa (72.808(30) atm). Another form of solid carbon dioxide observed at high pressure is an amorphous glass-like solid. This form of glass, called carbonia, is produced by supercooling heated CO2 at extreme pressures (40–48 GPa, or about 400,000 atmospheres) in a diamond anvil. This discovery confirmed the theory that carbon dioxide could exist in a glass state similar to other members of its elemental family, like silicon dioxide (silica glass) and germanium dioxide. Unlike silica and germania glasses, however, carbonia glass is not stable at normal pressures and reverts to gas when pressure is released.
At temperatures and pressures above the critical point, carbon dioxide behaves as a supercritical fluid known as supercritical carbon dioxide.
Table of thermal and physical properties of saturated liquid carbon dioxide:
Table of thermal and physical properties of carbon dioxide (CO2) at atmospheric pressure:
== Biological role ==
Carbon dioxide is an end product of cellular respiration in organisms that obtain energy by breaking down sugars, fats and amino acids with oxygen as part of their metabolism. This includes all plants, algae and animals and aerobic fungi and bacteria. In vertebrates, the carbon dioxide travels in the blood from the body's tissues to the skin (e.g., amphibians) or the gills (e.g., fish), from where it dissolves in the water, or to the lungs from where it is exhaled. During active photosynthesis, plants can absorb more carbon dioxide from the atmosphere than they release in respiration.
=== Photosynthesis and carbon fixation ===
Carbon fixation is a biochemical process by which atmospheric carbon dioxide is incorporated by plants, algae and cyanobacteria into energy-rich organic molecules such as glucose, thus creating their own food by photosynthesis. Photosynthesis uses carbon dioxide and water to produce sugars from which other organic compounds can be constructed, and oxygen is produced as a by-product.
Ribulose-1,5-bisphosphate carboxylase oxygenase, commonly abbreviated to RuBisCO, is the enzyme involved in the first major step of carbon fixation, the production of two molecules of 3-phosphoglycerate from CO2 and ribulose bisphosphate, as shown in the diagram at left.
RuBisCO is thought to be the single most abundant protein on Earth.
Phototrophs use the products of their photosynthesis as internal food sources and as raw material for the biosynthesis of more complex organic molecules, such as polysaccharides, nucleic acids, and proteins. These are used for their own growth, and also as the basis of the food chains and webs that feed other organisms, including animals such as ourselves. Some important phototrophs, the coccolithophores, synthesise hard calcium carbonate scales. A globally significant species of coccolithophore is Emiliania huxleyi, whose calcite scales have formed the basis of many sedimentary rocks such as limestone, where what was previously atmospheric carbon can remain fixed for geological timescales.
Plants can grow as much as 50% faster in concentrations of 1,000 ppm CO2 when compared with ambient conditions, though this assumes no change in climate and no limitation on other nutrients. Elevated CO2 levels cause increased growth reflected in the harvestable yield of crops, with wheat, rice and soybean all showing increases in yield of 12–14% under elevated CO2 in FACE experiments.
Increased atmospheric CO2 concentrations result in fewer stomata developing on plants which leads to reduced water usage and increased water-use efficiency. Studies using FACE have shown that CO2 enrichment leads to decreased concentrations of micronutrients in crop plants. This may have knock-on effects on other parts of ecosystems as herbivores will need to eat more food to gain the same amount of protein.
The concentration of secondary metabolites such as phenylpropanoids and flavonoids can also be altered in plants exposed to high concentrations of CO2.
Plants also emit CO2 during respiration, and so the majority of plants and algae, which use C3 photosynthesis, are only net absorbers during the day. Though a growing forest will absorb many tons of CO2 each year, a mature forest will produce as much CO2 from respiration and decomposition of dead specimens (e.g., fallen branches) as is used in photosynthesis in growing plants. Contrary to the long-standing view that they are carbon neutral, mature forests can continue to accumulate carbon and remain valuable carbon sinks, helping to maintain the carbon balance of Earth's atmosphere. Additionally, and crucially to life on earth, photosynthesis by phytoplankton consumes dissolved CO2 in the upper ocean and thereby promotes the absorption of CO2 from the atmosphere.
=== Toxicity ===
Carbon dioxide content in fresh air (averaged between sea-level and 10 kPa level, i.e., about 30 km (19 mi) altitude) varies between 0.036% (360 ppm) and 0.041% (412 ppm), depending on the location.
In humans, exposure to CO2 at concentrations greater than 5% causes the development of hypercapnia and respiratory acidosis. Concentrations of 7% to 10% (70,000 to 100,000 ppm) may cause suffocation, even in the presence of sufficient oxygen, manifesting as dizziness, headache, visual and hearing dysfunction, and unconsciousness within a few minutes to an hour. Concentrations of more than 10% may cause convulsions, coma, and death. CO2 levels of more than 30% act rapidly leading to loss of consciousness in seconds.
Because it is heavier than air, in locations where the gas seeps from the ground (due to sub-surface volcanic or geothermal activity) in relatively high concentrations, without the dispersing effects of wind, it can collect in sheltered/pocketed locations below average ground level, causing animals located therein to be suffocated. Carrion feeders attracted to the carcasses are then also killed. Children have been killed in the same way near the city of Goma by CO2 emissions from the nearby volcano Mount Nyiragongo. The Swahili term for this phenomenon is mazuku.
Adaptation to increased concentrations of CO2 occurs in humans, including modified breathing and kidney bicarbonate production, in order to balance the effects of blood acidification (acidosis). Several studies suggested that 2.0 percent inspired concentrations could be used for closed air spaces (e.g. a submarine) since the adaptation is physiological and reversible, as deterioration in performance or in normal physical activity does not happen at this level of exposure for five days. Yet, other studies show a decrease in cognitive function even at much lower levels. Also, with ongoing respiratory acidosis, adaptation or compensatory mechanisms will be unable to reverse the condition.
==== Below 1% ====
There are few studies of the health effects of long-term continuous CO2 exposure on humans and animals at levels below 1%. Occupational CO2 exposure limits have been set in the United States at 0.5% (5000 ppm) for an eight-hour period. At this CO2 concentration, International Space Station crew experienced headaches, lethargy, mental slowness, emotional irritation, and sleep disruption. Studies in animals at 0.5% CO2 have demonstrated kidney calcification and bone loss after eight weeks of exposure. A study of humans exposed in 2.5-hour sessions demonstrated significant negative effects on cognitive abilities at concentrations as low as 0.1% (1000 ppm) CO2, likely due to CO2-induced increases in cerebral blood flow. Another study observed a decline in basic activity level and information usage at 1000 ppm, when compared to 500 ppm.
However, a review of the literature found that a reliable subset of studies on carbon-dioxide-induced cognitive impairment showed only a small effect on high-level decision making (for concentrations below 5000 ppm). Most of the studies were confounded by inadequate study designs, environmental comfort, uncertainties in exposure doses and differing cognitive assessments used. Similarly, a study on the effects of CO2 concentration in motorcycle helmets has been criticized for dubious methodology: it did not note the self-reports of motorcycle riders and took measurements using mannequins. Further, when normal motorcycle conditions were achieved (such as highway or city speeds) or the visor was raised, the concentration of CO2 declined to safe levels (0.2%).
==== Ventilation ====
Poor ventilation is one of the main causes of excessive CO2 concentrations in closed spaces, leading to poor indoor air quality. The carbon dioxide differential above outdoor concentrations at steady-state conditions (when occupancy and ventilation system operation have continued long enough for the CO2 concentration to stabilize) is sometimes used to estimate ventilation rates per person. Higher CO2 concentrations are associated with degradation of occupant health, comfort and performance. ASHRAE Standard 62.1–2007 ventilation rates may result in indoor concentrations up to 2,100 ppm above ambient outdoor conditions. Thus if the outdoor concentration is 400 ppm, indoor concentrations may reach 2,500 ppm with ventilation rates that meet this industry consensus standard. Concentrations in poorly ventilated spaces can be even higher (in the range of 3,000 to 4,000 ppm).
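The steady-state estimate rests on a single-zone, well-mixed mass balance: C_indoor = C_outdoor + G/Q, where G is the CO2 generation rate per person and Q the outdoor-air supply per person. The generation rate (~0.0052 L/s, typical of light office activity) and supply rate (2.5 L/s per person) below are illustrative assumptions, not values taken from the standard itself:

```python
# Steady-state indoor CO2 concentration from a per-person mass balance.
def steady_state_ppm(c_out_ppm, gen_l_per_s, vent_l_per_s):
    """Indoor CO2 (ppm) at steady state for one well-mixed zone per person."""
    return c_out_ppm + gen_l_per_s / vent_l_per_s * 1e6

print(round(steady_state_ppm(400, 0.0052, 2.5)))  # -> 2480, near 2,500 ppm
```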
Miners, who are particularly vulnerable to gas exposure due to insufficient ventilation, referred to mixtures of carbon dioxide and nitrogen as "blackdamp", "choke damp" or "stythe". Before more effective technologies were developed, miners would frequently monitor for dangerous levels of blackdamp and other gases in mine shafts by bringing a caged canary with them as they worked. The canary is more sensitive to asphyxiant gases than humans, and as it became unconscious would stop singing and fall off its perch. The Davy lamp could also detect high levels of blackdamp (which sinks, and collects near the floor) by burning less brightly, while methane, another suffocating gas and explosion risk, would make the lamp burn more brightly.
In February 2020, three people died from suffocation at a party in Moscow when dry ice (frozen CO2) was added to a swimming pool to cool it down. A similar accident occurred in 2018 when a woman died from CO2 fumes emanating from the large amount of dry ice she was transporting in her car.
==== Indoor air ====
Humans spend more and more time in a confined atmosphere (around 80–90% of the time in a building or vehicle). According to the French Agency for Food, Environmental and Occupational Health & Safety (ANSES) and various actors in France, the CO2 rate in the indoor air of buildings (linked to human or animal occupancy and the presence of combustion installations), weighted by air renewal, is "usually between about 350 and 2,500 ppm".
In homes, schools, nurseries and offices, there are no systematic relationships between the levels of CO2 and other pollutants, and indoor CO2 is statistically not a good predictor of pollutants linked to outdoor road (or air, etc.) traffic. CO2 is the parameter that changes the fastest (with hygrometry and oxygen levels when humans or animals are gathered in a closed or poorly ventilated room). In poor countries, many open hearths are sources of CO2 and CO emitted directly into the living environment.
==== Outdoor areas with elevated concentrations ====
Local concentrations of carbon dioxide can reach high values near strong sources, especially those that are isolated by surrounding terrain. At the Bossoleto hot spring near Rapolano Terme in Tuscany, Italy, situated in a bowl-shaped depression about 100 m (330 ft) in diameter, concentrations of CO2 rise to above 75% overnight, sufficient to kill insects and small animals. After sunrise the gas is dispersed by convection. High concentrations of CO2 produced by disturbance of deep lake water saturated with CO2 are thought to have caused 37 fatalities at Lake Monoun, Cameroon in 1984 and 1700 casualties at Lake Nyos, Cameroon in 1986.
== Human physiology ==
=== Content ===
The body produces approximately 2.3 pounds (1.0 kg) of carbon dioxide per day per person, containing 0.63 pounds (290 g) of carbon. In humans, this carbon dioxide is carried through the venous system and is breathed out through the lungs, resulting in lower concentrations in the arteries. The carbon dioxide content of the blood is often given as the partial pressure, which is the pressure which carbon dioxide would have had if it alone occupied the volume. In humans, the blood carbon dioxide contents are shown in the adjacent table.
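The carbon figure quoted above is just the mass fraction of carbon in CO2 (molar masses 12.011 and 44.009 g/mol) applied to the daily total:

```python
# Carbon content of the daily per-person CO2 production.
CO2_PER_DAY_LB = 2.3
CARBON_FRACTION = 12.011 / 44.009  # ~0.273, mass fraction of C in CO2

print(round(CO2_PER_DAY_LB * CARBON_FRACTION, 2))  # -> 0.63
```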
=== Transport in the blood ===
CO2 is carried in blood in three different ways. Exact percentages vary between arterial and venous blood.
Majority (about 70% to 80%) is converted to bicarbonate ions HCO−3 by the enzyme carbonic anhydrase in the red blood cells, by the reaction:
CO2 + H2O → H2CO3 → H+ + HCO−3
5–10% is dissolved in blood plasma
5–10% is bound to hemoglobin as carbamino compounds
Hemoglobin, the main oxygen-carrying molecule in red blood cells, carries both oxygen and carbon dioxide. However, the CO2 bound to hemoglobin does not bind to the same site as oxygen. Instead, it combines with the N-terminal groups on the four globin chains. However, because of allosteric effects on the hemoglobin molecule, the binding of CO2 decreases the amount of oxygen that is bound for a given partial pressure of oxygen. This is known as the Haldane Effect, and is important in the transport of carbon dioxide from the tissues to the lungs. Conversely, a rise in the partial pressure of CO2 or a lower pH will cause offloading of oxygen from hemoglobin, which is known as the Bohr effect.
=== Regulation of respiration ===
Carbon dioxide is one of the mediators of local autoregulation of blood supply. If its concentration is high, the capillaries expand to allow a greater blood flow to that tissue.
Bicarbonate ions are crucial for regulating blood pH. A person's breathing rate influences the level of CO2 in their blood. Breathing that is too slow or shallow causes respiratory acidosis, while breathing that is too rapid leads to hyperventilation, which can cause respiratory alkalosis.
Although the body requires oxygen for metabolism, low oxygen levels normally do not stimulate breathing. Rather, breathing is stimulated by higher carbon dioxide levels. As a result, breathing low-pressure air or a gas mixture with no oxygen at all (such as pure nitrogen) can lead to loss of consciousness without ever experiencing air hunger. This is especially perilous for high-altitude fighter pilots. It is also why flight attendants instruct passengers, in case of loss of cabin pressure, to apply the oxygen mask to themselves first before helping others; otherwise, one risks losing consciousness.
The respiratory centers try to maintain an arterial CO2 pressure of 40 mmHg. With intentional hyperventilation, the CO2 content of arterial blood may be lowered to 10–20 mmHg (the oxygen content of the blood is little affected), and the respiratory drive is diminished. This is why one can hold one's breath longer after hyperventilating than without hyperventilating. This carries the risk that unconsciousness may result before the need to breathe becomes overwhelming, which is why hyperventilation is particularly dangerous before free diving.
== Concentrations and role in the environment ==
=== Atmosphere ===
=== Oceans ===
==== Ocean acidification ====
Carbon dioxide dissolves in the ocean to form carbonic acid (H2CO3), bicarbonate (HCO−3), and carbonate (CO2−3). There is about fifty times as much carbon dioxide dissolved in the oceans as exists in the atmosphere. The oceans act as an enormous carbon sink, and have taken up about a third of CO2 emitted by human activity.
==== Hydrothermal vents ====
Carbon dioxide is also introduced into the oceans through hydrothermal vents. The Champagne hydrothermal vent, found at the Northwest Eifuku volcano in the Mariana Trench, produces almost pure liquid carbon dioxide, one of only two known sites in the world as of 2004, the other being in the Okinawa Trough. The finding of a submarine lake of liquid carbon dioxide in the Okinawa Trough was reported in 2006.
== Sources ==
The burning of fossil fuels for energy produces 36.8 billion tonnes of CO2 per year as of 2023. Nearly all of this goes into the atmosphere, where approximately half is subsequently absorbed into natural carbon sinks. Less than 1% of CO2 produced annually is put to commercial use.
=== Biological processes ===
Carbon dioxide is a by-product of the fermentation of sugar in the brewing of beer, whisky and other alcoholic beverages and in the production of bioethanol. Yeast metabolizes sugar to produce CO2 and ethanol, also known as alcohol, as follows:
C6H12O6 → 2 CO2 + 2 CH3CH2OH
All aerobic organisms produce CO2 when they oxidize carbohydrates, fatty acids, and proteins. The large number of reactions involved are exceedingly complex and not described easily. Refer to cellular respiration, anaerobic respiration and photosynthesis. The equation for the respiration of glucose and other monosaccharides is:
C6H12O6 + 6 O2 → 6 CO2 + 6 H2O
Anaerobic organisms decompose organic material, producing methane and carbon dioxide together with traces of other compounds. Regardless of the type of organic material, the production of gases follows a well-defined kinetic pattern. Carbon dioxide comprises about 40–45% of the gas that emanates from decomposition in landfills (termed "landfill gas"). Most of the remainder (50–55%) is methane.
==== Combustion ====
The combustion of all carbon-based fuels, such as methane (natural gas), petroleum distillates (gasoline, diesel, kerosene, propane), coal, wood and generic organic matter produces carbon dioxide and, except in the case of pure carbon, water. As an example, the chemical reaction between methane and oxygen:
CH4 + 2 O2 → CO2 + 2 H2O
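As a back-of-the-envelope illustration (not from the original text), the balanced equation fixes how much CO2 is emitted per unit mass of methane burned:

```python
# Approximate molar masses (g/mol).
M_CH4 = 12.011 + 4 * 1.008    # methane,        ~16.04 g/mol
M_CO2 = 12.011 + 2 * 15.999   # carbon dioxide, ~44.01 g/mol

# CH4 + 2 O2 -> CO2 + 2 H2O produces one mole of CO2 per mole of CH4,
# so burning 1 kg of methane yields roughly 2.74 kg of CO2.
kg_co2_per_kg_ch4 = M_CO2 / M_CH4
print(round(kg_co2_per_kg_ch4, 2))  # ~2.74
```

The emitted CO2 outweighs the fuel because each carbon atom picks up two oxygen atoms from the air during combustion.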
Iron is reduced from its oxides with coke in a blast furnace, producing pig iron and carbon dioxide:
Fe2O3 + 3 CO → 3 CO2 + 2 Fe
==== By-product from hydrogen production ====
Carbon dioxide is a byproduct of the industrial production of hydrogen by steam reforming and the water gas shift reaction in ammonia production. These processes begin with the reaction of water and natural gas (mainly methane).
==== Thermal decomposition of limestone ====
It is produced by thermal decomposition of limestone (CaCO3) by heating (calcining) at about 850 °C (1,560 °F), in the manufacture of quicklime (calcium oxide, CaO), a compound with many industrial uses:
CaCO3 → CaO + CO2
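By the same stoichiometry (an illustrative calculation, not from the original text), nearly half the mass of calcined limestone is driven off as CO2:

```python
# Approximate molar masses (g/mol).
M_CaCO3 = 40.078 + 12.011 + 3 * 15.999   # calcium carbonate, ~100.09 g/mol
M_CO2   = 12.011 + 2 * 15.999            # carbon dioxide,    ~44.01 g/mol

# CaCO3 -> CaO + CO2: mass fraction of limestone released as CO2 on calcining.
fraction = M_CO2 / M_CaCO3
print(round(fraction, 2))  # ~0.44
```

This is why cement and lime production are inherently CO2-intensive: the gas comes from the mineral itself, in addition to any fuel burned to heat the kiln.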
Acids liberate CO2 from most metal carbonates. Consequently, it may be obtained directly from natural carbon dioxide springs, where it is produced by the action of acidified water on limestone or dolomite. The reaction between hydrochloric acid and calcium carbonate (limestone or chalk) is shown below:
CaCO3 + 2 HCl → CaCl2 + H2CO3
The carbonic acid (H2CO3) then decomposes to water and CO2:
H2CO3 → CO2 + H2O
Such reactions are accompanied by foaming or bubbling, or both, as the gas is released. They have widespread uses in industry because they can be used to neutralize waste acid streams.
== Commercial uses ==
Around 230 Mt of CO2 are used each year, mostly in the fertiliser industry for urea production (130 million tonnes) and in the oil and gas industry for enhanced oil recovery (70 to 80 million tonnes). Other commercial applications include food and beverage production, metal fabrication, cooling, fire suppression and stimulating plant growth in greenhouses.
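The figures quoted in this section can be cross-checked with a quick calculation (illustrative only; enhanced oil recovery is taken at the midpoint of the 70–80 Mt range):

```python
fossil_co2_mt = 36_800      # Mt CO2/year from fossil fuels (2023 figure from the text)
used_mt = 230               # Mt CO2/year put to commercial use
urea_mt, eor_mt = 130, 75   # urea production and enhanced oil recovery (midpoint)

# Urea and EOR together account for most commercial use...
assert urea_mt + eor_mt <= used_mt

# ...and total commercial use is well under 1% of fossil-fuel CO2 output,
# consistent with the "less than 1%" figure above (about 0.6%).
share = used_mt / fossil_co2_mt
print(share)
```

The two orders of magnitude between emissions and commercial demand is the reason utilization alone cannot absorb captured CO2, and most of it must be sequestered instead.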
Technology exists to capture CO2 from industrial flue gas or from the air. Research is ongoing into ways to use captured CO2 in products, and some of these processes have been deployed commercially. However, the potential of such products to absorb CO2 is very small compared to the total volume that could foreseeably be captured. The vast majority of captured CO2 is therefore considered a waste product and sequestered in underground geologic formations.
=== Precursor to chemicals ===
In the chemical industry, carbon dioxide is mainly consumed as an ingredient in the production of urea, with a smaller fraction being used to produce methanol and a range of other products. Some carboxylic acid derivatives such as sodium salicylate are prepared using CO2 by the Kolbe–Schmitt reaction.
Captured CO2 could be used to produce methanol or electrofuels. To be carbon-neutral, the CO2 would need to come from bioenergy production or direct air capture.
=== Fossil fuel recovery ===
Carbon dioxide is used in enhanced oil recovery, where it is injected into or adjacent to producing oil wells, usually under supercritical conditions, in which it becomes miscible with the oil. This approach can recover an additional 7–23% of the original oil beyond primary extraction by reducing residual oil saturation. The CO2 acts as a pressurizing agent and, when dissolved into the underground crude oil, significantly reduces its viscosity and changes its surface chemistry, enabling the oil to flow more rapidly through the reservoir to the removal well.
Most CO2 injected in CO2-EOR projects comes from naturally occurring underground CO2 deposits. Some CO2 used in EOR is captured from industrial facilities such as natural gas processing plants, using carbon capture technology and transported to the oilfield in pipelines.
=== Agriculture ===
Plants require carbon dioxide to conduct photosynthesis. The atmospheres of greenhouses may (and in large greenhouses, must) be enriched with additional CO2 to sustain and increase the rate of plant growth. At very high concentrations (100 times atmospheric concentration or greater), carbon dioxide can be toxic to animal life, so raising the concentration to 10,000 ppm (1%) or higher for several hours will eliminate pests such as whiteflies and spider mites in a greenhouse. Some plants respond more favorably to rising carbon dioxide concentrations than others, which can lead to vegetation regime shifts such as woody plant encroachment.
=== Foods ===
Carbon dioxide is a food additive used as a propellant and acidity regulator in the food industry. It is approved for usage in the EU (listed as E number E290), US, Australia and New Zealand (listed by its INS number 290).
A candy called Pop Rocks is pressurized with carbon dioxide gas at about 4,000 kPa (40 bar; 580 psi). When placed in the mouth, it dissolves (just like other hard candy) and releases the gas bubbles with an audible pop.
Leavening agents cause dough to rise by producing carbon dioxide. Baker's yeast produces carbon dioxide by fermentation of sugars within the dough, while chemical leaveners such as baking powder and baking soda release carbon dioxide when heated or if exposed to acids.
==== Beverages ====
Carbon dioxide is used to produce carbonated soft drinks and soda water. Traditionally, the carbonation of beer and sparkling wine came about through natural fermentation, but many manufacturers carbonate these drinks with carbon dioxide recovered from the fermentation process. In the case of bottled and kegged beer, the most common method used is carbonation with recycled carbon dioxide. With the exception of British real ale, draught beer is usually transferred from kegs in a cold room or cellar to dispensing taps on the bar using pressurized carbon dioxide, sometimes mixed with nitrogen.
The taste of soda water (and related taste sensations in other carbonated beverages) is an effect of the dissolved carbon dioxide rather than the bursting bubbles of the gas. Carbonic anhydrase 4 converts carbon dioxide to carbonic acid leading to a sour taste, and also the dissolved carbon dioxide induces a somatosensory response.
==== Winemaking ====
Carbon dioxide in the form of dry ice is often used during the cold soak phase in winemaking to cool clusters of grapes quickly after picking to help prevent spontaneous fermentation by wild yeast. The main advantage of using dry ice over water ice is that it cools the grapes without adding any additional water that might decrease the sugar concentration in the grape must, and thus the alcohol concentration in the finished wine. Carbon dioxide is also used to create a hypoxic environment for carbonic maceration, the process used to produce Beaujolais wine.
Carbon dioxide is sometimes used to top up wine bottles or other storage vessels such as barrels to prevent oxidation, though it has the problem that it can dissolve into the wine, making a previously still wine slightly fizzy. For this reason, other gases such as nitrogen or argon are preferred for this process by professional wine makers.
==== Stunning animals ====
Carbon dioxide is often used to "stun" animals before slaughter. "Stunning" may be a misnomer, as the animals are not knocked out immediately and may suffer distress.
=== Inert gas ===
Carbon dioxide is one of the most commonly used compressed gases for pneumatic (pressurized gas) systems in portable pressure tools. Carbon dioxide is also used as an atmosphere for welding, although in the welding arc it reacts to oxidize most metals. Its use in the automotive industry is common despite significant evidence that welds made in carbon dioxide are more brittle than those made in more inert atmospheres. When used for MIG welding, CO2 use is sometimes referred to as MAG welding, for Metal Active Gas, as CO2 can react at these high temperatures. It tends to produce a hotter puddle than truly inert atmospheres, improving the flow characteristics, though this may be due to atmospheric reactions occurring at the puddle site. This is usually the opposite of the desired effect when welding, as it tends to embrittle the site, but it may not be a problem for general mild steel welding, where ultimate ductility is not a major concern.
Carbon dioxide is used in many consumer products that require pressurized gas because it is inexpensive and nonflammable, and because it undergoes a phase transition from gas to liquid at room temperature at an attainable pressure of approximately 60 bar (870 psi; 59 atm), allowing far more carbon dioxide to fit in a given container than otherwise would. Life jackets often contain canisters of pressurized carbon dioxide for quick inflation. Aluminium capsules of CO2 are also sold as supplies of compressed gas for air guns, paintball markers/guns, inflating bicycle tires, and for making carbonated water. High concentrations of carbon dioxide can also be used to kill pests. Liquid carbon dioxide is used in supercritical drying of some food products and technological materials, in the preparation of specimens for scanning electron microscopy and in the decaffeination of coffee beans.
=== Fire extinguisher ===
Carbon dioxide can be used to extinguish flames by flooding the environment around the flame with the gas. It does not itself react to extinguish the flame, but starves the flame of oxygen by displacing it. Some fire extinguishers, especially those designed for electrical fires, contain liquid carbon dioxide under pressure. Carbon dioxide extinguishers work well on small flammable-liquid and electrical fires, but not on ordinary combustible fires, because they do not cool the burning substances significantly; once the carbon dioxide disperses, the materials can reignite upon exposure to atmospheric oxygen. They are mainly used in server rooms.
Carbon dioxide has also been widely used as an extinguishing agent in fixed fire-protection systems for local application of specific hazards and total flooding of a protected space. International Maritime Organization standards recognize carbon dioxide systems for fire protection of ship holds and engine rooms. Carbon dioxide-based fire-protection systems have been linked to several deaths, because it can cause suffocation in sufficiently high concentrations. A review of CO2 systems identified 51 incidents between 1975 and the date of the report (2000), causing 72 deaths and 145 injuries.
=== Supercritical CO2 as solvent ===
Liquid carbon dioxide is a good solvent for many lipophilic organic compounds and is used to decaffeinate coffee. Carbon dioxide has attracted attention in the pharmaceutical and other chemical processing industries as a less toxic alternative to more traditional solvents such as organochlorides. It is also used by some dry cleaners for this reason. It is used in the preparation of some aerogels because of the properties of supercritical carbon dioxide.
=== Refrigerant ===
Liquid and solid carbon dioxide are important refrigerants, especially in the food industry, where they are employed during the transportation and storage of ice cream and other frozen foods. Solid carbon dioxide is called "dry ice" and is used for small shipments where refrigeration equipment is not practical. Solid carbon dioxide is always below −78.5 °C (−109.3 °F) at regular atmospheric pressure, regardless of the air temperature.
Liquid carbon dioxide (industry nomenclature R744 or R-744) was used as a refrigerant prior to the use of dichlorodifluoromethane (R12, a chlorofluorocarbon (CFC) compound). CO2 might enjoy a renaissance because one of the main substitutes for CFCs, 1,1,1,2-tetrafluoroethane (R134a, a hydrofluorocarbon (HFC) compound), contributes to climate change more than CO2 does. CO2's physical properties are highly favorable for cooling, refrigeration, and heating purposes, with a high volumetric cooling capacity. Due to the need to operate at pressures of up to 130 bars (1,900 psi; 13,000 kPa), CO2 systems require highly mechanically resistant reservoirs and components, which have already been developed for mass production in many sectors. In automobile air conditioning, in more than 90% of all driving conditions for latitudes higher than 50°, CO2 (R744) operates more efficiently than systems using HFCs (e.g., R134a). Its environmental advantages (GWP of 1, non-ozone-depleting, non-toxic, non-flammable) could make it the future working fluid to replace current HFCs in cars, supermarkets, and heat pump water heaters, among others. Coca-Cola has fielded CO2-based beverage coolers and the U.S. Army is interested in CO2 refrigeration and heating technology.
=== Minor uses ===
Carbon dioxide is the lasing medium in a carbon-dioxide laser, which is one of the earliest types of lasers.
Carbon dioxide can be used to control the pH of swimming pools by continuously adding gas to the water, keeping the pH from rising. Among the advantages of this is the avoidance of handling more hazardous acids. Similarly, it is used in maintaining reef aquaria, where it is commonly employed in calcium reactors to temporarily lower the pH of water being passed over calcium carbonate, allowing the calcium carbonate to dissolve into the water more freely, where it is used by some corals to build their skeletons.
Carbon dioxide is used as the primary coolant in the British advanced gas-cooled reactor for nuclear power generation.
Carbon dioxide induction is commonly used for the euthanasia of laboratory research animals. Methods to administer CO2 include placing animals directly into a closed, prefilled chamber containing CO2, or exposure to a gradually increasing concentration of CO2. The American Veterinary Medical Association's 2020 guidelines for carbon dioxide induction state that a displacement rate of 30–70% of the chamber or cage volume per minute is optimal for the humane euthanasia of small rodents. Percentages of CO2 vary for different species, based on identified optimal percentages to minimize distress.
Carbon dioxide is also used in several related cleaning and surface-preparation techniques.
== History of discovery ==
Carbon dioxide was the first gas to be described as a discrete substance. In about 1640, the Flemish chemist Jan Baptist van Helmont observed that when he burned charcoal in a closed vessel, the mass of the resulting ash was much less than that of the original charcoal. His interpretation was that the rest of the charcoal had been transmuted into an invisible substance he termed a "gas" (from Greek "chaos") or "wild spirit" (spiritus sylvestris).
The properties of carbon dioxide were further studied in the 1750s by the Scottish physician Joseph Black. He found that limestone (calcium carbonate) could be heated or treated with acids to yield a gas he called "fixed air". He observed that the fixed air was denser than air and supported neither flame nor animal life. Black also found that when bubbled through limewater (a saturated aqueous solution of calcium hydroxide), it would precipitate calcium carbonate. He used this phenomenon to illustrate that carbon dioxide is produced by animal respiration and microbial fermentation. In 1772, English chemist Joseph Priestley published a paper entitled Impregnating Water with Fixed Air in which he described a process of dripping sulfuric acid (or oil of vitriol as Priestley knew it) on chalk in order to produce carbon dioxide, and forcing the gas to dissolve by agitating a bowl of water in contact with the gas.
Carbon dioxide was first liquefied (at elevated pressures) in 1823 by Humphry Davy and Michael Faraday. The earliest description of solid carbon dioxide (dry ice) was given by the French inventor Adrien-Jean-Pierre Thilorier, who in 1835 opened a pressurized container of liquid carbon dioxide, only to find that the cooling produced by the rapid evaporation of the liquid yielded a "snow" of solid CO2.
Carbon dioxide in combination with nitrogen was known from earlier times as blackdamp, stythe or choke damp. Along with the other types of damp, it was encountered in mining operations and well sinking. Slow oxidation of coal and biological processes replaced the oxygen, creating a suffocating mixture of nitrogen and carbon dioxide.
== See also ==
== Notes ==
== References ==
== External links ==
Current global map of carbon dioxide concentration
CDC – NIOSH Pocket Guide to Chemical Hazards – Carbon Dioxide
Trends in Atmospheric Carbon Dioxide (NOAA)
The rediscovery of CO2 as a refrigerant: History (Shecco) |
Cargo aircraft | A cargo aircraft (also known as freight aircraft, freighter, airlifter or cargo jet) is a fixed-wing aircraft that is designed or converted for the carriage of cargo rather than passengers. Such aircraft generally feature one or more large doors for loading cargo. Passenger amenities are removed or not installed, although there are usually basic comfort facilities for the crew such as a galley, lavatory, and bunks in larger planes. Freighters may be operated by civil passenger or cargo airlines, by private individuals, or by government agencies of individual countries such as the armed forces.
Aircraft designed for cargo flight usually have features that distinguish them from conventional passenger aircraft: a wide or tall fuselage cross-section, a high wing so the cargo area sits near the ground, numerous wheels to allow landing at unprepared locations, and a high-mounted tail so cargo can be driven directly into and off the aircraft.
In 2015, dedicated freighters represented 43% of the 700 billion ATK (available tonne-kilometer) capacity, while 57% was carried in airliners' cargo holds. Also in 2015, Boeing forecast that by 2035 belly freight would rise to 63% of a total of 1,200 billion ATK, with specialised cargo aircraft representing 37%.
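For perspective (a quick sketch using only the figures in this paragraph), the forecast implies that dedicated-freighter capacity still grows in absolute terms even as its share of the market shrinks:

```python
# 2015: 700 billion ATK total; 43% on dedicated freighters, 57% as belly cargo.
freighter_2015 = 0.43 * 700   # ~301 billion ATK
belly_2015     = 0.57 * 700   # ~399 billion ATK

# Boeing's forecast for 2035: 1,200 billion ATK; 37% freighters, 63% belly.
freighter_2035 = 0.37 * 1200  # ~444 billion ATK
belly_2035     = 0.63 * 1200  # ~756 billion ATK

# Freighter capacity grows by nearly half in absolute terms despite the
# smaller share, because the overall market is forecast to grow faster.
print(freighter_2035 / freighter_2015)  # ~1.48
```

In other words, a falling market share does not imply a shrinking freighter fleet when total air-cargo capacity is forecast to grow by over 70%.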
The Cargo Facts Consulting firm forecasts that the global freighter fleet will rise from 1,782 to 2,920 cargo aircraft from 2019 to 2039.
== History ==
Aircraft were put to use carrying cargo in the form of air mail as early as 1911. Although the earliest aircraft were not designed primarily as cargo carriers, by the mid-1920s aircraft manufacturers were designing and building dedicated cargo aircraft.
In the UK during the early 1920s, the need was recognized for a freighter aircraft to transport troops and material quickly to pacify tribal revolts in the newly occupied territories of the Middle East. The Vickers Vernon, a development of the Vickers Vimy Commercial, entered service with the Royal Air Force as the first dedicated troop transport in 1921. In February 1923 this was put to use by the RAF's Iraq Command who flew nearly 500 Sikh troops from Kingarban to Kirkuk in the first ever strategic airlift of troops. Vickers Victorias played an important part in the Kabul Airlift of November 1928 – February 1929, when they evacuated diplomatic staff and their dependents together with members of the Afghan royal family endangered by a civil war. The Victorias also helped to pioneer air routes for Imperial Airways' Handley Page HP.42 airliners.
The World War II German design, the Arado Ar 232, was the first purpose-built cargo aircraft. The Ar 232 was intended to supplant the earlier Junkers Ju 52 freighter conversions, but only a few were built. Most other forces used freighter versions of airliners in the cargo role, most notably the C-47 Skytrain version of the Douglas DC-3, which served with practically every Allied nation. One important innovation for future cargo aircraft design was introduced in 1939, with the fifth and sixth prototypes of the Junkers Ju 90 four-engined military transport aircraft: the earliest known example of a rear loading ramp. This aircraft, like most of its era, used tail-dragger landing gear, which gave the aircraft a decided rearward tilt when landed. These prototypes introduced the Trapoklappe, a powerful hydraulic ramp/lift with a personnel stairway centered between the vehicle trackways, that raised the rear of the aircraft into the air and allowed easy loading. A similar rear loading ramp appeared in a somewhat different form on the nosewheel-equipped, late-World War II era American Budd RB-1 Conestoga twin-engined cargo aircraft.
Postwar Europe also played a major role in the development of the modern air cargo and air freight industry. During the Berlin Airlift, at the height of the Cold War, the West undertook a massive mobilization of aircraft to supply West Berlin with food and supplies in a virtually around-the-clock air bridge, after the Soviet Union closed and blockaded Berlin's land links to the west. To rapidly supply the needed numbers of aircraft, many older types, especially the Douglas C-47 Skytrain, were pressed into service. In operation it was found that unloading these older designs took as long as or longer than unloading the much larger tricycle-landing-gear Douglas C-54 Skymaster, which was easier to move about in when landed. The C-47s were quickly removed from service, and from then on flat decks were a requirement of all new cargo designs.
In the years following the war era a number of new custom-built cargo aircraft were introduced, often including some "experimental" features. For instance, the US's C-82 Packet featured a removable cargo area, while the C-123 Provider introduced the now-common rear fuselage/upswept tail shaping to allow for a much larger rear loading ramp. But it was the introduction of the turboprop that allowed the class to mature, and even one of its earliest examples, the C-130 Hercules, in the 21st century as the Lockheed Martin C-130J, is still the yardstick against which newer military transport aircraft designs are measured. Although larger, smaller and faster designs have been proposed for many years, the C-130 continues to improve at a rate that keeps it in production.
"Strategic" cargo aircraft became an important class of their own starting with the Lockheed C-5 Galaxy in the 1960s and a number of similar Soviet designs from the 70s and 80s, and culminating in the Antonov An-225, the world's largest aircraft. These designs offer the ability to carry the heaviest loads, even main battle tanks, at global ranges. The Boeing 747 was originally designed to the same specification as the C-5, but later modified as a design that could be offered as either passenger or all-freight versions. The "bump" on the top of the fuselage allows the crew area to be clear of the cargo containers sliding out of the front in the event of an accident.
When the Airbus A380 was announced, the maker originally accepted orders for the freighter version A380F, offering the second largest payload capacity of any cargo aircraft, exceeded only by the An-225. An aerospace consultant has estimated that the A380F would have 7% better payload and better range than the 747-8F, but also higher trip costs.
Starting in May 2020, the Portuguese carrier Hi Fly began chartering cargo flights with an A380, carrying medical supplies from China to different parts of the world in response to the COVID-19 outbreak. The aircraft can carry almost 320 m3 of cargo across its three decks. In November 2020, Emirates began offering an A380 "mini-freighter", which carries up to 50 tons of cargo in the belly of the plane.
== Importance ==
Cargo aircraft have had many uses throughout the years, though their current importance is rarely discussed. Cargo planes today can carry almost anything, ranging from perishables and supplies to fully built cars and livestock. Much of the demand for cargo aircraft comes from the growth of online shopping through retailers such as Amazon and eBay. Since most of these items are made all over the world, air cargo is used to get them from point A to point B as fast as possible. Air cargo adds significantly to world trade value: it transports over US$6 trillion worth of goods, accounting for approximately 35% of world trade by value. This helps producers keep the costs of goods down, allows consumers to purchase more items, and helps stores keep goods on the shelf.
Air cargo is important not only for delivery and shipping but also for employment: U.S. cargo airlines employed 268,730 workers in August 2023, 34% of the industry total.
== Cargo aircraft types ==
Nearly all commercial cargo aircraft presently in the fleet are derivatives or conversions of passenger aircraft. However, there are three other approaches to developing cargo aircraft.
=== Derivatives of non-cargo aircraft ===
Many types can be converted from airliner to freighter by installing a main deck cargo door with its control systems; upgrading floor beams for cargo loads and replacing passenger equipment and furnishings with new linings, ceilings, lighting, floors, drains and smoke detectors.
Specialized conversion-engineering teams rival those of Airbus and Boeing, giving the converted aircraft another 15–20 years of life.
Aeronautical Engineers Inc converts the Boeing 737-300/400/800, McDonnell Douglas MD-80 and Bombardier CRJ200.
Israel Aerospace Industries’ Bedek Aviation converts the 737-300/400/700/800 in about 90 days, 767-200/300s in about four months and 747-400s in five months, and is looking at the Boeing 777, Airbus A330 and A321. Voyageur Aviation located in North Bay, Ontario converts the DHC-8-100 into the DHC-8-100 Package Freighter Conversion.
An A300B4-200F conversion cost $5M in 1996, an A300-600F $8M in 2001, a McDonnell Douglas MD-11F $9M in 1994, a B767-300ERF $13M in 2007, a Boeing 747-400 PSF $22M in 2006, an A330-300 P2F was estimated at $20M in 2016 and a Boeing 777-200ER BCF at $40M in 2017.
By avoiding the main deck door installation and relying on lighter elevators between decks, LCF Conversions wants to convert A330/A340s or B777s for $6.5M to $7.5M.
In the mid-2000s, passenger 747-400s cost $30–50 million before a $25 million conversion, a Boeing 757 had to cost $15 million before conversion, falling to below $10 million by 2018, and $5 million for a 737 Classic, falling to $2–3 million for a Boeing 737-400 by 2018.
Derivative freighters have most of their development costs already amortized, and lead time before production is shorter than all new aircraft. Converted cargo aircraft use older technology; their direct operating costs are higher than what might be achieved with current technology. Since they have not been designed specifically for air cargo, loading and unloading is not optimized; the aircraft may be pressurized more than necessary, and there may be unnecessary apparatus for passenger safety.
=== Dedicated civilian cargo aircraft ===
A dedicated commercial air freighter is an airplane designed from the outset as a freighter, with no restrictions imposed by either passenger or military requirements. Over the years, there has been a dispute concerning the cost effectiveness of such an airplane, with some cargo carriers stating that they could consistently earn a profit if they had such an aircraft. To help resolve this disagreement, the National Aeronautics and Space Administration (NASA) selected two contractors, Douglas Aircraft Co. and Lockheed-Georgia Co., to independently evaluate the possibility of producing such a freighter by 1990. This was done as part of the Cargo/Logistics Airlift Systems Study (CLASS). At comparable payloads, a dedicated cargo aircraft was said to provide a 20 percent reduction in trip cost and a 15 percent decrease in aircraft price compared to other cargo aircraft. These findings, however, are extremely sensitive to assumptions about fuel and labor costs and, most particularly, to growth in demand for air cargo services. Further, they ignore the competitive situation brought about by the lower capital costs of future derivative air cargo aircraft.
The main advantage of the dedicated air freighter is that it can be designed specifically for air freight demand, providing the type of loading and unloading, flooring, fuselage configuration, and pressurization optimized for its mission. Moreover, it can make full use of NASA's ACEE results, with the potential of significantly lowering operating costs and fuel usage. However, a dedicated freighter must bear its development and production costs alone. Such a high overhead raises the price of the airplane and its direct operating cost (because of depreciation and insurance costs) and increases the financial risks to investors, especially since it would be competing with derivatives which have much smaller development costs per unit and which themselves have incorporated some of the cost-reducing technology.
=== Joint civil-military cargo aircraft ===
One benefit of a combined development is that the development costs would be shared by the civil and military sectors, and the number of airplanes required by the military could be decreased by the number of civil reserve airplanes purchased by air carriers and available to the military in case of emergency.
There are some possible drawbacks: the restrictions imposed by joint development, the penalties that would be suffered by both civil and military airplanes, and the difficulty of finding an organizational structure that enables the necessary compromise. Some features appropriate to a military aircraft would have to be rejected because they are not suitable for a civil freighter. Moreover, each airplane would have to carry some weight it would not carry if it were independently designed. This additional weight lessens the payload and the profitability of the commercial version. It could be compensated either by a transfer payment at acquisition or by an operating penalty compensation payment. Most important, it is not clear that there will be an adequate market for the civil version or that it will be cost competitive with derivatives of passenger aircraft.
=== Unpiloted cargo aircraft ===
Rapid-delivery demand and e-commerce growth stimulated the development of UAV freighters around 2020:
Californian Elroy Air wants to replace trucks on inefficient routes and should fly a subscale prototype;
Californian Natilus plans a Boeing 747 sized transpacific unpiloted freighter and should fly a subscale prototype;
Californian Sabrewing Aircraft targets a small regional unpiloted freighter and planned to fly a 65%-scale vehicle in fall 2018;
The Chinese Academy of Sciences flew its 1.5-tonne (3,300 lb) payload AT200 in October 2017 based on New Zealand's PAC P-750 XSTOL utility turboprop
Chinese package carrier SF Express conducted emergency logistics tests in December 2017 with a Tengoen Technologies TB001 medium-altitude UAV, and plans an eight-turbofan aircraft carrying 20 t (44,000 lb) more than 7,600 km (4,100 nmi)
Boeing flew its Boeing Cargo Air Vehicle prototype, an electric vertical-takeoff-and-landing (eVTOL) craft.
Carpinteria, California-startup Dorsal Aircraft wants to make light standard ISO containers part of its unpiloted freighter structure where the wing, engines and tail are attached to a dorsal spine fuselage.
Interconnecting 1.5–15.2-metre-long (5–50 ft) aluminum containers would carry the flight loads, aiming to lower overseas airfreight costs by 60%; the company plans to convert C-130Hs with the help of Wagner Aeronautical of San Diego, which is experienced in passenger-to-cargo conversions.
Beijing-based Beihang UAS Technology developed its BZK-005 high-altitude, long-range UAV for cargo transport, capable of carrying 1.2 t (2,600 lb) over 1,200 km (650 nmi) at 5,000 m (16,000 ft).
Garuda Indonesia will test three of them initially from September 2019, before operations in the fourth quarter.
Garuda plans up to 100 cargo UAVs to connect remote regions with limited airports in Maluku, Papua, and Sulawesi.
== Examples ==
=== Early air mail and airlift logistics aircraft ===
Avro Lancastrian (Transatlantic mail)
Avro York (Berlin Airlift)
Boeing C-7000
Curtiss JN-4
Douglas M-2
Douglas DC-3
Douglas DC-4
Douglas DC-6
=== Converted airliners ===
=== Oversize transport ===
=== Light aircraft ===
=== Military cargo aircraft ===
=== Experimental cargo aircraft ===
Hughes H-4 Hercules ("Spruce Goose")
Lockheed R6V Constitution
LTV XC-142
=== Comparisons ===
== See also ==
Airlift
Air transport
Cargo airline
Combi aircraft
Modular aircraft
Preighter
== References ==
== External links ==
Airlift Cargo Aircraft
History of the Airmail Service
Indo-Russian Transport Aircraft (IRTA)
Cessna | Cessna is an American brand of general aviation aircraft owned by Textron Aviation since 2014, headquartered in Wichita, Kansas. Originally, it was a brand of the Cessna Aircraft Company, an American general aviation aircraft manufacturing corporation also headquartered in Wichita. The company produced small, piston-powered aircraft, as well as business jets. For much of the mid-to-late 20th century, Cessna was one of the highest-volume and most diverse producers of general aviation aircraft in the world. It was founded in 1927 by Clyde Cessna and Victor Roos and was purchased by General Dynamics in 1985, then by Textron in 1992. In March 2014, when Textron purchased the Beechcraft and Hawker Aircraft corporations, Cessna ceased operations as a subsidiary company, and joined the others as one of the three distinct brands produced by Textron Aviation.
Throughout its history, and especially in the years following World War II, Cessna became best known for producing small, high-wing, piston aircraft. Its most popular and iconic aircraft is the Cessna 172, delivered since 1956 (with a break from 1986 to 1996), with more sold than any other aircraft in history. Since the first model was delivered in 1972, the brand has also been well known for its Citation family of low-wing business jets which vary in size.
== History ==
=== Origins ===
Clyde Cessna, a farmer in Rago, Kansas, built his own aircraft and flew it in June 1911. He was the first person to do so between the Mississippi River and the Rocky Mountains. Cessna started his wood-and-fabric aircraft ventures in Enid, Oklahoma, testing many of his early planes on the salt flats. When bankers in Enid refused to lend him more money to build his planes, he moved to Wichita.
Cessna Aircraft was formed when Clyde Cessna and Victor Roos became partners in the Cessna-Roos Aircraft Company in 1927. Roos resigned just one month into the partnership, selling back his interest to Cessna. Shortly afterward, Roos's name was dropped from the company name.
The Cessna DC-6 earned certification on October 29, 1929, the day of the stock market crash.
In 1932, the Cessna Aircraft Company closed due to the Great Depression.
However, the Cessna CR-3 custom racer made its first flight in 1933. The plane won the 1933 American Air Race in Chicago and later set a new world speed record for engines smaller than 500 cubic inches by averaging 237 mph (381 km/h).
Cessna's nephews, brothers Dwane and Dwight Wallace, bought the company from Cessna in 1934. They reopened it and began the process of building it into what would become a global success.
The Cessna C-37 was introduced in 1937 as Cessna's first seaplane when equipped with Edo floats. In 1940, Cessna received their largest order to date, when they signed a contract with the U.S. Army for 33 specially equipped Cessna T-50s, their first twin engine plane. Later in 1940, the Royal Canadian Air Force placed an order for 180 T-50s.
=== Postwar boom ===
Cessna returned to commercial production in 1946, after the revocation of wartime production restrictions (L-48), with the release of the Model 120 and Model 140. The approach was to introduce a new line of all-metal aircraft that used production tools, dies and jigs, rather than the hand-built tube-and-fabric construction process used before the war.
The Model 140 was named by the US Flight Instructors Association as the "Outstanding Plane of the Year" in 1948.
Cessna's first helicopter, the Cessna CH-1, received FAA type certification in 1955.
Cessna introduced the Cessna 172 in 1956. It became the most produced airplane in history. During the post-World War II era, Cessna was known as one of the "Big Three" in general aviation aircraft manufacturing, along with Piper and Beechcraft.
In 1959, Cessna acquired Aircraft Radio Corporation (ARC), of Boonton, New Jersey, a leading manufacturer of aircraft radios. During these years, Cessna expanded the ARC product line, and rebranded ARC radios as "Cessna" radios, making them the "factory option" for avionics in new Cessnas. However, during this time, ARC radios suffered a severe decline in quality and popularity. Cessna kept ARC as a subsidiary until 1983, selling it to avionics-maker Sperry.
In 1960, Cessna acquired McCauley Industrial Corporation, of Ohio, a leading manufacturer of propellers for light aircraft. McCauley became the world's leading producer of general aviation aircraft propellers, largely through their installation on Cessna airplanes.
In 1960, Cessna affiliated itself with Reims Aviation of Reims, France. In 1963, Cessna produced its 50,000th airplane, a Cessna 172.
Cessna's first business jet, the Cessna Citation I, performed its maiden flight on September 15, 1969.
Cessna produced its 100,000th single-engine airplane in 1975.
In 1985, Cessna ceased to be an independent company. It was purchased by General Dynamics. Production of the Cessna Caravan began. General Dynamics in turn sold Cessna to Textron in 1992.
Late in 2007, Cessna purchased the bankrupt Columbia Aircraft company for US$26.4M and would continue production of the Columbia 350 and 400 as the Cessna 350 and Cessna 400 at the Columbia factory in Bend, Oregon. However, production of both aircraft had ended by 2018.
=== Chinese production controversy ===
On November 27, 2007, Cessna announced the then-new Cessna 162 would be built in China by Shenyang Aircraft Corporation, which is a subsidiary of the China Aviation Industry Corporation I (AVIC I), a Chinese government-owned consortium of aircraft manufacturers. Cessna reported that the decision was made to save money and also that the company had no more plant capacity in the United States at the time. Cessna received much negative feedback for this decision, with complaints centering on the recent quality problems with Chinese production of other consumer products, China's human rights record, exporting of jobs and China's less than friendly political relationship with the United States. The customer backlash surprised Cessna and resulted in a company public relations campaign. In early 2009, the company attracted further criticism for continuing plans to build the 162 in China while laying off large numbers of workers in the United States. In the end, the Cessna 162 was not a commercial success and only a small number were delivered before production was cancelled.
=== 2008–2010 economic crisis ===
The company's business suffered notably during the late-2000s recession, laying off more than half its workforce between January 2009 and September 2010.
On November 4, 2008, Cessna's parent company, Textron, indicated that Citation production would be reduced from the original 2009 target of 535 "due to continued softening in the global economic environment" and that this would result in an undetermined number of lay-offs at Cessna.
On November 8, 2008, at the Aircraft Owners and Pilots Association (AOPA) Expo, CEO Jack Pelton indicated that sales of Cessna aircraft to individual buyers had fallen, but piston and turboprop sales to businesses had not. "While the economic slowdown has created a difficult business environment, we are encouraged by brisk activity from new and existing propeller fleet operators placing almost 200 orders for 2009 production aircraft," Pelton stated.
Beginning in January 2009, a total of 665 jobs were cut at Cessna's Wichita and Bend, Oregon, plants. The Cessna factory at Independence, Kansas, which builds the Cessna piston-engined aircraft and the Cessna Mustang, did not see any layoffs, but one third of the workforce at the former Columbia Aircraft facility in Bend was laid off. This included 165 of the 460 employees who built the Cessna 350 and 400. The remaining 500 jobs were eliminated at the main Cessna Wichita plant.
In January 2009, the company laid off an additional 2,000 employees, bringing the total to 4,600. The job cuts included 120 at the Bend, Oregon, facility reducing the plant that built the Cessna 350 and 400 to fewer than half the number of workers that it had when Cessna bought it. Other cuts included 200 at the Independence, Kansas, plant that builds the single-engined Cessnas and the Mustang, reducing that facility to 1,300 workers.
On April 29, 2009, the company suspended the Citation Columbus program and closed the Bend, Oregon, facility. The Columbus program was finally cancelled in early July 2009. The company reported, "Upon additional analysis of the business jet market related to this product offering, we decided to formally cancel further development of the Citation Columbus". With the 350 and 400 production moving to Kansas, the company indicated that it would lay off 1,600 more workers, including the remaining 150 employees at the Bend plant and up to 700 workers from the Columbus program.
In early June 2009, Cessna laid off an additional 700 salaried employees, bringing the total number of lay-offs to 7,600, which was more than half the company's workers at the time.
The company closed its three Columbus, Georgia, manufacturing facilities between June 2010 and December 2011. The closures included the new 100,000-square-foot (9,300 m2) facility that was opened in August 2008 at a cost of US$25M, plus the McCauley Propeller Systems plant. These closures resulted in total job losses of 600 in Georgia. Some of the work was relocated to Cessna's Independence, Kansas, or Mexican facilities.
Cessna's parent company, Textron, posted a loss of US$8M in the first quarter of 2010, largely driven by continuing low sales at Cessna, which were down 44%. Half of Cessna's workforce remained laid-off and CEO Jack Pelton stated that he expected the recovery to be long and slow.
In September 2010, a further 700 employees were laid off, bringing the total to 8,000 jobs lost. CEO Jack Pelton indicated this round of layoffs was due to a "stalled [and] lackluster economy" and noted that while the number of orders cancelled for jets had been decreasing, new orders had not met expectations. Pelton added, "our strategy is to defend and protect our current markets while investing in products and services to secure our future, but we can do this only if we succeed in restructuring our processes and reducing our costs."
=== 2010s ===
On May 2, 2011, CEO Jack J. Pelton retired. The new CEO, Scott A. Ernest, started on May 31, 2011. Ernest joined Textron after 29 years at General Electric, where he had most recently served as vice president and general manager, global supply chain for GE Aviation. Ernest previously worked for Textron CEO Scott Donnelly when both worked at General Electric.
In September 2011, the Federal Aviation Administration (FAA) proposed a US$2.4 million fine against the company for its failure to follow quality assurance requirements while producing fiberglass components at its plant in Chihuahua, Mexico. Excess humidity meant that the parts did not cure correctly and quality assurance did not detect the problems. The failure to follow procedures resulted in the delamination in flight of a 7 ft (2.1 m) section of one Cessna 400's wing skin from the spar while the aircraft was being flown by an FAA test pilot. The aircraft was landed safely. The FAA also discovered 82 other aircraft parts that had been incorrectly made and not detected by the company's quality assurance. The investigation resulted in an emergency Airworthiness Directive that affected 13 Cessna 400s.
Since March 2012, Cessna has been pursuing building business jets in China as part of a joint venture with Aviation Industry Corporation of China (AVIC). The company stated that it intends to eventually build all aircraft models in China, saying "The agreements together pave the way for a range of business jets, utility single-engine turboprops and single-engine piston aircraft to be manufactured and certified in China."
In late April 2012, the company added 150 workers in Wichita as a result of anticipated increased demand for aircraft production. Overall, they have cut more than 6000 jobs in the Wichita plant since 2009.
In March 2014, Cessna ceased operations as a company and instead became a brand of Textron Aviation.
== Marketing initiatives ==
During the 1950s and 1960s, Cessna's marketing department followed the lead of Detroit automakers and came up with many unique marketing terms in an effort to differentiate its product line from those of its competitors.
Other manufacturers and the aviation press widely ridiculed and spoofed many of the marketing terms, but Cessna built and sold more aircraft than any other manufacturer during the boom years of the 1960s and 1970s.
Generally, the names of Cessna models do not follow a theme, but there is usually logic to the numbering: the 100 series are the light singles, the 200s are the heftier singles, the 300s are light to medium twins, the 400s have "wide oval" cabin-class accommodation and the 500s are jets. Many Cessna models have names starting with C for the sake of alliteration (e.g. Citation, Crusader, Chancellor).
=== Company terminology ===
Cessna marketing terminology includes:
Para-Lift Flaps – Large Fowler flaps Cessna introduced on the 170B in 1952, replacing the narrow chord plain flaps then in use.
Land-O-Matic – In 1956, Cessna introduced sprung-steel tricycle landing gear on the 172. The marketing department chose "Land-O-Matic" to imply that these aircraft were much easier to land and take off than the preceding conventional landing gear equipped Cessna 170. They even went as far as to say pilots could do "drive-up take-offs and drive-in landings", implying that flying these aircraft was as easy as driving a car. In later years, some Cessna models had their steel sprung landing gear replaced with steel tube gear legs. The 206 retains the original spring steel landing gear today.
Omni-Vision – The rear windows on some Cessna singles, starting with the 182 and 210 in 1962 and followed by the 172 and 150 in 1963 and 1964 respectively. The term was intended to make the pilot feel visibility was improved on the notably poor-visibility Cessna line. In most models, the introduction of the rear window cost some cruise speed due to the extra drag, while not adding any useful visibility.
Cushioned Power – The rubber mounts on the cowling of the 1967 model 150, in addition to the rubber mounts isolating the engine from the cabin.
Omni-Flash – The flashing beacon on the tip of the fin that could be seen all around.
Open-View – This referred to the removal of the top section of the control wheel in 1967 models. The wheels had been rectangular; they now became "ram's horn" shaped, thus not blocking the instrument panel as much.
Quick-Scan – Cessna introduced a new instrument panel layout in the 1960s and this buzzword was to indicate Cessna's panels were ahead of the competition.
Nav-O-Matic – The name of the Cessna autopilot system, which implied the system was relatively simple.
Camber-Lift – A marketing name used to describe Cessna aircraft wings starting in 1972 when the aerodynamics designers at Cessna added a slightly drooped leading edge to the standard NACA 2412 airfoil used on most of the light aircraft fleet. Writer Joe Christy described the name as "stupid" and added "Is there any other kind [of lift]?"
Stabila-Tip – Cessna started commonly using wingtip fuel tanks, carefully shaped for aerodynamic effect rather than being tubular-shaped. Tip tanks do have an advantage of reducing free surface effect of fuel affecting the balance of the aircraft in rolling maneuvers.
== Aircraft ==
In October 2020, Textron Aviation was producing the following Cessna-branded models:
Cessna 172 Skyhawk – high-wing, single piston-engined, four-seat aircraft in production since 1956
Cessna 182 Skylane – high-wing, single piston-engined, four-seat aircraft in production since 1956
Cessna 206 Stationair – high-wing, single piston-engined, six-seat utility aircraft in production since 1962
Cessna 208 Caravan – high-wing single-turboprop utility aircraft in production since 1984
Cessna 408 SkyCourier – high-wing twin-turboprop utility aircraft in production since 2022
Cessna Citation family – twin-engined business jets
Cessna Citation 525 M2/CJ series – in production since 1991
Cessna Citation 560XL Excel – in production since 1996
Cessna Citation 680 Sovereign – out of production since 2021
Cessna Citation 680A Latitude – in production since 2014
Cessna Citation 700 Longitude – in production since 2019
== References ==
== External links ==
Official website
"Cessna aircraft history, performance and specifications". PilotFriend.com.
"Patents owned by Cessna Aircraft Company". US Patent & Trademark Office. Archived from the original on June 12, 2019. Retrieved December 5, 2005.
Cessna Aircraft Company (2008). "Genealogy of Aircraft" (PDF).
Mort Brown Cessna Special Collection – Personal collection of documents belonging to a former chief test pilot |
Cessna 172 | The Cessna 172 Skyhawk is an American four-seat, single-engine, high wing, fixed-wing aircraft made by the Cessna Aircraft Company. First flown in 1955, more 172s have been built than any other aircraft. It was developed from the 1948 Cessna 170 but with tricycle landing gear rather than conventional landing gear. The Skyhawk name was originally used for a trim package, but was later applied to all standard-production 172 aircraft, while some upgraded versions were marketed as the Cutlass, Powermatic, and Hawk XP. The aircraft was also produced under license in France by Reims Aviation, which marketed upgraded versions as the Reims Rocket.
Measured by its longevity and popularity, the Cessna 172 is the most successful aircraft in history. Cessna delivered the first production model in 1956, and as of 2015, the company and its partners had built more than 44,000 units. With a break from 1986–96, the aircraft remains in production today.
A light general aviation airplane, the Skyhawk's main competitors throughout much of its history were the Beechcraft Musketeer and Grumman American AA-5 series, though neither is currently in production. Other prominent competitors still in production include the Piper PA-28 Cherokee, and, more recently, the Diamond DA40 Diamond Star and Cirrus SR20.
== Design and development ==
The Cessna 172 started as a tricycle landing gear variant of the taildragger Cessna 170, with a basic level of standard equipment. In January 1955, Cessna flew an improved variant of the Cessna 170, a Continental O-300-A-powered Cessna 170C with larger elevators and a more angular tailfin. Although the variant was tested and certified, Cessna decided to modify it with a tricycle landing gear, and the modified Cessna 170C flew again on June 12, 1955. To reduce the time and cost of certification, the type was added to the Cessna 170 type certificate as the Model 172. Later, the 172 was given its own type certificate. The 172 became an overnight sales success, and over 1,400 were built in 1956, its first full year of production.
Early 172s were similar in appearance to the 170s, with the same straight aft fuselage and tall landing gear legs, although the 172 had a straight tailfin while the 170 had a rounded fin and rudder. In 1960, the 172A incorporated revised landing gear and the swept-back tailfin, which is still in use today.
The final aesthetic development, found in the 1963 172D and all later 172 models, was a lowered rear deck allowing an aft window. Cessna advertised this added rear visibility as "Omni-Vision".
Production halted in 1986 because of product liability costs, but resumed in 1996 at Cessna's new factory at Independence, Kansas, with the Cessna 172R Skyhawk. Cessna supplemented this in 1998 with the 180 hp (134 kW) Cessna 172S Skyhawk SP.
=== Modifications ===
The Cessna 172 may be modified via a wide array of supplemental type certificates (STCs), including increased engine power and higher gross weights. Available STC engine modifications increase power from 180 to 210 hp (134 to 157 kW), add constant-speed propellers, or allow the use of automobile gasoline. Other modifications include additional fuel tank capacity in the wing tips, added baggage compartment tanks, added wheel pants to reduce drag, or enhanced landing and takeoff performance and safety with a STOL kit. The 172 has also been equipped with the 180 hp (134 kW) fuel injected Superior Air Parts Vantage engine.
== Operational history ==
=== World records ===
From December 4, 1958, to February 7, 1959, Robert Timm and John Cook set the world record for (refueled) flight endurance in a used Cessna 172, registration number N9172B. They took off from McCarran Field (now Harry Reid International Airport) in Las Vegas, Nevada, and landed back at McCarran Field after 64 days, 22 hours, 19 minutes and 5 seconds in a flight covering an estimated 150,000 miles (240,000 km), over 6 times further than flying around the world at the equator. The flight was part of a fund-raising effort for the Damon Runyon Cancer Fund. The aircraft is now on display at the airport.
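The record figures above are easy to sanity-check. A minimal sketch, where the only number not taken from the paragraph is the Earth's equatorial circumference of about 24,901 statute miles:

```python
# Back-of-the-envelope check of the Timm/Cook endurance-record figures.
# The 150,000-mile distance was itself an estimate; the 24,901-mile
# equatorial circumference is the one figure assumed here.
EQUATOR_MI = 24_901
distance_mi = 150_000

# How many equatorial circumnavigations the estimated distance represents:
laps = distance_mi / EQUATOR_MI
print(f"{laps:.2f} equatorial circumnavigations")  # just over 6

# Average ground speed over 64 days, 22 hours, 19 minutes, 5 seconds aloft:
hours = 64 * 24 + 22 + 19 / 60 + 5 / 3600
print(f"{distance_mi / hours:.0f} mph average")
```

The ratio comes out to roughly 6.02, consistent with the "over 6 times" claim, and implies an average ground speed in the mid-90s mph, plausible for a Cessna 172 flying continuous low-altitude orbits.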
== Variants ==
Cessna has historically used model years similar to U.S. auto manufacturers, with sales of new models typically starting a few months prior to the actual calendar year.
172
Introduced in November 1955 for the 1956 model year as a development of the Cessna 170B with tricycle landing gear, dubbed "Land-O-Matic" by Cessna. The 172 also featured a redesigned tail similar to the experimental 170C, "Para-Lift" flaps, and a maximum gross weight of 2,200 lb (998 kg) while retaining the 170B's 145 hp (108 kW) Continental O-300-A six-cylinder, air-cooled engine. The 1957 and 1958 model years brought only minor changes, while 1959 introduced a new cowling for improved engine cooling. The prototype 172, c/n 612, was modified from 170 c/n 27053, which previously served as the prototype of the 170B. A total of 3,757 were constructed over the four model years: 1,178 (1956), 1,041 (1957), 750 (1958), and 788 (1959).
172A
1960 model year with a swept-back vertical tail and rudder and powered by a 145 hp (108 kW) O-300-C engine. It was also the first 172 to be certified for floatplane operation. 994 built.
172B
1961 model year with shorter landing gear, engine mounts lengthened by three inches (76 mm), a reshaped cowling, a pointed propeller spinner, and an increased gross weight of 2,250 lb (1,021 kg). The stepped firewall introduced in the closely related Cessna 175 was adopted in the 172, along with the 175's wider, rearranged instrument panel located further aft in the fuselage. For the first time, the Skyhawk name was applied to an available deluxe option package that included optional wheel fairings, avionics, and a cargo door along with full exterior paint rather than partial paint stripes. The Skyhawk was also powered by an O-300-D in place of the O-300-C of the standard model. 989 built.
172C
1962 model year with fiberglass wingtips, redesigned wheel fairings, a key starter to replace the previous pull-starter, and an optional autopilot. The seats were redesigned to be six-way adjustable, and a child seat was made optional to allow two children to be carried in the baggage area. 810 built.
172D
1963 model year with a cut down rear fuselage with a wraparound Omni-Vision rear window, a one-piece windshield, increased horizontal stabilizer span, and a folding hat shelf in the rear cabin. Gross weight was increased to 2,300 lb (1,043 kg), where it would stay until the 172P. New rudder and brake pedals were also added. 1,011 were built by Cessna, while a further 18 were produced by Reims Aviation in France as the F172D.
172E
1964 model year with a redesigned instrument panel with center-mounted avionics and circuit breakers replacing the electrical fuses of previous models. 1,209 built, 67 built by Reims as the F172E.
172F
1965 model year with electrically-operated flaps to replace the previous lever-operated system and improved instrument lighting. 1,400 built, plus 94 by Reims as the F172F.
The 172F formed the basis for the U.S. Air Force's T-41A Mescalero primary trainer, which was used during the 1960s and early 1970s as initial flight screening aircraft in USAF Undergraduate Pilot Training (UPT). Following their removal from the UPT program, some extant USAF T-41s were assigned to the U.S. Air Force Academy for the cadet pilot indoctrination program, while others were distributed to Air Force aero clubs.
172G
1966 model year with a longer, more pointed spinner and sold for US$12,450 in its basic 172 version and US$13,300 in the upgraded Skyhawk version. 1,474 built (including 26 as the T-41A), plus 140 by Reims as the F172G.
172H
1967 model year with a 60A alternator replacing the generator, a rotating beacon replacing the flashing unit, redesigned wheel fairings, and a shorter-stroke nose gear oleo to reduce drag and improve the appearance of the aircraft in flight. A new cowling was used, introducing shock-mounts that transmitted lower noise levels to the cockpit and reduced cowl cracking. The electric stall warning horn was replaced by a pneumatic one. 1,586 built (including 34 as the T-41A), plus 435 by Reims as the F172H for both the 1967 and 1968 model years.
172I
The 1968 model year marked the beginning of the Lycoming-powered 172s, with the 172I introduced with a Lycoming O-320-E2D engine of 150 hp (112 kW), an increase of 5 hp (3.7 kW) over the Continental powerplant. The increased power resulted in an increase in optimal cruise from 130 mph (209 km/h) true airspeed (TAS) to 131 mph (211 km/h) TAS. There was no change in the sea level rate of climb at 645 ft (197 m) per minute. Starting with this model, the standard and deluxe Skyhawk models were no longer powered by different engines. The 172I also introduced the first standard "T" instrument arrangement. 649 built.
172J
For 1968, Cessna planned to replace the 172 with a newly designed aircraft called the 172J, featuring the same general configuration but with a more sloping windshield, a strutless cantilever wing, a more stylish interior, and various other improvements. A single 172J prototype, registered N3765C (c/n 660), was built. However, the popularity of the previous 172 with Cessna dealers and flight schools prompted the cancellation of the replacement plan, and the 172J was redesignated as the 177 from the second prototype onward and sold alongside the 172.
172K
Introduced for the 1969 model year with a redesigned tailfin cap and reshaped rear windows enlarged by 16 square inches (103 cm2). Optional long-range 52 US gal (197 L) wing fuel tanks were also offered. The 1970 model year featured fiberglass, downward-shaped, conical camber wingtips and optional fully articulated seats. 2,055 built for both model years, plus 50 by Reims as the F172K.
172L
Introduced for the 1971 model year with tapered, tubular steel landing gear legs replacing the original flat spring steel legs, increasing landing gear width by 12 in (30 cm). The new landing gear was lighter, but required aerodynamic fairings to maintain the same speed and climb performance as experienced with the flat steel design. The 172L also had a nose-mounted landing light, a bonded baggage door, and optional cabin skylights. The 1972 model year introduced a plastic fairing between the dorsal fin and vertical fin to introduce a greater family resemblance to the 182's vertical fin. 1972 also introduced a reduced-diameter propeller, bonded cabin doors, and improved instrument panel controls. 1,535 built for both model years, plus 100 by Reims as the F172L.
172M
Introduced for the 1973 model year with a "Camber-Lift" wing with a drooped leading edge for improved low-speed handling, a key-locking baggage door, and new lighting switches. The 1974 model year introduced the Skyhawk II, which was sold alongside the baseline 172M and Skyhawk models with higher standard equipment, including a second nav/comm radio, an ADF and transponder, a larger baggage compartment, and nose-mounted dual landing lights. 1975 introduced inertia-reel shoulder harnesses and an improved instrument panel and door seals. Beginning in 1976, Cessna stopped marketing the aircraft as the 172 and began exclusively using the "Skyhawk" designation. This model year also saw a redesigned instrument panel to hold more avionics. Among other changes, the fuel and other small gauges were relocated to the left side for improved pilot readability compared with the earlier 172 panel designs. 6,826 built; 4,926 (1973–75) and 1,900 (1976), plus 610 by Reims as the F172M.
172N Skyhawk/100
1977 model year powered by a 160 horsepower (119 kW) Lycoming O-320-H2AD engine designed to run on 100-octane fuel (hence the "Skyhawk/100" name), whereas all previous engines used 80/87 fuel. Other changes included pre-select flap control and optional rudder trim. The 1978 model year brought a 28-volt electrical system to replace the previous 14-volt system as well as optional air conditioning. The 1979 model year increased the flap-extension speed to 110 knots (204 km/h). 6,425 total built; 1,725 (1977), 1,725 (1978), 1,850 (1979), and 1,125 (1980), plus 525 by Reims as the F172N.
172O
There was no "O" model 172, to avoid confusion with the number zero.
172P Skyhawk P
Introduced for the 1981 model year with a Lycoming O-320-D2J engine replacing the O-320-H2AD of the 172N, which had proven unreliable. Other changes included a decreased maximum flap deflection from 40 degrees to 30 to allow a gross weight increase from 2,300 lb (1,043 kg) to 2,400 lb (1,089 kg). A 62 US gal (235 L) wet wing and air conditioning were optional. The 1982 model year moved the landing lights from the nose to the wing to increase bulb life, while 1983 added some minor soundproofing improvements and thicker windows. 1984 introduced a second door latch pin, a thicker windshield and side windows, additional avionics capacity, and low-vacuum warning lights. 2,664 total built; 1,052 (1981), 724 (1982), 319 (1983), 179 (1984), 256 (1985), and 134 (1986), plus 215 by Reims as the F172P. Following the end of 172P production in 1986, Cessna ceased production of the Skyhawk for ten years.
172Q Cutlass
Introduced for the 1983 model year, the 172Q was given the name "Cutlass" to create an affiliation with the 172RG Cutlass RG, although it was actually a 172P with a Lycoming O-360-A4N engine of 180 horsepower (134 kW). The aircraft had a gross weight of 2,550 lb (1,157 kg) and an optimal cruise speed of 122 knots (226 km/h) compared to the 172P's cruise speed of 120 knots (222 km/h) on 20 hp (15 kW) less. It had a useful load that was about 100 lb (45 kg) more than the Skyhawk P and a rate of climb that was actually 20 feet (6 m) per minute lower, due to the higher gross weight. The Cutlass II was offered as a deluxe model of the 172Q, as was the Cutlass II/Nav-Pac with IFR equipment. The 172Q was produced alongside the 172P for the 1983 and 1984 model years before being discontinued. Sources disagree on the exact number of 172Q aircraft built, and the construction numbers listed on the Federal Aviation Administration type certificate overlap with those of the 1983 and 1984 172P.
172R Skyhawk R
The Skyhawk R was introduced in 1996 and is powered by a derated Lycoming IO-360-L2A producing a maximum of 160 horsepower (120 kW) at just 2,400 rpm. This is the first Cessna 172 to have a factory-fitted fuel-injected engine.
The 172R's maximum takeoff weight is 2,450 lb (1,111 kg). This model year introduced many improvements, including a new interior with soundproofing, an all new multi-level ventilation system, a standard four point intercom, contoured, energy absorbing, 26g front seats with vertical and reclining adjustments and inertia reel harnesses.
172S Skyhawk SP
The Cessna 172S was introduced in 1998 and is powered by a Lycoming IO-360-L2A producing 180 horsepower (134 kW). The maximum engine rpm was increased from 2,400 rpm to 2,700 rpm resulting in a 20 hp (15 kW) increase over the "R" model. As a result, the maximum takeoff weight was increased to 2,550 lb (1,157 kg). This model is marketed under the name Skyhawk SP, although the Type Certification data sheet specifies it is a 172S.
The 172S is built primarily for the private owner-operator and is, in its later years, offered with the Garmin G1000 avionics package and leather seats as standard equipment.
As of 2009, the 172S model was the only Skyhawk model in production.
=== Variants under 175 type certificate ===
As the Cessna 175 Skylark had gained a reputation for poor engine reliability, Cessna attempted to regain sales by rebranding the aircraft as a variant of the 172. Several later 172 variants, generally those with higher-than-standard engine power or gross weight, were built under the 175 type certificate although most did not use the unpopular Continental GO-300-E engine from the 175.
P172D Powermatic
The 175 Skylark was rebranded for the 1963 model year as the P172D Powermatic, continuing where the Skylark left off at 175C. It was powered by a 175 hp (130 kW) Continental GO-300-E with a geared reduction drive powering a constant-speed propeller, increasing cruise speed by 11 mph (18 km/h) over the standard 172D. It differed from the 175C in that it had a cut-down rear fuselage with an "Omni-Vision" rear window and an increased horizontal stabilizer span. A deluxe version was marketed as the Skyhawk Powermatic with a slightly increased top speed. Despite the rebranding, sales did not meet expectations, and the 175 type was discontinued for the civilian market after the 1963 model year. 65 were built, plus 3 by Reims as the FP172D.
R172E
Although the 175 type was discontinued for the civilian market, Cessna continued to produce the aircraft for the United States Armed Forces as the T-41 Mescalero. Introduced in 1967, the R172E was built in T-41B, T-41C, and T-41D variants for the US Army, USAF Academy, and US Military Aid Program, respectively. As the T-41B, the R172E was powered by a fuel-injected 210 hp (157 kW) Continental IO-360-D or -DE driving a constant-speed propeller, and featured a 28V electrical system, jettisonable doors, an openable right front window, a 6.00x6 nose wheel tire and military avionics, but no baggage door. The T-41C was similar to the T-41B, but had a 14V electrical system, a fixed-pitch propeller, civilian avionics, and no rear seats. The T-41D featured a 28V electrical system, four seats, corrosion-proofing, reinforced flaps and ailerons, a baggage door, and provisions for wing-mounted pylons. 255 T-41B, 45 T-41C, and 34 T-41D aircraft were built. While Cessna produced the R172E exclusively for military use, Reims built a civilian model as the FR172E Reims Rocket, with 60 built for the 1968 model year.
R172F
The R172F was similar to the R172E and was built in both T-41C and T-41D variants. 7 (T-41C) and 74 (T-41D) built, plus 85 by Reims as the FR172F Reims Rocket for the 1969 model year.
R172G
The R172G was similar to the R172E/F, differing in that it was certified to be powered by a 210 hp (157 kW) Continental IO-360-C, -D, -CB, or -DB engine. 28 (T-41D) built, plus 80 by Reims as the FR172G Reims Rocket for the 1970 model year.
R172H
The R172H introduced the extended dorsal fillet of the 172L to the T-41D. It was also certified to be powered by a 210 hp (157 kW) Continental IO-360-C, -D, -H, -CB, -DB, or -HB engine. 163 (T-41D) built, plus 125 by Reims as the FR172H Reims Rocket for the 1971 and 1972 model years.
R172J
Certified to be powered by a 210 hp (157 kW) Continental IO-360-H or -HB engine. Only one was built by Cessna, while Reims built 240 as the FR172J Reims Rocket for the 1973 through 1976 model years.
R172K Hawk XP
Following the success of the Reims Rocket in Europe, Cessna decided to once again produce the 175 type for the civilian market as the R172K Hawk XP, beginning with the 1977 model year. It was powered by a derated 195 hp (145 kW) Continental IO-360-K or -KB engine driving a McCauley constant-speed propeller and featured a new cowling with landing lights and an upgraded interior. The Hawk XP II was also available with full IFR avionics. However, owners claimed that the increased performance of the "XP" did not compensate for its increased purchase price and the higher operating costs associated with the larger engine. The aircraft was well accepted for use on floats, however, as the standard 172 is not a strong floatplane, even with only two people on board, while the XP's extra power improves water takeoff performance dramatically. 1 (1973 prototype), 725 (1977), 205 (1978), 270 (1979), 200 (1980), and 55 (1981) built, plus 85 (30 in 1977, 55 in 1978–81) by Reims as the FR172K Reims Rocket for the 1977 through 1981 model years.
172RG Cutlass RG
Cessna introduced a retractable landing gear version of the 172 in 1980, designating it as the 172RG and marketing it as the Cutlass RG.
The Cutlass RG sold for about US$19,000 more than the standard 172 and featured a variable-pitch, constant-speed propeller and a more powerful Lycoming O-360-F1A6 engine of 180 horsepower (134 kW), giving it an optimal cruise speed of 140 knots (260 km/h), compared to 122 knots (226 km/h) for the contemporary 160 horsepower (120 kW) 172N or 172P. It also had more fuel capacity than a standard Skyhawk, 62 US gallons (230 L; 52 imp gal) versus 53 US gallons (200 L; 44 imp gal), giving it greater range and endurance.
The 172RG first flew on August 24, 1976. It was the lowest-priced four-seat retractable-gear airplane on the U.S. market when it was introduced. Although the general aviation aircraft market was contracting at the time, the RG proved popular as an inexpensive flight-school trainer for complex aircraft and commercial pilot ratings under U.S. pilot certification rules, which required demonstrating proficiency in an aircraft with retractable landing gear.
The 172RG uses the same basic landing gear as the heavier R182 Skylane RG, which Cessna touted as a benefit, saying it was a proven design; however, owners have found the landing gear to have higher maintenance requirements than comparable systems from other manufacturers, with several parts prone to rapid wear or cracking. Compared to a standard 172, the 172RG is also easier to inadvertently load with its center of gravity too far aft, which adversely affects the aircraft's longitudinal stability.
While numbered and marketed as a 172, the 172RG was certified on the Cessna 175 type certificate. No significant design updates were made to the 172RG during its five-year model run. 1,191 were produced.
Although it is slower and has less passenger and cargo capacity than popular competing single-engine retractable-gear aircraft such as the Beechcraft Bonanza, the Cutlass RG is praised by owners for its relatively low operating costs, robust and reliable engine, and docile flying qualities comparable to the standard 172, though it carries higher landing gear maintenance and insurance costs than a fixed-gear 172.
=== Special versions ===
J172T Turbo Skyhawk JT-A
Model introduced in July 2014 for 2015 customer deliveries, powered by a 155 hp (116 kW) Continental CD-155 diesel engine installed by the factory under a supplemental type certificate. Initial retail price in 2014 was $435,000 (about $552,000 in 2023 dollars). The model has a top speed of 131 kn (243 km/h) and burns 3 U.S. gallons (11 L; 2.5 imp gal) per hour less fuel than the standard 172. As a result, the model has an 885 nmi (1,639 km) range, an increase of more than 38% over the standard 172. This model is a development of the proposed and then canceled Skyhawk TD. Cessna initially indicated that the JT-A would be made available in 2016.
In reviewing this new model Paul Bertorelli of AVweb said: "I'm sure Cessna will find some sales for the Skyhawk JT-A, but at $420,000, it's hard to see how it will ignite much market expansion just because it's a Cessna. It gives away $170,000 to the near-new Redbird Redhawk conversion which is a lot of change to pay merely for the smell of a new airplane. Diesel engines cost more than twice as much to manufacture as gasoline engines do and although their fuel efficiency gains back some of that investment, if the complete aircraft package is too pricey, the debt service will eat up any savings, making a new aircraft not just unattractive, but unaffordable. I haven't run the numbers on the JT-A yet, but I can tell from previous analysis that there are definite limits."
The model was certified by both EASA and the FAA in June 2017. It was discontinued in May 2018, due to poor sales as a result of the aircraft's high price, which was twice the price of the same aircraft as a diesel conversion. The aircraft remains available as an STC conversion from Continental Motors, Inc.
Electric-powered 172
In July 2010, Cessna announced it was developing an electrically powered 172 as a proof-of-concept in partnership with Bye Energy. In July 2011, Bye Energy, whose name had been changed to Beyond Aviation, announced the prototype had commenced taxi tests on 22 July 2011 and a first flight would follow soon. In 2012, the prototype, using Panacis batteries, engaged in multiple successful test flights. The R&D project was not pursued for production.
=== Canceled model ===
172TD Skyhawk TD
On October 4, 2007, Cessna announced its plan to build a diesel-powered model, to be designated the 172 Skyhawk TD ("Turbo Diesel") starting in mid-2008. The planned engine was to be a Thielert Centurion 2.0, liquid-cooled, two-liter displacement, dual overhead cam, four-cylinder, in-line, turbo-diesel with full authority digital engine control with an output of 155 hp (116 kW) and burning Jet-A fuel. In July 2013, the 172TD model was canceled due to Thielert's bankruptcy. The aircraft was later refined into the Turbo Skyhawk JT-A, which was certified in June 2017 and discontinued in May 2018.
Simulator company Redbird Flight uses the same engine and reconditioned 172 airframes to produce a similar model, the Redbird Redhawk.
Premier Aircraft Sales also announced in February 2014 that it would offer refurbished 172 airframes equipped with the Continental/Thielert Centurion 2.0 diesel engine.
== Military operators ==
A variant of the 172, the T-41 Mescalero, was used as a trainer by the United States Air Force and Army. In addition, the United States Border Patrol uses a fleet of 172s for aerial surveillance along the Mexico–US border.
From 1972 to 2019 the Irish Air Corps used the Reims version for aerial surveillance and monitoring of cash, prisoner and explosive escorts, in addition to army cooperation and pilot training roles.
For T-41 operators, see Cessna T-41 Mescalero.
Angola
FAPA/DAA
Austria
Austrian Air Force 1× 172
Bolivia
Bolivian Air Force 3× 172K
Chile
Chilean Army 18× R172K (retired)
Colombia
Colombian Air Force – ordered to replace Cessna T-41s used for primary training, with deliveries from June 2021.
Ecuador
Ecuadorian Air Force 8× 172F
Ecuadorian Army 1× 172G
Guatemala
Guatemalan Air Force 6× 172K
Honduras
Honduran Air Force 3
Indonesia
Indonesian Air Force
Iraq
Iraqi Air Force
Ireland
Irish Air Corps 8× FR172H, 1× FR172K. Five FR172H remained in service until 2019.
Liberia
Air Reconnaissance Unit 2
Lithuania
Lithuanian Air Force 1
Madagascar
Malagasy Air Force 4× 172M
Nicaragua
Nicaraguan Air Force 7
Pakistan
Pakistan Air Force 4× 172N
Philippines
Philippine Army – 3× 172M in service (PA-101, PA-103 and PA-911)
Philippine Navy – 1× 172F donated by Olympic Aviation in 2007 as PN 330; 1× 172N purchased from Welcome Export Inc. in July 2008 as PN 331; 4× 172S acquired through US Foreign Military Sales, delivered in February 2022
Saudi Arabia
Royal Saudi Air Force 8× F172G, 4× F172H, 4× F172M
Singapore
Republic of Singapore Air Force 8× 172K, delivered 1969 and retired 1972.
Suriname
Suriname Air Force (one in service, offered for sale)
== Accidents and incidents ==
On February 13, 1964, Ken Hubbs, second baseman for the Chicago Cubs and winner of the Rookie of the Year Award and the Gold Glove Award, was killed when the Cessna 172 he was flying crashed near Bird Island in Utah Lake.
On October 23, 1964, David Box, lead singer for The Crickets on their 1960 release version of "Peggy Sue Got Married" and "Don't Cha Know" and later a solo artist, was killed when the Cessna 172 he was aboard crashed in northwest Harris County, Texas, while en route to a performance. Box was the second lead vocalist for The Crickets to die in a plane crash, following Buddy Holly.
On August 31, 1969, American professional boxer Rocky Marciano was killed when the Cessna 172 in which he was a passenger crashed on approach to an airfield outside Newton, Iowa.
On September 25, 1978, a Cessna 172, N7711G, and Pacific Southwest Airlines Flight 182, a Boeing 727, collided over San Diego, California. There were 144 fatalities: two in the Cessna 172, 135 aboard Flight 182, and seven on the ground.
On May 28, 1987, a rented Reims Cessna F172P, registered D-ECJB, was used by German teenage pilot Mathias Rust in an unauthorized flight from Helsinki-Malmi Airport through Soviet airspace to land near the Red Square in Moscow, all without being intercepted by Soviet air defense.
On April 9, 1990, Atlantic Southeast Airlines Flight 2254, an Embraer EMB 120 Brasilia, collided head-on with a Civil Air Patrol Cessna 172, N99501, while en route from Gadsden Municipal Airport to Hartsfield–Jackson Atlanta International Airport. The Cessna crashed, killing two occupants, but the Brasilia made a safe emergency landing.
On January 5, 2002, high school student Charles J. Bishop stole a Cessna 172, N2371N, and intentionally crashed it into the side of the Bank of America Tower in downtown Tampa, Florida, killing only himself and otherwise causing very little damage.
On April 6, 2009, a Cessna 172N, C-GFJH, belonging to Confederation College in Thunder Bay, Ontario, Canada, was stolen by a student who flew it into United States airspace over Lake Superior. The 172 was intercepted and followed by NORAD F-16s, finally landing on Highway 60 in Ellsinore, Missouri, after a seven-hour flight. The student pilot, a Canadian citizen born in Turkey, Adam Dylan Leon, formerly known as Yavuz Berke, suffered from depression and was attempting to commit suicide by being shot down, but was instead arrested shortly after landing. On November 3, 2009, he was sentenced to two years in a US federal prison after pleading guilty to all three charges against him: interstate transportation of a stolen aircraft, importation of a stolen aircraft, and illegal entry into the US. College procedures at the time allowed easy access to aircraft and keys were routinely left in them.
On August 16, 2015, Cessna 172M N1285U collided in midair with a private North American Sabreliner, N442RM, on approach to Brown Field Municipal Airport in California, killing all five people on board the two aircraft. The cause was found to be air traffic control error. This accident, together with another fatal 2015 mid-air collision under similar circumstances, prompted the U.S. National Transportation Safety Board to recommend that the FAA more strongly emphasize scenario-based training for controllers.
On November 11, 2021, Glen de Vries, co-founder of Medidata Solutions and Blue Origin space tourist, died in the crash of a 172 near Hampton Township, New Jersey.
On March 5, 2024, a 172M of 99 Flying School, 5Y-NNJ, crashed after colliding with Safarilink Aviation Flight 053, a de Havilland Canada Dash 8, near Wilson Airport over Nairobi National Park, killing the instructor and student pilot aboard the 172. The Safarilink flight landed safely with no injuries to the 44 people on board.
== Specifications (172R) ==
Data from Cessna, FAA type certificate

General characteristics
Crew: one
Capacity: three passengers, 120 lb (54 kg) of baggage
Length: 27 ft 2 in (8.28 m)
Wingspan: 36 ft 1 in (11.00 m)
Height: 8 ft 11 in (2.72 m)
Wing area: 174 sq ft (16.2 m2)
Aspect ratio: 7.32
Airfoil: modified NACA 2412
Empty weight: 1,691 lb (767 kg)
Gross weight: 2,450 lb (1,111 kg)
Fuel capacity: 56 US gal (210 L) (52 US gal (200 L) usable)
Powerplant: 1 × Lycoming IO-360-L2A four cylinder, horizontally opposed aircraft engine, 160 hp (120 kW)
Propellers: 2-bladed metal, fixed pitch McCauley Model 1C235/LFA7570, 6 ft 3 in (1.91 m) diameter (maximum)
Performance
Cruise speed: 122 kn (140 mph, 226 km/h)
Stall speed: 47 kn (54 mph, 87 km/h) (power off, flaps down)
Never exceed speed: 163 kn (188 mph, 302 km/h) (IAS)
Range: 696 nmi (801 mi, 1,289 km) with 45 minute reserve, 55% power, at 12,000 feet (3,700 m)
Service ceiling: 13,500 ft (4,100 m)
Rate of climb: 721 ft/min (3.66 m/s)
Wing loading: 14.1 lb/sq ft (68.6 kg/m2)
Avionics
Optional Garmin G1000 primary flight display
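The weight figures in the table above imply the aircraft's useful load and, with the common planning approximation of 6 lb per US gallon for avgas (an assumption, not a value from the table), the payload available with full fuel. A quick sketch of that arithmetic:

```python
# Weight-and-balance arithmetic for the 172R figures listed above.
# The 6 lb/US gal avgas density is the usual rule of thumb, not a
# number from the specification table.
gross_weight = 2450        # lb, maximum takeoff weight
empty_weight = 1691        # lb
usable_fuel_gal = 52       # US gallons usable
wing_area = 174            # sq ft

fuel_weight = usable_fuel_gal * 6          # 312 lb
useful_load = gross_weight - empty_weight  # 759 lb
payload_full_fuel = useful_load - fuel_weight

print(f"Useful load: {useful_load} lb")                    # 759 lb
print(f"Payload with full fuel: {payload_full_fuel} lb")   # 447 lb
print(f"Wing loading: {gross_weight / wing_area:.1f} lb/sq ft")  # 14.1
```

The computed wing loading matches the 14.1 lb/sq ft quoted in the table.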
== See also ==
1955 in aviation (first flight)
Dwane Wallace
Related development
Cessna 150
Cessna 152
Cessna 170
Cessna 175 Skylark
Cessna 177 Cardinal
Cessna T-41 Mescalero
Aircraft of comparable role, configuration, and era
Aero Commander 100
Beechcraft Musketeer
Diamond DA40
Grumman Cheetah
Piper Cherokee
Vulcanair V1.0
Yak-12
Related lists
List of aircraft
List of civil aircraft
List of most produced aircraft
== Notes ==
== References ==
=== Bibliography ===
Andrade, John (1982). Militair 1982. London: Aviation Press Limited. ISBN 0907898017.
Fontanellaz, Adrien; Cooper, Tom; Matos, Jose Augusto (2020). War of Intervention in Angola, Volume 3: Angolan and Cuban Air Forces, 1975–1985. Warwick, UK: Helion & Company Publishing. ISBN 978-1913118617.
Hagedorn, Daniel P. (1993). Central American and Caribbean Air Forces. Tonbridge, Kent, UK: Air-Britain (Historians) Ltd. ISBN 0851302106.
Jackson, Paul (2003). Jane's All The World's Aircraft 2003–2004. Coulsdon, UK: Jane's Information Group. ISBN 0710625375.
Phillips, Edward H. (1986). Wings of Cessna: Model 120 to the Citation III. Eagan, Minnesota, US: Flying Books. ISBN 0-911139-05-2.
Simpson, R W (1991). Airlife's General Aviation. Shrewsbury, England: Airlife Publishing. ISBN 185310194X.
Simpson, Rod; Longley, Pete; Swan, Robert (2022). The General Aviation Handbook: A Guide to Millennial General Aviation Manufacturers and their Aircraft. Tonbridge, Kent, UK: Air-Britain (Trading) Limited. ISBN 978-0-85130-562-2.
== External links ==
Official website
Complete specifications and data for each Cessna 172 model year
Charles Kingsford Smith

Sir Charles Edward Kingsford Smith (9 February 1897 – 8 November 1935), nicknamed Smithy, was an Australian aviation pioneer. He piloted the first transpacific flight and the first flight between Australia and New Zealand.
Kingsford Smith was born in Brisbane. He grew up in Sydney, leaving school at the age of 16 and becoming an engineering apprentice. He joined the Australian Army in 1915 and was a motorcycle despatch rider on the Gallipoli campaign. He later transferred to the Royal Flying Corps and was awarded the Military Cross in 1917 after being shot down. After the war's end, Kingsford Smith worked as a barnstormer in England and the United States before returning to Australia in 1921. He subsequently joined West Australian Airways as one of the country's first commercial pilots.
In 1928, Kingsford Smith completed the first transpacific flight, a three-leg journey from California to Brisbane via Hawaii and Fiji. He and his co-pilot Charles Ulm became celebrities, together with crew members James Warner and Harry Lyon. In the same year he and Ulm completed the first non-stop flight across Australia from Melbourne to Perth and the first non-stop flight from Australia to New Zealand. They subsequently established Australian National Airways, but the airline and Kingsford Smith's other business ventures failed to achieve commercial success. He continued to participate in air races and to attempt other aviation feats.
In 1935, Kingsford Smith and his co-pilot Tommy Pethybridge disappeared over the Andaman Sea while attempting to break the Australia–England speed record. He was fêted as a national hero during the Great Depression and received numerous honours during his lifetime. After his death Sydney's primary airport was named in his memory and he was featured on the Australian twenty-dollar note for several decades.
== Early and personal life ==
Charles Edward Kingsford Smith was born on 9 February 1897 at Riverview Terrace, Hamilton in Brisbane, Colony of Queensland, the son of William Charles Smith and his wife Catherine Mary (née Kingsford, daughter of Richard Ash Kingsford, a Member of the Queensland Legislative Assembly and mayor in both Brisbane and Cairns municipal councils). His birth was officially registered and announced in the newspapers under the surname Smith, which his family used at that time. The earliest use of the surname Kingsford Smith appears to be by his older brother Richard Harold Kingsford Smith, who used the name at least informally from 1901, although he married in New South Wales under the surname Smith in 1903.
In 1903, his parents moved to Canada where they adopted the surname Kingsford Smith. They returned to Sydney in 1907.
Kingsford Smith first attended school in Vancouver, Canada. From 1909 to 1911, he was enrolled at St Andrew's Cathedral School, Sydney, where he was a chorister in the school's cathedral choir, and then at Sydney Technical High School, before becoming an engineering apprentice with the Colonial Sugar Refining Company at 16.
Kingsford Smith married Thelma Eileen Hope Corboy in 1923. They divorced in 1929. He married Mary Powell in December 1930.
Shortly after his second marriage he joined the New Guard, a radical monarchist, anti-communist, and fascist-inspired organisation.
== World War I and early flying experience ==
In 1915, he enlisted for duty in the 1st AIF (Australian Army) and served at Gallipoli. Initially, he performed duty as a motorcycle dispatch rider, before transferring to the Royal Flying Corps, earning his pilot's wings in 1917.
In August 1917, while serving with No. 23 Squadron, Kingsford Smith was shot down and received injuries which required amputation of two toes. He was awarded the Military Cross for his gallantry in battle. As his recovery was predicted to be lengthy, Kingsford Smith was permitted to take leave in Australia where he visited his parents. Returning to England, Kingsford Smith was assigned to instructor duties and promoted to Captain.
On 1 April 1918, along with other members of the Royal Flying Corps, Kingsford Smith was transferred to the newly established Royal Air Force. After being demobilised in England in early 1919, he joined Tasmanian Cyril Maddocks to form Kingsford Smith, Maddocks Aeros Ltd, which flew a joy-riding service mainly in the north of England during the summer of 1919, initially using surplus DH.6 trainers and later surplus B.E.2s. Kingsford Smith then worked as a barnstormer in the United States before returning to Australia in 1921.
Applying for a commercial pilot's licence on 2 June 1921, he gave his name as "Charles Edward Kingsford-Smith".
The Cowra Free Press told how Kingsford Smith flew under the Lachlan road bridge at Cowra, New South Wales, with local motoring identity Ken Richards. It went on to recount how Kingsford Smith was preparing to also fly under the nearby railway bridge, but was warned by Richards of telegraph wires just in time to prevent a catastrophe. Richards, they added, was a mate of Kingsford Smith, and had flown with him several times in France. In this version of events, the feat was accomplished "just after the Armistice" (11 November 1918), but may have been in July 1921, when Kingsford Smith was hosting "joy flights" there, in an aircraft owned by the Diggers' Cooperative Aviation Company. Later accounts have embellished the story.
He became one of Australia's first airline pilots when he was chosen by Norman Brearley to fly for the newly formed West Australian Airways, and piloted their Bristol Type 28 Coupe Tourers plane (G-AUDF) that made bi-weekly mail drops to the astronomers during the 1922 Solar Eclipse expedition at Wallal, Western Australia.
Around this time he began to plan his record-breaking flight across the Pacific.
== 1928 Trans-Pacific flight ==
In 1928, Kingsford Smith and Charles Ulm arrived in the United States and began to search for an aircraft. Famed Australian polar explorer Sir Hubert Wilkins sold them a Fokker F.VII/3m monoplane, which they named the Southern Cross.
At 8:54 a.m. on 31 May 1928, Kingsford Smith and his four-man crew left Oakland, California, to attempt the first trans-Pacific flight to Australia. The flight was in three stages. The first, from Oakland to Wheeler Army Airfield, Hawaii, was 3,870 kilometres (2,400 mi), taking an uneventful 27 hours 25 minutes (87.54 mph). For the second leg they took off from Barking Sands on Mana, Kauai, since the runway at Wheeler was not long enough, and headed for Suva, Fiji, 5,077 kilometres (3,155 mi) away, taking 34 hours 30 minutes (91.45 mph). This was the most demanding portion of the journey, as they flew through a massive lightning storm near the equator. The third leg was the shortest, 2,709 kilometres (1,683 mi) in 20 hours (84.15 mph), crossing the Australian coastline near Ballina before turning north to fly 170 kilometres (110 mi) to Brisbane, where they landed at 10:50 a.m. on 9 June. The total flight distance was approximately 11,566 kilometres (7,187 mi). Kingsford Smith was met by a huge crowd of 26,000 at Eagle Farm Airport and was welcomed as a hero. Charles Ulm was the relief pilot; the other crewmen were Americans James Warner, the radio operator, and Harry Lyon, the navigator and engineer.
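The per-leg average speeds quoted in parentheses above are simply distance divided by elapsed time; a quick sanity check using the figures from the text:

```python
# Average ground speed for each leg of the 1928 trans-Pacific flight,
# computed as distance / time from the figures quoted in the text.
legs = [
    ("Oakland to Hawaii", 2400, 27 + 25 / 60),  # statute miles, hours
    ("Hawaii to Suva",    3155, 34 + 30 / 60),
    ("Suva to Brisbane",  1683, 20.0),
]
for name, miles, hours in legs:
    print(f"{name}: {miles / hours:.2f} mph")
# Oakland to Hawaii: 87.54 mph
# Hawaii to Suva: 91.45 mph
# Suva to Brisbane: 84.15 mph
```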
The National Film and Sound Archive of Australia has a film biography of Kingsford Smith, called An Airman Remembers, and recordings of Kingsford Smith and Ulm talking about the journey.
A stamp sheet and stamps, featuring the Australian aviators Kingsford Smith and Ulm, were released by Australia Post in 1978, commemorating the 50th anniversary of the flight.
A young New Zealander named Jean Batten attended a dinner in Australia featuring Kingsford Smith after the trans-Pacific flight and told him "I'm going to learn to fly." She later convinced him to take her for a flight in the Southern Cross and went on to become a record-setting aviator, following his example instead of his advice ("Don't attempt to break men's records – and don't fly at night", he told her in 1928 and remembered wryly later).
== 1928 Trans-Tasman flight ==
After making the first non-stop flight across Australia from Point Cook near Melbourne to Perth in Western Australia in August 1928, Kingsford Smith and Ulm registered themselves as Australian National Airways (see below). They then decided to attempt the Tasman Sea crossing to New Zealand not only because it had not yet been done, but also in the hope the Australian Government would grant Australian National Airways a subsidised contract to carry scheduled mail regularly. The Tasman had remained unflown after the failure of the first attempt in January 1928, when New Zealanders John Moncrieff and George Hood had vanished without a trace.
Kingsford Smith's flight was planned for take off from Richmond, near Sydney, on Sunday 2 September 1928, with a scheduled landing around 9:00 a.m. on 3 September at Wigram Aerodrome, near Christchurch, the principal city in the South Island of New Zealand. This plan drew a storm of protest from New Zealand churchmen about the "sanctity of the Sabbath being set at naught."
The mayor of Christchurch supported the churchmen and cabled a protest to Kingsford Smith. As it happened, unfavourable weather developed over the Tasman and the flight was deferred, so it is not known whether or how Kingsford Smith would have heeded the cable.
Accompanied by Ulm, navigator Harold Arthur Litchfield, and radio operator Thomas H. McWilliams, a New Zealander made available by the New Zealand Government, Kingsford Smith left Richmond on the evening of 10 September, planning to fly overnight to a daylight landing after a flight of about 14 hours. The planned route of 2,600 kilometres (1,600 mi) was only just over half the distance between Hawaii and Fiji. After a stormy flight, at times through icing conditions, the Southern Cross made landfall in much improved weather near Cook Strait, the passage between New Zealand's two main islands. At an estimated 241 kilometres (150 mi) out from New Zealand, the crew dropped a wreath in memory of the two New Zealanders who had disappeared during their attempt to cross the Tasman Sea earlier that year.
There was a tremendous welcome in Christchurch, where the Southern Cross landed at 9:22 a.m. after a flight of 14 hours and 25 minutes. About 30,000 people made their way to Wigram, including many students from state schools, who were given the day off, and public servants, who were granted leave until 11 a.m. The event was also broadcast live on radio.
While the New Zealand Air Force overhauled the Southern Cross free of charge, Kingsford Smith and Ulm were taken on a triumphant tour of New Zealand, flying in Bristol Fighters.
The return to Sydney was made from Blenheim, a small city at the north of the South Island. Hampered by fog, severe weather and a minor navigational error, the flight to Richmond took over 23 hours; on touchdown the aircraft had enough fuel for only another 10 minutes flying.
== Australian National Airways ==
In partnership with Ulm, Kingsford Smith established Australian National Airways in 1929. The passenger, mail and freight service commenced operations flying between Sydney, Brisbane and Melbourne, in January 1930, with five aircraft but closed after crashes in March and November the next year.
== Later flights, the MacRobertson Air Race, the 1934 Pacific Flight ==
After collecting his 'old bus', the Southern Cross, from the Fokker aircraft company in the Netherlands, where it had been overhauled, in June 1930 he achieved an east–west crossing of the Atlantic from Ireland to Newfoundland in 31½ hours, having taken off from Portmarnock Beach (The Velvet Strand), just north of Dublin. New York gave him a tumultuous welcome. The Southern Cross continued on to Oakland, California, completing the circumnavigation of the world begun in 1928. Later in 1930 he competed in an England-to-Australia air race and, flying solo, won the event in 13 days, arriving in Sydney on 22 October 1930.
In 1931, he purchased an Avro Avian he named the Southern Cross Minor to attempt an Australia-to-England flight. He later sold the aircraft to Captain W. N. "Bill" Lancaster, who vanished on 11 April 1933 over the Sahara Desert; Lancaster's remains were not found until 1962. The wreck of the Southern Cross Minor is now in the Queensland Museum. In the early 1930s, Kingsford Smith began developing the Southern Cross automobile as a side project.
In 1933, Seven Mile Beach, New South Wales, was used by Kingsford Smith as the runway for the first commercial flight between Australia and New Zealand.
In 1934, he purchased a Lockheed Altair, the Lady Southern Cross, with the intention of competing in the MacRobertson Air Race.
== Disappearance and death ==
Kingsford Smith and co-pilot John Thompson 'Tommy' Pethybridge were flying the Lady Southern Cross overnight from Allahabad (modern Prayagraj), India, to Singapore, as part of their attempt to break the England–Australia speed record held by C. W. A. Scott and Tom Campbell Black, when they disappeared over the Andaman Sea in the early hours of 8 November 1935. Aviator Jimmy Melrose claimed to have seen the Lady Southern Cross fighting a storm 150 miles (240 km) from shore and 200 feet (61 m) over the sea, with fire coming from its exhaust. Despite a 74-hour search over the Bay of Bengal by British pilot Eric Stanley Greenwood, OBE, their bodies were never recovered.
Eighteen months later, Burmese fishermen found an undercarriage leg and wheel, with its tyre still inflated, which had been washed ashore at Aye Island in the Gulf of Martaban, 3 km (2 mi) off the southeast coastline of Burma, some 137 km (85 mi) south of Mottama (formerly known as Martaban). Lockheed confirmed the undercarriage leg to be from the Lady Southern Cross. Botanists who examined the weeds clinging to the undercarriage leg estimated that the aircraft lay not far from the island at a depth of approximately 15 fathoms (90 ft; 27 m). The undercarriage leg is now on public display at the Powerhouse Museum in Sydney, Australia.
In 2009, filmmaker and explorer Damien Lay stated he was certain he had found the Lady Southern Cross. The location of the claimed find was widely misreported as "in the Bay of Bengal". However, the 2009 search was in fact at the same location where the landing gear had been found in 1937, at Aye Island in the Andaman Sea.
Kingsford Smith was survived by his wife, Mary, Lady Kingsford Smith, and their three-year-old son Charles Jnr. Kingsford Smith's autobiography, My Flying Life, was published posthumously in 1937 and became a best-seller.
Following The Joint Australian Myanmar Lady Southern Cross Search Expedition II (LSCSEII) in 2009, Lay conducted a total of ten further expeditions to Myanmar to recover wreckage from the site. In 2011, Lay claimed to have found the wreckage, but that claim has been widely disputed, and no evidence confirming the claim has been forthcoming. The location of the site, approximately 1.8 miles off the coast of Myanmar, has never been publicly released.
Lay has worked closely with both the Kingsford Smith and Pethybridge families since 2005. The privately funded project was supported by the government and people of Myanmar. In December 2017 Lay was still searching for parts of the Lady Southern Cross.
== Honours and legacy ==
In 1930, Kingsford Smith was the inaugural recipient of the Segrave Trophy, awarded for "Outstanding Skill, Courage and Initiative on Land, Water [or] in the Air".
Kingsford Smith was knighted in the 1932 King's Birthday Honours List as a Knight Bachelor. He received the accolade on 3 June 1932 from His Excellency Sir Isaac Isaacs, the Governor-General of Australia, for services to aviation and later was appointed honorary Air Commodore of the Royal Australian Air Force.
In 1986, Kingsford Smith was inducted into the International Air & Space Hall of Fame at the San Diego Air & Space Museum.
The major airport of Sydney, located in the suburb of Mascot, was named Kingsford Smith International Airport in his honour. The federal electorate surrounding the airport is named the Division of Kingsford Smith, and includes the suburb of Kingsford.
His most famous aircraft, the Southern Cross, is now preserved and displayed in a purpose-built memorial to Kingsford Smith near the International Terminal at Brisbane Airport. Kingsford Smith sold the plane to the Australian Government in 1935 for £3000 so it could be put on permanent display for the public. The plane was carefully stored for many years before the current memorial was built.
Kingsford Smith Drive in Brisbane passes through the suburb of his birth, Hamilton. Another Kingsford Smith Drive, which is located in the Canberra district of Belconnen, intersects with Southern Cross Drive.
Opened in 2009, Kingsford Smith School in the Canberra suburb of Holt was named after the famous aviator, as was Sir Charles Kingsford-Smith Elementary School in Vancouver, British Columbia, Canada.
He was pictured on the Australian $20 paper note (in circulation from 1966 until 1994, when the $20 polymer note was introduced to replace it), to honour his contribution to aviation and his accomplishments during his life. He was also depicted on the Australian one-dollar coin of 1997, the centenary of his birth.
Albert Park in Suva, where he landed on the trans-Pacific flight, now contains the Kingsford Smith Pavilion.
A memorial stands at Seven Mile Beach in New South Wales commemorating the first commercial flight to New Zealand.
Qantas named its sixth Airbus A380 (VH-OQF) after Kingsford Smith.
KLM named one of its Boeing 747s (PH-BUM) after Kingsford Smith.
A trans-Encke propeller moonlet of Saturn, an inferred minor body, is named after him.
Australian aviation enthusiast Austin Byrne was part of the large crowd at Sydney's Mascot Aerodrome in June 1928 to welcome the Southern Cross and its crew following their successful trans-Pacific flight. Witnessing this event inspired Byrne to make a scale model of the Southern Cross to give to Kingsford Smith. After the aviator's disappearance, Byrne continued to expand and enhance his tribute with paintings, photographs, documents, and artworks he created, designed or commissioned. Between 1930 and his death in 1993, Byrne devoted his life to creating and touring his Southern Cross Memorial.
== In popular culture ==
Kingsford Smith made a cameo appearance as himself in the feature film Splendid Fellows (1934).
A documentary, The Old Bus (1934), was made about his life.
The 1946 Australian film Smithy was based on his life, with Ron Randell as Kingsford Smith and John Tate as Ulm.
His life was dramatised in the 1966 radio play Boy on an Old Bus by Richard Lane.
The 1985 Australian television mini-series A Thousand Skies has John Walton as Kingsford Smith and Andrew Clarke as Ulm.
New Zealand author and documentarian Ian Mackersey's 1998 biography Smithy: The Life of Sir Charles Kingsford Smith (hardback ISBN 0-316-64308-4; paperback ISBN 0-7515-2656-8).
Bill Bryson details Kingsford Smith's life in his book Down Under.
Australian author Peter FitzSimons's book Charles Kingsford Smith and Those Magnificent Men explores Smithy's life and aviation history (Harper Collins, Australia, 2009; ISBN 978-0-7322-8819-8).
The songs "Kingsford Smith, Aussie is Proud of You" and "Smithy" (1928) by Len Maurice.
The songs "Smithy" and "Heroes of the Air" (1928) by Fred Moore.
The songs "Smithy The King of the Air" and "The Southern Cross Monologue" by Clement Williams.
Kingsford Smith is depicted on the cover art of the Icehouse album Code Blue, which includes their song "Charlie's Sky".
The song "Charles Kingsford Smith" by Don McGlashan is on his Lucky Star album.
Kingsford Smith's disappearance was the topic of series 1, episode 22 of the TV series Vanishings! on Story Television, titled "Disappearance of Charles Kingsford Smith", first aired on 25 October 2003.
In a comic book story produced in Australia, The Phantom finds the wreckage of the Lady Southern Cross in Burma ("The Search for Byron", The Phantom #1131, published in 1996).
== See also ==
History of aviation
List of firsts in aviation
List of people who disappeared mysteriously at sea
== Notes ==
An aircraft similar to the Southern Cross, the Bird of Paradise, had made the first flight over (though not across) the Pacific, from California to Hawaii for the United States Army Air Corps, in 1927.
== References ==
== Sources ==
Grant, James Ritchie. "Anti-Clockwise: Australia the Wrong Way". Air Enthusiast, No. 82, July–August 1999, pp. 60–63. ISSN 0143-5450
Howard, Frederick (1983). "Kingsford Smith, Sir Charles Edward (1897–1935)". Australian Dictionary of Biography. Canberra: National Centre of Biography, Australian National University. ISBN 978-0-522-84459-7. ISSN 1833-7538. OCLC 70677943. Retrieved 9 March 2009.
Serle, Percival (1949). "Smith, Charles Edward Kingsford". Dictionary of Australian Biography. Sydney: Angus & Robertson. Retrieved 16 October 2008.
== External links ==
The Pioneers – Charles Kingsford Smith
Charles Kingsford Smith biography Ace Pilots
Sir Charles Kingsford Smith Australian Heroes
Charles Kingsford Smith about the Tasman flight
Charles Kingsford Smith (includes photos of Sir Charles Kingsford Smith and his aeroplane, the Southern Cross)
Sir Charles Kingsford Smith Sound Recordings and Newsreels
Photographs from an album kept by Charles Ulm's wife, Mary, including many of Charles Kingsford Smith: National Museum of Australia Archived 4 October 2018 at the Wayback Machine
Austin Byrne and the Kingsford Smith Southern Cross Memorial
"Our Heroes of the Air" (audio recordings of Kingsford Smith and Ulm on the National Film and Sound Archive of Australia's website)
"Knight of the Sky – Sir Charles Kingsford Smith - Stories from the Archives". Blogs. Queensland State Archives. 7 November 2021. |
Charles Lindbergh | Charles Augustus Lindbergh (February 4, 1902 – August 26, 1974) was an American aviator, military officer, and author. On May 20–21, 1927, he made the first nonstop flight from New York to Paris, a distance of 3,600 miles (5,800 km). His aircraft, the Spirit of St. Louis, was built to compete for the $25,000 Orteig Prize for the first flight between the two cities. Although not the first transatlantic flight, it was the longest at the time by nearly 2,000 miles (3,200 km), the first solo transatlantic flight, and set a new flight distance world record. The achievement garnered Lindbergh worldwide fame and stands as one of the most consequential flights in history, signaling a new era of air transportation between parts of the globe.
Raised in both Little Falls, Minnesota and Washington, D.C., Lindbergh was the son of U.S. Congressman Charles August Lindbergh. He became a U.S. Army Air Service cadet in 1924. The next year, he was hired as a U.S. Air Mail pilot in the Greater St. Louis area, where he began to prepare for crossing the Atlantic. For his 1927 flight, President Calvin Coolidge presented him both the Distinguished Flying Cross and Medal of Honor, the highest U.S. military award. He was promoted to colonel in the U.S. Army Air Corps Reserve and also earned the highest French order of merit, the Legion of Honor. His achievement spurred significant global interest in flight training, commercial aviation and air mail, which revolutionized the aviation industry worldwide (a phenomenon dubbed the "Lindbergh Boom"), and he spent much time promoting these industries. Time magazine named Lindbergh its first Man of the Year for 1927, President Herbert Hoover appointed him to the National Advisory Committee for Aeronautics in 1929, and he received the Congressional Gold Medal in 1930. In 1931, he and French surgeon Alexis Carrel began work on inventing the first perfusion pump, a device credited with making future heart surgeries and organ transplantation possible.
On March 1, 1932, Lindbergh's first-born infant child, Charles Jr., was kidnapped and murdered in what the American media called the "crime of the century". The case prompted the U.S. to establish kidnapping as a federal crime if a kidnapper crosses state lines with a victim. By late 1935, public hysteria from the case drove the Lindbergh family abroad to Europe, from where they returned in 1939. In the months before the United States entered World War II, Lindbergh's non-interventionist stance and statements about Jews and race led many to believe he was a Nazi sympathizer. Lindbergh never publicly stated support for the Nazis and condemned them several times in both his public speeches and personal diary, but associated with them on numerous occasions in the 1930s. He also supported the isolationist America First Committee and resigned from the U.S. Army Air Corps in April 1941 after President Franklin Roosevelt publicly rebuked him. In September 1941, Lindbergh gave a significant address, titled "Speech on Neutrality", outlining his position and arguments against greater American involvement in the war.
Following the Japanese attack on Pearl Harbor and German declaration of war against the U.S., Lindbergh avidly supported the American war effort but was rejected for active duty, as Roosevelt refused to restore his colonel's commission. Instead he flew 50 combat missions in the Pacific Theater as a civilian consultant and was unofficially credited with shooting down an enemy aircraft. In 1954, President Dwight Eisenhower restored his commission and promoted him to brigadier general in the U.S. Air Force Reserve. In his later years, he became a Pulitzer Prize-winning author, international explorer and environmentalist, helping to establish national parks in the U.S. and protect certain endangered species and tribal people in both the Philippines and east Africa. After retiring in Maui, he died of cancer in 1974.
== Early life ==
=== Early childhood ===
Lindbergh was born in Detroit, Michigan, on February 4, 1902, and spent most of his childhood in Little Falls, Minnesota, and Washington, D.C. He was the only child of Charles August Lindbergh (birth name Carl Månsson), who had emigrated from Sweden to Melrose, Minnesota, as an infant, and Evangeline Lodge Land Lindbergh of Detroit. Lindbergh had three elder paternal half-sisters: Lillian, Edith, and Eva. The couple separated in 1909 when Lindbergh was seven years old.
His father, a U.S. Congressman from 1907 to 1917, was one of the few congressmen to oppose the entry of the U.S. into World War I (although his congressional term ended one month before the House of Representatives voted to declare war on Germany). Lindbergh's mother was a chemistry teacher at Cass Technical High School in Detroit and later at Little Falls High School, from which her son graduated on June 5, 1918. Lindbergh attended more than a dozen other schools from Washington, D.C., to California during his childhood and teenage years (none for more than two years), including the Force School and Sidwell Friends School while living in Washington with his father, and Redondo Union High School in Redondo Beach, California, while living there with his mother. Although he enrolled in the College of Engineering at the University of Wisconsin–Madison in late 1920, Lindbergh dropped out in the middle of his sophomore year.
=== Early aviation career ===
From an early age, Lindbergh had exhibited an interest in the mechanics of motorized transportation, including his family's Saxon Six automobile, and later his Excelsior motorbike. By the time that he started college as a mechanical engineering student, he had also become fascinated with flying, though he "had never been close enough to a plane to touch it". After quitting college in February 1922, Lindbergh enrolled at the Nebraska Aircraft Corporation's flying school in Lincoln and flew for the first time on April 9 as a passenger in a two-seat Lincoln Standard "Tourabout" biplane trainer piloted by Otto Timm.
A few days later, Lindbergh took his first formal flying lesson in that same aircraft, though he was never permitted to solo because he could not afford to post the requisite damage bond. To gain flight experience and earn money for further instruction, Lindbergh left Lincoln in June to spend the next few months barnstorming across Nebraska, Kansas, Colorado, Wyoming, and Montana as a wing walker and parachutist. He also briefly worked as an airplane mechanic at the Billings, Montana, municipal airport.
Lindbergh left flying with the onset of winter and returned to his father's home in Minnesota. His return to the air and his first solo flight did not come until half a year later in May 1923 at Souther Field in Americus, Georgia, a former Army flight-training field, where he bought a World War I surplus Curtiss JN-4 "Jenny" biplane for $500. Though Lindbergh had not touched an airplane in more than six months, he had already secretly decided that he was ready to take to the air by himself. After a half-hour of dual time with a pilot who was visiting the field, Lindbergh flew solo for the first time in the Jenny. After spending another week or so at the field to "practice" (thereby acquiring five hours of "pilot in command" time), Lindbergh took off from Americus for Montgomery, Alabama, some 140 miles (230 km) to the west, for his first solo cross-country flight. He went on to spend much of the remainder of 1923 engaged in almost nonstop barnstorming under the name "Daredevil Lindbergh", this time flying in his "own ship" as the pilot. A few weeks after leaving Americus, he made his first night flight near Lake Village, Arkansas.
While Lindbergh was barnstorming in Lone Rock, Wisconsin, on two occasions he flew a local physician across the Wisconsin River to emergency calls that were otherwise unreachable because of flooding. He broke his propeller several times while landing, and on June 3, 1923 he was grounded for a week when he ran into a ditch in Glencoe, Minnesota, while flying his father—then running for the U.S. Senate—to a campaign stop. In October, Lindbergh flew his Jenny to Iowa, where he sold it to a flying student. He returned to Lincoln by train, where he joined Leon Klink and continued to barnstorm through the South for the next few months in Klink's Curtiss JN-4C "Canuck" (the Canadian version of the Jenny). Lindbergh also "cracked up" this aircraft once when his engine failed shortly after takeoff in Pensacola, Florida, but again he managed to repair the damage himself.
Following a few months of barnstorming through the South, the two pilots parted company in San Antonio, Texas, where Lindbergh reported to Brooks Field on March 19, 1924 to begin a year of military flight training with the United States Army Air Service there (and later at nearby Kelly Field). Lindbergh had his most serious flying accident on March 5, 1925, eight days before graduation, when a mid-air collision with another Army S.E.5 during aerial combat maneuvers forced him to bail out. Only 18 of the 104 cadets who started flight training a year earlier remained when Lindbergh graduated first overall in his class in March 1925, thereby earning his Army pilot's wings and a commission as a second lieutenant in the Air Service Reserve Corps.
Lindbergh later said that this year was critical to his development as both a focused, goal-oriented individual and as an aviator. The Army did not need additional active-duty pilots, however, so following graduation, Lindbergh returned to civilian aviation as a barnstormer and flight instructor, although as a reserve officer he also continued to do some part-time military flying by joining the 110th Observation Squadron, 35th Division, Missouri National Guard, in St. Louis. He was promoted to first lieutenant on December 7, 1925, and to captain in July 1926.
=== Air mail pilot ===
In October 1925, Lindbergh was hired by the Robertson Aircraft Corporation (RAC) at the Lambert-St. Louis Flying Field in Anglum, Missouri, (where he had been working as a flight instructor) to lay out and then serve as chief pilot for the newly designated 278-mile (447 km) Contract Air Mail Route #2 (CAM-2) to provide service between St. Louis and Chicago (Maywood Field) with intermediate stops in Springfield and Peoria, Illinois. Lindbergh and three other RAC pilots flew the mail over CAM-2 in a fleet of four modified war-surplus de Havilland DH-4s.
On April 13, 1926, Lindbergh executed the United States Post Office Department's Oath of Mail Messengers, and two days later he opened service on the new route. On two occasions, combinations of bad weather, equipment failure, and fuel exhaustion forced him to bail out on night approach to Chicago; both times he reached the ground without serious injury. In mid-February 1927 he left for San Diego, California, to oversee design and construction of the Spirit of St. Louis.
== New York–Paris flight ==
=== Orteig Prize ===
In 1919, British aviators John Alcock and Arthur Brown won the Daily Mail prize for the first nonstop transatlantic flight. They left St. John's, Newfoundland, on June 14, 1919, and arrived in Clifden, County Galway, Ireland the following day.
Around the same time, French-born New York hotelier Raymond Orteig was approached by Augustus Post, secretary of the Aero Club of America, to put up a $25,000 (equivalent to $453,000 in 2024) award for the first successful nonstop transatlantic flight specifically between New York City and Paris within five years after its establishment. When that time limit lapsed in 1924 without a serious attempt, Orteig renewed the offer for another five years, this time attracting a number of well-known, highly experienced, and well-financed contenders—none of whom were successful. On September 21, 1926, World War I French flying ace René Fonck's Sikorsky S-35 crashed on takeoff from Roosevelt Field in New York, killing crew members Jacob Islamoff and Charles Clavier. U.S. Naval aviators Noel Davis and Stanton H. Wooster were killed at Langley Field, Virginia, on April 26, 1927, while testing their Keystone Pathfinder. On May 8 French war heroes Charles Nungesser and François Coli departed Paris – Le Bourget Airport in the Levasseur PL 8 seaplane L'Oiseau Blanc; they disappeared somewhere in the Atlantic after last being seen crossing the west coast of Ireland.
The specific event that inspired Lindbergh to attempt the flight was René Fonck's September 1926 failure. Reading of Fonck's crash, Lindbergh characteristically decided that "a nonstop flight between New York and Paris would be less hazardous than flying mail for a single winter." He soon "discussed his idea with St. Louis businessmen and aviation supporters" and began to gather resources, making "several inquiries" with airplane manufacturers.
=== Spirit of St. Louis ===
Financing the historic flight was a challenge due to Lindbergh's obscurity, but two St. Louis businessmen eventually obtained a $15,000 bank loan. Lindbergh contributed $2,000 (equivalent to $36,000 in 2024) of his own money from his salary as an air mail pilot and another $1,000 was donated by RAC. The total of $18,000 was far less than what was available to Lindbergh's rivals.
The group tried to buy an "off-the-peg" single or multiengine monoplane from Wright Aeronautical, then Travel Air, and finally the newly formed Columbia Aircraft Corporation, but all insisted on selecting the pilot as a condition of sale. Finally the much smaller Ryan Airline Company (later called the Ryan Aeronautical Company) of San Diego agreed to design and build a custom monoplane for $10,580, and on February 25, 1927, a deal was formally closed. Dubbed the Spirit of St. Louis, the fabric-covered, single-seat, single-engine high-wing monoplane was designed jointly by Lindbergh and Ryan's chief engineer Donald A. Hall. The Spirit flew for the first time just two months later, and after a series of test flights Lindbergh took off from San Diego on May 10. He went first to St. Louis, then on to Roosevelt Field on New York's Long Island.
=== Flight ===
In the early morning of Friday, May 20, 1927, Lindbergh took off from Roosevelt Field on Long Island. His destination, Le Bourget Aerodrome, was about 7 miles (11 km) outside Paris and 3,610 miles (5,810 km) from his starting point. He was "too busy the night before to lie down for more than a couple of hours," and "had been unable [to] sleep." It rained the morning of his takeoff, but as the plane "was wheeled into position on the runway," the rain ceased and light began to break through the "low-hanging clouds." A crowd variously described as "nearly a thousand" or "several thousand" assembled to see Lindbergh off. For its transatlantic flight, the Spirit was loaded with 450 U.S. gallons (1,700 liters) of fuel that was filtered repeatedly to avoid fuel line blockage. The fuel load was a thousand pounds heavier than any the Spirit had lifted during a test flight, and the fully loaded airplane weighed 5,200 pounds (2,400 kg; 2.6 short tons). With takeoff hampered by a muddy, rain-soaked runway, the plane was "helped by men pushing at the wing struts," with the last man leaving the wings only one hundred yards (90 m) down the runway. The Spirit gained speed very slowly during its 7:52 AM takeoff, but cleared telephone lines at the far end of the field "by about twenty feet (6.1 m) with a fair reserve of flying speed".
At 8:52 AM, an hour after takeoff, Lindbergh was flying at an altitude of 500 feet (150 m) over Rhode Island, following an uneventful passage—aside from some turbulence—over Long Island Sound and Connecticut. By 9:52 AM, he had passed Boston and was flying with Cape Cod to his right, with an airspeed of 107 miles per hour (172 km/h) and altitude of 150 feet (46 m); about an hour later he began to feel tired, even though only a few hours had elapsed since takeoff. To keep his mind clear, Lindbergh descended and flew at only 10 feet (3 m) above the water's surface. By around 11:52 AM, he had climbed to an altitude of 200 feet (60 m), and at this point was 400 miles (640 km) distant from New York. Nova Scotia appeared ahead and, after flying over the Gulf of Maine, he was only "6 miles (10 km), or 2 degrees, off course." At 3:52 PM, the eastern coast of Cape Breton Island was below; he struggled to stay awake, even though it was "only the afternoon of the first day." At 5:52 PM, he was flying along the Newfoundland coast, and passed St. John's at 7:15 PM. On its May 21 front page, The New York Times ran a special cable from the prior evening: "Captain Lindbergh's airplane passed over St. John's at 8:15 o'clock tonight [7:15 New York Daylight Saving Time]...was seen by hundreds and disappeared seaward, heading for Ireland...It was flying quite low between the hills near St. John's." The Times also observed that Lindbergh was "following the track of Hawker and Grieve and also of Alcock and Brown".
Stars appeared as night fell around 8:00 PM. The sea became obscured by fog, prompting Lindbergh to climb "from an altitude of 800 feet (240 m) to 7,500 feet (2,300 m) to stay above the quickly-rising cloud." An hour later, he was flying at 10,000 feet (3,000 m). A towering thunderhead stood in front of him, and he flew into the cloud, but turned back after he noticed ice forming on the plane. While inside the cloud, Lindbergh "thrust a bare hand through the cockpit window," and felt the "sting of ice particles." After returning to open sky, he "curved back to his course." At 11:52 PM, Lindbergh was in warmer air, and no ice remained on the Spirit; he was flying 90 miles per hour (140 km/h) at 10,000 feet (3,000 m), and was 500 miles (800 km) from Newfoundland. Eighteen hours into the flight, he was halfway to Paris, and while he had planned to celebrate at this point, he instead felt "only dread." Because Lindbergh flew through several time zones, dawn came earlier, at around 2:52 AM. He began to hallucinate about two hours later. At this point in the flight, he "continually" fell asleep, awakening "seconds, possibly minutes, later." But after "flying for hours in or above the fog," the weather finally began to clear. At 7:52 AM, Lindbergh marked twenty-four hours in the air, and by this point he no longer felt as tired.
At around 9:52 AM New York time, or twenty-seven hours after he left Roosevelt Field, Lindbergh saw "porpoises and fishing boats," a sign he had reached the other side of the Atlantic. He circled and flew closely, but no fishermen appeared on the boat decks, although he did see a face watching from a porthole. Dingle Bay, in County Kerry of southwest Ireland, was the first European land that Lindbergh encountered; he veered to get a better look and consulted his charts, identifying it as the southern tip of Ireland. The local time in Ireland was 3:00 PM. Flying over Dingle Bay, the Spirit was "2.5 hours ahead of schedule and less than 3 miles (5 km) off course." Lindbergh had navigated "almost precisely to the coastal point he had marked on his chart." He wanted to reach the French coast in daylight, so increased his speed to 110 miles per hour (180 km/h). The English coast appeared ahead of him, and he was "now wide awake." A report came from Plymouth, on the English coast, that Lindbergh's plane had started across the English Channel. News soon spread across both "Europe and the United States that Lindbergh had been spotted over England," and a crowd started to form at Le Bourget Aerodrome as he neared Paris. At sunset, he flew over Cherbourg, on the French coast 200 miles (320 km) from Paris; it was around 2:52 PM New York time.
Over the 33½ hours of the flight, the aircraft fought icing, flew blind through fog for several hours, and Lindbergh navigated only by dead reckoning (he was not proficient at navigating by the sun and stars, and he rejected radio navigation gear as heavy and unreliable). He was fortunate that the winds over the Atlantic cancelled each other out, giving him zero wind drift, and thus accurate navigation during the long flight over featureless ocean.
On arriving in Paris, Lindbergh "circled the Eiffel Tower" before flying to the airfield. He flew over the crowd at Le Bourget Aerodrome at 10:16 PM and landed at 10:22 PM on Saturday, May 21, on the far side of the field and "nearly half a mile from the crowd," as reported by The New York Times. The airfield was not marked on his map and Lindbergh knew only that it was some seven miles northeast of the city; he initially mistook it for some large industrial complex because of the bright lights spreading out in all directions, which were in fact the headlights of tens of thousands of spectators' cars caught in "the largest traffic jam in Paris history" in their attempt to be present for Lindbergh's landing.
A crowd estimated at 150,000 stormed the field, dragged Lindbergh out of the cockpit, and carried him around above their heads for "nearly half an hour." Some minor damage was done to the Spirit by souvenir hunters before pilot and plane reached the safety of a nearby hangar with the aid of French military fliers, soldiers, and police. The Times reported that before the police could intervene the "souvenir mad" spectators "stripped the plane of everything which could be taken off," and were cutting off pieces of linen when "a squad of soldiers with fixed bayonets quickly surrounded" the plane, providing guard as it was "wheeled into a shed." Lindbergh met the U.S. Ambassador to France, Myron T. Herrick, across Le Bourget field in a "little room with a few chairs and an army cot." The lights in the room were turned off to conceal his presence from the frenzied crowd, which "surged madly" trying to find him. Lindbergh shook hands with Herrick and handed him several letters he had carried across the Atlantic, three of which were from Col. Theodore Roosevelt Jr., son of former President Theodore Roosevelt, who had written letters of introduction at Lindbergh's request. Lindbergh left the airfield around midnight and was driven through Paris to the ambassador's residence, stopping to visit the French Tomb of the Unknown Soldier at the Arc de Triomphe; after arriving at the residence, he slept for the first time in about 60 hours.
Lindbergh's flight was certified by the National Aeronautic Association of the United States based on the readings from a sealed barograph placed in the Spirit.
== Global fame ==
Lindbergh received unprecedented acclaim after his historic flight. In the words of biographer A. Scott Berg, people were "behaving as though Lindbergh had walked on water, not flown over it". The New York Times printed an above the fold, page-wide headline: "Lindbergh Does It!" and his mother's house in Detroit was surrounded by a crowd reported at nearly a thousand. He became "an international celebrity, with invitations pouring in for him to visit European countries," and he "received marriage proposals, invitations to visit cities across the nation, and thousands of gifts, letters, and endorsement requests." At least "200 songs were written" in tribute to him and his flight. "Lucky Lindy!", written and composed by L. Wolfe Gilbert and Abel Baer, was finished on May 21 itself, and was "performed to great acclaim in several Manhattan clubs" that night. After landing, Lindbergh was eager to embark on a tour of Europe. As he noted in a speech a few weeks afterward, his flight marked the first time he "had ever been abroad," and he "landed with the expectancy, and the hope, of being able to see Europe."
The morning after landing, Lindbergh appeared in the balcony of the U.S. embassy, responding "briefly and modestly" to the calls of the crowd. The French Foreign Office flew the American flag, the first time it had saluted someone who was not a head of state. At the Élysée Palace, French President Gaston Doumergue bestowed the Légion d'honneur on Lindbergh, pinning the award on his lapel, with Ambassador Herrick present for the occasion. Lindbergh also made flights to Belgium and Britain in the Spirit before returning to the United States. On May 28, Lindbergh flew to Evere Aerodrome in Brussels, Belgium, circling the field three times for the cheering crowd and taxiing to a halt just after 3:00 PM, as a thousand children waved American flags. On his way to Evere, Lindbergh had met an escort of ten planes from the airport, who found him on course near Mons but had trouble keeping up as the Spirit was averaging "about 100 miles an hour." After landing, Lindbergh was welcomed by military officers and prominent officials, including Belgian Prime Minister Henri Jaspar, who led the procession of Lindbergh's plane to a "platform where it was raised to the view of cheering thousands." "It was a splendid flight," Lindbergh declared, stating: "I enjoyed every minute of it. The motor is in fine shape and I could circle Europe without touching it." Belgian troops with fixed bayonets protected the Spirit to avoid a repeat of the damage at Le Bourget. From Evere, Lindbergh motored to the U.S. embassy, and then went to place a wreath on the Belgian tomb of the unknown soldier. He then visited the Belgian royal palace at the invitation of King Albert I, where the king made Lindbergh a Knight of the Order of Leopold; as Lindbergh shook the king's hand, he said: "I have heard much of the famous soldier-king of the Belgians." 
The United Press reported that "One million persons are in Brussels today to greet Lindbergh," constituting "the greatest welcome ever accorded a private citizen in Belgium."
After Belgium, Lindbergh traveled to the United Kingdom. He departed Brussels and arrived at Croydon Air Field in the Spirit on May 29, where a crowd of 100,000 "mobbed" him. Before reaching the airfield he overflew London where crowds, some on roofs, "gazed at the flyer" and observers with "field glasses in the West End business district" watched him. About 50 minutes before he landed, the "roads leading toward Croydon airport were jammed." Flying into the airfield, Lindbergh "appeared on the horizon" at 5:50 PM accompanied by six British military planes, but the massive crowd "swept over the guard lines" and forced him to circle the airfield "while police battled the crowd," and "not until 10 minutes later had they cleared a space large enough" for him to land. Police reserves were sent to the airfield in "large numbers," but it was not enough to contain the multitude. As the plane came to a stop, the crowd "waved American flags, smashed fences and knocked down police," while Lindbergh himself was described as "grinning and serene" amid the "seething" crowd. The United Press reported that a "man's leg was broken in the crush," and another man fell from atop a hangar and suffered internal injuries. English officials were reportedly "surprised" by the enthusiasm of the welcome. A limousine pulled near the Spirit, escorting Lindbergh to a tower on the field where he responded to the cheering crowd. "All I can say is that this is worse than what happened at Le Bourget Field," he told them. "But all the same, I'm glad to be here." When he reached the reception room where British Secretary of State for Air Sir Samuel Hoare, U.S. Ambassador Alanson B. Houghton, and others waited, his first words were: "Save my plane!" Mechanics moved the Spirit to a hangar where it was placed "under a military guard." Also present at Croydon were former Secretary of State for Air Lord Thomson, Director of Civil Aviation Sir Sefton Brancker, and Brig. Gen. P. R. C. Groves.
Accompanied by two Royal Air Force planes, he then flew 90 miles from Croydon to Gosport, where he left the Spirit to be dismantled for shipment back to New York. On May 31, accompanied by an attaché of the U.S. Embassy, Lindbergh visited British Prime Minister Stanley Baldwin at 10 Downing Street and then motored to Buckingham Palace, where King George V received him as a guest and awarded him the British Air Force Cross. In anticipation of Lindbergh's visit to the palace, a crowd massed "hoping to get a glimpse" of him. The crowd became so great that police had to call in reserves from Scotland Yard. Upon his arrival back in the United States aboard the U.S. Navy cruiser USS Memphis (CL-13) on June 11, 1927, a fleet of warships and multiple flights of military aircraft escorted him up the Potomac River to the Washington Navy Yard, where President Calvin Coolidge awarded him the Distinguished Flying Cross. Lindbergh received the first award of this medal, but the award violated the authorizing regulation. Coolidge's own executive order, published in March 1927, required recipients to perform their feats of airmanship "while participating in an aerial flight as part of the duties incident to such membership [in the Organized Reserves]", a requirement Lindbergh did not satisfy.
Lindbergh flew from Washington, D.C., to New York City on June 13, arriving in Lower Manhattan. He traveled up the Canyon of Heroes to City Hall, where he was received by Mayor Jimmy Walker. A ticker-tape parade followed to Central Park Mall, where he was awarded the New York Medal for Valor at a ceremony hosted by New York Governor Al Smith and attended by a crowd of 200,000. Some 4,000,000 people saw Lindbergh that day. That evening, Lindbergh was accompanied by his mother and Mayor Walker when he was the guest of honor at a 500-guest banquet and dance held at Clarence MacKay's Long Island estate, Harbor Hill.
The following night, Lindbergh was honored with a grand banquet at the Hotel Commodore given by the Mayor's Committee on Receptions of the City of New York and attended by some 3,700 people. He was officially awarded the check for the prize on June 16.
On July 18, 1927, Lindbergh was promoted to the rank of colonel in the Air Corps of the Officers Reserve Corps of the U.S. Army.
On December 14, 1927, a Special Act of Congress awarded Lindbergh the Medal of Honor, although the medal was almost always awarded for heroism in combat. It was presented to Lindbergh by President Coolidge at the White House on March 21, 1928. The medal contradicted Coolidge's earlier executive order directing that "not more than one of the several decorations authorized by Federal law will be awarded for the same act of heroism or extraordinary achievement" (Lindbergh was recognized for the same act with both the Medal of Honor and the Distinguished Flying Cross). The statute authorizing the award was also criticized for apparently violating procedure; House legislators reportedly neglected to have their votes counted.
Lindbergh was honored as the first Time magazine Man of the Year (now called "Person of the Year") when he appeared on that magazine's cover at age 25 on January 2, 1928; he remained the youngest Time Person of the Year until Greta Thunberg in 2019. The winner of the 1930 Best Woman Aviator of the Year Award, Elinor Smith Sullivan, said that before Lindbergh's flight:
People seemed to think we [aviators] were from outer space or something. But after Charles Lindbergh's flight, we could do no wrong. It's hard to describe the impact Lindbergh had on people. Even the first walk on the moon doesn't come close. The twenties was such an innocent time, and people were still so religious—I think they felt like this man was sent by God to do this. And it changed aviation forever because all of a sudden the Wall Streeters were banging on doors looking for airplanes to invest in. We'd been standing on our heads trying to get them to notice us but after Lindbergh, suddenly everyone wanted to fly, and there weren't enough planes to carry them.
=== Autobiography and tours ===
Barely two months after Lindbergh arrived in Paris, G. P. Putnam's Sons published his 318-page autobiography "WE", which was the first of 15 books he eventually wrote or to which he made significant contributions. The company was run by aviation enthusiast George P. Putnam.
The dustjacket notes said that Lindbergh wanted to share the "story of his life and his transatlantic flight together with his views on the future of aviation", and that "WE" referred to the "spiritual partnership" that had developed "between himself and his airplane during the dark hours of his flight". However, as Berg wrote in 1998, Putnam's chose the title without "Lindbergh's knowledge or approval," and Lindbergh would "forever complain about it, that his use of 'we' meant him and his backers, not him and his plane, as the press had people believing"; nonetheless, as Berg remarked, "his frequent unconscious use of the phrase suggested otherwise."
Putnam's sold special autographed copies of the book for $25 each, all of which were purchased before publication. "WE" was soon translated into most major languages and sold more than 650,000 copies in the first year, earning Lindbergh more than $250,000. Its success was considerably aided by Lindbergh's three-month, 22,350-mile (35,970 km) tour of the United States in the Spirit on behalf of the Daniel Guggenheim Fund for the Promotion of Aeronautics. Between July 20 and October 23, 1927, Lindbergh visited 82 cities in all 48 states, rode 1,290 mi (2,080 km) in parades, and delivered 147 speeches before 30 million people.
Lindbergh then toured 16 Latin American countries between December 13, 1927, and February 8, 1928. Dubbed the "Good Will Tour", it included stops in Mexico (where he also met his future wife, Anne, the daughter of U.S. Ambassador Dwight Morrow), Guatemala, British Honduras, El Salvador, Honduras, Nicaragua, Costa Rica, Panama, the Canal Zone, Colombia, Venezuela, St. Thomas, Puerto Rico, the Dominican Republic, Haiti, and Cuba, covering 9,390 miles (15,110 km) in just over 116 hours of flight time. A year and two days after its first flight, Lindbergh flew the Spirit from St. Louis to Washington, D.C., where it has been on public display at the Smithsonian Institution ever since. Over the previous 367 days, Lindbergh and the Spirit had logged 489 hours 28 minutes of flight time.
A "Lindbergh boom" in aviation had begun. The volume of mail moving by air increased 50 percent within six months, applications for pilots' licenses tripled, and the number of planes quadrupled.
President Herbert Hoover appointed Lindbergh to the National Advisory Committee for Aeronautics.
Lindbergh and Pan American World Airways head Juan Trippe were interested in developing an air route across Alaska and Siberia to China and Japan. In the summer of 1931, with Trippe's support, Lindbergh and his wife flew from Long Island to Nome, Alaska, and from there to Siberia, Japan, and China. The flight was carried out with a Lockheed Model 8 Sirius named Tingmissartoq. The route was not available for commercial service until after World War II, as prewar aircraft lacked the range to fly from Alaska to Japan nonstop, and the United States had not officially recognized the Soviet government. In China, they volunteered to help in disaster investigation and relief efforts for the Central China flood of 1931. This was later documented in Anne's book North to the Orient.
=== Air mail promotion ===
Lindbergh used his world fame to promote air mail service. For example, at the request of Basil L. Rowe, the owner of West Indian Aerial Express (and later Pan Am's chief pilot), in February 1928 he carried some 3,000 pieces of special souvenir mail between Santo Domingo, Dominican Republic; Port-au-Prince, Haiti; and Havana, Cuba—the last three stops he and the Spirit made during their 7,800 mi (12,600 km) "Good Will Tour" of Latin America and the Caribbean between December 13, 1927, and February 8, 1928, and the only franked mail pieces that he ever flew in his iconic plane.
Two weeks after his Latin American tour, Lindbergh piloted a series of special flights over his old CAM-2 route on February 20 and 21. Tens of thousands of self-addressed souvenir covers had been sent in from all over the world, so at each stop Lindbergh switched to another of the three planes that he and his fellow CAM-2 pilots had used, allowing it to be said that each cover had been flown by him. The covers were then backstamped and returned to their senders as a promotion of the air mail service.
In 1929–1931, Lindbergh carried much smaller numbers of souvenir covers on the first flights over routes in Latin America and the Caribbean, which he had earlier laid out as a consultant to Pan American Airways to be then flown under contract to the Post Office as Foreign Air Mail (FAM) routes 5 and 6.
On March 10, 1929, Lindbergh flew an inaugural flight from Brownsville, Texas, to Mexico City via Tampico in a Ford Trimotor airplane, carrying a load of U.S. mail. A number of mail bags went missing for a month, and the covers they contained subsequently became known in the philatelic world as the covers of the "Lost Mail Flight". The historic flight received wide publicity in the press and marked the beginning of extended airmail service between the United States and Mexico.
== Personal life ==
=== American family ===
In his autobiography, Lindbergh derided pilots he met as womanizing "barnstormers"; he also criticized Army cadets for their "facile" approach to relationships. He wrote that the ideal romance was stable and long-term, with a woman with keen intellect, good health, and strong genes, his "experience in breeding animals on our farm [having taught him] the importance of good heredity".
Anne Morrow Lindbergh was the daughter of Dwight Morrow, who, as a partner at J.P. Morgan & Co., had acted as financial adviser to Lindbergh. Morrow was also the U.S. Ambassador to Mexico in 1927. Invited by Morrow on a goodwill tour to Mexico along with humorist and actor Will Rogers, Lindbergh met Anne in Mexico City in December 1927.
The couple was married on May 27, 1929, at the Morrow estate in Englewood, New Jersey, where they resided after their marriage before moving to the western part of the state. They had six children: Charles Augustus Lindbergh Jr. (1930–1932); Jon Morrow Lindbergh (1932–2021); Land Morrow Lindbergh (b. 1937), who studied anthropology at Stanford University; Anne Lindbergh (1940–1993); Scott Lindbergh (b. 1942); and Reeve Lindbergh (b. 1945), a writer. Lindbergh taught Anne how to fly, and she accompanied and assisted him in much of his exploring and charting of air routes.
Lindbergh saw his children for only a few months a year. He kept track of each child's infractions (including such things as gum-chewing) and insisted that Anne track every penny of household expenses.
Lindbergh's grandson, aviator Erik Lindbergh, has had notable involvement in both the private spaceflight and electric aircraft industries.
=== Glider hobby ===
Lindbergh came to the Monterey Peninsula with his wife in March 1930 to continue innovations in the design and use of gliders. He stayed at Del Monte Lodge in Pebble Beach while searching for sites for launching gliders. At the Palo Corona Ranch in Carmel Valley, California, he and his wife stayed as guests at the Sidney Fish home, and he flew a glider from a ridge at the ranch. Eight men towed the glider to the ridge, where he soared over the countryside for 10 minutes and brought the plane down 3 miles below the Highlands Inn. Other flights lasted 70 minutes. In 1930, his wife became the first woman to receive a U.S. glider pilot license.
=== Kidnapping of Charles Lindbergh Jr. ===
On the evening of March 1, 1932, twenty-month-old Charles Augustus Lindbergh Jr. was abducted from his crib in the Lindberghs' rural home, Highfields, in East Amwell, New Jersey, near the town of Hopewell. A man who claimed to be the kidnapper picked up a cash ransom of $50,000 on April 2, part of which was in gold certificates, which were soon to be withdrawn from circulation and would therefore attract attention; the bills' serial numbers were also recorded. On May 12, the child's remains were found in woods not far from the Lindbergh home.
The case was widely called the "Crime of the Century" and was described by H. L. Mencken as "the biggest story since the Resurrection". In response, Congress passed the so-called "Lindbergh Law", which made kidnapping a federal offense if the victim is taken across state lines or (as in the Lindbergh case) the kidnapper uses "the mail or ... interstate or foreign commerce in committing or in furtherance of the commission of the offense", such as in demanding ransom.
Richard Hauptmann, a 34-year-old German immigrant carpenter, was arrested near his home in the Bronx, New York, on September 19, 1934, after paying for gasoline with one of the ransom bills. $13,760 of the ransom money and other evidence was found in his home. Hauptmann went on trial for kidnapping, murder and extortion on January 2, 1935, in a circus-like atmosphere in Flemington, New Jersey. He was convicted on February 13, sentenced to death, and electrocuted at Trenton State Prison on April 3, 1936. His guilt is contested.
=== In Europe (1936–1939) ===
An intensely private man, Lindbergh became exasperated by the unrelenting public attention in the wake of the kidnapping and trial, and was concerned for the safety of his three-year-old second son, Jon. In the predawn hours of Sunday, December 22, 1935, the family "sailed furtively" from Manhattan for Liverpool, the only three passengers aboard the United States Lines freighter SS American Importer. They traveled under assumed names and with diplomatic passports issued through the personal intervention of former U.S. Treasury Secretary Ogden L. Mills.
News of the Lindberghs' "flight to Europe" did not become public until a full day later, and even after the identity of their ship became known, radiograms addressed to Lindbergh aboard it were returned as "Addressee not aboard".
They arrived in Liverpool on December 31, then departed for South Wales to stay with relatives.
The family eventually rented "Long Barn" in Sevenoaks Weald, Kent. In 1938, the family (including a third son, Land, born May 1937 in London) moved to Île Illiec, a small four-acre (1.6 ha) island Lindbergh purchased off the Breton coast of France.
Except for a brief visit to the U.S. in December 1937, the Lindberghs lived and traveled extensively around Europe in their personal two-person Miles M.12 Mohawk airplane, before returning to the U.S. in April 1939 and settling in a rented seaside estate at Lloyd Neck, Long Island, New York. The return was prompted by a personal request from General H. H. ("Hap") Arnold, the chief of the United States Army Air Corps, in which Lindbergh was a reserve colonel, for him to accept a temporary return to active duty to help evaluate the Air Corps's readiness for war. His duties included evaluating new aircraft types in development, recruitment procedures, and finding a site for a new air force research institute and other potential air bases. Assigned a Curtiss P-36 fighter, he toured various facilities, reporting back to Wilbur Wright Field. Lindbergh's brief four-month tour was also his first period of active military service since his graduation from the Army's Flight School fourteen years earlier in 1925.
== Scientific activities ==
Lindbergh wrote to the Longines watch company describing a watch that would make navigation easier for pilots. Longines first produced the resulting "Lindbergh Hour Angle watch" in 1931, and it remains in production today.
In 1929, Lindbergh became interested in the work of rocket pioneer Robert H. Goddard. By helping Goddard secure an endowment from Daniel Guggenheim in 1930, Lindbergh allowed Goddard to expand his research and development. Throughout his life, Lindbergh remained a key advocate of Goddard's work.
In 1930, Lindbergh's sister-in-law developed a fatal heart condition. Lindbergh began to wonder why hearts could not be repaired with surgery. Starting in early 1931 at the Rockefeller Institute and continuing during his time living in France, Lindbergh studied the perfusion of organs outside the body with Nobel Prize-winning French surgeon Alexis Carrel. Although perfused organs were said to have survived surprisingly well, all showed progressive degenerative changes within a few days. Lindbergh's invention, a glass perfusion pump, named the "Model T" pump, is credited with making future heart surgeries possible. In this early stage, the pump was far from perfected. In 1938, Lindbergh and Carrel described an artificial heart in the book in which they summarized their work, The Culture of Organs, but it was decades before one was built. In later years, Lindbergh's pump was further developed by others, eventually leading to the construction of the first heart-lung machine.
== Pre-war activities and politics ==
=== Overseas visits ===
In July 1936, shortly before the opening of the 1936 Summer Olympics in Berlin, American journalist William L. Shirer recorded in his diary: "The Lindberghs are here [in Berlin], and the Nazis, led by Göring, are making a great play for them."
This 1936 visit was the first of several that Lindbergh made at the request of the U.S. military establishment between 1936 and 1938, with the goal of evaluating German aviation. During this visit, the Lufthansa airline held a tea for the Lindberghs, and later invited them for a ride aboard the massive four-engine Junkers G.38 that had been christened Field-Marshal Von Hindenburg. Shirer, who was also on the flight, wrote about it in his diary.
Hanna Reitsch demonstrated the Focke-Wulf Fw 61 helicopter to Lindbergh in 1937, and he was the first American to examine Germany's newest bomber, the Junkers Ju 88, and Germany's front-line fighter aircraft, the Messerschmitt Bf 109, which he was allowed to pilot. He said of the Bf 109 that he knew of "no other pursuit plane which combines simplicity of construction with such excellent performance characteristics".
There is disagreement about how accurate Lindbergh's reports were, but historian Wayne S. Cole asserts that the consensus among British and American officials was that they were slightly exaggerated but badly needed. Arthur Krock, the chief of The New York Times's Washington Bureau, wrote in 1939, "When the new flying fleet of the United States begins to take air, among those who will have been responsible for its size, its modernness, and its efficiency is Colonel Charles A. Lindbergh. Informed officials here, in touch with what Colonel Lindbergh has been doing for his country abroad, are authority for this statement, and for the further observation that criticism of any of his activities – in Germany or elsewhere – is as ignorant as it is unfair." General Henry H. Arnold, the only U.S. Air Force general to hold five-star rank, wrote in his autobiography, "Nobody gave us much useful information about Hitler's air force until Lindbergh came home in 1939." Lindbergh also undertook a survey of aviation in the Soviet Union in 1938.
In 1938, Hugh Wilson, the American ambassador to Germany, hosted a dinner for Lindbergh with Germany's air chief, Generalfeldmarschall Hermann Göring, and three central figures in German aviation: Ernst Heinkel, Adolf Baeumker, and Willy Messerschmitt. At this dinner, Göring presented Lindbergh with the Commander Cross of the Order of the German Eagle. Lindbergh's acceptance became controversial when, only a few weeks after this visit, the Nazi Party carried out the Kristallnacht, a nationwide anti-Jewish pogrom which is considered a key inaugurating event of the Holocaust. Lindbergh declined to return the medal, later writing: "It seems to me that the returning of decorations, which were given in times of peace and as a gesture of friendship, can have no constructive effect. If I were to return the German medal, it seems to me that it would be an unnecessary insult. Even if war develops between us, I can see no gain in indulging in a spitting contest before that war begins." Ambassador Wilson later wrote to Lindbergh: "Neither you, nor I, nor any other American present had any previous hint that the presentation would be made. I have always felt that if you refused the decoration, presented under those circumstances, you would have been guilty of a breach of good taste. It would have been an act offensive to a guest of the Ambassador of your country, in the house of the Ambassador."
Lindbergh's reaction to the Kristallnacht was entrusted to his diary: "I do not understand these riots on the part of the Germans", he wrote. "It seems so contrary to their sense of order and intelligence. They have undoubtedly had a difficult 'Jewish problem', but why is it necessary to handle it so unreasonably?"
Lindbergh had planned to move to Berlin for the winter of 1938–39. He had provisionally found a house in Wannsee, but after Nazi friends discouraged him from leasing it because it had been formerly owned by Jews, it was recommended that he contact Albert Speer, who said he would build the Lindberghs a house anywhere they wanted. On the advice of his close friend Alexis Carrel, he cancelled the trip.
=== Isolationism and America First Committee ===
In 1938, the U.S. Air Attaché in Berlin invited Lindbergh to inspect the rising power of Nazi Germany's Air Force. Impressed by German technology and the apparently large number of aircraft at their disposal, and influenced by the staggering number of deaths from World War I, he opposed U.S. entry into the impending European conflict. In September 1938, he stated to the French cabinet that the Luftwaffe possessed 8,000 aircraft and could produce 1,500 per month. Although this was seven times the actual number determined by the Deuxième Bureau, it influenced France into trying to avoid conflict with Nazi Germany through the Munich Agreement. At the urging of U.S. Ambassador Joseph Kennedy, Lindbergh wrote a secret memo to the British warning that a military response by Britain and France to Hitler's violation of the Munich Agreement would be disastrous; he claimed that France was militarily weak and Britain over-reliant on its navy. He urgently recommended that they strengthen their air power to force Hitler to redirect his aggression against "Asiatic Communism".
Following Hitler's invasion of Czechoslovakia and Poland, Lindbergh opposed sending aid to countries under threat, writing "I do not believe that repealing the arms embargo would assist democracy in Europe" and "If we repeal the arms embargo with the idea of assisting one of the warring sides to overcome the other, then why mislead ourselves by talk of neutrality?" He equated assistance with war profiteering: "To those who argue that we could make a profit and build up our own industry by selling munitions abroad, I reply that we in America have not yet reached a point where we wish to capitalize on the destruction and death of war".
In August 1939, Lindbergh was the first choice of Albert Einstein, whom he had met years earlier in New York, to deliver the Einstein–Szilárd letter alerting President Roosevelt about the vast potential of nuclear fission. However, Lindbergh did not respond to Einstein's letter or to Szilárd's later letter of September 13. Two days later, Lindbergh gave a nationwide radio address, in which he called for isolationism and indicated some pro-German sympathies and antisemitic insinuations about Jewish ownership of the media, saying "We must ask who owns and influences the newspaper, the news picture, and the radio station, ... If our people know the truth, our country is not likely to enter the war". After that, Szilárd stated to Einstein: "Lindbergh is not our man."
In October 1939, following the outbreak of hostilities between Britain and Germany, and a month after the Canadian declaration of war on Germany, Lindbergh made another nationwide radio address criticizing Canada for drawing the Western Hemisphere "into a European war simply because they prefer the Crown of England" to the independence of the Americas. Lindbergh further stated his opinion that the entire continent and its surrounding islands needed to be free from the "dictates of European powers".
In November 1939, Lindbergh authored a controversial Reader's Digest article in which he deplored the war, but asserted the need for a German assault on the Soviet Union. Lindbergh wrote: "Our civilization depends on peace among Western nations ... and therefore on united strength, for Peace is a virgin who dare not show her face without Strength, her father, for protection".
In late 1940, Lindbergh became the spokesman of the isolationist America First Committee, soon speaking to overflow crowds at Madison Square Garden and Chicago's Soldier Field, with millions listening by radio. He argued emphatically that America had no business attacking Germany. Lindbergh justified this stance in writings that were only published posthumously:
I was deeply concerned that the potentially gigantic power of America, guided by uninformed and impractical idealism, might crusade into Europe to destroy Hitler without realizing that Hitler's destruction would lay Europe open to the rape, loot and barbarism of Soviet Russia's forces, causing possibly the fatal wounding of Western civilization.
In April 1941, he argued before 30,000 members of the America First Committee that "the British government has one last desperate plan ... to persuade us to send another American Expeditionary Force to Europe and to share with England militarily, as well as financially, the fiasco of this war."
In his 1941 testimony before the House Committee on Foreign Affairs opposing the Lend-Lease bill, Lindbergh proposed that the United States negotiate a neutrality pact with Germany. President Franklin Roosevelt publicly decried Lindbergh's views as those of a "defeatist and appeaser", comparing him to U.S. Rep. Clement L. Vallandigham, who had led the "Copperhead" movement opposed to the American Civil War. Following this, Lindbergh resigned his colonel's commission in the U.S. Army Air Corps Reserve on April 28, 1941, writing that he saw "no honorable alternative" given that Roosevelt had publicly questioned his loyalty; the next day, The New York Times ran an above the fold, front-page article about his resignation.
On September 11, 1941, Lindbergh delivered a speech for an America First rally at the Des Moines Coliseum that accused three groups of "pressing this country toward war: the British, the Jewish, and the Roosevelt Administration". He said that the British were propagandizing America because they could not defeat Nazi Germany without American aid, and that President Franklin D. Roosevelt's administration was trying to use a war to consolidate power. The three paragraphs Lindbergh devoted to accusing American Jews of war agitation formed what biographer A. Scott Berg called "the core of his thesis". In the speech, Lindbergh said that Jewish Americans had outsized control over government and news media (even though Jews did not make up even 3% of newspaper publishers and were only a minority of foreign policy bureaucrats), employing recognizably antisemitic tropes. The speech received a strong public backlash as newspapers, politicians, and clergy throughout the country criticized America First and Lindbergh for his remarks' antisemitism.
== Antisemitism and views on race ==
Lindbergh's anticommunism resonated deeply with many Americans, while his pro-eugenics views and Nordicism enjoyed social acceptance. His speeches and writings reflected his adoption of views on race, religion, and eugenics similar to those of the German Nazis, and he was suspected of being a Nazi sympathizer. However, during a speech in September 1941, Lindbergh stated "no person with a sense of the dignity of mankind can condone the persecution of the Jewish race in Germany." Interventionist pamphlets pointed out that his efforts were praised in Nazi Germany and included quotations such as "Racial strength is vital; politics, a luxury."
Roosevelt disliked Lindbergh's outspoken opposition to his administration's interventionist policies, telling Treasury Secretary Henry Morgenthau, "If I should die tomorrow, I want you to know this, I am absolutely convinced Lindbergh is a Nazi." In 1941 he wrote to Secretary of War Henry Stimson: "When I read Lindbergh's speech I felt that it could not have been better put if it had been written by Goebbels himself. What a pity that this youngster has completely abandoned his belief in our form of government and has accepted Nazi methods because apparently they are efficient." Shortly after the war ended, Lindbergh toured a Nazi concentration camp, and wrote in his diary, "Here was a place where men and life and death had reached the lowest form of degradation. How could any reward in national progress even faintly justify the establishment and operation of such a place?"
Lindbergh seemed to state that he believed the survival of the white race was more important than the survival of democracy in Europe: "Our bond with Europe is one of race and not of political ideology," he declared. Critics have noted the apparent influence on Lindbergh of the German philosopher Oswald Spengler, a conservative authoritarian popular during the interwar period. In a 1935 interview, Lindbergh stated "There is no escaping the fact that men were definitely not created equal..."
Lindbergh developed a long-term friendship with the automobile pioneer Henry Ford, who was well known for his antisemitic newspaper The Dearborn Independent. In a famous comment about Lindbergh to Detroit's former FBI field office special agent in charge in July 1940, Ford said: "When Charles comes out here, we only talk about the Jews."
Lindbergh considered Russia a "semi-Asiatic" country compared to Germany, and he believed Communism was an ideology that would destroy the West's "racial strength" and replace everyone of European descent with "a pressing sea of Yellow, Black, and Brown". He stated that if he had to choose, he would rather see America allied with Nazi Germany than Soviet Russia. He preferred Nordics, but he believed, after Soviet Communism was defeated, Russia would be a valuable ally against potential aggression from East Asia.
Lindbergh elucidated his beliefs regarding the white race in a 1939 article in Reader's Digest:
We can have peace and security only so long as we band together to preserve that most priceless possession, our inheritance of European blood, only so long as we guard ourselves against attack by foreign armies and dilution by foreign races.
Lindbergh said certain races have "demonstrated superior ability in the design, manufacture, and operation of machines", and that "The growth of our western civilization has been closely related to this superiority." Lindbergh admired "the German genius for science and organization, the English genius for government and commerce, the French genius for living and the understanding of life". He believed, "in America they can be blended to form the greatest genius of all".
In his book The American Axis, Holocaust researcher and investigative journalist Max Wallace agreed with Franklin Roosevelt's assessment that Lindbergh was "pro-Nazi". However, he found that the Roosevelt Administration's accusations of dual loyalty or treason were unsubstantiated. Wallace considered Lindbergh to be a well-intentioned but bigoted and misguided Nazi sympathizer whose career as the leader of the isolationist movement had a destructive impact on Jewish people.
Lindbergh's Pulitzer Prize-winning biographer, A. Scott Berg, contended that Lindbergh was not so much a supporter of the Nazi regime as someone so stubborn in his convictions and relatively inexperienced in political maneuvering that he easily allowed rivals to portray him as one. Lindbergh's receipt of the Order of the German Eagle, presented in October 1938 by Generalfeldmarschall Hermann Göring on behalf of Führer Adolf Hitler, was approved without objection by the American embassy. Lindbergh returned to the United States in early 1939 to spread his message of nonintervention. Berg contended Lindbergh's views were commonplace in the United States in the interwar era, and that his support for the America First Committee was representative of the sentiments of many Americans.
Berg also noted: "As late as April 1939—after Germany overtook Czechoslovakia—Lindbergh was willing to make excuses for Adolf Hitler. 'Much as I disapprove of many things Hitler had done', he wrote in his diary on April 2, 1939, 'I believe she [Germany] has pursued the only consistent policy in Europe in recent years. I cannot support her broken promises, but she has only moved a little faster than other nations ... in breaking promises. The question of right and wrong is one thing by law and another thing by history.'" Berg also explained that leading up to the war, Lindbergh believed the great battle would be between the Soviet Union and Germany, not fascism and democracy.
Lindbergh always championed military strength and alertness. He believed that a strong defensive war machine would make America an impenetrable fortress and defend the Western Hemisphere from an attack by foreign powers, and that this was the U.S. military's sole purpose.
While the attack on Pearl Harbor came as a shock to Lindbergh, he did predict that America's "wavering policy in the Philippines" would invite a brutal war there, and in one speech warned, "we should either fortify these islands adequately, or get out of them entirely."
== World War II ==
In January 1942, Lindbergh met with Secretary of War Henry L. Stimson, seeking to be recommissioned in the Army Air Forces. Stimson was strongly opposed because of Lindbergh's long record of public comments against intervention. Blocked from active military service, Lindbergh approached a number of aviation companies and offered his services as a consultant. As a technical adviser with Ford in 1942, he was heavily involved in troubleshooting early problems at the Willow Run Consolidated B-24 Liberator bomber production line. As B-24 production smoothed out, he joined United Aircraft in 1943 as an engineering consultant, devoting most of his time to its Chance-Vought Division.
In 1944, Lindbergh persuaded United Aircraft to send him as a technical representative to the Pacific Theater to study aircraft performance under combat conditions. In preparation for his deployment, Lindbergh went to Brooks Brothers to buy a naval officer's uniform without insignia and visited Brentano's bookstore in New York to buy a New Testament, writing in his wartime journal entry for April 3, 1944: "Purchased a small New Testament at Brentano's. Since I can only carry one book—and a very small one—that is my choice. It would not have been a decade ago; but the more I learn and the more I read, the less competition it has." He demonstrated how United States Marine Corps Aviation pilots could take off safely with a bomb load double the Vought F4U Corsair fighter-bomber's rated capacity. At the time, several Marine squadrons were flying bomber escorts to destroy the Japanese stronghold of Rabaul, New Britain, in the Australian Territory of New Guinea. On May 21, 1944, Lindbergh flew his first combat mission: a strafing run with VMF-222 near the Japanese garrison of Rabaul. He also flew with VMF-216, from the Marine Air Base at Torokina, Bougainville. Lindbergh was escorted on one of these missions by Lt. Robert E. "Lefty" McDonough, who refused to fly with Lindbergh again, as he did not want to be known as "the guy who killed Lindbergh".
In his six months in the Pacific in 1944, Lindbergh took part in fighter bomber raids on Japanese positions, flying 50 combat missions (again as a civilian). His innovations in the use of Lockheed P-38 Lightning fighters impressed a supportive Gen. Douglas MacArthur. Lindbergh introduced engine-leaning techniques to P-38 pilots, greatly improving fuel consumption at cruise speeds, enabling the long-range fighter aircraft to fly longer-range missions. P-38 pilot Warren Lewis quoted Lindbergh's fuel-saving settings, "He said, '... we can cut the RPM down to 1400 RPMs and use 30 inches of mercury (manifold pressure), and save 50–100 gallons of fuel on a mission.'" The U.S. Marine and Army Air Force pilots who served with Lindbergh praised his courage and defended his patriotism.
On July 28, 1944, during a P-38 bomber escort mission with the 433rd Fighter Squadron in the Ceram area, Lindbergh shot down a Mitsubishi Ki-51 "Sonia" observation plane, piloted by Captain Saburo Shimada, commanding officer of the 73rd Independent Chutai. Lindbergh's participation in combat was revealed in a story in the Passaic Herald-News on October 22, 1944.
In mid-October 1944, Lindbergh participated in a joint Army-Navy conference on fighter planes at NAS Patuxent River, Maryland.
== Later life ==
After World War II, Lindbergh lived in Darien, Connecticut, and served as a consultant to the Chief of Staff of the United States Air Force and to Pan American World Airways. With most of eastern Europe under communist control, Lindbergh continued to voice concern about Soviet power, observing: "Freedom of speech and action is suppressed over a large portion of the world...Poland is not free, nor the Baltic states, nor the Balkans. Fear, hatred, and mistrust are breeding." In Lindbergh's words, Soviet and communist influence over the post-war world meant that "while our soldiers have been victorious," America had nonetheless not "accomplished the objectives for which we went to war," and he declared: "We have not established peace or liberty in Europe."
Commenting on the post-war world, Lindbergh said that "a whole civilization is in disintegration," and believed America needed to support Europe against communism. Because America had "taken a leading part" in World War II, he said it therefore could not "retire now and leave Europe to the destructive forces" that the war had "let loose." While he still believed his prewar non-interventionism was correct, Lindbergh said the United States now had a responsibility to support Europe, because of "honor, self-respect, and our own national interests." Furthermore, Lindbergh wrote that "we could not let atrocities such as those of the concentration camps go unpunished," and firmly supported the Nuremberg trials.
After the war, Lindbergh toured Germany, covering "almost two thousand miles during his last two weeks" in the country, and also traveled to Paris and participated in "conferences with military personnel and the American Ambassador" during the same trip. While in Germany in June 1945, he toured Dora concentration camp, inspecting the tunnels of Nordhausen and viewing V-1 and V-2 missile parts. He attempted to "reconcile," as Berg wrote, the technology he saw with how the "forces of evil had harnessed it." Reflecting on what happened in the camps, Lindbergh wrote in his wartime journal that it "seemed impossible that men—civilized men—could degenerate to such a level. Yet they had."
In the following page in his journal, he also lamented the mistreatment of Japanese people by Americans and other Allied personnel during the war, comparing these "incidents" to what the Germans did. As Berg wrote in 1998, Lindbergh returned from this two-month European journey "more alarmed about the state of the world than ever," but nonetheless "he knew that the American public no longer gave a hoot for his opinions." Drawing lessons from the war, Lindbergh stated: "No peace will last that is not based on Christian principles, on justice, on compassion...on a sense of the dignity of man. Without such principles there can be no lasting strength...The Germans found that out."
Soon after returning to America, Lindbergh paid a visit to his mother in Detroit, and on the train home he wrote a letter wherein he mentioned a "spiritual awareness," speaking of how important it was to spend time in the garden, take in the sun, and listen to birds. In Berg's words, this letter "revealed a changed man." As time went on, Lindbergh became increasingly spiritual in his outlook and grew concerned with the impact science and technology had on the world. In 1948, his Of Flight and Life was published, a book that has been described as an "impassioned warning against the dangers of scientific materialism and the powers of technology." He wrote of his experiences as a combat pilot in the Pacific theater, and declared his conversion from a worshiper of science to a worshiper of the "eternal truths of God," expressing concern for humanity's future. In 1949, he received the Wright Brothers Memorial Trophy and declared in his acceptance speech: "If we are to be finally successful, we must measure scientific accomplishments by their effect on man himself."
On April 7, 1954, on the recommendation of President Dwight D. Eisenhower, Lindbergh was commissioned a brigadier general in the U.S. Air Force Reserve; Eisenhower had nominated Lindbergh for promotion on February 15. Also in that year, he served on a Congressional advisory panel that recommended the site of the United States Air Force Academy. He won the Pulitzer Prize for biography in 1954 with his book, The Spirit of St. Louis, which focuses on his 1927 flight and the events leading up to it. In May 1962, Lindbergh visited the White House with his wife and met President John F. Kennedy, having his picture taken by White House photographer Robert Knudsen.
In December 1968, he visited the astronauts of Apollo 8 (the first crewed mission to orbit the Moon) the day before their launch, and in July 1969 he and his wife witnessed the launch of Apollo 11 as personal guests of Neil Armstrong. Armstrong had met Lindbergh in 1968, and the two corresponded until the latter's death in 1974. In conjunction with the first lunar landing, he shared his thoughts as part of Walter Cronkite's live television coverage. He later wrote the foreword to Apollo astronaut Michael Collins's autobiography. While he maintained his interest in technology, Lindbergh began to focus more on protecting the natural world, and after viewing the Apollo 11 launch, he "participated in a WWF-sponsored dedication of a 900-acre bird preserve."
=== Double life and secret German children ===
Beginning in 1957, Lindbergh engaged in lengthy sexual relationships with three women, while remaining married to Anne Morrow. He fathered three children with hatmaker Brigitte Hesshaimer, who lived in the Bavarian town of Geretsried. He had two children with her sister Mariette, a painter, living in Grimisuat. Lindbergh also had a son and daughter, born in 1959 and 1961, with Valeska, who was his private secretary in Europe and lived in Baden-Baden. All seven children were born between 1958 and 1967.
Ten days before he died, Lindbergh wrote to each of his European mistresses, imploring them to maintain the utmost secrecy about his illicit activities with them even after his death. The three women, none of whom ever married, all kept their affairs secret even from their children, who during his lifetime, and for almost a decade after his death, did not know the true identity of their father, whom they had only known by the alias Careu Kent, and seen only when he briefly visited them once or twice a year.
After reading a magazine article about Lindbergh in the mid-1980s, Brigitte's daughter Astrid deduced the truth. She later discovered photographs and more than 150 love letters from Lindbergh to her mother. After Brigitte and Anne Lindbergh had both died, she made her findings public. In 2003, DNA tests confirmed that Lindbergh had fathered Astrid and her two siblings.
Reeve Lindbergh, Lindbergh's youngest child with Anne, wrote in her personal journal in 2003, "This story reflects absolutely Byzantine layers of deception on the part of our shared father. These children did not even know who he was! He used a pseudonym with them (To protect them, perhaps? To protect himself, absolutely!)"
=== Environmental and tribal causes ===
In later life Lindbergh was heavily involved in conservation movements, and was deeply concerned about the negative impacts of new technologies on the natural world and native peoples, focusing on regions like Hawaii, Africa, and the Philippines. He campaigned to protect endangered species including the humpback whale, blue whale, Philippine eagle, and the tamaraw (a rare dwarf Philippine buffalo), and was instrumental in establishing protections for the Tasaday and Agta people, and various African tribes such as the Maasai. Alongside Laurance S. Rockefeller, Lindbergh helped establish the Haleakalā National Park in Hawaii. He also worked to protect Arctic wolves in Alaska, and helped establish Voyageurs National Park in northern Minnesota.
In an essay appearing in the July 1964 Reader's Digest, Lindbergh wrote about a realization he had in Kenya during a trip to see land being considered for a national park. He contrasted his time amid the African landscape with his involvement in a supersonic transport convention in New York, and while "lying under an acacia tree," he realized how the "construction of an airplane" was simple compared to the "evolutionary achievement of a bird." He wrote "that if I had to choose, I would rather have birds than airplanes."
In this essay, he questioned his old definition of "progress," and concluded that nature displayed more actual progress than humanity's creations. He wrote several more essays for Reader's Digest and Life, urging people to respect the self-awareness that came from contact with nature, which he called the "wisdom of wildness," and not merely follow science. As David Boocker wrote in 2009, Lindbergh's essays, appearing in popular magazines, "introduced millions of people to the conservation cause," and he made an important "appeal to lead a life less complicated by technology."
On May 14, 1971, Lindbergh received the Philippine Order of the Golden Heart at a formal dinner at Malacañang Palace in Manila. He was described as an aviation pioneer who had symbolized the advance of technology, and who now was a symbol of the drive to protect natural life from technology. Lindbergh actively participated in both conservation and advocacy for tribal minorities in the Philippines, frequently visiting the country and working to protect species including the tamaraw and Philippine eagle, which he described as a "magnificent bird," lending his name to a law against killing or trapping the animal.
In August 1971, in Davao City, he ceremonially received a young Philippine eagle kept in captivity after its mother was killed by a hunter, delaying his return to the United States so he could take part in the presentation. Arturo Garcia, a movie theater manager in Davao, had bought the bird in March 1970 after the hunting incident, and built a large cage for it behind his house. Lindbergh entered the cage with Jesus Alvarez, director of the Philippines park and wildlife commission, received the eagle, and then turned it over to Alvarez, remarking: "Now we have to see if the bird can go back to its natural place." The Associated Press reported on both Lindbergh's reception of the Order of the Golden Heart and the presentation of the eagle.
==== 1972 Philippines expedition ====
Lindbergh's speeches and writings in later life centered on technology and nature, and his lifelong belief that "all the achievements of mankind have value only to the extent that they preserve and improve the quality of life". In 1972, Lindbergh undertook an expedition with a television news crew to Mindanao, in the Philippines, to investigate reports of a lost tribe. The Tasaday, a Philippine indigenous people of the Lake Sebu area, were attracting much media attention at the time. Although both NBC Evening News and National Geographic ran stories about the supposed discovery of the tribe, a controversy emerged over whether the Tasaday were truly uncontacted, or had just been portrayed that way for media attention—particularly by Manuel Elizalde Jr., a Philippine politician who publicized the tribe—and were in reality "not completely isolated."
Lindbergh cooperated with Elizalde to get a "proclamation from President Ferdinand Marcos to preserve more than 46,000 acres of Tasaday country." However, during Lindbergh's 1972 expedition, the support helicopter for his team had mechanical trouble, creating the prospect of a three-day return trek through difficult jungle terrain. On April 2, The New York Times ran a UPI report stating Lindbergh's party had "sent a radio message from the rain forests of the southern Philippines saying their food was nearly gone and they needed help." Henry A. Byroade, U.S. Ambassador to the Philippines, called upon the 31st Aerospace Rescue and Recovery Squadron at Clark Air Base on the island of Luzon to perform a rescue.
U.S. Air Force Maj. Bruce Ware and his crew—co-pilot Lt. Col. Dick Smith, flight engineer SSgt Bob Baldwin, and pararescueman Airman 1st Class Kim Robinson—flew their Sikorsky HH-3E Jolly Green Giant over 600 miles (970 km) to rescue Lindbergh and his news crew on April 12, 1972. Lindbergh and the news team were stranded on a 3,000-foot (910 m) high jungle ridge line, and because of this terrain the Sikorsky "had to hover with the nose wheel on one side of the ridge, and the main wheels on the other, with the boarding steps a few feet over the ridge top." During the operation, the helicopter had to refuel twice, prompting Lindbergh to comment that although he had helped develop in-flight refueling, he had never been aboard a helicopter during the procedure, nor on the receiving end of it.
After more than twelve hours, and a total of eight trips to a nearby drop point, the mission was completed, and all 46 individuals stranded on the ridge were extracted. With Lindbergh aboard, the helicopter then flew to Mactan Air Base, on the island of Cebu, where photographers were waiting for him. Ware rested in the pilot's seat for several minutes after landing, and Lindbergh was hesitant to disembark before him. He told Ware he was certain he could not have made the "hard" three-day journey back. Lindbergh, with other passengers, was then loaded aboard an HC-130 and flown to Manila. As reported by the Associated Press, Lindbergh remarked after his rescue: "We were in no danger but we were stranded and running low on food."
Maj. Ware received the Distinguished Flying Cross for his actions, and the other Sikorsky crew members received the Air Medal. In 2021, Ware described how he received his medal "in less than a week," remarking that it normally "takes several months. But when you've got an international hero, it kind of gains some momentum."
==== Retirement in Hawaii ====
Lindbergh joined with early aviation industrialist, former Pan Am executive vice president, and longtime friend, Samuel F. Pryor Jr., in "efforts by the Nature Conservancy to preserve plants and wildlife in Kipahulu Valley" on the Hawaiian island of Maui. Lindbergh chose the Kipahulu Valley for retirement, building an A-frame cottage there in 1971; Pryor moved there in 1965 with his wife, Mary, after retiring from Pan Am. Lindbergh's choice of Maui as a retirement home "represented his love of natural places" and his "lifelong commitment to the ideal of simplicity."
==== Views on technology ====
Commenting on Lindbergh's profound concern with the impact of technology on humanity, Richard Hallion wrote: "He recognized the narrow margin on which society trod in the unstable nuclear era, and his work after World War II confirmed his fear that humanity now had the ability to destroy in minutes what previous generations had taken centuries to create. And so Lindbergh the technologist changed to Lindbergh the philosopher, protector of the Tasaday, preaching a turn from the materialistic, mechanistic society toward a society based on 'simplicity, humility, contemplation, prayer.'" In her 1988 book, Charles A. Lindbergh and the American Dilemma, Susan M. Gray wrote that Lindbergh "established his 'middle ground' between technology and human values, embracing both, rejecting neither."
=== Death ===
Lindbergh spent his last years on Maui in his small, rustic seaside home. In 1972, he became sick with cancer and ultimately died of lymphoma on the morning of August 26, 1974, at age 72. After his cancer diagnosis, Lindbergh "sketched a simple design for his grave and coffin," helping to design his grave in the "traditional Hawaiian style." Following "a series of radiation treatments, he spent several months in Maui recuperating," and also made a 26-day stay in the Columbia-Presbyterian Medical Center in New York, but with little improvement.
After he realized the treatment would not save him, he decided to leave Columbia hospital and returned to Kipahulu with his wife Anne, flying to Honolulu on August 17 and then traveling to Maui by small plane, dying a week later. He was buried on the grounds of the Palapala Ho'omau Church in Kipahulu, Maui, a Congregational church first established in 1864, which fell into disuse in the 1940s and was restored beginning in 1964 by Samuel F. Pryor Jr., whose family cooperated with the Lindbergh family to create an endowment for the upkeep of the property. Lindbergh took part in the church restoration with his old friend Pryor, and both men agreed to make their final resting place in the small cemetery they cleared.
On the evening of August 26, President Gerald Ford made a tribute to Lindbergh, saying that the courage and daring of his Atlantic flight would never be forgotten, describing him as a selfless, sincere man, and stating: "For a generation of Americans, and for millions of other people around the world, the 'Lone Eagle' represented all that was best in our country."
== Honors and tributes ==
Lindbergh was a recipient of the Silver Buffalo Award, the highest adult award given by the Boy Scouts of America, on April 10, 1928.
On May 8, 1928, a statue was dedicated at Le Bourget Airport in Paris honoring Lindbergh and his New York to Paris flight, as well as Charles Nungesser and François Coli, who had disappeared while attempting the same feat two weeks earlier in the other direction aboard L'Oiseau Blanc (The White Bird).
San Diego International Airport was named Lindbergh Field from 1928 to 2003. A replica of his plane hangs above baggage claim.
Minneapolis–Saint Paul International Airport's Terminal 1 was named the Lindbergh Terminal, honoring his Minnesota roots and his feats in aviation.
In 1933, the Lindbergh Range (Danish: Lindbergh Fjelde) in Greenland was named after him by Danish Arctic explorer Lauge Koch following aerial surveys made during the 1931–1934 Three-year Expedition to East Greenland.
In St. Louis County, Missouri, a school district, high school and highway are named for Lindbergh, and he has a star on the St. Louis Walk of Fame.
In 1937, a transatlantic race was proposed to commemorate the tenth anniversary of Lindbergh's flight to Paris, though it was eventually modified to a different course of similar length and run as the 1937 Istres–Damascus–Paris Air Race.
He was inducted into the National Aviation Hall of Fame in 1967.
The Royal Air Force Museum in London minted a medal with his image as part of a 50 medal set called The History of Man in Flight in 1972.
The original Lindbergh residence in Little Falls, Minnesota, is maintained as a museum, and is listed as a National Historic Landmark.
In February 2002, the Medical University of South Carolina at Charleston, as part of the celebrations of Lindbergh's 100th birthday, established the Lindbergh-Carrel Prize, given to major contributors to the "development of perfusion and bioreactor technologies for organ preservation and growth". M. E. DeBakey and nine other scientists received the prize, a bronze statuette created expressly for the event by the Italian artist C. Zoli and named "Elisabeth", after Elisabeth Morrow, the sister of Lindbergh's wife Anne Morrow, who died of heart disease. Lindbergh's disappointment that contemporary medical technology could not provide an artificial heart pump that would have allowed heart surgery on Elisabeth led to his first contact with Carrel.
=== Awards and decorations ===
Lindbergh received many awards, medals and decorations, most of which were later donated to the Missouri Historical Society and are on display at the Jefferson Memorial, now part of the Missouri History Museum in Forest Park in St. Louis, Missouri.
United States government
Medal of Honor (December 14, 1927)
Distinguished Flying Cross (June 11, 1927)
Langley Gold Medal from the Smithsonian Institution (1927)
Congressional Gold Medal (Approved May 4, 1928, presented August 15, 1930)
Other U.S. awards
Orteig Prize (1927, see details above)
Harmon Trophy (1927)
Hubbard Medal (1927)
Honorary Scout (Boy Scouts of America, 1927)
New York State Medal for Valor (June 13, 1927)
Silver Buffalo Award (Boy Scouts of America, 1928)
Wright Brothers Memorial Trophy (1949)
Daniel Guggenheim Medal (1953)
Pulitzer Prize (1954)
Non-U.S. awards
Commander of the Legion of Honor (France, initial award May 23, 1927, promoted to Commandeur October 25, 1930)
Knight of the Order of Leopold (Belgium, May 28, 1927)
Air Force Cross (United Kingdom, May 31, 1927)
Gold Medal "Plus Ultra" (Spain, June 1, 1927)
Silver Cross of Boyacá (Colombia, January 28, 1928)
Order of the Liberator, Commander (Venezuela, January 29, 1928)
Order of Carlos Manuel de Céspedes, Grand Cross (Cuba, February 10, 1928)
Order of the Rising Sun, Third Class (Japan, September 9, 1931)
Aeronautical Virtue Order (Romania, January 13, 1933)
Order of the German Eagle with Star (Nazi Germany, October 19, 1938)
Order of the Golden Heart (Philippines, May 14, 1971)
Fédération Aéronautique Internationale FAI Gold Medal (1927)
ICAO Edward Warner Award (1975)
Royal Swedish Aero Club's Gold Plaque (1927)
=== Medal of Honor ===
Rank and organization: Captain, U.S. Army Air Corps Reserve. Place and date: From New York City to Paris, France, May 20–21, 1927. Entered service at: Little Falls, Minn. Born: February 4, 1902, Detroit, Mich. G.O. No.: 5, W.D., 1928; Act of Congress December 14, 1927.
Citation
For displaying heroic courage and skill as a navigator, at the risk of his life, by his nonstop flight in his airplane, the "Spirit of St. Louis", from New York City to Paris, France, 20–21 May 1927, by which Capt. Lindbergh not only achieved the greatest individual triumph of any American citizen but demonstrated that travel across the ocean by aircraft was possible.
=== Other recognition ===
1934–1939 Trustee of the Carnegie Institution
1965 International Aerospace Hall of Fame Inductee
1991 Scandinavian-American Hall of Fame Inductee
Ranked No. 3 on Flying magazine's 51 Heroes of Aviation
Member of the Independent Order of Odd Fellows
== Writings ==
In addition to "WE" and The Spirit of St. Louis, Lindbergh wrote prolifically over the years on other topics, including science, technology, nationalism, war, materialism, and values. Included among those writings were five other books: The Culture of Organs (with Dr. Alexis Carrel) (1938), Of Flight and Life (1948), The Wartime Journals of Charles A. Lindbergh (1970), Boyhood on the Upper Mississippi (1972), and his unfinished Autobiography of Values (posthumous, 1978).
== In popular culture ==
=== Literature ===
In addition to being the subject of many biographies, such as A. Scott Berg's 1998 award-winning bestseller Lindbergh, Lindbergh influenced or was the model for characters in a variety of works of fiction. Shortly after he made his famous flight, the Stratemeyer Syndicate began publishing a series of books for juvenile readers called the Ted Scott Flying Stories (1927–1943), which were written by a number of authors using the nom de plume "Franklin W. Dixon", in which the pilot hero was closely modeled after Lindbergh. Ted Scott duplicated the solo flight to Paris in the series' first volume, Over the Ocean to Paris (1927). Another reference to Lindbergh appears in the Agatha Christie novel Murder on the Orient Express (1934) and its 1974 film adaptation, both of which begin with a fictionalized depiction of the Lindbergh kidnapping.
There have been several alternate history novels depicting Lindbergh's alleged Nazi sympathies and non-interventionist views during the first half of World War II. In Daniel Easterman's K is for Killing (1997), a fictional Lindbergh becomes president of a fascist United States. Philip Roth's novel The Plot Against America (2004) explores an alternate history in which Franklin Delano Roosevelt is defeated in the 1940 presidential election by Lindbergh, who allies the United States with Nazi Germany.
The Robert Harris novel Fatherland (1992) explores an alternate history where the Nazis won the war, the United States still defeats Japan, Adolf Hitler and President Joseph Kennedy negotiate peace terms, and Lindbergh is the US Ambassador to Germany. The Jo Walton novel Farthing (2006) explores an alternate history where the United Kingdom made peace with Nazi Germany in 1941, Japan never attacked Pearl Harbor, thus the United States never got involved with the war, and Lindbergh is president and is seeking closer economic ties with the Greater East Asian Co-Prosperity Sphere.
=== Film and television ===
Lindbergh has been the subject of numerous documentary films, including Charles A. Lindbergh (1927), a UK documentary by De Forest Phonofilm; 40,000 Miles with Lindbergh (1928), featuring Lindbergh himself; and The American Experience—Lindbergh: The Shocking, Turbulent Life of America's Lone Eagle (1988).
The 1942 MGM picture Keeper of the Flame, starring Katharine Hepburn and Spencer Tracy, features Hepburn as the widow of a "Lindbergh-like" national hero.
In the major motion picture The Spirit of St. Louis (1957), directed by Billy Wilder, Lindbergh was played by James Stewart, an admirer of Lindbergh and himself a World War II aviator. The film largely centers around Lindbergh's record-breaking 1927 flight. Prior to the casting of Stewart, John Kerr declined to play the role because of Lindbergh's alleged pro-Nazi beliefs.
In 1976, Buzz Kulik's TV movie The Lindbergh Kidnapping Case, with Anthony Hopkins as Richard Hauptmann, premiered on NBC.
Lindbergh was the subject of director Orson Welles's final film project, The Spirit of Charles Lindbergh (1984), in which Welles speaks of the human spirit while quoting Lindbergh's journal. Although never intended for public viewing, a brief clip can be seen at the end of Vassili Slovic's 1995 documentary Orson Welles: The One-Man Band.
The 2020 HBO alternate history miniseries The Plot Against America, based on the Philip Roth book of the same name, features actor Ben Cole as a fictionalized President Lindbergh following his defeat of Roosevelt in 1940. The series portrays Lindbergh as a xenophobic populist with strong ties to Nazi Germany.
Charles Lindbergh "Chuck" McGill, a fictional character in the TV series Better Call Saul (2015–2022), was named after Lindbergh.
=== Music ===
Within days of the flight, dozens of Tin Pan Alley publishers rushed a variety of popular songs into print celebrating Lindbergh and the Spirit of St. Louis, including "Lindbergh (The Eagle of the U.S.A.)" by Howard Johnson and Al Sherman, and "Lucky Lindy!" by L. Wolfe Gilbert and Abel Baer. In the two-year period following Lindbergh's flight, the U.S. Copyright Office recorded three hundred applications for Lindbergh songs. Tony Randall revived "Lucky Lindy" on Vo Vo De Oh Doe (1967), an album of Jazz Age and Depression-era songs.
While the exact origin of the name of the Lindy Hop is disputed, it is widely acknowledged that Lindbergh's 1927 flight helped to popularize the dance: soon after "Lucky Lindy" "hopped" the Atlantic, the Lindy Hop became a trendy, fashionable dance, and songs referring to the "Lindbergh Hop" were quickly released.
In 1929, Bertolt Brecht wrote a cantata called Der Lindberghflug (Lindbergh's Flight) with music by Kurt Weill and Paul Hindemith. Because of Lindbergh's apparent Nazi sympathies, in 1950 Brecht removed all direct references to Lindbergh and renamed the piece Der Ozeanflug (The Flight Across the Ocean).
In the early 1940s, Woody Guthrie wrote "Lindbergh" (or "Mister Charlie Lindbergh"), which criticizes Lindbergh's involvement with the America First Committee and his suspected sympathy for Nazi Germany.
=== Postage stamps ===
Lindbergh and the Spirit have been honored by a variety of world postage stamps over the last eight decades, including three issued by the United States. Less than three weeks after the flight, the U.S. Post Office Department issued a 10-cent "Lindbergh Air Mail" stamp on June 11, 1927, with engraved illustrations of both the Spirit of St. Louis and a map of its route from New York to Paris. This was also the first U.S. stamp to bear the name of a living person. A 13-cent commemorative stamp depicting the Spirit over the Atlantic Ocean was issued on May 20, 1977, the 50th anniversary of the flight from Roosevelt Field. On May 28, 1998, a 32¢ stamp with the legend "Lindbergh Flies Atlantic" depicting Lindbergh and the Spirit was issued as part of the Celebrate the Century stamp sheet series.
=== Other ===
During World War II, Lindbergh was a frequent target of Dr. Seuss's first political cartoons, published in the New York magazine PM, in which Seuss criticized Lindbergh's isolationism, antisemitism, and supposed Nazi sympathies.
Lindbergh's Spirit of St. Louis is featured in the opening sequence of Star Trek: Enterprise (2001–2005).
St. Louis area–based GoJet Airlines uses the callsign "Lindbergh" after Charles Lindbergh.
The aeronautical themed Hotel Charles Lindbergh at German theme park Phantasialand was named after Lindbergh.
== Further reading ==
=== Articles ===
Brody, Richard (February 1, 2017). "The Frightening Lessons of Philip Roth's 'The Plot Against America'". The New Yorker.
Hampton, Dan (May 19, 2017). "The World Over Which Charles Lindbergh Flew". TIME.
Singer, Saul Jay (March 6, 2019). "The Anti-Semitism of Charles Lindbergh". Jewish Press.
Steiger, William A. (1954). "Lindbergh Flies Air Mail from Springfield". Journal of the Illinois State Historical Society 47(2): 133–148.
Thomas, Louisa (July 24, 2016). "America First, for Charles Lindbergh and Donald Trump". The New Yorker.
=== Books ===
Bak, Richard (2011). The Big Jump: Lindbergh and the Great Atlantic Air Race. Wiley. ISBN 978-0471477525.
Duffy, James P. (2010). Lindbergh vs. Roosevelt: The Rivalry That Divided America. Regnery Publishing. ISBN 978-1596986015.
Gehrz, Christopher (2021). Charles Lindbergh: A Religious Biography of America's Most Infamous Pilot. Wm. B. Eerdmans Publishing.
Groom, Winston (2013). The Aviators: Eddie Rickenbacker, Jimmy Doolittle, Charles Lindbergh, and the Epic Age of Flight. National Geographic. ISBN 978-1426211560.
Jackson, Joe (2012). Atlantic Fever: Lindbergh, His Competitors, and the Race to Cross the Atlantic. Macmillan. ISBN 978-0374106751.
Kessner, Thomas (2010). The Flight of the Century: Charles Lindbergh and the Rise of American Aviation. Oxford University Press. ISBN 978-0-19-532019-0.
Lindbergh, Anne Morrow (1966). North To The Orient. Mariner Books. ISBN 978-0156671408.
Lindbergh, Reeve (1999). Under A Wing: A Memoir. Delta. ISBN 978-1439148839.
Milton, Joyce (1993). Loss of Eden: A Biography of Charles and Anne Morrow Lindbergh. HarperCollins. ISBN 978-0060165031.
Mosley, Leonard (1976). Lindbergh: A Biography. Doubleday. ISBN 978-0-385-09578-5.
Wallace, Max (2005). The American Axis: Henry Ford, Charles Lindbergh and the Rise of the Third Reich. Macmillan. ISBN 978-0-312-33531-1.
Winters, Kathleen (2006). Anne Morrow Lindbergh: First Lady of the Air. Macmillan. ISBN 978-1-4039-6932-3.
=== Series ===
"Lindbergh: Daredevil". PBS.
"Lindbergh: The Spirit of St. Louis". PBS.
"Lindbergh: Transatlantic Flight, New York to Paris". PBS.
"Lindbergh: The Kidnapping". PBS.
"Lindbergh: Fallen Hero". PBS.
== External links ==
Charles A. Lindbergh in MNopedia, the Minnesota Encyclopedia
Lindbergh's first solo flight Archived May 18, 2013, at the Wayback Machine
FBI History – Famous cases: The Lindbergh kidnapping
FBI Records: The Vault – Charles Lindbergh at fbi.gov
Newspaper clippings about Charles Lindbergh in the 20th Century Press Archives of the ZBW
Finding aids to archival collections:
Morrow-Lindbergh-McIlvaine Family Papers at the Amherst College Archives & Special Collections
Charles Augustus Lindbergh papers (MS 325). Manuscripts and Archives, Yale University Library.
The Lindbergh Family Papers, including some materials of Charles Lindbergh, available for research use at the Minnesota Historical Society
Works by Charles Lindbergh at LibriVox (public domain audiobooks) |
China | China, officially the People's Republic of China (PRC), is a country in East Asia. With a population exceeding 1.4 billion, it is the second-most populous country after India, representing 17.4% of the world population. China spans the equivalent of five time zones and borders fourteen countries by land across an area of nearly 9.6 million square kilometers (3,700,000 sq mi), making it the third-largest country by land area. The country is divided into 33 province-level divisions: 22 provinces, 5 autonomous regions, 4 municipalities, and 2 semi-autonomous special administrative regions. Beijing is the country's capital, while Shanghai is its most populous city by urban area and largest financial center.
Considered one of six cradles of civilization, China saw the first human inhabitants in the region arriving during the Paleolithic. By the late 2nd millennium BCE, the earliest dynastic states had emerged in the Yellow River basin. The 8th–3rd centuries BCE saw a breakdown in the authority of the Zhou dynasty, accompanied by the emergence of administrative and military techniques, literature, philosophy, and historiography. In 221 BCE, China was unified under an emperor, ushering in more than two millennia of imperial dynasties including the Qin, Han, Tang, Yuan, Ming, and Qing. With the invention of gunpowder and paper, the establishment of the Silk Road, and the building of the Great Wall, Chinese culture flourished and has heavily influenced both its neighbors and lands further afield. However, China began to cede parts of the country in the late 19th century to various European powers by a series of unequal treaties. After decades of Qing China on the decline, the 1911 Revolution overthrew the Qing dynasty and the monarchy and the Republic of China (ROC) was established the following year.
The country under the nascent Beiyang government was unstable and ultimately fragmented during the Warlord Era, which was ended by the Northern Expedition conducted by the Kuomintang (KMT) to reunify the country. The Chinese Civil War began in 1927, when KMT forces purged members of the rival Chinese Communist Party (CCP), who proceeded to engage in sporadic fighting against the KMT-led Nationalist government. Following the country's invasion by the Empire of Japan in 1937, the CCP and KMT formed the Second United Front to fight the Japanese. The Second Sino-Japanese War eventually ended in a Chinese victory; however, the CCP and the KMT resumed their civil war as soon as it ended. In 1949, the resurgent Communists established control over most of the country, proclaiming the People's Republic of China and forcing the Nationalist government to retreat to the island of Taiwan. The country was split, with both sides claiming to be the sole legitimate government of China. Following the implementation of land reforms, further attempts by the PRC to realize communism failed: the Great Leap Forward was largely responsible for the Great Chinese Famine, in which tens of millions died, and the subsequent Cultural Revolution was a period of social turmoil and persecution characterized by Maoist populism. Following the Sino-Soviet split, the 1972 Shanghai Communiqué precipitated the normalization of relations with the United States. Economic reforms begun in 1978 moved the country away from a socialist planned economy towards an increasingly capitalist market economy, spurring significant economic growth. A movement for increased democracy and liberalization stalled after the Tiananmen Square protests and massacre in 1989.
China is a unitary communist state led by the CCP that self-designates as a socialist state. It is one of the five permanent members of the UN Security Council; the UN representative for China was changed from the ROC to the PRC in 1971. It is a founding member of several multilateral and regional organizations such as the AIIB, the Silk Road Fund, the New Development Bank, and the RCEP. It is a member of BRICS, the G20, APEC, the SCO, and the East Asia Summit. Making up around one-fifth of the world economy, the Chinese economy is the world's largest by PPP-adjusted GDP and the second-largest by nominal GDP. China is the second-wealthiest country, albeit ranking poorly in measures of democracy, human rights and religious freedom. The country has been one of the fastest-growing major economies and is the world's largest manufacturer and exporter, as well as the second-largest importer. China is a nuclear-weapon state with the world's largest standing army by military personnel and the second-largest defense budget. It is a great power, and has been described as an emerging superpower. China is known for its cuisine and culture and, as a megadiverse country, has 59 UNESCO World Heritage Sites, the second-highest number of any country.
== Etymology ==
The word "China" has been used in English since the 16th century; however, it was not used by the Chinese themselves during this period. Its origin has been traced through Portuguese, Malay, and Persian back to the Sanskrit word Cīna, used in ancient India. "China" appears in Richard Eden's 1555 translation of the 1516 journal of the Portuguese explorer Duarte Barbosa. Barbosa's usage was derived from Persian Chīn (چین), which in turn derived from Sanskrit Cīna (चीन). The origin of the Sanskrit word is a matter of debate. Cīna was first used in early Hindu scripture, including the Mahabharata (5th century BCE) and the Laws of Manu (2nd century BCE). In 1655, Martino Martini suggested that the word China is derived ultimately from the name of the Qin dynasty (221–206 BCE). Although use in Indian sources precedes this dynasty, this derivation is still given in various sources. Alternative suggestions include the names for Yelang and the Jing or Chu state.
The official name of the modern state is the "People's Republic of China" (simplified Chinese: 中华人民共和国; traditional Chinese: 中華人民共和國; pinyin: Zhōnghuá rénmín gònghéguó). The shorter form is "China" (中国; 中國; Zhōngguó), from zhōng ('central') and guó ('state'), a term which developed under the Western Zhou dynasty in reference to its royal demesne. It was used in official documents as a synonym for the state under the Qing. The name Zhongguo is also translated as 'Middle Kingdom' in English. China is sometimes referred to as mainland China or "the Mainland" when distinguishing it from the Republic of China or the PRC's Special Administrative Regions.
== History ==
=== Prehistory ===
Archaeological evidence suggests that early hominids inhabited China 2.25 million years ago. The hominid fossils of Peking Man, a Homo erectus who used fire, have been dated to between 680,000 and 780,000 years ago. The fossilized teeth of Homo sapiens (dated to 125,000–80,000 years ago) have been discovered in Fuyan Cave. Chinese proto-writing existed in Jiahu around 6600 BCE, at Damaidi around 6000 BCE, Dadiwan from 5800 to 5400 BCE, and Banpo dating from the 5th millennium BCE. Some scholars have suggested that the Jiahu symbols (7th millennium BCE) constituted the earliest Chinese writing system.
=== Early dynastic rule ===
According to traditional Chinese historiography, the Xia dynasty was established during the late 3rd millennium BCE, marking the beginning of the dynastic cycle that was understood to underpin China's entire political history. In the modern era, the Xia's historicity came under increasing scrutiny, in part due to the earliest known attestation of the Xia being written millennia after the date given for their collapse. In 1958, archaeologists discovered sites belonging to the Erlitou culture that existed during the early Bronze Age; they have since been characterized as the remains of the historical Xia, but this conception is often rejected. The Shang dynasty that traditionally succeeded the Xia is the earliest for which there are both contemporary written records and undisputed archaeological evidence. The Shang ruled much of the Yellow River valley until the 11th century BCE, with the earliest hard evidence dated c. 1300 BCE. The oracle bone script, attested from c. 1250 BCE but generally assumed to be considerably older, represents the oldest known form of written Chinese, and is the direct ancestor of modern Chinese characters.
The Shang were overthrown by the Zhou, who ruled between the 11th and 5th centuries BCE, though the centralized authority of Son of Heaven was slowly eroded by fengjian lords. Some principalities eventually emerged from the weakened Zhou and continually waged war with each other during the 300-year Spring and Autumn period. By the time of the Warring States period of the 5th–3rd centuries BCE, there were seven major powerful states left.
=== Imperial China ===
==== Qin and Han ====
The Warring States period ended in 221 BCE after the state of Qin conquered the other six states, reunited China and established the dominant order of autocracy. King Zheng of Qin proclaimed himself the Emperor of the Qin dynasty, becoming the first emperor of a unified China. He enacted Qin's legalist reforms, notably the standardization of Chinese characters, measurements, road widths, and currency. His dynasty also conquered the Yue tribes in Guangxi, Guangdong, and Northern Vietnam. The Qin dynasty lasted only fifteen years, falling soon after the First Emperor's death.
Following widespread revolts during which the imperial library was burned, the Han dynasty emerged to rule China between 206 BCE and 220 CE, creating a cultural identity among its populace still remembered in the ethnonym of the modern Han Chinese. The Han expanded the empire's territory considerably, with military campaigns reaching Central Asia, Mongolia, Korea, and Yunnan, and the recovery of Guangdong and northern Vietnam from Nanyue. Han involvement in Central Asia and Sogdia helped establish the land route of the Silk Road, replacing the earlier path over the Himalayas to India. Han China gradually became the largest economy of the ancient world. Despite the Han's initial decentralization and the official abandonment of the Qin philosophy of Legalism in favor of Confucianism, Qin's legalist institutions and policies continued to be employed by the Han government and its successors.
==== Three Kingdoms, Jin, Northern and Southern dynasties ====
After the end of the Han dynasty, a period of strife known as the Three Kingdoms followed, at the end of which Wei was swiftly overthrown by the Jin dynasty. The Jin fell into civil war upon the ascension of a developmentally disabled emperor; the Five Barbarians then rebelled and ruled northern China as the Sixteen States. The Xianbei unified them as the Northern Wei, whose Emperor Xiaowen reversed his predecessors' apartheid policies and enforced a drastic sinification on his subjects. In the south, the general Liu Yu secured the abdication of the Jin in favor of the Liu Song. The various successors of these states became known as the Northern and Southern dynasties, with the two areas finally reunited by the Sui in 581.
==== Sui, Tang and Song ====
The Sui restored Han Chinese rule throughout China, reformed its agriculture, economy and imperial examination system, constructed the Grand Canal, and patronized Buddhism. However, the dynasty fell quickly when its conscription for public works and a failed war in northern Korea provoked widespread unrest.
Under the succeeding Tang and Song dynasties, Chinese economy, technology, and culture entered a golden age. The Tang dynasty retained control of the Western Regions and the Silk Road, which brought traders to as far as Mesopotamia and the Horn of Africa, and made the capital Chang'an a cosmopolitan urban center. However, it was devastated and weakened by the An Lushan rebellion in the 8th century. In 907, the Tang disintegrated completely when the local military governors became ungovernable. The Song dynasty ended the separatist situation in 960, leading to a balance of power between the Song and the Liao dynasty. The Song was the first government in world history to issue paper money and the first Chinese polity to establish a permanent navy which was supported by the developed shipbuilding industry along with the sea trade.
Between the 10th and 11th century CE, the population of China doubled to around 100 million people, mostly because of the expansion of rice cultivation in central and southern China, and the production of abundant food surpluses. The Song dynasty also saw a revival of Confucianism, in response to the growth of Buddhism during the Tang, and a flourishing of philosophy and the arts, as landscape art and porcelain were brought to new levels of complexity. However, the military weakness of the Song army was observed by the Jin dynasty. In 1127, Emperor Emeritus Huizong, Emperor Qinzong of Song and the capital Bianjing were captured during the Jin–Song wars. The remnants of the Song retreated to southern China and reestablished the Song at Jiankang.
==== Yuan ====
The Mongol conquest of China began in 1205 with the campaigns against Western Xia by Genghis Khan, who also invaded Jin territories. In 1271, the Mongol leader Kublai Khan established the Yuan dynasty, which conquered the last remnant of the Song dynasty in 1279. Before the Mongol invasion, the population of Song China was 120 million citizens; this was reduced to 60 million by the time of the census in 1300. A peasant named Zhu Yuanzhang overthrew the Yuan in 1368 and founded the Ming dynasty as the Hongwu Emperor. Under the Ming dynasty, China enjoyed another golden age, developing one of the strongest navies in the world and a rich and prosperous economy amid a flourishing of art and culture. It was during this period that admiral Zheng He led the Ming treasure voyages throughout the Indian Ocean, reaching as far as East Africa.
==== Ming ====
In the early Ming dynasty, China's capital was moved from Nanjing to Beijing. With the budding of capitalism, philosophers such as Wang Yangming critiqued and expanded Neo-Confucianism with concepts of individualism and equality of the four occupations. The scholar-official stratum became a supporting force of industry and commerce in the tax boycott movements, which, together with the famines, the defense against the Japanese invasions of Korea (1592–1598), and Later Jin incursions, led to an exhausted treasury. In 1644, Beijing was captured by a coalition of peasant rebel forces led by Li Zicheng. The Chongzhen Emperor committed suicide when the city fell. The Manchu Qing dynasty, then allied with Ming dynasty general Wu Sangui, overthrew Li's short-lived Shun dynasty and subsequently seized control of Beijing, which became the new capital of the Qing dynasty.
==== Qing ====
The Qing dynasty, which lasted from 1644 until 1912, was the last imperial dynasty of China. The Ming–Qing transition (1618–1683) cost 25 million lives, but the Qing appeared to have restored China's imperial power and inaugurated another flowering of the arts. After the Southern Ming ended, the further conquest of the Dzungar Khanate added Mongolia, Tibet and Xinjiang to the empire. Meanwhile, China's population growth resumed and soon began to accelerate. It is commonly agreed that pre-modern China's population experienced two growth spurts, one during the Northern Song period (960–1127) and the other during the Qing period (around 1700–1830). By the High Qing era China was possibly the most commercialized country in the world, and imperial China experienced a second commercial revolution by the end of the 18th century. On the other hand, the centralized autocracy was strengthened in part to suppress anti-Qing sentiment, through policies of valuing agriculture and restraining commerce, such as the Haijin sea ban during the early Qing period, and through ideological control as represented by the literary inquisition, causing some social and technological stagnation.
=== Fall of the Qing dynasty ===
In the mid-19th century, the Opium Wars with Britain and France forced China to pay compensation, open treaty ports, allow extraterritoriality for foreign nationals, and cede Hong Kong to the British under the 1842 Treaty of Nanking, the first of what have been termed the unequal treaties. The First Sino-Japanese War (1894–1895) resulted in Qing China's loss of influence in the Korean Peninsula, as well as the cession of Taiwan to Japan. The Qing dynasty also began experiencing internal unrest in which tens of millions of people died, especially in the White Lotus Rebellion, the failed Taiping Rebellion that ravaged southern China in the 1850s and 1860s and the Dungan Revolt (1862–1877) in the northwest. The initial success of the Self-Strengthening Movement of the 1860s was frustrated by a series of military defeats in the 1880s and 1890s.
In the 19th century, the great Chinese diaspora began. Losses due to emigration were added to by conflicts and catastrophes such as the Northern Chinese Famine of 1876–1879, in which between 9 and 13 million people died. The Guangxu Emperor drafted a reform plan in 1898 to establish a modern constitutional monarchy, but these plans were thwarted by the Empress Dowager Cixi. The ill-fated anti-foreign Boxer Rebellion of 1899–1901 further weakened the dynasty. Although Cixi sponsored a program of reforms known as the late Qing reforms, the Xinhai Revolution of 1911–1912 ended the Qing dynasty and established the Republic of China. Puyi, the last Emperor, abdicated in 1912.
=== Establishment of the Republic and World War II ===
On 1 January 1912, the Republic of China was established, and Sun Yat-sen of the Kuomintang (KMT) was proclaimed provisional president. In March 1912, the presidency was given to Yuan Shikai, a former Qing general who in 1915 proclaimed himself Emperor of China. In the face of popular condemnation and opposition from his own Beiyang Army, he was forced to abdicate and re-establish the republic in 1916. After Yuan Shikai's death in 1916, China was politically fragmented. Its Beijing-based government was internationally recognized but virtually powerless; regional warlords controlled most of its territory. During this period, China participated in World War I and saw a far-reaching popular uprising (the May Fourth Movement).
In the late 1920s, the Kuomintang under Chiang Kai-shek was able to reunify the country under its own control with a series of deft military and political maneuverings known collectively as the Northern Expedition. The Kuomintang moved the nation's capital to Nanjing and implemented "political tutelage", an intermediate stage of political development outlined in Sun Yat-sen's Three Principles of the People program for transforming China into a modern democratic state. The Kuomintang briefly allied with the Chinese Communist Party (CCP) during the Northern Expedition, though the alliance broke down in 1927 after Chiang violently suppressed the CCP and other leftists in Shanghai, marking the beginning of the Chinese Civil War. The CCP declared areas of the country as the Chinese Soviet Republic (Jiangxi Soviet) in November 1931 in Ruijin, Jiangxi. The Jiangxi Soviet was wiped out by the KMT armies in 1934, leading the CCP to initiate the Long March and relocate to Yan'an in Shaanxi. It would remain the communists' base until major combat in the Chinese Civil War ended in 1949.
In 1931, Japan invaded and occupied Manchuria. Japan invaded other parts of China in 1937, precipitating the Second Sino-Japanese War (1937–1945), a theater of World War II. The war forced an uneasy alliance between the Kuomintang and the CCP. Japanese forces committed numerous war atrocities against the civilian population; as many as 20 million Chinese civilians died. An estimated 40,000 to 300,000 Chinese were massacred in Nanjing alone during the Japanese occupation. China, along with the UK, the United States, and the Soviet Union, was recognized as one of the Allied "Big Four" in the Declaration by United Nations. Along with the other three great powers, China was one of the four major Allies of World War II, and was later considered one of the primary victors in the war. After the surrender of Japan in 1945, Taiwan, along with the Penghu islands, was handed over to Chinese control; however, the validity of this handover is controversial.
=== People's Republic ===
China emerged victorious but war-ravaged and financially drained. The continued distrust between the Kuomintang and the Communists led to the resumption of civil war. Constitutional rule was established in 1947, but because of the ongoing unrest, many provisions of the ROC constitution were never implemented in mainland China. Afterwards, the CCP took control of most of mainland China, and the ROC government retreated offshore to Taiwan.
On 1 October 1949, CCP Chairman Mao Zedong formally proclaimed the People's Republic of China in Tiananmen Square, Beijing. In 1950, the PRC captured Hainan from the ROC and annexed Tibet. However, remaining Kuomintang forces continued to wage an insurgency in western China throughout the 1950s. The CCP consolidated its popularity among the peasants through the Land Reform Movement, which included the state-tolerated executions of between 1 and 2 million landlords by peasants and former tenants. Though the PRC initially allied closely with the Soviet Union, the relations between the two communist nations gradually deteriorated, leading China to develop an independent industrial system and its own nuclear weapons.
The Chinese population increased from 550 million in 1950 to 900 million in 1974. However, the Great Leap Forward, an idealistic massive industrialization project, resulted in an estimated 15 to 55 million deaths between 1959 and 1961, mostly from starvation. In 1964, China detonated its first atomic bomb. In 1966, Mao and his allies launched the Cultural Revolution, sparking a decade of political recrimination and social upheaval that lasted until Mao's death in 1976. In October 1971, the PRC replaced the ROC in the United Nations, and took its seat as a permanent member of the Security Council.
=== Reforms and contemporary history ===
After Mao's death, the Gang of Four were arrested by Hua Guofeng and held responsible for the Cultural Revolution. The Cultural Revolution was rebuked, and millions were rehabilitated. Deng Xiaoping took power in 1978 and, together with the "Eight Elders", the most senior and influential members of the party, instituted large-scale political and economic reforms. The government loosened its control and the communes were gradually disbanded. Agricultural collectivization was dismantled and farmlands privatized. While foreign trade became a major focus, special economic zones (SEZs) were created. Inefficient state-owned enterprises (SOEs) were restructured and some were closed. This marked China's transition away from a planned economy. China adopted its current constitution on 4 December 1982.
In 1989, protests erupted in Tiananmen Square and then throughout the nation. Jiang Zemin was elevated to CCP general secretary, becoming the paramount leader. Jiang continued economic reforms, closing many SOEs and trimming down the "iron rice bowl" of life-tenure positions. China's economy grew sevenfold during this time. British Hong Kong and Portuguese Macau returned to China in 1997 and 1999, respectively, as special administrative regions under the principle of one country, two systems. The country joined the World Trade Organization in 2001.
At the 16th CCP National Congress in 2002, Hu Jintao succeeded Jiang as general secretary. Under Hu, China maintained its high rate of economic growth, overtaking the United Kingdom, France, Germany and Japan to become the world's second-largest economy. However, the growth also severely strained the country's resources and environment, and caused major social displacement. Xi Jinping succeeded Hu as paramount leader at the 18th CCP National Congress in 2012. Shortly after his ascension to power, Xi launched a vast anti-corruption crackdown that had prosecuted more than 2 million officials by 2022. During his tenure, Xi has consolidated power to a degree unseen since the initiation of economic and political reforms.
== Geography ==
China's landscape is vast and diverse, ranging from the Gobi and Taklamakan Deserts in the arid north to the subtropical forests in the wetter south. The Himalaya, Karakoram, Pamir and Tian Shan mountain ranges separate China from much of South and Central Asia. The Yangtze and Yellow Rivers, the third- and sixth-longest in the world, respectively, run from the Tibetan Plateau to the densely populated eastern seaboard. China's coastline along the Pacific Ocean is 14,500 km (9,000 mi) long and is bounded by the Bohai, Yellow, East China and South China seas. China connects to the Eurasian Steppe through its border with Kazakhstan.
The territory of China lies between latitudes 18° and 54° N, and longitudes 73° and 135° E. The geographical center of China is marked by the Center of the Country Monument at 35°50′40.9″N 103°27′7.5″E. China's landscapes vary significantly across its vast territory. In the east, along the shores of the Yellow Sea and the East China Sea, there are extensive and densely populated alluvial plains, while on the edges of the Inner Mongolian plateau in the north, broad grasslands predominate. Southern China is dominated by hills and low mountain ranges, while the central-east hosts the deltas of China's two major rivers, the Yellow River and the Yangtze River. Other major rivers include the Xi, Mekong, Brahmaputra and Amur. To the west sit major mountain ranges, most notably the Himalayas. High plateaus feature among the more arid landscapes of the north, such as the Taklamakan and the Gobi Desert. The world's highest point, Mount Everest (8,848 m), lies on the Sino-Nepalese border. The country's lowest point, and the world's third-lowest, is the dried lake bed of Ayding Lake (−154 m) in the Turpan Depression.
=== Climate ===
China's climate is mainly dominated by dry seasons and wet monsoons, which lead to pronounced temperature differences between winter and summer. In the winter, northern winds coming from high-latitude areas are cold and dry; in summer, southern winds from coastal areas at lower latitudes are warm and moist.
A major environmental issue in China is the continued expansion of its deserts, particularly the Gobi Desert. Although barrier tree lines planted since the 1970s have reduced the frequency of sandstorms, prolonged drought and poor agricultural practices have resulted in dust storms plaguing northern China each spring, which then spread to other parts of East Asia, including Japan and Korea. Water quality, erosion, and pollution control have become important issues in China's relations with other countries. Melting glaciers in the Himalayas could potentially lead to water shortages for hundreds of millions of people. According to academics, in order to limit climate change in China to 1.5 °C (2.7 °F), electricity generation from coal in China without carbon capture must be phased out by 2045. With current policies, the GHG emissions of China will probably peak in 2025, and by 2030 they will return to 2022 levels. However, such a pathway would still lead to a temperature rise of about three degrees.
Official government statistics about Chinese agricultural productivity are considered unreliable, due to exaggeration of production at subsidiary government levels. Much of China has a climate very suitable for agriculture and the country has been the world's largest producer of rice, wheat, tomatoes, eggplant, grapes, watermelon, spinach, and many other crops. In 2021, 12 percent of global permanent meadows and pastures belonged to China, as well as 8% of global cropland.
=== Biodiversity ===
China is one of 17 megadiverse countries, lying in two of the world's major biogeographic realms: the Palearctic and the Indomalayan. By one measure, China has over 34,687 species of animals and vascular plants, making it the third-most biodiverse country in the world, after Brazil and Colombia. The country is a party to the Convention on Biological Diversity; its National Biodiversity Strategy and Action Plan was received by the convention in 2010.
China is home to at least 551 species of mammals (the third-highest in the world), 1,221 species of birds (eighth), 424 species of reptiles (seventh) and 333 species of amphibians (seventh). Wildlife in China shares habitat with, and bears acute pressure from, one of the world's largest populations of humans. At least 840 animal species are threatened, vulnerable or in danger of local extinction, due mainly to human activity such as habitat destruction, pollution and poaching for food, fur and traditional Chinese medicine. Endangered wildlife is protected by law, and as of 2005, the country had 2,349 nature reserves, covering a total area of 149.95 million hectares, 15 percent of China's total land area. Most wild animals have been eliminated from the core agricultural regions of east and central China, but they have fared better in the mountainous south and west. The Baiji was confirmed extinct on 12 December 2006.
China has over 32,000 species of vascular plants, and is home to a variety of forest types. Cold coniferous forests predominate in the north of the country, supporting animal species such as moose and Asian black bear, along with over 120 bird species. The understory of moist conifer forests may contain thickets of bamboo. In higher montane stands of juniper and yew, the bamboo is replaced by rhododendrons. Subtropical forests, which predominate in central and southern China, support a high density of plant species including numerous rare endemics. Tropical and seasonal rainforests, though confined to Yunnan and Hainan, contain a quarter of all the animal and plant species found in China. China has over 10,000 recorded species of fungi.
=== Environment ===
Since the early 2000s, China has suffered from environmental deterioration and pollution due to its rapid pace of industrialization. Regulations such as the 1979 Environmental Protection Law are fairly stringent, but they are poorly enforced and frequently disregarded in favor of rapid economic development. China has the second-highest death toll from air pollution, after India, with approximately 1 million deaths. Although China is the highest CO2-emitting country, it emits only 8 tons of CO2 per capita, significantly lower than developed countries such as the United States (16.1), Australia (16.8) and South Korea (13.6). Greenhouse gas emissions by China are the world's largest. The country has significant water pollution problems; only 89.4% of China's national surface water was graded suitable for human consumption by the Ministry of Ecology and Environment in 2023.
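The per-capita comparison above is simple arithmetic: total annual emissions divided by population. A minimal sketch, using rough, assumed round-number figures (not official statistics) chosen only to land near the values cited:

```python
def per_capita_emissions(total_mt_co2: float, population: float) -> float:
    """Return tonnes of CO2 per person, given total annual emissions in megatonnes."""
    return total_mt_co2 * 1e6 / population

# Assumed illustrative figures: ~11,400 Mt CO2 and ~1.41 billion people for China,
# ~5,300 Mt CO2 and ~332 million people for the United States.
china = per_capita_emissions(11_400, 1.41e9)
usa = per_capita_emissions(5_300, 3.32e8)

print(f"China: {china:.1f} t/person, US: {usa:.1f} t/person")
```

A much larger total thus yields a far smaller per-capita figure, which is why both measures appear in the text.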
China has prioritized clamping down on pollution, bringing a significant decrease in air pollution in the 2010s. In 2020, the Chinese government announced its aims for the country to reach its peak emissions levels before 2030, and achieve carbon neutrality by 2060 in line with the Paris Agreement, which, according to Climate Action Tracker, would lower the expected rise in global temperature by 0.2–0.3 degrees – "the biggest single reduction ever estimated by the Climate Action Tracker". According to China's government, the forest coverage of the country grew from 10% of the overall territory in 1949 to 25% in 2024.
China is the world's leading investor in renewable energy and its commercialization, with $546 billion invested in 2022; it is a major manufacturer of renewable energy technologies and invests heavily in local-scale renewable energy projects. Though it has long relied heavily on non-renewable energy sources such as coal, China's adoption of renewable energy has increased significantly in recent years. In 2024, 58.2% of China's electricity came from coal (largest producer in the world), 13.5% from hydroelectric power (largest), 9.8% from wind (largest), 8.3% from solar energy (largest), 4.4% from nuclear energy (second-largest), 3% from natural gas (fifth-largest), and 2.1% from bioenergy (largest); in total, 38% of China's energy came from clean energy sources. Despite its emphasis on renewables, China remains deeply connected to global oil markets and, together with India, was among the largest importers of Russian crude oil in 2022.
=== Political geography ===
China is the second-largest country in the world by land area after Russia, and the third- or fourth-largest country in the world by total area. China's total area is generally stated as being approximately 9,600,000 km2 (3,700,000 sq mi). Specific area figures range from 9,572,900 km2 (3,696,100 sq mi) according to the Encyclopædia Britannica, to 9,596,961 km2 (3,705,407 sq mi) according to the UN Demographic Yearbook, and The World Factbook.
China has the longest combined land border in the world, measuring 22,117 km (13,743 mi), and its coastline spans approximately 14,500 km (9,000 mi) from the mouth of the Yalu River (Amnok River) to the Gulf of Tonkin. China borders 14 nations and covers the bulk of East Asia, bordering Vietnam, Laos, and Myanmar in Southeast Asia; India, Bhutan, Nepal, Pakistan and Afghanistan in South Asia; Tajikistan, Kyrgyzstan and Kazakhstan in Central Asia; and Russia, Mongolia, and North Korea in Inner Asia and Northeast Asia. It is narrowly separated from Bangladesh and Thailand to the southwest and south, and has several maritime neighbors such as Japan, the Philippines, Malaysia, and Indonesia.
China has resolved its land borders with 12 of its 14 neighboring countries, having pursued substantial compromises in most of them. China currently has disputed land borders with India and Bhutan. China is additionally involved in maritime disputes with multiple countries over territory in the East and South China Seas, such as the Senkaku Islands and the entirety of the South China Sea Islands.
== Government and politics ==
The People's Republic of China is a one-party state governed by the Chinese Communist Party (CCP). The CCP describes itself as guided by socialism with Chinese characteristics, which is Marxism adapted to Chinese circumstances. The Chinese constitution states that the PRC "is a socialist state governed by a people's democratic dictatorship that is led by the working class and based on an alliance of workers and peasants"; that the state institutions "shall practice the principle of democratic centralism"; and that "the defining feature of socialism with Chinese characteristics is the leadership of the Chinese Communist Party."
The PRC officially characterizes itself as a democracy—more specifically, a whole-process people's democracy. However, the country is commonly described as an authoritarian one-party state and a dictatorship, with some of the world's heaviest restrictions in many civil areas, most notably against freedom of the press, freedom of assembly, free formation of social organizations, freedom of religion and free access to the Internet. China has consistently been ranked amongst the lowest as an "authoritarian regime" by the Economist Intelligence Unit's Democracy Index, ranking at 145th out of 167 countries in 2024. Other sources suggest that terming China as "authoritarian" does not sufficiently account for the multiple consultation mechanisms that exist in the Chinese governmental system.
=== Chinese Communist Party ===
According to the CCP constitution, its highest body is the National Congress held every five years. The National Congress elects the Central Committee, who then elects the party's Politburo, Politburo Standing Committee and the general secretary (party leader), the top leadership of the country. The general secretary holds ultimate power and authority over party and state and serves as the informal paramount leader. The current general secretary is Xi Jinping, who took office on 15 November 2012. At the local level, the secretary of the CCP committee of a subdivision outranks the local government level; CCP committee secretary of a provincial division outranks the governor while the CCP committee secretary of a city outranks the mayor.
=== Government ===
The government in China is under the sole control of the CCP. The CCP controls appointments in government bodies, with most senior government officials being CCP members.
The National People's Congress (NPC), with nearly 3,000 members, is the highest organ of state power and holds the unified powers of the state, though observers often describe it as a "rubber stamp" body. The NPC meets annually, while the NPC Standing Committee, around 150 members elected from NPC delegates, meets every couple of months. Elections are indirect and not pluralistic, with nominations at all levels being controlled by the CCP. The NPC is dominated by the CCP, with another eight minor parties having nominal representation under the condition of upholding CCP leadership.
The NPC elects the president. The presidency is the ceremonial state representative, not the constitutional head of state. The incumbent president is Xi Jinping, who is also the general secretary of the CCP and the chairman of the Central Military Commission, making him China's paramount leader and supreme commander of the Armed Forces. The premier is the head of government, with Li Qiang being the incumbent. The premier is officially nominated by the president and then elected by the NPC, and has generally been either the second- or third-ranking member of the Politburo Standing Committee (PSC). The premier presides over the State Council, China's cabinet, composed of four vice premiers, state councillors, and the heads of ministries and commissions. The Chinese People's Political Consultative Conference (CPPCC) is a political advisory body that is critical in China's "united front" system, which aims to gather non-CCP voices to support the CCP. Similar to the people's congresses, CPPCCs have subdivisions; the National Committee of the CPPCC is chaired by Wang Huning, the fourth-ranking member of the PSC.
The governance of China is characterized by a high degree of political centralization but significant economic decentralization. Policy instruments or processes are often tested locally before being applied more widely, resulting in policymaking that involves experimentation and feedback. Generally, central government leadership refrains from drafting specific policies, instead using informal networks and site visits to affirm or suggest changes to the direction of local policy experiments or pilot programs. The typical approach is that central government leadership begins drafting formal policies, laws, or regulations only after policy has been developed at local levels.
=== Administrative divisions ===
The PRC is constitutionally a unitary state divided into 23 provinces, five autonomous regions (each with a designated minority group), four direct-administered municipalities—collectively referred to as "mainland China"—as well as the special administrative regions (SARs) of Hong Kong and Macau. The PRC regards the island of Taiwan as its Taiwan Province, Kinmen and Matsu as a part of Fujian Province, and islands the ROC controls in the South China Sea as a part of Hainan Province and Guangdong Province, even though all these territories are governed by the Republic of China (ROC). Geographically, all 31 provincial divisions of mainland China can be grouped into six regions: North China, East China, Southwestern China, South Central China, Northeast China, and Northwestern China.
=== Foreign relations ===
The PRC has diplomatic relations with 179 United Nations member-states and maintains embassies in 174. As of 2024, China has one of the largest diplomatic networks of any country in the world. In 1971, the PRC replaced the ROC as the sole representative of China in the United Nations and as one of the five permanent members of the United Nations Security Council. It is a member of intergovernmental organizations including the G20, the SCO, the BRICS, the East Asia Summit, and the APEC. China is also a former member and leader of the Non-Aligned Movement and still considers itself an advocate for developing countries.
The PRC officially maintains the One China principle: the view that there is only one sovereign state with the name "China"—represented by the PRC—and that Taiwan is part of that China. The unique status of Taiwan has led to countries formally recognizing the PRC to maintain unique "one-China policies" that differ from each other; some countries explicitly recognize the PRC's claim over Taiwan, while others, including the U.S. and Japan, only acknowledge the claim. Chinese officials have protested on numerous occasions when foreign countries have made diplomatic overtures to Taiwan, especially in the matter of armament sales. Most countries have switched recognition from the ROC to the PRC since the latter replaced the former in the UN in 1971.
Much of current Chinese foreign policy is reportedly based on Premier Zhou Enlai's Five Principles of Peaceful Coexistence, as well as the concept of "harmony without uniformity", which encourages diplomatic relations between states despite ideological differences. This policy may have led China to support or maintain close ties with states that are regarded as dangerous and repressive by Western nations, such as Sudan, North Korea and Iran. China's close relationship with Myanmar has involved support for its ruling governments as well as for its ethnic rebel groups, including the Arakan Army. China has a close political, economic and military relationship with Russia, and the two states often vote in unison in the UN Security Council. China's relationship with the United States is complex, and includes deep trade ties but significant political differences.
Since the early 2000s, China has followed a policy of engaging with African nations for trade and bilateral co-operation. It maintains extensive and highly diversified trade links with the European Union, and became its largest trading partner for goods. China is increasing its influence in Central Asia and South Pacific. The country has strong trade ties with ASEAN countries and major South American economies, and is the largest trading partner of Brazil, Chile, Peru, Uruguay, Argentina, and several others.
In 2013, China initiated the Belt and Road Initiative (BRI), a large global infrastructure building initiative with funding on the order of $50–100 billion per year. BRI could be one of the largest development plans in modern history. It expanded significantly over the next six years and, as of April 2020, included 138 countries and 30 international organizations. In addition to intensifying foreign policy relations, the focus is particularly on building efficient transport routes, especially the maritime Silk Road with its connections to East Africa and Europe. However, many loans made under the program are unsustainable, and China has faced a number of calls for debt relief from debtor nations.
=== Military ===
The People's Liberation Army (PLA) is considered one of the world's most powerful militaries and has rapidly modernized in recent decades. Since 2024, it consists of four services: the Ground Force (PLAGF), the Navy (PLAN), the Air Force (PLAAF) and the Rocket Force (PLARF). It also has four independent arms: the Aerospace Force, the Cyberspace Force, the Information Support Force, and the Joint Logistics Support Force, the first three of which were split from the disbanded Strategic Support Force (PLASSF). Its nearly 2.2 million active-duty personnel constitute the largest armed force in the world. The PLA holds the world's third-largest stockpile of nuclear weapons and the world's second-largest navy by tonnage. China's official military budget for 2024 totalled US$229 billion (1.67 trillion yuan), the second-largest in the world, though SIPRI estimates that its real expenditure that year was US$314 billion, making up 12% of global military spending and accounting for 1.7% of the country's GDP. According to SIPRI, its military spending from 2012 to 2021 averaged US$215 billion per year, or 1.7 per cent of GDP, behind only the United States at US$734 billion per year, or 3.6 per cent of GDP. The PLA is commanded by the Central Military Commission (CMC) of the party and the state; though officially two separate organizations, the two CMCs have identical membership except during leadership transition periods and effectively function as one organization. The chairman of the CMC is the commander-in-chief of the PLA.
=== Sociopolitical issues and human rights ===
The situation of human rights in China has attracted significant criticism from foreign governments, foreign press agencies, and non-governmental organizations, alleging widespread civil rights violations such as detention without trial, forced confessions, torture, restrictions of fundamental rights, and excessive use of the death penalty. Since its inception, Freedom House has ranked China as "not free" in its Freedom in the World survey, while Amnesty International has documented significant human rights abuses. The Chinese constitution states that the "fundamental rights" of citizens include freedom of speech, freedom of the press, the right to a fair trial, freedom of religion, universal suffrage, and property rights. However, in practice, these provisions do not afford significant protection against criminal prosecution by the state. China has limited protections regarding LGBT rights.
Although some criticisms of government policies and the ruling CCP are tolerated, censorship of political speech and information are amongst the harshest in the world and routinely used to prevent collective action. China also has the most comprehensive and sophisticated Internet censorship regime in the world, with numerous websites being blocked. The government suppresses popular protests and demonstrations that it considers a potential threat to "social stability". China additionally uses a massive surveillance network of cameras, facial recognition software, sensors, and surveillance of personal technology as a means of social control of persons living in the country.
China is regularly accused of large-scale repression and human rights abuses in Tibet and Xinjiang, where significant numbers of ethnic minorities reside, including violent police crackdowns and religious suppression. Since 2017, the Chinese government has been engaged in a harsh crackdown in Xinjiang, with around one million Uyghurs and other ethnic and religious minorities being detained in internment camps aimed at changing the political thinking of detainees, their identities, and their religious beliefs. According to Western reports, political indoctrination, torture, physical and psychological abuse, forced sterilization, sexual abuse, and forced labor are common in these facilities. According to a 2020 Foreign Policy report, China's treatment of Uyghurs meets the UN definition of genocide, while a separate UN Human Rights Office report said they could potentially meet the definitions for crimes against humanity. The Chinese authorities have also cracked down on dissent in Hong Kong, especially after the passage of a national security law in 2020.
In 2017 and 2020, the Pew Research Center ranked the severity of Chinese government restrictions on religion as being among the world's highest, despite ranking religious-related social hostilities in China as low in severity. The Global Slavery Index estimated that in 2016 more than 3.8 million people (0.25% of the population) were living in "conditions of modern slavery", including victims of human trafficking, forced labor, forced marriage, child labor, and state-imposed forced labor. The state-imposed re-education through labor (laojiao) system was formally abolished in 2013, but it is not clear to what extent its practices have stopped. The much larger reform through labor (laogai) system includes labor prison factories, detention centers, and re-education camps; the Laogai Research Foundation estimated in June 2008 that there were 1,422 of these facilities, though it cautioned that this number was likely an underestimate.
=== Public views of government ===
Political concerns in China include the growing gap between rich and poor and government corruption. Nonetheless, international surveys show that the Chinese public has a high level of satisfaction with its government. These views are generally attributed to the material comforts and security available to large segments of the Chinese populace, as well as the government's attentiveness and responsiveness. According to the World Values Survey (2022), 91% of Chinese respondents have significant confidence in their government. A Harvard University survey published in July 2020 found that citizen satisfaction with the government had increased since 2003, also rating China's government as more effective and capable than ever in the survey's history.
== Economy ==
China has the world's second-largest economy in terms of nominal GDP, and the world's largest in terms of purchasing power parity (PPP). As of 2022, China accounts for around 18% of the global economy by nominal GDP. China is one of the world's fastest-growing major economies, with its economic growth having been almost consistently above 6 percent since the introduction of the reform and opening up policy in 1978. According to the World Bank, China's GDP grew from $150 billion in 1978 to $17.96 trillion by 2022. It ranks 64th by nominal GDP per capita, making it an upper-middle income country. Of the world's 500 largest companies, 135 are headquartered in China. As of 2024, China has the world's second-largest equity markets and futures markets, as well as the third-largest bond market.
China was one of the world's foremost economic powers throughout the arc of East Asian and global history. The country had one of the largest economies in the world for most of the past two millennia, during which it has seen cycles of prosperity and decline. Since economic reforms began in 1978, China has developed into a highly diversified economy and one of the most consequential players in international trade. Major sectors of competitive strength include manufacturing, retail, mining, steel, textiles, automobiles, energy generation, green energy, banking, electronics, telecommunications, real estate, e-commerce, and tourism. China has three out of the ten largest stock exchanges in the world—Shanghai, Hong Kong and Shenzhen—that together have a market capitalization of over $15.9 trillion, as of October 2020. China has three out of the world's ten most competitive financial centers according to the 2024 Global Financial Centres Index—Shanghai, Hong Kong, and Shenzhen.
Modern-day China is often described as an example of state capitalism or party-state capitalism. The state dominates in strategic "pillar" sectors such as energy production and heavy industries, but private enterprise has expanded enormously, with around 30 million private businesses recorded in 2008. According to official statistics, privately owned companies constitute more than 60% of China's GDP.
China has been the world's largest manufacturing nation since 2010, after overtaking the U.S., which had been the largest for the previous hundred years. China has also been the second-largest high-tech manufacturer since 2012, according to the US National Science Foundation. China is the second-largest retail market after the United States. China leads the world in e-commerce, accounting for over 37% of the global market share in 2021. China is the world's leader in electric vehicle consumption and production, manufacturing and buying half of all the plug-in electric cars (BEV and PHEV) in the world as of 2022. China is also the leading producer of batteries for electric vehicles as well as several key raw materials for batteries.
=== Tourism ===
China received 65.7 million international visitors in 2019, and in 2018 was the fourth-most-visited country in the world. It also experiences an enormous volume of domestic tourism; Chinese tourists made an estimated 6 billion domestic trips in 2019. China hosts the world's second-largest number of World Heritage Sites (56) after Italy, and is one of the most popular tourist destinations (first in the Asia-Pacific).
=== Wealth ===
China accounted for 18.6% of the world's total wealth in 2022, the second-highest share in the world after the U.S. China brought more people out of extreme poverty than any other country in history—between 1978 and 2018, China reduced extreme poverty by 800 million. From 1990 to 2018, the proportion of the Chinese population living with an income of less than $1.90 per day (2011 PPP) decreased from 66.3% to 0.3%, the share living with an income of less than $3.20 per day from 90.0% to 2.9%, and the share living with an income of less than $5.50 per day from 98.3% to 17.0%.
From 1978 to 2018, the average standard of living multiplied by a factor of twenty-six. Wages in China have grown significantly in the last 40 years—real (inflation-adjusted) wages grew seven-fold from 1978 to 2007. Per capita incomes have also risen significantly: when the PRC was founded in 1949, per capita income in China was one-fifth of the world average; per capita incomes now roughly equal the world average. China's development is highly uneven; its major cities and coastal areas are far more prosperous than its rural and interior regions. It has a high level of economic inequality, which increased quickly after the economic reforms. Income inequality decreased in the 2010s, and China's Gini coefficient was 0.357 in 2021.
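A growth multiple over a period converts to an annual rate by standard compound-growth arithmetic; the factor-of-26 figure comes from the text above, and the conversion is sketched here for illustration:

```python
def cagr(multiple: float, years: int) -> float:
    """Compound annual growth rate implied by a total growth multiple:
    multiple = (1 + r) ** years  =>  r = multiple ** (1 / years) - 1."""
    return multiple ** (1 / years) - 1

# A 26-fold rise over the 40 years from 1978 to 2018 implies:
print(f"{cagr(26, 40):.1%} per year")  # prints "8.5% per year"
```

Sustained growth of roughly 8–9% per year is what compounds into the twenty-six-fold rise cited.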
In March 2024, China ranked second in the world, after the U.S., in total number of billionaires and total number of millionaires, with 473 Chinese billionaires and 6.2 million millionaires. In 2019, China overtook the U.S. as the home to the highest number of people who have a net personal wealth of at least $110,000, according to the global wealth report by Credit Suisse. China had 85 female billionaires as of January 2021, two-thirds of the global total. China has had the world's largest middle-class population since 2015; the middle-class grew to 500 million by 2024.
=== China in the global economy ===
China has been a member of the WTO since 2001 and is the world's largest trading power. By 2016, China was the largest trading partner of 124 countries. China became the world's largest trading nation in 2013 by the sum of imports and exports, as well as the world's largest commodity importer, accounting for roughly 45% of the maritime dry-bulk market.
China's foreign exchange reserves reached US$3.246 trillion as of March 2024, making its reserves by far the world's largest. In 2022, China was amongst the world's largest recipients of inward foreign direct investment (FDI), attracting $180 billion, though much of this was speculated to originate from Hong Kong. In 2021, China's foreign exchange remittances were US$53 billion, making it the second-largest recipient of remittances in the world. China also invests abroad, with a total outward FDI of $147.9 billion in 2023, and a number of major takeovers of foreign firms by Chinese companies.
Economists have argued that the renminbi is undervalued, due to currency intervention from the Chinese government, giving China an unfair trade advantage. China has also been widely criticized for manufacturing large quantities of counterfeit goods. The U.S. government has also alleged that China does not respect intellectual property (IP) rights and steals IP through espionage operations. In 2020, Harvard University's Economic Complexity Index ranked complexity of China's exports 17th in the world, up from 24th in 2010.
The Chinese government has promoted the internationalization of the renminbi in order to reduce its dependence on the U.S. dollar as a result of perceived weaknesses of the international monetary system. The renminbi is a component of the IMF's special drawing rights and the world's fourth-most traded currency as of 2023. However, partly due to capital controls that make the renminbi fall short of being a fully convertible currency, it remains far behind the euro, the U.S. dollar and the Japanese yen in international trade volumes.
=== Science and technology ===
==== Historical ====
China was a world leader in science and technology until the Ming dynasty. Ancient and medieval Chinese discoveries and inventions, such as papermaking, printing, the compass, and gunpowder (the Four Great Inventions), became widespread across East Asia, the Middle East and later Europe. Chinese mathematicians were the first to use negative numbers. By the 17th century, the Western World surpassed China in scientific and technological advancement. The causes of this early modern Great Divergence continue to be debated by scholars.
After repeated military defeats by the European colonial powers and Imperial Japan in the 19th century, Chinese reformers began promoting modern science and technology as part of the Self-Strengthening Movement. After the Communists came to power in 1949, efforts were made to organize science and technology based on the model of the Soviet Union, in which scientific research was part of central planning. After Mao's death in 1976, science and technology were promoted as one of the Four Modernizations, and the Soviet-inspired academic system was gradually reformed.
==== Modern era ====
Since the end of the Cultural Revolution, China has made significant investments in scientific research and is quickly catching up with the U.S. in R&D spending. China officially spent around 2.7% of its GDP on R&D in 2024, totaling around $496 billion. According to the World Intellectual Property Indicators, China received more patent applications than the U.S. did in 2018 and 2019, and ranked first globally in patents, utility models, trademarks, industrial designs, and creative goods exports in 2021. It was ranked 11th in the Global Innovation Index in 2024, a considerable improvement from its rank of 35th in 2013. Chinese supercomputers have ranked among the fastest in the world. Its efforts to develop the most advanced semiconductors and jet engines have seen delays and setbacks.
China is developing its education system with an emphasis on science, technology, engineering, and mathematics (STEM). It became the world's largest publisher of scientific papers in 2016. In 2022, China overtook the US in the Nature Index, which measures the share of published articles in leading scientific journals.
===== Space program =====
The Chinese space program started in 1958 with some technology transfers from the Soviet Union. However, China did not launch its first satellite, the Dong Fang Hong I, until 1970, which made it the fifth country to do so independently.
In 2003, China became the third country in the world to independently send humans into space with Yang Liwei's spaceflight aboard Shenzhou 5. As of 2023, eighteen Chinese nationals have journeyed into space, including two women. In 2011, China launched its first space station testbed, Tiangong-1. In 2013, a Chinese robotic rover Yutu successfully touched down on the lunar surface as part of the Chang'e 3 mission.
In 2019, China became the first country to land a probe—Chang'e 4—on the far side of the Moon. In 2020, Chang'e 5 successfully returned Moon samples to the Earth, making China the third country to do so independently. In 2021, China became the third country to land a spacecraft on Mars and the second one to deploy a rover (Zhurong) on Mars. China completed its own modular space station, the Tiangong, in low Earth orbit on 3 November 2022. On 29 November 2022, China performed its first in-orbit crew handover aboard the Tiangong.
In May 2023, China announced a plan to land humans on the Moon by 2030. To that end, China has been developing a lunar-capable super-heavy launcher, the Long March 10, a new crewed spacecraft, and a crewed lunar lander.
China launched Chang'e 6 on 3 May 2024, which conducted the first lunar sample return from the Apollo Basin on the far side of the Moon. This was China's second lunar sample-return mission; the first was achieved by Chang'e 5 from the lunar near side in 2020. Chang'e 6 also carried a Chinese rover, Jinchan, to conduct infrared spectroscopy of the lunar surface and to image the Chang'e 6 lander. The lander-ascender-rover combination separated from the orbiter-returner and landed on the Moon's surface on 1 June 2024, at 22:23 UTC. The ascender launched back into lunar orbit on 3 June 2024, at 23:38 UTC, carrying the samples collected by the lander, and later completed a robotic rendezvous and docking in lunar orbit. The sample container was then transferred to the returner, which landed in Inner Mongolia in June 2024, completing the first sample-return mission from the far side of the Moon.
== Infrastructure ==
After a decades-long infrastructural boom, China has produced numerous world-leading infrastructural projects: it has the largest high-speed rail network, the most supertall skyscrapers, the largest power plant (the Three Gorges Dam), the most extensive ultra-high-voltage transmission network and innovation infrastructure, and a global satellite navigation system with the largest number of satellites.
=== Telecommunications ===
China is the largest telecom market in the world and currently has the largest number of active cellphones of any country, with over 1.7 billion subscribers, as of February 2023. It has the largest number of internet and broadband users, with over 1.1 billion Internet users as of December 2024—equivalent to around 78.6% of its population. By 2018, China had more than 1 billion 4G users, accounting for 40% of world's total. China is making rapid advances in 5G—by late 2018, China had started large-scale and commercial 5G trials. As of December 2023, China had over 810 million 5G users and 3.38 million base stations installed.
China Mobile, China Unicom and China Telecom are the three largest providers of mobile and internet services in China. As of 2018, China Telecom alone served more than 145 million broadband subscribers and 300 million mobile users; China Unicom had about 300 million subscribers; and China Mobile, the largest of the three, had 925 million users. Combined, the three operators had over 3.4 million 4G base stations in China. Several Chinese telecommunications companies, most notably Huawei and ZTE, have been accused of spying for the Chinese military.
China has developed its own satellite navigation system, dubbed BeiDou, which began offering commercial navigation services across Asia in 2012 and global services by the end of 2018. BeiDou followed GPS and GLONASS as the third completed global navigation satellite system.
=== Transport ===
Since the late 1990s, China's national road network has been significantly expanded through the creation of a network of national highways and expressways. In 2022, China's highways had reached a total length of 177,000 km (110,000 mi), making it the longest highway system in the world. China has the world's largest market for automobiles, having surpassed the United States in both auto sales and production. The country is the world's largest exporter of cars by number as of 2023. A side-effect of the rapid growth of China's road network has been a significant rise in traffic accidents. In urban areas, bicycles remain a common mode of transport, despite the increasing prevalence of automobiles; as of 2023, there were approximately 200 million bicycles in China.
China's railways, which are operated by the state-owned China State Railway Group Company, are among the busiest in the world, handling a quarter of the world's rail traffic volume on only 6 percent of the world's tracks in 2006. As of 2023, the country had 159,000 km (98,798 mi) of railways, the second-longest network in the world. The railways strain to meet enormous demand particularly during the Chinese New Year holiday, when the world's largest annual human migration takes place. Construction of China's high-speed rail (HSR) system began in the early 2000s. By the end of 2023, high-speed rail in China had reached 45,000 kilometers (27,962 miles) of dedicated lines alone, making it the longest HSR network in the world. Services on the Beijing–Shanghai, Beijing–Tianjin, and Chengdu–Chongqing lines reach up to 350 km/h (217 mph), making them the fastest conventional high-speed railway services in the world. With an annual ridership of over 2.3 billion passengers in 2019, it is the world's busiest. The network includes the Beijing–Guangzhou high-speed railway, the single longest HSR line in the world, and the Beijing–Shanghai high-speed railway, which has three of the longest railroad bridges in the world. The Shanghai maglev train, which reaches 431 km/h (268 mph), is the fastest commercial train service in the world. Since 2000, the growth of rapid transit systems in Chinese cities has accelerated. As of December 2023, 55 Chinese cities have urban mass transit systems in operation. As of 2020, China boasts the five longest metro systems in the world, with the networks in Shanghai, Beijing, Guangzhou, Chengdu and Shenzhen being the largest.
The civil aviation industry in China is mostly state-dominated, with the Chinese government retaining a majority stake in most Chinese airlines. The top three airlines in China, Air China, China Southern Airlines, and China Eastern Airlines, which collectively made up 71% of the market in 2018, are all state-owned. Air travel has expanded rapidly in recent decades, with the number of passengers increasing from 16.6 million in 1990 to 551.2 million in 2017. China had approximately 259 airports in 2024.
China has over 2,000 river and seaports, about 130 of which are open to foreign shipping. Of the world's fifty busiest container ports, fifteen are located in China; the busiest of these, the Port of Shanghai, is also the busiest port in the world. The country's inland waterways total 27,700 km (17,212 mi) and are the world's sixth-longest.
=== Water supply and sanitation ===
Water supply and sanitation infrastructure in China is facing challenges such as rapid urbanization, as well as water scarcity, contamination, and pollution. According to the Joint Monitoring Program for Water Supply and Sanitation, 93% of rural households had access to basic sanitation in 2022 (up from 77% in 2015). The ongoing South–North Water Transfer Project intends to abate water shortage in the north.
== Demographics ==
The 2020 Chinese census recorded the population as approximately 1,411,778,724. About 17.95% were 14 years old or younger, 63.35% were between 15 and 59 years old, and 18.7% were over 60 years old. Between 2010 and 2020, the average population growth rate was 0.53%.
Given concerns about population growth, China implemented a two-child limit during the 1970s, and, in 1979, began to advocate for an even stricter limit of one child per family. Beginning in the mid-1980s, however, given the unpopularity of the strict limits, China began to allow some major exemptions, particularly in rural areas, resulting in what was effectively a "1.5"-child policy from the mid-1980s to 2015; ethnic minorities were also exempt from one-child limits. The next major loosening of the policy was enacted in December 2013, allowing families to have two children if one parent was an only child. In 2016, the one-child policy was replaced by a two-child policy. A three-child policy was announced on 31 May 2021, due to population aging, and in July 2021, all family size limits as well as penalties for exceeding them were removed. In 2023, the total fertility rate was reported to be 1.09, ranking among the lowest in the world. In 2023, the National Bureau of Statistics estimated that the population fell 850,000 from 2021 to 2022, the first decline since 1961.
According to one group of scholars, one-child limits had little effect on population growth or total population size. However, these scholars have been challenged. The policy, along with traditional preference for boys, may have contributed to an imbalance in the sex ratio at birth. The 2020 census found that males accounted for 51.2% of the total population. However, China's sex ratio is more balanced than it was in 1953, when males accounted for 51.8% of the population.
=== Urbanization ===
China has urbanized significantly in recent decades. The percentage of the country's population living in urban areas increased from 20% in 1980 to over 67% in 2024. China has over 160 cities with a population of over one million, including the 18 megacities as of 2024 (cities with a population of over 10 million) of Chongqing, Shanghai, Beijing, Chengdu, Guangzhou, Shenzhen, Tianjin, Xi'an, Suzhou, Zhengzhou, Wuhan, Hangzhou, Linyi, Shijiazhuang, Dongguan, Qingdao, Changsha and Hefei. The total permanent population of Chongqing, Shanghai, Beijing and Chengdu is each above 20 million. Shanghai is China's most populous urban area, while Chongqing is its largest city proper and the only city in China with a permanent population of over 30 million. The figures in the table below are from the 2020 census, and are only estimates of the urban populations within administrative city limits; a different ranking exists for total municipal populations. The large "floating populations" of migrant workers make conducting censuses in urban areas difficult; the figures below include only long-term residents.
=== Ethnic groups ===
China legally recognizes 56 distinct ethnic groups, who comprise the Zhonghua minzu. The largest of these nationalities are the Han Chinese, who constitute more than 91% of the total population. The Han Chinese – the world's largest single ethnic group – outnumber other ethnic groups in every place excluding Tibet, Xinjiang, Linxia, and autonomous prefectures like Xishuangbanna. Ethnic minorities account for less than 10% of the population of China, according to the 2020 census. Compared with the 2010 population census, the Han population increased by 60,378,693 persons, or 4.93%, while the population of the 55 national minorities combined increased by 11,675,179 persons, or 10.26%. The 2020 census recorded a total of 845,697 foreign nationals living in mainland China.
=== Languages ===
There are as many as 292 living languages in China. The languages most commonly spoken belong to the Sinitic branch of the Sino-Tibetan language family, which contains Mandarin (spoken by 80% of the population), and other varieties of Chinese language: Jin, Wu, Min, Hakka, Yue, Xiang, Gan, Hui, Ping and unclassified Tuhua (Shaozhou Tuhua and Xiangnan Tuhua). Languages of the Tibeto-Burman branch, including Tibetan, Qiang, Naxi and Yi, are spoken across the Tibetan and Yunnan–Guizhou Plateau. Other ethnic minority languages in southwestern China include Zhuang, Thai, Dong and Sui of the Tai-Kadai family, Miao and Yao of the Hmong–Mien family, and Wa of the Austroasiatic family. Across northeastern and northwestern China, local ethnic groups speak Altaic languages including Manchu, Mongolian and several Turkic languages: Uyghur, Kazakh, Kyrgyz, Salar and Western Yugur. Korean is spoken natively along the border with North Korea. Sarikoli, the language of Tajiks in western Xinjiang, is an Indo-European language. Taiwanese indigenous peoples, including a small population on the mainland, speak Austronesian languages.
Standard Chinese, a variety based on the Beijing dialect of Mandarin, is the national language of China, having de facto official status. It is used as a lingua franca between people of different linguistic backgrounds. In the autonomous regions of China, other languages may also serve as a lingua franca, such as Uyghur in Xinjiang, where governmental services in Uyghur are constitutionally guaranteed.
=== Religion ===
Freedom of religion is guaranteed by China's constitution, although religious organizations that lack official approval can be subject to state persecution. The government of the country is officially atheist. Religious affairs and issues in the country are overseen by the National Religious Affairs Administration, under the United Front Work Department.
Over the millennia, the Chinese civilization has been influenced by various religious movements. The "three doctrines" of Confucianism, Taoism, and Buddhism have historically shaped Chinese culture, enriching a theological and spiritual framework of traditional religion which harks back to the early Shang and Zhou dynasties. Chinese folk religion, which is framed by the three doctrines and by other traditions, consists of allegiance to the shen, who can be deities of the surrounding nature, ancestral principles of human groups, concepts of civility, or culture heroes, many of whom feature in Chinese mythology and history. Among the most popular cults of folk religion are those of the Yellow Emperor, embodiment of the God of Heaven and one of the two divine patriarchs of the Chinese people; of Mazu (goddess of the seas); Guandi (god of war and business); Caishen (god of prosperity and richness); Pangu; and many others. In the early decades of the 21st century, the Chinese government has been engaged in a rehabilitation of folk cults—formally recognizing them as "folk beliefs" as distinguished from doctrinal religions, and often reconstructing them into forms of "highly curated" civil religion—as well as in a national and international promotion of Buddhism. China is home to many of the world's tallest religious statues, representing either deities of Chinese folk religion or enlightened beings of Buddhism; the tallest of all is the Spring Temple Buddha in Henan.
Statistics on religious affiliation in China are difficult to gather due to complex and varying definitions of religion and the diffusive nature of Chinese religious traditions. Scholars note that in China there is no clear boundary between the three doctrines and local folk religious practices. Chinese religions, or some of their currents, are also definable as non-theistic and humanistic, since they do not hold that divine creativity is completely transcendent, but rather that it is inherent in the world and in particular in the human being. According to studies published in 2023, compiling demographic analyses conducted throughout the 2010s and the early 2020s, 70% of the Chinese population believed in or practiced Chinese folk religion—among them, with an approach of non-exclusivity, 33.4% may be identified as Buddhists, 19.6% as Taoists, and 17.7% as adherents of other types of folk religion. Of the remaining population, 25.2% are fully non-believers or atheists, 2.5% are adherents of Christianity, and 1.6% are adherents of Islam. Chinese folk religion also comprises a variety of salvationist doctrinal organized movements that have emerged since the Song dynasty. There are also ethnic minorities in China who maintain their own indigenous religions, while major religions characteristic of specific ethnic groups include Tibetan Buddhism among Tibetans, Mongols and Yugurs, and Islam among the Hui, Uyghur, Kazakh, and Kyrgyz peoples, and other ethnicities in the northern and northwestern regions of the country.
=== Education ===
Compulsory education in China comprises primary and junior secondary school, which together last for nine years, from ages 6 to 15. The Gaokao, China's national university entrance exam, is a prerequisite for entrance into most higher education institutions. Vocational education is available to students at the secondary and tertiary level; more than 10 million Chinese students graduate from vocational colleges each year. In 2023, about 91.8 percent of students continued their education at a three-year senior secondary school, while 60.2 percent of secondary school graduates were enrolled in higher education.
China has the largest education system in the world, with about 291 million students and 18.92 million full-time teachers in over 498,300 schools in 2023. Annual education investment went from less than US$50 billion in 2003 to more than US$817 billion in 2020. However, there remains an inequality in education spending. In 2010, the annual education expenditure per secondary school student in Beijing totalled ¥20,023, while in Guizhou, one of the poorest provinces, it only totalled ¥3,204. China's literacy rate has grown dramatically, from only 20% in 1949 and 65.5% in 1979, to 97% of the population over age 15 in 2020.
As of 2023, China has over 3,074 universities, with over 47.6 million students enrolled in mainland China, giving China the largest higher education system in the world. As of 2025, China had the world's highest number of top universities. China trails only the United States and the United Kingdom in representation on lists of the top 200 universities according to the 2024 Aggregate Ranking of Top Universities, a composite of three of the world's most-followed university rankings (ARWU, QS and THE). China is home to two of the highest-ranking universities (Tsinghua University and Peking University) in Asia and emerging economies, according to the Times Higher Education World University Rankings and the Academic Ranking of World Universities. These universities are members of the C9 League, an alliance of elite Chinese universities offering comprehensive and leading education.
=== Health ===
The National Health Commission, together with its counterparts in the local commissions, oversees the health needs of the population. An emphasis on public health and preventive medicine has characterized Chinese health policy since the early 1950s. The Communist Party started the Patriotic Health Campaign, which was aimed at improving sanitation and hygiene, as well as treating and preventing several diseases. Diseases such as cholera, typhoid and scarlet fever, which were previously rife in China, were nearly eradicated by the campaign.
After Deng Xiaoping began instituting economic reforms in 1978, the health of the Chinese public improved rapidly because of better nutrition, although many of the free public health services provided in the countryside disappeared. Healthcare in China became mostly privatized, and experienced a significant rise in quality. In 2009, the government began a three-year large-scale healthcare provision initiative worth US$124 billion. By 2011, the campaign resulted in 95% of China's population having basic health insurance coverage. By 2022, China had established itself as a key producer and exporter of pharmaceuticals, producing around 40 percent of active pharmaceutical ingredients in 2017.
As of 2023, life expectancy at birth exceeds 78 years. As of 2021, the infant mortality rate is 5 per thousand. Both have improved significantly since the 1950s. Rates of stunting, a condition caused by malnutrition, have declined from 33.1% in 1990 to 9.9% in 2010. Despite significant improvements in health and the construction of advanced medical facilities, China has several emerging public health problems, such as respiratory illnesses caused by widespread air pollution, hundreds of millions of cigarette smokers, and an increase in obesity among urban youths. In 2010, air pollution caused 1.2 million premature deaths in China. Chinese mental health services are inadequate. China's large population and densely populated cities have led to serious disease outbreaks, such as SARS in 2003, although this has since been largely contained. The COVID-19 pandemic was first identified in Wuhan in December 2019; the pandemic led the government to enforce strict public health measures intended to completely eradicate the virus, a goal that was eventually abandoned in December 2022 after protests against the policy.
== Culture and society ==
Since ancient times, Chinese culture has been heavily influenced by Confucianism. Chinese culture, in turn, has heavily influenced East Asia and Southeast Asia. For much of the country's dynastic era, opportunities for social advancement could be provided by high performance in the prestigious imperial examinations, which have their origins in the Han dynasty. The literary emphasis of the exams affected the general perception of cultural refinement in China, such as the belief that calligraphy, poetry and painting were higher forms of art than dancing or drama. Chinese culture has long emphasized a sense of deep history and a largely inward-looking national perspective. Examinations and a culture of merit remain greatly valued in China today.
Today, the Chinese government has accepted numerous elements of traditional Chinese culture as being integral to Chinese society. With the rise of Chinese nationalism and the end of the Cultural Revolution, various forms of traditional Chinese art, literature, music, film, fashion and architecture have seen a vigorous revival, and folk and variety art in particular have sparked interest nationally and even worldwide. Access to foreign media remains heavily restricted.
=== Architecture ===
Chinese architecture has developed over millennia in China and has remained a perennial source of influence on the development of East Asian architecture, including in Japan, Korea, and Mongolia, with minor influences on the architecture of Southeast and South Asia, including Malaysia, Singapore, Indonesia, Sri Lanka, Thailand, Laos, Cambodia, Vietnam and the Philippines.
Chinese architecture is characterized by bilateral symmetry, use of enclosed open spaces, feng shui (e.g. directional hierarchies), a horizontal emphasis, and an allusion to various cosmological, mythological or in general symbolic elements. Chinese architecture traditionally classifies structures according to type, ranging from pagodas to palaces.
Chinese architecture varies widely based on status or affiliation, such as whether the structures were constructed for emperors, commoners, or for religious purposes. Other variations in Chinese architecture are shown in vernacular styles associated with different geographic regions and different ethnic heritages, such as the stilt houses in the south, the Yaodong buildings in the northwest, the yurt buildings of nomadic people, and the Siheyuan buildings in the north.
=== Literature ===
Chinese literature has its roots in the Zhou dynasty's literary tradition. The classical texts of China encompass a wide range of thoughts and subjects, including the calendar, the military, astrology, herbology, and geography, among many others. Among the most significant early works are the I Ching and the Shujing, which are part of the Four Books and Five Classics. These texts were the cornerstone of the Confucian curriculum sponsored by the state throughout the dynastic periods. Building on the Classic of Poetry, classical Chinese poetry reached its height during the Tang dynasty, when Li Bai and Du Fu opened new paths for poetry through romanticism and realism respectively. Chinese historiography began with the Shiji; the overall scope of the historiographical tradition in China is termed the Twenty-Four Histories, which, along with Chinese mythology and folklore, set a vast stage for Chinese fiction. Driven by a burgeoning urban class in the Ming dynasty, Chinese classical fiction flourished in historical, urban, and supernatural ("gods and demons") genres, as represented by the Four Great Classical Novels: Water Margin, Romance of the Three Kingdoms, Journey to the West and Dream of the Red Chamber. Along with the wuxia fiction of Jin Yong and Liang Yusheng, these works remain an enduring source of popular culture in the Chinese sphere of influence.
In the wake of the New Culture Movement after the end of the Qing dynasty, Chinese literature embarked on a new era with written vernacular Chinese for ordinary citizens. Hu Shih and Lu Xun were pioneers in modern literature. Various literary genres, such as misty poetry, scar literature, young adult fiction and the xungen literature, which is influenced by magic realism, emerged following the Cultural Revolution. Mo Yan, a xungen literature author, was awarded the Nobel Prize in Literature in 2012.
=== Music ===
Chinese music covers a highly diverse range, from traditional to modern music, and dates back to pre-imperial times. Traditional Chinese musical instruments were traditionally grouped into eight categories known as bayin (八音). Traditional Chinese opera is a form of musical theatre that originated thousands of years ago and has regional styles such as Beijing and Cantonese opera. Chinese pop (C-Pop) includes mandopop and cantopop. Chinese hip hop and Hong Kong hip hop have also become popular.
=== Fashion ===
Hanfu is the historical clothing of the Han people in China. The qipao or cheongsam is a popular Chinese female dress. The hanfu movement has been popular in contemporary times and seeks to revitalize Hanfu clothing. China Fashion Week is the country's only national-level fashion festival.
=== Cinema ===
Cinema was first introduced to China in 1896 and the first Chinese film, Dingjun Mountain, was released in 1905. China has had the largest number of movie screens in the world since 2016; China became the largest cinema market in 2020. The top three highest-grossing films in China as of 2025 were Ne Zha 2 (2025), The Battle at Lake Changjin (2021), and Wolf Warrior 2 (2017).
=== Cuisine ===
Chinese cuisine is highly diverse, drawing on several millennia of culinary history and geographical variety, in which the most influential are known as the "Eight Major Cuisines", including Sichuan, Cantonese, Jiangsu, Shandong, Fujian, Hunan, Anhui, and Zhejiang cuisines. Chinese cuisine is known for its breadth of cooking methods and ingredients. China's staple food is rice in the northeast and south, and wheat-based breads and noodles in the north. Bean products such as tofu and soy milk remain a popular source of protein. Pork is now the most popular meat in China, accounting for about three-fourths of the country's total meat consumption. There is also the vegetarian Buddhist cuisine and the pork-free Chinese Islamic cuisine. Southern Chinese cuisine, due to the region's proximity to the ocean and its milder climate, features a wide variety of seafood and vegetables. Offshoots of Chinese food, such as Hong Kong cuisine and American Chinese cuisine, have emerged in the Chinese diaspora.
=== Sports ===
China has one of the oldest sporting cultures in the world. There is evidence that archery (shèjiàn) was practiced during the Western Zhou dynasty. Swordplay (jiànshù) and cuju, a sport loosely related to association football, date back to China's early dynasties as well.
Physical fitness is widely emphasized in Chinese culture, with morning exercises such as qigong and tai chi widely practiced, and commercial gyms and private fitness clubs are gaining popularity. Basketball is the most popular spectator sport in China. The Chinese Basketball Association and the American National Basketball Association also have a huge national following amongst the Chinese populace, with native-born and NBA-bound Chinese players and well-known national household names such as Yao Ming and Yi Jianlian being held in high esteem. China's professional football league, known as Chinese Super League, is the largest football market in East Asia. Other popular sports include martial arts, table tennis, badminton, swimming and snooker. China is home to a huge number of cyclists, with an estimated 470 million bicycles as of 2012. China has the world's largest esports market. Many more traditional sports, such as dragon boat racing, Mongolian-style wrestling and horse racing are also popular.
China has participated in the Olympic Games since 1932, although it has only participated as the PRC since 1952. China hosted the 2008 Summer Olympics in Beijing, where its athletes received 48 gold medals – the highest number of any participating nation that year. China also won the most medals at the 2012 Summer Paralympics, with 231 overall, including 95 gold. In 2011, Shenzhen hosted the 2011 Summer Universiade. China hosted the 2013 East Asian Games in Tianjin and the 2014 Summer Youth Olympics in Nanjing, the first country to host both regular and Youth Olympics. Beijing and its nearby city Zhangjiakou collaboratively hosted the 2022 Winter Olympics, making Beijing the first dual Olympic city by holding both the Summer Olympics and the Winter Olympics. China hosted the Asian Games in 1990 (Beijing), 2010 (Guangzhou), and 2023 (Hangzhou).
== See also ==
Outline of China
== Notes ==
== References ==
== Sources ==
This article incorporates text from a free content work. Licensed under CC BY-SA IGO 3.0 (license statement/permission). Text taken from World Food and Agriculture – Statistical Yearbook 2023, FAO.
== Further reading ==
== External links ==
=== Government ===
The Central People's Government of the People's Republic of China (in English)
=== General information ===
China at a Glance from People's Daily
China at the Encyclopædia Britannica
Country profile – China at BBC News
China. The World Factbook. Central Intelligence Agency.
China, People's Republic of (archived 2012) from UCB Libraries GovPubs
=== Maps ===
Google Maps—China
Wikimedia Atlas of the People's Republic of China
Geographic data related to China at OpenStreetMap |
Cirrus cloud | Cirrus (cloud classification symbol: Ci) is a genus of high cloud made of ice crystals. Cirrus clouds typically appear delicate and wispy with white strands. In the Earth's atmosphere, cirrus are usually formed when warm, dry air rises, causing water vapor deposition onto mineral dust and metallic particles at high altitudes. Globally, they form anywhere between 4,000 and 20,000 meters (13,000 and 66,000 feet) above sea level, with the higher elevations usually in the tropics and the lower elevations in more polar regions.
Cirrus clouds can form from the tops of thunderstorms and tropical cyclones and sometimes predict the arrival of rain or storms. Although they are a sign that rain and maybe storms are on the way, cirrus themselves drop no more than falling streaks of ice crystals. These crystals dissipate, melt, and evaporate as they fall through warmer and drier air and never reach ground. The word cirrus comes from the Latin prefix cirro-, meaning "tendril" or "curl". Cirrus clouds warm the earth, potentially contributing to climate change. A warming earth will likely produce more cirrus clouds, potentially resulting in a self-reinforcing loop.
Optical phenomena, such as sun dogs and halos, can be produced by light interacting with ice crystals in cirrus clouds. There are two other high-level cirrus-like clouds called cirrostratus and cirrocumulus. Cirrostratus looks like a sheet of cloud, whereas cirrocumulus looks like a pattern of small cloud tufts. Unlike cirrus and cirrostratus, cirrocumulus clouds contain droplets of supercooled (below freezing point) water.
Cirrus clouds form in the atmospheres of Mars, Jupiter, Saturn, Uranus, and Neptune, and on Titan, one of Saturn's larger moons. Some of these extraterrestrial cirrus clouds are made of ammonia or methane ice rather than the water ice of cirrus on Earth. Some interstellar clouds, made of grains of dust smaller than a thousandth of a millimeter, are also called cirrus.
== Description ==
Cirrus are wispy clouds made of long strands of ice crystals that are described as feathery, hair-like, or layered in appearance. First defined scientifically by Luke Howard in an 1803 paper, their name is derived from the Latin word cirrus, meaning 'curl' or 'fringe'. They are transparent, meaning that the sun can be seen through them. Ice crystals in the clouds cause them to usually appear white, but the rising or setting sun can color them various shades of yellow or red. At dusk, they can appear gray.
Cirrus comes in five visually distinct species: castellanus, fibratus, floccus, spissatus, and uncinus:
Cirrus castellanus has cumuliform tops caused by high-altitude convection rising up from the main cloud body.
Cirrus fibratus looks striated and is the most common cirrus species.
Cirrus floccus species looks like a series of tufts.
Cirrus spissatus is a particularly dense form of cirrus that often forms from thunderstorms.
Cirrus uncinus clouds are hooked and are the form that is usually called mare's tails.
Each species is divided into up to four varieties: intortus, vertebratus, radiatus, and duplicatus:
Intortus variety has an extremely contorted shape; Kelvin–Helmholtz waves are a form of cirrus intortus twisted into loops by layers of wind blowing at different speeds (wind shear).
Radiatus variety has large, radial bands of cirrus clouds that stretch across the sky.
Vertebratus variety occurs when cirrus clouds are arranged side-by-side like ribs.
Duplicatus variety occurs when cirrus clouds are arranged above one another in layers.
Cirrus clouds often produce hair-like filaments called fall streaks, made of heavier ice crystals that fall from the cloud. These are similar to the virga produced in liquid–water clouds. The sizes and shapes of fall streaks are determined by the wind shear.
Cirrus cloud cover varies diurnally. During the day, cirrus cloud cover drops, and during the night, it increases. Based on CALIPSO satellite data, cirrus covers an average of 31% to 32% of the Earth's surface. Cirrus cloud cover varies widely by location, with some parts of the tropics reaching up to 70% cirrus cloud cover. Polar regions, on the other hand, have significantly less cirrus cloud cover, with some areas having a yearly average of only around 10% coverage. These percentages treat clear days and nights, as well as days and nights with other cloud types, as lack of cirrus cloud cover.
== Formation ==
Cirrus clouds are usually formed as warm, dry air rises, causing water vapor to undergo deposition onto particles, including mostly mineral dust and metallic particles at high altitudes. Particles gathered by research aircraft from cirrus clouds over several locations above North America and Central America included mineral dust (containing aluminum, potassium, calcium, iron, and silicon), metallic particles in elemental, sulfate and oxide forms (containing sodium, potassium, iron, nickel, copper, zinc, tin, silver, molybdenum and lead), possible biological particles (containing oxygen, carbon, nitrogen and phosphorus) and elemental carbon. The authors concluded that mineral dust contributed the largest number of ice nuclei to cirrus cloud formation.
The average cirrus cloud altitude increases as latitude decreases, though it is always capped by the tropopause. The conditions that form cirrus commonly occur at the leading edge of a warm front. Because absolute humidity is low at such high altitudes, this genus tends to be fairly transparent. Cirrus clouds can also form inside fallstreak holes (also called "cavum").
At latitudes of 65° N or S, close to polar regions, cirrus clouds form, on average, only 7,000 m (23,000 ft) above sea level. In temperate regions, at roughly 45° N or S, their average altitude increases to 9,500 m (31,200 ft) above sea level. In tropical regions, at roughly 5° N or S, cirrus clouds form 13,500 m (44,300 ft) above sea level on average. Across the globe, cirrus clouds can form anywhere from 4,000 to 20,000 m (13,000 to 66,000 ft) above sea level. Cirrus clouds form with a vast range of thicknesses. They can be as little as 100 m (330 ft) from top to bottom to as thick as 8,000 m (26,000 ft). Cirrus cloud thickness is usually somewhere between those two extremes, with an average thickness of 1,500 m (4,900 ft).
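The latitude figures above can be sketched as a simple lookup with linear interpolation between the three quoted averages. This is purely illustrative: the function name and the interpolation between the quoted points are assumptions, not a sourced relationship.

```python
# Average cirrus altitude versus latitude, using only the three
# averages quoted above. Linear interpolation between those points
# is an illustrative assumption, not a sourced relationship.
CIRRUS_ALTITUDE_BY_LATITUDE = [  # (degrees from equator, metres ASL)
    (5, 13_500),
    (45, 9_500),
    (65, 7_000),
]

def average_cirrus_altitude(lat_deg: float) -> float:
    """Rough average cirrus altitude (m) at the given latitude."""
    lat = abs(lat_deg)
    pts = CIRRUS_ALTITUDE_BY_LATITUDE
    if lat <= pts[0][0]:
        return float(pts[0][1])
    if lat >= pts[-1][0]:
        return float(pts[-1][1])
    for (l0, a0), (l1, a1) in zip(pts, pts[1:]):
        if l0 <= lat <= l1:
            return a0 + (a1 - a0) * (lat - l0) / (l1 - l0)
    raise ValueError("unreachable")
```

For example, `average_cirrus_altitude(25)` gives 11,500 m, halfway between the quoted tropical and temperate averages.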
The jet stream, a high-level wind band, can stretch cirrus clouds long enough to cross continents. Jet streaks, bands of faster-moving air in the jet stream, can create arcs of cirrus cloud hundreds of kilometers long.
Cirrus cloud formation may be affected by organic aerosols (particles produced by plants) acting as additional nucleation points for ice crystal formation. However, research suggests that cirrus clouds more commonly form on mineral dust or metallic particles than on organic ones.
=== Tropical cyclones ===
Sheets of cirrus clouds commonly fan out from the eye walls of tropical cyclones. (The eye wall is the ring of storm clouds surrounding the eye of a tropical cyclone.) A large shield of cirrus and cirrostratus typically accompanies the high altitude outflowing winds of tropical cyclones, and these can make the underlying bands of rain—and sometimes even the eye—difficult to detect in satellite photographs.
=== Thunderstorms ===
Thunderstorms can form dense cirrus at their tops. As the cumulonimbus cloud in a thunderstorm grows vertically, the liquid water droplets freeze when the air temperature reaches the freezing point. The anvil cloud takes its shape because the temperature inversion at the tropopause prevents the warm, moist air forming the thunderstorm from rising any higher, thus creating the flat top. In the tropics, these thunderstorms occasionally produce copious amounts of cirrus from their anvils. High-altitude winds commonly push this dense mat out into an anvil shape that stretches downwind as much as several kilometers.
Individual cirrus cloud formations can be the remnants of anvil clouds formed by thunderstorms. In the dissipating stage of a cumulonimbus cloud, when the normal column rising up to the anvil has evaporated or dissipated, the mat of cirrus in the anvil is all that is left.
=== Contrails ===
Contrails are an artificial type of cirrus cloud formed when water vapor from the exhaust of a jet engine condenses on particles, which come from either the surrounding air or the exhaust itself, and freezes, leaving behind a visible trail. The exhaust can trigger the formation of cirrus by providing ice nuclei when there is an insufficient naturally-occurring supply in the atmosphere. One of the environmental impacts of aviation is that persistent contrails can form into large mats of cirrus, and increased air traffic has been implicated as one possible cause of the increasing frequency and amount of cirrus in Earth's atmosphere.
== Use in forecasting ==
Random, isolated cirrus do not have any particular significance. A large number of cirrus clouds can be a sign of an approaching frontal system or upper air disturbance. The appearance of cirrus signals a change in weather—usually more stormy—in the near future. If the cloud is a cirrus castellanus, there might be instability at the high altitude level. When the clouds deepen and spread, especially when they are of the cirrus radiatus variety or cirrus fibratus species, this usually indicates an approaching weather front. If it is a warm front, the cirrus clouds spread out into cirrostratus, which then thicken and lower into altocumulus and altostratus. The next set of clouds are the rain-bearing nimbostratus clouds. When cirrus clouds precede a cold front, squall line or multicellular thunderstorm, it is because they are blown off the anvil, and the next clouds to arrive are the cumulonimbus clouds. Kelvin–Helmholtz waves indicate extreme wind shear at high levels. When a jet streak creates a large arc of cirrus, weather conditions may be right for the development of winter storms.
Within the tropics, 36 hours prior to the center passage of a tropical cyclone, a veil of white cirrus clouds approaches from the direction of the cyclone. In the mid- to late-19th century, forecasters used these cirrus veils to predict the arrival of hurricanes. In the early 1870s the president of Belén College in Havana, Father Benito Viñes, developed the first hurricane forecasting system; he mainly used the motion of these clouds in formulating his predictions. He would observe the clouds hourly from 4:00 am to 10:00 pm. After accumulating enough information, Viñes began accurately predicting the paths of hurricanes; he summarized his observations in his book Apuntes relativos a los huracanes de las Antillas, published in English as Practical Hints in Regard to West Indian Hurricanes.
== Effects on climate ==
Cirrus clouds cover up to 25% of the Earth (up to 70% in the tropics at night) and have a net heating effect. When they are thin and translucent, the clouds efficiently absorb outgoing infrared radiation while only marginally reflecting the incoming sunlight. When cirrus clouds are 100 m (330 ft) thick, they reflect only around 9% of the incoming sunlight, but they prevent almost 50% of the outgoing infrared radiation from escaping, thus raising the temperature of the atmosphere beneath the clouds by an average of 10 °C (18 °F)—a process known as the greenhouse effect. Averaged worldwide, cloud formation results in a net cooling of 5 °C (9 °F) at the Earth's surface, mainly due to stratocumulus clouds.
Cirrus clouds are likely becoming more common due to climate change. As their greenhouse effect is stronger than their reflection of sunlight, this would act as a self-reinforcing feedback. Metallic particles from human sources act as additional nucleation seeds, potentially increasing cirrus cloud cover and thus contributing further to climate change. Aircraft in the upper troposphere can create contrail cirrus clouds if local weather conditions are right. These contrails contribute to climate change.
Cirrus cloud thinning has been proposed as a possible geoengineering approach to reduce climate damage due to carbon dioxide. Cirrus cloud thinning would involve injecting particles into the upper troposphere to reduce the amount of cirrus clouds. The 2021 IPCC Assessment Report expressed low confidence in the cooling effect of cirrus cloud thinning, due to limited understanding.
== Cloud properties ==
Scientists have studied the properties of cirrus using several different methods. Lidar (laser-based radar) gives highly accurate information on the cloud's altitude, length, and width. Balloon-carried hygrometers measure the humidity of the cirrus cloud but are not accurate enough to measure the depth of the cloud. Radar units give information on the altitudes and thicknesses of cirrus clouds. Another data source is satellite measurements from the Stratospheric Aerosol and Gas Experiment program. These satellites measure where infrared radiation is absorbed in the atmosphere, and if it is absorbed at cirrus altitudes, then it is assumed that there are cirrus clouds in that location. NASA's Moderate-Resolution Imaging Spectroradiometer gives information on the cirrus cloud cover by measuring reflected infrared radiation of various specific frequencies during the day. During the night, it determines cirrus cover by detecting the Earth's infrared emissions. The cloud reflects this radiation back to the ground, thus enabling satellites to see the "shadow" it casts into space. Visual observations from aircraft or the ground provide additional information about cirrus clouds. Particle Analysis by Laser Mass Spectrometry (PALMS) is used to identify the type of nucleation seeds that spawned the ice crystals in a cirrus cloud.
Cirrus clouds have an average ice crystal concentration of 300,000 ice crystals per 10 cubic meters (270,000 ice crystals per 10 cubic yards). The concentration ranges from as low as 1 ice crystal per 10 cubic meters to as high as 100 million ice crystals per 10 cubic meters (just under 1 ice crystal per 10 cubic yards to 77 million ice crystals per 10 cubic yards), a difference of eight orders of magnitude. The size of each ice crystal is typically 0.25 millimeters, but they range from as short as 0.01 millimeters up to several millimeters. The ice crystals in contrails can be much smaller than those in naturally-occurring cirrus cloud, being around 0.001 millimeters to 0.1 millimeters in length.
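The quoted concentration range can be checked quickly; the variable names are illustrative and the values are the round figures from the text:

```python
import math

# Sanity check of the ice crystal concentration range quoted above,
# in crystals per 10 cubic metres.
lowest, typical, highest = 1, 300_000, 100_000_000
orders_of_magnitude = math.log10(highest / lowest)
assert round(orders_of_magnitude) == 8   # "eight orders of magnitude"
assert lowest <= typical <= highest      # the average sits inside the range
```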
In addition to forming in different sizes, the ice crystals in cirrus clouds can crystallize in different shapes: solid columns, hollow columns, plates, rosettes, and conglomerations of the various other types. The shape of the ice crystals is determined by the air temperature, atmospheric pressure, and ice supersaturation (the amount by which the relative humidity exceeds 100%). Cirrus in temperate regions typically have the various ice crystal shapes separated by type. The columns and plates concentrate near the top of the cloud, whereas the rosettes and conglomerations concentrate near the base. In the northern Arctic region, cirrus clouds tend to be composed of only the columns, plates, and conglomerations, and these crystals tend to be at least four times larger than the minimum size. In Antarctica, cirrus are usually composed of only columns which are much longer than normal.
Cirrus clouds are usually colder than −20 °C (−4 °F). At temperatures above −68 °C (−90 °F), most cirrus clouds have relative humidities of roughly 100% (that is they are saturated). Cirrus can supersaturate, with relative humidities over ice that can exceed 200%. Below −68 °C (−90 °F) there are more of both undersaturated and supersaturated cirrus clouds. The more supersaturated clouds are probably young cirrus.
== Optical phenomena ==
Cirrus clouds can produce several optical effects like halos around the Sun and Moon. Halos are caused by interaction of the light with hexagonal ice crystals present in the clouds which, depending on their shape and orientation, can result in a wide variety of white and colored rings, arcs and spots in the sky, including sun dogs, the 46° halo, the 22° halo, and circumhorizontal arcs. Circumhorizontal arcs are only visible when the Sun rises higher than 58° above the horizon, preventing observers at higher latitudes from ever being able to see them.
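The 58° threshold implies a latitude cutoff, since the Sun's maximum possible elevation at a northern latitude L is roughly 90° − |L − 23.44°|, reached at the summer solstice. This sketch (hypothetical function names; simplified northern-hemisphere geometry assumed) shows why observers poleward of roughly 55° N can never see a circumhorizontal arc:

```python
# Illustrative geometry for circumhorizontal arc visibility.
# Assumes a northern latitude and ignores refraction; axial tilt
# taken as 23.44 degrees.

def max_solar_elevation(lat_deg: float) -> float:
    """Approximate maximum solar elevation (degrees) at a northern latitude."""
    return 90.0 - abs(lat_deg - 23.44)

def arc_possible(lat_deg: float) -> bool:
    """Can a circumhorizontal arc ever be seen at this latitude?"""
    return max_solar_elevation(lat_deg) > 58.0
```

Under these assumptions the cutoff falls near 55.4° N: `arc_possible(55.0)` is true, while `arc_possible(56.0)` is false.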
More rarely, cirrus clouds are capable of producing glories, more commonly associated with liquid water-based clouds such as stratus. A glory is a set of concentric, faintly-colored glowing rings that appear around the shadow of the observer, and are best observed from a high viewpoint or from a plane. Cirrus clouds only form glories when the constituent ice crystals are aspherical; researchers suggest that the ice crystals must be between 0.009 millimeters and 0.015 millimeters in length for a glory to appear.
== Relation to other clouds ==
Cirrus clouds are one of three different genera of high-level clouds, all of which are given the prefix "cirro-". The other two genera are cirrocumulus and cirrostratus. High-level clouds usually form above 6,100 m (20,000 ft). Cirrocumulus and cirrostratus are sometimes informally referred to as cirriform clouds because of their frequent association with cirrus.
In the intermediate range, from 2,000 to 6,100 m (6,500 to 20,000 ft), are the mid-level clouds, which are given the prefix "alto-". They comprise two genera, altostratus and altocumulus. These clouds are formed from ice crystals, supercooled water droplets, or liquid water droplets.
Low-level clouds usually form below 2,000 m (6,500 ft) and do not have a prefix. The two genera that are strictly low-level are stratus and stratocumulus. These clouds are composed of water droplets, except during winter when they are formed of supercooled water droplets or ice crystals if the temperature at cloud level is below freezing. Three additional genera usually form in the low-altitude range, but may be based at higher levels under conditions of very low humidity: cumulus, cumulonimbus, and nimbostratus. These are sometimes classified separately as clouds of vertical development, especially when their tops are high enough to be composed of supercooled water droplets or ice crystals.
=== Cirrocumulus ===
Cirrocumulus clouds form in sheets or patches and do not cast shadows. They commonly appear in regular, rippling patterns or in rows of clouds with clear areas between. Cirrocumulus are, like other members of the cumuliform category, formed via convective processes. Significant growth of these patches indicates high-altitude instability and can signal the approach of poorer weather. The ice crystals in the bottoms of cirrocumulus clouds tend to be in the form of hexagonal cylinders. They are not solid, but instead tend to have stepped funnels coming in from the ends. Towards the top of the cloud, these crystals have a tendency to clump together. These clouds do not last long, and they tend to change into cirrus because as the water vapor continues to deposit on the ice crystals, they eventually begin to fall, destroying the upward convection. The cloud then dissipates into cirrus. Cirrocumulus clouds come in four species: stratiformis, lenticularis, castellanus, and floccus. They are iridescent when the constituent supercooled water droplets are all about the same size.
=== Cirrostratus ===
Cirrostratus clouds can appear as a milky sheen in the sky or as a striated sheet. They are sometimes similar to altostratus and are distinguishable from the latter because the Sun or Moon is always clearly visible through transparent cirrostratus, in contrast to altostratus which tends to be opaque or translucent. Cirrostratus come in two species, fibratus and nebulosus. The ice crystals in these clouds vary depending upon the height in the cloud. Towards the bottom, at temperatures of around −35 to −45 °C (−31 to −49 °F), the crystals tend to be long, solid, hexagonal columns. Towards the top of the cloud, at temperatures of around −47 to −52 °C (−53 to −62 °F), the predominant crystal types are thick, hexagonal plates and short, solid, hexagonal columns. These clouds commonly produce halos, and sometimes the halo is the only indication that such clouds are present. They are formed by warm, moist air being lifted slowly to a very high altitude. When a warm front approaches, cirrostratus clouds become thicker and descend forming altostratus clouds, and rain usually begins 12 to 24 hours later.
== Other planets ==
Cirrus clouds have been observed on several other planets. In 2008, NASA's Phoenix lander captured a time-lapse sequence of cirrus clouds moving across the Martian sky and probed them with its lidar instrument. Near the end of its mission, the Phoenix lander detected more thin clouds close to the north pole of Mars. Over the course of several days, they thickened, lowered, and eventually began snowing. The total precipitation was only a few thousandths of a millimeter. James Whiteway from York University concluded that "precipitation is a component of the [Martian] hydrologic cycle". These clouds formed during the Martian night in two layers, one around 4,000 m (13,000 ft) above ground and the other at surface level. They lasted through early morning before being burned away by the Sun. The crystals in these clouds were formed at a temperature of −65 °C (−85 °F), and they were shaped roughly like ellipsoids 0.127 millimeters long and 0.042 millimeters wide.
On Jupiter, cirrus clouds are composed of ammonia. When Jupiter's South Equatorial Belt disappeared, one hypothesis put forward by Glenn Orton was that a large quantity of ammonia cirrus clouds had formed above it, hiding it from view. NASA's Cassini probe detected these clouds on Saturn and thin water-ice cirrus on Saturn's moon Titan. Cirrus clouds composed of methane ice exist on Uranus. On Neptune, thin wispy clouds which could possibly be cirrus have been detected over the Great Dark Spot. As on Uranus, these are probably methane crystals.
Interstellar cirrus clouds are composed of tiny dust grains smaller than a micrometer and are therefore not true cirrus clouds, which are composed of frozen crystals. They range from a few light years to dozens of light years across. While they are not technically cirrus clouds, the dust clouds are referred to as "cirrus" because of their similarity to the clouds on Earth. They emit infrared radiation, similar to the way cirrus clouds on Earth reflect heat being radiated out into space.
== Notes ==
== References ==
Footnotes
Bibliography |
Civil aviation | Civil aviation is one of two major categories of flying, representing all non-military and non-state aviation, which can be both private and commercial. Most countries in the world are members of the International Civil Aviation Organization and work together to establish common Standards and Recommended Practices for civil aviation through that agency.
Civil aviation includes three major categories:
Commercial air transport, including scheduled and non-scheduled passenger and cargo flights
Aerial work, in which an aircraft is used for specialized services such as agriculture, photography, surveying, search and rescue, etc.
General aviation (GA), including all other civil flights, private or commercial
Although scheduled air transport is the larger operation in terms of passenger numbers, GA is larger in the number of flights (and flight hours, in the U.S.). In the U.S., GA carries 166 million passengers each year, more than any individual airline, though less than all the airlines combined. Since 2004, the U.S. airlines combined have carried over 600 million passengers each year, and in 2014, they carried a combined 662,819,232 passengers.
Some countries also make a regulatory distinction based on whether aircraft are flown for hire, like:
Commercial aviation includes most or all flying done for hire, particularly scheduled service on airlines; and
Private aviation includes pilots flying for their own purposes (recreation, business meetings, etc.) without receiving any kind of remuneration.
All scheduled air transport is commercial, but general aviation can be either commercial or private. Normally, the pilot, aircraft, and operator must all be authorized to perform commercial operations through separate commercial licensing, registration, and operation certificates.
Non-civil aviation is referred to as state aviation. This includes military aviation, state VIP transports, and police/customs aircraft.
== History ==
=== Postwar aviation ===
After World War II, commercial aviation grew rapidly, using mostly ex-military pilots to transport people and cargo. Factories that had produced bombers were quickly adapted to the production of passenger aircraft like the Douglas DC-4. This growth was accelerated by the establishment of military airports throughout the world, either for combat use or training; these could easily be turned to civil aviation use. The first commercial jet airliner to fly was the British de Havilland DH.106 Comet. By 1952, the British state airline British Overseas Airways Corporation had introduced the Comet into scheduled service. While it was a technical achievement, the airplane suffered a series of highly publicized failures, as the shape of the windows led to cracks due to metal fatigue. By the time the problems were overcome, other jet airliner designs such as the Boeing 707 had already entered service.
== Civil aviation authorities ==
The Chicago Convention on International Civil Aviation was originally established in 1944; it states that signatories should collectively work to harmonize and standardize the use of airspace for safety, efficiency and regularity of air transport. Each signatory country, of which there are at least 193, has a civil aviation authority (such as the Federal Aviation Administration in the United States) to oversee the following areas of civil aviation:
Personnel licensing — regulating the basic training and issuance of licenses and certificates.
Flight operations — carrying out safety oversight of commercial operators.
Airworthiness — issuing certificates of registration and certificates of airworthiness to civil aircraft, and overseeing the safety of aircraft maintenance organizations.
Aerodromes — designing and constructing aerodrome facilities.
Air traffic services — managing the traffic inside of a country's airspace.
== Statistics ==
The World Bank lists steadily growing numbers of passengers transported per year worldwide, with a preliminary all-time high in 2015 of 3.44 billion passengers. Likewise, the number of registered carrier departures worldwide peaked in 2015 with almost 33 million takeoffs. In the U.S. alone, the passenger miles "computed by summing the products of the aircraft-miles flown on each inter airport segment multiplied by the number of passengers carried on that segment" reached 607,772 million miles (978,114×10^6 km) in 2014 (compared to highway car traffic with 4,371,706 million miles (7,035,579×10^6 km)). The global seasonally adjusted revenue passenger kilometers per month peaked at more than 550 billion kilometres (about 6.6 trillion per year, corresponding to roughly 2,000 km per passenger) in January 2016, a 7% rise over one year. Passenger numbers are distinctly more volatile than general economic indicators; global political, economic or health crises have an amplifying effect.
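The revenue-passenger-kilometre arithmetic in this paragraph can be verified directly; all numbers are the round figures from the text:

```python
# Quick check of the revenue passenger kilometre (RPK) figures quoted
# above; variable names are illustrative.
rpk_per_month = 550e9            # > 550 billion km per month (Jan 2016)
rpk_per_year = rpk_per_month * 12
passengers_per_year = 3.44e9     # World Bank 2015 passenger count
km_per_passenger = rpk_per_year / passengers_per_year

assert abs(rpk_per_year - 6.6e12) < 1.0   # "~6.6 trillion per year"
assert 1800 < km_per_passenger < 2100     # "roughly 2,000 km per passenger"
```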
== See also ==
Air travel
Military aviation
Private aviation
CUNY Aviation Institute
International Civil Aviation Organization
National Aviation Intelligence Integration Office
== References ==
== External links ==
International Civil Aviation Organization (ICAO) — the U.N. agency responsible for civil aviation |
Climate change mitigation | Climate change mitigation (or decarbonisation) is action to limit the greenhouse gases in the atmosphere that cause climate change. Climate change mitigation actions include conserving energy and replacing fossil fuels with clean energy sources. Secondary mitigation strategies include changes to land use and removing carbon dioxide (CO2) from the atmosphere. Current climate change mitigation policies are insufficient as they would still result in global warming of about 2.7 °C by 2100, significantly above the 2015 Paris Agreement's goal of limiting global warming to below 2 °C.
Solar energy and wind power can replace fossil fuels at the lowest cost compared to other renewable energy options. The availability of sunshine and wind is variable and can require electrical grid upgrades, such as using long-distance electricity transmission to group a range of power sources. Energy storage can also be used to even out power output, and demand management can limit power use when power generation is low. Cleanly generated electricity can usually replace fossil fuels for powering transportation, heating buildings, and running industrial processes. Certain processes are more difficult to decarbonise, such as air travel and cement production. Carbon capture and storage (CCS) can be an option to reduce net emissions in these circumstances, although fitting fossil fuel power plants with CCS technology is currently a high-cost climate change mitigation strategy.
Human land use changes such as agriculture and deforestation cause about a quarter of climate change. These changes impact how much CO2 is absorbed by plant matter and how much organic matter decays or burns to release CO2. These changes are part of the fast carbon cycle, whereas fossil fuels release CO2 that was buried underground as part of the slow carbon cycle. Methane is a short-lived greenhouse gas that is produced by decaying organic matter and livestock, as well as fossil fuel extraction. Land use changes can also impact precipitation patterns and the reflectivity of the surface of the Earth. It is possible to cut emissions from agriculture by reducing food waste, switching to a more plant-based diet (also referred to as a low-carbon diet), and by improving farming processes.
Various policies can encourage climate change mitigation. Carbon pricing systems have been set up that either tax CO2 emissions or cap total emissions and trade emission credits. Fossil fuel subsidies can be eliminated in favour of clean energy subsidies, and incentives offered for installing energy efficiency measures or switching to electric power sources. Another issue is overcoming environmental objections when constructing new clean energy sources and making grid modifications. Limiting climate change by reducing greenhouse gas emissions or removing greenhouse gases from the atmosphere could be supplemented by climate technologies such as solar radiation management (or solar geoengineering). Complementary climate change actions, including climate activism, have a focus on political and cultural aspects.
== Definitions and scope ==
Climate change mitigation aims to sustain ecosystems to maintain human civilisation. This requires drastic cuts in greenhouse gas emissions. The Intergovernmental Panel on Climate Change (IPCC) defines mitigation (of climate change) as "a human intervention to reduce emissions or enhance the sinks of greenhouse gases".
It is possible to approach various mitigation measures in parallel. This is because there is no single pathway to limit global warming to 1.5 or 2 °C.: 109 There are four types of measures:
Sustainable energy and sustainable transport
Energy conservation, including efficient energy use
Sustainable agriculture and green industrial policy
Enhancing carbon sinks and carbon dioxide removal (CDR), including carbon sequestration
The IPCC defined carbon dioxide removal as "Anthropogenic activities removing carbon dioxide (CO2) from the atmosphere and durably storing it in geological, terrestrial, or ocean reservoirs, or in products. It includes existing and potential anthropogenic enhancement of biological or geochemical CO2 sinks and direct air carbon dioxide capture and storage (DACCS) but excludes natural CO2 uptake not directly caused by human activities."
== Emission trends and pledges ==
Greenhouse gas emissions from human activities strengthen the greenhouse effect. This contributes to climate change. Most is carbon dioxide from burning fossil fuels: coal, oil, and natural gas. Human-caused emissions have increased atmospheric carbon dioxide by about 50% over pre-industrial levels. Emissions in the 2010s averaged a record 56 billion tons (Gt) a year. In 2016, energy for electricity, heat and transport was responsible for 73.2% of GHG emissions. Direct industrial processes accounted for 5.2%, waste for 3.2% and agriculture, forestry and land use for 18.4%.
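The sector shares quoted above are intended to partition total 2016 emissions, which can be sanity-checked with a line of arithmetic (an illustrative check only, using the figures as stated):

```python
# Sector shares of 2016 GHG emissions quoted above (percent of total).
sector_shares = {
    "energy (electricity, heat, transport)": 73.2,
    "direct industrial processes": 5.2,
    "waste": 3.2,
    "agriculture, forestry and land use": 18.4,
}

# The four categories together should account for all emissions.
total = sum(sector_shares.values())
print(f"total: {total:.1f}%")
```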
Electricity generation and transport are major emitters. The largest single source is coal-fired power stations with 20% of greenhouse gas emissions. Deforestation and other changes in land use also emit carbon dioxide and methane. The largest sources of anthropogenic methane emissions are agriculture, and gas venting and fugitive emissions from the fossil-fuel industry. The largest agricultural methane source is livestock. Agricultural soils emit nitrous oxide, partly due to fertilizers. There is now a political solution to the problem of fluorinated gases from refrigerants. This is because many countries have ratified the Kigali Amendment.
Carbon dioxide (CO2) is the dominant emitted greenhouse gas. Methane (CH4) emissions have almost the same short-term impact. Nitrous oxide (N2O) and fluorinated gases (F-Gases) play a minor role. Livestock and manure produce 5.8% of all greenhouse gas emissions. But this depends on the time frame used to calculate the global warming potential of the respective gas.
Greenhouse gas (GHG) emissions are measured in CO2 equivalents. Scientists determine their CO2 equivalents from their global warming potential (GWP). This depends on their lifetime in the atmosphere. There are widely used greenhouse gas accounting methods that convert volumes of methane, nitrous oxide and other greenhouse gases to carbon dioxide equivalents. Estimates largely depend on the ability of oceans and land sinks to absorb these gases. Short-lived climate pollutants (SLCPs) persist in the atmosphere for a period ranging from days to 15 years. Carbon dioxide can remain in the atmosphere for millennia. Short-lived climate pollutants include methane, hydrofluorocarbons (HFCs), tropospheric ozone and black carbon.
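The CO2-equivalent accounting described above multiplies each gas by its global warming potential. A minimal sketch, using illustrative 100-year GWP values (the exact factors differ between IPCC assessment reports; these are roughly AR5-era figures):

```python
# Illustrative 100-year global warming potentials (GWP100).
# Exact values vary by IPCC assessment report.
GWP100 = {
    "CO2": 1,      # reference gas
    "CH4": 28,     # methane: short-lived but potent
    "N2O": 265,    # nitrous oxide
}

def co2_equivalent(emissions_t):
    """Convert a dict of {gas: tonnes emitted} to tonnes of CO2-equivalent."""
    return sum(mass * GWP100[gas] for gas, mass in emissions_t.items())

# Example: under GWP100, 1 tonne of methane counts as 28 t CO2e.
print(co2_equivalent({"CO2": 100, "CH4": 1}))  # 128
```

The choice of time horizon matters: over 20 years methane's GWP is roughly three times higher, which is why the text notes that rankings depend on the time frame used.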
Scientists increasingly use satellites to locate and measure greenhouse gas emissions and deforestation. Earlier, scientists largely relied on or calculated estimates of greenhouse gas emissions and governments' self-reported data.
=== Needed emissions cuts ===
The annual "Emissions Gap Report" by UNEP stated in 2022 that it was necessary to almost halve emissions. "To get on track for limiting global warming to 1.5°C, global annual GHG emissions must be reduced by 45 per cent compared with emissions projections under policies currently in place in just eight years, and they must continue to decline rapidly after 2030, to avoid exhausting the limited remaining atmospheric carbon budget.": xvi The report commented that the world should focus on broad-based economy-wide transformations and not incremental change.: xvi
In 2022, the Intergovernmental Panel on Climate Change (IPCC) released its Sixth Assessment Report on climate change. It warned that greenhouse gas emissions must peak before 2025 at the latest and decline 43% by 2030 to have a good chance of limiting global warming to 1.5 °C (2.7 °F). Or in the words of Secretary-General of the United Nations António Guterres: "Main emitters must drastically cut emissions starting this year".
A 2023 synthesis by leading climate scientists highlighted ten critical areas in climate science with significant policy implications. These include the near inevitability of temporarily exceeding the 1.5 °C warming limit, the urgent need for a rapid and managed fossil fuel phase-out, challenges in scaling carbon dioxide removal technologies, uncertainties regarding the future contribution of natural carbon sinks, and the interconnected crises of biodiversity loss and climate change. These insights underscore the necessity for immediate and comprehensive mitigation strategies to address the multifaceted challenges of climate change.
=== Pledges ===
Climate Action Tracker described the situation on 9 November 2021 as follows. The global temperature will rise by 2.7 °C by the end of the century with current policies and by 2.9 °C with nationally adopted policies. The temperature will rise by 2.4 °C if countries only implement the pledges for 2030. The rise would be 2.1 °C with the achievement of the long-term targets too. Full achievement of all announced targets would mean the rise in global temperature will peak at 1.9 °C and go down to 1.8 °C by the year 2100. Experts gather information about climate pledges in the Global Climate Action Portal - Nazca. The scientific community is checking their fulfilment.
There has not been a definitive or detailed evaluation of most goals set for 2020. But it appears the world failed to meet most or all international goals set for that year.
One update came during the 2021 United Nations Climate Change Conference in Glasgow. The group of researchers running the Climate Action Tracker looked at countries responsible for 85% of greenhouse gas emissions. It found that only four countries or political entities—the EU, UK, Chile and Costa Rica—have published a detailed official policy‑plan that describes the steps to realise 2030 mitigation targets. These four polities are responsible for 6% of global greenhouse gas emissions.
In 2021 the US and EU launched the Global Methane Pledge to cut methane emissions by 30% by 2030. The UK, Argentina, Indonesia, Italy and Mexico joined the initiative. Ghana and Iraq signalled interest in joining. A White House summary of the meeting noted those countries represent six of the top 15 methane emitters globally. Israel also joined the initiative.
== Low-carbon energy ==
The energy system includes the delivery and use of energy. It is the main emitter of carbon dioxide (CO2).: 6–6 Rapid and deep reductions in the carbon dioxide and other greenhouse gas emissions from the energy sector are necessary to limit global warming to well below 2 °C.: 6–3 IPCC recommendations include reducing fossil fuel consumption, increasing production from low- and zero carbon energy sources, and increasing use of electricity and alternative energy carriers.: 6–3
Nearly all scenarios and strategies involve a major increase in the use of renewable energy in combination with increased energy efficiency measures.: xxiii It will be necessary to accelerate the deployment of renewable energy six-fold from 0.25% annual growth in 2015 to 1.5% to keep global warming under 2 °C.
The competitiveness of renewable energy is a key to a rapid deployment. In 2020, onshore wind and solar photovoltaics were the cheapest source for new bulk electricity generation in many regions. Renewables may have higher storage costs but non-renewables may have higher clean-up costs. A carbon price can increase the competitiveness of renewable energy.
=== Solar and wind energy ===
Wind and sun can provide large amounts of low-carbon energy at competitive production costs. The IPCC estimates that these two mitigation options have the largest potential to reduce emissions before 2030 at low cost.: 43
Solar photovoltaics (PV) has become the cheapest way to generate electricity in many regions of the world. The growth of photovoltaics has been close to exponential. It has about doubled every three years since the 1990s. A different technology is concentrated solar power (CSP). This uses mirrors or lenses to concentrate a large area of sunlight on to a receiver. With CSP, the energy can be stored for a few hours. This provides supply in the evening. Solar water heating doubled between 2010 and 2019.
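The "doubled every three years" growth pattern above implies a compound annual growth rate, which is easy to back out (a rough illustration of the arithmetic, not a forecast):

```python
# If installed PV capacity doubles every 3 years, the implied
# compound annual growth rate is 2**(1/3) - 1, about 26% per year.
doubling_period_years = 3
annual_growth = 2 ** (1 / doubling_period_years) - 1
print(f"implied annual growth: {annual_growth:.1%}")  # about 26% per year

# Sanity check: three years of that growth doubles capacity.
capacity = 1.0
for _ in range(doubling_period_years):
    capacity *= 1 + annual_growth
print(round(capacity, 6))
```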
Regions in the higher northern and southern latitudes have the greatest potential for wind power. Offshore wind farms are more expensive. But offshore units deliver more energy per installed capacity with less fluctuations. In most regions, wind power generation is higher in the winter when PV output is low. For this reason, combinations of wind and solar power lead to better-balanced systems.
=== Other renewables ===
Other well-established renewable energy forms include hydropower, bioenergy and geothermal energy.
Hydroelectricity is electricity generated by hydropower and plays a leading role in countries like Brazil, Norway and China, but there are geographical limits and environmental issues. Tidal power can be used in coastal regions.
Bioenergy can provide energy for electricity, heat and transport. Bioenergy, in particular biogas, can provide dispatchable electricity generation. While burning plant-derived biomass releases CO2, the plants withdraw CO2 from the atmosphere while they grow. The technologies for producing, transporting and processing a fuel have a significant impact on the lifecycle emissions of the fuel. For example, aviation is starting to use renewable biofuels.
Geothermal power is electrical power generated from geothermal energy. Geothermal electricity generation is currently used in 26 countries. Geothermal heating is in use in 70 countries.
=== Integrating variable renewable energy ===
Wind and solar power production does not consistently match demand. To deliver reliable electricity from variable renewable energy sources such as wind and solar, electrical power systems must be flexible. Most electrical grids were constructed for non-intermittent energy sources such as coal-fired power plants. The integration of larger amounts of solar and wind energy into the grid requires a change of the energy system; this is necessary to ensure that the supply of electricity matches demand.
There are various ways to make the electricity system more flexible. In many places, wind and solar generation are complementary on a daily and a seasonal scale. There is more wind during the night and in winter when solar energy production is low. Linking different geographical regions through long-distance transmission lines also makes it possible to reduce variability. It is possible to shift energy demand in time. Energy demand management and the use of smart grids make it possible to match flexible demand to the times when variable energy production is highest. Sector coupling can provide further flexibility. This involves coupling the electricity sector to the heat and mobility sector via power-to-heat-systems and electric vehicles.
Energy storage helps overcome barriers to intermittent renewable energy. The most commonly used and available storage method is pumped-storage hydroelectricity. This requires locations with large differences in height and access to water. Batteries are also in wide use. They typically store electricity for short periods. Batteries have low energy density. This and their cost makes them impractical for the large energy storage necessary to balance inter-seasonal variations in energy production. Some locations have implemented pumped hydro storage with capacity for multi-month usage.
=== Nuclear power ===
Nuclear power could complement renewables for electricity. On the other hand, environmental and security risks could outweigh the benefits. Examples of these environmental risks include the discharge of radioactive water into nearby ecosystems and the routine release of radioactive gases.
The construction of new nuclear reactors currently takes about 10 years. This is much longer than scaling up the deployment of wind and solar.: 335 And this timing gives rise to credit risks. However nuclear may be much cheaper in China. China is building a significant number of new power plants. As of 2019 the cost of extending nuclear power plant lifetimes is competitive with other electricity generation technologies if long term costs for nuclear waste disposal are excluded from the calculation. There is also insufficient financial insurance coverage for nuclear accidents.
=== Replacing coal with natural gas ===
== Demand reduction ==
Reducing demand for products and services that cause greenhouse gas emissions can help in mitigating climate change. There are three broad ways to do this. One is to reduce demand through behavioural and cultural changes, for example by changing diets; reducing meat consumption in particular is an effective action individuals can take to fight climate change. Another is to reduce demand by improving infrastructure, for example by building a good public transport network. Lastly, changes in end-use technology can reduce energy demand; for instance, a well-insulated house emits less than a poorly insulated house.: 119
Mitigation options that reduce demand for products or services help people make personal choices to reduce their carbon footprint. This could be in their choice of transport or food.: 5–3 So these mitigation options have many social aspects that focus on demand reduction; they are therefore demand-side mitigation actions. For example, people with high socio-economic status often cause more greenhouse gas emissions than those from a lower status. If they reduce their emissions and promote green policies, these people could become low-carbon lifestyle role models.: 5–4 However, there are many psychological variables that influence consumers. These include awareness and perceived risk.
Government policies can support or hinder demand-side mitigation options. For example, public policy can promote circular economy concepts which would support climate change mitigation.: 5–6 Reducing greenhouse gas emissions is linked to the sharing economy.
There is a debate regarding the correlation of economic growth and emissions. It seems economic growth no longer necessarily means higher emissions.
A 2024 article in Nature Climate Change emphasises the importance of integrating behavioural science into climate change mitigation strategies. The article presents six key recommendations aimed at improving individual and collective actions toward reducing greenhouse gas emissions, including overcoming barriers to research, fostering cross-disciplinary collaborations, and promoting practical behaviour-oriented solutions. These insights suggest that behavioural science plays a crucial role alongside technological and policy measures in addressing climate change.
=== Energy conservation and efficiency ===
Global primary energy demand exceeded 161,000 terawatt hours (TWh) in 2018. This refers to electricity, transport and heating including all losses. In transport and electricity production, fossil fuel usage has a low efficiency of less than 50%. Large amounts of heat in power plants and in motors of vehicles go to waste. The actual amount of energy consumed is significantly lower at 116,000 TWh.
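The gap between the two figures quoted above is the conversion losses, and the implied system-wide efficiency is a one-line calculation (illustrative arithmetic using the figures as stated):

```python
# Figures quoted above for 2018, in terawatt hours (TWh).
primary_demand_twh = 161_000     # all energy extracted, including losses
final_consumption_twh = 116_000  # energy actually delivered to end uses

losses_twh = primary_demand_twh - final_consumption_twh
efficiency = final_consumption_twh / primary_demand_twh
print(f"losses: {losses_twh} TWh, overall efficiency: {efficiency:.0%}")
```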
Energy conservation is the effort made to reduce the consumption of energy by using less of an energy service. One way is to use energy more efficiently. This means using less energy than before to produce the same service. Another way is to reduce the amount of service used. An example of this would be to drive less. Energy conservation is at the top of the sustainable energy hierarchy. When consumers reduce wastage and losses they can conserve energy. The upgrading of technology as well as the improvements to operations and maintenance can result in overall efficiency improvements.
Efficient energy use (or energy efficiency) is the process of reducing the amount of energy required to provide products and services. Improved energy efficiency in buildings ("green buildings"), industrial processes and transportation could reduce the world's energy needs in 2050 by one third. This would help reduce global emissions of greenhouse gases. For example, insulating a building allows it to use less heating and cooling energy to achieve and maintain thermal comfort. Improvements in energy efficiency are generally achieved by adopting a more efficient technology or production process. Another way is to use commonly accepted methods to reduce energy losses.
=== Lifestyle changes ===
Individual action on climate change can include personal choices in many areas. These include diet, travel, household energy use, consumption of goods and services, and family size. People who wish to reduce their carbon footprint can take high-impact actions such as avoiding frequent flying and petrol-fuelled cars, eating mainly a plant-based diet, having fewer children, using clothes and electrical products for longer, and electrifying homes. These approaches are more practical for people in high-income countries with high-consumption lifestyles. Naturally, it is more difficult for those with lower income statuses to make these changes. This is because choices like electric-powered cars may not be available. Excessive consumption is more to blame for climate change than population increase. High-consumption lifestyles have a greater environmental impact, with the richest 10% of people emitting about half the total lifestyle emissions.
=== Dietary change ===
Some scientists say that avoiding meat and dairy foods is the single biggest way an individual can reduce their environmental impact. The widespread adoption of a vegetarian diet could cut food-related greenhouse gas emissions by 63% by 2050. China introduced new dietary guidelines in 2016 which aim to cut meat consumption by 50% and thereby reduce greenhouse gas emissions by 1 Gt per year by 2030. Overall, food accounts for the largest share of consumption-based greenhouse gas emissions. It is responsible for nearly 20% of the global carbon footprint. Almost 15% of all anthropogenic greenhouse gas emissions have been attributed to the livestock sector.
A shift towards plant-based diets would help to mitigate climate change. In particular, reducing meat consumption would help to reduce methane emissions. If high-income nations switched to a plant-based diet, vast amounts of land used for animal agriculture could be allowed to return to their natural state. This in turn has the potential to sequester 100 billion tonnes of CO2 by the end of the century. A comprehensive analysis found that plant-based diets reduce emissions, water pollution and land use significantly (by 75%), while also reducing the destruction of wildlife and the use of water.
=== Family size ===
Population growth has resulted in higher greenhouse gas emissions in most regions, particularly Africa.: 6–11 However, economic growth has a bigger effect than population growth.: 6–622 Rising incomes, changes in consumption and dietary patterns, as well as population growth, cause pressure on land and other natural resources. This leads to more greenhouse gas emissions and fewer carbon sinks.: 117 Some scholars have argued that humane policies to slow population growth should be part of a broad climate response together with policies that end fossil fuel use and encourage sustainable consumption. Advances in female education and reproductive health, especially voluntary family planning, can contribute to reducing population growth.: 5–35
== Preserving and enhancing carbon sinks ==
An important mitigation measure is "preserving and enhancing carbon sinks". This refers to the management of Earth's natural carbon sinks in a way that preserves or increases their capability to remove CO2 from the atmosphere and to store it durably. Scientists also call this process carbon sequestration. In the context of climate change mitigation, the IPCC defines a sink as "Any process, activity or mechanism which removes a greenhouse gas, an aerosol or a precursor of a greenhouse gas from the atmosphere".: 2249 Globally, the two most important carbon sinks are vegetation and the ocean.
To enhance the ability of ecosystems to sequester carbon, changes are necessary in agriculture and forestry. Examples are preventing deforestation and restoring natural ecosystems by reforestation.: 266 Scenarios that limit global warming to 1.5 °C typically project the large-scale use of carbon dioxide removal methods over the 21st century.: 1068 : 17 There are concerns about over-reliance on these technologies, and their environmental impacts.: 17 : 34 But ecosystem restoration and reduced conversion are among the mitigation tools that can yield the most emissions reductions before 2030.: 43
Land-based mitigation options are referred to as "AFOLU mitigation options" in the 2022 IPCC report on mitigation. The abbreviation stands for "agriculture, forestry and other land use": 37 The report described the economic mitigation potential from relevant activities around forests and ecosystems as follows: "the conservation, improved management, and restoration of forests and other ecosystems (coastal wetlands, peatlands, savannas and grasslands)". A high mitigation potential is found for reducing deforestation in tropical regions. The economic potential of these activities has been estimated to be 4.2 to 7.4 gigatonnes of carbon dioxide equivalent (GtCO2 -eq) per year.: 37
=== Forests ===
==== Conservation ====
The Stern Review on the economics of climate change stated in 2007 that curbing deforestation was a highly cost-effective way of reducing greenhouse gas emissions. About 95% of deforestation occurs in the tropics, where clearing of land for agriculture is one of the main causes. One forest conservation strategy is to transfer rights over land from public ownership to its indigenous inhabitants. Land concessions often go to powerful extractive companies. Conservation strategies that exclude and even evict humans, called fortress conservation, often lead to more exploitation of the land. This is because the native inhabitants turn to work for extractive companies to survive.
Proforestation is the practice of protecting existing forests so that they can grow to their full ecological potential. This is a mitigation strategy as secondary forests that have regrown in abandoned farmland are found to have less biodiversity than the original old-growth forests. Original forests store 60% more carbon than these new forests. Strategies include rewilding and establishing wildlife corridors.
==== Afforestation and reforestation ====
Afforestation is the establishment of trees where there was previously no tree cover. Scenarios for new plantations covering up to 4000 million hectares (Mha) (6300 x 6300 km) suggest cumulative carbon storage of more than 900 GtC (3300 GtCO2) until 2100. But they are not a viable alternative to aggressive emissions reduction. This is because the plantations would need to be so large they would eliminate most natural ecosystems or reduce food production. One example is the Trillion Tree Campaign. However, preserving biodiversity is also important and for example not all grasslands are suitable for conversion into forests. Grasslands can even turn from carbon sinks to carbon sources.
Reforestation is the restocking of existing depleted forests or in places where there were recently forests. Reforestation could save at least 1 GtCO2 per year, at an estimated cost of $5–15 per tonne of carbon dioxide (tCO2). Restoring all degraded forests all over the world could capture about 205 GtC (750 GtCO2). With increased intensive agriculture and urbanisation, there is an increase in the amount of abandoned farmland. By some estimates, for every acre of original old-growth forest cut down, more than 50 acres of new secondary forests are growing. In some countries, promoting regrowth on abandoned farmland could offset years of emissions.
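The paired figures in these paragraphs (GtC and GtCO2) are related by the molar-mass ratio of CO2 to carbon, 44/12. A quick check against the restored-forest figure quoted above:

```python
# Converting a mass of carbon (GtC) to a mass of carbon dioxide (GtCO2):
# one tonne of C becomes 44/12 tonnes of CO2 (molar masses 44 and 12 g/mol).
C_TO_CO2 = 44 / 12  # about 3.67

def gtc_to_gtco2(gtc):
    return gtc * C_TO_CO2

# 205 GtC of potential restored-forest storage corresponds to
# roughly 750 GtCO2, as quoted above.
print(round(gtc_to_gtco2(205)))  # 752
```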
Planting new trees can be expensive and a risky investment. For example, about 80 per cent of planted trees in the Sahel die within two years. Reforestation has higher carbon storage potential than afforestation. Even long-deforested areas still contain an "underground forest" of living roots and tree stumps. Helping native species sprout naturally is cheaper than planting new trees and they are more likely to survive. This could include pruning and coppicing to accelerate growth. This also provides woodfuel, which is otherwise a major source of deforestation. Such practices, called farmer-managed natural regeneration, are centuries old but the biggest obstacle towards implementation is ownership of the trees by the state. The state often sells timber rights to businesses which leads to locals uprooting seedlings because they see them as a liability. Legal aid for locals and changes to property law such as in Mali and Niger have led to significant changes. Scientists describe them as the largest positive environmental transformation in Africa. It is possible to discern from space the border between Niger and the more barren land in Nigeria, where the law has not changed.
=== Soils ===
There are many measures to increase soil carbon. This makes it complex and hard to measure and account for. One advantage is that there are fewer trade-offs for these measures than for BECCS or afforestation, for example.
Globally, protecting healthy soils and restoring the soil carbon sponge could remove 7.6 billion tonnes of carbon dioxide from the atmosphere annually. This is more than the annual emissions of the US. Trees capture CO2 while growing above ground and exuding larger amounts of carbon below ground. Trees contribute to the building of a soil carbon sponge. Carbon formed above ground is released as CO2 immediately when wood is burned. If dead wood remains untouched, only some of the carbon returns to the atmosphere as decomposition proceeds.
Farming can deplete soil carbon and render soil incapable of supporting life. However, conservation farming can protect carbon in soils, and repair damage over time. The farming practice of cover crops is a form of carbon farming. Methods that enhance carbon sequestration in soil include no-till farming, residue mulching and crop rotation. Scientists have described the best management practices for European soils to increase soil organic carbon. These are conversion of arable land to grassland, straw incorporation, reduced tillage, straw incorporation combined with reduced tillage, ley cropping system and cover crops.
Another mitigation option is the production of biochar and its storage in soils. Biochar is the solid material that remains after the pyrolysis of biomass. Biochar production releases about half of the carbon from the biomass, which is either released into the atmosphere or captured with CCS, and retains the other half in the stable biochar. Biochar can endure in soil for thousands of years. It may increase the soil fertility of acidic soils and increase agricultural productivity. During the production of biochar, heat is released which may be used as bioenergy.
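The rough carbon balance of pyrolysis described above can be sketched as a simple mass-balance function; the 50% retention figure is the approximate split quoted in the text, and real yields vary with feedstock and process temperature:

```python
def biochar_carbon_balance(biomass_carbon_t, retained_fraction=0.5):
    """Split biomass carbon into the stable biochar fraction and the
    fraction released (or captured with CCS) during pyrolysis.
    retained_fraction ~0.5 is the approximate split quoted above."""
    retained = biomass_carbon_t * retained_fraction
    released = biomass_carbon_t - retained
    return retained, released

retained, released = biochar_carbon_balance(100.0)
print(retained, released)  # 50.0 50.0
```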
=== Wetlands ===
Wetland restoration is an important mitigation measure. It has moderate to great mitigation potential on a limited land area with low trade-offs and costs. Wetlands perform two important functions in relation to climate change. They can sequester carbon, converting carbon dioxide to solid plant material through photosynthesis. They also store and regulate water. Wetlands store about 45 million tonnes of carbon per year globally.
Some wetlands are a significant source of methane emissions. Some also emit nitrous oxide. Peatland globally covers just 3% of the land's surface. But it stores up to 550 gigatonnes (Gt) of carbon. This represents 42% of all soil carbon and exceeds the carbon stored in all other vegetation types, including the world's forests. The threat to peatlands includes draining the areas for agriculture. Another threat is cutting down trees for lumber, as the trees help hold and fix the peatland. Additionally, peat is often sold for compost. It is possible to restore degraded peatlands by blocking drainage channels in the peatland, and allowing natural vegetation to recover.
Mangroves, salt marshes and seagrasses make up the majority of the ocean's vegetated habitats. They only equal 0.05% of the plant biomass on land. But they store carbon 40 times faster than tropical forests. Bottom trawling, dredging for coastal development and fertiliser runoff have damaged coastal habitats. Notably, 85% of oyster reefs globally have been removed in the last two centuries. Oyster reefs clean the water and help other species thrive. This increases biomass in that area. In addition, oyster reefs mitigate the effects of climate change by reducing the force of waves from hurricanes. They also reduce the erosion from rising sea levels. Restoration of coastal wetlands is thought to be more cost-effective than restoration of inland wetlands.
=== Deep ocean ===
These options focus on the carbon which ocean reservoirs can store. They include ocean fertilization, ocean alkalinity enhancement or enhanced weathering.: 12–36 The IPCC found in 2022 that ocean-based mitigation options currently have only limited deployment potential. But it assessed that their future mitigation potential is large.: 12–4 It found that in total, ocean-based methods could remove 1–100 Gt of CO2 per year.: TS-94 Their costs are in the order of US$40–500 per tonne of CO2. Most of these options could also help to reduce ocean acidification. This is the drop in pH value caused by increased atmospheric CO2 concentrations. The recovery of whale populations can play a role in mitigation as whales play a significant part in nutrient recycling in the ocean. This occurs through what is referred to as the whale pump, where whales' liquid feces stay at the surface of the ocean. Phytoplankton live near the surface of the ocean in order to use sunlight to photosynthesize and take up much of the carbon, nitrogen and iron in the feces. As the phytoplankton form the base of the marine food chain, this increases ocean biomass and thus the amount of carbon sequestered in it.
Blue carbon management is another type of ocean-based biological carbon dioxide removal (CDR). It can involve land-based as well as ocean-based measures.: 12–51 : 764 The term usually refers to the role that tidal marshes, mangroves and seagrasses can play in carbon sequestration.: 2220 Some of these efforts can also take place in deep ocean waters. This is where the vast majority of ocean carbon is held. These ecosystems can contribute to climate change mitigation and also to ecosystem-based adaptation. Conversely, when blue carbon ecosystems are degraded or lost they release carbon back to the atmosphere.: 2220 There is increasing interest in developing blue carbon potential. Scientists have found that in some cases these types of ecosystems remove far more carbon per area than terrestrial forests. However, the long-term effectiveness of blue carbon as a carbon dioxide removal solution remains under discussion.
=== Enhanced weathering ===
Enhanced weathering could remove 2–4 Gt of CO2 per year. This process aims to accelerate natural weathering by spreading finely ground silicate rock, such as basalt, onto surfaces. This speeds up chemical reactions between rocks, water, and air. It removes carbon dioxide from the atmosphere, permanently storing it in solid carbonate minerals or ocean alkalinity. Cost estimates are in the US$50–200 per tonne range of CO2.: TS-94
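Combining the removal and cost ranges above gives a crude annual cost envelope. This is illustrative arithmetic only; real costs depend on rock sourcing, grinding energy and logistics:

```python
# Ranges quoted above: 2-4 Gt CO2 removed per year at US$50-200 per tonne.
removal_gt = (2, 4)      # Gt CO2 per year (1 Gt = 1e9 tonnes)
cost_per_t = (50, 200)   # US$ per tonne of CO2

# Annual cost bounds in US$ billions (Gt * $/t cancels to $bn).
low = removal_gt[0] * cost_per_t[0]   # 2 Gt * $50/t  -> $100 bn
high = removal_gt[1] * cost_per_t[1]  # 4 Gt * $200/t -> $800 bn
print(f"US${low}-{high} billion per year")
```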
== Other methods to capture and store CO2 ==
In addition to traditional land-based methods to remove carbon dioxide (CO2) from the air, other technologies are under development. These could reduce CO2 emissions and lower existing atmospheric CO2 levels. Carbon capture and storage (CCS) mitigates climate change by capturing CO2 from large point sources, such as cement factories or biomass power plants, and then storing it safely instead of releasing it into the atmosphere. The IPCC estimates that the costs of halting global warming would double without CCS.
Biochar soil amendment is a carbon dioxide removal method that is already being deployed commercially. Studies indicate that the carbon it contains can remain stable in soils for centuries, giving it the potential to durably remove gigatonnes of CO2 per year. Expert assessments place the net cost of removing CO2 with biochar between US$30 and $120 per tonne, and market data show that biochar supplied 94% of all durable CDR credits delivered in 2023. Stratospheric aerosol injection (SAI) is sometimes discussed alongside carbon dioxide removal, although it is a form of solar radiation modification rather than a removal method: it would reduce global temperature quickly by dispersing sulfate aerosols in the stratosphere. Deployment at a climatically relevant scale would require the design and certification of a new fleet of high-altitude aircraft, a process estimated to take a decade or more, with ongoing operating costs of about US$18 billion per year for each degree Celsius of cooling. While models indicate that SAI would lower global mean temperature, potential side effects include ozone depletion, altered regional precipitation patterns, and the risk of sudden "termination shock" warming if the programme were interrupted. These risks do not arise with biochar deployment.
Bioenergy with carbon capture and storage (BECCS) expands on the potential of CCS and aims to lower atmospheric CO2 levels. The process uses biomass grown for bioenergy. The biomass yields energy in useful forms such as electricity, heat or biofuels through combustion, fermentation, or pyrolysis. The CO2 that the biomass extracted from the atmosphere as it grew is then captured and stored underground, or applied to land as biochar. This effectively removes it from the atmosphere and makes BECCS a negative emissions technology (NET).
Scientists estimated the potential range of negative emissions from BECCS in 2018 as 0–22 Gt per year. As of 2022, BECCS was capturing approximately 2 million tonnes of CO2 per year. The cost and availability of biomass limits wide deployment of BECCS.: 10 In modelling, such as the Integrated Assessment Models (IAMs) associated with the IPCC process, BECCS currently forms a large part of achieving climate targets beyond 2050. But many scientists are sceptical due to the risk of loss of biodiversity.
Direct air capture is a process of capturing CO2 directly from the ambient air. This is in contrast to CCS which captures carbon from point sources. It generates a concentrated stream of CO2 for sequestration, utilisation or production of carbon-neutral fuel and windgas. Artificial processes vary, and there are concerns about the long-term effects of some of these processes.
== Mitigation by sector ==
=== Buildings ===
The building sector accounts for 23% of global energy-related CO2 emissions.: 141 About half of the energy is used for space and water heating. Building insulation can reduce the primary energy demand significantly. Heat pump loads may also provide a flexible resource that can participate in demand response to integrate variable renewable resources into the grid. Solar water heating uses thermal energy directly. Sufficiency measures include moving to smaller houses when the needs of households change, mixed use of spaces and the collective use of devices.: 71 Planners and civil engineers can construct new buildings using passive solar building design, low-energy building, or zero-energy building techniques. In addition, it is possible to design buildings that are more energy-efficient to cool by using lighter-coloured, more reflective materials in the development of urban areas.
Heat pumps efficiently heat buildings, and cool them by air conditioning. A modern heat pump typically transports around three to five times more thermal energy than electrical energy consumed. The amount depends on the coefficient of performance and the outside temperature.
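The relationship between electricity input, coefficient of performance (COP) and delivered heat can be shown with a short calculation. The COP values here are illustrative assumptions, not measured figures for any particular device:

```python
def heat_delivered_kwh(electricity_kwh: float, cop: float) -> float:
    """Thermal energy a heat pump delivers for a given electrical input.

    COP (coefficient of performance) = heat delivered / electricity consumed.
    It falls as the outdoor temperature drops.
    """
    return electricity_kwh * cop

# Illustrative COPs: around 4 in mild weather, around 2.5 in cold weather.
print(heat_delivered_kwh(1.0, 4.0))  # 4.0 kWh of heat per kWh of electricity
print(heat_delivered_kwh(1.0, 2.5))  # 2.5 kWh in colder conditions
```

This is why the text speaks of "three to five times more thermal energy than electrical energy consumed": the multiplier is simply the COP at the prevailing outdoor temperature.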
Refrigeration and air conditioning account for about 10% of global CO2 emissions caused by fossil fuel-based energy production and the use of fluorinated gases. Alternative cooling systems, such as passive cooling building design and passive daytime radiative cooling surfaces, can reduce air conditioning use. Suburbs and cities in hot and arid climates can significantly reduce energy consumption from cooling with daytime radiative cooling.
Energy consumption for cooling is likely to rise significantly due to increasing heat and availability of devices in poorer countries. Of the 2.8 billion people living in the hottest parts of the world, only 8% currently have air conditioners, compared with 90% of people in the US and Japan. Adoption of air conditioners typically increases in warmer areas at above $10,000 annual household income. By combining energy efficiency improvements and decarbonised electricity for air conditioning with the transition away from super-polluting refrigerants, the world could avoid cumulative greenhouse gas emissions of up to 210–460 GtCO2-eq over the next four decades. A shift to renewable energy in the cooling sector comes with two advantages: solar energy production peaks at midday, matching peak cooling demand, and cooling has a large potential for load management in the electric grid.
=== Urban planning ===
Cities emitted a combined 28 GtCO2-eq of CO2 and CH4 in 2020.: TS-61 This was from producing and consuming goods and services.: TS-61 Climate-smart urban planning aims to reduce sprawl and thereby the distances travelled. This lowers emissions from transportation. Switching from cars by improving walkability and cycling infrastructure is beneficial to a country's economy as a whole.
Urban forestry, lakes and other blue and green infrastructure can reduce emissions directly and indirectly by reducing energy demand for cooling.: TS-66 Methane emissions from municipal solid waste can be reduced by segregation, composting, and recycling.
=== Transport ===
Transportation accounts for 15% of emissions worldwide. Increasing the use of public transport, low-carbon freight transport and cycling are important components of transport decarbonisation.
Electric vehicles and environmentally friendly rail help to reduce the consumption of fossil fuels. In most cases, electric trains are more efficient than air and truck transport. Other efficiency measures include improved public transport, smart mobility, carsharing and electric hybrids. Fossil fuel use by passenger cars can be included in emissions trading. Furthermore, moving away from a car-dominated transport system towards a low-carbon, advanced public transport system is important.
Heavyweight, large personal vehicles (such as cars) require a lot of energy to move and take up much urban space. Several alternative modes of transport are available to replace these. The European Union has made smart mobility part of its European Green Deal. In smart cities, smart mobility is also important.
The World Bank is helping lower income countries buy electric buses. Their purchase price is higher than diesel buses. But lower running costs and health improvements due to cleaner air can offset this higher price.
Between one quarter and three quarters of cars on the road by 2050 are forecast to be electric vehicles. Hydrogen may be a solution for long-distance heavy freight trucks, if batteries alone are too heavy.
==== Shipping ====
In the shipping industry, the use of liquefied natural gas (LNG) as a marine bunker fuel is driven by emissions regulations. Ship operators must switch from heavy fuel oil to more expensive oil-based fuels, implement costly flue gas treatment technologies or switch to LNG engines. Methane slip, when gas leaks unburned through the engine, reduces the advantages of LNG. Maersk, the world's biggest container shipping line and vessel operator, warns of stranded assets when investing in transitional fuels like LNG. The company lists green ammonia as one of the preferred fuel types of the future. It has announced that its first carbon-neutral vessel, running on carbon-neutral methanol, would be on the water by 2023. Cruise operators are trialling partially hydrogen-powered ships.
Hybrid and all-electric ferries are suitable for short distances. Norway's goal is an all-electric fleet by 2025.
==== Air transport ====
Jet airliners contribute to climate change by emitting carbon dioxide, nitrogen oxides, contrails and particulates. Their radiative forcing is estimated at 1.3–1.4 times that of CO2 alone, excluding induced cirrus cloud. In 2018, global commercial operations generated 2.4% of all CO2 emissions.
The aviation industry has become more fuel efficient. But overall emissions have risen as the volume of air travel has increased. By 2020, aviation emissions were 70% higher than in 2005 and they could grow by 300% by 2050.
It is possible to reduce aviation's environmental footprint through better fuel economy in aircraft. Optimising flight routes to lower non-CO2 effects on climate from nitrogen oxides, particulates or contrails can also help. Aviation biofuel, carbon emission trading and carbon offsetting, part of the 191-nation ICAO's Carbon Offsetting and Reduction Scheme for International Aviation (CORSIA), can lower CO2 emissions. Short-haul flight bans, train connections, personal choices and taxation on flights can lead to fewer flights. Hybrid electric aircraft, electric aircraft or hydrogen-powered aircraft may replace fossil fuel-powered aircraft.
Experts expect emissions from aviation to rise in most projections, at least until 2040. They currently amount to 180 Mt of CO2 or 11% of transport emissions. Aviation biofuel and hydrogen can only cover a small proportion of flights in the coming years. Experts expect hybrid-driven aircraft to start commercial regional scheduled flights after 2030. Battery-powered aircraft are likely to enter the market after 2035. Under CORSIA, flight operators can purchase carbon offsets to cover their emissions above 2019 levels. CORSIA will be compulsory from 2027.
=== Agriculture, forestry and land use ===
Almost 20% of greenhouse gas emissions come from the agriculture and forestry sector. To significantly reduce these emissions, annual investments in the agriculture sector need to increase to $260 billion by 2030. The potential benefits from these investments are estimated at $4.3 trillion by 2030, offering a substantial economic return of 16-to-1.: 7–8
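The quoted 16-to-1 economic return follows directly from the two figures above, as a quick arithmetic check shows:

```python
investment = 260e9   # annual investment needed by 2030, US$
benefits = 4.3e12    # estimated potential benefits by 2030, US$

ratio = benefits / investment
print(round(ratio, 1))  # 16.5, i.e. roughly a 16-to-1 return
```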
Mitigation measures in the food system can be divided into four categories. These are demand-side changes, ecosystem protections, mitigation on farms, and mitigation in supply chains. On the demand side, limiting food waste is an effective way to reduce food emissions. Changes to a diet less reliant on animal products such as plant-based diets are also effective.: XXV
With 21% of global methane emissions, cattle are a major driver of global warming.: 6 When rainforests are cut and the land is converted for grazing, the impact is even higher. In Brazil, producing 1 kg of beef can result in the emission of up to 335 kg CO2-eq.
Increasing the milk yield of dairy cows has been shown to reduce emissions.
Other livestock, manure management and rice cultivation also emit greenhouse gases, in addition to fossil fuel combustion in agriculture.
Important mitigation options for reducing the greenhouse gas emissions from livestock include genetic selection, introduction of methanotrophic bacteria into the rumen, vaccines, feeds, diet modification and grazing management. Other options are diet changes towards ruminant-free alternatives, such as milk substitutes and meat analogues. Non-ruminant livestock, such as poultry, emit far fewer GHGs.
It is possible to cut methane emissions in rice cultivation by improved water management, combining dry seeding and one drawdown, or executing a sequence of wetting and drying. This results in emission reductions of up to 90% compared to full flooding and even increased yields.
=== Industry ===
Industry is the largest emitter of greenhouse gases when direct and indirect emissions are included. Electrification can reduce emissions from industry. Green hydrogen can play a major role in energy-intensive industries for which electricity is not an option. Further mitigation options involve the steel and cement industry, which can switch to a less polluting production process. Products can be made with less material to reduce emission-intensity, and industrial processes can be made more efficient. Finally, circular economy measures reduce the need for new materials. This also saves on emissions that would have been released from the mining or collecting of those materials.: 43
The decarbonisation of cement production requires new technologies, and therefore investment in innovation. Bioconcrete is one possibility to reduce emissions. But no technology for mitigation is yet mature. So CCS will be necessary at least in the short-term.
Another sector with a significant carbon footprint is the steel sector, which is responsible for about 7% of global emissions. Emissions can be reduced by using electric arc furnaces to melt and recycle scrap steel. To produce virgin steel without emissions, blast furnaces could be replaced by hydrogen direct reduced iron and electric arc furnaces. Alternatively, carbon capture and storage solutions can be used.
Coal, gas and oil production often come with significant methane leakage. In the early 2020s some governments recognised the scale of the problem and introduced regulations. Methane leaks at oil and gas wells and processing plants are cost-effective to fix in countries which can easily trade gas internationally. There are leaks in countries where gas is cheap, such as Iran, Russia, and Turkmenistan. Nearly all of this leakage can be stopped by replacing old components and preventing routine flaring. Coalbed methane may continue leaking even after a mine has been closed. But it can be captured by drainage and/or ventilation systems. Fossil fuel firms do not always have financial incentives to tackle methane leakage.
== Co-benefits ==
Research on the co-benefits of climate change mitigation, often also referred to as ancillary benefits, was initially dominated by studies describing how lower GHG emissions lead to better air quality and consequently benefit human health. The scope of co-benefits research has since expanded to economic, social, ecological and political implications.
Positive secondary effects of climate mitigation and adaptation measures have been mentioned in research since the 1990s. The IPCC first mentioned the role of co-benefits in 2001, and its fourth and fifth assessment cycles stressed improved working environments, reduced waste, health benefits and reduced capital expenditures. In the early 2000s the OECD also stepped up its efforts to promote ancillary benefits.
The IPCC pointed out in 2007: "Co-benefits of GHG mitigation can be an important decision criteria in analyses carried out by policy-makers, but they are often neglected" and added that the co-benefits are "not quantified, monetised or even identified by businesses and decision-makers". Appropriate consideration of co-benefits can greatly "influence policy decisions concerning the timing and level of mitigation action", and there can be "significant advantages to the national economy and technical innovation".
An analysis of climate action in the UK found that public health benefits are a major component of the total benefits derived from climate action.
=== Employment and economic development ===
Co-benefits can positively impact employment, industrial development, states' energy independence and energy self-consumption. The deployment of renewable energies can foster job opportunities. Depending on the country and deployment scenario, replacing coal power plants with renewable energy can more than double the number of jobs per average MW capacity. Investments in renewable energies, especially in solar and wind energy, can boost the value of production. Countries which rely on energy imports can enhance their energy independence and supply security by deploying renewables. National energy generation from renewables lowers the demand for fossil fuel imports, which increases annual economic savings.
The European Commission forecasts a shortage of 180,000 skilled workers in hydrogen production and 66,000 in solar photovoltaic power by 2030.
=== Energy security ===
A higher share of renewables can additionally lead to more energy security. Socioeconomic co-benefits have been analysed such as energy access in rural areas and improved rural livelihoods. Rural areas which are not fully electrified can benefit from the deployment of renewable energies. Solar-powered mini-grids can remain economically viable, cost-competitive and reduce the number of power cuts. Energy reliability has additional social implications: stable electricity improves the quality of education.
The International Energy Agency (IEA) spelled out the "multiple benefits approach" of energy efficiency while the International Renewable Energy Agency (IRENA) operationalised the list of co-benefits of the renewable energy sector.
=== Health and well-being ===
The health benefits from climate change mitigation are significant. Potential measures can not only mitigate future health impacts from climate change but also improve health directly. Climate change mitigation is interconnected with various health co-benefits, such as those from reduced air pollution. Air pollution generated by fossil fuel combustion is both a major driver of global warming and the cause of a large number of annual deaths. Some estimates are as high as 8.7 million excess deaths during 2018. A 2023 study estimated that fossil fuels kill over 5 million people each year, as of 2019, by causing diseases such as heart attack, stroke and chronic obstructive pulmonary disease. Particulate air pollution kills by far the most, followed by ground-level ozone.
Mitigation policies can also promote healthier diets such as less red meat, more active lifestyles, and increased exposure to green urban spaces. Access to urban green spaces provides benefits to mental health as well.: 18 The increased use of green and blue infrastructure can reduce the urban heat island effect. This reduces heat stress on people.: TS-66
=== Climate change adaptation ===
Some mitigation measures have co-benefits in the area of climate change adaptation.: 8–63 This is for example the case for many nature-based solutions.: 4–94 : 6 Examples in the urban context include urban green and blue infrastructure which provide mitigation as well as adaptation benefits. This can be in the form of urban forests and street trees, green roofs and walls, urban agriculture and so forth. The mitigation is achieved through the conservation and expansion of carbon sinks and reduced energy use of buildings. Adaptation benefits come for example through reduced heat stress and flooding risk.: 8–64
== Negative side effects ==
Mitigation measures can also have negative side effects and risks.: TS-133 In agriculture and forestry, mitigation measures can affect biodiversity and ecosystem functioning.: TS-87 In renewable energy, mining for metals and minerals can increase threats to conservation areas. There is some research into ways to recycle solar panels and electronic waste. This would create a source for materials so there is no need to mine them.
Scholars have found that discussions about risks and negative side effects of mitigation measures can lead to deadlock or the feeling that there are insuperable barriers to taking action.
== Costs and funding ==
Several factors affect mitigation cost estimates. One is the baseline. This is a reference scenario that the alternative mitigation scenario is compared with. Others are the way costs are modelled, and assumptions about future government policy.: 622 Cost estimates for mitigation for specific regions depend on the quantity of emissions allowed for that region in future, as well as the timing of interventions.: 90
Mitigation costs will vary according to how and when emissions are cut. Early, well-planned action will minimise the costs. Globally, the benefits of keeping warming under 2 °C exceed the costs, which according to The Economist are affordable.
Economists estimate the cost of climate change mitigation at between 1% and 2% of GDP. While this is a large sum, it is still far less than the subsidies governments provide to the fossil fuel industry, which the International Monetary Fund estimated at more than $5 trillion per year.
Another estimate says that financial flows for climate mitigation and adaptation are going to be over $800 billion per year. These financial requirements are predicted to exceed $4 trillion per year by 2030.
Globally, limiting warming to 2 °C may result in higher economic benefits than economic costs.: 300 The economic repercussions of mitigation vary widely across regions and households, depending on policy design and level of international cooperation. Delayed global cooperation increases policy costs across regions, especially in those that are relatively carbon intensive at present. Pathways with uniform carbon values show higher mitigation costs in more carbon-intensive regions, in fossil-fuels exporting regions and in poorer regions. Aggregate quantifications expressed in GDP or monetary terms undervalue the economic effects on households in poorer countries. The actual effects on welfare and well-being are comparatively larger.
Cost–benefit analysis may be unsuitable for analysing climate change mitigation as a whole. But it is still useful for analysing the difference between a 1.5 °C target and 2 °C. One way of estimating the cost of reducing emissions is by considering the likely costs of potential technological and output changes. Policymakers can compare the marginal abatement costs of different methods to assess the cost and amount of possible abatement over time. The marginal abatement costs of the various measures will differ by country, by sector, and over time.
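The comparison of marginal abatement costs described above is often organised as a marginal abatement cost curve: measures are sorted by cost per tonne and applied in order until a target is met. A minimal sketch, with entirely hypothetical measures, costs and potentials:

```python
# Hypothetical measures: (name, cost in $ per tonne CO2, potential in Mt CO2/yr).
measures = [
    ("building insulation", -20, 50),  # negative cost: saves money overall
    ("wind power", 15, 120),
    ("industrial CCS", 90, 60),
]

def cheapest_plan(measures, target_mt):
    """Pick measures in order of rising marginal cost until the target is met."""
    plan, abated = [], 0.0
    for name, cost, potential in sorted(measures, key=lambda m: m[1]):
        if abated >= target_mt:
            break
        take = min(potential, target_mt - abated)
        plan.append((name, take, cost))
        abated += take
    return plan

for name, amount, cost in cheapest_plan(measures, target_mt=150):
    print(f"{name}: {amount} Mt at ${cost}/t")
```

With a 150 Mt target, the sketch selects insulation first and then part of the wind potential; the expensive CCS option is never needed. Real curves differ by country, by sector, and over time, as the text notes.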
Eco-tariffs applied only to imports can reduce global export competitiveness and contribute to deindustrialisation.
=== Avoided costs of climate change effects ===
It is possible to avoid some of the costs of the effects of climate change by limiting climate change. According to the Stern Review, the cost of inaction can be as high as the equivalent of losing at least 5% of global gross domestic product (GDP) each year, now and forever. This can rise to 20% of GDP or more when a wider range of risks and impacts is included. Mitigating climate change, by contrast, will only cost about 2% of GDP. It may also be unwise from a financial perspective to delay significant reductions in greenhouse gas emissions.
Mitigation solutions are often evaluated in terms of costs and greenhouse gas reduction potentials. This fails to take into account the direct effects on human well-being.
=== Distributing emissions abatement costs ===
Mitigation at the speed and scale required to limit warming to 2 °C or below implies deep economic and structural changes. These raise multiple types of distributional concerns across regions, income classes and sectors.
There have been different proposals on how to allocate responsibility for cutting emissions.: 103 These include egalitarianism, basic needs according to a minimum level of consumption, proportionality and the polluter-pays principle. A specific proposal is "equal per capita entitlements".: 106 This approach has two categories. In the first category, emissions are allocated according to national population. In the second category, emissions are allocated in a way that attempts to account for historical or cumulative emissions.
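The first category of "equal per capita entitlements" amounts to a simple proportional allocation of a global budget by population. A minimal sketch with placeholder countries and population figures:

```python
def per_capita_allocation(global_budget_gt, populations):
    """Split a global emissions budget in proportion to national population."""
    total = sum(populations.values())
    return {c: global_budget_gt * p / total for c, p in populations.items()}

# Hypothetical three-country world sharing a 10 Gt CO2 budget.
shares = per_capita_allocation(10.0, {"A": 50e6, "B": 150e6, "C": 300e6})
print(shares)  # A gets 1.0 Gt, B gets 3.0 Gt, C gets 6.0 Gt
```

The second category would replace the population weights with weights adjusted for historical or cumulative emissions, which is harder to agree on because it requires a baseline year and emissions accounting.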
=== Funding ===
In order to reconcile economic development with mitigating carbon emissions, developing countries need particular support. This would be both financial and technical. The IPCC found that accelerated support would also tackle inequities in financial and economic vulnerability to climate change. One way to achieve this is the Kyoto Protocol's Clean Development Mechanism (CDM).
== Policies ==
=== National policies ===
Climate change mitigation policies can have a large and complex impact on the socio-economic status of individuals and countries. This can be both positive and negative. It is important to design policies well and make them inclusive. Otherwise climate change mitigation measures can impose higher financial costs on poor households.
An evaluation was conducted on 1,500 climate policy interventions made between 1998 and 2022. The interventions took place in 41 countries and across 6 continents, which together contributed 81% of the world's total emissions as of 2019. The evaluation found 63 successful interventions that resulted in significant emission reductions; the total CO2 release averted by these interventions was between 0.6 and 1.8 billion metric tonnes. The study focused on interventions with at least 4.5% emission reductions, but the researchers noted that meeting the reductions required by the Paris Agreement would require 23 billion metric tonnes per year. Generally, carbon pricing was found to be most effective in developed countries, while regulation was most effective in developing countries. Complementary policy mixes benefited from synergies and were mostly found to be more effective than isolated policies.
The OECD recognises 48 distinct climate mitigation policies suitable for implementation at the national level. Broadly, these can be categorised into three types: market-based instruments, non-market-based instruments and other policies.
Other policies include establishing an independent climate advisory body.
Non-market-based policies include implementing or tightening regulatory standards. These set technology or performance standards. They can be effective in addressing the market failure of informational barriers.: 412
Among market-based policies, the carbon price has been found to be the most effective (at least for developed economies), and has its own section below. Additional market-based policy instruments for climate change mitigation include:
Emissions taxes: These often require domestic emitters to pay a fixed fee or tax for every tonne of CO2 emissions they release into the atmosphere.: 4123 Methane emissions from fossil fuel extraction are also occasionally taxed. But methane and nitrous oxide from agriculture are typically not subject to tax.
Removing unhelpful subsidies: Many countries provide subsidies for activities that affect emissions. For example, significant fossil fuel subsidies are present in many countries. Phasing-out fossil fuel subsidies is crucial to address the climate crisis. It must however be done carefully to avoid protests and making poor people poorer.
Creating helpful subsidies: Subsidies and financial incentives can support clean generation that is not yet commercially viable, such as tidal power.
Tradable permits: A permit system can limit emissions.: 415
==== Carbon pricing ====
Imposing additional costs on greenhouse gas emissions can make fossil fuels less competitive and accelerate investments into low-carbon sources of energy. A growing number of countries levy a fixed carbon tax or participate in dynamic carbon emissions trading (ETS) systems. In 2021, more than 21% of global greenhouse gas emissions were covered by a carbon price. This was a large increase from earlier years, due to the introduction of the Chinese national carbon trading scheme.: 23
Trading schemes offer the possibility to limit emission allowances to certain reduction targets. However, an oversupply of allowances keeps most ETS at low price levels of around $10 with little impact. This includes the Chinese ETS, which started at $7/tCO2 in 2021. One exception is the European Union Emission Trading Scheme, where prices began to rise in 2018 and reached about €80/tCO2 in 2022. Depending on the emission intensity, this results in additional electricity costs of about €0.04/kWh for coal and €0.02/kWh for gas combustion. Industries which have high energy requirements and high emissions often pay only very low energy taxes, or even none at all.: 11–80
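The pass-through of a carbon price to electricity cost is simply the price multiplied by the generator's emission intensity. The intensities below are illustrative assumptions chosen for the arithmetic, and the resulting figures depend strongly on them:

```python
def carbon_cost_per_kwh(price_eur_per_tonne, intensity_kg_per_kwh):
    """Extra generation cost (EUR/kWh) implied by a carbon price."""
    return price_eur_per_tonne * intensity_kg_per_kwh / 1000.0  # 1 t = 1000 kg

# At EUR 80/tCO2, with assumed emission intensities in kgCO2 per kWh:
print(carbon_cost_per_kwh(80, 0.5))   # 0.04 EUR/kWh
print(carbon_cost_per_kwh(80, 0.25))  # 0.02 EUR/kWh
```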
While this is often part of national schemes, carbon offsets and credits can also be part of a voluntary market, such as the international market. Notably, the company Blue Carbon of the UAE has bought the rights to an area of land equivalent in size to the United Kingdom, to be preserved in return for carbon credits.
=== International agreements ===
International cooperation is considered a critical enabler for climate action: 52 while conflicts generally hamper it. Almost all countries are parties to the United Nations Framework Convention on Climate Change (UNFCCC). The ultimate objective of the UNFCCC is to stabilise atmospheric concentrations of greenhouse gases at a level that would prevent dangerous human interference with the climate system.
Although not designed for this purpose, the Montreal Protocol has benefited climate change mitigation efforts. The Montreal Protocol is an international treaty that has successfully reduced emissions of ozone-depleting substances such as CFCs. These are also greenhouse gases.
==== Paris Agreement ====
== History ==
Historically, efforts to deal with climate change have taken place at a multinational level. They involve attempts to reach a consensus decision at the United Nations, under the United Nations Framework Convention on Climate Change (UNFCCC). This is the dominant approach historically of engaging as many international governments as possible in taking action on a worldwide public issue. The Montreal Protocol in 1987 is a precedent that this approach can work. But some critics say the top-down framework of only utilising the UNFCCC consensus approach is ineffective. They put forward counter-proposals of bottom-up governance. At the same time, this would lessen the emphasis on the UNFCCC.
The Kyoto Protocol to the UNFCCC adopted in 1997 set out legally binding emission reduction commitments for the "Annex 1" countries.: 817 The Protocol defined three international policy instruments ("Flexibility Mechanisms") which could be used by the Annex 1 countries to meet their emission reduction commitments. According to Bashmakov, use of these instruments could significantly reduce the costs for Annex 1 countries in meeting their emission reduction commitments.: 402
The Paris Agreement reached in 2015 succeeded the Kyoto Protocol which expired in 2020. Countries that ratified the Kyoto protocol committed to reduce their emissions of carbon dioxide and five other greenhouse gases, or engage in carbon emissions trading if they maintain or increase emissions of these gases.
In 2015, the UNFCCC's "structured expert dialogue" came to the conclusion that, "in some regions and vulnerable ecosystems, high risks are projected even for warming above 1.5 °C". Together with the strong diplomatic voice of the poorest countries and the island nations in the Pacific, this expert finding was the driving force leading to the decision of the 2015 Paris Climate Conference to lay down this 1.5 °C long-term target on top of the existing 2 °C goal.
== Barriers ==
There are individual, institutional and market barriers to achieving climate change mitigation.: 5–71 They differ for all the different mitigation options, regions and societies.
Difficulties with accounting for carbon dioxide removal can act as economic barriers. This would apply to BECCS (bioenergy with carbon capture and storage).: 6–42 The strategies that companies follow can act as a barrier. But they can also accelerate decarbonisation.: 5–84
To decarbonise societies, the state needs to play a predominant role, because decarbonisation requires a massive coordination effort.: 213 This strong government role can only work well where there is social cohesion, political stability and trust.: 213
For land-based mitigation options, finance is a major barrier. Other barriers are cultural values, governance, accountability and institutional capacity.: 7–5
Developing countries face further barriers to mitigation.
The cost of capital increased in the early 2020s, and a lack of available capital and finance is common in developing countries. Together with the absence of regulatory standards, this shortfall encourages the proliferation of inefficient equipment. There are also financial and capacity barriers in many of these countries.: 97
One study estimates that only 0.12% of all funding for climate-related research is spent on the social science of climate change mitigation. Vastly more funding goes to natural-science studies of climate change, and considerable sums also go to studies of the impacts of climate change and adaptation to it.
== Society and culture ==
=== Commitments to divest ===
More than 1000 organisations with investments worth US$8 trillion have made commitments to fossil fuel divestment. Socially responsible investing funds allow investors to invest in funds that meet high environmental, social and corporate governance (ESG) standards.
=== Impacts of the COVID-19 pandemic ===
The COVID-19 pandemic led some governments to shift their focus away from climate action, at least temporarily. This obstacle to environmental policy efforts may have contributed to slowed investment in green energy technologies. The economic slowdown resulting from COVID-19 added to this effect.
In 2020, carbon dioxide emissions fell by 6.4% or 2.3 billion tonnes globally. Greenhouse gas emissions rebounded later in the pandemic as many countries began lifting restrictions. The direct impact of pandemic policies had a negligible long-term impact on climate change.
== Examples by country ==
=== United States ===
=== China ===
China has committed to peak emissions by 2030 and reach net zero by 2060. Warming cannot be limited to 1.5 °C if any coal plants in China (without carbon capture) operate after 2045. The Chinese national carbon trading scheme started in 2021.
=== European Union ===
The European Commission estimates that an additional €477 billion in annual investment is needed for the European Union to meet its Fit-for-55 decarbonisation goals.
In the European Union, government-driven policies and the European Green Deal have helped position greentech (as an example) as a vital area for venture capital investment. By 2023, venture capital in the EU's greentech sector equalled that of the United States, reflecting a concerted effort to drive innovation and mitigate climate change through targeted financial support. The European Green Deal has fostered policies that contributed to a 30% rise in venture capital for greentech companies in the EU from 2021 to 2023, despite a downturn in other sectors during the same period.
While overall venture capital investment in the EU remains about six times lower than in the United States, the greentech sector has closed this gap significantly, attracting substantial funding. Key areas benefitting from increased investments are energy storage, circular economy initiatives, and agricultural technology. This is supported by the EU's ambitious goal to reduce greenhouse gas emissions by at least 55% by 2030.
== Related approaches ==
=== Relationship with solar radiation modification (SRM) ===
While solar radiation modification (SRM) could reduce surface temperatures, it temporarily masks climate change rather than addressing the root cause, which is greenhouse gases.: 14–56 SRM would work by altering how much solar radiation the Earth absorbs.: 14–56 Examples include reducing the amount of sunlight reaching the surface, reducing the optical thickness and lifetime of clouds, and changing the ability of the surface to reflect radiation. The IPCC describes SRM as a climate risk reduction strategy or supplementary option rather than a climate mitigation option.
The terminology in this area is still evolving. Experts sometimes use the term geoengineering or climate engineering in the scientific literature for both CDR and SRM when the techniques are used at a global scale.: 6–11 IPCC reports no longer use the terms geoengineering or climate engineering.
== See also ==
Carbon budget
Carbon offsets and credits
Carbon price
Climate movement
Climate change denial
Tipping points in the climate system
== References == |
Climate crisis | Climate crisis is a term that is used to describe global warming and climate change and their effects. This term and the term climate emergency have been used to emphasize the threat of global warming to Earth's natural environment and to humans, and to urge aggressive climate change mitigation and transformational adaptation.
The term climate crisis is used by those who "believe it evokes the gravity of the threats the planet faces from continued greenhouse gas emissions and can help spur the kind of political willpower that has long been missing from climate advocacy". They believe, much as global warming provoked more emotional engagement and support for action than climate change, calling climate change a crisis could have an even stronger effect.
A study has shown the term climate crisis invokes a strong emotional response by conveying a sense of urgency. However, some caution this response may be counter-productive and may cause a backlash due to perceptions of alarmist exaggeration.
In the scientific journal BioScience, a January 2020 article that was endorsed by over 11,000 scientists states: "the climate crisis has arrived" and that an "immense increase of scale in endeavors to conserve our biosphere is needed to avoid untold suffering due to the climate crisis".
== Scientific basis ==
Until the mid-2010s, the scientific community used neutral, constrained language when discussing climate change, while advocacy groups, politicians and media traditionally used more-powerful language than climate scientists. From around 2014, a shift in scientists' language connoted an increased sense of urgency.: 2546 Use of the terms urgency, climate crisis and climate emergency in scientific publications and in mass media has grown. Scientists have called for more-extensive action and transformational climate-change adaptation that focuses on large-scale change in systems.: 2546
In 2020, a group of over 11,000 scientists said in a paper in BioScience that describing global warming as a climate emergency or climate crisis was appropriate. The scientists stated an "immense increase of scale in endeavor" is needed to conserve the biosphere. They warned about "profoundly troubling signs", which may have many indirect effects such as large-scale human migration and food insecurity; these signs include increases in dairy and meat production, fossil fuel consumption, greenhouse gas emissions and deforestation, activities that are all concurrent with upward trends in climate-change effects such as rising global temperatures, global ice melt and extreme weather.
In 2019, scientists published an article in Nature saying evidence from climate tipping points alone suggests "we are in a state of planetary emergency". They defined emergency as a product of risk and urgency, factors they said are "acute". Previous research had shown individual tipping points could be exceeded with 1–2 °C (1.8–3.6 °F) of global temperature increase; warming has already exceeded 1 °C (1.8 °F). A global cascade of tipping points is possible with greater warming.
== Definitions ==
In the context of climate change, the word crisis is used to denote "a crucial or decisive point or situation that could lead to a tipping point", a situation with an "unprecedented circumstance". A similar definition states that, in this context, crisis means "a turning point or a condition of instability or danger" and implies "action needs to be taken now or else the consequences will be disastrous". A further definition describes climate crisis as "the various negative effects that unmitigated climate change is causing or threatening to cause on our planet, especially where these effects have a direct impact on humanity".
== Use of the term ==
=== 20th century ===
Former U.S. Vice President Al Gore has used crisis terminology since the 1980s; the Climate Crisis Coalition, which was formed in 2004, formalized the term climate crisis. A 1990 report by the American University International Law Review includes legal texts that use the word crisis. "The Cairo Compact: Toward a Concerted World-Wide Response to the Climate Crisis" (1989) states: "All nations ... will have to cooperate on an unprecedented scale. They will have to make difficult commitments without delay to address this crisis."
=== 21st century ===
In the late 2010s, the phrase climate crisis emerged "as a crucial piece of the climate hawk lexicon", and was adopted by the Green New Deal, The Guardian, Greta Thunberg, and U.S. Democratic political candidates such as Kamala Harris. At the same time, it came into more-popular use following a series of warnings from climate scientists and newly-energized activists.
In the U.S. in late 2018, the United States House of Representatives established the House Select Committee on the Climate Crisis, the name of which was regarded as "a reminder of how much energy politics have changed in the last decade". The original House climate committee had been called the "Select Committee on Energy Independence and Global Warming" in 2007. It was abolished in 2011 when Republicans regained control of the House.
The advocacy group Public Citizen reported that in 2018, less than 10% of articles in top-50 U.S. newspapers used the terms crisis or emergency in the context of climate change. In the same year, 3.5% of national television news segments in the U.S. referred to climate change as a crisis or an emergency (50 of 1,400). In 2019, Public Citizen launched a campaign called "Call it a Climate Crisis"; it urged major media organizations to adopt the term climate crisis. In the first four months of 2019, the number of uses of the term in U.S. media tripled to 150. Likewise, the Sierra Club, the Sunrise Movement, Greenpeace, and other environmental and progressive organizations joined in a June 6, 2019 Public Citizen letter to news organizations urging the news organizations to call climate change and human inaction "what it is–a crisis–and to cover it like one".
In 2019, the language describing climate appeared to change: the UN Secretary General's address at the 2019 UN Climate Action Summit used more emphatic language; Al Gore's campaign The Climate Reality Project, Greenpeace and the Sunrise Movement petitioned news organizations to alter their language; and in May 2019, The Guardian changed its style guide to favor the terms "climate emergency, crisis or breakdown" and "global heating". Editor-in-Chief Katharine Viner said: "We want to ensure that we are being scientifically precise, while also communicating clearly with readers on this very important issue. The phrase 'climate change', for example, sounds rather passive and gentle when what scientists are talking about is a catastrophe for humanity." The Guardian became a lead partner in Covering Climate Now, an initiative of news organizations Columbia Journalism Review and The Nation that was founded in 2019 to address the need for stronger climate coverage.
In May 2019, The Climate Reality Project promoted an open petition of news organizations to use climate crisis instead of climate change and global warming. The NGO said: "it's time to abandon both terms in culture".
In June 2019, Spanish news agency EFE announced its preferred phrase was "crisis climática". In November 2019, Hindustan Times also adopted the term because climate change "does not correctly reflect the enormity of the existential threat". The Polish newspaper Gazeta Wyborcza also uses the term climate crisis rather than climate change; one of its editors described climate change as one of the most-important topics the paper has ever covered.
Also in June 2019, the Canadian Broadcasting Corporation (CBC) changed its language guide to say: "Climate crisis and climate emergency are OK in some cases as synonyms for 'climate change'. But they're not always the best choice ... For example, 'climate crisis' could carry a whiff of advocacy in certain political coverage". Journalism professor Sean Holman does not agree with this and said in an interview:
It's about being accurate in terms of the scope of the problem that we are facing. And in the media we, generally speaking, don't have any hesitation about naming a crisis when it is a crisis. Look at the opioid epidemic [in the U.S.], for example. We call it an epidemic because it is one. So why are we hesitant about saying the climate crisis is a crisis?
In June 2019, climate activists demonstrated outside the offices of The New York Times; they urged the newspaper's editors to adopt terms such as climate emergency or climate crisis. This kind of public pressure led New York City Council to make New York the largest city in the world to formally adopt a climate emergency declaration.
In November 2019, the website Oxford Dictionaries named climate crisis Word of the year for 2019. The term was chosen because it matches the "ethos, mood, or preoccupations of the passing year".
In 2021, the Finnish newspaper Helsingin Sanomat created a free variable font called Climate Crisis that has eight weights that correlate with Arctic sea ice decline, visualizing historical changes in ice melt. The newspaper's art director said the font both evokes the aesthetics of environmentalism and is a data visualization graphic.
In updates to the World Scientists' Warning to Humanity of 2021 and 2022, scientists used the terms climate crisis and climate emergency; the title of the publications is "World Scientists' Warning of a Climate Emergency". They said: "we need short, frequent, and easily accessible updates on the climate emergency".
Within weeks of his second inauguration in 2025, U.S. President Donald Trump's administration flagged hundreds of words to limit or avoid on government websites, memos, and unofficial agency guidance—the list including climate crisis.
== Effectiveness ==
In September 2019, Bloomberg journalist Emma Vickers said crisis terminology may be "showing results", citing a 2019 poll by The Washington Post and the Kaiser Family Foundation saying 38% of U.S. adults termed climate change "a crisis" while an equal number called it "a major problem but not a crisis". Five years earlier, 23% of U.S. adults considered climate change to be a crisis. As of 2019, use of crisis terminology in non-binding climate-emergency declarations is regarded as ineffective in making governments "shift into action".
=== Concerns about crisis terminology ===
Emergency framing may have several disadvantages. Such framing may implicitly prioritize climate change over other important social issues, encouraging competition among activists rather than cooperation. It could also de-emphasize dissent within the climate-change movement. Emergency framing may suggest a need for solutions by government, which provides less-reliable long-term commitment than does popular mobilization, and which may be perceived as being "imposed on a reluctant population". Without immediate dramatic effects of climate change, emergency framing may be counterproductive by causing disbelief, disempowerment in the face of a problem that seems overwhelming, and withdrawal.
There could also be a "crisis fatigue" in which urgency to respond to threats loses its appeal over time. Crisis terminology could lose audiences if meaningful policies to address the emergency are not enacted. According to researchers Susan C. Moser and Lisa Dilling of University of Colorado, appeals to fear usually do not create sustained, constructive engagement; they noted psychologists consider human responses to danger—fight, flight or freeze—can be maladaptive if they do not reduce the danger. According to Sander van der Linden, director of the Cambridge Social Decision-Making Lab, fear is a "paralyzing emotion". He favors climate crisis over other terms because it conveys a sense of both urgency and optimism, and not a sense of doom. Van der Linden said: "people know that crises can be avoided and that they can be resolved".
Climate scientist Katharine Hayhoe said in early 2019 crisis framing is only "effective for those already concerned about climate change, but complacent regarding solutions". She added it "is not yet effective" for those who perceive climate activists "to be alarmist Chicken Littles", and that "it would further reinforce their pre-conceived—and incorrect—notions". According to Nick Reimer, journalists in Germany say the word crisis may be misunderstood to mean climate change is "inherently episodic"—crises are "either solved or they pass"—or as a temporary state before a return to normalcy that is not possible. Arnold Schwarzenegger, organizer of the Austrian World Summit for climate action, said people are not motivated by the term climate change; according to Schwarzenegger, the word pollution might evoke a more-direct and negative connotation. A 2023 U.S. survey found no evidence that climate crisis or climate emergency—terms less familiar to those surveyed—elicit more perceived urgency than climate change or global warming.
=== Psychological and neuroscientific studies ===
In 2019, an advertising consulting agency conducted a neuroscientific study involving 120 U.S. people who were equally divided into supporters of the Republican Party, the Democratic Party and independents. The study involved electroencephalography (EEG) and galvanic skin response (GSR) measurements. Responses to the terms climate crisis, environmental destruction, environmental collapse, weather destabilization, global warming and climate change were measured. The study found Democrats had a 60% greater emotional response to climate crisis than to climate change. In Republicans, the emotional response to climate crisis was three times stronger than that for climate change. According to CBS News, climate crisis "performed well in terms of responses across the political spectrum and elicited the greatest emotional response among independents". The study concluded climate crisis elicited stronger emotional responses than neutral and "worn out" terms like global warming and climate change. Climate crisis was found to encourage a sense of urgency, though not a strong-enough response to cause cognitive dissonance that would cause people to generate counterarguments.
== Related terminology ==
Research has shown the naming of a phenomenon and the way it is framed "has a tremendous effect on how audiences come to perceive that phenomenon" and "can have a profound impact on the audience's reaction". Climate change, and its real and hypothetical effects, are usually described in scientific-and-practitioner literature in terms of climate risks.
The many related terms other than climate crisis include:
global weirding (author and environmentalist L. Hunter Lovins, as a variation of global warming, early 2000s)
climate catastrophe (used with reference to a 2019 David Attenborough documentary, the 2019–20 Australian bushfire season, and the 2022 Pakistan floods)
threats that impact the earth (World Wildlife Fund, 2012—)
climate breakdown (climate scientist Peter Kalmus, 2018)
climate chaos ("The New York Times" article title, 2019; U.S. Democratic candidates, 2019; and an Ad Age marketing team, 2019)
climate ruin (U.S. Democratic candidates, 2019)
global heating (Richard A. Betts, Met Office U.K., 2018)
global overheating (Public Citizen, 2019)
climate emergency (11,000 scientists' warning letter in BioScience, and in The Guardian, both 2019),
ecological breakdown, ecological crisis and ecological emergency (all set forth by climate activist Greta Thunberg, 2019)
global meltdown, Scorched Earth, The Great Collapse, and Earthshattering (an Ad Age marketing team, 2019)
climate disaster (The Guardian, 2019)
environmental Armageddon (Fiji Prime Minister Frank Bainimarama)
climate calamity (Los Angeles Times, 2022)
climate havoc (The New York Times, 2022)
climate pollution, carbon pollution (Grist, 2022)
global boiling (U.N. Secretary-General António Guterres speech, July 2023)
climate breaking point (Stuart P.M. Mackintosh, The Hill, August 2023)
(Has humanity) broken the climate (The Guardian, August 2023)
(climate) abyss (spokesman for the United Nations secretary general, May 2024)
climate hell (U.N. Secretary-General António Guterres, June 2024)
In addition to climate crisis, other terms have been investigated for their effects upon audiences, including global warming, climate change, climatic disruption, environmental destruction, weather destabilization and environmental collapse.
In 2022, The New York Times journalist Amanda Hess said "end of the world" characterizations of the future, such as climate apocalypse, are often used to refer to the current climate crisis, and that the characterization is spreading from "the ironized hellscape of the internet" to books and film.
== See also ==
Climate psychology – Field of psychology
Economic analysis of climate change
Environmental communication – Type of communication
Extinction Rebellion – Environmental pressure group
Human extinction risk estimates – Hypothetical end of the human species
Media coverage of climate change
Psychology of climate change denial – Human behaviour with regards to climate change denial
Public opinion on climate change – Aspect of worldwide public opinion
== Footnotes ==
== References ==
== Further reading ==
"Act now and avert a climate crisis (editorial)". Nature. September 15, 2019. Archived from the original on September 22, 2019. (Nature joining Covering Climate Now.)
Feldman, Lauren; Hart, P. Sol (November 16, 2021). "Upping the ante? The effects of "emergency" and "crisis" framing in climate change news". Climatic Change. 169 (10): 10. Bibcode:2021ClCh..169...10F. doi:10.1007/s10584-021-03219-5. S2CID 244119978.
Hall, Aaron (November 27, 2019). "Renaming Climate Change: Can a New Name Finally Make Us Take Action". Ad Age. Archived from the original on December 21, 2019. (advertising perspective by a "professional namer")
Visram, Talib (December 6, 2021). "The language of climate is evolving, from 'change' to 'catastrophe'". Fast Company. Archived from the original on December 6, 2021.
== External links ==
Covering Climate Now (CCNow), a collaboration among news organizations "to produce more informed and urgent climate stories" (archive) |
Cloud computing | Cloud computing is "a paradigm for enabling network access to a scalable and elastic pool of shareable physical or virtual resources with self-service provisioning and administration on-demand," according to ISO.
== Essential characteristics ==
In 2011, the National Institute of Standards and Technology (NIST) identified five "essential characteristics" for cloud systems. Below are the exact definitions according to NIST:
On-demand self-service: "A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider."
Broad network access: "Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations)."
Resource pooling: "The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand."
Rapid elasticity: "Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time."
Measured service: "Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service."
By 2023, the International Organization for Standardization (ISO) had expanded and refined the list.
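The "rapid elasticity" and "measured service" characteristics above can be illustrated with a toy autoscaler. This is a hypothetical sketch, not any provider's API: the function names, the target load of 0.6, and the price are all invented for illustration.

```python
# Toy autoscaler illustrating NIST's "rapid elasticity" and "measured service".
# Hypothetical sketch only -- real providers expose this through managed
# autoscaling services; the names and thresholds here are invented.

def scale(current_replicas: int, load_per_replica: float,
          target_load: float = 0.6, min_replicas: int = 1,
          max_replicas: int = 20) -> int:
    """Return the replica count that brings per-replica load near the target,
    clamped to a configured range (scaling "outward and inward" with demand)."""
    desired = round(current_replicas * load_per_replica / target_load)
    return max(min_replicas, min(max_replicas, desired))

def metered_cost(replica_hours: float, price_per_replica_hour: float) -> float:
    """Measured service: usage is metered and billed per unit consumed."""
    return replica_hours * price_per_replica_hour

if __name__ == "__main__":
    replicas = 4
    replicas = scale(replicas, load_per_replica=0.9)  # demand spike -> scale out
    print(replicas)                                   # 6
    replicas = scale(replicas, load_per_replica=0.2)  # demand drop -> scale in
    print(replicas)                                   # 2
```

The clamp to `max_replicas` is what makes capacity merely *appear* unlimited to the consumer: the provider's pool is finite, but far larger than any single tenant's configured ceiling.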
== History ==
The history of cloud computing extends to the 1960s, with the initial concepts of time-sharing becoming popularized via remote job entry (RJE). The "data center" model, where users submitted jobs to operators to run on mainframes, was predominantly used during this era. This was a time of exploration and experimentation with ways to make large-scale computing power available to more users through time-sharing, optimizing the infrastructure, platform, and applications, and increasing efficiency for end users.
The "cloud" metaphor for virtualized services dates to 1994, when it was used by General Magic for the universe of "places" that mobile agents in the Telescript environment could "go". The metaphor is credited to David Hoffman, a General Magic communications specialist, based on its long-standing use in networking and telecom. The expression cloud computing became more widely known in 1996 when Compaq Computer Corporation drew up a business plan for future computing and the Internet. The company's ambition was to supercharge sales with "cloud computing-enabled applications". The business plan foresaw that online consumer file storage would likely be commercially successful. As a result, Compaq decided to sell server hardware to internet service providers.
In the 2000s, the application of cloud computing began to take shape with the establishment of Amazon Web Services (AWS) in 2002, which allowed developers to build applications independently. In 2006, Amazon released the Amazon Simple Storage Service (Amazon S3) and the Amazon Elastic Compute Cloud (EC2). In 2008, NASA developed the first open-source software for deploying private and hybrid clouds.
The following decade saw the launch of various cloud services. In 2010, Microsoft launched Microsoft Azure, and Rackspace Hosting and NASA initiated an open-source cloud-software project, OpenStack. IBM introduced the IBM SmartCloud framework in 2011, and Oracle announced the Oracle Cloud in 2012. In December 2019, Amazon launched AWS Outposts, a service that extends AWS infrastructure, services, APIs, and tools to customer data centers, co-location spaces, or on-premises facilities.
== Value proposition ==
Cloud computing can enable shorter time to market by providing pre-configured tools, scalable resources, and managed services, allowing users to focus on their core business value instead of maintaining infrastructure. Cloud platforms can enable organizations and individuals to reduce upfront capital expenditures on physical infrastructure by shifting to an operational expenditure model, where costs scale with usage. Cloud platforms also offer managed services and tools, such as artificial intelligence, data analytics, and machine learning, which might otherwise require significant in-house expertise and infrastructure investment.
While cloud computing can offer cost advantages through effective resource optimization, organizations often face challenges such as unused resources, inefficient configurations, and hidden costs without proper oversight and governance. Many cloud platforms provide cost management tools, such as AWS Cost Explorer and Azure Cost Management, and frameworks like FinOps have emerged to standardize financial operations in the cloud. Cloud computing also facilitates collaboration, remote work, and global service delivery by enabling secure access to data and applications from any location with an internet connection.
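The CapEx-versus-OpEx trade-off described above can be made concrete with a break-even sketch. All prices and workload figures below are invented for illustration; real comparisons would also account for staffing, depreciation, and discount rates.

```python
# Break-even sketch for the CapEx (on-premises) vs. OpEx (cloud) trade-off.
# All prices and the workload profile are invented for illustration only.

def on_prem_cost(months: int, hardware_capex: float, monthly_opex: float) -> float:
    """Upfront hardware purchase plus ongoing power/maintenance costs."""
    return hardware_capex + months * monthly_opex

def cloud_cost(months: int, monthly_usage_hours: float, price_per_hour: float) -> float:
    """Pure pay-as-you-go: cost scales with usage, no upfront spend."""
    return months * monthly_usage_hours * price_per_hour

def breakeven_month(hardware_capex: float, monthly_opex: float,
                    monthly_usage_hours: float, price_per_hour: float):
    """First month at which on-premises becomes cheaper than cloud, if ever
    within a 10-year horizon; None means cloud stays cheaper throughout."""
    for m in range(1, 121):
        if on_prem_cost(m, hardware_capex, monthly_opex) < \
           cloud_cost(m, monthly_usage_hours, price_per_hour):
            return m
    return None

if __name__ == "__main__":
    # A steady 24/7 workload (720 h/month) favours owned hardware eventually...
    print(breakeven_month(20000, 300, 720, 1.0))  # 48
    # ...while a light, bursty workload may never reach break-even.
    print(breakeven_month(20000, 300, 100, 1.0))  # None
```

This is why the text above notes that variable or bursty workloads favour the cloud's OpEx model, while highly predictable, sustained workloads can justify owning infrastructure.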
Cloud providers offer various redundancy options for core services, such as managed storage and managed databases, though redundancy configurations often vary by service tier. Advanced redundancy strategies, such as cross-region replication or failover systems, typically require explicit configuration and may incur additional costs or licensing fees.
Cloud environments operate under a shared responsibility model, where providers are typically responsible for infrastructure security, physical hardware, and software updates, while customers are accountable for data encryption, identity and access management (IAM), and application-level security. These responsibilities vary depending on the cloud service model—Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS)—with customers typically having more control and responsibility in IaaS environments and progressively less in PaaS and SaaS models, often trading control for convenience and managed services.
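The shared responsibility split described above can be encoded as a small lookup table. The exact boundary varies by provider and service; this mapping is a simplified illustration, not any vendor's official responsibility matrix.

```python
# Simplified shared responsibility model across IaaS, PaaS, and SaaS.
# Illustrative only -- real provider matrices differ in detail.

RESPONSIBILITY = {
    # concern:              (IaaS,       PaaS,       SaaS)
    "physical hardware":    ("provider", "provider", "provider"),
    "virtualization layer": ("provider", "provider", "provider"),
    "operating system":     ("customer", "provider", "provider"),
    "runtime/middleware":   ("customer", "provider", "provider"),
    "application code":     ("customer", "customer", "provider"),
    "data & encryption":    ("customer", "customer", "customer"),
    "identity & access":    ("customer", "customer", "customer"),
}

MODELS = ("IaaS", "PaaS", "SaaS")

def responsible_party(concern: str, model: str) -> str:
    """Who owns a given concern under a given service model."""
    return RESPONSIBILITY[concern][MODELS.index(model)]

def customer_duties(model: str) -> list:
    """Concerns the customer still owns under a given service model."""
    return [c for c, parties in RESPONSIBILITY.items()
            if parties[MODELS.index(model)] == "customer"]

if __name__ == "__main__":
    print(responsible_party("operating system", "IaaS"))  # customer
    print(responsible_party("operating system", "PaaS"))  # provider
    # Customers keep fewer duties as they move from IaaS toward SaaS:
    print([len(customer_duties(m)) for m in MODELS])      # [5, 3, 2]
```

Note that data protection and identity management remain customer duties in every model: moving up the stack trades control for convenience, but never outsources accountability for the data itself.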
== Factors influencing the adoption and suitability of cloud computing ==
The decision to adopt cloud computing or maintain on-premises infrastructure depends on factors such as scalability, cost structure, latency requirements, regulatory constraints, and infrastructure customization.
Organizations with variable or unpredictable workloads, limited capital for upfront investments, or a focus on rapid scalability benefit from cloud adoption. Startups, SaaS companies, and e-commerce platforms often prefer the pay-as-you-go operational expenditure (OpEx) model of cloud infrastructure. Additionally, companies prioritizing global accessibility, remote workforce enablement, disaster recovery, and leveraging advanced services such as AI/ML and analytics are well-suited for the cloud. In recent years, some cloud providers have started offering specialized services for high-performance computing and low-latency applications, addressing some use cases previously exclusive to on-premises setups.
On the other hand, organizations with strict regulatory requirements, highly predictable workloads, or reliance on deeply integrated legacy systems may find cloud infrastructure less suitable. Businesses in industries like defense, government, or those handling highly sensitive data often favor on-premises setups for greater control and data sovereignty. Additionally, companies with ultra-low latency requirements, such as high-frequency trading (HFT) firms, rely on custom hardware (e.g., FPGAs) and physical proximity to exchanges, which most cloud providers cannot fully replicate despite recent advancements. Similarly, tech giants like Google, Meta, and Amazon build their own data centers due to economies of scale, predictable workloads, and the ability to customize hardware and network infrastructure for optimal efficiency. However, these companies also use cloud services selectively for certain workloads and applications where it aligns with their operational needs.
In practice, many organizations are increasingly adopting hybrid cloud architectures, combining on-premises infrastructure with cloud services. This approach allows businesses to balance scalability, cost-effectiveness, and control, offering the benefits of both deployment models while mitigating their respective limitations.
== Challenges and limitations ==
One of the main challenges of cloud computing, in comparison to more traditional on-premises computing, is data security and privacy. Cloud users entrust their sensitive data to third-party providers, who may not have adequate measures to protect it from unauthorized access, breaches, or leaks. Cloud users also face compliance risks if they have to adhere to certain regulations or standards regarding data protection, such as GDPR or HIPAA.
Another challenge of cloud computing is reduced visibility and control. Cloud users may not have full insight into how their cloud resources are managed, configured, or optimized by their providers. They may also have limited ability to customize or modify their cloud services according to their specific needs or preferences. Complete understanding of all technology may be impossible, especially given the scale, complexity, and deliberate opacity of contemporary systems; however, there is a need for understanding complex technologies and their interconnections to have power and agency within them. The metaphor of the cloud can be seen as problematic as cloud computing retains the aura of something noumenal and numinous; it is something experienced without precisely understanding what it is or how it works.
Additionally, cloud migration is a significant challenge. This process involves transferring data, applications, or workloads from one cloud environment to another, or from on-premises infrastructure to the cloud. Cloud migration can be complicated, time-consuming, and expensive, particularly when there are compatibility issues between different cloud platforms or architectures. If not carefully planned and executed, cloud migration can lead to downtime, reduced performance, or even data loss.
=== Cloud migration challenges ===
According to the 2024 State of the Cloud Report by Flexera, approximately 50% of respondents identified the following top challenges when migrating workloads to public clouds:
"Understanding application dependencies"
"Comparing on-premise and cloud costs"
"Assessing technical feasibility."
=== Implementation challenges ===
Applications hosted in the cloud are susceptible to the fallacies of distributed computing, a series of misconceptions that can lead to significant issues in software development and deployment.
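The first of these fallacies, "the network is reliable", is commonly mitigated with timeouts and retries. The following sketch is purely illustrative (the flaky call simulates a transient network failure; the function names are invented for the example):

```python
import time

def make_flaky_call(failures=2):
    """Simulate a remote call that fails transiently `failures` times."""
    state = {"attempts": 0}
    def call():
        state["attempts"] += 1
        if state["attempts"] <= failures:
            raise ConnectionError("transient network failure")
        return "ok"
    return call

def call_with_retries(fn, max_attempts=5, base_delay=0.01):
    """Retry with exponential backoff; re-raise after the final attempt."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # back off before retrying

result = call_with_retries(make_flaky_call())
print(result)  # ok
```

Code that assumes the call always succeeds on the first attempt exemplifies the fallacy; explicit retry policies make the failure mode visible.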
=== Cloud cost overruns ===
In a report by Gartner, a survey of 200 IT leaders revealed that 69% experienced budget overruns in their organizations' cloud expenditures during 2023. Conversely, 31% of IT leaders whose organizations stayed within budget attributed their success to accurate forecasting and budgeting, proactive monitoring of spending, and effective optimization.
The 2024 Flexera State of Cloud Report identifies the top cloud challenges as managing cloud spend, followed by security concerns and lack of expertise. Public cloud expenditures exceeded budgeted amounts by an average of 15%. The report also reveals that cost savings is the top cloud initiative for 60% of respondents. Furthermore, 65% measure cloud progress through cost savings, while 42% prioritize shorter time-to-market, indicating that cloud's promise of accelerated deployment is often overshadowed by cost concerns.
=== Service Level Agreements ===
Cloud providers' Service Level Agreements (SLAs) typically do not cover all forms of service interruption. Common exclusions include planned maintenance; downtime caused by external factors such as network issues; human errors such as misconfigurations; natural disasters and other force majeure events; and security breaches. Customers usually bear the responsibility of monitoring SLA compliance and must file claims for any unmet SLAs within a designated timeframe. They should also understand how deviations from SLAs are calculated, as these parameters may vary by service. These requirements can place a considerable burden on customers. SLA percentages and conditions can also differ across services within the same provider, and some services lack any SLA altogether. In cases of service interruption due to hardware failure at the cloud provider, the company typically does not offer monetary compensation; instead, eligible users may receive credits as outlined in the corresponding SLA.
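The arithmetic behind such agreements can be sketched as follows. This is an illustrative model only; the uptime target and the tiered credit schedule are invented and do not reflect any particular provider's terms:

```python
def allowed_downtime_minutes(sla_percent, days_in_month=30):
    """Minutes of downtime per month still within the SLA target."""
    total_minutes = days_in_month * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

def service_credit_percent(measured_uptime):
    """Hypothetical tiered credit schedule, modeled on common SLA structures."""
    if measured_uptime >= 99.9:
        return 0        # SLA met: no credit
    if measured_uptime >= 99.0:
        return 10       # minor breach
    if measured_uptime >= 95.0:
        return 25       # larger breach
    return 100          # severe breach

print(round(allowed_downtime_minutes(99.9), 1))  # 43.2 minutes in a 30-day month
print(service_credit_percent(98.7))              # 25
```

A "three nines" (99.9%) monthly target thus permits roughly three quarters of an hour of downtime per month before any credit is owed, which is why the exclusions listed above matter so much in practice.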
=== Leaky abstractions ===
Cloud computing abstractions aim to simplify resource management, but leaky abstractions can expose underlying complexities. These variations in abstraction quality depend on the cloud vendor, service and architecture. Mitigating leaky abstractions requires users to understand the implementation details and limitations of the cloud services they utilize.
=== Service lock-in within the same vendor ===
Service lock-in within the same vendor occurs when a customer becomes dependent on specific services within a cloud vendor, making it challenging to switch to alternative services within the same vendor when their needs change.
=== Security and privacy ===
Cloud computing poses privacy concerns because the service provider can access the data that is in the cloud at any time. It could accidentally or deliberately alter or delete information. Many cloud providers can share information with third parties if necessary for purposes of law and order without a warrant. That is permitted in their privacy policies, which users must agree to before they start using cloud services. Solutions to privacy include policy and legislation as well as end-users' choices for how data is stored. Users can encrypt data that is processed or stored within the cloud to prevent unauthorized access. Identity management systems can also provide practical solutions to privacy concerns in cloud computing. These systems distinguish between authorized and unauthorized users and determine the amount of data that is accessible to each entity. The systems work by creating and describing identities, recording activities, and getting rid of unused identities.
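The identity-management functions described above (creating identities, limiting the data accessible to each entity, recording activity, and removing unused identities) can be sketched minimally as follows. The class and names are illustrative, not a real identity-management API:

```python
from datetime import datetime, timezone

class IdentityStore:
    """Toy identity-management sketch: per-identity permissions plus an audit log."""

    def __init__(self):
        self.identities = {}   # identity -> set of resources it may access
        self.audit_log = []    # recorded access attempts

    def create_identity(self, user, resources):
        self.identities[user] = set(resources)

    def can_access(self, user, resource):
        """Distinguish authorized from unauthorized access and record the attempt."""
        allowed = resource in self.identities.get(user, set())
        self.audit_log.append((datetime.now(timezone.utc), user, resource, allowed))
        return allowed

    def remove_unused(self, active_users):
        """Get rid of identities no longer in use."""
        self.identities = {u: r for u, r in self.identities.items()
                           if u in active_users}

store = IdentityStore()
store.create_identity("alice", ["billing-db"])
print(store.can_access("alice", "billing-db"))    # True
print(store.can_access("mallory", "billing-db"))  # False
```

Real systems add authentication, federation, and fine-grained policies, but the core decision (is this identity permitted to see this data, and was the attempt logged?) is the same.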
According to the Cloud Security Alliance, the top three threats in the cloud are Insecure Interfaces and APIs, Data Loss & Leakage, and Hardware Failure, which accounted for 29%, 25% and 10% of all cloud security outages respectively. Together, these form shared technology vulnerabilities. On a cloud provider platform shared by different users, information belonging to different customers may reside on the same data server. Additionally, Eugene Schultz, chief technology officer at Emagined Security, said that hackers are spending substantial time and effort looking for ways to penetrate the cloud. "There are some real Achilles' heels in the cloud infrastructure that are making big holes for the bad guys to get into". Because data from hundreds or thousands of companies can be stored on large cloud servers, hackers can theoretically gain control of huge stores of information through a single attack, a process he called "hyperjacking". Examples include the Dropbox security breach and the 2014 iCloud leak. Dropbox was breached in October 2014, when hackers stole more than seven million user passwords in an attempt to monetize them for Bitcoin (BTC). With these passwords, attackers can read private data and have that data indexed by search engines, making the information public.
There is also the problem of legal ownership of the data (if a user stores some data in the cloud, can the cloud provider profit from it?). Many terms-of-service agreements are silent on the question of ownership. Physical control of the computer equipment (private cloud) is more secure than having the equipment off-site and under someone else's control (public cloud). This creates a strong incentive for public cloud computing service providers to prioritize building and maintaining strong management of secure services. Some small businesses that lack expertise in IT security may nonetheless find that a public cloud is more secure for them. There is also the risk that end users do not understand the issues involved when signing up for a cloud service (users often do not read the many pages of the terms of service agreement and simply click "Accept" without reading). This matters now that cloud computing is common and required for some services to work, for example for an intelligent personal assistant such as Apple's Siri or Google Assistant. Fundamentally, the private cloud is seen as more secure and as giving the owner higher levels of control, while the public cloud is seen as more flexible and as requiring less investment of time and money from the user.
Attacks on cloud computing systems include man-in-the-middle attacks, phishing attacks, authentication attacks, and malware attacks. Malware attacks, such as Trojan horses, are considered one of the largest threats. Research conducted in 2022 revealed that the Trojan horse injection method is a serious problem with harmful impacts on cloud computing systems.
== Service models ==
The National Institute of Standards and Technology recognized three cloud service models in 2011: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The International Organization for Standardization (ISO) later identified additional models in 2023, including "Network as a Service", "Communications as a Service", "Compute as a Service", and "Data Storage as a Service".
=== Infrastructure as a service (IaaS) ===
Infrastructure as a service (IaaS) refers to online services that provide high-level APIs used to abstract various low-level details of underlying network infrastructure like physical computing resources, location, data partitioning, scaling, security, backup, etc. A hypervisor runs the virtual machines as guests. Pools of hypervisors within the cloud operational system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements. Linux containers run in isolated partitions of a single Linux kernel running directly on the physical hardware. Linux cgroups and namespaces are the underlying Linux kernel technologies used to isolate, secure and manage the containers. The use of containers offers higher performance than virtualization because there is no hypervisor overhead. IaaS clouds often offer additional resources such as a virtual-machine disk-image library, raw block storage, file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.
The NIST's definition of cloud computing describes IaaS as "where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls)."
IaaS-cloud providers supply these resources on-demand from their large pools of equipment installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks). To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the number of resources allocated and consumed.
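Utility-style billing can be illustrated with a short calculation: cost is the product of the rate for each resource, the quantity allocated, and the time consumed. The rates below are invented for the example and do not reflect any provider's actual pricing:

```python
# Hypothetical hourly rates (USD); purely illustrative.
RATES_PER_HOUR = {
    "vm.small": 0.05,          # per instance-hour
    "vm.large": 0.40,
    "block_storage_gb": 0.0001,  # per GB-hour
}

def monthly_bill(usage):
    """usage: list of (resource, quantity, hours) tuples."""
    return sum(RATES_PER_HOUR[r] * qty * hours for r, qty, hours in usage)

bill = monthly_bill([
    ("vm.small", 3, 720),           # three small VMs for a 30-day month
    ("block_storage_gb", 500, 720), # 500 GB of block storage, all month
])
print(round(bill, 2))  # 144.0
```

The key property is that releasing a resource stops its meter, which is what distinguishes utility billing from fixed capital expenditure on owned hardware.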
=== Platform as a service (PaaS) ===
The NIST's definition of cloud computing defines Platform as a Service as:
The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.
PaaS vendors offer a development environment to application developers. The provider typically develops toolkits and standards for development, and channels for distribution and payment. In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, a programming-language execution environment, a database, and a web server. Application developers develop and run their software on a cloud platform instead of directly buying and managing the underlying hardware and software layers. With some PaaS offerings, the underlying compute and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually.
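The automatic scaling mentioned above can be sketched as a simple utilization-driven policy. The target utilization and capacity model here are invented for illustration; real platforms use richer signals (queue depth, latency, scheduled scaling):

```python
import math

def autoscale(instances, load_per_instance, target=0.7, min_instances=1):
    """Return a new instance count aiming for the target utilization.

    Each instance is assumed to have a capacity of 1.0 load unit;
    the platform, not the user, applies this adjustment.
    """
    total_load = instances * load_per_instance
    desired = math.ceil(total_load / target)
    return max(min_instances, desired)

print(autoscale(4, 0.9))  # 6  (overloaded: scale out)
print(autoscale(4, 0.2))  # 2  (underused: scale in)
```

The point of the model is that the consumer never issues these adjustments; the platform observes load and converges on the target on its own.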
Some integration and data management providers also use specialized applications of PaaS as delivery models for data. Examples include iPaaS (Integration Platform as a Service) and dPaaS (Data Platform as a Service). iPaaS enables customers to develop, execute and govern integration flows. Under the iPaaS integration model, customers drive the development and deployment of integrations without installing or managing any hardware or middleware. dPaaS delivers integration—and data-management—products as a fully managed service. Under the dPaaS model, the PaaS provider, not the customer, manages the development and execution of programs by building data applications for the customer. dPaaS users access data through data-visualization tools.
=== Software as a service (SaaS) ===
The NIST's definition of cloud computing defines Software as a Service as:
The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
In the software as a service (SaaS) model, users gain access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis or using a subscription fee. In the SaaS model, cloud providers install and operate application software in the cloud and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications differ from other applications in their scalability—which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access-point. To accommodate a large number of cloud users, cloud applications can be multitenant, meaning that any machine may serve more than one cloud-user organization.
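The load-balancing step described above can be sketched with a round-robin policy over cloned instances. This is a deliberately minimal illustration (real balancers also track health and load); the instance names are invented:

```python
import itertools

class LoadBalancer:
    """Single access point that spreads requests over cloned VM instances."""

    def __init__(self, instances):
        self._pool = itertools.cycle(instances)  # simple round-robin policy

    def route(self, request):
        instance = next(self._pool)
        return f"{instance} handled {request}"

lb = LoadBalancer(["vm-1", "vm-2", "vm-3"])
results = [lb.route(f"req-{i}") for i in range(4)]
print(results[0])  # vm-1 handled req-0
print(results[3])  # vm-1 handled req-3 (the cycle wraps around)
```

From the cloud user's perspective only the balancer's address is visible; adding or removing clones at run time changes capacity without changing that single access point.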
The pricing model for SaaS applications is typically a monthly or yearly flat fee per user, so pricing scales as users are added or removed; it may also be free. Proponents claim that SaaS gives a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and personnel expenses, towards meeting other goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS is that the users' data are stored on the cloud provider's server, where unauthorized access becomes possible. Examples of applications offered as SaaS are games and productivity software like Google Docs and Office Online. SaaS applications may be integrated with cloud storage or file-hosting services, as with Google Docs and Google Drive, and Office Online and OneDrive.
=== Serverless computing ===
Serverless computing allows customers to use various cloud capabilities without the need to provision, deploy, or manage hardware or software resources, apart from providing their application code or data. ISO/IEC 22123-2:2023 classifies serverless alongside Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) under the broader category of cloud service categories. Notably, while ISO refers to these classifications as cloud service categories, the National Institute of Standards and Technology (NIST) refers to them as service models.
== Deployment models ==
"A cloud deployment model represents the way in which cloud computing can be organized based on the control and sharing of physical or virtual resources." Cloud deployment models define the fundamental patterns of interaction between cloud customers and cloud providers. They do not detail implementation specifics or the configuration of resources.
=== Private ===
Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted either internally or externally. Undertaking a private cloud project requires significant engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. It can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities. Self-run data centers are generally capital intensive. They have a significant physical footprint, requiring allocations of space, hardware, and environmental controls. These assets have to be refreshed periodically, resulting in additional capital expenditures. They have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".
=== Public ===
Cloud services are considered "public" when they are delivered over the public Internet, and they may be offered as a paid subscription, or free of charge. Architecturally, there are few differences between public- and private-cloud services, but security concerns increase substantially when services (applications, storage, and other resources) are shared by multiple customers. Most public-cloud providers offer direct-connection services that allow customers to securely link their legacy data centers to their cloud-resident applications.
Several factors influence the decision of enterprises and organizations to choose between a public cloud and an on-premises solution, including the functionality of the solutions, cost, integration and organizational aspects, and safety and security.
=== Hybrid ===
Hybrid cloud is a composition of a public cloud and a private environment, such as a private cloud or on-premises resources, that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect collocation, managed and/or dedicated services with cloud resources. Gartner defines a hybrid cloud service as a cloud computing service that is composed of some combination of private, public and community cloud services, from different service providers. A hybrid cloud service crosses isolation and provider boundaries so that it cannot be simply put in one category of private, public, or community cloud service. It allows one to extend either the capacity or the capability of a cloud service, by aggregation, integration or customization with another cloud service.
Varied use cases for hybrid cloud composition exist. For example, an organization may store sensitive client data in house on a private cloud application, but interconnect that application to a business intelligence application provided on a public cloud as a software service. This example of hybrid cloud extends the capabilities of the enterprise to deliver a specific business service through the addition of externally available public cloud services. Hybrid cloud adoption depends on a number of factors such as data security and compliance requirements, level of control needed over data, and the applications an organization uses.
Another example of hybrid cloud is one where IT organizations use public cloud computing resources to meet temporary capacity needs that can not be met by the private cloud. This capability enables hybrid clouds to employ cloud bursting for scaling across clouds. Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization pays for extra compute resources only when they are needed. Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and use cloud resources from public or private clouds, during spikes in processing demands.
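The bursting decision described above amounts to a capacity split: serve demand from the private cloud up to its capacity and overflow the remainder to a public cloud. The capacity figure and workload trace below are invented for illustration:

```python
PRIVATE_CAPACITY = 100  # compute units the in-house infrastructure is sized for

def place_workload(demand):
    """Split demand between the private cloud and a public-cloud burst."""
    private = min(demand, PRIVATE_CAPACITY)
    burst = max(0, demand - PRIVATE_CAPACITY)
    return private, burst

for demand in [80, 100, 150]:
    private, burst = place_workload(demand)
    print(demand, "->", private, "private,", burst, "burst")
```

Sized this way, the organization pays capital costs only for the average load and pays the public provider only during the spikes above it.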
=== Community ===
Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third party, and hosted internally or externally. The costs are spread over fewer users than in a public cloud (but over more than in a private cloud), so only a portion of the potential cost savings of cloud computing is achieved.
=== Multi cloud ===
According to ISO/IEC 22123-1: "multi-cloud is a cloud deployment model in which a customer uses public cloud services provided by two or more cloud service providers". Poly cloud refers to the use of multiple public clouds for the purpose of leveraging specific services that each provider offers. It differs from multi-cloud in that it is not designed to increase flexibility or to mitigate failures, but rather to allow an organization to achieve more than it could with a single provider.
== Market ==
According to International Data Corporation (IDC), global spending on cloud computing services has reached $706 billion and is expected to reach $1.3 trillion by 2025. Gartner estimated that global public cloud services end-user spending would reach $600 billion by 2023. According to a McKinsey & Company report, cloud cost-optimization levers and value-oriented business use cases foresee more than $1 trillion in run-rate EBITDA across Fortune 500 companies as up for grabs in 2030. In 2022, more than $1.3 trillion in enterprise IT spending was at stake from the shift to the cloud, growing to almost $1.8 trillion in 2025, according to Gartner.
The European Commission's 2012 Communication identified several issues which were impeding the development of the cloud computing market:
fragmentation of the digital single market across the EU
concerns about contracts including reservations about data access and ownership, data portability, and change control
variations in standards applicable to cloud computing
The Communication set out a series of "digital agenda actions" which the Commission proposed to undertake in order to support the development of a fair and effective market for cloud computing services.
== Cloud computing vendors ==
As of 2025, the three largest cloud computing providers by market share, commonly referred to as hyperscalers, are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. These companies dominate the global cloud market due to their extensive infrastructure, broad service offerings, and scalability.
In recent years, organizations have increasingly adopted alternative cloud providers, which offer specialized services that distinguish them from hyperscalers. These providers may offer advantages such as lower costs, improved cost transparency and predictability, enhanced data sovereignty (particularly within regions such as the European Union to comply with regulations like the General Data Protection Regulation (GDPR)), stronger alignment with local regulatory requirements, or industry-specific services.
Alternative cloud providers are often part of multi-cloud strategies, where organizations use multiple cloud services—both from hyperscalers and specialized providers—to optimize performance, compliance, and cost efficiency. However, they do not necessarily serve as direct replacements for hyperscalers, as their offerings are typically more specialized.
=== Alternative cloud providers ===
Several alternative cloud providers offer specialized services that differentiate them from hyperscalers.
CoreWeave – Specializes in high-performance computing and GPU-accelerated workloads, often used for AI, graphics rendering, and scientific simulations.
Scaleway – A French cloud provider offering infrastructure-as-a-service (IaaS) solutions, with a focus on GDPR compliance and regional data sovereignty.
MongoDB Atlas – A fully managed database-as-a-service (DBaaS) that allows users to deploy and operate MongoDB databases on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.
OVHcloud – A France-based cloud provider known for its emphasis on data sovereignty, offering private cloud, dedicated servers, and European-hosted cloud solutions.
Lambda Labs – Provides GPU cloud computing tailored for AI research, deep learning, and machine learning development.
Paperspace – Specializes in cloud computing solutions for AI and machine learning, with a focus on scalable GPU access.
RunPod – Offers cloud computing infrastructure optimized for AI applications and model deployment.
== Similar concepts ==
The goal of cloud computing is to allow users to benefit from all of these technologies without needing deep knowledge about or expertise with each of them. The cloud aims to cut costs and to help users focus on their core business instead of being impeded by IT obstacles. The main enabling technology for cloud computing is virtualization. Virtualization software separates a physical computing device into one or more "virtual" devices, each of which can be easily used and managed to perform computing tasks. With operating system–level virtualization essentially creating a scalable system of multiple independent computing devices, idle computing resources can be allocated and used more efficiently. Virtualization provides the agility required to speed up IT operations and reduces cost by increasing infrastructure utilization. Autonomic computing automates the process through which the user can provision resources on demand. By minimizing user involvement, automation speeds up the process, reduces labor costs, and reduces the possibility of human error.
Cloud computing uses concepts from utility computing to provide metrics for the services used. Cloud computing attempts to address QoS (quality of service) and reliability problems of other grid computing models.
Cloud computing shares characteristics with:
Client–server model – Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).
Computer bureau – A service bureau providing computer services, particularly from the 1960s to 1980s.
Grid computing – A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
Fog computing – Distributed computing paradigm that provides data, compute, storage and application services closer to the client or near-user edge devices, such as network routers. Furthermore, fog computing handles data at the network level, on smart devices and on the end-user client-side (e.g. mobile devices), instead of sending data to a remote location for processing.
Utility computing – The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."
Peer-to-peer – A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client-server model).
Cloud sandbox – A live, isolated computer environment in which a program, code or file can run without affecting the application in which it runs.
== See also ==
== Notes ==
== References ==
== Further reading ==
Millard, Christopher (2013). Cloud Computing Law. Oxford University Press. ISBN 978-0-19-967168-7.
Weisser, Alexander (2020). International Taxation of Cloud Computing. Editions Juridiques Libres. ISBN 978-2-88954-030-3.
Singh, Jatinder; Powles, Julia; Pasquier, Thomas; Bacon, Jean (July 2015). "Data Flow Management and Compliance in Cloud Computing". IEEE Cloud Computing. 2 (4): 24–32. doi:10.1109/MCC.2015.69. S2CID 9812531.
Armbrust, Michael; Stoica, Ion; Zaharia, Matei; Fox, Armando; Griffith, Rean; Joseph, Anthony D.; Katz, Randy; Konwinski, Andy; Lee, Gunho; Patterson, David; Rabkin, Ariel (1 April 2010). "A view of cloud computing". Communications of the ACM. 53 (4): 50. doi:10.1145/1721654.1721672. S2CID 1673644.
Hu, Tung-Hui (2015). A Prehistory of the Cloud. MIT Press. ISBN 978-0-262-02951-3.
Mell, P. (2011, September). The NIST Definition of Cloud Computing. Retrieved November 1, 2015, from National Institute of Standards and Technology website
Media related to Cloud computing at Wikimedia Commons |
Clément Ader | Clément Ader (French pronunciation: [klemɑ̃ adɛʁ]; 2 April 1841 – 3 May 1925) was a French inventor and engineer who was born near Toulouse in Muret, Haute-Garonne, and died in Toulouse. He is remembered primarily for his pioneering work in aviation. In 1870 he was also one of the pioneers in the sport of cycling in France.
== Electrical and mechanical inventions ==
Ader was an innovator in electrical and mechanical engineering. He originally studied electrical engineering, and in 1878 improved on the telephone invented by Alexander Graham Bell. After this he established the telephone network in Paris in 1880. In 1881, he invented the théâtrophone, a system of telephonic transmission where listeners received a separate channel for each ear, enabling stereophonic perception of the actors on a set; it was this invention which gave the first stereo transmission of opera performances, over a distance of 2 miles (3 km) in 1881. In 1903, he devised a V8 engine for the Paris–Madrid race, but although three or four were produced, none was sold.
== Aircraft prototypes ==
Following his work with V8 engines, Ader turned to the problem of mechanical flight, and until the end of his life he devoted much time and money to it. Drawing on the studies of Louis Pierre Mouillard (1834–1897) on the flight of birds, he constructed his first flying machine in 1886, the Ader Éole. It was a bat-like design powered by a lightweight four-cylinder steam engine of his own invention, rated at 20 hp (15 kW) and driving a four-blade propeller. The engine weighed 51 kg (112 lb), the wings had a span of 14 m (46 ft), and the all-up weight was 300 kg (660 lb). On 9 October 1890 Ader attempted to fly the Éole. Aviation historians credit this effort as a powered take-off and uncontrolled flight in ground effect of approximately 50 m (160 ft), at a height of approximately 20 centimetres (8 in). Ader himself also claimed credit for getting off the ground in the Éole.
Ader began construction of a second aircraft he called the Avion II, also referred to as the Zephyr or Éole II. Most sources agree that work on this aircraft was never completed, and it was abandoned in favour of the Avion III. Ader's later claim that he flew the Avion II in August 1892 for a distance of 100 m (330 ft) in Satory near Paris, was never widely accepted.
Ader's progress attracted the interest of the minister of war, Charles de Freycinet. With the backing of the French War Office, Ader developed and constructed the Avion III. It resembled an enormous bat made of linen and wood, with a 15 m (48 ft) wingspan, equipped with two four-bladed tractor propellers, each powered by a steam engine of 30 hp (22 kW). Using a circular track at Satory, Ader carried out taxiing trials on 12 October 1897 and two days later attempted a flight. After a short run the machine was caught by a gust of wind, slewed off the track, and came to a stop. After this the French army withdrew its funding, but kept the results secret. In November 1910 the commission released the official reports on Ader's attempted flights, stating that they had been unsuccessful.
== Book on aviation ==
Clément Ader remained an active proponent of the development of aviation. In 1909 he published L'Aviation Militaire, a very popular book which went through 10 editions in the five years before the First World War. It is notable for its vision of aerial warfare and for its foreseeing the form of the modern aircraft carrier, with a flat flight deck, an island superstructure, deck elevators and a hangar bay. His idea for an aircraft carrier was relayed by the US naval attaché in Paris and was followed by the first trials in the United States in November 1910.
An airplane-carrying vessel is indispensable. These vessels will be constructed on a plan very different from what is currently used. First of all the deck will be cleared of all obstacles. It will be flat, as wide as possible without jeopardizing the nautical lines of the hull, and it will look like a landing field.
== Influence ==
Ader is still admired for his early powered flight efforts, and his aircraft gave the French language the word avion for a heavier-than-air aircraft. In 1938, France issued a postage stamp honoring him. Airbus named one of its aircraft assembly sites in Toulouse after him. Clément Ader has been referred to as "the father of aviation".
== See also ==
Early flying machines
== Notes ==
== Further reading ==
Lissarague, Pierre, Ader, Inventor of Aeroplanes. Toulouse, France: Editions Privat, 1990
Lissarague, Pierre, Pegasus, the magazine of Musée de l'Air, Nos. 52, 56, 58. (Articles on restoration of Ader's Aeroplane 3 and on the testing of engines and propellers.)
Pierre Lissarague and J. Forestier, Icarus, an article on restoration of Ader's Avion 3, No. 134 (October 1990).
Gibbs-Smith, Charles Harvard (1968). Clément Ader: His Flight-Claims and His Place in History. London: Her Majesty's Stationery Office. p. 214.
Gibbs-Smith C.H., Aviation: An Historical Survey. London, NMSI, 2008. ISBN 1-900747-52-9
== External links ==
Media related to Clément Ader at Wikimedia Commons
Works by or about Clément Ader at the Internet Archive
Clement Ader
Clement Ader's flying machines
Clement Ader's first flight in 1879 ? |
Comac | The Commercial Aircraft Corporation of China, Ltd. (Comac, sometimes stylized as COMAC, Chinese: 中国商用飞机有限责任公司) is a Chinese state-owned aerospace manufacturer established on 11 May 2008 in Shanghai. The headquarters are in Pudong, Shanghai. The company has a registered capital of RMB 19 billion (US$2.7 billion as of May 2008). The corporation is a designer and constructor of large passenger aircraft with capacities of over 150 passengers.
The first aircraft marketed by Comac is the ARJ21 regional jet, which was developed by China Aviation Industry Corporation I (AVIC I). This was followed by the C919 narrow-body aircraft, which can seat up to 168 passengers and made its maiden flight in 2017, entering into commercial service in March 2023.
== History ==
=== Origins ===
The Commercial Aircraft Corporation of China (Comac) was established on 11 May 2008 in Shanghai. It was established jointly by Aviation Industry Corporation of China (AVIC), Aluminum Corporation of China, Baosteel Group Corporation, Sinochem Group, Shanghai Guosheng Corporation Limited, and State-owned Assets Supervision and Administration Commission.
=== U.S. sanctions ===
In January 2021, the United States government named Comac as a company "owned or controlled" by the People's Liberation Army (PLA) and thereby prohibited any American company or individual from investing in it. In January 2025, Comac was added to a United States Department of Defense list of companies that allegedly work with the PLA.
== Products ==
Beginning with the C919, Comac has named its commercial airliners in the form C9X9. In November 2024, Comac rebranded the ARJ21 as the C909 to match the format of the other models.
=== Orders and deliveries ===
As of March 2025.
== Collaborations ==
=== Bombardier ===
On 24 March 2011, Comac and the Canadian company Bombardier Inc. signed a framework agreement for a long-term strategic cooperation on commercial aircraft.
In May 2017, Bombardier and Comac began holding talks about an investment into Bombardier's passenger jet business.
=== Boeing ===
On 23 September 2015, Boeing announced plans to build a Boeing 737 completion and finishing plant in China. The facility will be used to paint exteriors and install interiors into airframes built in the United States. The joint-venture plant will be located in Zhoushan, Zhejiang.
=== Ryanair ===
In June 2011 Comac and Irish low-cost airline Ryanair signed an agreement to cooperate on the development of the C919, a 200-seat narrow-body commercial jet which will compete with the Boeing 737 and Airbus A320.
=== UAC ===
China-Russia Commercial Aircraft International Co. Ltd. (CRAIC), a joint venture company invested by Comac and Russia's United Aircraft Corporation (UAC) responsible for the development of a wide-body commercial jet, was established in Shanghai on 22 May 2017. Research and development for the new plane was to be conducted in Moscow, with aircraft to be assembled in Shanghai. Subsequently the partnership was dropped, and by November 2023 Comac announced that it would develop the aircraft model (since rebranded C929) on its own.
== See also ==
List of civil aircraft
List of aircraft produced by China
Aero Engine Corporation of China (AECC)
== References ==
== External links ==
Official website (in Chinese)
Official website (in English) |
Comac ARJ21 | The Comac C909, originally known as the ARJ21 Xiangfeng (Chinese: 翔凤; pinyin: xiángfèng; lit. 'Soaring Phoenix'), is a 78–90 seat regional jet manufactured by the Chinese state-owned aerospace company Comac.
Development of the ARJ21 began in March 2002, led by the state-owned ACAC consortium. The first prototype was rolled out on 21 December 2007, and made its maiden flight on 28 November 2008 from Shanghai. It received its CAAC Type Certification on 30 December 2014 and was introduced on 28 June 2016 by Chengdu Airlines. The ACAC consortium was reorganized in 2009 as part of Comac and the jet was rebranded as the C909 in November 2024.
It features a 25° swept, supercritical wing designed by Antonov and twin rear-mounted General Electric CF34 engines. By 2025, 172 airframes had been delivered.
== Development ==
In 1985, Shanghai Aircraft Manufacturing Company, now a part of Comac, launched a "troubled" partnership with McDonnell Douglas to co-produce the MD-80, a similar-looking small jet aircraft. After producing 20 MD-80s, the joint venture eventually collapsed, but China refused to return the tooling used. Western analysts state that the ARJ21 is "heavily derived" from the MD-80, including its 1980s-era airframe. However, Chinese state media claim that the ARJ21 is an indigenous design.
The development of the ARJ21 (Advanced Regional Jet) was a key project in the "10th Five-Year Plan" of China. The project officially began in March 2002 and was led by the state-owned ACAC consortium. The maiden flight of the ARJ21 was initially planned to take place in 2005 with commercial service beginning 18 months later. The programme became eight years behind schedule.
The design work was delayed and the final trial production stage did not begin until June 2006.
The first prototype (serial number 101) rolled out on 21 December 2007, with a maiden flight on 28 November 2008 at Shanghai's Dachang Airfield. The aircraft completed a long-distance test flight on 15 July 2009, flying from Shanghai to Xi'an in 2 hours 19 minutes, over a distance of 1,300 km. The second ARJ21 (serial number 102) completed the same test flight route on 24 August 2009. The third aircraft (serial number 103) similarly completed its first test flight on 12 September 2009. The fourth aircraft (CN 104) flew by November 2010. By August 2011, static, flutter and crosswind flight tests had been completed.
The ACAC consortium was reorganized in 2009 and became a part of COMAC.
=== Key flight tests and CAAC certification ===
AC104 returned to China on 28 April 2014, after completing natural-icing tests in North America. This was the first time a turbofan-powered regional jet independently developed by China had flown abroad to carry out flight tests in special weather conditions. At the same time, other flight-test aircraft covered more than 30,000 km across Asia, America, Europe, and the Pacific and Atlantic oceans. Natural-icing tests are required for airworthiness certification, and conducting these tests outside China showed it was feasible to do certification tests for civil aircraft in other countries.
The first production aircraft flew on 18 June 2014, and AC104 completed an airspeed calibration flight on 30 October. Route-proving started on 29 October 2014, and AC105 made 83 flights between ten airports in Chengdu, Guiyang, Guilin, Haikou, Fuzhou, Zhoushan, Tianjin, Shijiazhuang, Yinchuan and Xianyang. The cumulative flight time was 173 hours and 55 minutes. By November 2014, AC104 had completed 711 flights in 1,442 hours and 23 minutes. Certification tests included stall, high-speed, noise and simulated and natural icing. AC105 returned to Yanliang airport on 16 December 2014, from Xi'an Xianyang International Airport after the last function and reliability flight. This completed the testing for the ARJ21-700 airworthiness certificate.
The ARJ21-700 received its Type Certification under Chapter 25 of the Chinese civil aviation regulations from the Civil Aviation Administration of China (CAAC), on 30 December 2014. The certification program for the CAAC required 5,000 hours.
An ARJ21-700 completed a final demonstration flight on 12 September 2015 before being delivered to a customer.
=== Introduction ===
On 29 November 2015, COMAC delivered the first ARJ21-700 to Chengdu Airlines. The first commercial flight took off from Chengdu Shuangliu Airport on 28 June 2016, landing in Shanghai two hours later, one day after its commercial flight was approved by the CAAC. During the summer schedule period of 2016, i.e. until 29 October 2016, the ARJ21-700 was scheduled to operate three weekly rotations between Chengdu and Shanghai Hongqiao. 85 flight segments were operated by ARJ21 (81 by B-3321, four by B-3322).
=== Further developments ===
In June 2018 an ARJ21-700+ was proposed for 2021 with weight and drag reductions. Subsequently, a -900 stretch version was designed to accommodate 115 all-economy seats, similar to the Bombardier CRJ900, Embraer E175-E2 or Mitsubishi MRJ90.
Structurally conservative and designed for hot and high operations, the ARJ21's 25 t (55,000 lb) empty weight is higher than initially targeted in 2002, and also higher than competing aircraft. In 2018 an executive version was in final assembly and a cargo variant was proposed.
=== Freighter conversion program ===
The ARJ21 COMAC Converted Freighter (CCF) conversion program began in May 2020; the type certification and testing program was completed in December 2022 and the type certified by the CAAC on 1 January 2023.
The first two ARJ21 converted freighters (B-3329 and B-3388) were delivered to customers on 30 October 2023. The two airframes were initially delivered to Chengdu Airlines in 2018 in the passenger configuration and were subsequently withdrawn for the CCF program in 2021. Airframe B-3329 was handed over to YTO Cargo Airlines which intends to operate the type on short-haul international routes while airframe B-3388 was delivered to Air Central (based in Zhengzhou, China) for flights on domestic routes. The converted freighters have a maximum payload capacity of 10 tonnes and a range of about 1500 nautical miles (2780 km).
=== Production ===
In early July 2017, the CAAC certified the ARJ21 for mass production. On 6 March 2020, the first ARJ21 assembled on the second production line in Pudong took its first production test flight. The second production line, with a production capacity of up to 30 jets a year, is located at the same facility that assembles the C919.
=== Rebranding ===
In October 2024, images of an ARJ21 in C909 livery emerged. Comac officially announced the rebranding at the Zhuhai Air Show in November 2024. This brings the naming in line with the convention of Comac's other two programmes, the C919 and C929.
== Design ==
Several sources have noted that the ARJ21 closely resembles the McDonnell Douglas MD-80 and the MD-90, which were produced under licence in China. Comac states that the ARJ21 is a completely indigenous design. The ARJ21's development did depend heavily on foreign suppliers, including engines and avionics from the United States. The ARJ21 has a new supercritical wing designed by Antonov with a sweepback of 25 degrees and winglets. Some of China's supercomputers have been used to design parts for the ARJ21.
=== Frame ===
Members of the ACAC consortium, which was formed to develop the aircraft, will manufacture major framework components of the aircraft:
Chengdu Aircraft Industry Group: construction of the nose
Xi'an Aircraft Company: construction of the wings and fuselage; wing designed by Antonov
Shenyang Aircraft Corporation: construction of the empennage
Shanghai Aircraft Company: final assembly
=== Engine ===
The COMAC C909 is powered by General Electric CF34 turbofans, an engine family also widely used on other regional jets such as the Mitsubishi CRJ series and Embraer E-Jets.
=== Avionics ===
COMAC chose the Collins Aerospace Pro Line 21 integrated avionics system (IAS) as the C909's flight-deck avionics solution. Collins also supplies the FMS-4200 flight management system (FMS), which is likewise found on CRJ550/700/900/1000 regional aircraft, as well as the weather radar.
== Variants ==
=== C909 STD ===
The C909 STD is the baseline variant of the C909 family.
=== C909 ER ===
The C909 ER is the extended-range variant of the C909 family. It has an increased maximum takeoff weight (MTOW), maximum landing weight (MLW), and maximum taxi weight (MTW) compared with the C909 STD, expanding the aircraft's performance envelope and range capability without the need to install auxiliary fuel tanks (ACT).
==== C909 EMJ ====
On 20 February 2024, it was reported that the Chinese state-owned Henan Civil Aviation Development and Investment Group had ordered six C909 variants, including the C909 EMJ.
==== C909 CCF ====
The C909 CCF (COMAC Converted Freighter) is designed with a maximum payload of 9,467 kg and is compatible with PMC, PAG and AKE cargo containers. The first aircraft began conversion on 22 December 2022 at GAMECO in Guangzhou, China. The first batch of conversions involves two C909 ER (ARJ21-700 ER) aircraft originally built for and operated by Chengdu Airlines and returned to COMAC in 2021.
== Operators ==
As of April 2025, there were 146 aircraft in commercial service.
By April 2025, 176 aircraft had been delivered to customers.
=== Orders and deliveries ===
As of 9 April 2025, Comac had 386 outstanding orders, after 23 deliveries to launch operator Chengdu Airlines, which put the type into service on 28 June 2016.
On 30 March 2025, COMAC announced the delivery of the first C909 aircraft to Lao Airlines. Lao Airlines thereby becomes the second operator outside China (after Indonesia's TransNusa) to take delivery of this type.
==== Executing orders ====
The following table is current as of 15 March 2025. Note that the numbers listed in the table have been obtained by cross-referencing the two web-based sources cited in the footnotes. Also note that the numbers listed are for the initial annual deliveries to (non-COMAC) commercial operators and do not necessarily reflect the number of airframes currently operated by each listed operator; as a result, the total number delivered may exceed the total number of airframes cited in the original contracts.
Reported Orders
An as-yet-unnamed Indonesian airline is reported to be planning to operate an all-ARJ21 fleet of 60 aircraft.
== Specifications ==
Notes: Data are provided for reference only. STD = Standard Range, ER = Extended Range
Sources: ARJ21 Series, ICAS
== See also ==
Comac C919
Comac C929
Comac C939
Aircraft of comparable role, configuration, and era
Antonov An-148
Sukhoi Superjet 100
Bombardier CRJ700 series
Embraer E-Jets
Fokker 100
Boeing 717
Mitsubishi Regional Jet
Related lists
List of jet airliners
List of airliners
List of Chinese aircraft
== References ==
== External links ==
ACAC Manufacturer of ARJ21
Toh, Mavis (27 August 2015). "Comac working toward November ARJ21 delivery". Flightglobal.
Govindasamy, Siva; Miller, Matthew (21 October 2015). "China-made regional jet set for delivery, but no U.S. certification". Reuters.
"ACAC Selection Of GE's CF34 Engine". |
Combustion | Combustion, or burning, is a high-temperature exothermic redox chemical reaction between a fuel (the reductant) and an oxidant, usually atmospheric oxygen, that produces oxidized, often gaseous products, in a mixture termed as smoke. Combustion does not always result in fire, because a flame is only visible when substances undergoing combustion vaporize, but when it does, a flame is a characteristic indicator of the reaction. While activation energy must be supplied to initiate combustion (e.g., using a lit match to light a fire), the heat from a flame may provide enough energy to make the reaction self-sustaining. The study of combustion is known as combustion science.
Combustion is often a complicated sequence of elementary radical reactions. Solid fuels, such as wood and coal, first undergo endothermic pyrolysis to produce gaseous fuels whose combustion then supplies the heat required to produce more of them. Combustion is often hot enough that incandescent light in the form of either glowing or a flame is produced. A simple example can be seen in the combustion of hydrogen and oxygen into water vapor, a reaction which is commonly used to fuel rocket engines. This reaction releases 242 kJ/mol of heat and reduces the enthalpy accordingly (at constant temperature and pressure):
2 H2(g) + O2(g) → 2 H2O(g)
Uncatalyzed combustion in air requires relatively high temperatures. Complete combustion is stoichiometric concerning the fuel, where there is no remaining fuel, and ideally, no residual oxidant. Thermodynamically, the chemical equilibrium of combustion in air is overwhelmingly on the side of the products. However, complete combustion is almost impossible to achieve, since the chemical equilibrium is not necessarily reached, or may contain unburnt products such as carbon monoxide, hydrogen and even carbon (soot or ash). Thus, the produced smoke is usually toxic and contains unburned or partially oxidized products. Any combustion at high temperatures in atmospheric air, which is 78 percent nitrogen, will also create small amounts of several nitrogen oxides, commonly referred to as NOx, since the combustion of nitrogen is thermodynamically favored at high, but not low temperatures. Since burning is rarely clean, fuel gas cleaning or catalytic converters may be required by law.
Fires occur naturally, ignited by lightning strikes or by volcanic products. Combustion (fire) was the first controlled chemical reaction discovered by humans, in the form of campfires and bonfires, and continues to be the main method to produce energy for humanity. Usually, the fuel is carbon, hydrocarbons, or more complicated mixtures such as wood that contain partially oxidized hydrocarbons. The thermal energy produced from the combustion of either fossil fuels such as coal or oil, or from renewable fuels such as firewood, is harvested for diverse uses such as cooking, production of electricity or industrial or domestic heating. Combustion is also currently the only reaction used to power rockets. Combustion is also used to destroy (incinerate) waste, both nonhazardous and hazardous.
Oxidants for combustion have high oxidation potential and include atmospheric or pure oxygen, chlorine, fluorine, chlorine trifluoride, nitrous oxide and nitric acid. For instance, hydrogen burns in chlorine to form hydrogen chloride with the liberation of heat and light characteristic of combustion. Although usually not catalyzed, combustion can be catalyzed by platinum or vanadium, as in the contact process.
== Types ==
=== Complete and incomplete ===
==== Complete ====
In complete combustion, the reactant burns in oxygen and produces a limited number of products. When a hydrocarbon burns in oxygen, the reaction will primarily yield carbon dioxide and water. When elements are burned, the products are primarily the most common oxides: carbon will yield carbon dioxide, sulfur will yield sulfur dioxide, and iron will yield iron(III) oxide. Nitrogen is not considered combustible when oxygen is the oxidant, although small amounts of various nitrogen oxides (commonly designated NOx species) form when air is the oxidant.
Combustion is not necessarily favorable to the maximum degree of oxidation, and it can be temperature-dependent. For example, sulfur trioxide is not produced quantitatively by the combustion of sulfur. NOx species appear in significant amounts above about 2,800 °F (1,540 °C), and more is produced at higher temperatures. The amount of NOx is also a function of oxygen excess.
In most industrial applications and in fires, air is the source of oxygen (O2). In air, each mole of oxygen is mixed with approximately 3.71 mol of nitrogen. Nitrogen does not take part in combustion, but at high temperatures some nitrogen will be converted to NOx (mostly NO, with much smaller amounts of NO2). On the other hand, when there is insufficient oxygen to combust the fuel completely, some fuel carbon is converted to carbon monoxide and some of the hydrogen remains unreacted. A complete set of equations for the combustion of a hydrocarbon in air therefore requires an additional calculation for the distribution of oxygen between the carbon and hydrogen in the fuel.
The amount of air required for complete combustion is known as the "theoretical air" or "stoichiometric air". The amount of air above this value actually needed for optimal combustion is known as the "excess air", and can vary from 5% for a natural gas boiler, to 40% for anthracite coal, to 300% for a gas turbine.
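The theoretical- and excess-air figures above can be illustrated numerically. The sketch below is a simplified Python illustration (function names and the 20.95% O2 assumption for dry air are illustrative, not from the source):

```python
# Sketch of "theoretical" vs. "excess" air, using the figures from the text.
# Assumes dry air with 20.95% O2 by volume; function names are illustrative.

O2_FRACTION = 0.2095  # volume fraction of O2 in dry air

def theoretical_air(x: int, y: int) -> float:
    """Moles of air needed to burn 1 mol of CxHy completely."""
    o2_needed = x + y / 4  # CxHy + (x + y/4) O2 -> x CO2 + (y/2) H2O
    return o2_needed / O2_FRACTION

def total_air(x: int, y: int, excess_pct: float) -> float:
    """Theoretical air plus the stated percentage of excess air."""
    return theoretical_air(x, y) * (1 + excess_pct / 100)

# Methane (CH4) needs 2 mol O2, i.e. about 9.55 mol of air per mol of fuel;
# a natural gas boiler running 5% excess air would use about 10.02 mol.
print(round(theoretical_air(1, 4), 2))
print(round(total_air(1, 4, 5), 2))
```

The same `total_air` call with `excess_pct=40` or `excess_pct=300` gives the anthracite and gas-turbine cases mentioned above.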
==== Incomplete ====
Incomplete combustion will occur when there is not enough oxygen to allow the fuel to react completely to produce carbon dioxide and water. It also happens when the combustion is quenched by a heat sink, such as a solid surface or flame trap. As is the case with complete combustion, water is produced by incomplete combustion; however, carbon and carbon monoxide are produced instead of carbon dioxide.
For most fuels, such as diesel oil, coal, or wood, pyrolysis occurs before combustion. In incomplete combustion, products of pyrolysis remain unburnt and contaminate the smoke with noxious particulate matter and gases. Partially oxidized compounds are also a concern; partial oxidation of ethanol can produce harmful acetaldehyde, and carbon can produce toxic carbon monoxide.
The designs of combustion devices can improve the quality of combustion, such as burners and internal combustion engines. Further improvements are achievable by catalytic after-burning devices (such as catalytic converters) or by the simple partial return of the exhaust gases into the combustion process. Such devices are required by environmental legislation for cars in most countries. They may be necessary to enable large combustion devices, such as thermal power stations, to reach legal emission standards.
The degree of combustion can be measured and analyzed with test equipment. HVAC contractors, firefighters and engineers use combustion analyzers to test the efficiency of a burner during the combustion process. Also, the efficiency of an internal combustion engine can be measured in this way, and some U.S. states and local municipalities use combustion analysis to define and rate the efficiency of vehicles on the road today.
Carbon monoxide is one of the products from incomplete combustion. The formation of carbon monoxide produces less heat than formation of carbon dioxide so complete combustion is greatly preferred especially as carbon monoxide is a poisonous gas. When breathed, carbon monoxide takes the place of oxygen and combines with some of the hemoglobin in the blood, rendering it unable to transport oxygen.
==== Problems associated with incomplete combustion ====
===== Environmental problems =====
The nitrogen and sulfur oxides produced by combustion combine with water and oxygen in the atmosphere, creating nitric and sulfuric acids, which return to Earth's surface as acid deposition, or "acid rain." Acid deposition harms aquatic organisms and kills trees. By making certain nutrients, such as calcium and phosphorus, less available to plants, it reduces the productivity of ecosystems and farms. An additional problem associated with nitrogen oxides is that they, along with hydrocarbon pollutants, contribute to the formation of ground-level ozone, a major component of smog.
===== Human health problems =====
Breathing carbon monoxide causes headache, dizziness, vomiting, and nausea. If carbon monoxide levels are high enough, humans become unconscious or die. Exposure to moderate and high levels of carbon monoxide over long periods is positively correlated with the risk of heart disease. People who survive severe carbon monoxide poisoning may suffer long-term health problems. Carbon monoxide from the air is absorbed in the lungs and binds with the hemoglobin in red blood cells, reducing their capacity to carry oxygen throughout the body.
=== Smoldering ===
Smoldering is the slow, low-temperature, flameless form of combustion, sustained by the heat evolved when oxygen directly attacks the surface of a condensed-phase fuel. It is a typically incomplete combustion reaction. Solid materials that can sustain a smoldering reaction include coal, cellulose, wood, cotton, tobacco, peat, duff, humus, synthetic foams, charring polymers (including polyurethane foam) and dust. Common examples of smoldering phenomena are the initiation of residential fires on upholstered furniture by weak heat sources (e.g., a cigarette, a short-circuited wire) and the persistent combustion of biomass behind the flaming fronts of wildfires.
=== Spontaneous ===
Spontaneous combustion is a type of combustion that occurs by self-heating (increase in temperature due to exothermic internal reactions), followed by thermal runaway (self-heating which rapidly accelerates to high temperatures) and finally, ignition.
For example, phosphorus self-ignites at room temperature without the application of heat. Organic materials undergoing bacterial composting can generate enough heat to reach the point of combustion.
=== Turbulent ===
Combustion resulting in a turbulent flame is the most used for industrial applications (e.g. gas turbines, gasoline engines, etc.) because the turbulence helps the mixing process between the fuel and oxidizer.
=== Micro-gravity ===
The term 'micro' gravity refers to a gravitational state that is 'low' (i.e., 'micro' in the sense of 'small', not necessarily a millionth of Earth's normal gravity) such that the influence of buoyancy on physical processes may be considered small relative to other flow processes that would be present at normal gravity. In such an environment, thermal and flow transport dynamics can behave quite differently than under normal gravity (e.g., a candle's flame takes the shape of a sphere). Microgravity combustion research contributes to the understanding of a wide variety of aspects relevant both to the environment of a spacecraft (e.g., fire dynamics relevant to crew safety on the International Space Station) and to terrestrial (Earth-based) conditions (e.g., droplet combustion dynamics to assist in developing new fuel blends for improved combustion, materials fabrication processes, thermal management of electronic systems, and multiphase flow boiling dynamics).
=== Micro-combustion ===
Combustion processes that happen in very small volumes are considered micro-combustion. The high surface-to-volume ratio increases specific heat loss. Quenching distance plays a vital role in stabilizing the flame in such combustion chambers.
== Chemical equations ==
=== Stoichiometric combustion of a hydrocarbon in oxygen ===
Generally, the chemical equation for stoichiometric combustion of a hydrocarbon in oxygen is:
CxHy + (x + y/4) O2 → x CO2 + (y/2) H2O
For example, the stoichiometric combustion of methane in oxygen is:
CH4 + 2 O2 → CO2 + 2 H2O
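The general CxHy formula can be checked numerically; the sketch below (illustrative Python, using exact rational arithmetic) reproduces the methane coefficients above:

```python
from fractions import Fraction

def stoich_coeffs(x: int, y: int) -> dict:
    """Balanced coefficients for CxHy + (x + y/4) O2 -> x CO2 + (y/2) H2O."""
    return {
        "O2": Fraction(x) + Fraction(y, 4),
        "CO2": Fraction(x),
        "H2O": Fraction(y, 2),
    }

# Methane CH4: 2 O2 -> 1 CO2 + 2 H2O (matches the equation above).
print(stoich_coeffs(1, 4))
# Propane C3H8: 5 O2 -> 3 CO2 + 4 H2O.
print(stoich_coeffs(3, 8))
```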
=== Stoichiometric combustion of a hydrocarbon in air ===
If the stoichiometric combustion takes place using air as the oxygen source, the nitrogen present in the air (Atmosphere of Earth) can be added to the equation (although it does not react) to show the stoichiometric composition of the fuel in air and the composition of the resultant flue gas. Treating all non-oxygen components in air as nitrogen gives a 'nitrogen' to oxygen ratio of 3.77, i.e. (100% − O2%) / O2% where O2% is 20.95% vol:
CxHy + z O2 + 3.77z N2 → x CO2 + (y/2) H2O + 3.77z N2
where z = x + y/4.
For example, the stoichiometric combustion of methane in air is:
CH4 + 2 O2 + 7.54 N2 → CO2 + 2 H2O + 7.54 N2
The stoichiometric composition of methane in air is 1 / (1 + 2 + 7.54) = 9.49% vol.
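The 9.49% figure follows from the mole counts in the equation above (1 mol fuel, 2 mol O2, 7.54 mol N2). A small Python check, with illustrative names:

```python
def fuel_fraction(x: int, y: int, n2_per_o2: float = 3.77) -> float:
    """Volume fraction of CxHy in a stoichiometric fuel-air mixture."""
    o2 = x + y / 4            # O2 needed per mol of fuel
    n2 = n2_per_o2 * o2       # nitrogen carried in with that O2
    return 1 / (1 + o2 + n2)  # fuel / (fuel + O2 + N2)

# Methane: 1 / (1 + 2 + 7.54) = ~9.49% by volume.
print(round(100 * fuel_fraction(1, 4), 2))
```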
The stoichiometric combustion reaction for CαHβOγ in air:
CαHβOγ + (α + β/4 − γ/2) (O2 + 3.77 N2) → α CO2 + (β/2) H2O + 3.77 (α + β/4 − γ/2) N2
The stoichiometric combustion reaction for CαHβOγSδ:
CαHβOγSδ + (α + β/4 − γ/2 + δ) (O2 + 3.77 N2) → α CO2 + (β/2) H2O + δ SO2 + 3.77 (α + β/4 − γ/2 + δ) N2
The stoichiometric combustion reaction for CαHβOγNδSε:
{\displaystyle {\ce {C_{\mathit {\alpha }}H_{\mathit {\beta }}O_{\mathit {\gamma }}N_{\mathit {\delta }}S_{\mathit {\epsilon }}}}+\left(\alpha +{\frac {\beta }{4}}-{\frac {\gamma }{2}}+\epsilon \right)\left({\ce {O_{2}}}+3.77{\ce {N_{2}}}\right)\longrightarrow \alpha {\ce {CO_{2}}}+{\frac {\beta }{2}}{\ce {H_{2}O}}+\epsilon {\ce {SO_{2}}}+\left(3.77\left(\alpha +{\frac {\beta }{4}}-{\frac {\gamma }{2}}+\epsilon \right)+{\frac {\delta }{2}}\right){\ce {N_{2}}}}
The stoichiometric combustion reaction for CαHβOγFδ:
{\displaystyle {\ce {C_{\mathit {\alpha }}H_{\mathit {\beta }}O_{\mathit {\gamma }}F_{\mathit {\delta }}}}+\left(\alpha +{\frac {\beta -\delta }{4}}-{\frac {\gamma }{2}}\right)\left({\ce {O_{2}}}+3.77{\ce {N_{2}}}\right)\longrightarrow \alpha {\ce {CO_{2}}}+{\frac {\beta -\delta }{2}}{\ce {H_{2}O}}+\delta {\ce {HF}}+3.77\left(\alpha +{\frac {\beta -\delta }{4}}-{\frac {\gamma }{2}}\right){\ce {N_{2}}}}
=== Trace combustion products ===
Various other substances begin to appear in significant amounts in combustion products when the flame temperature is above about 1600 K. When excess air is used, nitrogen may oxidize to NO and, to a much lesser extent, to NO2. CO forms by disproportionation of CO2, and H2 and OH form by disproportionation of H2O.
For example, when 1 mol of propane is burned with 28.6 mol of air (120% of the stoichiometric amount), the combustion products contain 3.3% O2. At 1400 K, the equilibrium combustion products contain 0.03% NO and 0.002% OH. At 1800 K, the combustion products contain 0.17% NO, 0.05% OH, 0.01% CO, and 0.004% H2.
Diesel engines are run with an excess of oxygen to combust small particles that tend to form with only a stoichiometric amount of oxygen, necessarily producing nitrogen oxide emissions. Both the United States and European Union enforce limits to vehicle nitrogen oxide emissions, which necessitate the use of special catalytic converters or treatment of the exhaust with urea (see Diesel exhaust fluid).
=== Incomplete combustion of a hydrocarbon in oxygen ===
The incomplete (partial) combustion of a hydrocarbon with oxygen produces a gas mixture containing mainly CO2, CO, H2O, and H2. Such gas mixtures are commonly prepared for use as protective atmospheres for the heat-treatment of metals and for gas carburizing. The general reaction equation for incomplete combustion of one mole of a hydrocarbon in oxygen is:
{\displaystyle {\ce {{\underset {fuel}{C_{\mathit {x}}H_{\mathit {y}}}}+{\underset {oxygen}{{\mathit {z}}O2}}->{\underset {carbon\ dioxide}{{\mathit {a}}CO2}}+{\underset {carbon\ monoxide}{{\mathit {b}}CO}}+{\underset {water}{{\mathit {c}}H2O}}+{\underset {hydrogen}{{\mathit {d}}H2}}}}}
When z falls below roughly 50% of the stoichiometric value, CH4 can become an important combustion product; when z falls below roughly 35% of the stoichiometric value, elemental carbon may become stable.
The products of incomplete combustion can be calculated with the aid of a material balance, together with the assumption that the combustion products reach equilibrium. For example, in the combustion of one mole of propane (C3H8) with four moles of O2, seven moles of combustion gas are formed, and z is 80% of the stoichiometric value. The three elemental balance equations are:
Carbon: {\displaystyle a+b=3}
Hydrogen: {\displaystyle 2c+2d=8}
Oxygen: {\displaystyle 2a+b+c=8}
These three equations are insufficient in themselves to calculate the combustion gas composition.
However, at the equilibrium position, the water-gas shift reaction gives another equation:
{\displaystyle {\ce {CO + H2O -> CO2 + H2}}}; {\displaystyle K_{eq}={\frac {a\times d}{b\times c}}}
For example, at 1200 K the value of Keq is 0.728. Solving, the combustion gas consists of 42.4% H2O, 29.0% CO2, 14.7% H2, and 13.9% CO. Carbon becomes a stable phase at 1200 K and 1 atm pressure when z is less than 30% of the stoichiometric value, at which point the combustion products contain more than 98% H2 and CO and about 0.5% CH4.
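The propane example above can be worked through explicitly. Substituting the element balances into the equilibrium expression reduces the system to a single quadratic, which reproduces the quoted gas composition to rounding; this is a sketch of that algebra, not a general equilibrium solver.

```python
import math

# Incomplete combustion of 1 mol C3H8 with 4 mol O2 (80% of stoichiometric).
# Balances: a + b = 3, 2c + 2d = 8, 2a + b + c = 8, plus the water-gas
# shift equilibrium Keq = (a*d)/(b*c), with Keq = 0.728 at 1200 K.
# Substituting b = 3 - a, c = 5 - a, d = a - 1 gives a quadratic in a:
# (1 - K) a^2 + (8K - 1) a - 15K = 0.

K = 0.728
A, B, C = 1 - K, 8 * K - 1, -15 * K
a = (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)   # positive root
b, c, d = 3 - a, 5 - a, a - 1

total = a + b + c + d   # exactly 7 mol of combustion gas
for name, n in [("H2O", c), ("CO2", a), ("H2", d), ("CO", b)]:
    print(f"{name}: {100 * n / total:.1f} vol%")
```

The printed fractions come out near the article's 42.4% H2O, 29.0% CO2, 14.7% H2, and 13.9% CO (small differences are rounding).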
== Fuels ==
Substances or materials which undergo combustion are called fuels. The most common examples are natural gas, propane, kerosene, diesel, petrol, charcoal, coal, wood, etc.
=== Liquid fuels ===
Combustion of a liquid fuel in an oxidizing atmosphere actually happens in the gas phase. It is the vapor that burns, not the liquid. Therefore, a liquid will normally catch fire only above a certain temperature: its flash point. The flash point of liquid fuel is the lowest temperature at which it can form an ignitable mix with air. It is the minimum temperature at which there is enough evaporated fuel in the air to start combustion.
=== Gaseous fuels ===
Combustion of gaseous fuels may occur through one of four distinctive types of burning: diffusion flame, premixed flame, autoignitive reaction front, or as a detonation. The type of burning that actually occurs depends on the degree to which the fuel and oxidizer are mixed prior to heating: for example, a diffusion flame is formed if the fuel and oxidizer are separated initially, whereas a premixed flame is formed otherwise. Similarly, the type of burning also depends on the pressure: a detonation, for example, is an autoignitive reaction front coupled to a strong shock wave giving it its characteristic high-pressure peak and high detonation velocity.
=== Solid fuels ===
The act of combustion consists of three relatively distinct but overlapping phases:
Preheating phase, when the unburned fuel is heated up to its flash point and then fire point. Flammable gases start being evolved in a process similar to dry distillation.
Distillation phase or gaseous phase, when the mix of evolved flammable gases with oxygen is ignited. Energy is produced in the form of heat and light. Flames are often visible. Heat transfer from the combustion to the solid maintains the evolution of flammable vapours.
Charcoal phase or solid phase, when the output of flammable gases from the material is too low for the persistent presence of flame and the charred fuel does not burn rapidly and just glows and later only smoulders.
== Combustion management ==
Efficient process heating requires recovery of the largest possible part of a fuel's heat of combustion into the material being processed. There are many avenues of loss in the operation of a heating process. Typically, the dominant loss is sensible heat leaving with the offgas (i.e., the flue gas). The temperature and quantity of offgas indicate its heat content (enthalpy), so keeping its quantity low minimizes heat loss.
In a perfect furnace, the combustion air flow would be matched to the fuel flow to give each fuel molecule the exact amount of oxygen needed to cause complete combustion. However, in the real world, combustion does not proceed in a perfect manner. Unburned fuel (usually CO and H2) discharged from the system represents a heating value loss (as well as a safety hazard). Since combustibles are undesirable in the offgas, while the presence of unreacted oxygen there presents minimal safety and environmental concerns, the first principle of combustion management is to provide more oxygen than is theoretically needed to ensure that all the fuel burns. For methane (CH4) combustion, for example, slightly more than two molecules of oxygen are required.
The second principle of combustion management, however, is to not use too much oxygen. The correct amount of oxygen requires three types of measurement: first, active control of air and fuel flow; second, offgas oxygen measurement; and third, measurement of offgas combustibles. For each heating process, there exists an optimum condition of minimal offgas heat loss with acceptable levels of combustibles concentration. Minimizing excess oxygen pays an additional benefit: for a given offgas temperature, the NOx level is lowest when excess oxygen is kept lowest.
Adherence to these two principles is furthered by making material and heat balances on the combustion process. The material balance directly relates the air/fuel ratio to the percentage of O2 in the combustion gas. The heat balance relates the heat available for the charge to the overall net heat produced by fuel combustion. Additional material and heat balances can be made to quantify the thermal advantage from preheating the combustion air, or enriching it in oxygen.
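The material balance mentioned above can be illustrated for methane. This is a minimal sketch under assumed simplifications (complete combustion, air modelled as O2 + 3.77 N2, dry-basis flue-gas analysis); the function name is my own.

```python
# Material balance relating percent excess combustion air to the O2
# fraction in the dry flue gas, for complete combustion of methane:
# CH4 + 2*lam O2 + 2*lam*3.77 N2 -> CO2 + 2 H2O + 2(lam-1) O2 + N2.

N2_PER_O2 = 3.77

def dry_flue_o2_pct(excess_air_pct):
    """Dry-basis O2 vol% in flue gas for CH4 burned with excess air."""
    lam = 1 + excess_air_pct / 100       # air ratio (actual/stoichiometric)
    o2_fed = 2 * lam                     # stoichiometric O2 demand of CH4 is 2
    n2 = o2_fed * N2_PER_O2
    co2, o2_left = 1.0, o2_fed - 2       # the 2 H2O are removed (dry basis)
    dry_total = co2 + o2_left + n2
    return 100 * o2_left / dry_total

for excess in (0, 15, 30):
    print(f"{excess:>3}% excess air -> {dry_flue_o2_pct(excess):.2f}% O2 (dry)")
```

For example, 15 percent excess air leaves roughly 3% O2 in the dry flue gas, which is the kind of reading an offgas oxygen measurement would return.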
== Reaction mechanism ==
Combustion in oxygen is a chain reaction in which many distinct radical intermediates participate. The high energy required for initiation is explained by the unusual structure of the dioxygen molecule. The lowest-energy configuration of the dioxygen molecule is a stable, relatively unreactive diradical in a triplet spin state. Bonding can be described with three bonding electron pairs and two antibonding electrons, with spins aligned, such that the molecule has nonzero total angular momentum. Most fuels, on the other hand, are in a singlet state, with paired spins and zero total angular momentum. Interaction between the two is quantum mechanically a "forbidden transition", i.e. possible with a very low probability. To initiate combustion, energy is required to force dioxygen into a spin-paired state, or singlet oxygen. This intermediate is extremely reactive. The energy is supplied as heat, and the reaction then produces additional heat, which allows it to continue.
Combustion of hydrocarbons is thought to be initiated by hydrogen atom abstraction (not proton abstraction) from the fuel to oxygen, to give a hydroperoxide radical (HOO). This reacts further to give hydroperoxides, which break up to give hydroxyl radicals. There are a great variety of these processes that produce fuel radicals and oxidizing radicals. Oxidizing species include singlet oxygen, hydroxyl, monatomic oxygen, and hydroperoxyl. Such intermediates are short-lived and cannot be isolated. However, non-radical intermediates are stable and are produced in incomplete combustion. An example is acetaldehyde produced in the combustion of ethanol. An intermediate in the combustion of carbon and hydrocarbons, carbon monoxide, is of special importance because it is a poisonous gas, but also economically useful for the production of syngas.
Solid and heavy liquid fuels also undergo a great number of pyrolysis reactions that give more easily oxidized, gaseous fuels. These reactions are endothermic and require constant energy input from the ongoing combustion reactions. A lack of oxygen or other improperly designed conditions result in these noxious and carcinogenic pyrolysis products being emitted as thick, black smoke.
The rate of combustion is the amount of a material that undergoes combustion over a period of time. It can be expressed in grams per second (g/s) or kilograms per second (kg/s).
Detailed descriptions of combustion processes, from the chemical kinetics perspective, require the formulation of large and intricate webs of elementary reactions. For instance, combustion of hydrocarbon fuels typically involves hundreds of chemical species reacting according to thousands of reactions.
The inclusion of such mechanisms within computational flow solvers remains a challenging task, mainly in two respects. First, the number of degrees of freedom (proportional to the number of chemical species) can be dramatically large; second, the source term due to reactions introduces widely disparate time scales, which makes the whole dynamical system stiff. As a result, the direct numerical simulation of turbulent reactive flows with heavy fuels soon becomes intractable even for modern supercomputers.
Therefore, many methodologies have been devised for reducing the complexity of combustion mechanisms while preserving their essential behavior. Examples include:
The Relaxation Redistribution Method (RRM)
The Intrinsic Low-Dimensional Manifold (ILDM) approach and further developments
The invariant-constrained equilibrium edge preimage curve method
A few variational approaches
The Computational Singular Perturbation (CSP) method and further developments
The Rate-Controlled Constrained Equilibrium (RCCE) and Quasi-Equilibrium Manifold (QEM) approach
The G-Scheme
The Method of Invariant Grids (MIG)
=== Kinetic modelling ===
Kinetic modelling can provide insight into the reaction mechanisms of thermal decomposition during the combustion of different materials, using, for instance, thermogravimetric analysis.
== Temperature ==
Assuming perfect combustion conditions, such as complete combustion under adiabatic conditions (i.e., no heat loss or gain), the adiabatic combustion temperature can be determined. The formula that yields this temperature is based on the first law of thermodynamics and takes note of the fact that the heat of combustion is used entirely for heating the fuel, the combustion air or oxygen, and the combustion product gases (commonly referred to as the flue gas).
In the case of fossil fuels burnt in air, the combustion temperature depends on all of the following:
the heating value;
the stoichiometric air-to-fuel ratio {\displaystyle \lambda };
the specific heat capacity of fuel and air;
the air and fuel inlet temperatures.
The adiabatic combustion temperature (also known as the adiabatic flame temperature) increases for higher heating values and inlet air and fuel temperatures and for stoichiometric air ratios approaching one.
Most commonly, the adiabatic combustion temperatures for coals are around 2,200 °C (3,992 °F) (for inlet air and fuel at ambient temperatures and for {\displaystyle \lambda =1.0}), around 2,150 °C (3,902 °F) for oil and 2,000 °C (3,632 °F) for natural gas.
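A rough first-law estimate of the adiabatic flame temperature can be made by dividing the heating value by the flue-gas heat capacity. This is a deliberately crude sketch with assumed round-number property values (LHV, air-to-fuel ratio, mean cp); real calculations use temperature-dependent heat capacities and account for dissociation, which lowers the result toward the quoted figures.

```python
# Crude adiabatic flame temperature estimate for methane in
# stoichiometric air, assuming a constant mean flue-gas cp.
# All property values below are assumed approximations.

LHV_CH4 = 50.0e6     # J per kg fuel (lower heating value, approx.)
AFR_STOICH = 17.2    # kg air per kg CH4 (approx. stoichiometric ratio)
CP_FLUE = 1300.0     # J/(kg K), assumed mean cp of hot flue gas
T_IN = 298.0         # K, ambient inlet temperature

m_flue = 1.0 + AFR_STOICH            # kg flue gas per kg fuel
dT = LHV_CH4 / (m_flue * CP_FLUE)    # first-law temperature rise
t_ad = T_IN + dT
print(f"estimated adiabatic flame temperature ~ {t_ad - 273:.0f} C")
```

The estimate lands in the low-2,000s °C, consistent in magnitude with the natural-gas value above; neglecting dissociation is why it comes out on the high side.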
In industrial fired heaters, power station steam generators, and large gas-fired turbines, the more common way of expressing the usage of more than the stoichiometric combustion air is percent excess combustion air. For example, excess combustion air of 15 percent means that 15 percent more than the required stoichiometric air is being used.
== Instabilities ==
Combustion instabilities are typically violent pressure oscillations in a combustion chamber. These pressure oscillations can be as high as 180 dB, and long-term exposure to these cyclic pressure and thermal loads reduces the life of engine components. In rockets, such as those powered by the F-1 engine in the Saturn V program, instabilities led to massive damage to the combustion chamber and surrounding components. This problem was solved by redesigning the fuel injector. In liquid rocket engines, the droplet size and distribution can be used to attenuate the instabilities. Combustion instabilities are a major concern in ground-based gas turbine engines because of NOx emissions. The tendency is to run lean, with an equivalence ratio less than 1, to reduce the combustion temperature and thus reduce the NOx emissions; however, running the combustion lean makes it very susceptible to combustion instability.
The Rayleigh Criterion is the basis for analysis of thermoacoustic combustion instability and is evaluated using the Rayleigh Index over one cycle of instability:
{\displaystyle G(x)={\frac {1}{T}}\int _{T}q'(x,t)p'(x,t)\,dt}
where q' is the heat release rate perturbation and p' is the pressure fluctuation.
When the heat release oscillations are in phase with the pressure oscillations, the Rayleigh Index is positive and the magnitude of the thermoacoustic instability is maximised. On the other hand, if the Rayleigh Index is negative, then thermoacoustic damping occurs. The Rayleigh Criterion implies that thermoacoustic instability can be optimally controlled by having heat release oscillations 180 degrees out of phase with pressure oscillations at the same frequency. This minimizes the Rayleigh Index.
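The phase relationship described above can be demonstrated numerically: with sinusoidal perturbations, the cycle-averaged product q'p' is positive when heat release and pressure are in phase and negative when they are 180 degrees out of phase. This is an illustrative sketch with assumed unit-amplitude sinusoids, not a model of a real combustor.

```python
import math

def rayleigh_index(phase, n=10000):
    """Cycle average of q'(t) * p'(t) for unit sinusoids with a phase offset,
    computed by midpoint quadrature over one period T = 1."""
    dt = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        p = math.sin(2 * math.pi * t)             # pressure fluctuation p'
        q = math.sin(2 * math.pi * t + phase)     # heat-release fluctuation q'
        total += q * p * dt
    return total

print(rayleigh_index(0.0))        # positive: in phase, instability amplified
print(rayleigh_index(math.pi))    # negative: anti-phase, thermoacoustic damping
```

At a 90-degree offset the index is near zero, the crossover between amplification and damping.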
== See also ==
== References ==
== Further reading ==
Poinsot, Thierry; Veynante, Denis (2012). Theoretical and Numerical Combustion (3rd ed.). European Centre for Research and Advanced Training in Scientific Computation. Archived from the original on 2017-09-12. Retrieved 2011-11-18.
Lackner, Maximilian; Winter, Franz; Agarwal, Avinash K., eds. (2010). Handbook of Combustion, 5 volume set. Wiley-VCH. ISBN 978-3-527-32449-1. Archived from the original on 2011-01-17. Retrieved 2010-04-29.
Baukal, Charles E., ed. (1998). Oxygen-Enhanced Combustion. CRC Press.
Glassman, Irvin; Yetter, Richard. Combustion (Fourth ed.).
Turns, Stephen (2011). An Introduction to Combustion: Concepts and Applications.
Ragland, Kenneth W; Bryden, Kenneth M. (2011). Combustion Engineering (Second ed.).
Baukal, Charles E. Jr, ed. (2013). "Industrial Combustion". The John Zink Hamworthy Combustion Handbook: Three-Volume Set (Second ed.).
Gardiner, W. C. Jr (2000). Gas-Phase Combustion Chemistry (Revised ed.). |
Commercial aviation | Commercial aviation is the part of civil aviation that involves operating aircraft for remuneration or hire, as opposed to private aviation.
== Definition ==
Commercial aviation is not a rigorously defined category. All commercial air transport and aerial work operations are regarded as commercial aviation, as well as some general aviation flights.
An aircraft operation involving the transportation of people, goods, or mail for payment or hire is referred to as commercial air transport. Both scheduled and unscheduled air transport operations are included. Operations providing specialized services such as agriculture, construction, photography, surveying, observation and patrol, search and rescue, and advertising are referred to as aerial work.
General aviation includes commercial activities such as flight instruction, aerial work, and corporate and business aviation, as well as non-commercial activities such as recreational flying.
Most commercial aviation activities require at minimum a commercial pilot licence, and some require an airline transport pilot licence (ATPL). In the US, the pilot in command of a scheduled air carrier's aircraft must hold an ATPL. In the UK, pilots must hold an ATPL before they can act as pilot in command of an aircraft with nine or more passenger seats.
Not all activities involving pilot remuneration require a commercial pilot licence. For example, in European Union Aviation Safety Agency states and the UK it is possible to become a paid flight instructor with only a private pilot licence. Nonetheless, in the UK, flight instruction is considered a commercial operation.
It is the purpose of the flight, not the aircraft or pilot, that determines whether the flight is commercial or private. For example, if a commercially licensed pilot flies a plane to visit a friend or attend a business meeting, this would be a private flight. Conversely, a private pilot could legally fly a multi-engine complex aircraft carrying passengers for non-commercial purposes (no compensation paid to the pilot, and a pro rata or larger portion of the aircraft operating expenses paid by the pilot).
== History ==
=== United States ===
==== Origins ====
Harry Bruno and Juan Trippe were early promoters of commercial aviation.
The Air Commerce Act of 1926 began to regularize commercial aviation by establishing standards, facilitation, and promotion. An Aeronautical Branch was established in the Department of Commerce with William P. MacCracken Jr. as director. To promote commercial aviation, he told town fathers that "Communities without airports would be communities without airmail."
Writing for Collier's in 1929, he noted "Commercial aviation is the first industry inspired by hero-worship and built upon heros". He cited the promotion in South America by Herbert Dargue in early 1927. After his 1927 trans-Atlantic flight, Charles Lindbergh made a tour of the contiguous United States, paid for by the Daniel Guggenheim Foundation for the Promotion of Aeronautics. From that point, commercial aviation took off:
Roads were choked on Sundays, for weeks afterward, by motorists trying to get to Lambert Field, Lindbergh's home port in Saint Louis, to buy their first air hop. Hundreds of thousands of you went aloft for the first time that summer.
The Aeronautical Branch was charged with issuing commercial pilot licenses, airworthiness certificates, and with investigating air accidents.
==== 1920s and 30s ====
Many small regional airlines operated in the 1920s in the United States. Many of them merged or were acquired late in the decade by the first developing nationwide airlines, such as Eastern Airlines, Pan Am, American Airlines, and TWA.
==== After 1945 ====
After World War II, commercial aviation grew rapidly, using mostly ex-military aircraft to transport people and cargo. The experience gained in designing heavy bombers such as the Boeing B-29 Superfortress and Avro Lancaster could be applied to designing heavy commercial aircraft. The Douglas DC-3 also made for easier and longer commercial flights. The first commercial jet airliner to fly was the British de Havilland DH.106 Comet. By 1952, the British state airline British Overseas Airways Corporation had introduced the Comet into scheduled service. While a technical achievement, the plane suffered a series of highly public failures, including the crashes of BOAC Flight 781 and South African Airways Flight 201. By the time the problems were overcome, other jet airliner designs had already taken to the skies.
=== Latin America ===
==== Pre-war ====
Latin American aviation was inspired by the major players in the industry, such as the United States, the Soviet Union, France, and Britain. In the 1910s, Brazil and Argentina were among the first Latin American countries to possess aircraft; although not all components were made locally, the aircraft were assembled locally. At that time, many individuals in Latin American countries were interested in becoming pilots, yet there were insufficient resources and funding to support and promote the aviation industry. Despite these obstacles, Argentina and the Dominican Republic made efforts to develop jet aviation rather than building and using propeller aircraft. In 1944, the Chicago Convention on International Civil Aviation, attended by all Latin American countries except Argentina, drafted the clauses of aviation law. The introduction of the F-80 jet fighter by the US in 1945 pushed the Latin American countries even further from developing an aviation industry, because recreating the sophisticated technology of the F-80 was simply too expensive.
==== Post-war ====
The Latin American Civil Aviation Commission (LACAC) was formed in December 1973, "intended to provide civil aviation authorities in the region with an adequate framework for cooperation and coordination of activities related to civil aviation". In 1976, about seven percent of the world's air traffic was logged in the Latin American and Caribbean region. This contributed to an increase in the average annual rate of air traffic. Subsequently, higher passenger load factors determined the profitability of these airlines.
According to C. Bogolasky, airline pooling agreements between Latin American airlines contributed to better financial performance of the airlines. The economic problems were related to "airline capacity regulation, regulation of non-scheduled operations, tariff enforcement, high operating costs, passenger and cargo rates."
== Business aviation ==
=== Private jet ownership ===
Private jet ownership refers to individuals or corporations owning their own aircraft. Owners are responsible for the management and maintenance of their aircraft and often employ a dedicated crew. The aircraft may be operated for personal or business use.
=== Charter flights ===
Charter flights allow individuals or groups to rent an aircraft for a specific trip, without the need for long-term commitments. Charter flights provide flexibility and convenience, as travelers can choose their own schedules and destinations.
=== Fractional ownership ===
Fractional ownership is a model that allows individuals or corporations to purchase a share of an aircraft, granting them access to a fleet of aircraft managed by a provider. Fractional owners typically pay an initial acquisition cost, followed by monthly management fees and hourly flight rates.
=== Jet card programs ===
Jet card programs are prepaid programs offered by private aviation companies, allowing customers to purchase a set number of flight hours on a specific aircraft or fleet. Jet card holders can use their hours to book flights, often with guaranteed availability and fixed hourly rates.
== Social and environmental impact ==
Air travel is a noted source of pollution, contributing about 2.4% of global CO2 emissions in 2018. Airline companies have become increasingly interested in corporate social responsibility (CSR) and environmental, social, and governance (ESG) issues.
While aviation is often taxed, most jurisdictions do not tax fuel for commercial aircraft.
In 2024, Air New Zealand, having previously set emissions reduction targets, cancelled these commitments.
== Radiation exposure ==
Exposure to ionizing radiation is higher in the upper atmosphere, and airline pilots are the fourth most exposed group of employees, with an average annual effective dose of 3 millisieverts (mSv). This is on top of the average effective dose of a typical person in the United States of 3.11 mSv from background sources, and well below the recommended limit of 20 mSv per year. Doses of less than 50 mSv over any time period are safe.
Radiation exposure is higher at higher altitudes, and higher in polar regions than in mid-latitude and equatorial regions.
== See also ==
Airliner
Flight level
Direct flight
Domestic flight
Environmental impact of aviation (including effects on climate change)
International flight
Mainline
Non-stop flight
Private aviation
== Notes ==
== References ==
== External links ==
Transport Canada Flight Test Guide – Commercial Pilot License – Aeroplane |
Communication | Communication is commonly defined as the transmission of information. Its precise definition is disputed and there are disagreements about whether unintentional or failed transmissions are included and whether communication not only transmits meaning but also creates it. Models of communication are simplified overviews of its main components and their interactions. Many models include the idea that a source uses a coding system to express information in the form of a message. The message is sent through a channel to a receiver who has to decode it to understand it. The main field of inquiry investigating communication is called communication studies.
A common way to classify communication is by whether information is exchanged between humans, members of other species, or non-living entities such as computers. For human communication, a central contrast is between verbal and non-verbal communication. Verbal communication involves the exchange of messages in linguistic form, including spoken and written messages as well as sign language. Non-verbal communication happens without the use of a linguistic system, for example, using body language, touch, and facial expressions. Another distinction is between interpersonal communication, which happens between distinct persons, and intrapersonal communication, which is communication with oneself. Communicative competence is the ability to communicate well and applies to the skills of formulating messages and understanding them.
Non-human forms of communication include animal and plant communication. Researchers in this field often refine their definition of communicative behavior by including the criteria that observable responses are present and that the participants benefit from the exchange. Animal communication is used in areas like courtship and mating, parent–offspring relations, navigation, and self-defense. Communication through chemicals is particularly important for the relatively immobile plants. For example, maple trees release so-called volatile organic compounds into the air to warn other plants of a herbivore attack. Most communication takes place between members of the same species. The reason is that its purpose is usually some form of cooperation, which is not as common between different species. Interspecies communication happens mainly in cases of symbiotic relationships. For instance, many flowers use symmetrical shapes and distinctive colors to signal to insects where nectar is located. Humans engage in interspecies communication when interacting with pets and working animals.
Human communication has a long history and how people exchange information has changed over time. These changes were usually triggered by the development of new communication technologies. Examples are the invention of writing systems, the development of mass printing, the use of radio and television, and the invention of the internet. The technological advances also led to new forms of communication, such as the exchange of data between computers.
== Definitions ==
The word communication has its root in the Latin verb communicare, which means 'to share' or 'to make common'. Communication is usually understood as the transmission of information: a message is conveyed from a sender to a receiver using some medium, such as sound, written signs, bodily movements, or electricity. Sender and receiver are often distinct individuals but it is also possible for an individual to communicate with themselves. In some cases, sender and receiver are not individuals but groups like organizations, social classes, or nations. In a different sense, the term communication refers to the message that is being communicated or to the field of inquiry studying communicational phenomena.
The precise characterization of communication is disputed. Many scholars have raised doubts that any single definition can capture the term accurately. These difficulties come from the fact that the term is applied to diverse phenomena in different contexts, often with slightly different meanings. The issue of the right definition affects the research process on many levels. This includes issues like which empirical phenomena are observed, how they are categorized, which hypotheses and laws are formulated as well as how systematic theories based on these steps are articulated.
Some definitions are broad and encompass unconscious and non-human behavior. Under a broad definition, many animals communicate within their own species and flowers communicate by signaling the location of nectar to bees through their colors and shapes. Other definitions restrict communication to conscious interactions among human beings. Some approaches focus on the use of symbols and signs while others stress the role of understanding, interaction, power, or transmission of ideas. Various characterizations see the communicator's intent to send a message as a central component. In this view, the transmission of information is not sufficient for communication if it happens unintentionally. A version of this view is given by philosopher Paul Grice, who identifies communication with actions that aim to make the recipient aware of the communicator's intention. One question in this regard is whether only successful transmissions of information should be regarded as communication. For example, distortion may interfere with and change the actual message from what was originally intended. A closely related problem is whether acts of deliberate deception constitute communication.
According to a broad definition by literary critic I. A. Richards, communication happens when one mind acts upon its environment to transmit its own experience to another mind. Another interpretation is given by communication theorists Claude Shannon and Warren Weaver, who characterize communication as a transmission of information brought about by the interaction of several components, such as a source, a message, an encoder, a channel, a decoder, and a receiver. The transmission view is rejected by transactional and constitutive views, which hold that communication is not just about the transmission of information but also about the creation of meaning. Transactional and constitutive perspectives hold that communication shapes the participant's experience by conceptualizing the world and making sense of their environment and themselves. Researchers studying animal and plant communication focus less on meaning-making. Instead, they often define communicative behavior as having other features, such as playing a beneficial role in survival and reproduction, or having an observable response.
== Models of communication ==
Models of communication are conceptual representations of the process of communication. Their goal is to provide a simplified overview of its main components. This makes it easier for researchers to formulate hypotheses, apply communication-related concepts to real-world cases, and test predictions. Due to their simplified presentation, they may lack the conceptual complexity needed for a comprehensive understanding of all the essential aspects of communication. They are usually presented visually in the form of diagrams showing the basic components and their interaction.
Models of communication are often categorized based on their intended applications and how they conceptualize communication. Some models are general in the sense that they are intended for all forms of communication. Specialized models aim to describe specific forms, such as models of mass communication.
One influential way to classify communication is to distinguish between linear transmission, interaction, and transaction models. Linear transmission models focus on how a sender transmits information to a receiver. They are linear because this flow of information only goes in a single direction. This view is rejected by interaction models, which include a feedback loop. Feedback is needed to describe many forms of communication, such as a conversation, where the listener may respond to a speaker by expressing their opinion or by asking for clarification. Interaction models represent the process as a form of two-way communication in which the communicators take turns sending and receiving messages. Transaction models further refine this picture by allowing representations of sending and responding at the same time. This modification is needed to describe how the listener can give feedback in a face-to-face conversation while the other person is talking. Examples are non-verbal feedback through body posture and facial expression. Transaction models also hold that meaning is produced during communication and does not exist independently of it.
All the early models, developed in the middle of the 20th century, are linear transmission models. Lasswell's model, for example, is based on five fundamental questions: "Who?", "Says what?", "In which channel?", "To whom?", and "With what effect?". The goal of these questions is to identify the basic components involved in the communicative process: the sender, the message, the channel, the receiver, and the effect. Lasswell's model was initially only conceived as a model of mass communication, but it has been applied to other fields as well. Some communication theorists, like Richard Braddock, have expanded it by including additional questions, like "Under what circumstances?" and "For what purpose?".
The Shannon–Weaver model is another influential linear transmission model. It is based on the idea that a source creates a message, which is then translated into a signal by a transmitter. Noise may interfere with and distort the signal. Once the signal reaches the receiver, it is translated back into a message and made available to the destination. For a landline telephone call, the person calling is the source and their telephone is the transmitter. The transmitter translates the message into an electrical signal that travels through the wire, which acts as the channel. The person taking the call is the destination and their telephone is the receiver. The Shannon–Weaver model includes an in-depth discussion of how noise can distort the signal and how successful communication can be achieved despite noise. This can happen by making the message partially redundant so that decoding is possible nonetheless. Other influential linear transmission models include Gerbner's model and Berlo's model.
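The role of redundancy in overcoming noise can be illustrated with a repetition code, one of the simplest error-correcting schemes. The sketch below is purely illustrative and not part of Shannon and Weaver's own, more general treatment: each bit of the message is sent several times, and the receiver decodes by majority vote, so occasional bit flips introduced by the channel can be corrected.

```python
import random

def encode(bits, n=3):
    """Repetition code: transmit each bit n times (adds redundancy)."""
    return [b for b in bits for _ in range(n)]

def noisy_channel(signal, flip_prob, rng):
    """Simulate channel noise by randomly flipping bits."""
    return [b ^ 1 if rng.random() < flip_prob else b for b in signal]

def decode(signal, n=3):
    """Majority vote over each group of n repeated bits."""
    return [int(sum(signal[i:i + n]) > n // 2)
            for i in range(0, len(signal), n)]

rng = random.Random(0)
message = [1, 0, 1, 1, 0, 0, 1, 0]
received = noisy_channel(encode(message), flip_prob=0.1, rng=rng)
decoded = decode(received)
```

With a 10% flip probability, a single corrupted copy within a group of three is outvoted by the two intact copies, so the decoded message usually matches the original despite the noise.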
The earliest interaction model was developed by communication theorist Wilbur Schramm. He states that communication starts when a source has an idea and expresses it in the form of a message. This process is called encoding and happens using a code, i.e. a sign system that is able to express the idea, for instance, through visual or auditory signs. The message is sent to a destination, who has to decode and interpret it to understand it. In response, they formulate their own idea, encode it into a message, and send it back as a form of feedback. Another innovation of Schramm's model is the idea that previous experience is necessary to encode and decode messages. For communication to be successful, the fields of experience of source and destination have to overlap.
The first transactional model was proposed by communication theorist Dean Barnlund in 1970. He understands communication as "the production of meaning, rather than the production of messages". Its goal is to decrease uncertainty and arrive at a shared understanding. This happens in response to external and internal cues. Decoding is the process of ascribing meaning to them and encoding consists in producing new behavioral cues as a response.
== Human ==
There are many forms of human communication. A central distinction is whether language is used, as in the contrast between verbal and non-verbal communication. A further distinction concerns whether one communicates with others or with oneself, as in the contrast between interpersonal and intrapersonal communication. Forms of human communication are also categorized by their channel or the medium used to transmit messages. The field studying human communication is known as anthroposemiotics.
=== Verbal ===
Verbal communication is the exchange of messages in linguistic form, i.e., by means of language. In colloquial usage, verbal communication is sometimes restricted to oral communication and may exclude writing and sign language. However, in academic discourse, the term is usually used in a wider sense, encompassing any form of linguistic communication, whether through speech, writing, or gestures. Some of the challenges in distinguishing verbal from non-verbal communication come from the difficulties in defining what exactly language means. Language is usually understood as a conventional system of symbols and rules used for communication. Such systems are based on a set of simple units of meaning that can be combined to express more complex ideas. The rules for combining the units into compound expressions are called grammar. Words are combined to form sentences.
One hallmark of human language, in contrast to animal communication, lies in its complexity and expressive power. Human language can be used to refer not just to concrete objects in the here-and-now but also to spatially and temporally distant objects and to abstract ideas. Humans have a natural tendency to acquire their native language in childhood. They are also able to learn other languages later in life as second languages. However, this process is less intuitive and often does not result in the same level of linguistic competence. The academic discipline studying language is called linguistics. Its subfields include semantics (the study of meaning), morphology (the study of word formation), syntax (the study of sentence structure), pragmatics (the study of language use), and phonetics (the study of basic sounds).
A central contrast among languages is between natural and artificial or constructed languages. Natural languages, like English, Spanish, and Japanese, developed naturally and for the most part unplanned in the course of history. Artificial languages, like Esperanto, Quenya, C++, and the language of first-order logic, are purposefully designed from the ground up. Most everyday verbal communication happens using natural languages. Central forms of verbal communication are speech and writing together with their counterparts of listening and reading. Spoken languages use sounds to produce signs and transmit meaning while for writing, the signs are physically inscribed on a surface. Sign languages, like American Sign Language and Nicaraguan Sign Language, are another form of verbal communication. They rely on visual means, mostly by using gestures with hands and arms, to form sentences and convey meaning.
Verbal communication serves various functions. One key function is to exchange information, i.e. an attempt by the speaker to make the audience aware of something, usually of an external event. But language can also be used to express the speaker's feelings and attitudes. A closely related role is to establish and maintain social relations with other people. Verbal communication is also utilized to coordinate one's behavior with others and influence them. In some cases, language is not employed for an external purpose but only for entertainment or personal enjoyment. Verbal communication further helps individuals conceptualize the world around them and themselves. This affects how perceptions of external events are interpreted, how things are categorized, and how ideas are organized and related to each other.
=== Non-verbal ===
Non-verbal communication is the exchange of information through non-linguistic modes, like facial expressions, gestures, and postures. However, not every form of non-verbal behavior constitutes non-verbal communication. Some theorists, like Judee Burgoon, hold that it depends on the existence of a socially shared coding system that is used to interpret the meaning of non-verbal behavior. Non-verbal communication has many functions. It frequently contains information about emotions, attitudes, personality, interpersonal relations, and private thoughts.
Non-verbal communication often happens unintentionally and unconsciously, like sweating or blushing, but there are also conscious intentional forms, like shaking hands or raising a thumb. It often happens simultaneously with verbal communication and helps optimize the exchange through emphasis and illustration or by supplying further information. Non-verbal cues can clarify the intent behind a verbal message. Using multiple modalities of communication in this way usually makes communication more effective if the messages of each modality are consistent. However, in some cases different modalities can contain conflicting messages. For example, a person may verbally agree with a statement but press their lips together, thereby indicating disagreement non-verbally.
There are many forms of non-verbal communication. They include kinesics, proxemics, haptics, paralanguage, chronemics, and physical appearance. Kinesics studies the role of bodily behavior in conveying information. It is commonly referred to as body language, even though it is, strictly speaking, not a language but rather non-verbal communication. It includes many forms, like gestures, postures, walking styles, and dance. Facial expressions, like laughing, smiling, and frowning, all belong to kinesics and are expressive and flexible forms of communication. Oculesics is another subcategory of kinesics, concerned with the eyes. It covers questions like how eye contact, gaze, blink rate, and pupil dilation form part of communication. Some kinesic patterns are inborn and involuntary, like blinking, while others are learned and voluntary, like giving a military salute.
Proxemics studies how personal space is used in communication. The distance between the speakers reflects their degree of familiarity and intimacy with each other as well as their social status. Haptics examines how information is conveyed using touching behavior, like handshakes, holding hands, kissing, or slapping. Meanings linked to haptics include care, concern, anger, and violence. For instance, handshaking is often seen as a symbol of equality and fairness, while refusing to shake hands can indicate aggressiveness. Kissing is another form often used to show affection and erotic closeness.
Paralanguage, also known as vocalics, encompasses non-verbal elements in speech that convey information. Paralanguage is often used to express the feelings and emotions that the speaker has but does not explicitly state in the verbal part of the message. It is not concerned with the words used but with how they are expressed. This includes elements like articulation, lip control, rhythm, intensity, pitch, fluency, and loudness. For example, saying something loudly and in a high pitch conveys a different meaning on the non-verbal level than whispering the same words. Paralanguage is mainly concerned with spoken language but also includes aspects of written language, like the use of colors and fonts as well as spatial arrangement in paragraphs and tables. Non-linguistic sounds may also convey information; crying indicates that an infant is distressed, and babbling conveys information about infant health and well-being.
Chronemics concerns the use of time, such as what messages are sent by being on time versus late for a meeting. The physical appearance of the communicator, such as height, weight, hair, skin color, gender, clothing, tattooing, and piercing, also carries information. Appearance is an important factor for first impressions but is more limited as a mode of communication since it is less changeable. Some forms of non-verbal communication happen using such artifacts as drums, smoke, batons, traffic lights, and flags.
Non-verbal communication can also happen through visual media like paintings and drawings. They can express what a person or an object looks like and can also convey other ideas and emotions. In some cases, this type of non-verbal communication is used in combination with verbal communication, for example, when diagrams or maps employ labels to include additional linguistic information.
Traditionally, most research focused on verbal communication. However, this paradigm began to shift in the 1950s when research interest in non-verbal communication increased and emphasized its influence. For example, many judgments about the nature and behavior of other people are based on non-verbal cues. It is further present in almost every communicative act to some extent and certain parts of it are universally understood. These considerations have prompted some communication theorists, like Ray Birdwhistell, to claim that the majority of ideas and information is conveyed this way. It has also been suggested that human communication is at its core non-verbal and that words can only acquire meaning because of non-verbal communication. The earliest forms of human communication, such as crying and babbling, are non-verbal. Some basic forms of communication happen even before birth between mother and embryo and include information about nutrition and emotions. Non-verbal communication is studied in various fields besides communication studies, like linguistics, semiotics, anthropology, and social psychology.
=== Interpersonal ===
Interpersonal communication is communication between distinct people. Its typical form is dyadic communication, i.e. between two people, but it can also refer to communication within groups. It can be planned or unplanned and occurs in many forms, like when greeting someone, during salary negotiations, or when making a phone call. Some communication theorists, like Virginia M. McDermott, understand interpersonal communication as a fuzzy concept that manifests in degrees. In this view, an exchange varies in how interpersonal it is based on several factors. It depends on how many people are present, and whether it happens face-to-face rather than through telephone or email. A further factor concerns the relation between the communicators: group communication and mass communication are less typical forms of interpersonal communication and some theorists treat them as distinct types.
Interpersonal communication can be synchronous or asynchronous. For asynchronous communication, the parties take turns in sending and receiving messages. This occurs when exchanging letters or emails. For synchronous communication, both parties send messages at the same time. This happens when one person is talking while the other person sends non-verbal messages in response signaling whether they agree with what is being said. Some communication theorists, like Sarah Trenholm and Arthur Jensen, distinguish between content messages and relational messages. Content messages express the speaker's feelings toward the topic of discussion. Relational messages, on the other hand, demonstrate the speaker's feelings toward their relation with the other participants.
Various theories of the function of interpersonal communication have been proposed. Some focus on how it helps people make sense of their world and create society. Others hold that its primary purpose is to understand why other people act the way they do and to adjust one's behavior accordingly. A closely related approach is to focus on information and see interpersonal communication as an attempt to reduce uncertainty about others and external events. Other explanations understand it in terms of the needs it satisfies. This includes the needs of belonging somewhere, being included, being liked, maintaining relationships, and influencing the behavior of others. On a practical level, interpersonal communication is used to coordinate one's actions with the actions of others to get things done. Research on interpersonal communication includes topics like how people build, maintain, and dissolve relationships through communication. Other questions are why people choose one message rather than another and what effects these messages have on the communicators and their relation. A further topic is how to predict whether two people would like each other.
=== Intrapersonal ===
Intrapersonal communication is communication with oneself. In some cases this manifests externally, such as when engaged in a monologue, taking notes, highlighting a passage, and writing a diary or a shopping list. But many forms of intrapersonal communication happen internally in the form of an inner exchange with oneself, as when thinking about something or daydreaming. Closely related to intrapersonal communication is communication that takes place within an organism below the personal level, such as exchange of information between organs or cells.
Intrapersonal communication can be triggered by internal and external stimuli. It may happen in the form of articulating a phrase before expressing it externally. Other forms are to make plans for the future and to attempt to process emotions to calm oneself down in stressful situations. It can help regulate one's own mental activity and outward behavior as well as internalize cultural norms and ways of thinking. External forms of intrapersonal communication can aid one's memory. This happens, for example, when making a shopping list. Another use is to unravel difficult problems, as when solving a complex mathematical equation line by line. New knowledge can also be internalized this way, such as when repeating new vocabulary to oneself. Because of these functions, intrapersonal communication can be understood as "an exceptionally powerful and pervasive tool for thinking."
Based on its role in self-regulation, some theorists have suggested that intrapersonal communication is more basic than interpersonal communication. Young children sometimes use egocentric speech while playing in an attempt to direct their own behavior. In this view, interpersonal communication only develops later when the child moves from their early egocentric perspective to a more social perspective. A different explanation holds that interpersonal communication is more basic since it is first used by parents to regulate what their child does. Once the child has learned this, they can apply the same technique to themselves to get more control over their own behavior.
=== Channels ===
For communication to be successful, the message has to travel from the sender to the receiver. The channel is the way this is accomplished. It is not concerned with the meaning of the message but only with the technical means of how the meaning is conveyed. Channels are often understood in terms of the senses used to perceive the message, i.e. hearing, seeing, smelling, touching, and tasting. But in the widest sense, channels encompass any form of transmission, including technological means like books, cables, radio waves, telephones, or television. Naturally transmitted messages usually fade rapidly whereas some messages using artificial channels have a much longer lifespan, as in the case of books or sculptures.
The physical characteristics of a channel have an impact on the code and cues that can be used to express information. For example, typical telephone calls are restricted to the use of verbal language and paralanguage but exclude facial expressions. It is often possible to translate messages from one code into another to make them available to a different channel. An example is writing down a spoken message or expressing it using sign language.
The transmission of information can occur through multiple channels at once. For example, face-to-face communication often combines the auditory channel to convey verbal information with the visual channel to transmit non-verbal information using gestures and facial expressions. Employing multiple channels can enhance the effectiveness of communication by helping the receiver better understand the subject matter. The choice of channels often matters since the receiver's ability to understand may vary depending on the chosen channel. For instance, a teacher may decide to present some information orally and other information visually, depending on the content and the student's preferred learning style. This underlines the role of a media-adequate approach.
=== Communicative competence ===
Communicative competence is the ability to communicate effectively or to choose the appropriate communicative behavior in a given situation. It concerns what to say, when to say it, and how to say it. It further includes the ability to receive and understand messages. Competence is often contrasted with performance since competence can be present even if it is not exercised, while performance consists in the realization of this competence. However, some theorists reject a stark contrast and hold that performance is the observable part and is used to infer competence in relation to future performances.
Two central components of communicative competence are effectiveness and appropriateness. Effectiveness is the degree to which the speaker achieves their desired outcomes or the degree to which preferred alternatives are realized. This means that whether a communicative behavior is effective does not just depend on the actual outcome but also on the speaker's intention, i.e. whether this outcome was what they intended to achieve. Because of this, some theorists additionally require that the speaker be able to give an explanation of why they engaged in one behavior rather than another. Effectiveness is closely related to efficiency, the difference being that effectiveness is about achieving goals while efficiency is about using few resources (such as time, effort, and money) in the process.
Appropriateness means that the communicative behavior meets social standards and expectations. Communication theorist Brian H. Spitzberg defines it as "the perceived legitimacy or acceptability of behavior or enactments in a given context". This means that the speaker is aware of the social and cultural context in order to adapt and express the message in a way that is considered acceptable in the given situation. For example, to bid farewell to their teacher, a student may use the expression "Goodbye, sir" but not the expression "I gotta split, man", which they may use when talking to a peer. To be both effective and appropriate means to achieve one's preferred outcomes in a way that follows social standards and expectations. Some definitions of communicative competence put their main emphasis on either effectiveness or appropriateness while others combine both features.
Many additional components of communicative competence have been suggested, such as empathy, control, flexibility, sensitivity, and knowledge. It is often discussed in terms of the individual skills employed in the process, i.e. the specific behavioral components that make up communicative competence. Message production skills include speaking and writing. They are correlated with the reception skills of listening and reading. There are both verbal and non-verbal communication skills. For example, verbal communication skills involve the proper understanding of a language, including its phonology, orthography, syntax, lexicon, and semantics.
Many aspects of human life depend on successful communication, from ensuring basic necessities of survival to building and maintaining relationships. Communicative competence is a key factor regarding whether a person is able to reach their goals in social life, like having a successful career and finding a suitable spouse. Because of this, it can have a large impact on the individual's well-being. The lack of communicative competence can cause problems both on the individual and the societal level, including professional, academic, and health problems.
Barriers to effective communication can distort the message. They may result in failed communication and cause undesirable effects. This can happen if the message is poorly expressed because it uses terms with which the receiver is not familiar, or because it is not relevant to the receiver's needs, or because it contains too little or too much information. Distraction, selective perception, and lack of attention to feedback may also be responsible. Noise is another negative factor. It concerns influences that interfere with the message on its way to the receiver and distort it. Crackling sounds during a telephone call are one form of noise. Ambiguous expressions can also inhibit effective communication and make it necessary to disambiguate between possible interpretations to discern the sender's intention. These interpretations depend also on the cultural background of the participants. Significant cultural differences constitute an additional obstacle and make it more likely that messages are misinterpreted.
== Other species ==
Besides human communication, there are many other forms of communication found in the animal kingdom and among plants. They are studied in fields like biocommunication and biosemiotics. In this area, there are additional obstacles to judging whether communication has taken place between two individuals. Acoustic signals are often easy for scientists to notice and analyze, but it is more difficult to judge whether tactile or chemical changes should be understood as communicative signals rather than as other biological processes.
For this reason, researchers often use slightly altered definitions of communication to facilitate their work. A common assumption in this regard comes from evolutionary biology and holds that communication should somehow benefit the communicators in terms of natural selection. The biologists Rumsaïs Blatrix and Veronika Mayer define communication as "the exchange of information between individuals, wherein both the signaller and receiver may expect to benefit from the exchange". According to this view, the sender benefits by influencing the receiver's behavior and the receiver benefits by responding to the signal. These benefits should exist on average but not necessarily in every single case. This way, deceptive signaling can also be understood as a form of communication. One problem with the evolutionary approach is that it is often difficult to assess the impact of such behavior on natural selection. Another common pragmatic constraint is to hold that it is necessary to observe a response by the receiver following the signal when judging whether communication has occurred.
=== Animals ===
Animal communication is the process of exchanging information among animals. The field studying animal communication is called zoosemiotics. There are many parallels to human communication. One is that humans and many animals express sympathy by synchronizing their movements and postures. Nonetheless, there are also significant differences, like the fact that humans also engage in verbal communication, which uses language, while animal communication is restricted to non-verbal (i.e. non-linguistic) communication. Some theorists have tried to distinguish human from animal communication based on the claim that animal communication lacks a referential function and is thus not able to refer to external phenomena. However, various observations seem to contradict this view, such as the warning signals in response to different types of predators used by vervet monkeys, Gunnison's prairie dogs, and red squirrels. A further approach is to draw the distinction based on the complexity of human language, especially its almost limitless ability to combine basic units of meaning into more complex meaning structures. One view states that recursion sets human language apart from all non-human communicative systems. Another difference is that human communication is frequently linked to the conscious intention to send information, which is often not discernible for animal communication. Despite these differences, some theorists use the term "animal language" to refer to certain communicative patterns in animal behavior that have similarities with human language.
Animal communication can take a variety of forms, including visual, auditory, tactile, olfactory, and gustatory communication. Visual communication happens in the form of movements, gestures, facial expressions, and colors. Examples are movements seen during mating rituals, the colors of birds, and the rhythmic light of fireflies. Auditory communication takes place through vocalizations by species like birds, primates, and dogs. Auditory signals are frequently used to alert and warn. Lower-order living systems often have simple response patterns to auditory messages, reacting either by approach or avoidance. More complex response patterns are observed for higher animals, which may use different signals for different types of predators and responses. For example, some primates use one set of signals for airborne predators and another for land predators. Tactile communication occurs through touch, vibration, stroking, rubbing, and pressure. It is especially relevant for parent-young relations, courtship, social greetings, and defense. Olfactory and gustatory communication happen chemically through smells and tastes, respectively.
There are large differences between species concerning what functions communication plays, how much it is realized, and the behavior used to communicate. Common functions include the fields of courtship and mating, parent-offspring relations, social relations, navigation, self-defense, and territoriality. One part of courtship and mating consists in identifying and attracting potential mates. This can happen through various means. Grasshoppers and crickets communicate acoustically by using songs, moths rely on chemical means by releasing pheromones, and fireflies send visual messages by flashing light. For some species, the offspring depends on the parent for its survival. One central function of parent-offspring communication is to recognize each other. In some cases, the parents are also able to guide the offspring's behavior.
Social animals, like chimpanzees, bonobos, wolves, and dogs, engage in various forms of communication to express their feelings and build relations. Communication can aid navigation by helping animals move through their environment in a purposeful way, e.g. to locate food, avoid enemies, and follow other animals. In bats, this happens through echolocation, i.e. by sending auditory signals and processing the information from the echoes. Bees are another often-discussed case in this respect since they perform a type of dance to indicate to other bees where flowers are located. In regard to self-defense, communication is used to warn others and to assess whether a costly fight can be avoided. Another function of communication is to mark and claim territories used for food and mating. For example, some male birds claim a hedge or part of a meadow by using songs to keep other males away and attract females.
Two competing theories in the study of animal communication are nature theory and nurture theory. Their conflict concerns to what extent animal communication is programmed into the genes as a form of adaptation rather than learned from previous experience as a form of conditioning. To the degree that it is learned, it usually happens through imprinting, i.e. as a form of learning that only occurs in a certain phase and is then mostly irreversible.
=== Plants, fungi, and bacteria ===
Plant communication refers to plant processes involving the sending and receiving of information. The field studying plant communication is called phytosemiotics. This field poses additional difficulties for researchers since plants are different from humans and other animals in that they lack a central nervous system and have rigid cell walls. These walls restrict movement and usually prevent plants from sending and receiving signals that depend on rapid movement. However, there are some similarities since plants face many of the same challenges as animals. For example, they need to find resources, avoid predators and pathogens, find mates, and ensure that their offspring survive. Many of the evolutionary responses to these challenges are analogous to those in animals but are implemented using different means. One crucial difference is that chemical communication is much more prominent in the plant kingdom in contrast to the importance of visual and auditory communication for animals.
In plants, the term behavior is usually not defined in terms of physical movement, as is the case for animals, but as a biochemical response to a stimulus. This response has to be short relative to the plant's lifespan. Communication is a special form of behavior that involves conveying information from a sender to a receiver. It is distinguished from other types of behavior, like defensive reactions and mere sensing. Like in the field of animal communication, plant communication researchers often require as additional criteria that there is some form of response in the receiver and that the communicative behavior is beneficial to sender and receiver. Biologist Richard Karban distinguishes three steps of plant communication: the emission of a cue by a sender, the perception of the cue by a receiver, and the receiver's response. For plant communication, it is not relevant to what extent the emission of a cue is intentional. However, it should be possible for the receiver to ignore the signal. This criterion can be used to distinguish a response to a signal from a defense mechanism against an unwanted change like intense heat.
Plant communication happens in various forms. It includes communication within plants, i.e. within plant cells and between plant cells, between plants of the same or related species, and between plants and non-plant organisms, especially in the root zone. A prominent form of communication is airborne and happens through volatile organic compounds (VOCs). For example, maple trees release VOCs when they are attacked by a herbivore to warn neighboring plants, which then react accordingly by adjusting their defenses. Another form of plant-to-plant communication happens through mycorrhizal fungi. These fungi form underground networks, colloquially referred to as the Wood-Wide Web, and connect the roots of different plants. The plants use the network to send messages to each other, specifically to warn other plants of a pest attack and to help prepare their defenses.
Communication can also be observed for fungi and bacteria. Some fungal species communicate by releasing pheromones into the external environment. For instance, they are used to promote sexual interaction in several aquatic fungal species. One form of communication between bacteria is called quorum sensing. It happens by releasing hormone-like molecules, which other bacteria detect and respond to. This process is used to monitor the environment for other bacteria and to coordinate population-wide responses, for example, by sensing the density of bacteria and regulating gene expression accordingly. Other possible responses include the induction of bioluminescence and the formation of biofilms.
=== Interspecies ===
Most communication happens between members within a species as intraspecies communication. This is because the purpose of communication is usually some form of cooperation. Cooperation happens mostly within a species while different species are often in conflict with each other by competing over resources. However, there are also some forms of interspecies communication. This occurs especially for symbiotic relations and significantly less for parasitic or predator-prey relations.
Interspecies communication plays a key role for plants that depend on external agents for reproduction. For example, flowers need insects for pollination and provide resources like nectar and other rewards in return. They use communication to signal their benefits and attract visitors by using distinctive colors and symmetrical shapes to stand out from their surroundings. This form of advertisement is necessary since flowers compete with each other for visitors. Many fruit-bearing plants rely on plant-to-animal communication to disperse their seeds and move them to a favorable location. This happens by providing nutritious fruits to animals. The seeds are eaten together with the fruit and are later excreted at a different location. Communication makes animals aware of where the fruits are and whether they are ripe. For many fruits, this happens through their color: they have an inconspicuous green color until they ripen and take on a new color that stands in visual contrast to the environment. Another example of interspecies communication is found in the ant-plant relation. It concerns, for instance, the selection of seeds by ants for their ant gardens and the pruning of exogenous vegetation as well as plant protection by ants.
Some animal species also engage in interspecies communication, like apes, whales, dolphins, elephants, and dogs. For example, different species of monkeys use common signals to cooperate when threatened by a common predator. Humans engage in interspecies communication when interacting with pets and working animals. For instance, acoustic signals play a central role in communication with dogs. Dogs can learn to react to various commands, like "sit" and "come". They can even be trained to respond to short syntactic combinations, like "bring X" or "put X in a box". They also react to the pitch and frequency of the human voice to detect emotions, dominance, and uncertainty. Dogs use a range of behavioral patterns to convey their emotions to humans, for example, in regard to aggressiveness, fearfulness, and playfulness.
== Computer ==
Computer communication concerns the exchange of data between computers and similar devices. For this to be possible, the devices have to be connected through a transmission system that forms a network between them. A transmitter is needed to send messages and a receiver is needed to receive them. A personal computer may use a modem as a transmitter to send information to a server through the public telephone network as the transmission system. The server may use a modem as its receiver. Before transmission, the data has to be converted into an electric signal. Communication channels used for transmission are either analog or digital and are characterized by features like bandwidth and latency.
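The transmitter/transmission-system/receiver arrangement described above can be sketched in a few lines of Python. A local TCP loopback connection stands in for a real transmission system such as the telephone network; the message text and port choice are illustrative.

```python
# Minimal sketch of a transmitter/receiver pair: a TCP loopback
# connection stands in for the transmission system.
import socket
import threading

def receiver(server_sock, received):
    conn, _ = server_sock.accept()    # wait for the transmitter to connect
    received.append(conn.recv(1024))  # read the transmitted bytes
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
received = []
t = threading.Thread(target=receiver, args=(server, received))
t.start()

# The transmitter: the data must be encoded as bytes (the counterpart of
# the electric signal in a physical system) before it can cross the channel.
client = socket.socket()
client.connect(server.getsockname())
client.sendall("hello".encode("utf-8"))
client.close()

t.join()
server.close()
print(received[0].decode("utf-8"))  # prints "hello"
```

The same pattern applies regardless of the underlying channel; only the addressing and the physical signalling differ between, say, a modem link and an Ethernet cable.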
There are many forms of computer networks. The most commonly discussed ones are LANs and WANs. LAN stands for local area network, which is a computer network within a limited area, usually with a distance of less than one kilometer. This is the case when connecting two computers within a home or an office building. LANs can be set up using a wired connection, like Ethernet, or a wireless connection, like Wi-Fi. WANs, on the other hand, are wide area networks that span large geographical regions, like the internet. Their networks are more complex and may use several intermediate connection nodes to transfer information between endpoints. Further types of computer networks include PANs (personal area networks), CANs (campus area networks), and MANs (metropolitan area networks).
For computer communication to be successful, the involved devices have to follow a common set of conventions governing their exchange. These conventions are known as the communication protocol. They concern various aspects of the exchange, like the format of messages and how to respond to transmission errors. They also cover how the two systems are synchronized, for example, how the receiver identifies the start and end of a signal. Based on the flow of information, systems are categorized as simplex, half-duplex, and full-duplex. For simplex systems, signals flow only in one direction from the sender to the receiver, like in radio, cable television, and screens displaying arrivals and departures at airports. Half-duplex systems allow two-way exchanges but signals can only flow in one direction at a time, like walkie-talkies and police radios. In the case of full-duplex systems, signals can flow in both directions at the same time, like regular telephone and internet. In either case, it is often important for successful communication that the connection is secure to ensure that the transmitted data reaches only the intended destination and is not intercepted by an unauthorized third party. This can be achieved by using cryptography, which changes the format of the transmitted information to make it unintelligible to potential interceptors.
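The synchronization problem mentioned above, identifying the start and end of a message in a continuous stream, is commonly solved by framing conventions. The sketch below uses a length-prefix frame; the 4-byte big-endian header is an illustrative choice, not any specific real protocol.

```python
# Length-prefix framing: each message is preceded by a 4-byte header
# giving its length, so the receiver can find message boundaries.
import struct

def frame(payload: bytes) -> bytes:
    """Prepend a 4-byte big-endian length header to the payload."""
    return struct.pack(">I", len(payload)) + payload

def deframe(stream: bytes) -> list:
    """Split a byte stream back into the original messages."""
    messages, pos = [], 0
    while pos < len(stream):
        (length,) = struct.unpack_from(">I", stream, pos)
        pos += 4
        messages.append(stream[pos:pos + length])
        pos += length
    return messages

stream = frame(b"hello") + frame(b"world")
print(deframe(stream))  # [b'hello', b'world']
```

Without such a convention, the receiver has no way to tell where one message ends and the next begins, which is exactly the kind of agreement a communication protocol standardizes.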
Human-computer communication is a closely related field that concerns topics like how humans interact with computers and how data in the form of inputs and outputs is exchanged. This happens through a user interface, which includes the hardware used to interact with the computer, like a mouse, a keyboard, and a monitor, as well as the software used in the process. On the software side, most early user interfaces were command-line interfaces in which the user must type a command to interact with the computer. Most modern user interfaces are graphical user interfaces, like Microsoft Windows and macOS, which are usually much easier to use for non-experts. They involve graphical elements through which the user can interact with the computer, commonly using a design concept known as skeuomorphism to make a new concept feel familiar and speed up understanding by mimicking the real-world equivalent of the interface object. Examples include the typical computer folder icon and recycle bin used for discarding files. One aim when designing user interfaces is to simplify the interaction with computers. This helps make them more user-friendly and accessible to a wider audience while also increasing productivity.
== Communication studies ==
Communication studies, also referred to as communication science, is the academic discipline studying communication. It is closely related to semiotics, with one difference being that communication studies focuses more on technical questions of how messages are sent, received, and processed. Semiotics, on the other hand, tackles more abstract questions in relation to meaning and how signs acquire it. Communication studies covers a wide area overlapping with many other disciplines, such as biology, anthropology, psychology, sociology, linguistics, media studies, and journalism.
Many contributions in the field of communication studies focus on developing models and theories of communication. Models of communication aim to give a simplified overview of the main components involved in communication. Theories of communication try to provide conceptual frameworks to accurately present communication in all its complexity. Some theories focus on communication as a practical art of discourse while others explore the roles of signs, experience, information processing, and the goal of building a social order through coordinated interaction. Communication studies is also interested in the functions and effects of communication. It covers issues like how communication satisfies physiological and psychological needs, helps build relationships, and assists in gathering information about the environment, other individuals, and oneself. A further topic concerns the question of how communication systems change over time and how these changes correlate with other societal changes. A related topic focuses on psychological principles underlying those changes and the effects they have on how people exchange ideas.
Communication was studied as early as Ancient Greece. Early influential theories were created by Plato and Aristotle, who stressed public speaking and the understanding of rhetoric. According to Aristotle, for example, the goal of communication is to persuade the audience. The field of communication studies only became a separate research discipline in the 20th century, especially starting in the 1940s. The development of new communication technologies, such as telephone, radio, newspapers, television, and the internet, has had a big impact on communication and communication studies.
Today, communication studies is a wide discipline. Some works in it try to provide a general characterization of communication in the widest sense. Others attempt to give a precise analysis of one specific form of communication. Communication studies includes many subfields. Some focus on wide topics like interpersonal communication, intrapersonal communication, verbal communication, and non-verbal communication. Others investigate communication within a specific area. Organizational communication concerns communication between members of organizations such as corporations, nonprofits, or small businesses. Central in this regard is the coordination of the behavior of the different members as well as the interaction with customers and the general public. Closely related terms are business communication, corporate communication, and professional communication. The main element of marketing communication is advertising but it also encompasses other communication activities aimed at advancing the organization's objective to its audiences, like public relations. Political communication covers topics like electoral campaigns to influence voters and legislative communication, like letters to a congress or committee documents. Specific emphasis is often given to propaganda and the role of mass media.
Intercultural communication is relevant to both organizational and political communication since they often involve attempts to exchange messages between communicators from different cultural backgrounds. The cultural background affects how messages are formulated and interpreted and can be the cause of misunderstandings. It is also relevant for development communication, which is about the use of communication for assisting in development, like aid given by first-world countries to third-world countries. Health communication concerns communication in the field of healthcare and health promotion efforts. One of its topics is how healthcare providers, like doctors and nurses, should communicate with their patients.
== History ==
Communication history studies how communicative processes evolved and interacted with society, culture, and technology. Human communication has a long history and the way people communicate has changed considerably over time. Many of these changes were triggered by the development of new communication technologies and had various effects on how people exchanged ideas. New communication technologies usually require new skills that people need to learn to use them effectively.
In the academic literature, the history of communication is usually divided into ages based on the dominant form of communication in that age. The number of ages and the precise periodization are disputed. They usually include ages for speaking, writing, and print as well as electronic mass communication and the internet. According to communication theorist Marshall Poe, the dominant media for each age can be characterized in relation to several factors. They include the amount of information a medium can store, how long it persists, how much time it takes to transmit it, and how costly it is to use the medium. Poe argues that subsequent ages usually involve some form of improvement of one or more of the factors.
According to some scientific estimates, language developed around 40,000 years ago while others consider it to be much older. Before this development, human communication resembled animal communication and happened through a combination of grunts, cries, gestures, and facial expressions. Language helped early humans to organize themselves and plan ahead more efficiently. In early societies, spoken language was the primary form of communication. Most knowledge was passed on through it, often in the form of stories or wise sayings. This form does not produce stable knowledge since it depends on imperfect human memory. Because of this, many details differ from one telling to the next and are presented differently by distinct storytellers. As people started to settle and form agricultural communities, societies grew and there was an increased need for stable records of ownership of land and commercial transactions. This triggered the invention of writing, which is able to solve many problems that arose from using exclusively oral communication. It is much more efficient at preserving knowledge and passing it on between generations since it does not depend on human memory. Before the invention of writing, certain forms of proto-writing had already developed. Proto-writing encompasses long-lasting visible marks used to store information, like decorations on pottery items, knots in a cord to track goods, or seals to mark property.
Most early written communication happened through pictograms. Pictograms are graphical symbols that convey meaning by visually resembling real-world objects. The use of basic pictographic symbols to represent things like farming produce was common in ancient cultures and began around 9000 BCE. The first complex writing system including pictograms was developed around 3500 BCE by the Sumerians and is called cuneiform. Pictograms are still in use today, like no-smoking signs and the symbols of male and female figures on bathroom doors. A significant disadvantage of pictographic writing systems is that they need a large number of symbols to refer to all the objects one wants to talk about. This problem was solved by the development of other writing systems. For example, the symbols of alphabetic writing systems do not stand for regular objects. Instead, they relate to the sounds used in spoken language. Other types of early writing systems include logographic and ideographic writing systems. A drawback of many early forms of writing, like the clay tablets used for cuneiform, was that they were not very portable. This made it difficult to transport the texts from one location to another to share information. This changed with the invention of papyrus by the Egyptians around 2500 BCE and was further improved later by the development of parchment and paper.
Until the 1400s, almost all written communication was hand-written, which limited the spread of written media within society since copying texts by hand was costly. The introduction and popularization of mass printing in the middle of the 15th century by Johann Gutenberg resulted in rapid changes. Mass printing quickly increased the circulation of written media and also led to the dissemination of new forms of written documents, like newspapers and pamphlets. One side effect was that the augmented availability of written documents significantly improved the general literacy of the population. This development served as the foundation for revolutions in various fields, including science, politics, and religion.
Scientific discoveries in the 19th and 20th centuries caused many further developments in the history of communication. They include the invention of telegraphs and telephones, which made it even easier and faster to transmit information from one location to another without the need to transport written documents. These communication forms were initially limited to cable connections, which had to be established first. Later developments found ways of wireless transmission using radio signals. They made it possible to reach wide audiences and radio soon became one of the central forms of mass communication. Various innovations in the field of photography enabled the recording of images on film, which led to the development of cinema and television. The reach of wireless communication was further enhanced with the development of satellites, which made it possible to broadcast radio and television signals to stations all over the world. This way, information could be shared almost instantly everywhere around the globe. The development of the internet constitutes a further milestone in the history of communication. It made it easier than ever before for people to exchange ideas, collaborate, and access information from anywhere in the world by using a variety of means, such as websites, e-mail, social media, and video conferences.
== See also ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
Quotations related to Communication at Wikiquote
Media related to Communication at Wikimedia Commons
Communications satellite

A communications satellite is an artificial satellite that relays and amplifies radio telecommunication signals via a transponder; it creates a communication channel between a source transmitter and a receiver at different locations on Earth. Communications satellites are used for television, telephone, radio, internet, and military applications. Many communications satellites are in geostationary orbit 22,236 miles (35,785 km) above the equator, so that the satellite appears stationary at the same point in the sky; therefore the satellite dish antennas of ground stations can be aimed permanently at that spot and do not have to move to track the satellite. Others form satellite constellations in low Earth orbit, where antennas on the ground have to follow the position of the satellites and switch between satellites frequently.
The radio waves used for telecommunications links travel by line of sight and so are obstructed by the curve of the Earth. The purpose of communications satellites is to relay the signal around the curve of the Earth allowing communication between widely separated geographical points. Communications satellites use a wide range of radio and microwave frequencies. To avoid signal interference, international organizations have regulations for which frequency ranges or "bands" certain organizations are allowed to use. This allocation of bands minimizes the risk of signal interference.
== History ==
=== Origins ===
In October 1945, Arthur C. Clarke published an article titled "Extraterrestrial Relays" in the British magazine Wireless World. The article described the fundamentals behind the deployment of artificial satellites in geostationary orbits to relay radio signals. Because of this, Arthur C. Clarke is often quoted as being the inventor of the concept of the communications satellite, and the term 'Clarke Belt' is employed as a description of the orbit.
The first artificial Earth satellite was Sputnik 1, which was put into orbit by the Soviet Union on 4 October 1957. It was developed by Mikhail Tikhonravov and Sergey Korolev, building on work by Konstantin Tsiolkovsky. Sputnik 1 was equipped with an on-board radio transmitter that worked on two frequencies of 20.005 and 40.002 MHz, or 7 and 15 meters wavelength. The satellite was not placed in orbit to send data from one point on Earth to another, but the radio transmitter was meant to study the properties of radio wave distribution throughout the ionosphere. The launch of Sputnik 1 was a major step in the exploration of space and rocket development, and marks the beginning of the Space Age.
=== Early active and passive satellite experiments ===
There are two major classes of communications satellites, passive and active. Passive satellites only reflect the signal coming from the source, toward the direction of the receiver. With passive satellites, the reflected signal is not amplified at the satellite, and only a small amount of the transmitted energy actually reaches the receiver. Since the satellite is so far above Earth, the radio signal is attenuated due to free-space path loss, so the signal received on Earth is very weak. Active satellites, on the other hand, amplify the received signal before retransmitting it to the receiver on the ground. Passive satellites were the first communications satellites, but are little used now.
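The free-space path loss mentioned above can be quantified with the standard formula FSPL(dB) = 20·log10(d) + 20·log10(f) + 92.45, for distance in kilometres and frequency in gigahertz. The sketch below evaluates it for a one-way geostationary link; the 4 GHz figure is an illustrative C-band downlink frequency, not tied to any specific satellite in this article (a passive relay would incur the loss twice, up and back down).

```python
# Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# One-way loss over a geostationary distance at an assumed 4 GHz downlink:
print(round(fspl_db(35_786, 4)))  # ≈ 196 dB
```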
Work that was begun in the field of electrical intelligence gathering at the United States Naval Research Laboratory in 1951 led to a project named Communication Moon Relay. Military planners had long shown considerable interest in secure and reliable communications lines as a tactical necessity, and the ultimate goal of this project was the creation of the longest communications circuit in human history, with the Moon, Earth's natural satellite, acting as a passive relay. After achieving the first transoceanic communication between Washington, D.C., and Hawaii on 23 January 1956, this system was publicly inaugurated and put into formal production in January 1960.
The first satellite purpose-built to actively relay communications was Project SCORE, led by Advanced Research Projects Agency (ARPA) and launched on 18 December 1958, which used a tape recorder to carry a stored voice message, as well as to receive, store, and retransmit messages. It was used to send a Christmas greeting to the world from U.S. President Dwight D. Eisenhower. The satellite also executed several realtime transmissions before the non-rechargeable batteries failed on 30 December 1958 after eight hours of actual operation.
The direct successor to SCORE was another ARPA-led project called Courier. Courier 1B was launched on 4 October 1960 to explore whether it would be possible to establish a global military communications network by using "delayed repeater" satellites, which receive and store information until commanded to rebroadcast them. After 17 days, a command system failure ended communications from the satellite.
NASA's satellite applications program launched the first artificial satellite used for passive relay communications, Echo 1, on 12 August 1960. Echo 1 was an aluminized balloon satellite acting as a passive reflector of microwave signals. Communication signals were bounced off the satellite from one point on Earth to another. This experiment sought to establish the feasibility of worldwide broadcasts of telephone, radio, and television signals.
=== More firsts and further experiments ===
Telstar was the first active, direct relay communications commercial satellite and marked the first transatlantic transmission of television signals. Belonging to AT&T as part of a multi-national agreement between AT&T, Bell Telephone Laboratories, NASA, the British General Post Office, and the French National PTT (Post Office) to develop satellite communications, it was launched by NASA from Cape Canaveral on 10 July 1962, in the first privately sponsored space launch.
Another passive relay experiment primarily intended for military communications purposes was Project West Ford, which was led by Massachusetts Institute of Technology's Lincoln Laboratory. After an initial failure in 1961, a launch on 9 May 1963 dispersed 350 million copper needle dipoles to create a passive reflecting belt. Even though only about half of the dipoles properly separated from each other, the project was able to successfully experiment and communicate using frequencies in the SHF X band spectrum.
An immediate antecedent of the geostationary satellites was the Hughes Aircraft Company's Syncom 2, launched on 26 July 1963. Syncom 2 was the first communications satellite in a geosynchronous orbit. It revolved around the Earth once per day at constant speed, but because it still had north–south motion, special equipment was needed to track it. Its successor, Syncom 3, launched on 19 July 1964, was the first geostationary communications satellite. Syncom 3 obtained a geosynchronous orbit, without a north–south motion, making it appear from the ground as a stationary object in the sky.
A direct extension of the passive experiments of Project West Ford was the Lincoln Experimental Satellite program, also conducted by the Lincoln Laboratory on behalf of the United States Department of Defense. The LES-1 active communications satellite was launched on 11 February 1965 to explore the feasibility of active solid-state X band long-range military communications. A total of nine satellites were launched between 1965 and 1976 as part of this series.
=== International commercial satellite projects ===
In the United States, 1962 saw the creation of the Communications Satellite Corporation (COMSAT) private corporation, which was subject to instruction by the US Government on matters of national policy. Over the next two years, international negotiations led to the Intelsat Agreements, which in turn led to the launch of Intelsat 1, also known as Early Bird, on 6 April 1965, the first commercial communications satellite to be placed in geosynchronous orbit. Subsequent Intelsat launches in the 1960s provided multi-destination service and video, audio, and data service to ships at sea (Intelsat 2 in 1966–67), and the completion of a fully global network with Intelsat 3 in 1969–70. By the 1980s, with significant expansions in commercial satellite capacity, Intelsat was on its way to becoming part of the competitive private telecommunications industry, and had started to face competition from the likes of PanAmSat in the United States, which, ironically, was then bought by its archrival in 2005.
When Intelsat 1 was launched, the United States was the only launch source outside of the Soviet Union, which did not participate in the Intelsat agreements. The Soviet Union launched its first communications satellite on 23 April 1965 as part of the Molniya program. This program was also unique at the time for its use of what then became known as the Molniya orbit, which describes a highly elliptical orbit, with two high apogees daily over the northern hemisphere. This orbit provides a long dwell time over Russian territory as well as over Canada at higher latitudes than geostationary orbits over the equator.
In the 2020s, the popularity of low Earth orbit satellite internet constellations providing relatively low-cost internet services led to reduced demand for new geostationary orbit communications satellites.
== Satellite orbits ==
Communications satellites usually have one of three primary types of orbit, while other orbital classifications are used to further specify orbital details. MEO and LEO are non-geostationary orbit (NGSO).
Geostationary satellites have a geostationary orbit (GEO), which is 22,236 miles (35,785 km) from Earth's surface. This orbit has the special characteristic that the apparent position of the satellite in the sky when viewed by a ground observer does not change; the satellite appears to "stand still" in the sky. This is because the satellite's orbital period is the same as the rotation rate of the Earth. The advantage of this orbit is that ground antennas do not have to track the satellite across the sky; they can be fixed to point at the location in the sky where the satellite appears.
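The geostationary altitude quoted above can be recovered from the requirement that the orbital period equal one sidereal day, using Kepler's third law, a³ = GM·T²/(4π²), and subtracting Earth's equatorial radius:

```python
# Geostationary altitude from Kepler's third law.
import math

GM = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
T = 86164.0905            # one sidereal day, s
EARTH_RADIUS = 6378137.0  # Earth's equatorial radius, m

semi_major_axis = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (semi_major_axis - EARTH_RADIUS) / 1000
print(round(altitude_km))  # ≈ 35786, within a kilometre of the quoted figure
```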
Medium Earth orbit (MEO) satellites are closer to Earth. Orbital altitudes range from 2,000 to 36,000 kilometres (1,200 to 22,400 mi) above Earth.
The region below medium orbits is referred to as low Earth orbit (LEO), and is about 160 to 2,000 kilometres (99 to 1,243 mi) above Earth.
As satellites in MEO and LEO orbit the Earth faster, they do not remain visible in the sky to a fixed point on Earth continually like a geostationary satellite, but appear to a ground observer to cross the sky and "set" when they go behind the Earth beyond the visible horizon. Therefore, to provide continuous communications capability with these lower orbits requires a larger number of satellites, so that one of these satellites will always be visible in the sky for transmission of communication signals. However, due to their closer distance to the Earth, LEO or MEO satellites can communicate to ground with reduced latency and at lower power than would be required from a geosynchronous orbit.
=== Low Earth orbit (LEO) ===
A low Earth orbit (LEO) typically is a circular orbit about 160 to 2,000 kilometres (99 to 1,243 mi) above the Earth's surface and, correspondingly, a period (time to revolve around the Earth) of about 90 minutes.
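The roughly 90-minute period can be checked with the same Kepler relation, T = 2π√(a³/μ); the two altitudes below are illustrative points within the LEO band:

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0  # Earth's equatorial radius, m

def orbital_period_min(altitude_km: float) -> float:
    a = R_EARTH + altitude_km * 1000  # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a ** 3 / MU) / 60

print(orbital_period_min(400))   # roughly 93 minutes (ISS-like altitude)
print(orbital_period_min(2000))  # roughly 127 minutes (top of the LEO band)
```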
Because of their low altitude, these satellites are only visible from within a radius of roughly 1,000 kilometres (620 mi) from the sub-satellite point. In addition, satellites in low Earth orbit change their position relative to the ground position quickly. So even for local applications, many satellites are needed if the mission requires uninterrupted connectivity.
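The quoted visibility radius depends on how far above the horizon the satellite must be to remain usable. A small sketch of the geometry, where the 500 km altitude and 20-degree minimum elevation mask are assumed figures for illustration:

```python
import math

R = 6371.0  # mean Earth radius, km

def coverage_radius_km(alt_km: float, min_elev_deg: float) -> float:
    # Earth-central angle from the sub-satellite point to the coverage edge
    e = math.radians(min_elev_deg)
    psi = math.acos(R / (R + alt_km) * math.cos(e)) - e
    return R * psi  # arc length along the ground

# assumed example: a 500 km orbit seen above a 20-degree elevation mask
print(round(coverage_radius_km(500, 20)))  # on the order of 1,000 km
```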
Low-Earth-orbiting satellites are less expensive to launch into orbit than geostationary satellites and, due to proximity to the ground, do not require as high signal strength (signal strength falls off as the square of the distance from the source, so the effect is considerable). Thus there is a trade off between the number of satellites and their cost.
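The "square of the distance" remark can be made concrete with the inverse-square law; the 550 km LEO altitude below is an assumed example:

```python
# free-space inverse-square law: received power scales as 1/d^2
leo_km = 550     # assumed LEO altitude, km
geo_km = 35_786  # GEO altitude, km
power_ratio = (geo_km / leo_km) ** 2
print(round(power_ratio))  # the LEO signal arrives thousands of times stronger
```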
In addition, there are important differences in the onboard and ground equipment needed to support the two types of missions.
=== Satellite constellation ===
A group of satellites working in concert is known as a satellite constellation. Two such constellations, intended to provide satellite phone and low-speed data services, primarily to remote areas, are the Iridium and Globalstar systems. The Iridium system has 66 satellites, whose orbital inclination of 86.4° and inter-satellite links provide service availability over the entire surface of Earth. Starlink is a satellite internet constellation operated by SpaceX that aims for global satellite Internet access coverage.
It is also possible to offer discontinuous coverage using a low-Earth-orbit satellite capable of storing data received while passing over one part of Earth and transmitting it later while passing over another part. This will be the case with the CASCADE system of Canada's CASSIOPE communications satellite. Another system using this store and forward method is Orbcomm.
=== Medium Earth orbit (MEO) ===
A medium Earth orbit is a satellite in orbit somewhere between 2,000 and 35,786 kilometres (1,243 and 22,236 mi) above the Earth's surface. MEO satellites are similar to LEO satellites in functionality. MEO satellites are visible for much longer periods of time than LEO satellites, usually between 2 and 8 hours. MEO satellites have a larger coverage area than LEO satellites. A MEO satellite's longer duration of visibility and wider footprint means fewer satellites are needed in a MEO network than a LEO network. One disadvantage is that a MEO satellite's distance gives it a longer time delay and weaker signal than a LEO satellite, although these limitations are not as severe as those of a GEO satellite.
Like LEOs, these satellites do not maintain a stationary distance from the Earth. This is in contrast to the geostationary orbit, where satellites are always 35,786 kilometres (22,236 mi) from Earth.
Typically the orbit of a medium Earth orbit satellite is about 16,000 kilometres (10,000 mi) above Earth. In various patterns, these satellites make the trip around Earth in anywhere from 2 to 8 hours.
==== Examples of MEO ====
In 1962, the communications satellite, Telstar, was launched. It was a medium Earth orbit satellite designed to help facilitate high-speed telephone signals. Although it was the first practical way to transmit signals over the horizon, its major drawback was soon realised. Because its orbital period of about 2.5 hours did not match the Earth's rotational period of 24 hours, continuous coverage was impossible. It was apparent that multiple MEOs needed to be used in order to provide continuous coverage.
In 2013, the first four of a constellation of 20 MEO satellites were launched. The O3b satellites provide broadband internet services, in particular to remote locations and for maritime and in-flight use, and orbit at an altitude of 8,063 kilometres (5,010 mi).
=== Geostationary orbit (GEO) ===
To an observer on Earth, a satellite in a geostationary orbit appears motionless, in a fixed position in the sky. This is because it revolves around the Earth at Earth's own angular velocity (one revolution per sidereal day, in an equatorial orbit).
A geostationary orbit is useful for communications because ground antennas can be aimed at the satellite without having to track the satellite's motion. Such fixed antennas are relatively inexpensive.
In applications that require many ground antennas, such as DirecTV distribution, the savings in ground equipment can more than outweigh the cost and complexity of placing a satellite into orbit.
==== Examples of GEO ====
The first geostationary satellite was Syncom 3, launched on 19 August 1964, and used for communication across the Pacific starting with television coverage of the 1964 Summer Olympics. Shortly after Syncom 3, Intelsat I, aka Early Bird, was launched on 6 April 1965 and placed in orbit at 28° west longitude. It was the first geostationary satellite for telecommunications over the Atlantic Ocean.
On 9 November 1972, Canada's first geostationary satellite serving the continent, Anik A1, was launched by Telesat Canada, with the United States following suit with the launch of Westar 1 by Western Union on 13 April 1974.
On 30 May 1974, the first geostationary communications satellite in the world to be three-axis stabilized was launched: the experimental satellite ATS-6 built for NASA.
After the launches of the Telstar through Westar 1 satellites, RCA Americom (later GE Americom, now SES) launched Satcom 1 in 1975. It was Satcom 1 that was instrumental in helping early cable TV channels such as WTBS (now TBS), HBO, CBN (now Freeform) and The Weather Channel become successful, because these channels distributed their programming to all of the local cable TV headends using the satellite. Additionally, it was the first satellite used by broadcast television networks in the United States, like ABC, NBC, and CBS, to distribute programming to their local affiliate stations. Satcom 1 was widely used because it had twice the communications capacity of the competing Westar 1 in America (24 transponders as opposed to the 12 of Westar 1), resulting in lower transponder-usage costs. Satellites in later decades tended to have even higher transponder numbers.
By 2000, Hughes Space and Communications (now Boeing Satellite Development Center) had built nearly 40 percent of the more than one hundred satellites in service worldwide. Other major satellite manufacturers include Space Systems/Loral, Orbital Sciences Corporation with the Star Bus series, Indian Space Research Organisation, Lockheed Martin (owns the former RCA Astro Electronics/GE Astro Space business), Northrop Grumman, Alcatel Space, now Thales Alenia Space, with the Spacebus series, and Astrium.
=== Molniya orbit ===
Geostationary satellites must operate above the equator and therefore appear lower on the horizon as the receiver gets farther from the equator. This causes problems at extreme northerly latitudes, affecting connectivity and causing multipath interference (signals reflecting off the ground into the ground antenna).
Thus, for areas close to the North (and South) Pole, a geostationary satellite may appear below the horizon. Therefore, Molniya orbit satellites have been launched, mainly in Russia, to alleviate this problem.
Molniya orbits can be an appealing alternative in such cases. The Molniya orbit is highly inclined, guaranteeing good elevation over selected positions during the northern portion of the orbit. (Elevation is the extent of the satellite's position above the horizon. Thus, a satellite at the horizon has zero elevation and a satellite directly overhead has elevation of 90 degrees.)
The Molniya orbit is designed so that the satellite spends the great majority of its time over the far northern latitudes, during which its ground footprint moves only slightly. Its period is one half day, so that the satellite is available for operation over the targeted region for six to nine hours every second revolution. In this way a constellation of three Molniya satellites (plus in-orbit spares) can provide uninterrupted coverage.
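The half-day period fixes the orbit's size via Kepler's third law; with an assumed perigee of about 600 km, the familiar Molniya apogee near 40,000 km follows:

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R = 6_371_000.0      # mean Earth radius, m

T = 43_082  # half a sidereal day, in seconds
a = (MU * (T / (2 * math.pi)) ** 2) ** (1 / 3)  # semi-major axis, m

perigee_alt = 600_000.0  # assumed perigee altitude, m
apogee_alt = 2 * a - (R + perigee_alt) - R  # from r_apogee + r_perigee = 2a
print(round(a / 1000), round(apogee_alt / 1000))  # ~26,560 km and ~39,800 km
```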
The first satellite of the Molniya series was launched on 23 April 1965 and was used for experimental transmission of TV signals from a Moscow uplink station to downlink stations located in Siberia and the Russian Far East, in Norilsk, Khabarovsk, Magadan and Vladivostok. In November 1967 Soviet engineers created a unique national network of satellite television, called Orbita, that was based on Molniya satellites.
=== Polar orbit ===
In the United States, the National Polar-orbiting Operational Environmental Satellite System (NPOESS) was established in 1994 to consolidate the polar satellite operations of NASA (National Aeronautics and Space Administration) and NOAA (National Oceanic and Atmospheric Administration). NPOESS manages a number of satellites for various purposes; for example, METSAT for meteorological satellite, EUMETSAT for the European branch of the program, and METOP for meteorological operations.
These orbits are Sun-synchronous, meaning that they cross the equator at the same local time each day. For example, satellites in the NPOESS (civilian) orbit cross the equator, going from south to north, at 1:30 P.M., 5:30 P.M., and 9:30 P.M.
=== Beyond geostationary orbit ===
There are plans and initiatives to place dedicated communications satellites beyond geostationary orbit.
NASA has proposed LunaNet, a data network aiming to provide a "lunar Internet" for cislunar spacecraft and installations.
The Moonlight Initiative is an equivalent ESA project, stated to be compatible with LunaNet and to provide navigation services for the lunar surface. Both programmes are satellite constellations of several satellites in various orbits around the Moon.
Other orbits are also planned. Positions at the Earth–Moon Lagrangian points have been proposed for communication satellites covering the Moon, much as communication satellites in geosynchronous orbit cover the Earth. Dedicated communication satellites in orbit around Mars, supporting missions on the surface and in other orbits, have also been considered, such as the Mars Telecommunications Orbiter.
== Structure ==
Communications satellites are usually composed of the following subsystems:
Communication Payload, normally composed of transponders, antennas, amplifiers and switching systems
Engines used to bring the satellite to its desired orbit
A station-keeping, tracking and stabilization subsystem used to keep the satellite in the right orbit, with its antennas pointed in the right direction and its power system pointed towards the Sun
Power subsystem, used to power the satellite systems, normally composed of solar cells and batteries that maintain power during solar eclipse
Command and Control subsystem, which maintains communications with ground control stations. The ground control Earth stations monitor the satellite performance and control its functionality during various phases of its life-cycle.
The bandwidth available from a satellite depends upon the number of transponders provided by the satellite. Each service (TV, Voice, Internet, radio) requires a different amount of bandwidth for transmission. This is typically known as link budgeting and a network simulator can be used to arrive at the exact value.
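A link budget of the kind mentioned above is, at its simplest, a sum of gains and losses in decibels. The figures below are illustrative assumptions, not values for any real satellite:

```python
import math

def db(x: float) -> float:
    return 10 * math.log10(x)

# illustrative downlink assumptions
eirp_dbw = 52.0        # satellite EIRP towards the ground station
freq_hz = 12.0e9       # Ku-band downlink frequency
dist_m = 38_000_000.0  # slant range to a GEO satellite
gt_dbk = 20.0          # ground terminal figure of merit, G/T
bw_hz = 36.0e6         # one transponder's bandwidth
k_dbw = -228.6         # Boltzmann's constant in dBW/(K*Hz)

# free-space path loss, then carrier-to-noise ratio in the transponder bandwidth
fspl_db = 20 * math.log10(4 * math.pi * dist_m * freq_hz / 3.0e8)
cn_db = eirp_dbw - fspl_db + gt_dbk - k_dbw - db(bw_hz)
print(round(fspl_db, 1), round(cn_db, 1))  # path loss ~205.6 dB, C/N ~19.4 dB
```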
== Frequency allocation for satellite systems ==
Allocating frequencies to satellite services is a complicated process which requires international coordination and planning. This is carried out under the auspices of the International Telecommunication Union (ITU).
To facilitate frequency planning, the world is divided into three regions:
Region 1: Europe, Africa, the Middle East, what was formerly the Soviet Union, and Mongolia
Region 2: North and South America and Greenland
Region 3: Asia (excluding region 1 areas), Australia, and the southwest Pacific
Within these regions, frequency bands are allocated to various satellite services, although a given service may be allocated different frequency bands in different regions. Some of the services provided by satellites are:
Fixed satellite service (FSS)
Broadcasting satellite service (BSS)
Mobile-satellite service
Radionavigation-satellite service
Meteorological-satellite service
== Applications ==
=== Telephony ===
The first and historically most important application for communication satellites was in intercontinental long distance telephony. The fixed Public Switched Telephone Network relays telephone calls from land line telephones to an Earth station, where they are then transmitted to a geostationary satellite. The downlink follows an analogous path. Improvements in submarine communications cables through the use of fiber-optics caused some decline in the use of satellites for fixed telephony in the late 20th century.
Satellite communications are still used in many applications today. Remote islands such as Ascension Island, Saint Helena, Diego Garcia, and Easter Island, where no submarine cables are in service, need satellite telephones. There are also regions of some continents and countries where landline telecommunications are rare to non-existent, for example large regions of South America, Africa, Canada, China, Russia, and Australia. Satellite communications also provide connection to the edges of Antarctica and Greenland. Other users of satellite phones include rigs at sea, hospitals (as a backup), the military, and recreational travellers. Ships at sea, as well as planes, often use satellite phones.
Satellite telephony can be provided by a number of means. On a large scale, often there will be a local telephone system in an isolated area with a link to the telephone system in a mainland area. There are also services that will patch a radio signal to a telephone system. In this example, almost any type of satellite can be used. Satellite phones connect directly to a constellation of either geostationary or low-Earth-orbit satellites. Calls are then forwarded to a satellite teleport connected to the Public Switched Telephone Network.
=== Television ===
As television became the main market, its demand for simultaneous delivery of relatively few signals of large bandwidth to many receivers was a more precise match for the capabilities of geosynchronous comsats. Two satellite types are used for North American television and radio: Direct broadcast satellite (DBS), and Fixed Service Satellite (FSS).
The definitions of FSS and DBS satellites outside of North America, especially in Europe, are a bit more ambiguous. Most satellites used for direct-to-home television in Europe have the same high power output as DBS-class satellites in North America, but use the same linear polarization as FSS-class satellites. Examples of these are the Astra, Eutelsat, and Hotbird spacecraft in orbit over the European continent. Because of this, the terms FSS and DBS are used mostly in North America and are uncommon in Europe.
Fixed Service Satellites use the C band, and the lower portions of the Ku band. They are normally used for broadcast feeds to and from television networks and local affiliate stations (such as program feeds for network and syndicated programming, live shots, and backhauls), as well as being used for distance learning by schools and universities, business television (BTV), Videoconferencing, and general commercial telecommunications. FSS satellites are also used to distribute national cable channels to cable television headends.
Free-to-air satellite TV channels are also usually distributed on FSS satellites in the Ku band. The Intelsat Americas 5, Galaxy 10R and AMC 3 satellites over North America provide a quite large amount of FTA channels on their Ku band transponders.
The American Dish Network DBS service has also used FSS technology for the programming packages requiring its SuperDish antenna, due to Dish Network needing more capacity to carry local television stations per the FCC's "must-carry" regulations, and more bandwidth to carry HDTV channels.
A direct broadcast satellite is a communications satellite that transmits to small DBS satellite dishes (usually 18 to 24 inches or 45 to 60 cm in diameter). Direct broadcast satellites generally operate in the upper portion of the microwave Ku band. DBS technology is used for DTH-oriented (Direct-To-Home) satellite TV services, such as DirecTV, DISH Network and Orby TV in the United States, Bell Satellite TV and Shaw Direct in Canada, Freesat and Sky in the UK, Ireland, and New Zealand and DSTV in South Africa.
Operating at lower frequency and lower power than DBS, FSS satellites require a much larger dish for reception (3 to 8 feet (1 to 2.5 m) in diameter for Ku band, and 12 feet (3.6 m) or larger for C band). They use linear polarization for each of the transponders' RF input and output (as opposed to circular polarization used by DBS satellites), but this is a minor technical difference that users do not notice. FSS satellite technology was also originally used for DTH satellite TV from the late 1970s to the early 1990s in the United States in the form of TVRO (Television Receive Only) receivers and dishes. It was also used in its Ku band form for the now-defunct Primestar satellite TV service.
Some satellites have been launched that have transponders in the Ka band, such as DirecTV's SPACEWAY-1 satellite, and Anik F2. NASA and ISRO have also launched experimental satellites carrying Ka band beacons recently.
Some manufacturers have also introduced special antennas for mobile reception of DBS television. Using Global Positioning System (GPS) technology as a reference, these antennas automatically re-aim to the satellite no matter where or how the vehicle (on which the antenna is mounted) is situated. These mobile satellite antennas are popular with some recreational vehicle owners. Such mobile DBS antennas are also used by JetBlue Airways for DirecTV (supplied by LiveTV, a subsidiary of JetBlue), which passengers can view on-board on LCD screens mounted in the seats.
=== Radio broadcasting ===
Satellite radio offers audio broadcast services in some countries, notably the United States. Mobile services allow listeners to roam a continent, listening to the same audio programming anywhere.
A satellite radio or subscription radio (SR) is a digital radio signal that is broadcast by a communications satellite, which covers a much wider geographical range than terrestrial radio signals.
=== Amateur radio ===
Amateur radio operators have access to amateur satellites, which have been designed specifically to carry amateur radio traffic. Most such satellites operate as spaceborne repeaters, and are generally accessed by amateurs equipped with UHF or VHF radio equipment and highly directional antennas such as Yagis or dish antennas. Due to launch costs, most current amateur satellites are launched into fairly low Earth orbits, and are designed to deal with only a limited number of brief contacts at any given time. Some satellites also provide data-forwarding services using the X.25 or similar protocols.
=== Internet access ===
Since the 1990s, satellite communication technology has been used as a means to connect to the Internet via broadband data connections. This can be very useful for users who are located in remote areas and cannot access a broadband connection, or who require high availability of services.
=== Military ===
Communications satellites are used for military communications applications, such as Global Command and Control Systems. Examples of military systems that use communication satellites are the MILSTAR, the DSCS, and the FLTSATCOM of the United States, NATO satellites, United Kingdom satellites (for instance Skynet), and satellites of the former Soviet Union. India has launched its first military communications satellite, GSAT-7; its transponders operate in the UHF, S, C and Ku bands. Typically military satellites operate in the UHF, SHF (also known as X-band) or EHF (also known as Ka band) frequency bands.
=== Data collection ===
Near-ground in situ environmental monitoring equipment (such as tide gauges, weather stations, weather buoys, and radiosondes), may use satellites for one-way data transmission or two-way telemetry and telecontrol. It may be based on a secondary payload of a weather satellite (as in the case of GOES and METEOSAT and others in the Argos system) or in dedicated satellites (such as SCD). The data rate is typically much lower than in satellite Internet access.
== See also ==
== References ==
=== Notes ===
=== Citations ===
== Further reading ==
Slotten, Hugh R. Beyond Sputnik and the Space Race: The Origins of Global Satellite Communications (Johns Hopkins University Press, 2022); online review
== External links ==
Communications satellites short history Archived 2015-05-12 at the Wayback Machine by David J. Whalen
Beyond The Ionosphere: Fifty Years of Satellite Communication (NASA SP-4217, 1997)
A composite or composite material (also composition material) is a material which is produced from two or more constituent materials. These constituent materials have notably dissimilar chemical or physical properties and are merged to create a material with properties unlike the individual elements. Within the finished structure, the individual elements remain separate and distinct, distinguishing composites from mixtures and solid solutions. Composite materials with more than one distinct layer are called composite laminates.
Typical engineered composite materials are made up of a binding agent forming the matrix and a filler material (particulates or fibres) giving substance, e.g.:
Concrete, reinforced concrete and masonry with cement, lime or mortar (which is itself a composite material) as a binder
Composite wood such as glulam and plywood with wood glue as a binder
Reinforced plastics, such as fiberglass and fibre-reinforced polymer with resin or thermoplastics as a binder
Ceramic matrix composites (composite ceramic and metal matrices)
Metal matrix composites
Advanced composite materials, often first developed for spacecraft and aircraft applications.
Composite materials can be less expensive, lighter, stronger or more durable than common materials. Some are inspired by biological structures found in plants and animals.
Robotic materials are composites that include sensing, actuation, computation, and communication components.
Composite materials are used for construction and technical structures such as boat hulls, swimming pool panels, racing car bodies, shower stalls, bathtubs, storage tanks, imitation granite, and cultured marble sinks and countertops. They are also being increasingly used in general automotive applications.
== History ==
The earliest composite materials were made from straw and mud combined to form bricks for building construction. Ancient brick-making was documented by Egyptian tomb paintings.
Wattle and daub might be the oldest composite materials, at over 6000 years old.
Woody plants, both true wood from trees and such plants as palms and bamboo, yield natural composites that were used prehistorically by humankind and are still used widely in construction and scaffolding.
Plywood, 3400 BC, by the Ancient Mesopotamians; gluing wood at different angles gives better properties than natural wood.
Cartonnage, layers of linen or papyrus soaked in plaster dates to the First Intermediate Period of Egypt c. 2181–2055 BC and was used for death masks.
Cob mud bricks, or mud walls, (using mud (clay) with straw or gravel as a binder) have been used for thousands of years.
Concrete was described by Vitruvius, writing around 25 BC in his Ten Books on Architecture, who distinguished types of aggregate appropriate for the preparation of lime mortars. For structural mortars, he recommended pozzolana, volcanic sands from the sandlike beds of Pozzuoli, brownish-yellow-gray in colour near Naples and reddish-brown at Rome. Vitruvius specifies a ratio of 1 part lime to 3 parts pozzolana for cements used in buildings, and a 1:2 ratio of lime to pulvis Puteolanus for underwater work, essentially the same ratio mixed today for concrete used at sea. Natural cement-stones, after burning, produced cements used in concretes from post-Roman times into the 20th century, with some properties superior to manufactured Portland cement.
Papier-mâché, a composite of paper and glue, has been used for hundreds of years.
The first artificial fibre-reinforced plastic was a combination of glass fibre and Bakelite, made in 1935 by Al Simison and Arthur D. Little at the Owens Corning Company.
One of the most common and familiar composites is fibreglass, in which small glass fibres are embedded within a polymeric material (normally an epoxy or polyester). The glass fibre is relatively strong and stiff (but also brittle), whereas the polymer is ductile (but also weak and flexible). Thus the resulting fibreglass is relatively stiff, strong, flexible, and ductile.
Composite bow
Leather cannon, wooden cannon
== Examples ==
=== Composite materials ===
Concrete is the most common artificial composite material of all. As of 2009, about 7.5 billion cubic metres of concrete are made each year.
Concrete typically consists of loose stones (construction aggregate) held with a matrix of cement. Concrete is an inexpensive material that resists large compressive forces but is susceptible to tensile loading. To give concrete the ability to resist being stretched, steel bars, which can resist high stretching (tensile) forces, are often added to concrete to form reinforced concrete.
Fibre-reinforced polymers include carbon-fiber-reinforced polymers and glass-reinforced plastic. If classified by matrix then there are thermoplastic composites, short fibre thermoplastics, long fibre thermoplastics or long-fiber-reinforced thermoplastics. There are numerous thermoset composites, including paper composite panels. Many advanced thermoset polymer matrix systems usually incorporate aramid fibre and carbon fibre in an epoxy resin matrix.
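For the fibre-and-matrix systems above, the stiffness along the fibre direction is commonly estimated with the rule of mixtures; the fibre and matrix moduli below are typical assumed values, not measured data:

```python
# longitudinal rule of mixtures: E_c = V_f * E_f + (1 - V_f) * E_m
E_f = 230.0  # GPa, assumed carbon-fibre modulus
E_m = 3.5    # GPa, assumed epoxy-matrix modulus
V_f = 0.60   # fibre volume fraction

E_c = V_f * E_f + (1 - V_f) * E_m
print(round(E_c, 1))  # ~139.4 GPa along the fibres
```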
Shape-memory polymer composites are high-performance composites, formulated using fibre or fabric reinforcements and shape-memory polymer resin as the matrix. Since a shape-memory polymer resin is used as the matrix, these composites have the ability to be easily manipulated into various configurations when they are heated above their activation temperatures and will exhibit high strength and stiffness at lower temperatures. They can also be reheated and reshaped repeatedly without losing their material properties. These composites are ideal for applications such as lightweight, rigid, deployable structures; rapid manufacturing; and dynamic reinforcement.
High strain composites are another type of high-performance composites that are designed to perform in a high deformation setting and are often used in deployable systems where structural flexing is advantageous. Although high strain composites exhibit many similarities to shape-memory polymers, their performance is generally dependent on the fibre layout as opposed to the resin content of the matrix.
Composites can also use metal fibres reinforcing other metals, as in metal matrix composites (MMC) or ceramic matrix composites (CMC), which include bone (hydroxyapatite reinforced with collagen fibres), cermet (ceramic and metal), and concrete. Ceramic matrix composites are built primarily for fracture toughness, not for strength. Another class of composite materials involves woven fabric composites, consisting of longitudinal and transverse laced yarns. Woven fabric composites are flexible, as they are in the form of a fabric.
Organic matrix/ceramic aggregate composites include asphalt concrete, polymer concrete, mastic asphalt, mastic roller hybrid, dental composite, syntactic foam, and mother of pearl. Chobham armour is a special type of composite armour used in military applications.
Additionally, thermoplastic composite materials can be formulated with specific metal powders resulting in materials with a density range from 2 g/cm3 to 11 g/cm3 (same density as lead). The most common name for this type of material is "high gravity compound" (HGC), although "lead replacement" is also used. These materials can be used in place of traditional materials such as aluminium, stainless steel, brass, bronze, copper, lead, and even tungsten in weighting, balancing (for example, modifying the centre of gravity of a tennis racquet), vibration damping, and radiation shielding applications. High density composites are an economically viable option when certain materials are deemed hazardous and are banned (such as lead) or when secondary operations costs (such as machining, finishing, or coating) are a factor.
There have been several studies indicating that interleaving stiff and brittle epoxy-based carbon-fiber-reinforced polymer laminates with flexible thermoplastic laminates can help to make highly toughened composites that show improved impact resistance. Another interesting aspect of such interleaved composites is that they are able to have shape memory behaviour without needing any shape-memory polymers or shape-memory alloys e.g. balsa plies interleaved with hot glue, aluminium plies interleaved with acrylic polymers or PVC and carbon-fiber-reinforced polymer laminates interleaved with polystyrene.
A sandwich-structured composite is a special class of composite material that is fabricated by attaching two thin but stiff skins to a lightweight but thick core. The core material is normally low strength material, but its higher thickness provides the sandwich composite with high bending stiffness with overall low density.
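The bending-stiffness gain from the sandwich construction comes from moving the skins away from the neutral axis; a back-of-the-envelope comparison with assumed dimensions:

```python
# two thin skins separated by a lightweight core, versus the same two
# skins bonded back-to-back with no core (per unit width of panel)
t = 1.0   # skin thickness, mm (assumed)
c = 18.0  # core thickness, mm (assumed)
E = 70.0  # skin modulus, GPa (assumed, e.g. aluminium)

d = c + t                       # distance between skin mid-planes
sandwich = E * t * d ** 2 / 2   # parallel-axis term of the two skins
solid = E * (2 * t) ** 3 / 12   # the two skins as one solid plate
print(round(sandwich / solid))  # the core multiplies bending stiffness ~270x
```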
Wood is a naturally occurring composite comprising cellulose fibres in a lignin and hemicellulose matrix. Engineered wood includes a wide variety of different products such as wood fibre board, plywood, oriented strand board, wood plastic composite (recycled wood fibre in polyethylene matrix), Pykrete (sawdust in ice matrix), plastic-impregnated or laminated paper or textiles, Arborite, Formica (plastic), and Micarta. Other engineered laminate composites, such as Mallite, use a central core of end grain balsa wood, bonded to surface skins of light alloy or GRP. These generate low-weight, high rigidity materials.
Particulate composites have particles as filler material dispersed in a matrix, which may be nonmetallic, such as glass or epoxy. An automobile tire is an example of a particulate composite.
Advanced diamond-like carbon (DLC) coated polymer composites have been reported where the coating increases the surface hydrophobicity, hardness and wear resistance.
Ferromagnetic composites include those consisting, for example, of a nanocrystalline filler of Fe-based powders in a polymer matrix. Amorphous and nanocrystalline powders obtained, for example, from metallic glasses can be used. Their use makes it possible to obtain ferromagnetic nanocomposites with controlled magnetic properties.
=== Products ===
Fibre-reinforced composite materials have gained popularity (despite their generally high cost) in high-performance products that need to be lightweight, yet strong enough to take harsh loading conditions such as aerospace components (tails, wings, fuselages, propellers), boat and scull hulls, bicycle frames, and racing car bodies. Other uses include fishing rods, storage tanks, swimming pool panels, and baseball bats. The Boeing 787 and Airbus A350 structures including the wings and fuselage are composed largely of composites. Composite materials are also becoming more common in the realm of orthopedic surgery, and it is the most common hockey stick material.
Carbon composite is a key material in today's launch vehicles and heat shields for the re-entry phase of spacecraft. It is widely used in solar panel substrates, antenna reflectors and yokes of spacecraft. It is also used in payload adapters, inter-stage structures and heat shields of launch vehicles. Furthermore, disk brake systems of airplanes and racing cars use carbon/carbon material, and composite material with carbon fibres and a silicon carbide matrix has been introduced in luxury vehicles and sports cars.
In 2006, a fibre-reinforced composite pool panel was introduced for in-ground swimming pools, residential as well as commercial, as a non-corrosive alternative to galvanized steel.
In 2007, an all-composite military Humvee was introduced by TPI Composites Inc and Armor Holdings Inc, the first all-composite military vehicle. By using composites the vehicle is lighter, allowing higher payloads. In 2008, carbon fibre and DuPont Kevlar (five times stronger than steel) were combined with enhanced thermoset resins to make military transit cases by ECS Composites creating 30-percent lighter cases with high strength.
Pipes and fittings for various purposes, such as transportation of potable water, fire-fighting, irrigation, seawater, desalinated water, chemical and industrial waste, and sewage, are now manufactured in glass-reinforced plastics.
Composite materials used in tensile structures for facade applications provide the advantage of being translucent. The woven base cloth combined with an appropriate coating allows better light transmission. This provides a very comfortable level of illumination compared to the full brightness outside.
The blades of wind turbines, in growing sizes on the order of 50 m in length, have been fabricated in composites for several years.
Double lower-leg amputees run on carbon-composite spring-like artificial feet as quickly as non-amputee athletes.
High-pressure gas cylinders for firefighters, typically of about 7–9 litres volume at 300 bar pressure, are nowadays constructed from carbon composite. Type-4 cylinders include metal only as the boss that carries the thread for screwing in the valve.
On 5 September 2019, HMD Global unveiled the Nokia 6.2 and Nokia 7.2 which are claimed to be using polymer composite for the frames.
== Overview ==
Composite materials are created from individual materials, known as constituent materials. There are two main categories of constituent material: the matrix (binder) and the reinforcement. At least a portion of each kind is needed. The matrix surrounds the reinforcement and maintains its relative positions, supporting it, while the reinforcement imparts its exceptional physical and mechanical properties to improve those of the matrix. This synergism produces mechanical properties unavailable from the individual constituent materials, while the wide variety of matrix and strengthening materials gives the designer of the product or structure the option to choose an optimum combination.
To shape an engineered composite, it must be formed. The reinforcement is placed onto the mould surface or into the mould cavity. Before or after this, the matrix is introduced to the reinforcement. The matrix then undergoes a melding event which sets the part shape. Depending upon the nature of the matrix, this melding event can happen in several ways, such as solidification from the melted state for a thermoplastic polymer matrix composite or chemical polymerization for a thermoset polymer matrix.
According to the requirements of the end-item design, various methods of moulding can be used. The natures of the chosen matrix and reinforcement are the key factors influencing the methodology, as is the gross quantity of material to be made. Vast quantities can justify high capital investment in rapid and automated manufacturing technology, whereas small production quantities are better served by cheaper capital investment with higher labour and tooling expenses at a correspondingly slower rate.
Many commercially produced composites use a polymer matrix material often called a resin solution. There are many different polymers available depending upon the starting raw ingredients. There are several broad categories, each with numerous variations. The most common are known as polyester, vinyl ester, epoxy, phenolic, polyimide, polyamide, polypropylene, PEEK, and others. The reinforcement materials are often fibres but also commonly ground minerals. The various methods described below have been developed to reduce the resin content (and correspondingly increase the fibre content) of the final product. As a rule of thumb, lay-up results in a product containing 60% resin and 40% fibre, whereas vacuum infusion gives a final product with 40% resin and 60% fibre content. The strength of the product is greatly dependent on this ratio.
Martin Hubbe and Lucian A Lucia consider wood to be a natural composite of cellulose fibres in a matrix of lignin.
== Cores in composites ==
Several layup designs of composite also involve a co-curing or post-curing of the prepreg with many other media, such as foam or honeycomb. Generally, this is known as a sandwich structure. This is a more general layup for the production of cowlings, doors, radomes or non-structural parts.
Open- and closed-cell-structured foams like polyvinyl chloride, polyurethane, polyethylene, or polystyrene foams, balsa wood, syntactic foams, and honeycombs are generally utilized core materials. Open- and closed-cell metal foam can also be utilized as core material. Recently, 3D graphene structures (also called graphene foam) have also been employed as core structures. A recent review by Khurram and Xu et al. provides a summary of the state-of-the-art techniques for fabrication of 3D graphene structures and examples of the use of these foam-like structures as cores for their respective polymer composites.
=== Semi-crystalline polymers ===
Although the two phases are chemically equivalent, semi-crystalline polymers can be described both quantitatively and qualitatively as composite materials. The crystalline portion has a higher elastic modulus and provides reinforcement for the less stiff, amorphous phase. Polymeric materials can range from 0% to 100% crystallinity (volume fraction), depending on molecular structure and thermal history. Different processing techniques can be employed to vary the percent crystallinity in these materials and thus their mechanical properties, as described in the physical properties section. This effect is seen in a variety of places, from industrial plastics like polyethylene shopping bags to spiders, which can produce silks with different mechanical properties. In many cases these materials act like particle composites with randomly dispersed crystals known as spherulites. However, they can also be engineered to be anisotropic and act more like fiber-reinforced composites. In the case of spider silk, the properties of the material can even be dependent on the size of the crystals, independent of the volume fraction. Somewhat ironically, single-component polymeric materials are some of the most easily tunable composite materials known.
== Methods of fabrication ==
Normally, the fabrication of a composite includes wetting, mixing or saturating the reinforcement with the matrix. The matrix is then induced to bind together (with heat or a chemical reaction) into a rigid structure. Usually, the operation is done in an open or closed forming mould. However, the order and ways of introducing the constituents vary considerably. Composites fabrication is achieved by a wide variety of methods, including advanced fibre placement (automated fibre placement), fibreglass spray lay-up process, filament winding, lanxide process, tailored fibre placement, tufting, and z-pinning.
=== Overview of mould ===
The reinforcing and matrix materials are merged, compacted, and cured (processed) within a mould to undergo a melding event. The part shape is fundamentally set after the melding event. However, under particular process conditions, it can deform. The melding event for a thermoset polymer matrix material is a curing reaction initiated by the application of extra heat or chemical reactivity such as an organic peroxide. The melding event for a thermoplastic polymeric matrix material is a solidification from the melted state. The melding event for a metal matrix material such as titanium foil is a fusing at high pressure and a temperature near the melting point.
For many moulding methods, it is convenient to refer to one mould piece as the "lower" mould and another mould piece as the "upper" mould. Lower and upper refer not to the mould's configuration in space, but to the different faces of the moulded panel. In this convention there is always a lower mould, and sometimes an upper mould. Part construction commences by applying materials to the lower mould. Lower mould and upper mould are more generalized descriptors than more common and specific terms such as male side, female side, a-side, b-side, tool side, bowl, hat, mandrel, etc. Continuous manufacturing utilizes a different nomenclature.
Usually, the moulded product is referred to as a panel. It can be referred to as casting for certain geometries and material combinations. It can be referred to as a profile for certain continuous processes. Some of the processes are autoclave moulding, vacuum bag moulding, pressure bag moulding, resin transfer moulding, and light resin transfer moulding.
=== Other fabrication methods ===
Other types of fabrication include casting, centrifugal casting, braiding (onto a former), continuous casting, filament winding, press moulding, transfer moulding, pultrusion moulding, and slip forming. There are also forming capabilities including CNC filament winding, vacuum infusion, wet lay-up, compression moulding, and thermoplastic moulding, to name a few. The use of curing ovens and paint booths is also required for some projects.
==== Finishing methods ====
The finishing of the composite parts is also crucial in the final design. Many of these finishes will involve rain-erosion coatings or polyurethane coatings.
=== Tooling ===
The mould and mould inserts are referred to as "tooling". The mould/tooling can be built from different materials. Tooling materials include aluminium, carbon fibre, invar, nickel, reinforced silicone rubber and steel. The tooling material selection is normally based on, but not limited to, the coefficient of thermal expansion, expected number of cycles, end item tolerance, desired or expected surface condition, cure method, glass transition temperature of the material being moulded, moulding method, matrix, cost, and other various considerations.
== Physical properties ==
Usually, the composite's physical properties are dependent on the direction of consideration, and so are anisotropic. This applies to many properties including elastic modulus, ultimate tensile strength, thermal conductivity, and electrical conductivity. The rule of mixtures and inverse rule of mixtures give upper and lower bounds for these properties. The real value will lie somewhere between these values and can depend on many factors including:
the orientation of interest
the length of the fibres
the accuracy of the fibre alignment
the properties of the matrix and fibres
delamination of the fibres and matrix
the inclusion of any impurities
For some material property {\displaystyle E}, the rule of mixtures states that the overall property in the direction parallel to the fibers could be as high as
{\displaystyle E_{\parallel }=fE_{f}+\left(1-f\right)E_{m}}
The inverse rule of mixtures states that in the direction perpendicular to the fibers, the elastic modulus of a composite could be as low as
{\displaystyle E_{\perp }=\left({\frac {f}{E_{f}}}+{\frac {1-f}{E_{m}}}\right)^{-1}.}
where
{\displaystyle f={\frac {V_{f}}{V_{f}+V_{m}}}} is the volume fraction of the fibers
{\displaystyle E_{\parallel }} is the material property of the composite parallel to the fibers
{\displaystyle E_{\perp }} is the material property of the composite perpendicular to the fibers
{\displaystyle E_{f}} is the material property of the fibers
{\displaystyle E_{m}} is the material property of the matrix
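As a quick numerical check of these two bounds, here is a minimal Python sketch; the glass/epoxy-like values (fibre modulus 70 GPa, matrix modulus 3 GPa, 60% fibre) are illustrative assumptions, not data from any specific material.

```python
# Upper (isostrain) and lower (isostress) bounds on a composite modulus.
# All numerical values below are illustrative, not from a datasheet.

def rule_of_mixtures(f, E_f, E_m):
    """Isostrain (parallel) bound: E = f*E_f + (1 - f)*E_m."""
    return f * E_f + (1.0 - f) * E_m

def inverse_rule_of_mixtures(f, E_f, E_m):
    """Isostress (perpendicular) bound: E = (f/E_f + (1 - f)/E_m)^-1."""
    return 1.0 / (f / E_f + (1.0 - f) / E_m)

# Hypothetical glass/epoxy values: E_f = 70 GPa, E_m = 3 GPa, 60% fibre.
f, E_f, E_m = 0.6, 70.0, 3.0
E_par = rule_of_mixtures(f, E_f, E_m)            # upper bound
E_perp = inverse_rule_of_mixtures(f, E_f, E_m)   # lower bound
```

A randomly oriented composite of the same constituents would have a modulus somewhere between `E_perp` and `E_par`, as the following paragraph notes.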
The majority of commercial composites are formed with random dispersion and orientation of the strengthening fibres, in which case the composite Young's modulus will fall between the isostrain and isostress bounds. However, in applications where the strength-to-weight ratio is engineered to be as high as possible (such as in the aerospace industry), fibre alignment may be tightly controlled.
In contrast to composites, isotropic materials (for example, aluminium or steel), in standard wrought forms, typically possess the same stiffness regardless of the directional orientation of the applied forces and/or moments. The relationship between forces/moments and strains/curvatures for an isotropic material can be described with the following material properties: Young's modulus, the shear modulus, and the Poisson's ratio, in relatively simple mathematical relationships. An anisotropic material requires the mathematics of a second-order tensor and up to 21 material property constants. For the special case of orthogonal isotropy, there are three distinct material property constants for each of Young's modulus, shear modulus and Poisson's ratio—a total of 9 constants to express the relationship between forces/moments and strains/curvatures.
Techniques that take advantage of the materials' anisotropic properties include mortise and tenon joints (in natural composites such as wood) and pi joints in synthetic composites.
== Mechanical properties of composites ==
=== Particle reinforcement ===
In general, particle reinforcement strengthens composites less than fiber reinforcement. It is used to enhance the stiffness of composites while also increasing strength and toughness. Because of their mechanical properties, particle-reinforced composites are used in applications in which wear resistance is required. For example, the hardness of cement can be increased drastically by reinforcing it with gravel particles. Particle reinforcement is a highly advantageous method of tuning the mechanical properties of materials, since it is easy to implement and low in cost.
The elastic modulus of particle-reinforced composites can be expressed as
{\displaystyle E_{c}=V_{m}E_{m}+K_{c}V_{p}E_{p}}
where E is the elastic modulus and V is the volume fraction. The subscripts c, p, and m indicate composite, particle, and matrix, respectively. {\displaystyle K_{c}} is a constant that can be found empirically.
Similarly, the tensile strength of particle-reinforced composites can be expressed as
{\displaystyle (T.S.)_{c}=V_{m}(T.S.)_{m}+K_{s}V_{p}(T.S.)_{p}}
where T.S. is the tensile strength, and {\displaystyle K_{s}} is a constant (not equal to {\displaystyle K_{c}}) that can be found empirically.
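As a sketch of how these empirical relations are used, the Python below evaluates both. The constants `K_c` and `K_s` and all property values are hypothetical, standing in for empirically fitted numbers.

```python
# Empirical particle-reinforcement relations from the text.
# K_c and K_s are hypothetical fitted constants; property values are
# illustrative (moduli in GPa, strengths in MPa), not published data.

def particle_modulus(V_p, E_p, E_m, K_c):
    """E_c = V_m*E_m + K_c*V_p*E_p, with V_m = 1 - V_p."""
    V_m = 1.0 - V_p
    return V_m * E_m + K_c * V_p * E_p

def particle_tensile_strength(V_p, TS_p, TS_m, K_s):
    """(T.S.)_c = V_m*(T.S.)_m + K_s*V_p*(T.S.)_p."""
    V_m = 1.0 - V_p
    return V_m * TS_m + K_s * V_p * TS_p

# 30% stiff ceramic particles (E = 400 GPa) in a softer matrix (E = 70 GPa).
E_c = particle_modulus(V_p=0.3, E_p=400.0, E_m=70.0, K_c=0.5)
TS_c = particle_tensile_strength(V_p=0.3, TS_p=1000.0, TS_m=60.0, K_s=0.4)
```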
=== Short fiber reinforcement (shear lag theory) ===
Short fibers are often cheaper or more convenient to manufacture than longer continuous fibers, but still provide better properties than particle reinforcement. A common example is carbon fiber reinforced 3D printing filaments, which use chopped short carbon fibers mixed into a matrix, typically PLA or PETG.
Shear lag theory uses the shear lag model to predict properties such as the Young's modulus of short fiber composites. The model assumes that load is transferred from the matrix to the fibers solely through the interfacial shear stresses {\displaystyle \tau _{i}} acting on the cylindrical interface. Shear lag theory then says that the rate of change of the axial stress in the fiber along its length is proportional to the ratio of the interfacial shear stress to the fiber radius {\displaystyle r_{0}}:
{\displaystyle {\frac {d\sigma _{f}}{dx}}=-{\frac {2\tau _{i}}{r_{0}}}}
This leads to the average fiber stress over the full length of the fibre being given by:
{\displaystyle \sigma _{f}=E_{f}\varepsilon _{1}\left(1-{\frac {\tanh(ns)}{ns}}\right)}
where
{\displaystyle \varepsilon _{1}} is the macroscopic strain in the composite
{\displaystyle s} is the fiber aspect ratio (length over diameter)
{\displaystyle n=\left({\frac {2E_{m}}{E_{f}(1+\nu _{m})\ln(1/f)}}\right)^{1/2}} is a dimensionless constant
{\displaystyle \nu _{m}} is the Poisson's ratio of the matrix
By assuming a uniform tensile strain, this results in:
{\displaystyle E_{1}={\frac {\sigma _{1}}{\varepsilon _{1}}}=fE_{f}\left(1-{\frac {\tanh(ns)}{ns}}\right)+(1-f)E_{m}}
As s becomes larger, this tends towards the rule of mixtures, which represents the Young's modulus parallel to continuous fibers.
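The shear-lag estimate is straightforward to evaluate. The Python sketch below uses illustrative values for a chopped-carbon-fibre material (fibre volume fraction `f`, moduli in GPa) and checks that a very large aspect ratio recovers the rule-of-mixtures limit:

```python
import math

# Shear-lag estimate of the modulus of a short-fibre composite.
# Input values are illustrative assumptions, not measured data.

def shear_lag_modulus(f, E_f, E_m, nu_m, s):
    """E_1 = f*E_f*(1 - tanh(ns)/ns) + (1 - f)*E_m, with n from the text."""
    n = math.sqrt(2.0 * E_m / (E_f * (1.0 + nu_m) * math.log(1.0 / f)))
    ns = n * s
    return f * E_f * (1.0 - math.tanh(ns) / ns) + (1.0 - f) * E_m

# Hypothetical chopped carbon fibre (E_f = 230 GPa) in epoxy (E_m = 3 GPa).
E_short = shear_lag_modulus(f=0.3, E_f=230.0, E_m=3.0, nu_m=0.35, s=50.0)

# Long-fibre (rule of mixtures) limit, approached as s grows.
E_long = 0.3 * 230.0 + 0.7 * 3.0
```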
=== Continuous fiber reinforcement ===
In general, continuous fiber reinforcement is implemented by incorporating a fiber as the strong phase into a weak phase, the matrix. Fibers are popular because materials with extraordinary strength can be obtained in fiber form. Non-metallic fibers usually show a very high strength-to-density ratio compared to metal fibers because of the covalent nature of their bonds. The most famous example is carbon fiber, which has many applications extending from sports gear to protective equipment to the space industry.
The stress on the composite can be expressed in terms of the volume fractions of the fiber and the matrix:
{\displaystyle \sigma _{c}=V_{f}\sigma _{f}+V_{m}\sigma _{m}}
where {\displaystyle \sigma } is the stress and V is the volume fraction. The subscripts c, f, and m indicate composite, fiber, and matrix, respectively.
Although the stress–strain behavior of fiber composites can only be determined by testing, there is an expected trend: three stages of the stress–strain curve. The first stage is the region of the stress–strain curve where both fiber and matrix are elastically deformed. This linearly elastic region can be expressed in the following form:
{\displaystyle \sigma _{c}=E_{c}\epsilon _{c}=\epsilon _{c}(V_{f}E_{f}+V_{m}E_{m})}
where {\displaystyle \sigma } is the stress, {\displaystyle \epsilon } is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate composite, fiber, and matrix, respectively.
After passing the elastic region of both fiber and matrix, the second region of the stress–strain curve can be observed. In the second region, the fiber is still elastically deformed while the matrix is plastically deformed, since the matrix is the weak phase. The instantaneous modulus can be determined using the slope of the stress–strain curve in this region. The relationship between stress and strain can be expressed as
{\displaystyle \sigma _{c}=V_{f}E_{f}\epsilon _{c}+V_{m}\sigma _{m}(\epsilon _{c})}
where {\displaystyle \sigma } is the stress, {\displaystyle \epsilon } is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate composite, fiber, and matrix, respectively. To find the modulus in the second region, the derivative of this equation can be used, since the slope of the curve is equal to the modulus:
{\displaystyle E_{c}'={\frac {d\sigma _{c}}{d\epsilon _{c}}}=V_{f}E_{f}+V_{m}\left({\frac {d\sigma _{m}}{d\epsilon _{c}}}\right)}
In most cases it can be assumed that {\displaystyle E_{c}'=V_{f}E_{f}}, since the second term is much less than the first one.
In reality, the derivative of stress with respect to strain does not always return the modulus, because of the binding interaction between the fiber and matrix. The strength of the interaction between these two phases can result in changes in the mechanical properties of the composite; the compatibility of the fiber and matrix is a measure of internal stress.
Covalently bonded high-strength fibers (e.g. carbon fibers) experience mostly elastic deformation before fracture, since plastic deformation by dislocation motion is limited. By contrast, metallic fibers have more scope to deform plastically, so their composites exhibit a third stage where both fiber and matrix are plastically deforming. Metallic fibers have many applications at cryogenic temperatures, which is one of the advantages of composites with metal fibers over nonmetallic ones. The stress in this region of the stress–strain curve can be expressed as
{\displaystyle \sigma _{c}(\epsilon _{c})=V_{f}\sigma _{f}(\epsilon _{c})+V_{m}\sigma _{m}(\epsilon _{c})}
where {\displaystyle \sigma } is the stress, {\displaystyle \epsilon } is the strain, and V is the volume fraction. The subscripts c, f, and m indicate composite, fiber, and matrix, respectively; {\displaystyle \sigma _{f}(\epsilon _{c})} and {\displaystyle \sigma _{m}(\epsilon _{c})} are the fiber and matrix flow stresses, respectively. Just after the third region, the composite exhibits necking. The necking strain of the composite lies between the necking strains of the fiber and the matrix, just like other mechanical properties of composites: the necking strain of the weak phase is delayed by the strong phase, and the amount of the delay depends upon the volume fraction of the strong phase.
Thus, the tensile strength of the composite can be expressed in terms of the volume fraction:
{\displaystyle (T.S.)_{c}=V_{f}(T.S.)_{f}+V_{m}\sigma _{m}(\epsilon _{m})}
where T.S. is the tensile strength, {\displaystyle \sigma } is the stress, {\displaystyle \epsilon } is the strain, and V is the volume fraction. The subscripts c, f, and m indicate composite, fiber, and matrix, respectively. The composite tensile strength can be expressed as
{\displaystyle (T.S.)_{c}=V_{m}(T.S.)_{m}}
for {\displaystyle V_{f}} less than or equal to {\displaystyle V_{c}} (an arbitrary critical value of volume fraction), and
{\displaystyle (T.S.)_{c}=V_{f}(T.S.)_{f}+V_{m}(\sigma _{m})}
for {\displaystyle V_{f}} greater than or equal to {\displaystyle V_{c}}.
The critical value of volume fraction can be expressed as
{\displaystyle V_{c}={\frac {[(T.S.)_{m}-\sigma _{m}(\epsilon _{f})]}{[(T.S.)_{f}+(T.S.)_{m}-\sigma _{m}(\epsilon _{f})]}}}
Evidently, the composite tensile strength can be higher than that of the matrix if {\displaystyle (T.S.)_{c}} is greater than {\displaystyle (T.S.)_{m}}.
Thus, the minimum volume fraction of the fiber can be expressed as
{\displaystyle V_{c}={\frac {[(T.S.)_{m}-\sigma _{m}(\epsilon _{f})]}{[(T.S.)_{f}-\sigma _{m}(\epsilon _{f})]}}}
Although this minimum value is very low in practice, it is very important to know since the reason for the incorporation of continuous fibers is to improve the mechanical properties of the materials/composites, and this value of volume fraction is the threshold of this improvement.
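These volume-fraction thresholds are easy to evaluate numerically. In the Python sketch below, the fibre strength, matrix strength, and the matrix flow stress at the fibre fracture strain are all hypothetical values (in MPa), chosen only to show that both thresholds come out small:

```python
# Critical and minimum fibre volume fractions for a continuous-fibre
# composite. All strength values below are hypothetical (MPa).

def critical_volume_fraction(TS_f, TS_m, sigma_m_at_ef):
    """V_c above which fibres strengthen rather than weaken the composite."""
    return (TS_m - sigma_m_at_ef) / (TS_f + TS_m - sigma_m_at_ef)

def minimum_volume_fraction(TS_f, TS_m, sigma_m_at_ef):
    """Smallest V_f for which (T.S.)_c exceeds (T.S.)_m."""
    return (TS_m - sigma_m_at_ef) / (TS_f - sigma_m_at_ef)

# Hypothetical: fibre strength 2000 MPa, matrix strength 80 MPa,
# matrix flow stress at the fibre fracture strain 50 MPa.
V_c = critical_volume_fraction(2000.0, 80.0, 50.0)
V_min = minimum_volume_fraction(2000.0, 80.0, 50.0)
```

With these illustrative numbers both fractions are below 2%, consistent with the remark that the minimum value is very low in practice.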
=== The effect of fiber orientation ===
==== Aligned fibers ====
A change in the angle between the applied stress and fiber orientation will affect the mechanical properties of fiber-reinforced composites, especially the tensile strength. This angle, {\displaystyle \theta }, can be used to predict the dominant tensile fracture mechanism.
At small angles, {\displaystyle \theta \approx 0^{\circ }}, the dominant fracture mechanism is the same as with load–fiber alignment: tensile fracture. The resolved force acting along the length of the fibers is reduced by a factor of {\displaystyle \cos \theta } from rotation, {\displaystyle F_{\mbox{res}}=F\cos \theta }. The resolved area on which the fiber experiences the force is increased by a factor of {\displaystyle \cos \theta } from rotation, {\displaystyle A_{\mbox{res}}=A_{0}/\cos \theta }. Taking the effective tensile strength to be {\displaystyle ({\mbox{T.S.}})_{\mbox{c}}=F_{\mbox{res}}/A_{\mbox{res}}} and the aligned tensile strength {\displaystyle \sigma _{\parallel }^{*}=F/A}:
{\displaystyle ({\mbox{T.S.}})_{\mbox{c}}\;({\mbox{longitudinal fracture}})={\frac {\sigma _{\parallel }^{*}}{\cos ^{2}\theta }}}
At moderate angles, {\displaystyle \theta \approx 45^{\circ }}, the material experiences shear failure. The effective force direction is reduced with respect to the aligned direction, {\displaystyle F_{\mbox{res}}=F\cos \theta }. The resolved area on which the force acts is {\displaystyle A_{\mbox{res}}=A_{m}/\sin \theta }. The resulting tensile strength depends on the shear strength of the matrix, {\displaystyle \tau _{m}}:
{\displaystyle ({\mbox{T.S.}})_{\mbox{c}}\;({\mbox{shear failure}})={\frac {\tau _{m}}{\sin {\theta }\cos {\theta }}}}
At extreme angles, {\displaystyle \theta \approx 90^{\circ }}, the dominant mode of failure is tensile fracture of the matrix in the perpendicular direction. As in the isostress case of layered composite materials, the strength in this direction is lower than in the aligned direction. The effective areas and forces act perpendicular to the aligned direction, so they both scale by {\displaystyle \sin \theta }. The resolved tensile strength is proportional to the transverse strength, {\displaystyle \sigma _{\perp }^{*}}:
{\displaystyle ({\mbox{T.S.}})_{\mbox{c}}\;({\mbox{transverse fracture}})={\frac {\sigma _{\perp }^{*}}{\sin ^{2}\theta }}}
The critical angles at which the dominant fracture mechanism changes can be calculated as
{\displaystyle \theta _{c_{1}}=\tan ^{-1}\left({\frac {\tau _{m}}{\sigma _{\parallel }^{*}}}\right)}
{\displaystyle \theta _{c_{2}}=\tan ^{-1}\left({\frac {\sigma _{\perp }^{*}}{\tau _{m}}}\right)}
where {\displaystyle \theta _{c_{1}}} is the critical angle between longitudinal fracture and shear failure, and {\displaystyle \theta _{c_{2}}} is the critical angle between shear failure and transverse fracture.
By ignoring length effects, this model is most accurate for continuous fibers and does not effectively capture the strength–orientation relationship of short-fiber-reinforced composites. Furthermore, most realistic systems do not experience the local maxima predicted at the critical angles. The Tsai–Hill criterion provides a more complete description of fiber composite tensile strength as a function of orientation angle by coupling the contributing yield stresses {\displaystyle \sigma _{\parallel }^{*}}, {\displaystyle \sigma _{\perp }^{*}}, and {\displaystyle \tau _{m}}:
{\displaystyle ({\mbox{T.S.}})_{\mbox{c}}\;({\mbox{Tsai-Hill}})={\bigg [}{\frac {\cos ^{4}\theta }{({\sigma _{\parallel }^{*}})^{2}}}+\cos ^{2}\theta \sin ^{2}\theta \left({\frac {1}{({\tau _{m}})^{2}}}-{\frac {1}{({\sigma _{\parallel }^{*}})^{2}}}\right)+{\frac {\sin ^{4}\theta }{({\sigma _{\perp }^{*}})^{2}}}{\bigg ]}^{-1/2}}
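The Tsai–Hill expression can be evaluated directly. In the Python sketch below, the aligned, transverse, and matrix shear strengths (1500, 40, and 60 MPa) are hypothetical values; at 0° and 90° the criterion reduces to the aligned and transverse strengths respectively:

```python
import math

# Tsai-Hill estimate of off-axis tensile strength.
# The three strength inputs are hypothetical values in MPa.

def tensile_strength_tsai_hill(theta, sigma_par, sigma_perp, tau_m):
    """T.S. at off-axis angle theta (radians) from the Tsai-Hill criterion."""
    c, s = math.cos(theta), math.sin(theta)
    inv_sq = (c**4 / sigma_par**2
              + c**2 * s**2 * (1.0 / tau_m**2 - 1.0 / sigma_par**2)
              + s**4 / sigma_perp**2)
    return inv_sq ** -0.5

sigma_par, sigma_perp, tau_m = 1500.0, 40.0, 60.0   # MPa, illustrative
ts_0 = tensile_strength_tsai_hill(0.0, sigma_par, sigma_perp, tau_m)
ts_90 = tensile_strength_tsai_hill(math.pi / 2, sigma_par, sigma_perp, tau_m)
```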
==== Randomly oriented fibers ====
Anisotropy in the tensile strength of fiber-reinforced composites can be removed by randomly orienting the fiber directions within the material. Random orientation sacrifices the ultimate strength in the aligned direction for an overall, isotropically strengthened material.
{\displaystyle E_{c}=KV_{f}E_{f}+V_{m}E_{m}}
where K is an empirically determined reinforcement factor, similar to the constant in the particle reinforcement equation. For fibers with randomly distributed orientations in a plane, {\displaystyle K\approx 0.38}, and for a random distribution in 3D, {\displaystyle K\approx 0.20}.
=== Stiffness and Compliance Elasticity ===
Composite materials are generally anisotropic, and in many cases orthotropic. Voigt notation can be used to reduce the rank of the stress and strain tensors so that the stiffness {\displaystyle C} (often also referred to as {\displaystyle Q}) and compliance {\displaystyle S} can be written as matrices:
{\displaystyle {\begin{bmatrix}\sigma _{1}\\\sigma _{2}\\\sigma _{3}\\\sigma _{4}\\\sigma _{5}\\\sigma _{6}\end{bmatrix}}={\begin{bmatrix}C_{11}&C_{12}&C_{13}&C_{14}&C_{15}&C_{16}\\C_{12}&C_{22}&C_{23}&C_{24}&C_{25}&C_{26}\\C_{13}&C_{23}&C_{33}&C_{34}&C_{35}&C_{36}\\C_{14}&C_{24}&C_{34}&C_{44}&C_{45}&C_{46}\\C_{15}&C_{25}&C_{35}&C_{45}&C_{55}&C_{56}\\C_{16}&C_{26}&C_{36}&C_{46}&C_{56}&C_{66}\end{bmatrix}}{\begin{bmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\varepsilon _{3}\\\varepsilon _{4}\\\varepsilon _{5}\\\varepsilon _{6}\end{bmatrix}}}
and
{\displaystyle {\begin{bmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\varepsilon _{3}\\\varepsilon _{4}\\\varepsilon _{5}\\\varepsilon _{6}\end{bmatrix}}={\begin{bmatrix}S_{11}&S_{12}&S_{13}&S_{14}&S_{15}&S_{16}\\S_{12}&S_{22}&S_{23}&S_{24}&S_{25}&S_{26}\\S_{13}&S_{23}&S_{33}&S_{34}&S_{35}&S_{36}\\S_{14}&S_{24}&S_{34}&S_{44}&S_{45}&S_{46}\\S_{15}&S_{25}&S_{35}&S_{45}&S_{55}&S_{56}\\S_{16}&S_{26}&S_{36}&S_{46}&S_{56}&S_{66}\end{bmatrix}}{\begin{bmatrix}\sigma _{1}\\\sigma _{2}\\\sigma _{3}\\\sigma _{4}\\\sigma _{5}\\\sigma _{6}\end{bmatrix}}}
When considering each ply individually, it is assumed that the plies can be treated as thin laminae, so that out-of-plane stresses and strains are negligible. That is, {\displaystyle \sigma _{3}=\sigma _{4}=\sigma _{5}=0} and {\displaystyle \varepsilon _{4}=\varepsilon _{5}=0}. This allows the stiffness and compliance matrices to be reduced to 3×3 matrices as follows:
{\displaystyle C={\begin{bmatrix}{\tfrac {E_{\rm {1}}}{1-{\nu _{\rm {12}}}{\nu _{\rm {21}}}}}&{\tfrac {E_{\rm {2}}{\nu _{\rm {12}}}}{1-{\nu _{\rm {12}}}{\nu _{\rm {21}}}}}&0\\{\tfrac {E_{\rm {2}}{\nu _{\rm {12}}}}{1-{\nu _{\rm {12}}}{\nu _{\rm {21}}}}}&{\tfrac {E_{\rm {2}}}{1-{\nu _{\rm {12}}}{\nu _{\rm {21}}}}}&0\\0&0&G_{\rm {12}}\\\end{bmatrix}}\quad }
and
{\displaystyle \quad S={\begin{bmatrix}{\tfrac {1}{E_{\rm {1}}}}&-{\tfrac {\nu _{\rm {21}}}{E_{\rm {2}}}}&0\\-{\tfrac {\nu _{\rm {12}}}{E_{\rm {1}}}}&{\tfrac {1}{E_{\rm {2}}}}&0\\0&0&{\tfrac {1}{G_{\rm {12}}}}\\\end{bmatrix}}}
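These reduced matrices can be assembled directly from the four in-plane elastic constants. A minimal Python sketch, where the carbon/epoxy-like values E1 = 140 GPa, E2 = 10 GPa, ν12 = 0.3, G12 = 5 GPa are illustrative assumptions; ν21 is obtained from the reciprocity relation ν21 = ν12·E2/E1, which also makes the compliance matrix symmetric:

```python
# Reduced (plane-stress) ply stiffness and compliance matrices.
# Elastic constants below are illustrative, not from a datasheet.

def ply_compliance(E1, E2, nu12, G12):
    """3x3 compliance matrix S of a single orthotropic ply."""
    nu21 = nu12 * E2 / E1          # reciprocity relation
    return [[1.0 / E1, -nu21 / E2, 0.0],
            [-nu12 / E1, 1.0 / E2, 0.0],
            [0.0, 0.0, 1.0 / G12]]

def ply_stiffness(E1, E2, nu12, G12):
    """3x3 stiffness matrix C of a single orthotropic ply."""
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    return [[E1 / d, nu12 * E2 / d, 0.0],
            [nu12 * E2 / d, E2 / d, 0.0],
            [0.0, 0.0, G12]]

C = ply_stiffness(140.0, 10.0, 0.3, 5.0)
S = ply_compliance(140.0, 10.0, 0.3, 5.0)
```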
For a fiber-reinforced composite, the fiber orientation within the material affects its anisotropic properties. Characterization techniques such as tensile testing measure material properties in the sample (1-2) coordinate system, so the tensors above express the stress-strain relationship in that system, while the known material properties are given in the principal (x-y) coordinate system of the material. Transforming the tensors between the two coordinate systems identifies the material properties of the tested sample. The transformation matrix for a rotation of
{\displaystyle \theta }
degrees is
{\displaystyle T(\theta )_{\epsilon }={\begin{bmatrix}\cos ^{2}\theta &\sin ^{2}\theta &\cos \theta \sin \theta \\\sin ^{2}\theta &\cos ^{2}\theta &-\cos \theta \sin \theta \\-2\cos \theta \sin \theta &2\cos \theta \sin \theta &\cos ^{2}\theta -\sin ^{2}\theta \end{bmatrix}}}
for
{\displaystyle {\begin{bmatrix}{\acute {\epsilon }}\end{bmatrix}}=T(\theta )_{\epsilon }{\begin{bmatrix}\epsilon \end{bmatrix}}}
{\displaystyle T(\theta )_{\sigma }={\begin{bmatrix}\cos ^{2}\theta &\sin ^{2}\theta &2\cos \theta \sin \theta \\\sin ^{2}\theta &\cos ^{2}\theta &-2\cos \theta \sin \theta \\-\cos \theta \sin \theta &\cos \theta \sin \theta &\cos ^{2}\theta -\sin ^{2}\theta \end{bmatrix}}}
for
{\displaystyle {\begin{bmatrix}{\acute {\sigma }}\end{bmatrix}}=T(\theta )_{\sigma }{\begin{bmatrix}\sigma \end{bmatrix}}}
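These two transformation matrices can be sketched numerically (a minimal consistency check, not part of the source derivation): rotating by −θ undoes a rotation by +θ, and the strain and stress transforms are energy-consistent, since σ′·ε′ must equal σ·ε.

```python
import numpy as np

def T_sigma(theta):
    """Stress transformation matrix for a rotation of theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c*c,  s*s,  2*c*s],
                     [s*s,  c*c, -2*c*s],
                     [-c*s, c*s,  c*c - s*s]])

def T_epsilon(theta):
    """Engineering-strain transformation matrix for a rotation of theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c*c,    s*s,   c*s],
                     [s*s,    c*c,  -c*s],
                     [-2*c*s, 2*c*s, c*c - s*s]])

theta = np.deg2rad(30.0)

# A rotation by -theta undoes a rotation by +theta
assert np.allclose(T_sigma(theta) @ T_sigma(-theta), np.eye(3))

# Energy consistency (sigma'.eps' == sigma.eps) requires T_sigma^T T_epsilon = I
assert np.allclose(T_sigma(theta).T @ T_epsilon(theta), np.eye(3))
```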
=== Types of fibers and mechanical properties ===
The most common types of fibers used in industry are glass fibers, carbon fibers, and Kevlar, owing to their ease of production and availability. Knowing their mechanical properties is important, so a table of these properties is given below, with S97 steel included for comparison. The angle of fiber orientation is very important because of the anisotropy of fiber composites (see the section "Physical properties" for a more detailed explanation). The mechanical properties of the composites can be tested using standard mechanical testing methods by positioning the samples at various angles (the standard angles are 0°, 45°, and 90°) with respect to the orientation of fibers within the composites. In general, 0° axial alignment makes composites resistant to longitudinal bending and axial tension/compression, 90° hoop alignment is used to obtain resistance to internal/external pressure, and ±45° is the ideal choice to obtain resistance against pure torsion.
==== Mechanical properties of fiber composite materials ====
==== Carbon fiber & fiberglass composites vs. aluminum alloy and steel ====
Although strength and stiffness of steel and aluminum alloys are comparable to fiber composites, specific strength and stiffness of composites (i.e. in relation to their weight) are significantly higher.
=== Failure ===
Shock, impact at varying speeds, or repeated cyclic stresses can cause the laminate to separate at the interface between two layers, a condition known as delamination. Individual fibres can also separate from the matrix, a mode known as fibre pull-out.
Composites can fail on the macroscopic or microscopic scale. Compression failures can occur at both the macro scale and at the level of each individual reinforcing fibre, through compression buckling. Tension failures can be net-section failures of the part or degradation of the composite at a microscopic scale, where one or more of the layers fail in tension of the matrix or through failure of the bond between the matrix and fibres.
Some composites are brittle and possess little reserve strength beyond the initial onset of failure, while others may undergo large deformations and retain reserve energy-absorbing capacity past the onset of damage. The distinctions in fibres and matrices that are available and the mixtures that can be made with blends leave a very broad range of properties that can be designed into a composite structure. The most famous failure of a brittle ceramic matrix composite occurred when the carbon-carbon composite tile on the leading edge of the wing of the Space Shuttle Columbia fractured when impacted during take-off. This led to the catastrophic break-up of the vehicle when it re-entered the Earth's atmosphere on 1 February 2003.
Composites have relatively poor bearing strength compared to metals.
Another failure mode is fiber tensile fracture, which becomes likely when the fibers are aligned with the loading direction and the tensile strength of the fibers exceeds that of the matrix. When a fiber has some angle of misorientation θ, several fracture modes are possible. For small values of θ, the stress required to initiate fracture is increased by a factor of (cos θ)⁻², because the cross-sectional area over which the load acts increases (A/cos θ) while the force resolved along the fiber decreases (F cos θ), leading to a composite tensile strength of σparallel/cos²θ, where σparallel is the tensile strength of the composite with fibers aligned parallel to the applied force.
Intermediate angles of misorientation θ lead to matrix shear failure. Again the cross-sectional area is modified, but since shear stress is now the driving force for failure, the area of the matrix plane parallel to the fibers is of interest; it increases by a factor of 1/sin θ. Similarly, the force resolved parallel to this plane decreases (F cos θ), leading to a total tensile strength of τmy/(sin θ cos θ), where τmy is the matrix shear strength.
Finally, for large values of θ (near π/2), transverse matrix failure is the most likely to occur, since the fibers no longer carry the majority of the load. Still, the tensile strength will be greater than for the purely perpendicular orientation, since the force perpendicular to the fibers decreases by a factor of sin θ while the area over which it acts increases by a factor of 1/sin θ, producing a composite tensile strength of σperp/sin²θ, where σperp is the tensile strength of the composite with fibers aligned perpendicular to the applied force.
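The three misorientation regimes above can be combined into a single strength estimate: at any angle θ, the governing mode is the one requiring the lowest applied stress. A sketch with hypothetical strength values (σparallel, τmy, and σperp here are placeholders, not measured data):

```python
import numpy as np

# Hypothetical strengths in MPa, for illustration only
sigma_par  = 1500.0   # tensile strength, fibers parallel to load
tau_my     = 60.0     # matrix shear strength
sigma_perp = 40.0     # tensile strength, fibers perpendicular to load

def off_axis_strength(theta):
    """Tensile strength of a unidirectional composite loaded at angle theta
    (radians) to the fibers: the mode needing the lowest stress governs."""
    c, s = np.cos(theta), np.sin(theta)
    fiber      = sigma_par / c**2 if c > 1e-12 else np.inf       # fiber fracture
    shear      = tau_my / (s * c) if s * c > 1e-12 else np.inf   # matrix shear
    transverse = sigma_perp / s**2 if s > 1e-12 else np.inf      # transverse failure
    return min(fiber, shear, transverse)

# Fiber fracture governs at 0 deg; transverse matrix failure governs at 90 deg
assert np.isclose(off_axis_strength(0.0), sigma_par)
assert np.isclose(off_axis_strength(np.pi / 2), sigma_perp)
```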
=== Testing ===
Composites are tested before and after construction to assist in predicting and preventing failures. Pre-construction testing may adopt finite element analysis (FEA) for ply-by-ply analysis of curved surfaces and predicting wrinkling, crimping and dimpling of composites. Materials may be tested during manufacturing and after construction by various non-destructive methods including ultrasonic, thermography, shearography and X-ray radiography, and laser bond inspection for NDT of relative bond strength integrity in a localized area.
== See also ==
3D composites
Aluminium composite panel
American Composites Manufacturers Association
Chemical vapour infiltration
Composite laminate
Discontinuous aligned composite
Epoxy granite
Hybrid material
Lay-up process
Nanocomposite
Pykrete
Rule of mixtures
Scaled Composites
Smart material
Smart Materials and Structures
Void (composites)
== References ==
== Further reading ==
== External links ==
cdmHUB – the Global Composites Community
Distance learning course in polymers and composites
OptiDAT composite material database Archived 2013-11-04 at the Wayback Machine
Concorde | Concorde () is a retired Anglo-French supersonic airliner jointly developed and manufactured by Sud Aviation and the British Aircraft Corporation (BAC).
Studies started in 1954, and France and the United Kingdom signed a treaty establishing the development project on 29 November 1962, as the programme cost was estimated at £70 million (£1.68 billion in 2023).
Construction of the six prototypes began in February 1965, and the first flight took off from Toulouse on 2 March 1969.
The market was predicted for 350 aircraft, and the manufacturers received up to 100 option orders from many major airlines.
On 9 October 1975, Concorde received its French certificate of airworthiness, followed by UK certification from the CAA on 5 December.
Concorde is a tailless aircraft design with a narrow fuselage permitting four-abreast seating for 92 to 128 passengers, an ogival delta wing, and a droop nose for landing visibility.
It is powered by four Rolls-Royce/Snecma Olympus 593 turbojets with variable engine intake ramps, and reheat for take-off and acceleration to supersonic speed.
Constructed out of aluminium, it was the first airliner to have analogue fly-by-wire flight controls.
The airliner had transatlantic range while supercruising at twice the speed of sound for 75% of the distance.
Delays and cost overruns increased the programme cost to £1.5–2.1 billion in 1976, (£11–16 billion in 2023).
Concorde entered service on 21 January 1976 with Air France from Paris-Roissy and British Airways from London Heathrow.
Transatlantic flights were the main market, to Washington Dulles from 24 May, and to New York JFK from 17 October 1977.
Air France and British Airways remained the sole customers with seven airframes each, for a total production of 20.
Supersonic flight more than halved travel times, but sonic booms over the ground limited it to transoceanic flights only.
Its only competitor was the Tupolev Tu-144, carrying passengers from November 1977 until a May 1978 crash, while a potential competitor, the Boeing 2707, was cancelled in 1971 before any prototypes were built.
On 25 July 2000, Air France Flight 4590 crashed shortly after take-off with all 109 occupants and four on the ground killed. This was the only fatal incident involving Concorde; commercial service was suspended until November 2001. The surviving aircraft were retired in 2003, 27 years after commercial operations had begun. All but two of the 20 aircraft built have been preserved and are on display across Europe and North America.
== Development ==
=== Early studies ===
In the early 1950s, Arnold Hall, director of the Royal Aircraft Establishment (RAE), asked Morien Morgan to form a committee to study supersonic transport (SST). The group met in February 1954 and delivered their first report in April 1955. Robert T. Jones' work at NACA had demonstrated that the drag at supersonic speeds was strongly related to the span of the wing. This led to the use of short-span, thin, trapezoidal wings such as those seen on the control surfaces of many missiles, or aircraft such as the Lockheed F-104 Starfighter interceptor or the planned Avro 730 strategic bomber that the team studied. The team outlined a baseline configuration that resembled an enlarged Avro 730.
This short wingspan produced little lift at low speed, resulting in long take-off runs and high landing speeds. In an SST design, this would have required enormous engine power to lift off from existing runways, and to provide the fuel needed, "some horribly large aeroplanes" resulted. Based on this, the group considered the concept of an SST infeasible, and instead suggested continued low-level studies into supersonic aerodynamics.
=== Slender deltas ===
Soon after, Johanna Weber and Dietrich Küchemann at the RAE published a series of reports on a new wing planform, known in the UK as the "slender delta". The team, including Eric Maskell whose report "Flow Separation in Three Dimensions" contributed to an understanding of separated flow, worked with the fact that delta wings can produce strong vortices on their upper surfaces at high angles of attack. The vortex will lower the air pressure and cause lift. This had been noticed by Chuck Yeager in the Convair XF-92, but its qualities had not been fully appreciated. Weber suggested that the effect could be used to improve low-speed performance.
Küchemann and Weber's papers changed the entire nature of supersonic design. The delta had already been used on aircraft, but these designs used planforms that were not much different from a swept wing of the same span. Weber noted that the lift from the vortex was increased by the length of the wing it had to operate over, which suggested that the effect would be maximised by extending the wing along the fuselage as far as possible. Such a layout would still have good supersonic performance, but also have reasonable take-off and landing speeds using vortex generation. The aircraft would have to take off and land very "nose high" to generate the required vortex lift, which led to questions about the low-speed handling qualities of such a design.
Küchemann presented the idea at a meeting where Morgan was also present. Test pilot Eric Brown recalls Morgan's reaction to the presentation, saying that he immediately seized on it as the solution to the SST problem. Brown considers this moment as being the birth of the Concorde project.
=== Supersonic Transport Aircraft Committee ===
On 1 October 1956, the Ministry of Supply asked Morgan to form a new study group, the Supersonic Transport Aircraft Committee (STAC) (sometimes referred to as the Supersonic Transport Advisory Committee), to develop a practical SST design and find industry partners to build it. At the first meeting, on 5 November 1956, the decision was made to fund the development of a test-bed aircraft to examine the low-speed performance of the slender delta, a contract that eventually produced the Handley Page HP.115. This aircraft demonstrated safe control at speeds as low as 69 mph (111 km/h), about one-third that of the F-104 Starfighter.
STAC stated that an SST would have economic performance similar to existing subsonic types. Lift is not generated the same way at supersonic and subsonic speeds, with the lift-to-drag ratio for supersonic designs being about half that of subsonic designs. The aircraft would need more thrust than a subsonic design of the same size. Although they would use more fuel in cruise, they would be able to fly more revenue-earning flights in a given time, so fewer aircraft would be needed to service a particular route. This would remain economically advantageous as long as fuel represented a small percentage of operational costs.
STAC suggested that two designs naturally fell out of their work, a transatlantic model flying at about Mach 2, and a shorter-range version flying at Mach 1.2. Morgan suggested that a 150-passenger transatlantic SST would cost about £75 to £90 million to develop, and be in service in 1970. The smaller 100-passenger short-range version would cost perhaps £50 to £80 million, and be ready for service in 1968. To meet this schedule, development would need to begin in 1960, with production contracts let in 1962. Morgan suggested that the US was already involved in a similar project, and that if the UK failed to respond, it would be locked out of an airliner market that he believed would be dominated by SST aircraft.
In 1959, a study contract was awarded to Hawker Siddeley and Bristol for preliminary designs based on the slender delta, which developed as the HSA.1000 and Bristol 198. Armstrong Whitworth also responded with an internal design, the M-Wing, for the lower-speed, shorter-range category. Both the STAC group and the government were looking for partners to develop the designs. In September 1959, Hawker approached Lockheed, and after the creation of British Aircraft Corporation in 1960, the former Bristol team immediately started talks with Boeing, General Dynamics, Douglas Aircraft, and Sud Aviation.
=== Ogee planform selected ===
Küchemann and others at the RAE continued their work on the slender delta throughout this period, considering three basic shapes - the classic straight-edge delta, the "gothic delta" that was rounded outward to appear like a gothic arch, and the "ogival wing" that was compound-rounded into the shape of an ogee. Each of these planforms had advantages and disadvantages. As they worked with these shapes, a practical concern grew to become so important that it forced selection of one of these designs.
Generally, the wing's centre of pressure (CP, or "lift point") should be close to the aircraft's centre of gravity (CG, or "balance point") to reduce the amount of control force required to pitch the aircraft. As the aircraft layout changes during the design phase, the CG commonly moves fore or aft. With a normal wing design, this can be addressed by moving the wing slightly fore or aft to account for this. With a delta wing running most of the length of the fuselage, this was no longer easy; moving the wing would leave it in front of the nose or behind the tail. Studying the various layouts in terms of CG changes, both during design and changes due to fuel use during flight, the ogee planform immediately came to the fore.
To test the new wing, NASA assisted the team by modifying a Douglas F5D Skylancer to mimic the wing selection. In 1965, the NASA test aircraft successfully tested the wing, and found that it reduced landing speeds noticeably over the standard delta wing. NASA also ran simulations at Ames that showed the aircraft would exhibit a sudden change in pitch when entering ground effect. Ames test pilots later participated in a joint cooperative test with the French and British test pilots and found that the simulations had been correct, and this information was added to pilot training.
=== Partnership with Sud Aviation ===
France had its own SST plans. In the late 1950s, the government requested designs from the government-owned Sud Aviation and Nord Aviation, as well as Dassault. All three returned designs based on Küchemann and Weber's slender delta; Nord suggested a ramjet-powered design flying at Mach 3, and the other two were jet-powered Mach 2 designs that were similar to each other. Of the three, the Sud Aviation Super-Caravelle won the design contest with a medium-range design deliberately sized to avoid competition with transatlantic US designs they assumed were already on the drawing board.
As soon as the design was complete, in April 1960, Pierre Satre, the company's technical director, was sent to Bristol to discuss a partnership. Bristol was surprised to find that the Sud team had designed a similar aircraft after considering the SST problem and coming to the same conclusions as the Bristol and STAC teams in terms of economics. It was later revealed that the original STAC report, marked "For UK Eyes Only", had secretly been passed to France to win political favour. Sud made minor changes to the paper and presented it as their own work.
France had no modern large jet engines and had already decided to buy a British design (as they had on the earlier subsonic Caravelle). As neither company had experience in the use of heat-resistant metals for airframes, a maximum speed of around Mach 2 was selected so aluminium could be used – above this speed, the friction with the air heats the metal so much that it begins to soften. This lower speed would also speed development and allow their design to fly before the Americans. Everyone involved agreed that Küchemann's ogee-shaped wing was the right one.
The British team was still focused on a 150-passenger design serving transatlantic routes, while France was deliberately avoiding these. Common components could be used in both designs, with the shorter-range version using a clipped fuselage and four engines, and the longer one a stretched fuselage and six engines, leaving only the wing to be extensively redesigned. The teams continued to meet in 1961, and by this time it was clear that the two aircraft would be very similar in spite of different ranges and seating arrangements. A single design emerged that differed mainly in fuel load. More-powerful Bristol Siddeley Olympus engines, being developed for the TSR-2, allowed either design to be powered by only four engines.
=== Cabinet response, treaty ===
While the development teams met, the French Minister of Public Works and Transport Robert Buron was meeting with the UK Minister of Aviation Peter Thorneycroft, and Thorneycroft told the cabinet that France was much more serious about a partnership than any of the US companies. The various US companies had proved uninterested, likely due to the belief that the government would be funding development and would frown on any partnership with a European company, and the risk of "giving away" US technological leadership to a European partner.
When the STAC plans were presented to the UK cabinet, the economic considerations were considered highly questionable, especially as these were based on development costs, now estimated to be £150 million (US$420 million), which were repeatedly overrun in the industry. The Treasury Ministry presented a negative view, suggesting that the project in no way would have any positive financial returns for the government, especially because "the industry's past record of over-optimistic estimating (including the recent history of the TSR.2) suggests that it would be prudent to consider" the cost "to turn out much too low."
This led to an independent review of the project by the Committee on Civil Scientific Research and Development, which met on the topic between July and September 1962. The committee rejected the economic arguments, including considerations of supporting the industry made by Thorneycroft. Their report in October stated that any direct positive economic outcome would be unlikely, but that the project should still be considered because everyone else was going supersonic, and they were concerned they would be locked out of future markets. The project apparently would not be likely to significantly affect other, more important, research efforts.
At the time, the UK was pressing for admission to the European Economic Community, and this became the main rationale for moving ahead with the aircraft. The development project was negotiated as an international treaty between the two countries rather than a commercial agreement between companies, and included a clause, originally asked for by the UK government, imposing heavy penalties for cancellation. This treaty was signed on 29 November 1962. Charles de Gaulle vetoed the UK's entry into the European Community in a speech on 25 January 1963.
=== Naming ===
At Charles de Gaulle's January 1963 press conference, the aircraft was first called "Concorde". The name was suggested by the 18-year-old son of F.G. Clark, the publicity manager at BAC's Filton plant. Reflecting the treaty between the British and French governments that led to Concorde's construction, the name Concorde is from the French word concorde (IPA: [kɔ̃kɔʁd]), which has an English equivalent, concord. Both words mean agreement, harmony, or union. The name was changed to Concord by Harold Macmillan in response to a perceived slight by de Gaulle. At the French roll-out in Toulouse in late 1967, the British Minister of Technology, Tony Benn, announced that he would change the spelling back to Concorde. This created a nationalist uproar that died down when Benn stated that the suffixed "e" represented "Excellence, England, Europe, and Entente (Cordiale)". In his memoirs, he recounted a letter from a Scotsman claiming, "you talk about 'E' for England, but part of it is made in Scotland." Given Scotland's contribution of providing the nose cone for the aircraft, Benn replied, "it was also 'E' for 'Écosse' (the French name for Scotland) – and I might have added 'e' for extravagance and 'e' for escalation as well!"
In common usage in the United Kingdom, the type is known as "Concorde" without an article, rather than "the Concorde" or "a Concorde".
=== Sales efforts ===
Advertisements for Concorde during the late 1960s placed in publications such as Aviation Week & Space Technology predicted a market for 350 aircraft by 1980. The new consortium intended to produce one long-range and one short-range version, but prospective customers showed no interest in the short-range version, thus it was later dropped.
Concorde's costs spiralled during development to more than six times the original projections, arriving at a unit cost of £23 million in 1977 (equivalent to £180.49 million in 2023). Its sonic boom made travelling supersonically over land impossible without causing complaints from citizens. World events also dampened Concorde sales prospects; the 1973–74 stock market crash and the 1973 oil crisis had made airlines cautious about aircraft with high fuel consumption, and new wide-body aircraft, such as the Boeing 747, had recently made subsonic aircraft significantly more efficient and presented a low-risk option for airlines. While carrying a full load, Concorde achieved 15.8 passenger miles per gallon of fuel, while the Boeing 707 reached 33.3 pm/g, the Boeing 747 46.4 pm/g, and the McDonnell Douglas DC-10 53.6 pm/g. A trend towards cheaper airline tickets also caused airlines such as Qantas to question Concorde's market suitability. During the early 2000s, Flight International described Concorde as "one of aerospace's most ambitious but commercially flawed projects".
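As a quick sanity check of the fuel-economy comparison, using only the passenger-mile-per-gallon figures quoted above:

```python
# Passenger miles per US gallon, as quoted in the text
pmg = {"Concorde": 15.8, "Boeing 707": 33.3, "Boeing 747": 46.4, "DC-10": 53.6}

# Fuel burned per passenger-mile is the reciprocal of pm/g
gal_per_pm = {aircraft: 1.0 / v for aircraft, v in pmg.items()}

# Concorde burned roughly three times the fuel of a 747 per passenger-mile
ratio = gal_per_pm["Concorde"] / gal_per_pm["Boeing 747"]
assert 2.9 < ratio < 3.0
```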
The consortium received orders (non-binding options) for more than 100 of the long-range version from the major airlines of the day: Pan Am, BOAC, and Air France were the launch customers, with six aircraft each. Other airlines in the order book included Panair do Brasil, Continental Airlines, Japan Airlines, Lufthansa, American Airlines, United Airlines, Air India, Air Canada, Braniff, Singapore Airlines, Iran Air, Olympic Airways, Qantas, CAAC Airlines, Middle East Airlines, and TWA. At the time of the first flight, the options list contained 74 options from 16 airlines.
=== Testing ===
The design work was supported by a research programme studying the flight characteristics of low-aspect-ratio delta wings. A supersonic Fairey Delta 2 was modified to carry the ogee planform and, renamed the BAC 221, was used for tests of the high-speed flight envelope; the Handley Page HP.115 also provided valuable information on low-speed performance.
Construction of two prototypes began in February 1965: 001, built by Aérospatiale at Toulouse, and 002, by BAC at Filton, Bristol. 001 made its first test flight from Toulouse on 2 March 1969, piloted by André Turcat, and first went supersonic on 1 October. The first UK-built Concorde flew from Filton to RAF Fairford on 9 April 1969, piloted by Brian Trubshaw. Both prototypes were presented to the public on 7–8 June 1969 at the Paris Air Show. As the flight programme progressed, 001 embarked on a sales and demonstration tour on 4 September 1971, which was also the first transatlantic crossing of Concorde. Concorde 002 followed on 2 June 1972 with a tour of the Middle and Far East. Concorde 002 made the first visit to the United States in 1973, landing at Dallas/Fort Worth Regional Airport to mark the airport's opening.
Concorde had initially held a great deal of customer interest, but the project was hit by order cancellations. The Paris Le Bourget air show crash of the competing Soviet Tupolev Tu-144 had shocked potential buyers, and public concern over the environmental issues of supersonic aircraft – the sonic boom, take-off noise and pollution – had produced a change in the public opinion of SSTs. By 1976 the remaining buyers were from four countries: Britain, France, China, and Iran. Only Air France and British Airways (the successor to BOAC) took up their orders, with the two governments taking a cut of any profits.
The US government cut federal funding for the Boeing 2707, its supersonic transport programme, in 1971; Boeing did not complete its two 2707 prototypes. The US, India, and Malaysia all ruled out Concorde supersonic flights over the noise concern, although some of these restrictions were later relaxed. Professor Douglas Ross characterised restrictions placed upon Concorde operations by President Jimmy Carter's administration as having been an act of protectionism of American aircraft manufacturers.
=== Programme cost ===
The original programme cost estimate was £70 million in 1962 (£1.68 billion in 2023). After cost overruns and delays, the programme eventually cost between £1.5 and £2.1 billion in 1976 (£11.4–16 billion in 2023). This cost was the main reason the production run was much smaller than expected.
== Design ==
=== General features ===
Concorde is an ogival delta winged aircraft with four Olympus engines based on those employed in the RAF's Avro Vulcan strategic bomber. It has an unusual tailless configuration for a commercial aircraft, as does the Tupolev Tu-144. Concorde was the first airliner to have a fly-by-wire flight-control system (in this case, analogue); the avionics system Concorde used was unique because it was the first commercial aircraft to employ hybrid circuits. The principal designer for the project was Pierre Satre, with Sir Archibald Russell as his deputy.
Concorde pioneered the following technologies:
For high speed and optimisation of flight:
Double delta (ogee/ogival) shaped wings
Variable engine air intake ramp system controlled by digital computers
Supercruise capability
For weight-saving and enhanced performance:
Mach 2.02 (~2,154 km/h or 1,338 mph) cruising speed for optimum fuel consumption (supersonic drag minimum and turbojet engines are more efficient at higher speed); fuel consumption at Mach 2 (2,120 km/h; 1,320 mph) and at altitude of 60,000 feet (18,000 m) was 4,800 US gallons per hour (18,000 L/h).
Mainly aluminium construction using a high-temperature alloy similar to that developed for aero-engine pistons. This material gave low weight and allowed conventional manufacture (higher speeds would have ruled out aluminium)
Full-regime autopilot and autothrottle allowing "hands off" control of the aircraft from climb out to landing
Fully electrically controlled analogue fly-by-wire flight controls systems
High-pressure hydraulic system using 28 MPa (4,100 psi) for lighter hydraulic components.
Air data computer (ADC) for the automated monitoring and transmission of aerodynamic measurements (total pressure, static pressure, angle of attack, side-slip).
Fully electrically controlled analogue brake-by-wire system
No auxiliary power unit, as Concorde would only visit large airports where ground air start carts were available.
=== Powerplant ===
A symposium titled "Supersonic-Transport Implications" was hosted by the Royal Aeronautical Society on 8 December 1960. Various views were put forward on the likely type of powerplant for a supersonic transport, such as podded or buried installation and turbojet or ducted-fan engines. Concorde needed to fly long distances to be economically viable; this required high efficiency from the powerplant. Turbofan engines were rejected due to their larger cross-section producing excessive drag (but would be studied for future SSTs). Olympus turbojet technology was already available for development to meet the design requirements. Rolls-Royce proposed developing the RB.169 to power Concorde during its initial design phase, but developing a wholly-new engine for a single aircraft would have been extremely costly, so the existing BSEL Olympus Mk 320 turbojet engine, which was already flying in the BAC TSR-2 supersonic strike bomber prototype, was chosen instead.
Boundary layer management in the podded installation was put forward as simpler, requiring only an inlet cone; however, Dr. Seddon of the RAE favoured a more integrated buried installation. One concern with placing two or more engines behind a single intake was that an intake failure could lead to a double or triple engine failure. While a ducted fan would reduce noise compared with the turbojet, its larger cross-section also incurred more drag. Acoustics specialists were confident that a turbojet's noise could be reduced, and SNECMA made advances in silencer design during the programme. The Olympus Mk.622 with reduced jet velocity was proposed to reduce the noise but was not pursued. By 1974, the spade silencers which projected into the exhaust were reported to be ineffective, but "entry-into-service aircraft are likely to meet their noise guarantees".
The powerplant configuration selected for Concorde highlighted airfield noise, boundary layer management and interactions between adjacent engines and the requirement that the powerplant, at Mach 2, tolerate pushovers, sideslips, pull-ups and throttle slamming without surging. Extensive development testing with design changes and changes to intake and engine control laws addressed most of the issues except airfield noise and the interaction between adjacent powerplants at speeds above Mach 1.6 which meant Concorde "had to be certified aerodynamically as a twin-engined aircraft above Mach 1.6".
Situated behind the wing leading edge, the engine intake had a wing boundary layer ahead of it. Two-thirds of this layer was diverted, and the remaining third, which entered the intake, did not adversely affect intake efficiency except during pushovers, when the boundary layer thickened and caused surging. Wind tunnel testing helped define leading-edge modifications ahead of the intakes which solved the problem. Each engine had its own intake, and the nacelles were paired with a splitter plate between them to minimise the chance of one powerplant influencing the other. Only above Mach 1.6 (1,960 km/h; 1,220 mph) was an engine surge likely to affect the adjacent engine.
The air intake design for Concorde's engines was especially critical. The intakes had to slow down supersonic inlet air to subsonic speeds with high-pressure recovery to ensure efficient operation at cruising speed while providing low distortion levels (to prevent engine surge) and maintaining high efficiency for all likely ambient temperatures in cruise. They had to provide adequate subsonic performance for diversion cruise and low engine-face distortion at take-off. They also had to provide an alternative path for excess intake of air during engine throttling or shutdowns. The variable intake features required to meet all these requirements consisted of front and rear ramps, a dump door, an auxiliary inlet and a ramp bleed to the exhaust nozzle.
As well as supplying air to the engine, the intake also supplied air through the ramp bleed to the propelling nozzle. The nozzle ejector (or aerodynamic) design, with variable exit area and secondary flow from the intake, contributed to good expansion efficiency from take-off to cruise. Concorde's Air Intake Control Units (AICUs) made use of a digital processor for intake control. It was the first use of a digital processor with full authority control of an essential system in a passenger aircraft. It was developed by BAC's Electronics and Space Systems division after the analogue AICUs (developed by Ultra Electronics) fitted to the prototype aircraft were found to lack sufficient accuracy. Ultra Electronics also developed Concorde's thrust-by-wire engine control system.
Engine failure causes problems on conventional subsonic aircraft; not only does the aircraft lose thrust on that side but the engine creates drag, causing the aircraft to yaw and bank in the direction of the failed engine. If this had happened to Concorde at supersonic speeds, it theoretically could have caused a catastrophic failure of the airframe. Although computer simulations predicted considerable problems, in practice Concorde could shut down both engines on the same side of the aircraft at Mach 2 without difficulties. During an engine failure the required air intake is virtually zero. So, on Concorde, engine failure was countered by the opening of the auxiliary spill door and the full extension of the ramps, which deflected the air downwards past the engine, gaining lift and minimising drag. Concorde pilots were routinely trained to handle double-engine failure. Concorde used reheat (afterburners) only at take-off and to pass through the transonic speed range, between Mach 0.95 and 1.7.
=== Heating problems ===
Kinetic heating from the high-speed boundary layer caused the skin to heat up during supersonic flight. Every surface, such as windows and panels, was warm to the touch by the end of the flight. Apart from the engine bay, the hottest part of any supersonic aircraft's structure is the nose, due to aerodynamic heating. Hiduminium R.R. 58, an aluminium alloy, was used throughout the aircraft because it was relatively cheap and easy to work with. The highest temperature it could sustain over the life of the aircraft was 127 °C (261 °F), which limited the top speed to Mach 2.02. Concorde went through two cycles of cooling and heating during a flight: first cooling down as it gained altitude at subsonic speed, then heating up while accelerating to cruise speed, then cooling again while descending and slowing down, before heating again in the low-altitude air before landing. This had to be factored into the metallurgical and fatigue modelling. A test rig was built that repeatedly heated and cooled a full-size section of the wing, and samples of metal were periodically taken for testing. The airframe was designed for a life of 45,000 flying hours.
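The connection between the 127 °C alloy limit and the Mach 2.02 speed limit follows from the stagnation (total) temperature relation, T0 = T(1 + ((γ−1)/2)M²). The following is a minimal sketch, assuming ISA static temperature above the tropopause and a recovery factor of 1 (an upper bound; none of these values come from the article itself):

```python
# Stagnation-temperature estimate for skin heating at supersonic cruise.
# Assumptions (illustrative, not Concorde design data): ISA static
# temperature of 216.65 K above the tropopause, gamma = 1.4, and a
# recovery factor of 1, which gives an upper bound on skin temperature.

GAMMA = 1.4
T_AMBIENT_K = 216.65  # ISA static temperature at ~18 km altitude

def stagnation_temp_c(mach: float, t_static_k: float = T_AMBIENT_K) -> float:
    """Total air temperature in Celsius for a given Mach number."""
    t0 = t_static_k * (1 + (GAMMA - 1) / 2 * mach ** 2)
    return t0 - 273.15

# ~120 C with these assumptions, the same order as the 127 C alloy limit;
# warmer ambient air or local hot spots (e.g. the nose) push this higher.
print(round(stagnation_temp_c(2.02), 1))
```

With these assumed values the estimate comes out a few degrees under the quoted 127 °C limit; the real skin temperature depends on ambient conditions and location on the airframe, which is why the certified limit, not this idealised formula, set the operating ceiling.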
As the fuselage heated up it expanded by as much as 300 mm (12 in). The most obvious manifestation of this was a gap that opened up on the flight deck between the flight engineer's console and the bulkhead. On some aircraft making a final supersonic flight before retirement, the flight engineers placed their caps in this expanded gap, which wedged the caps in place when the airframe contracted again. To keep the cabin cool, Concorde used the fuel as a heat sink for the heat from the air conditioning. The same method also cooled the hydraulics. During supersonic flight a visor was used to keep high-temperature air from flowing over the cockpit skin.
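The order of magnitude of that growth can be checked with the linear thermal expansion relation ΔL = α·L·ΔT. The values below are assumptions for the sketch (a typical aluminium-alloy coefficient, an approximate fuselage length, and a rough skin temperature swing), not figures from the article:

```python
# Order-of-magnitude check of fuselage thermal growth: dL = alpha * L * dT.
# All three inputs are assumed for illustration: ~62 m fuselage,
# ~23e-6 /K expansion coefficient for aluminium alloy, and a ~150 K
# swing between cold-soaked subsonic flight and Mach 2 cruise.

ALPHA_AL = 23e-6     # 1/K, typical for aluminium alloys
LENGTH_M = 62.0      # approximate fuselage length
DELTA_T_K = 150.0    # assumed skin temperature swing

growth_mm = ALPHA_AL * LENGTH_M * DELTA_T_K * 1000
print(round(growth_mm))  # ~214 mm: same order as the reported ~300 mm
```

The simple formula lands in the right range; the reported 300 mm also reflects the actual temperature distribution along the airframe, which a single uniform ΔT cannot capture.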
Concorde had livery restrictions: most of the surface had to be covered with a highly reflective white paint to avoid overheating the aluminium structure. The white finish reduced the skin temperature by 6 to 11 °C (11 to 20 °F). In 1996, Air France briefly painted F-BTSD in a predominantly blue livery, with the exception of the wings, in a promotional deal with Pepsi. In this paint scheme, Air France was advised to remain at Mach 2 (2,120 km/h; 1,320 mph) for no more than 20 minutes at a time, though there was no restriction at speeds under Mach 1.7. F-BTSD was used because it was not scheduled for any long flights that required extended Mach 2 operations.
=== Structural issues ===
Due to its high speeds, large forces were applied to the aircraft during turns, distorting its structure. There were also concerns over maintaining precise control at supersonic speeds. Both issues were resolved by varying the ratio between inboard and outboard elevon deflections with speed; only the innermost elevons, attached to the stiffest area of the wings, were active at the highest speeds. The narrow fuselage also flexed, which was apparent to rear passengers looking along the length of the cabin.
When any aircraft passes the critical Mach number of its airframe, the centre of pressure shifts rearwards. This causes a pitch-down moment if the centre of gravity remains where it was. The wings were designed to reduce this shift, but about 2 metres (6 ft 7 in) remained. It could have been countered with trim controls, but at such high speeds the resulting drag would have been unacceptable. Instead, fuel was redistributed along the aircraft during acceleration and deceleration to move the centre of gravity, effectively acting as an auxiliary trim control.
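The amount of fuel that has to move can be estimated with a simple moment balance: shifting a mass m through a distance d moves the centre of gravity of an aircraft of total mass M by Δx = m·d/M. A sketch with assumed numbers (the aircraft mass and the tank-to-tank transfer distance are illustrative; only the ~2 m shift comes from the text):

```python
# Moment-balance sketch of fuel transfer as trim: moving fuel mass m
# a distance d along the fuselage shifts the aircraft CG by m*d/M,
# so the fuel needed for a target shift is m = M * dx / d.
# The 150 t mass and 30 m transfer arm below are assumptions.

def fuel_to_shift_cg(total_mass_kg: float, cg_shift_m: float,
                     transfer_arm_m: float) -> float:
    """Fuel mass that must be pumped through transfer_arm_m to shift the CG."""
    return total_mass_kg * cg_shift_m / transfer_arm_m

# Assumed 150 t aircraft, 2 m CG shift, 30 m between forward and aft tanks:
print(fuel_to_shift_cg(150_000, 2.0, 30.0))  # 10000.0 kg of fuel
```

The estimate shows why dedicated trim tanks far forward and aft are attractive: a longer transfer arm means less fuel has to move for the same CG shift.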
=== Range ===
To fly non-stop across the Atlantic Ocean, Concorde required the greatest supersonic range of any aircraft. This was achieved by a combination of powerplants which were efficient at twice the speed of sound, a slender fuselage with a high fineness ratio, and a complex wing shape giving a high lift-to-drag ratio. Only a modest payload could be carried, and the aircraft was trimmed without deflecting control surfaces, to avoid the drag that deflection would have incurred.
Nevertheless, soon after Concorde began flying, a Concorde "B" model was designed, with slightly larger fuel capacity and slightly larger wings with leading-edge slats to improve aerodynamic performance at all speeds, with the objective of extending its range to reach markets in new regions. It would have had higher-thrust engines with noise-reducing features and no environmentally objectionable afterburner. Preliminary design studies showed that an engine with a 25% gain in efficiency over the Rolls-Royce/Snecma Olympus 593 could be produced, which would have given 500 mi (805 km) of additional range and a greater payload, making new commercial routes possible. The project was cancelled due in part to poor sales of Concorde, but also to the rising cost of aviation fuel in the 1970s.
=== Radiation concerns ===
Concorde's high cruising altitude meant people on board received almost twice the flux of extraterrestrial ionising radiation received by those travelling on a conventional long-haul flight. Upon Concorde's introduction, it was speculated that this exposure during supersonic travel would increase the likelihood of skin cancer. However, due to the proportionally reduced flight time, the overall equivalent dose was normally less than that of a conventional flight over the same distance. Unusual solar activity might lead to an increase in incident radiation. To prevent incidents of excessive radiation exposure, the flight deck had a radiometer and an instrument to measure the rate of increase or decrease of radiation. If the radiation level became too high, Concorde would descend below 47,000 feet (14,000 m).
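The dose comparison is simple arithmetic: equivalent dose is dose rate multiplied by exposure time. A sketch in relative units, assuming the roughly doubled dose rate stated above and transatlantic flight times of about 3.5 hours supersonic versus 8 hours subsonic (the times are taken as round assumptions here):

```python
# Relative equivalent-dose comparison: dose = dose_rate * exposure_time.
# Assumptions: supersonic dose rate ~2x the subsonic rate (per the text),
# and illustrative crossing times of 3.5 h vs 8 h for the same route.

def relative_dose(rate_rel: float, hours: float) -> float:
    """Equivalent dose in arbitrary units (subsonic rate = 1.0)."""
    return rate_rel * hours

subsonic = relative_dose(1.0, 8.0)   # 8.0 units
concorde = relative_dose(2.0, 3.5)   # 7.0 units
print(concorde < subsonic)           # shorter exposure outweighs higher rate
```

The margin is small, which is consistent with the caveat that unusual solar activity, by raising the dose rate further, could tip the balance and trigger a descent.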
=== Cabin pressurisation ===
Airliner cabins were usually maintained at a pressure equivalent to 6,000–8,000 feet (1,800–2,400 m) elevation. Concorde's pressurisation was set to an altitude at the lower end of this range, 6,000 feet (1,800 m). Concorde's maximum cruising altitude was 60,000 feet (18,000 m); subsonic airliners typically cruise below 44,000 feet (13,000 m).
A sudden reduction in cabin pressure is hazardous to all passengers and crew. Above 50,000 feet (15,000 m), a sudden cabin depressurisation would leave a "time of useful consciousness" of up to 10–15 seconds for a conditioned athlete. At Concorde's altitude, the air density is very low; a breach of cabin integrity would result in a loss of pressure severe enough that the plastic emergency oxygen masks installed on other passenger jets would not be effective, and passengers would soon suffer from hypoxia despite quickly donning them. Concorde was therefore equipped with smaller windows to reduce the rate of pressure loss in the event of a breach, a reserve air supply system to augment cabin air pressure, and a rapid descent procedure to bring the aircraft to a safe altitude. The FAA enforces minimum emergency descent rates for aircraft and, noting Concorde's higher operating altitude, concluded that the best response to pressure loss would be a rapid descent. In such an event, continuous positive airway pressure would have delivered pressurised oxygen directly to the pilots through masks.
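The reasoning behind the combined defences can be seen with a rough timing estimate: even a brisk emergency descent takes minutes to reach breathable altitude, while useful consciousness above 50,000 ft lasts only seconds. The descent rate below is an assumed illustrative figure, not a certified Concorde value:

```python
# Why rapid descent alone is not enough: at an assumed emergency descent
# rate (illustrative only), reaching a breathable altitude takes minutes,
# versus a 10-15 second time of useful consciousness above 50,000 ft.
# Hence the small windows and reserve air supply to slow pressure loss.

def descent_time_min(from_ft: float, to_ft: float, rate_fpm: float) -> float:
    """Minutes needed to descend between two altitudes at a constant rate."""
    return (from_ft - to_ft) / rate_fpm

# Assumed 7,000 ft/min emergency descent from cruise to 15,000 ft:
print(round(descent_time_min(60_000, 15_000, 7_000), 1))  # ~6.4 minutes
```

Minutes of descent against seconds of consciousness is the gap the smaller windows and reserve air system were designed to bridge, by keeping the cabin survivable long enough for the descent to complete.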
=== Flight characteristics ===
While subsonic commercial jets took eight hours to fly from Paris to New York (seven hours from New York to Paris), the average supersonic flight time on the transatlantic routes was just under 3.5 hours. Concorde had a maximum cruising altitude of 18,300 metres (60,000 ft) and an average cruise speed of Mach 2.02 (2,150 km/h; 1,330 mph), more than twice the speed of conventional aircraft.
With no other civil traffic operating at its cruising altitude of about 56,000 ft (17,000 m), Concorde had exclusive use of dedicated oceanic airways, or "tracks", separate from the North Atlantic Tracks, the routes used by other aircraft to cross the Atlantic. Due to the significantly less variable nature of high altitude winds compared to those at standard cruising altitudes, these dedicated SST tracks had fixed co-ordinates, unlike the standard routes at lower altitudes, whose co-ordinates are replotted twice daily based on forecast weather patterns (jetstreams). Concorde would also be cleared in a 15,000-foot (4,570 m) block, allowing for a slow climb from 45,000 to 60,000 ft (14,000 to 18,000 m) during the oceanic crossing as the fuel load gradually decreased. In regular service, Concorde employed an efficient cruise-climb flight profile following take-off.
The delta-shaped wings required Concorde to adopt a higher angle of attack at low speeds than conventional aircraft, but it allowed the formation of large low-pressure vortices over the entire upper wing surface, maintaining lift. The normal landing speed was 170 miles per hour (274 km/h). Because of this high angle, during a landing approach Concorde was on the backside of the drag force curve, where raising the nose would increase the rate of descent; the aircraft was thus largely flown on the throttle and was fitted with an autothrottle to reduce the pilot's workload.
The only thing that tells you that you're moving is that occasionally when you're flying over the subsonic aeroplanes you can see all these 747s 20,000 feet below you almost appearing to go backwards, I mean you are going 800 miles an hour or thereabouts faster than they are. The aeroplane was an absolute delight to fly, it handled beautifully. And remember we are talking about an aeroplane that was being designed in the late 1950s – mid-1960s. I think it's absolutely amazing and here we are, now in the 21st century, and it remains unique.
=== Brakes and undercarriage ===
Because of the way Concorde's delta wing generated lift, the undercarriage had to be unusually strong and tall to allow for the angle of attack at low speed. At rotation, Concorde would rise to a high angle of attack, about 18 degrees. Prior to rotation, the wing generated almost no lift, unlike typical aircraft wings. Combined with the high airspeed at rotation (199 knots or 369 kilometres per hour or 229 miles per hour indicated airspeed), this stressed the main undercarriage in a way that had not been anticipated during development and required a major redesign. Due to the high angle needed at rotation, a small set of wheels was added aft to prevent tailstrikes. The main undercarriage units swung towards each other to be stowed, but due to their great height they also had to contract telescopically before swinging, so as to clear each other when stowed.
The four main wheel tyres on each bogie unit were inflated to 232 psi (1,600 kPa). The twin-wheel nose undercarriage retracted forwards and its tyres were inflated to 191 psi (1,320 kPa); the wheel assembly carried a spray deflector to prevent standing water from being thrown up into the engine intakes. The tyres were rated to a maximum ground speed of 250 mph (400 km/h).
The high take-off speed of 250 miles per hour (400 km/h) required Concorde to have upgraded brakes. Like most airliners, Concorde had anti-skid braking to prevent the tyres from losing traction when the brakes were applied. The brakes, developed by Dunlop, were the first carbon-based brakes used on an airliner. The use of carbon over equivalent steel brakes provided a weight saving of 1,200 lb (540 kg). Each wheel had multiple discs which were cooled by electric fans. Wheel sensors covered brake overload, brake temperature, and tyre deflation. After a typical landing at Heathrow, brake temperatures were around 300–400 °C (570–750 °F). Landing Concorde required a minimum runway length of 6,000 feet (1,800 m); the shortest runway on which Concorde ever landed carrying commercial passengers was at Cardiff Airport. Concorde G-AXDN (101) made its final landing at Duxford Aerodrome on 20 August 1977, when the runway there was just 6,000 feet (1,800 m) long. This was the last aircraft to land at Duxford before the runway was shortened later that year.
=== Droop nose ===
Concorde's drooping nose, developed by Marshall's of Cambridge, allowed the aircraft to be streamlined for minimum drag and optimal aerodynamic efficiency in flight, without obstructing the pilot's view during taxi, take-off, and landing. At the high angles of attack flown at low speed, the long pointed nose would otherwise have blocked the view, hence the need for it to droop. The droop nose was accompanied by a moving visor that retracted into the nose prior to being lowered. When the nose was raised to horizontal, the visor rose in front of the cockpit windscreen for aerodynamic streamlining.
A controller in the cockpit allowed the visor to be retracted and the nose lowered to 5° below the standard horizontal position for taxiing and take-off. Following take-off and after clearing the airport, the nose and visor were raised. Prior to landing, the visor was again retracted and the nose lowered to 12.5° below horizontal for maximum visibility. Upon landing, the nose was raised to the 5° position to avoid possible damage from collision with ground vehicles, and then raised fully before engine shutdown to prevent condensation pooling inside the radome from seeping down into the aircraft's pitot/ADC system probes.
The US Federal Aviation Administration had objected to the restricted visibility through the visor used on the first two prototype Concordes, which had been designed before a suitable high-temperature window glass became available, and required its alteration before it would permit Concorde to serve US airports. This led to the redesigned visor used on the production and the four pre-production aircraft (101, 102, 201, and 202). The nose window and visor glass, which had to endure temperatures in excess of 100 °C (212 °F) in supersonic flight, were developed by Triplex.
== Operational history ==
=== First flights and routes flown ===
Concorde began scheduled flights with British Airways and Air France on 21 January 1976.
Concorde operated on various routes, including London–Bahrain, London–New York, London–Miami, and London–Barbados (with British Airways), and Paris–Dakar–Rio de Janeiro, Paris–Azores–Caracas, Paris–New York, and Paris–Washington (with Air France), but faced challenges such as bans and low profitability. Later, British Airways repositioned Concorde as a super-premium service and it then became profitable.
=== Retirement ===
In 2003, Air France and British Airways announced the retirement of Concorde, due to rising maintenance costs, low passenger numbers following the 25 July 2000 crash, and the slump in air travel following the September 11 attacks.
Air France flew its last commercial flight on 30 May 2003 with British Airways retiring its Concorde fleet on 24 October 2003.
=== Operators ===
Air France
British Airways
Braniff International Airways operated Concordes at subsonic speed between Dulles International Airport and Dallas Fort Worth International Airport from January 1979 until May 1980, using its own flight and cabin crews, under its own insurance and operator's license. Stickers bearing a US registration were placed over the French and British registrations during each rotation, and a placard identifying the operator and its operating license was temporarily placed behind the cockpit.
Singapore Airlines had its livery placed on the left side of Concorde G-BOAD, and held a joint marketing agreement which saw Singapore insignias on the cabin fittings, as well as the airline's "Singapore Girl" stewardesses jointly sharing cabin duty with British Airways flight attendants. All flight crew, operations, and insurance remained solely under British Airways, however, and at no point did Singapore Airlines operate Concorde services under its own operator's certification, nor wet-lease an aircraft. The arrangement initially lasted for only three flights, conducted between 9 and 13 December 1977; it resumed on 24 January 1979 and continued until 1 November 1980, with G-BOAD carrying the Singapore livery from 1977 to 1980.
== Accidents and incidents ==
=== Air France Flight 4590 ===
On 25 July 2000, Air France Flight 4590, registration F-BTSC, crashed in Gonesse, France, after departing from Charles de Gaulle Airport en route to John F. Kennedy International Airport in New York City, killing all 100 passengers and nine crew members on board as well as four people on the ground. It was the only fatal accident involving Concorde. This crash also damaged Concorde's reputation and caused both British Airways and Air France to temporarily ground their fleets. According to the official investigation conducted by the Bureau of Enquiry and Analysis for Civil Aviation Safety (BEA), the crash was caused by a metallic strip that had fallen from a Continental Airlines DC-10 that had taken off minutes earlier. This fragment punctured a tyre on Concorde's left main wheel bogie during take-off. The tyre exploded, and a piece of rubber hit the fuel tank, which caused a fuel leak and led to a fire. The crew shut down engine number 2 in response to a fire warning, and with engine number 1 surging and producing little power, the aircraft was unable to gain altitude or speed. The aircraft entered a rapid pitch-up then a sudden descent, rolling left and crashing tail-low into the Hôtelissimo Les Relais Bleus Hotel in Gonesse.
Before the accident, Concorde had been arguably the safest operational passenger airliner in the world with zero passenger deaths, but there had been two prior non-fatal accidents and a rate of tyre damage 30 times higher than subsonic airliners from 1995 to 2000. Safety improvements made after the crash included more secure electrical controls, Kevlar lining on the fuel tanks and specially developed burst-resistant tyres. The first flight with the modifications departed from London Heathrow on 17 July 2001, piloted by BA Chief Concorde Pilot Mike Bannister. In a flight of 3 hours 20 minutes over the mid-Atlantic towards Iceland, Bannister attained Mach 2.02 and 60,000 ft (18,000 m) then returned to RAF Brize Norton. The test flight, intended to resemble the London–New York route, was declared a success and was watched on live TV, and by crowds on the ground at both locations.
The first flight with passengers after the 2000 grounding landed shortly before the World Trade Center attacks in the United States. This was not a commercial flight: all the passengers were BA employees. Normal commercial operations resumed on 7 November 2001 by BA and AF (aircraft G-BOAE and F-BTSD), with service to New York JFK, where Mayor Rudy Giuliani greeted the passengers.
=== Other accidents and incidents ===
On 12 April 1989, Concorde G-BOAF, on a chartered flight from Christchurch, New Zealand, to Sydney, Australia, suffered a structural failure at supersonic speed. As the aircraft was climbing and accelerating through Mach 1.7, a "thud" was heard. The crew did not notice any handling problems, and they assumed the thud they heard was a minor engine surge. No further difficulty was encountered until descent through 40,000 feet (12,000 m) at Mach 1.3, when a vibration was felt throughout the aircraft, lasting two to three minutes. Most of the upper rudder had separated from the aircraft at this point. Aircraft handling was unaffected, and the aircraft made a safe landing at Sydney. The UK's Air Accidents Investigation Branch (AAIB) concluded that the skin of the rudder had been separating from the rudder structure over a period before the accident due to moisture seepage past the rivets in the rudder. Production staff had not followed proper procedures during an earlier modification of the rudder; the procedures were difficult to adhere to. The aircraft was repaired and returned to service.
On 21 March 1992, G-BOAB, while flying British Airways Flight 001 from London to New York, also suffered a structural failure at supersonic speed. While cruising at Mach 2, at approximately 53,000 feet (16,000 m), the crew heard a "thump". No difficulties in handling were noticed, and no instruments gave any irregular indications. This crew also suspected there had been a minor engine surge. One hour later, during descent and while decelerating below Mach 1.4, a sudden "severe" vibration began throughout the aircraft. The vibration worsened when power was added to the No 2 engine. The crew shut down the No 2 engine and made a successful landing in New York, noting that increased rudder control was needed to keep the aircraft on its intended approach course. Again, the skin had separated from the structure of the rudder, which led to most of the upper rudder detaching in flight. The AAIB concluded that repair materials had leaked into the structure of the rudder during a recent repair, weakening the bond between the skin and the structure and leading to the break-up in flight. The large size of the repair had made it difficult to keep repair materials out of the structure, and prior to this accident the severity of their effect on the structure and skin of the rudder had not been appreciated.
The 2010 trial involving Continental Airlines over the crash of Flight 4590 established that from 1976 until Flight 4590 there had been 57 tyre failures involving Concordes during takeoffs, including a near-crash at Dulles International Airport on 14 June 1979 involving Air France Flight 54 where a tyre blowout pierced the plane's fuel tank and damaged a left engine and electrical cables, with the loss of two of the craft's hydraulic systems.
== Aircraft on display ==
Twenty Concorde aircraft were built: two prototypes, two pre-production aircraft, two development aircraft and 14 production aircraft for commercial service. With the exception of two of the production aircraft, all are preserved, mostly in museums. One aircraft was scrapped in 1994, and another was destroyed in the Air France Flight 4590 crash in 2000.
== Comparable aircraft ==
=== Tu-144 ===
Concorde was one of only two supersonic jetliner models to operate commercially; the other was the Soviet-built Tupolev Tu-144, which operated in the late 1970s. The Tu-144 was nicknamed "Concordski" by Western European journalists for its outward similarity to Concorde. Soviet espionage efforts allegedly stole Concorde blueprints to assist in the design of the Tu-144. As a result of a rushed development programme, the first Tu-144 prototype was substantially different from the preproduction machines, but both were cruder than Concorde. The Tu-144S had a significantly shorter range than Concorde. Jean Rech, Sud Aviation, attributed this to two things, a very heavy powerplant with an intake twice as long as that on Concorde, and low-bypass turbofan engines with too high a bypass ratio which needed afterburning for cruise. The aircraft had poor control at low speeds because of a simpler wing design. The Tu-144 required braking parachutes to land. The Tu-144 had two crashes, one at the 1973 Paris Air Show, and another during a pre-delivery test flight in May 1978.
Passenger service commenced in November 1977, but after the 1978 crash the aircraft was taken out of passenger service after only 55 flights, which carried an average of 58 passengers. The Tu-144 had an inherently unsafe structural design as a consequence of an automated production method chosen to simplify and speed up manufacturing. The Tu-144 program was cancelled by the Soviet government on 1 July 1983.
=== SST and others ===
The main competing designs for the US government-funded supersonic transport (SST) were the swing-wing Boeing 2707 and the compound delta wing Lockheed L-2000. These were to have been larger, with seating for up to 300 people. The Boeing 2707 was selected for development. Concorde first flew in 1969, the year Boeing began building 2707 mockups after changing the design to a cropped delta wing; the cost of this and other changes helped to kill the project. The operation of US military aircraft such as the Mach 3+ North American XB-70 Valkyrie prototypes and Convair B-58 Hustler strategic nuclear bomber had shown that sonic booms were capable of reaching the ground, and the experience from the Oklahoma City sonic boom tests led to the same environmental concerns that hindered the commercial success of Concorde. The American government cancelled its SST project in 1971 having spent more than $1 billion without any aircraft being built.
== Impact ==
=== Environmental ===
Before Concorde's flight trials, developments in the civil aviation industry were largely accepted by governments and their respective electorates. Opposition to Concorde's noise, particularly on the east coast of the United States, forged a new political agenda on both sides of the Atlantic, with scientists and technology experts across a multitude of industries beginning to take the environmental and social impact more seriously. Although Concorde led directly to the introduction of a general noise abatement programme for aircraft flying out of John F. Kennedy Airport, many found that Concorde was quieter than expected, partly due to the pilots temporarily throttling back their engines to reduce noise during overflight of residential areas. Even before commercial flights started, it had been claimed that Concorde was quieter than many other aircraft. In 1971, BAC's technical director stated, "It is certain on present evidence and calculations that in the airport context, production Concordes will be no worse than aircraft now in service and will in fact be better than many of them."
Concorde produced nitrogen oxides in its exhaust, which, despite complicated interactions with other ozone-depleting chemicals, are understood to degrade the ozone layer at the stratospheric altitudes where it cruised. Other, lower-flying airliners produce ozone during their flights in the troposphere, but vertical transit of gases between the layers is restricted. The small fleet meant that the overall ozone-layer degradation caused by Concorde was negligible. In 1995, David Fahey of the National Oceanic and Atmospheric Administration in the United States warned that a fleet of 500 supersonic aircraft with exhausts similar to Concorde's might produce a 2 per cent drop in global ozone levels, much higher than previously thought. Each 1 per cent drop in ozone is estimated to increase the incidence of non-melanoma skin cancer worldwide by 2 per cent. Fahey said that if, as he believed, the particles involved were produced by highly oxidised sulphur in the fuel, then removing sulphur from the fuel would reduce the ozone-destroying impact of supersonic transport.
Concorde's technical leap forward boosted the public's understanding of conflicts between technology and the environment as well as awareness of the complex decision analysis processes that surround such conflicts. In France, the use of acoustic fencing alongside TGV tracks might not have been achieved without the 1970s controversy over aircraft noise. In the UK, the CPRE has issued tranquillity maps since 1990.
=== Public perception ===
Concorde was normally perceived as a privilege of the rich, but special circular or one-way (with return by other flight or ship) charter flights were arranged to bring a trip within the means of moderately well-off enthusiasts. As a symbol of national pride, an example from the BA fleet made occasional flypasts at selected Royal events, major air shows and other special occasions, sometimes in formation with the Red Arrows. On the final day of commercial service, public interest was so great that grandstands were erected at Heathrow Airport. Significant numbers of people attended the final landings; the event received widespread media coverage.
The aircraft was usually referred to by the British simply as "Concorde". In France it was known as "le Concorde", since French grammar uses the definite article "le" to introduce the name of a ship or aircraft, with the capital letter distinguishing the proper name from the common noun concorde, meaning "agreement, harmony, or peace". Concorde's pilots, and British Airways in official publications, often referred to Concorde, in both singular and plural, as "she" or "her".
In 2006, 37 years after its first test flight, Concorde was announced the winner of the Great British Design Quest organised by the BBC (through The Culture Show) and the Design Museum. A total of 212,000 votes were cast with Concorde beating other British design icons such as the Mini, mini skirt, Jaguar E-Type car, the Tube map, the World Wide Web, the K2 red telephone box and the Supermarine Spitfire.
=== Special missions ===
The heads of France and the United Kingdom flew on Concorde many times. Presidents Georges Pompidou, Valéry Giscard d'Estaing and François Mitterrand regularly used Concorde as the French flagship aircraft on foreign visits. Elizabeth II and Prime Ministers Edward Heath, Jim Callaghan, Margaret Thatcher, John Major and Tony Blair flew on Concorde charter flights, such as the Queen's trips to Barbados for her Silver Jubilee in 1977, in 1987 and in 2003, to the Middle East in 1984 and to the United States in 1991. Pope John Paul II flew on Concorde in May 1989.
Concorde sometimes made special flights for demonstrations, air shows (such as the Farnborough, Paris-Le Bourget, Oshkosh AirVenture and MAKS air shows) as well as parades and celebrations (for example, of Zurich Airport's anniversary in 1998). The aircraft were also used for private charters (including by the President of Zaire Mobutu Sese Seko on multiple occasions), for advertising companies (including for the firm OKI), for Olympic torch relays (1992 Winter Olympics in Albertville) and for observing solar eclipses, including the solar eclipse of 30 June 1973 and again for the total solar eclipse on 11 August 1999.
=== Records ===
The fastest transatlantic airliner flight was from New York JFK to London Heathrow on 7 February 1996 by British Airways' G-BOAD: aided by a 175 mph (282 km/h) tailwind, it made the crossing in 2 hours, 52 minutes and 59 seconds from take-off to touchdown. On 13 February 1985, a Concorde charter flight flew from London Heathrow to Sydney in 17 hours, 3 minutes and 45 seconds, including refuelling stops.
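The record time implies an average block speed well above the speed of sound. A minimal sketch of that arithmetic follows; the JFK–Heathrow great-circle distance of roughly 3,450 statute miles is an outside assumption, not a figure from the text:

```python
# Average block speed of the record 7 Feb 1996 JFK-Heathrow flight.
distance_mi = 3450             # ASSUMED JFK-LHR great-circle distance (statute miles)
h, m, s = 2, 52, 59            # record block time from the text: 2 h 52 min 59 s
time_h = h + m / 60 + s / 3600

avg_speed_mph = distance_mi / time_h
print(f"average block speed ≈ {avg_speed_mph:.0f} mph")  # roughly 1,200 mph
```

The average includes subsonic climb-out and approach, which is why it falls well short of the Mach 2 cruise speed.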
Concorde set the FAI "Westbound Around the World" and "Eastbound Around the World" world air speed records. On 12–13 October 1992, in commemoration of the 500th anniversary of Columbus' first voyage to the New World, Concorde Spirit Tours (US) chartered Air France Concorde F-BTSD and circumnavigated the world in 32 hours 49 minutes and 3 seconds, from Lisbon, Portugal, including six refuelling stops at Santo Domingo, Acapulco, Honolulu, Guam, Bangkok, and Bahrain.
The eastbound record was set by the same Air France Concorde (F-BTSD) under charter to Concorde Spirit Tours in the US on 15–16 August 1995. This promotional flight circumnavigated the world from New York/JFK International Airport in 31 hours 27 minutes 49 seconds, including six refuelling stops at Toulouse, Dubai, Bangkok, Andersen AFB in Guam, Honolulu, and Acapulco.
On its way to the Museum of Flight in November 2003, G-BOAG set a New York City-to-Seattle speed record of 3 hours, 55 minutes and 12 seconds. Because of the restrictions on supersonic overflight of the US, Canadian authorities granted permission for the majority of the journey to be flown supersonically over sparsely populated Canadian territory.
== Specifications ==
Data from The Wall Street Journal, The Concorde Story, and The International Directory of Civil Aircraft.
Aérospatiale/BAC Concorde 1969 onwards (all models)
General characteristics
Crew: 3 (2 pilots and 1 flight engineer)
Capacity: 92–120 passengers (128 in high-density layout)
Length: 202 ft 4 in (61.66 m)
Wingspan: 84 ft 0 in (25.6 m)
Height: 40 ft 0 in (12.2 m)
Wing area: 3,856.2 sq ft (358.25 m2)
Empty weight: 173,504 lb (78,700 kg)
Gross weight: 245,000 lb (111,130 kg)
Max takeoff weight: 408,010 lb (185,070 kg)
Fuel capacity: 210,940 lb (95,680 kg); 119,600 L (26,300 imp gal; 31,600 US gal)
Fuselage internal length: 129 ft 0 in (39.32 m)
Fuselage width: maximum of 9 ft 5 in (2.87 m) external, 8 ft 7 in (2.62 m) internal
Fuselage height: maximum of 10 ft 10 in (3.30 m) external, 6 ft 5 in (1.96 m) internal
Maximum taxiing weight: 412,000 lb (187,000 kg)
Powerplant: 4 × Rolls-Royce/Snecma Olympus 593 Mk 610 turbojets with reheat, 31,000 lbf (140 kN) thrust each dry, 38,050 lbf (169.3 kN) with afterburner
Performance
Maximum speed: 1,354 mph (2,179 km/h, 1,177 kn)
Maximum speed: Mach 2.04 (temperature limited)
Cruise speed: 1,341 mph (2,158 km/h, 1,165 kn)
Range: 4,488.0 mi (7,222.8 km, 3,900.0 nmi)
Service ceiling: 60,000 ft (18,300 m)
Rate of climb: 3,300–4,900 ft/min (17–25 m/s) at sea level
Lift-to-drag ratio: low speed: 3.94; approach: 4.35; 250 kn at 10,000 ft: 9.27; Mach 0.94: 11.47; Mach 2.04: 7.14
Fuel consumption: 47 lb/mi (13.2 kg/km)
Thrust/weight: 0.373
Maximum nose tip temperature: 127 °C (260 °F; 400 K)
Runway requirement (with maximum load): 3,600 m (11,800 ft)
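Two of the performance figures above can be cross-checked directly from the other table entries. A quick sanity check, using only numbers stated in the specification:

```python
# Cross-check figures from the specification table.
thrust_per_engine_lbf = 38050    # Olympus 593 with afterburner
mtow_lb = 408010                 # maximum takeoff weight

# Thrust-to-weight ratio: four engines in reheat against MTOW.
t_w = 4 * thrust_per_engine_lbf / mtow_lb
print(f"thrust/weight = {t_w:.3f}")       # matches the listed 0.373

# Fuel consumption (47 lb/mi) over the full range (4,488 mi) should
# roughly equal the 210,940 lb fuel capacity.
fuel_needed_lb = 47 * 4488
print(f"fuel over full range = {fuel_needed_lb} lb")  # 210,936 lb
```

Both checks agree with the table to within rounding, which suggests the listed range is close to the fuel-limited maximum.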
Avionics
Digital Air Intake Control Units
Fly by wire flight controls
Analogue electronic engine controls
Triple Delco Carousel inertial navigation units, one per flight crew member
Dual VHF omnidirectional range instruments
Dual automatic direction finder instruments
Dual distance measuring equipment instruments
Dual instrument landing systems
Automatic flight control system with dual autopilots, autothrottles, and flight directors: full autoland capability with visibility limits 250 m (820 ft) horizontally, 15 ft (4.6 m) decision height
Ekco E390/564 weather radar
Radio altimeters
== Notable appearances in media ==
== See also ==
Barbara Harmer, the first qualified female Concorde pilot
Museo del Concorde, a former museum in Mexico dedicated to the airliner
== Notes ==
== References ==
=== Citations ===
=== Bibliography ===
Armbruster, Michel (January–February 2005). "How to Avoid Uncontrolled Droop". Air Enthusiast. No. 115. p. 75. ISSN 0143-5450.
Conway, Eric (2005). High-Speed Dreams: NASA and the Technopolitics of Supersonic Transportation, 1945–1999. JHU Press. ISBN 978-0-8018-8067-4.
Beniada, Frederic (2006). Concorde. Minneapolis, Minnesota: Zenith Press. ISBN 978-0-7603-2703-6.
Calvert, Brian (2002). Flying Concorde: The Full Story. London: Crowood Press. ISBN 978-1-84037-352-3.
Deregel, Xavier; Lemaire, Jean-Philippe (2009). Concorde Passion. New York: LBM. ISBN 978-2-915347-73-9.
Endres, Günter (2001). Concorde. St Paul, Minnesota: MBI Publishing Company. ISBN 978-0-7603-1195-0.
Ferrar, Henry, ed. (1980). The Concise Oxford French-English Dictionary. New York: Oxford University Press. ISBN 978-0-19-864157-5.
Frawley, Gerald (2003). The International Directory of Civil Aircraft, 2003/2004. Aerospace Publications. ISBN 978-1-875671-58-8.
Gordon, Yefim; Rigmant, Vladimir (2005). Tupolev Tu-144. Hinckley, Leicestershire, UK: Midland. ISBN 978-1-85780-216-0.
Gunn, John (2010). Crowded Skies. Turnkey Productions. ISBN 978-0-646-54973-6.
Kelly, Neil (2005). The Concorde Story: 34 Years of Supersonic Air Travel. Surrey, UK: Merchant Book Company Ltd. ISBN 978-1-904779-05-6.
Key Publishing (2023). Concorde. Historic Commercial Aircraft Series, Vol 10. Stamford, Lincs, UK: Key Publishing. ISBN 9781802823752.
Knight, Geoffrey (1976). Concorde: The Inside Story. London: Weidenfeld and Nicolson. ISBN 978-0-297-77114-2.
Lewis, Rob; Lewis, Edwin (2004). Supersonic Secrets: The Unauthorised Biography of Concorde. London: Exposé. ISBN 978-0-9546617-0-0.
McIntyre, Ian (1992). Dogfight: The Transatlantic Battle over Airbus. Westport, Connecticut: Praeger Publishers. ISBN 978-0-275-94278-6.
Nunn, John Francis (1993). Nunn's Applied Respiratory Physiology. Burlington, Maryland: Butterworth-Heineman. ISBN 978-0-7506-1336-1.
Olivier, Jean-Marc (2018). 1969 First Flight of the Concorde. Editions midi-pyrénéennes. ISBN 979-1-09-349833-1. OCLC 1066694697.
Owen, Kenneth (2001). Concorde: Story of a Supersonic Pioneer. London: Science Museum. ISBN 978-1-900747-42-4.
Orlebar, Christopher (2004). The Concorde Story. Oxford, UK: Osprey Publishing. ISBN 978-1-85532-667-5.
Ross, Douglas (March 1978). "The Concorde Compromise: the politics of decision-making". Bulletin of the Atomic Scientists. 34 (3): 46–53. Bibcode:1978BuAtS..34c..46R. doi:10.1080/00963402.1978.11458481.
Schrader, Richard K (1989). Concorde: The Full Story of the Anglo-French SST. Kent, UK: Pictorial Histories Pub. Co. ISBN 978-0-929521-16-9.
Taylor, John W. R. (1965). Jane's All the World's Aircraft 1965–66. Marston.
Talbot, Ted (2013). Concorde: A Designer's Life: The Journey to Mach 2. The History Press. ISBN 978-0-7524-8928-5.
Towey, Barrie, ed. (2007). Jet Airliners of the World 1949–2007. Tunbridge Wells, Kent, UK: Air-Britain (Historians) Ltd. ISBN 978-0-85130-348-2.
Winchester, Jim (2005a). The World's Worst Aircraft: From Pioneering Failures to Multimillion Dollar Disasters. London: Amber Books Ltd. ISBN 978-1-904687-34-4.
Winchester, Jim (2005b). X-Planes and Prototypes: From Nazi Secret Weapons to the Warplanes of the Future. Amber Books Ltd. ISBN 978-1-84013-815-3.
== External links ==
=== Legacy ===
British Airways Concorde page
BAC Concorde at BAE Systems site
Design Museum (UK) Concorde page
Heritage Concorde preservation group site
=== Articles ===
Donald Fink (10 March 1969). "Concorde Enters Flight Test Phase" (PDF). Aviation Week & Space Technology. Archived from the original (PDF) on 16 March 2015.
"First Concorde Supersonic Transport Flies" (PDF). Aviation Week & Space Technology. 17 March 1969. Archived from the original (PDF) on 16 March 2015.
Capt R. E. Gillman (24 January 1976). "Concorde as viewed from the flightdeck". Flight International.
Dave North (20 October 2003). "End of an Era". Aviation Week & Space Technology.
"The day Concorde flew into the history books". Airbus. 2 March 2019.
=== Videos ===
"Video: Roll-out." British Movietone/Associated Press. 14 December 1967, posted online on 21 July 2015.
"This plane could cross the Atlantic in 3.5 hours. Why did it fail?" Vox Media. 19 July 2016. |
Condensation trails | Contrails (; short for "condensation trails") or vapour trails are line-shaped clouds produced by aircraft engine exhaust or changes in air pressure, typically at aircraft cruising altitudes several kilometres/miles above the Earth's surface. They are composed primarily of water, in the form of ice crystals. The combination of water vapor in aircraft engine exhaust and the low ambient temperatures at high altitudes causes the trails' formation. Impurities in the engine exhaust from the fuel, including soot and sulfur compounds (0.05% by weight in jet fuel) provide some of the particles that serve as cloud condensation nuclei for water droplet growth in the exhaust. If water droplets form, they can freeze to form ice particles that compose a contrail. Their formation can also be triggered by changes in air pressure in wingtip vortices, or in the air over the entire wing surface. Contrails, and other clouds caused directly by human activity, are called homogenitus.
The vapor trails produced by rockets are referred to as "missile contrails" or "rocket contrails." The water vapor and aerosol produced by rockets promote the "formation of ice clouds in ice supersaturated layers of the atmosphere." Missile contrail clouds mainly comprise "metal oxide particles, high-temperature water vapor condensation particles, and other byproducts of engine combustion."
Depending on the temperature and humidity at the altitude where the contrails form, they may be visible for only a few seconds or minutes, or may persist for hours and spread to be several kilometres/miles wide, eventually resembling natural cirrus or altocumulus clouds. Persistent contrails are of particular interest to scientists because they increase the cloudiness of the atmosphere. The resulting cloud forms are formally described as homomutatus, and may resemble cirrus, cirrocumulus, or cirrostratus, and are sometimes called cirrus aviaticus. Some persistent spreading contrails contribute to climate change.
== Condensation trails as a result of engine exhaust ==
Engine exhaust is predominantly made up of water and carbon dioxide, the combustion products of hydrocarbon fuels. Many other chemical byproducts of incomplete hydrocarbon fuel combustion, including volatile organic compounds, inorganic gases, polycyclic aromatic hydrocarbons, oxygenated organics, alcohols, ozone and particles of soot, have been observed at lower concentrations. The exact composition is a function of engine type and combustion conditions, with up to 30% of aircraft exhaust being unburned fuel. (Micron-sized metallic particles resulting from engine wear have also been detected.) At high altitude, as this water vapor emerges into a cold environment, the localized increase in water vapor can raise the relative humidity of the air past the saturation point. The vapor then condenses into tiny water droplets, which freeze if the temperature is low enough. These millions of tiny water droplets and/or ice crystals form the contrails. The time taken for the vapor to cool enough to condense accounts for the contrail forming some distance behind the aircraft. At high altitudes, supercooled water vapor requires a trigger to encourage deposition or condensation. The particles in the aircraft's exhaust act as this trigger, causing the trapped vapor to condense rapidly. Exhaust contrails usually form at high altitude, typically above 8,000 m (26,000 ft), where the air temperature is below −36.5 °C (−34 °F). They can also form closer to the ground when the air is cold and moist.
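Why a small addition of exhaust water easily saturates cold, high-altitude air can be illustrated numerically. The sketch below uses the Magnus approximation for saturation vapour pressure, a standard meteorological formula that is not part of the text above; the coefficients are the commonly cited WMO-style values:

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus approximation over liquid water (standard meteorological
    formula; coefficients are the widely used WMO-style values)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Saturation pressure at the contrail-formation temperature quoted in
# the text (-36.5 C) versus a mild day at the surface (15 C).
e_cruise = saturation_vapor_pressure_hpa(-36.5)
e_ground = saturation_vapor_pressure_hpa(15.0)
print(f"{e_cruise:.2f} hPa at -36.5 C vs {e_ground:.2f} hPa at 15 C")
# The cold air can hold roughly 60x less water vapour before saturating,
# so the vapour added by engine exhaust readily pushes it past saturation,
# while the same addition near the ground normally would not.
```

This is why the same engine that leaves a persistent trail at cruise altitude usually produces no visible trail at low level.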
A 2013–2014 study jointly supported by NASA, the German aerospace center DLR, and Canada's National Research Council NRC, determined that biofuels could reduce contrail generation. This reduction was explained by demonstrating that biofuels produce fewer soot particles, which are the nuclei around which the ice crystals form. The tests were performed by flying a DC-8 at cruising altitude with a sample-gathering aircraft flying in trail. In these samples, the contrail-producing soot particle count was reduced by 50 to 70 percent, using a 50% blend of conventional Jet A1 fuel and HEFA (hydroprocessed esters and fatty acids) biofuel produced from camelina.
== Condensation from decreases in pressure ==
As a wing generates lift, it causes a vortex to form at the wingtip, and at the tip of the flap when deployed (wingtips and flap boundaries represent discontinuities in airflow). These wingtip vortices persist in the atmosphere long after the aircraft has passed. The reduction in pressure and temperature across each vortex can cause water to condense and make the cores of the wingtip vortices visible; this effect is more common on humid days. Wingtip vortices can sometimes be seen behind the wing flaps of airliners during takeoff and landing, and during Space Shuttle landings.
The visible cores of wingtip vortices contrast with the other major type of contrails which are caused by the combustion of fuel. Contrails produced from jet engine exhaust are seen at high altitude, directly behind each engine. By contrast, the visible cores of wingtip vortices are usually seen only at low altitude where the aircraft is travelling slowly after takeoff or before landing, and where the ambient humidity is higher; they trail behind the wingtips and wing flaps rather than behind the engines.
At high-thrust settings the fan blades at the intake of a turbofan engine reach transonic speeds, causing a sudden drop in air pressure. This creates the condensation fog (inside the intake) which is often observed by air travelers during takeoff.
The tips of rotating surfaces (such as propellers and rotors) sometimes produce visible contrails.
In firearms, a vapor trail is sometimes observed when firing under rare conditions, due to condensation induced by changes in air pressure around the bullet. A vapor trail from a bullet is observable from any direction. Vapor trail should not be confused with bullet trace, a refractive effect due to changes in air pressure as the bullet travels, which is a much more common phenomenon (and is usually only observable directly from behind the shooter).
== Impacts on climate ==
Contrails are considered to be aviation's largest contribution to climate change.
In general, aircraft contrails trap outgoing longwave radiation emitted by the Earth and atmosphere more than they reflect incoming solar radiation, resulting in a net increase in radiative forcing. In 1992, this warming effect was estimated between 3.5 mW/m2 and 17 mW/m2.
In 2009, the 2005 value was estimated at 12 mW/m2, based on reanalysis data, climate models and radiative transfer codes, with an uncertainty range of 5 to 26 mW/m2 and a low level of scientific understanding.
Contrail cirrus may be air traffic's largest radiative forcing component, larger than that of all CO2 accumulated from aviation, and could triple from a 2006 baseline to 160–180 mW/m2 by 2050 without intervention. For comparison, the total radiative forcing from human activities amounted to 2.72 W/m2 (with a range between 1.96 and 3.48 W/m2) in 2019, and the increase from 2011 to 2019 alone amounted to 0.34 W/m2. Contrail effects differ markedly depending on when they form: they decrease the daytime temperature and increase the nighttime temperature, reducing the difference between the two. In 2006, it was estimated that night flights contribute 60 to 80% of contrail radiative forcing while accounting for 25% of daily air traffic, and winter flights contribute half of the annual mean radiative forcing while accounting for 22% of annual air traffic.
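The forcing figures above can be put on a common scale. A small sketch converting the contrail estimates to their share of the total anthropogenic forcing quoted in the text (the 2050 value is taken as the midpoint of the 160–180 mW/m2 projection):

```python
# Contrail forcing figures from the text, converted from mW/m2 to W/m2.
contrail_2005 = 0.012          # 2005 best estimate: 12 mW/m2
contrail_2050 = 0.170          # midpoint of the 160-180 mW/m2 projection
total_anthropogenic = 2.72     # total human-caused forcing in 2019, W/m2

share_2005 = contrail_2005 / total_anthropogenic
share_2050 = contrail_2050 / total_anthropogenic
print(f"2005 contrail share of total forcing: {share_2005:.1%}")
print(f"projected 2050 share (no intervention): {share_2050:.1%}")
# Roughly 0.4% in 2005, growing to about 6% of the 2019 total by 2050.
```

Note the comparison mixes years (2005 and projected 2050 contrail values against the 2019 total), so the shares are only indicative.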
Starting from the 1990s, it was suggested that contrails during daytime have a strong cooling effect, and when combined with the warming from night-time flights, this would lead to a substantial diurnal temperature variation (the difference in the day's highs and lows at a fixed station). When no commercial aircraft flew across the USA following the September 11 attacks, the diurnal temperature variation was widened by 1.1 °C (2.0 °F). Measured across 4,000 weather stations in the continental United States, this increase was the largest recorded in 30 years. Without contrails, the local diurnal temperature range was 1 °C (1.8 °F) higher than immediately before. In the southern US, the difference was diminished by about 3.3 °C (6 °F), and by 2.8 °C (5 °F) in the US midwest. However, follow-up studies found that a natural change in cloud cover can more than explain these findings. The authors of a 2008 study wrote, "The variations in high cloud cover, including contrails and contrail-induced cirrus clouds, contribute weakly to the changes in the diurnal temperature range, which is governed primarily by lower altitude clouds, winds, and humidity."
In 2011, a study of British meteorological records taken during World War II identified one event in which the temperature near airbases used by USAAF strategic bombers was 0.8 °C (1.4 °F) higher than the day's average after the bombers flew in formation. However, its authors cautioned that this was a single event, making it difficult to draw firm conclusions. Subsequently, the global response to the 2020 coronavirus pandemic reduced global air traffic by nearly 70% relative to 2019, providing an extended opportunity to study the impact of contrails on regional and global temperature. Multiple studies found "no significant response of diurnal surface air temperature range" as a result of contrail changes, and either "no net significant global ERF" (effective radiative forcing) or a very small warming effect.
An EU project launched in 2020 aims to assess the feasibility of minimising contrail effects through operational choices in flight planning. Other similar projects include ContrailNet from Eurocontrol, Reviate and the Ciconia project, as well as Google's 'project contrails'.
== Head-on contrails ==
A contrail from an airplane flying towards the observer can appear to be generated by an object moving vertically. On 8 November 2010 in the US state of California, a contrail of this type gained media attention as a "mystery missile" that could not be explained by U.S. military and aviation authorities, and its explanation as a contrail took more than 24 hours to become accepted by U.S. media and military institutions.
== Distrails ==
Where an aircraft passes through a cloud, it can disperse the cloud in its path. This is known as a distrail (short for "dissipation trail"). The plane's warm engine exhaust and enhanced vertical mixing in the aircraft's wake can cause existing cloud droplets to evaporate. If the cloud is sufficiently thin, such processes can yield a cloud-free corridor in an otherwise solid cloud layer. An early satellite observation of distrails that most likely were elongated, aircraft-induced fallstreak holes appeared in Corfidi and Brandli (1986).
Clouds form when invisible water vapor condenses into microscopic water droplets or into microscopic ice crystals. This may happen when air with a high proportion of gaseous water cools. A distrail forms when the heat of engine exhaust evaporates the liquid water droplets in a cloud, turning them back into invisible, gaseous water vapor. Distrails also may arise as a result of enhanced mixing (entrainment) of drier air immediately above or below a thin cloud layer following passage of an aircraft through the cloud, as shown in the second image below:
== See also ==
== References ==
== External links ==
Contrail Education (archived) | NASA
Contrails.nl: Contrails and AviationSmog Archived 20 February 2011 at the Wayback Machine | Galleries of contrails and aviation smog
Contrail Science | Reference site for debunking weird stories about contrails
Dunning, Brian (15 February 2007). "Skeptoid #27: Chemtrails: Real or Not?". Skeptoid. |
Contrail | Contrails (; short for "condensation trails") or vapour trails are line-shaped clouds produced by aircraft engine exhaust or changes in air pressure, typically at aircraft cruising altitudes several kilometres/miles above the Earth's surface. They are composed primarily of water, in the form of ice crystals. The combination of water vapor in aircraft engine exhaust and the low ambient temperatures at high altitudes causes the trails' formation. Impurities in the engine exhaust from the fuel, including soot and sulfur compounds (0.05% by weight in jet fuel) provide some of the particles that serve as cloud condensation nuclei for water droplet growth in the exhaust. If water droplets form, they can freeze to form ice particles that compose a contrail. Their formation can also be triggered by changes in air pressure in wingtip vortices, or in the air over the entire wing surface. Contrails, and other clouds caused directly by human activity, are called homogenitus.
The vapor trails produced by rockets are referred to as "missile contrails" or "rocket contrails." The water vapor and aerosol produced by rockets promote the "formation of ice clouds in ice supersaturated layers of the atmosphere." Missile contrail clouds mainly comprise "metal oxide particles, high-temperature water vapor condensation particles, and other byproducts of engine combustion."
Depending on the temperature and humidity at the altitude where the contrails form, they may be visible for only a few seconds or minutes, or may persist for hours and spread to be several kilometres/miles wide, eventually resembling natural cirrus or altocumulus clouds. Persistent contrails are of particular interest to scientists because they increase the cloudiness of the atmosphere. The resulting cloud forms are formally described as homomutatus, and may resemble cirrus, cirrocumulus, or cirrostratus, and are sometimes called cirrus aviaticus. Some persistent spreading contrails contribute to climate change.
== Condensation trails as a result of engine exhaust ==
Engine exhaust is predominantly made up of water and carbon dioxide, the combustion products of hydrocarbon fuels. Many other chemical byproducts of incomplete hydrocarbon fuel combustion, including volatile organic compounds, inorganic gases, polycyclic aromatic hydrocarbons, oxygenated organics, alcohols, ozone and particles of soot have been observed at lower concentrations. The exact quality is a function of engine type and basic combustion engine function, with up to 30% of aircraft exhaust being unburned fuel. (Micron-sized metallic particles resulting from engine wear have also been detected.) At high altitudes as this water vapor emerges into a cold environment, the localized increase in water vapor can raise the relative humidity of the air past saturation point. The vapor then condenses into tiny water droplets which freeze if the temperature is low enough. These millions of tiny water droplets and/or ice crystals form the contrails. The time taken for the vapor to cool enough to condense accounts for the contrail forming some distance behind the aircraft. At high altitudes, supercooled water vapor requires a trigger to encourage deposition or condensation. The exhaust particles in the aircraft's exhaust act as this trigger, causing the trapped vapor to condense rapidly. Exhaust contrails usually form at high altitudes; usually above 8,000 m (26,000 ft), where the air temperature is below −36.5 °C (−34 °F). They can also form closer to the ground when the air is cold and moist.
A 2013–2014 study jointly supported by NASA, the German aerospace center DLR, and Canada's National Research Council NRC, determined that biofuels could reduce contrail generation. This reduction was explained by demonstrating that biofuels produce fewer soot particles, which are the nuclei around which the ice crystals form. The tests were performed by flying a DC-8 at cruising altitude with a sample-gathering aircraft flying in trail. In these samples, the contrail-producing soot particle count was reduced by 50 to 70 percent, using a 50% blend of conventional Jet A1 fuel and HEFA (hydroprocessed esters and fatty acids) biofuel produced from camelina.
== Condensation from decreases in pressure ==
As a wing generates lift, it causes a vortex to form at the wingtip, and at the tip of the flap when deployed (wingtips and flap boundaries represent discontinuities in airflow). These wingtip vortices persist in the atmosphere long after the aircraft has passed. The reduction in pressure and temperature across each vortex can cause water to condense and make the cores of the wingtip vortices visible; this effect is more common on humid days. Wingtip vortices can sometimes be seen behind the wing flaps of airliners during takeoff and landing, and during Space Shuttle landings.
The visible cores of wingtip vortices contrast with the other major type of contrails which are caused by the combustion of fuel. Contrails produced from jet engine exhaust are seen at high altitude, directly behind each engine. By contrast, the visible cores of wingtip vortices are usually seen only at low altitude where the aircraft is travelling slowly after takeoff or before landing, and where the ambient humidity is higher; they trail behind the wingtips and wing flaps rather than behind the engines.
At high-thrust settings the fan blades at the intake of a turbofan engine reach transonic speeds, causing a sudden drop in air pressure. This creates the condensation fog (inside the intake) which is often observed by air travelers during takeoff.
The tips of rotating surfaces (such as propellers and rotors) sometimes produce visible contrails.
In firearms, a vapor trail is sometimes observed when firing under rare conditions, due to condensation induced by changes in air pressure around the bullet. A vapor trail from a bullet is observable from any direction. Vapor trail should not be confused with bullet trace, a refractive effect due to changes in air pressure as the bullet travels, which is a much more common phenomenon (and is usually only observable directly from behind the shooter).
== Impacts on climate ==
It is considered that the largest contribution of aviation to climate change comes from contrails.
In general, aircraft contrails trap outgoing longwave radiation emitted by the Earth and atmosphere more than they reflect incoming solar radiation, resulting in a net increase in radiative forcing. In 1992, this warming effect was estimated between 3.5 mW/m2 and 17 mW/m2.
In 2009, its 2005 value was estimated at 12 mW/m2, based on the reanalysis data, climate models, and radiative transfer codes; with an uncertainty range of 5 to 26 mW/m2, and with a low level of scientific understanding.
Contrail cirrus may be air traffic's largest radiative forcing component, larger than all CO2 accumulated from aviation, and could triple from a 2006 baseline to 160–180 mW/m2 by 2050 without intervention. For comparison, the total radiative forcing from human activities amounted to 2.72 W/m2 (with a range between 1.96 and 3.48W/m2) in 2019, and the increase from 2011 to 2019 alone amounted to 0.34W/m2. Contrail effects differ a lot depending on when they are formed, as they decrease the daytime temperature and increase the nighttime temperature, reducing their difference. In 2006, it was estimated that night flights contribute 60 to 80% of contrail radiative forcing while accounting for 25% of daily air traffic, and winter flights contribute half of the annual mean radiative forcing while accounting for 22% of annual air traffic.
Starting from the 1990s, it was suggested that contrails during daytime have a strong cooling effect, and when combined with the warming from night-time flights, this would lead to a substantial diurnal temperature variation (the difference in the day's highs and lows at a fixed station). When no commercial aircraft flew across the USA following the September 11 attacks, the diurnal temperature variation was widened by 1.1 °C (2.0 °F). Measured across 4,000 weather stations in the continental United States, this increase was the largest recorded in 30 years. Without contrails, the local diurnal temperature range was 1 °C (1.8 °F) higher than immediately before. In the southern US, the difference was diminished by about 3.3 °C (6 °F), and by 2.8 °C (5 °F) in the US midwest. However, follow-up studies found that a natural change in cloud cover can more than explain these findings. The authors of a 2008 study wrote, "The variations in high cloud cover, including contrails and contrail-induced cirrus clouds, contribute weakly to the changes in the diurnal temperature range, which is governed primarily by lower altitude clouds, winds, and humidity."
In 2011, a study of British meteorological records taken during World War II identified one event where the temperature was 0.8 °C (1.4 °F) higher than the day's average near airbases used by USAAF strategic bombers after they flew in a formation. However, its authors cautioned that this was a single event, making it difficult to draw firm conclusions from it. Then, the global response to the 2020 coronavirus pandemic led to a reduction in global air traffic of nearly 70% relative to 2019. Thus, it provided an extended opportunity to study the impact of contrails on regional and global temperature. Multiple studies found "no significant response of diurnal surface air temperature range" as the result of contrail changes, and either "no net significant global ERF" (effective radiative forcing) or a very small warming effect.
An EU project launched in 2020 aims to assess the feasibility of minimising contrail effects by the operational choices in making flight plans. Other similar projects include ContrailNet from Eurocontrol, Reviate, and the Ciconia project, as well as Google's 'project contrails'.
== Head-on contrails ==
A contrail from an airplane flying towards the observer can appear to be generated by an object moving vertically. On 8 November 2010 in the US state of California, a contrail of this type gained media attention as a "mystery missile" that could not be explained by U.S. military and aviation authorities, and its explanation as a contrail took more than 24 hours to become accepted by U.S. media and military institutions.
== Distrails ==
Where an aircraft passes through a cloud, it can disperse the cloud in its path. This is known as a distrail (short for "dissipation trail"). The plane's warm engine exhaust and enhanced vertical mixing in the aircraft's wake can cause existing cloud droplets to evaporate. If the cloud is sufficiently thin, such processes can yield a cloud-free corridor in an otherwise solid cloud layer. An early satellite observation of distrails that most likely were elongated, aircraft-induced fallstreak holes appeared in Corfidi and Brandli (1986).
Clouds form when invisible water vapor condenses into microscopic water droplets or microscopic ice crystals. This may happen when air with a high proportion of gaseous water cools. A distrail forms when the heat of engine exhaust evaporates the liquid water droplets in a cloud, turning them back into invisible, gaseous water vapor. Distrails may also arise from enhanced mixing (entrainment) of drier air immediately above or below a thin cloud layer following the passage of an aircraft through the cloud.
== See also ==
== References ==
== External links ==
Contrail Education (archived) | NASA
Contrails.nl: Contrails and Aviation Smog Archived 20 February 2011 at the Wayback Machine | Galleries of contrails and aviation smog
Contrail Science | Reference site for debunking weird stories about contrails
Dunning, Brian (15 February 2007). "Skeptoid #27: Chemtrails: Real or Not?". Skeptoid. |
Convention on International Civil Aviation | The Convention on International Civil Aviation, also known as the Chicago Convention, established the International Civil Aviation Organization (ICAO), a specialized agency of the United Nations charged with coordinating international air travel. The Convention establishes rules of airspace, aircraft registration and safety, security, and sustainability, and details the rights of the signatories in relation to air travel. The convention also contains provisions pertaining to taxation.
The document was signed on December 7, 1944, in Chicago by 52 signatory states. It received the requisite 26th ratification on March 5, 1947, and went into effect on April 4, 1947, the same date that ICAO came into being. In October of the same year, ICAO became a specialized agency of the United Nations Economic and Social Council (ECOSOC). The convention has since been revised eight times (in 1959, 1963, 1969, 1975, 1980, 1997, 2000 and 2006).
As of March 2019, the Chicago Convention had 193 state parties, which includes all member states of the United Nations except Liechtenstein. The Cook Islands is a party to the Convention although it is not a member of the UN. The convention has been extended to cover Liechtenstein by the ratification of Switzerland.
== Main articles ==
Some important articles are:
Article 1: Every state has complete and exclusive sovereignty over airspace above its territory.
Article 3 bis: Every other state must refrain from resorting to the use of weapons against civil aircraft in flight.
Article 5: Aircraft of states, other than scheduled international air services, have the right to make flights across states' territories and to make stops without obtaining prior permission. However, the state flown over may require the aircraft to make a landing.
Article 6: (Scheduled air services) No scheduled international air service may be operated over or into the territory of a contracting State, except with the special permission or other authorization of that State.
Article 10: (Landing at customs airports) The state can require that landing be at a designated customs airport; similarly, departure from the territory can be required to be from a designated customs airport.
Article 12: Each state shall keep its own rules of the air as uniform as possible with those established under the convention; the duty to ensure compliance with these rules rests with the contracting state.
Article 13: (Entry and Clearance Regulations) A state's laws and regulations regarding the admission and departure of passengers, crew or cargo from aircraft shall be complied with on arrival, upon departure and whilst within the territory of that state.
Article 16: The authorities of each state shall have the right to search the aircraft of other states on landing or departure, without unreasonable delay.
Article 24: Aircraft on a flight to, from, or across the territory of another contracting State shall be admitted temporarily free of duty, subject to the customs regulations of the State. Fuel, lubricating oils, spare parts, regular equipment and aircraft stores on board an aircraft of a contracting State, on arrival in the territory of another contracting State and retained on board on leaving the territory of that State shall be exempt from customs duty, inspection fees or similar national or local duties and charges. This exemption shall not apply to any quantities or articles unloaded, except in accordance with the customs regulations of the State, which may require that they shall be kept under customs supervision.
Article 29: Before an international flight, the pilot in command must ensure that the aircraft is airworthy, duly registered and that the relevant certificates are on board the aircraft. The required documents are:
Certificate of registration
Certificate of airworthiness
Passenger names, place of boarding and destination
Crew licenses
Journey Logbook
Radio Licence
Cargo manifest
Article 30: The aircraft of a state flying in or over the territory of another state shall only carry radios licensed and used in accordance with the regulations of the state in which the aircraft is registered. The radios may only be used by members of the flight crew suitably licensed by the state in which the aircraft is registered.
Article 32: The pilot and crew of every aircraft engaged in international aviation must have certificates of competency and licences issued or validated by the state in which the aircraft is registered.
Article 33: (Recognition of Certificates and Licences) Certificates of airworthiness, certificates of competency and licences issued or validated by the state in which the aircraft is registered shall be recognized as valid by other states. The requirements for the issue of those certificates of airworthiness, certificates of competency or licences must be equal to or above the minimum standards established by the convention.
Article 40: No aircraft or personnel with endorsed licences or certificates will engage in international navigation except with the permission of the state or states whose territory is entered. Any licence holder who does not satisfy international standards relating to that licence or certificate shall have attached to or endorsed on that licence information regarding the particulars in which he does not satisfy those standards.
== Annexes ==
The convention is supported by nineteen annexes containing standards and recommended practices (SARPs). The annexes are amended regularly by ICAO and are as follows:
Annex 1 – Personnel Licensing
Licensing of flight crews, air traffic controllers and aircraft maintenance personnel, including Chapter 6, which contains medical standards.
Annex 2 – Rules of the Air
Appendix 1 – Signals
Appendix 2 – Interception of civil aircraft
Appendix 3 – Tables of cruising levels
Appendix 4 – Unmanned free balloons
Attachment A – Interception of civil aircraft
Attachment B – Unlawful interference
Annex 3 – Meteorological Service for International Air Navigation
Vol I – Core SARPs
Vol II – Appendices and Attachments
Annex 4 – Aeronautical Charts
Annex 5 – Units of Measurement to be used in Air and Ground Operations
Annex 6 – Operation of Aircraft
Part I – International Commercial Air Transport – Aeroplanes
Part II – International General Aviation – Aeroplanes
Part III – International Operations – Helicopters
Annex 7 – Aircraft Nationality and Registration Marks
Annex 8 – Airworthiness of Aircraft
Annex 9 – Facilitation
Annex 10 – Aeronautical Telecommunications
Vol I – Radio Navigation Aids
Vol II – Communication Procedures including those with PANS status
Vol III – Communication Systems
Part I – Digital Data Communication Systems
Part II – Voice Communication Systems
Vol IV – Surveillance Radar and Collision Avoidance Systems
Vol V – Aeronautical Radio Frequency Spectrum Utilization
Annex 11 – Air Traffic Services – Air Traffic Control Service, Flight Information Service and Alerting Service
Annex 12 – Search and Rescue
Annex 13 – Aircraft Accident and Incident Investigation
Annex 14 – Aerodromes
Vol I – Aerodrome Design and Operations
Vol II – Heliports
Annex 15 – Aeronautical Information Services, like NOTAMs
Annex 16 – Environmental Protection
Vol I – Aircraft Noise
Vol II – Aircraft Engine Emissions
Vol III – CO2 Certification Requirement
Vol IV – Carbon Offsetting and Reduction Scheme for International Aviation (CORSIA)
Annex 17 – Security: Safeguarding International Civil Aviation Against Acts of Unlawful Interference
Annex 18 – The Safe Transport of Dangerous Goods by Air
Annex 19 – Safety Management (Since 14 November 2013)
Annex 5, Units of Measurement to be Used in Air and Ground Operations, named in its Table 3-3 three "non-SI alternative units permitted for temporary use with the SI": the foot (for vertical distance = altitude), the knot (for speed), and the nautical mile (for long distance).
== Kerosene tax ==
Article 24 of the Chicago Convention stipulates that, when flying from one contracting state to another, the kerosene already on board an aircraft may not be taxed by the state where the aircraft lands, nor by a state through whose airspace the aircraft has flown. This was intended to prevent double taxation. However, the Chicago Convention contains no tax rules covering the refuelling of an aircraft before departure.
The Chicago Convention does not preclude a kerosene tax on domestic flights or on refuelling before international flights. Although ICAO has produced various policy documents suggesting that no taxes of any kind should be placed on aviation fuel, none of these is legally binding, and they are not found in the Chicago Convention itself.
Numerous bilateral agreements, so-called 'air services agreements', make more extensive arrangements, often including tax exemption when refuelling an aircraft that has come from another contracting state; these, however, are independent of the Chicago Convention, and some air services agreements do allow the taxation of fuel.
== See also ==
International Civil Aviation Day
== Notes ==
== References ==
== Further reading ==
Little, Virginia (1949). "Control of International Air Transport". International Organization. 3 (1): 29–40.
== Bibliography ==
Paul Michael Krämer, Chicago Convention, 50th Anniversary Conference, Chicago, October 31 – November 1, 1994. Zeitschrift für Luft und Weltraumrecht 1995, S. 57.
== External links ==
Convention on International Civil Aviation (9th edition), International Civil Aviation Organization, 2006
Annexes 1 to 18 - Convention on International Civil Aviation, International Civil Aviation Organization, 2007
The Postal History of ICAO : The Chicago Convention |
Cyclability | Cyclability is the degree of ease of bicycle circulation. Greater cyclability in cities is associated with, among other things, benefits for people's health, lower levels of air and noise pollution, smoother traffic flow, and increased productivity.
== Cyclability factors ==
Among the factors that affect cyclability are:
=== Safety ===
The safety of cycle paths is a requirement for high cyclability:
The safest roads are those that are segregated from motorized traffic (bike lanes), followed by shared paths and, finally, lanes shared with other vehicles.
Cycle paths should be wide enough for two bikes to cross or pass each other safely.
The visibility of the road must make it possible to anticipate possible braking and intersections, avoiding curves at right angles.
Intersections must, in turn, be well marked for both cyclists and motorized traffic.
Routes must avoid obstacles, such as lampposts or benches, as well as situations that require carrying the bike, such as stairs; where stairs are unavoidable, bicycle ramps can be incorporated.
The pavement must be smooth, with obstacles such as curbs lowered, and surfaced with materials that offer little rolling resistance, drain well, and are not slippery when it rains.
=== Coherence ===
A coherent cycling network implies:
The cycle paths must cover the entire extension of the city, so that the bicycle can be used to go to as many destinations as possible. Ideally, there should be a cycle path within 250 meters of any point in the city.
They have to be connected to each other continuously.
There must be secure bicycle parkings both at the origin and at the destination of the routes.
The design of cycle paths must be uniform, so that all citizens can quickly recognize the purpose of the path, avoiding conflicts.
The routes must be correctly signposted, including the destinations offered by each of the routes.
=== Directness ===
Bicycles are powered by people's physical effort; therefore, a highly cyclable cycling network must allow direct movement without great exertion:
Routes between origins and destinations should be as direct as possible, without the need for large detours.
The cycle paths should go through the main streets, as they are usually the ones that host the majority of shops and services.
They should avoid or minimize slopes.
Reduce the number of stops, such as traffic lights or intersections, which require greater physical effort. This may include Idaho stop, dead red, or red-light-as-yield traffic laws.
== Cyclability indicators ==
One of the best indicators of the degree of cyclability is the balanced proportion of genders and ages that make daily use of the bicycle. Women, children and the elderly are the ones who have a greater perception of insecurity, so if a city has low cyclability, they will not consider the bicycle as a usual means of transport. On the contrary, a composition of bicycle users similar to the demographic structure will indicate a highly cyclable space.
== See also ==
== References == |
Cycle rickshaw | The cycle rickshaw is a small-scale local means of transport. It is a type of tricycle designed to carry passengers on a for-hire basis. It is also known by a variety of other names such as bike taxi, velotaxi, pedicab, bikecab, cyclo, beca, becak, trisikad, sikad, tricycle taxi, trishaw, or hatchback bike.
While the rickshaw is pulled by a person on foot, the cycle rickshaw is human-powered by pedaling. By contrast, the auto rickshaw is motorized.
== Overview ==
The first cycle rickshaws were built in the 1880s and were first used widely in 1929 in Singapore. Six years later, they outnumbered pulled rickshaws there. By 1950, cycle rickshaws were found in every south and east Asian country. By the late 1980s, there were an estimated 4 million cycle rickshaws worldwide.
The vehicle is generally pedal-driven by a driver, though some are equipped with an electric motor to assist the driver.
The vehicle is usually a tricycle, though some quadracycle models exist, and some bicycles with trailers are configured as cycle rickshaws. Some cycle rickshaws have gas or electric motors.
=== Passenger configuration ===
The configuration of driver and passenger seats varies. Generally the driver sits in front of the passengers to pedal the rickshaw, though in some designs the cyclist driver sits behind the passengers. In many Asian countries, like Bangladesh, India, and China, the passenger seat is located behind the driver, while in Indonesia, Malaysia, Cambodia, and Vietnam the cyclist driver sits behind the passenger. In the Philippines, the passenger seats are usually located beside the driver in a sidecar. Similarly, passengers sit alongside the driver in both the trishaw in Singapore and the sai kaa in Burma.
== Nomenclature ==
The cycle rickshaw is known by a variety of other names, including:
velotaxi (used in Germany)
pussuss (used in parts of France)
velotram (used in parts of France)
bikecab
cyclo (used in Vietnam and Cambodia)
pedicab (used in the United Kingdom, United States, and Canada)
bike taxi (used in Buffalo, New York)
bicitaxi (used in Mexico)
taxi ecologico (used in Mexico)
trishaw
beca (used in Malaysia)
becak (used in Indonesia)
helicak (used in Indonesia), a motorized version of the becak with an engine instead of pedals
traysikad, trisikad, sikad, or padyak (used in the Philippines)
== Country overview ==
Not only are cycle rickshaws used in Asian countries, but they are also used in some cities in Europe and North America. They are used primarily for their novelty value, as an entertaining form of transportation for tourists and locals, but they also have environmental benefits and may be quicker than other forms of transport if traffic congestion is high. Cycle rickshaws used outside Asia often are mechanically more complex, having multiple gears, more powerful brakes, and in some cases electrical motors to provide additional power.
=== Africa ===
==== Madagascar ====
In Madagascar rickshaws, including cycle rickshaws or cyclo-pousse, are a common form of transportation in a number of cities. Rickshaws are known as pousse-pousse, meaning push-push, reportedly for the pulled rickshaws that required a second person to push the vehicles up hills. Cycles are more common in the hillier areas, like Toamasina.
=== Americas ===
==== Canada ====
In Canada, pedicabs operate in Victoria and Vancouver, British Columbia; they are regulated in Toronto, Ontario, and in Vancouver.
==== Mexico ====
In Mexico, they are called bicitaxi or taxi ecologico (literally "ecological taxi").
==== United States ====
In many major cities, pedicabs can be found rolling about city centers, nightlife districts, park lands, sports stadiums, and tourist-heavy areas. Myriad uses have been found across the United States, including car-park-to-event transport at large events nationwide. Thousands of pedicabs today operate on streets in locales including Green Bay and Milwaukee, Wisconsin; Austin, Texas; Manhattan, New York; Chicago, Illinois; San Diego and San Francisco, California; Boston, Massachusetts; Miami, Florida; Washington, D.C.; Denver, Colorado; Portland, Oregon; Seattle, Washington; Charleston, South Carolina; New Orleans, Louisiana; Nashville, Tennessee; Phoenix, Arizona; Salt Lake City, Utah; Philadelphia, Pennsylvania; and dozens of other hot spots. Manhattan has the largest pedicab fleet operating within city limits, and the City of New York requires that its approximately 850 pedicabs always carry operating permits issued by the city.
Pedicabs in the United States seem to have gotten their start at the 1962 World's Fair in Seattle.
=== Asia ===
==== Bangladesh ====
Cycle rickshaws (রিকশা riksha) are the most popular mode of transport in Bangladesh and are available for hire throughout the country, including the capital city Dhaka, known as the "Rickshaw Capital of the World". They are either pedal- or motor-powered. They were introduced there about 1938, and by the end of the 20th century there were more than 300,000 cycle rickshaws in Dhaka.
Approximately 400,000 cycle rickshaws run each day. Cycle rickshaws in Bangladesh are also more convenient than the country's other public modes of transport, namely auto rickshaws, cabs and buses. They are mostly convertible, decorated rickshaws with folding hoods and are the only kind of vehicle that can be driven in the many neighbourhoods of the city with narrow streets and lanes. However, increasing traffic congestion and the resulting collisions have led to the banning of rickshaws on many major streets in the city. Urban employment in Bangladesh also largely depends on cycle rickshaws. Because of inflation and unemployment in the rural areas, people from villages crowd into the cities to become rickshaw drivers, locally called riksha-wala (রিকশাওয়ালা).
==== Cambodia ====
Cycle rickshaws are known as cyclo (pronounced see-clo) in Cambodia, derived from the French cyclo.
==== China ====
Since the 1950s, when the pulled rickshaw was phased out, mid-city and large city passengers may travel using three-wheeled pedicabs, or cycle rickshaws. The Chinese term for the conveyance is sanlunche (三轮车). The vehicles may be pedal- or motor-powered. In Shanghai, most of the vehicles are powered by electricity.
Tourists are warned to beware of overcharging vendors, especially those who wear an "old-fashioned costume" or are located near tourist attractions.
Whilst many local tourism authorities still issue licences for rickshaw drivers to carry passengers, authorities in China are tightening rules in order to curb the cheating of tourists and to reduce traffic congestion: a typical Chinese cycle rickshaw travels at less than 10 km/h and is wide enough to fill an entire motor or bicycle lane, so the vehicles are blamed as a major cause of congestion and have already been banned in many cities.
==== India ====
The first attempt at improving existing cycle rickshaws and converting them to electric ones was made by the Nimbkar Agricultural Research Institute in the late 1990s.
===== Service availability =====
Cycle rickshaws were used in Kolkata starting about 1930 and are now common in rural and urban areas of India.
===== Ecocabs and similar service =====
Navdeep Asija started a dial-a-cycle-rickshaw concept known as Ecocabs. These environmentally friendly Ecocabs operate in the Punjab towns of Fazilka and Amritsar, as well as in Central Delhi and Kolkata. Passengers may call to request transport service, similar to dial-up taxi cab operations.
In November 2010, Patiala GreenCABS, similar to Ecocabs, were introduced in the city by the local non-governmental organisation (NGO) the Patiala Foundation.
===== Financing =====
In West Bengal the Rotaract Club of Serampore finances cycle rickshaw purchases so that unemployed people can begin their own rickshaw business. The loans are repaid from the workers' earnings. When paid in full, the rickshaw workers own their rickshaw and other unemployed individuals are entered into the program.
===== Soleckshaw =====
The Soleckshaw is a battery-electric assisted cycle rickshaw. The battery is designed to be charged or exchanged at centralised solar-powered charging stations. Developed by the Council of Scientific & Industrial Research, it was launched in Delhi in October 2008. However, in September 2010 it was reported that no Soleckshaws had been sold on a commercial basis, and the approximately 30 demonstration units, initially deployed in Ahmedabad, Chandigarh, Delhi, Dhanbad, Durgapur, Jaipur, and Kolkata, were "not in operation due to various local administrative and management problems", and the charging stations "are not being used at this point of time as the vehicles are not in operation at those locations".
The 2010 Union budget of India had a concessional excise duty of 4% on solar cycle rickshaws.
==== Indonesia ====
Cycle rickshaws in Indonesia are called becak (pronounced [ˈbetʃaʔ]). They began being used in Jakarta about 1936. Becak were considered an icon of the capital city of Jakarta prior to their ban in the 1970s. Citing concerns of public order, the city government forbade them on the city's main streets. Scenes of the anti-becak campaign appear in the 1971 Canadian film Wet Earth and Warm People, a documentary by Michael Rubbo. Despite the attempts at eradication, however, many becak still operate near slums throughout the city. Attempts to reinforce the ban resulted in large-scale seizures of the vehicles in the late 1990s and in 2007. In 2018, Governor Anies Baswedan attempted to allow becak again because of a political contract with becak drivers during his campaign.
There are two types of becak in Indonesia: in the first type, the driver sits behind the passenger (similar to Dutch-style cargo bikes); in the other, mainly found in Sumatra, the driver sits beside the passenger. The becak is still used in various parts of Indonesia, especially in smaller cities and towns.
==== Malaysia ====
In Malaysia, pedestrian-pulled rickshaws were gradually replaced by cycle rickshaws (beca in Malay, from Hokkien bé-chhia 馬車 "horse cart"). Cycle rickshaws were ubiquitous up to the 1970s in cities. Since then, rapid urbanisation has increased demand for more efficient public transport, resulting in dwindling cycle rickshaw numbers. Today, cycle rickshaws are operated mostly as a tourist attraction, with small numbers operating in Malacca, Penang, Kelantan, and Terengganu.
==== Myanmar ====
In Myanmar, cycle rickshaws or trishaws (Burmese: ဆိုက်ကား, romanized: saik kar, pronounced as in the English words 'side car') first came into wide use in 1938, when the 1300 Revolution, which originated from the Chauk oil-field strike, inspired the people in Mandalay to a consciousness of nationalism and to boycott British goods and services. The auto body technician Saya Nyo built the first trishaw in Mandalay by attaching a sidecar to the side of an old bicycle, so the two passengers sit to the right of the driver.
Only two forms of transportation were then available in the city; the cab and the electric train. The latter could run only on ten-kilometre (six-mile) tracks. Trishaws could reach every nook and cranny, so the spirit of nationalism plus the advantage of trishaws reaching everywhere made them so popular among Mandalayans that even the train company had to stop its business.
==== Nepal ====
In the Terai region of Nepal, cycle rickshaws are still the most popular means of public transport for short-distance commuting. Most big cities in the Terai have hundreds of cycle rickshaws that carry local commuters and travellers, and they are also used for carrying goods. Since the Terai region borders India, cycle rickshaws are also a popular means for shoppers, businessmen and travellers to travel in and out of the country. The open border between India and Nepal enables rickshaw owners from both countries to operate across it without restriction.
In the hilly regions of Nepal, however, cycle rickshaws are primarily used to attract tourists, who can relax and travel around the popular streets and markets at reasonable fares. They are particularly popular among tourists for roaming the streets and markets of Thamel, Kathmandu.
==== Pakistan ====
The cycle and pulled rickshaw were banned in Pakistan in November 1991.
==== Philippines ====
In the Philippines, it is called a pedicab, traysikad, trisikad—or simply sikad or padyak, from the Philippine word meaning to tramp or stamp one's feet. It is made by mounting a sidecar to a regular bicycle. They are used mainly to ferry passengers short distances along smaller, more residential streets, often to or from jeepneys or other public utility vehicles. They are also used for transporting cargo too heavy to carry by hand and over a distance too short or roads too congested for motor transport, such as a live pig. During rainy seasons, they are useful as a way to avoid walking through flood waters. Along with the jeepney, the motorcycle-powered tricycle, and the engine-powered kuliglig, the open-air pedicab provides shade when needed.
==== South Korea ====
The Korean term for cycle rickshaw is illyeokgeo (인력거), which can be pedal- or motor-powered, though most in South Korea are electric. While not commonly used as a primary mode of transportation, cycle rickshaws can still be found in certain areas like Bukchon Hanok Village in Seoul, where they operate mainly for tourism purposes.
==== Thailand ====
In Thailand, any three-wheeler is called samlor (Thai: สามล้อ, lit. 'three wheels'), whether motorised or not, including pedicabs, motorcycles with attached vending carts or sidecars, etc. The driver is also called samlor.
==== Vietnam ====
Cycle rickshaws are known as xích lô (pronounced sick-low, from the French cyclo) in Vietnam. The cyclo was invented by a Frenchman, P. Coupeaud, and was introduced in Cambodia and Saigon in 1939. From 2008 to March 2012, because of traffic obstruction, cyclos were completely forbidden in Ho Chi Minh City and other provinces, except for cyclo tours organised by tourist agencies. A similar pedicab, the xe lôi of the Mekong Delta, is now rarely found, surviving in a few provinces such as Sóc Trăng, Vĩnh Long, and Châu Đốc, and is on its way to disappearing.
Cyclo, a 1995 film about a cyclo driver, won the Golden Lion at the 52nd Venice International Film Festival.
Beyond their practical utility, cyclos held cultural significance in Saigon. They appeared in literature, art, and cinema, becoming emblematic of the city's identity. From romantic rendezvous to everyday commutes, cyclos featured prominently in the daily lives of Saigonese residents.
Despite the challenges, efforts are underway to preserve the legacy of cyclo in Saigon. Some organizations are restoring vintage models, while others are promoting eco-friendly alternatives to traditional cyclos. These initiatives aim to celebrate the cultural heritage of these iconic vehicles and ensure their continued presence in the city.
=== Europe ===
Cycle rickshaws, also called pedicabs, are used in most large continental European cities.
==== Denmark ====
Copenhagen and Odense have pedicab service.
==== Finland ====
Cycle rickshaws are available for rent at Kaivopuisto in Helsinki. The rental company brought the vehicles from the city of Lappeenranta in 2009.
==== France ====
Most French cities have one or more pedicab operators, locally known as PussPuss or VeloTaxi. They are most common in Paris, Nantes, Lyon, Montpellier and Valence, each of which has one or more units in operation.
==== Germany ====
Lake Constance, Berlin, Frankfurt, Dresden, and Hamburg offer cycle rickshaw, also called pedicab, service.
===== Velotaxi =====
In the 1990s, German-made cycle rickshaws called velotaxis were created. They cost about one-third to one-half as much as regular taxis. Velotaxis are three-wheeled vehicles with a "space-age lightweight plastic cab that is open on both sides", a space for a driver and, behind the driver, space for two passengers. They have been made in Berlin by Ludger Matuszewski, founder of the Velotaxi GmbH company. Velotaxis are often used for group functions like weddings. Under German traffic laws at the time, transporting people on bicycles was forbidden.
===== Electric-assist pedicabs =====
Berlin's Senate, police, and taxi associations finally agreed that the "cult-flitzer" could be integrated into the city's traffic flow, and Germany's highest court later ruled that transporting people on bikes was legal. The CityCruiser is a modern, newly designed pedicab with a 500-watt electric assist motor. Although these electric-assist pedicabs were engineered in Germany, they are manufactured in the Czech Republic, and some clones are now also produced in China. The Chinese clone can be purchased for about 3,000 US dollars; the German original is around 6,000 US dollars (the newest version, over €9,000). The batteries last about 4 hours on a full charge. As with a few recumbent and semi-recumbent designs, some drivers may suffer from knee and joint pain because of the weight of the vehicle (145 kg).
==== Hungary ====
Pedicab service is available in Budapest.
==== Ireland ====
Pedicabs operate in Cork and Dublin, Ireland.
==== Italy ====
Pedicab service is available in Florence, Milan, Rome, and Bari.
==== The Netherlands ====
Pedicab service is available in Amsterdam, The Hague and in the Caribbean, at Willemstad.
Thomas Lundy of Amsterdam adapted his battery-electric assisted cycle rickshaw to become what he terms "semi-solar powered", resulting in a video report on Reuters.
==== Norway ====
Pedicab service is available in Oslo, Fredrikstad, Bergen, Porsgrunn, and Tønsberg.
==== Poland ====
During World War II, when Poland was under Nazi German occupation, the German authorities confiscated most privately owned cars and many of the streetcars and buses. Because of that, public transport was partially replaced by cycle rickshaws, at first improvised and with time mass-produced by bicycle factories. Cycle rickshaws became popular in Warsaw and by the start of the Warsaw Uprising were a common sight on the city's streets.
Pedicabs can still be found in most large cities in Poland, from Łódź to Warsaw.
==== Spain ====
Alicante, Barcelona, Zaragoza, Málaga, San Sebastian, and Seville have pedicab service.
==== United Kingdom ====
Cycle rickshaws operate in central London, including Soho, Piccadilly, Leicester Square, and Covent Garden. Pedal Me is a pedicab company using electric cargo bikes to transport passengers and cargo across Central and Inner London. In 2024, Transport for London was given powers to regulate pedicabs, including fare control, vehicle standards and driver licensing.
Rickshaws and pedicabs are found in the centre of Edinburgh, where they are hired like taxis and provide tours. Pedicabs and their variants are also available in Oxford.
== Economic, social and political aspects ==
=== Economics ===
In many Asian cities where they are widely used, cycle rickshaw driving provides essential employment for recent immigrants from rural areas, generally impoverished men. One study in Bangladesh showed that cycle rickshaw driving was connected with some increases in income for poor agricultural labourers who moved to urban areas, but that the extreme physical demands of the job meant that these benefits decreased for long-term drivers. In Jakarta, most cycle rickshaw drivers in the 1980s were former landless agricultural labourers from rural areas of Java.
In 2003, Dhaka cycle rickshaw drivers earned an estimated average of Tk 143 (US$2.38) per day, of which they paid about Tk 50 (US$0.80) to rent the cycle rickshaw for a day. Older, long-term drivers earned substantially less. A 1988–89 survey found that Jakarta drivers earned a daily average of Rp. 2722 (US$1.57). These wages, while widely considered very low for such physically demanding work, do in some situations compare favourably to jobs available to unskilled workers.
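The take-home pay implied by the 2003 Dhaka figures can be worked out directly. A minimal sketch, using only the taka and dollar amounts stated above (the taka-to-dollar rate is derived from those amounts, not an official rate):

```python
# Illustrative arithmetic based on the 2003 Dhaka figures above.
gross_tk = 143            # average daily earnings, taka
rent_tk = 50              # daily rickshaw rental, taka
gross_usd = 2.38          # stated USD equivalent of gross earnings

net_tk = gross_tk - rent_tk          # take-home pay in taka
implied_rate = gross_tk / gross_usd  # implied Tk per USD (about 60)
net_usd = net_tk / implied_rate      # take-home pay in USD

print(net_tk)              # 93
print(round(net_usd, 2))   # 1.55
```

So a driver paying daily rent kept roughly Tk 93, or a little over US$1.50 per day, before any other expenses.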
In many cities, most drivers do not own their own cycle rickshaws; instead, they rent them from their owners, some of whom own many cycle rickshaws. Driver-ownership rates vary widely. In Delhi, a 1980 study found only one per cent of drivers owned their vehicles, but ownership rates in several other Indian cities were much higher, including fifteen per cent in Hyderabad and twenty-two per cent in Faridabad. A 1977 study in Chiang Mai, Thailand found that 44% of cycle rickshaw drivers were owners. In Bangladesh, driver-ownership is usually highest in rural areas and lowest in the larger cities. Most cycle rickshaws in that country are owned by individuals who have only one or two of them, but some owners in the largest cities own several hundred.
=== Social aspects ===
In 2012 Ole Kassow, a resident of Copenhagen, wanted to help the elderly get back on their bicycles, but he had to find a solution to their limited mobility. The answer was a cycle rickshaw, and he started offering free cycle rickshaw rides to residents of a nearby nursing home. He then got in touch with Dorthe Pedersen, a civil society consultant at the City of Copenhagen, who was intrigued by the idea; together they bought five cycle rickshaws and launched an organisation called Cycling Without Age, which has since spread across Denmark and, from 2015, to another 50 countries around the world.
=== Legislation ===
Some countries and cities have banned or restricted cycle rickshaws. They are often prohibited in congested areas of major cities. For example, they were banned in Bangkok in the mid-1960s as not fitting the modern image of the city being promoted by the government. In Dhaka and Jakarta, they are no longer permitted on major roads, but are still used to provide transportation within individual urban neighbourhoods. They are banned entirely in Pakistan. While they have been criticised for causing congestion, cycle rickshaws are also often hailed as environmentally-friendly, inexpensive modes of transportation.
In Taiwan, the Road Traffic Security Rules require pedicabs to be registered by their owners with the police before they can be legally driven on public roads, or risk an administrative fine of 300 new Taiwan dollars (TWD). Their drivers must carry the police registration documents or risk a fine of 180 TWD, but no driver licence is required. The administrative fines are based on Articles 69 and 71 of the Act Governing the Punishment of Violation of Road Traffic Regulations. As Taiwanese road traffic is now heavily motorised, most pedicabs have been replaced by taxicabs, but they can still be found at limited places, such as Cijin District of Kaohsiung City.
Electric-assist pedicabs were banned in New York City in January 2008, when the city council decided to allow only pedicabs propelled by muscle power. The city of Toronto, Ontario, Canada, has also decided not to issue permits to electric-assist pedicabs.
== Arts ==
As a key part of the urban landscape in many cities, cycle rickshaws have been the subject of films and other artwork, as well as being extensively decorated themselves. The cycle rickshaw in Dhaka is especially well known as a major medium for Bengali folk art, as plasticine cutouts and handpainted figures adorn many cycle rickshaws.
Films featuring cycle rickshaws and their drivers include Kickboxer and Sammo Hung's 1989 martial arts film Pedicab Driver, which dealt with a group of pedicab drivers and their problems with romance and organised crime. Cyclo, a 1995 film by Vietnamese director Tran Anh Hung, is centered on a cycle rickshaw driver. Tollywood films with cycle rickshaw themes include Orey Rickshaw ("Orey" literally means "Hey", in a derogatory tone), which tells a story sympathising with the downtrodden, and Rickshavodu ("Rickshaw Guy").
== Gallery ==
== See also ==
Boda boda (bicycle taxi)
Party bike
Tandem bicycle
Trailer bike
Utility cycling
Rickshaw art
George Bliss (pedicab designer)
== Notes ==
== References ==
== External links ==
Becak Yogya
Cycling infrastructure

Cycling infrastructure is all infrastructure that cyclists are allowed to use. Bikeways include bike paths, bike lanes, cycle tracks, rail trails and, where permitted, sidewalks. Roads used by motorists are also cycling infrastructure, except where cyclists are barred, as on many freeways/motorways. It also includes amenities such as bike racks for parking, shelters, service centers, and specialized traffic signs and signals. The more cycling infrastructure there is, the more people tend to get about by bicycle.
Good road design, road maintenance and traffic management can make cycling safer and more useful. Settlements with a dense network of interconnected streets tend to be places for getting around by bike. Their cycling networks can give people direct, fast, easy and convenient routes.
== History ==
The history of cycling infrastructure starts from shortly after the bike boom of the 1880s when the first short stretches of dedicated bicycle infrastructure were built, through to the rise of the automobile from the mid-20th century onwards and the concomitant decline of cycling as a means of transport, to cycling's comeback from the 1970s onwards.
== Bikeways ==
A bikeway (US) or cycleway (UK) is a lane, route, way or path which in some manner is specifically designed and/or designated for bicycle travel. Bike lanes demarcated by a painted marking are quite common in many cities. Cycle tracks demarcated by barriers, bollards or boulevards are quite common in some European countries such as the Netherlands, Denmark and Germany. They are also increasingly common in major cities elsewhere, such as New York, Melbourne, Ottawa, Vancouver and San Francisco. Montreal and Davis, California, which have had segregated cycling facilities with barriers for several decades, are among the earliest examples in North America.
Various guides exist to define the different types of bikeway infrastructure, including UK Department for Transport manual The Geometric Design of Pedestrian, Cycle and Equestrian Routes, Sustrans Design Manual, UK Department of Transport Local Transport Note 2/08: Cycle Infrastructure Design, the Danish Road Authority guide Registration and classification of paths, the Dutch CROW, the American Association of State Highway and Transportation Officials (AASHTO) Guide to Bikeway Facilities, the Federal Highway Administration (FHWA) Manual on Uniform Traffic Control Devices (MUTCD), and the US National Association of City Transportation Officials (NACTO) Urban Bikeway Design Guide.
In the Netherlands, the Tekenen voor de fiets design manual recommends a width of at least 2 metres, or 2.5 metres if used by more than 150 bicycles per hour. A minimum width of 2 metres is specified by the cities of Utrecht and 's-Hertogenbosch for new cycle lanes. The Netherlands also uses protected intersections where cyclists cross roads.
=== Terms ===
Some bikeways are separated from motor traffic by physical constraints (e.g. barriers, parking or bollards)—bicycle trail, cycle track—but others are partially separated only by painted markings—bike lane, buffered bike lane, and contraflow bike lane. Some share the roadway with motor vehicles—bicycle boulevard, sharrow, advisory bike lane—or shared with pedestrians—shared use paths and greenways.
==== Segregation ====
The term bikeway is largely used in North America to describe all routes that have been designed or updated to encourage more cycling or make cycling safer. In some jurisdictions such as the United Kingdom, segregated cycling facility is sometimes preferred to describe cycling infrastructure which has varying degrees of separation from motorized traffic, or which has excluded pedestrian traffic in the case of exclusive bike paths.
There is no single usage of segregation; in some cases it can mean the exclusion of motor vehicles and in other cases the exclusion of pedestrians as well. Thus, it includes bike lanes with solid painted lines but not lanes with dotted lines and advisory bike lanes where motor vehicles are allowed to encroach on the lane. It includes cycle tracks as physically distinct from the roadway and sidewalk (e.g. barriers, parking or bollards). And it includes bike paths in their own right of way exclusive to cycling. Paths which are shared with pedestrians and other non-motorized traffic are not considered segregated and are typically called shared use path, multi-use path in North America and shared-use footway in the UK.
=== Safety ===
On major roads, segregated cycle tracks lead to safety improvements compared with cycling in traffic. There are concerns over the safety of cycle tracks and lanes at junctions due to collisions between turning motorists and cyclists, particularly where cycle tracks are two-way. The safety of cycle tracks at junctions can be improved with designs such as cycle path deflection (between 2 m and 5 m) and protected intersections. At multi-lane roundabouts, safety for cyclists is compromised. The installation of separated cycle tracks has been shown to improve safety at roundabouts. A Cochrane review of published evidence found that there was limited evidence to conclude whether cycling infrastructure improves cyclist safety.
=== Legislation ===
Different countries have different ways to legally define and enforce bikeways.
=== Bikeway controversies ===
Some detractors argue that one must be careful in interpreting the operation of dedicated or segregated bikeways/cycle facilities across different designs and contexts, since what works for the Netherlands will not necessarily work elsewhere; others claim that bikeways increase urban air pollution.
Other transportation planners consider an incremental, piecemeal approach to bike infrastructure buildout ineffective and advocate for complete networks to be built in a single phase.
Proponents point out that cycling infrastructure including dedicated bike lanes has been implemented in many cities; when well-designed and well-implemented they are popular and safe, and they are effective at relieving both congestion and air pollution.
=== Bikeway selection ===
Jurisdictions have guidelines on selecting the right bikeway treatments in order to make routes more comfortable and safer for cycling.
A study of the safety of "road diets" (motor traffic lane restrictions) for bike lanes found that crash frequencies were 6% lower in the period after installation, and that road diets neither affected crash severity nor produced a significant change in crash types. The research compared areas scheduled for conversion, before and after the road diet was performed, with similar areas that had not received any changes; the authors recommend further research to confirm the findings.
== Bikeway types ==
Bikeways can fall into these main categories: separated in-roadway bikeways such as bike lanes and buffered bike lanes; physically separated in-roadway bikeways such as cycle tracks; right-of-way paths such as bike paths and shared use paths; and shared in-roadway bikeways such as bike boulevards, shared lane markings, and advisory bike lanes. The exact categorization varies by jurisdiction and organization; many simply list the types by their commonly used names.
=== Dedicated bikeways ===
=== Sharing with motor traffic ===
Cyclists are legally allowed to travel on many roadways in accordance with the rules of the road for drivers of vehicles.
A bicycle boulevard or cycle street is a low speed street which has been optimized for bicycle traffic. Bicycle boulevards discourage cut-through motor vehicle traffic but allow local motor vehicle traffic. They are designed to give priority to cyclists as through-going traffic.
A shared lane marking, also known as a sharrow is a street marking that indicates the preferred lateral position for cyclists (to avoid the door zone and other obstacles) where dedicated bike lanes are not available.
A 2-1 road is a roadway striping configuration which provides for two-way motor vehicle and bicycle traffic using a central vehicular travel lane and "advisory" bike lanes on either side. The center lane is dedicated to, and shared by, motorists traveling in both directions. The center lane is narrower than two vehicular travel lanes and has no centerline; some are narrower than the width of a car. Cyclists are given preference in the bike lanes but motorists can encroach into the bike lanes to pass other motor vehicles after yielding to cyclists. Advisory bike lanes are normally installed on low volume streets. Advisory bike lanes have a number of names. The U.S. Federal Highway Administration calls them "Advisory Shoulders". In New Zealand, they are called 2-minus-1 roads. They are called Schutzstreifen (Germany), Suggestiestrook (Netherlands), and Suggestion Lanes (a literal English translation of Suggestiestrook).
=== Bicycle highways ===
Denmark and the Netherlands have pioneered the concept of "bicycle superhighways". The first Dutch route opened in 2004 between Breda and Etten-Leur; many others have been added since then. In 2017 several bicycle superhighways were opened in the Arnhem-Nijmegen region, with the RijnWaalpad as the best example of this new type of cycling infrastructure.
The first Danish route, C99, opened in 2012 between the Vesterbro rail station in Copenhagen and Albertslund, a western suburb. The route cost 13.4 million Danish kroner and is 17.5 km long, built with few stops and new paths away from traffic. "Service stations" with air pumps are located at regular intervals, and where the route must cross streets, handholds and running boards are provided so cyclists can wait without having to put their feet on the ground. Similar projects have since been built in Germany among other countries.
The cost of building a bicycle superhighway depends on many factors, but is usually between €300,000/km (for a wide dedicated cycle track) and €800,000/km (when complex civil engineering structures are needed).
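The figures above lend themselves to a quick per-kilometre comparison. A minimal sketch using the Danish C99 numbers stated earlier and the quoted European cost range; the 10 km route length is an arbitrary assumption for the example:

```python
# Per-kilometre cost of the Danish C99 route, from the figures above.
c99_cost_dkk = 13_400_000   # total cost, Danish kroner
c99_length_km = 17.5        # route length, km
per_km = c99_cost_dkk / c99_length_km
print(round(per_km))        # 765714 DKK per km

# Quoted cost range applied to a hypothetical 10 km route
# (the 10 km length is assumed for illustration only).
low_eur, high_eur = 300_000, 800_000  # EUR per km
length_km = 10
print(low_eur * length_km, high_eur * length_km)  # 3000000 8000000
```

On these numbers, C99 cost roughly 766,000 DKK per kilometre, which sits within the order of magnitude of the quoted European range once currency differences are taken into account.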
== Cycling-friendly streetscape modifications ==
There are various measures cities and regions often take on the roadway to make it more cycling friendly and safer. Aspects of infrastructure may be viewed as either cyclist-hostile or as cyclist-friendly. However, scientific research indicates that different groups of cyclists show varying preferences of which aspects of cycling infrastructure are most relevant when choosing a specific cycling route over another. Measures to encourage cycling include traffic calming; traffic reduction; junction treatment; traffic control systems to recognize cyclists and give them priority; exempt cyclists from banned turns and access restrictions; implement contra-flow cycle lanes on one-way streets; implement on-street parking restrictions; provide advanced stop lines/bypasses for cyclists at traffic signals; marking wide curb/kerb lanes; and marking shared bus/cycle lanes.
The Colombian city of Bogotá converted some car lanes into bidirectional bike lanes during the coronavirus pandemic, adding 84 km of new bike lanes; the government intends to make these new bike lanes permanent. In the US, slow-street movements have erected makeshift barriers to slow traffic and allow cyclists and pedestrians to safely share the road with motorists.
=== Traffic reduction ===
Removing traffic can be achieved by straightforward diversion or alternatively reduction. Diversion involves routing through-traffic away from roads used by high numbers of cyclists and pedestrians. Examples of diversion include the construction of arterial bypasses and ring roads around urban centers.
Indirect methods involve reducing the infrastructural capacity dedicated to moving motorized vehicles. This can involve reducing the number of road lanes, closing bridges to certain vehicle types and creating vehicle restricted zones or environmental traffic cells. In the 1970s the Dutch city of Delft began restricting private car traffic from crossing the city center. Similarly, Groningen is divided into four zones that cannot be crossed by private motor traffic (private cars must use the ring road instead). Cyclists and other traffic can pass between the zones, and cycling accounts for more than 50% of trips in Groningen (which reputedly has the third-highest proportion of cycle traffic of any city). The Swedish city of Gothenburg uses a similar system of traffic cells.
Another approach is to reduce car parking capacity. Starting in the 1970s, the city of Copenhagen, where 36% of trips are now made by bicycle, adopted a policy of reducing available car parking capacity by a few per cent per year. The city of Amsterdam, where around 40% of all trips are by bicycle, adopted similar parking reduction policies in the 1980s and 1990s.
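A small annual reduction compounds into a large cumulative one. A minimal sketch of that arithmetic; the 3% annual rate and 20-year horizon are assumed for illustration, since the text states only that capacity fell by a few per cent per year:

```python
# Illustrative compounding of a small annual parking-capacity cut.
# The 3% rate and 20-year horizon are assumptions for the example.
rate = 0.03
years = 20
remaining = (1 - rate) ** years  # fraction of original capacity left
print(round(remaining, 3))       # 0.544
```

At 3% per year, roughly 46% of the original parking spaces would be gone after two decades, which is how a gradual policy can reshape a city's modal split over time.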
Direct traffic reduction methods can involve straightforward bans or more subtle methods like road pricing schemes or road diets. The London congestion charge reportedly resulted in a significant increase in cycle use within the affected area.
=== Traffic calming ===
Speed reduction has traditionally been attempted by statutory speed limits and enforcing the assured clear distance ahead rule.
Recent implementations of shared space schemes have delivered significant traffic speed reductions. The reductions are sustainable, without the need for speed limits or speed limit enforcement. In Norrköping, Sweden, mean traffic speeds dropped from 21 to 16 km/h (13 to 10 mph) following the implementation of such a scheme in 2006.
Even without shared street implementation, creating 30 km/h zones (or 20 mph zone) has been shown to reduce crash rates and increase numbers of cyclists and pedestrians. Other studies have revealed that lower speeds reduce community severance caused by high speed roads. Research has shown that there is more neighborhood interaction and community cohesion when speeds are reduced to 20 mph.
=== One-way streets ===
German research indicates that making one-way streets two-way for cyclists results in a reduction in the total number of collisions. In Belgium, all one-way streets in 50 km/h zones are by default two-way for cyclists.
A Danish road directorate states that in town centers it is important to be able to cycle in both directions in all streets, and that in certain circumstances, two-way cycle traffic can be accommodated in an otherwise one-way street.
=== Two-way cycling on one-way streets ===
One-way street systems are viewed differently by different groups: some cycling campaigners see them as a product of traffic management focused on keeping motorized vehicles moving regardless of the social and other impacts, while UK traffic planners see them as a useful tool for traffic calming and for eliminating rat runs.
One-way streets can disadvantage cyclists by increasing trip-length, delays and hazards associated with weaving maneuvers at junctions. In northern European countries such as the Netherlands, however, cyclists are frequently granted exemptions from one-way street restrictions, which improves cycling traffic flow while restricting motorized vehicles.
There are often restrictions to what one-way streets are good candidates for allowing two-way cycling traffic. In Belgium road authorities in principle allow any one-way street in 50 kilometres per hour (31 mph) zones to be two-way for cyclists if the available lane is at least 3 metres (9.8 ft) wide (area free from parking) and no specific local circumstances prevent it.
Denmark, a country with high cycling levels, does not use one-way systems to improve traffic flow. Some commentators argue that the initial goal should be to dismantle large one-way street systems as a traffic calming/traffic reduction measure, followed by the provision of two-way cyclist access on any one-way streets that remain.
=== Intersection and junction design ===
In general, junction designs that favor higher-speed turning, weaving and merging movements by motorists tend to be hostile for cyclists. Free-flowing arrangements can be hazardous for cyclists and should be avoided. Features such as large entry curvature, slip-roads and high flow roundabouts are associated with increased risk of car–cyclist collisions. Cycling advocates argue for modifications and alternative junction types that resolve these issues such as reducing kerb radii on street corners, eliminating slip roads and replacing large roundabouts with signalized intersections.
=== Protected intersection ===
Another approach which the Netherlands innovated is called in North America a protected intersection that reconfigures intersections to reduce risk to cyclists as they cross or turn. Some American cities are starting to pilot protected intersections.
==== Bike box ====
A bike box or an advanced stop line is a designated area at the head of a traffic lane at a signalized intersection that provides bicyclists with a safer and more visible way to get ahead of queuing traffic during the red signal phase.
==== Roundabouts ====
On large roundabouts of the design typically used in the UK and Ireland, cyclists have an injury accident rate that is 14–16 times that of motorists. Research indicates that excessive sightlines at uncontrolled intersections compound these effects. In the UK, a survey of over 8,000 highly experienced and mainly adult male Cyclists Touring Club members found that 28% avoided roundabouts on their regular journey if at all possible. The Dutch CROW guidelines recommend roundabouts only for intersections with motorized traffic up to 1500 per hour. To accommodate greater volumes of traffic, they recommend traffic light intersections or grade separation for cyclists. Examples of grade separation for cyclists include tunnels, or more spectacularly, raised "floating" roundabouts for cyclists.
==== Traffic signals/Traffic control systems ====
How traffic signals are designed and implemented directly impacts cyclists. For instance, poorly adjusted vehicle detector systems, used to trigger signal changes, may not correctly detect cyclists. This can leave cyclists in the position of having to "run" red lights if no motorized vehicle arrives to trigger a signal change. Some cities use urban adaptive traffic control systems (UTCs), which use linked traffic signals to manage traffic in response to changes in demand. There is an argument that using a UTC system merely to provide for increased capacity for motor traffic will simply drive growth in such traffic. However, there are more direct negative impacts. For instance, where signals are arranged to provide motor traffic with so-called green waves, this can create "red waves" for other road users such as cyclists and public transport services. Traffic managers in Copenhagen have now turned this approach on its head and are linking cyclist-specific traffic signals on a major arterial bike lane to provide green waves for rush hour cycle-traffic. However, this would still not resolve the problem of red-waves for slow (old and young) and fast (above average fitness) cyclists. Cycling-specific measures that can be applied at traffic signals include the use of advanced stop lines and/or bypasses. In some cases cyclists might be given a free-turn or a signal bypass if turning into a road on the nearside.
=== Signposting ===
In many places worldwide special signposts for bicycles are used to indicate directions and distances to destinations for cyclists. Apart from signposting in and between urban areas, mountain pass cycling milestones have become an important service for bicycle tourists. They provide cyclists with information about their current position with regard to the summit of the mountain pass.
Numbered-node cycle networks are increasingly used in Europe to give flexible, low-cost signage.
=== Widening outside lanes ===
One method for reducing potential friction between cyclists and motorized vehicles is to provide "wide kerb", or "nearside", lanes (UK terminology) or "wide outside through lane" (U.S. terminology). These extra-wide lanes increase the probability that motorists pass cyclists at a safe distance without having to change lanes. This is held to be particularly important on routes with a high proportion of wide vehicles such as buses or heavy goods vehicles (HGVs). They also provide more room for cyclists to filter past queues of cars in congested conditions and to safely overtake each other. Due to the tendency of all vehicle users to stay in the center of their lane, it would be necessary to sub-divide the cycle lane with a broken white line to facilitate safe overtaking. Overtaking is indispensable for cyclists, as speeds are not dependent on the legal speed limit, but on the rider's capability.
The use of such lanes is specifically endorsed by Cycling: the way ahead for towns and cities, the European Commission policy document on cycle promotion.
=== Shared space ===
Shared space schemes extend this principle further by removing the reliance on lane markings altogether, and also removing road signs and signals, allowing all road users to use any part of the road, and giving all road users equal priority and equal responsibility for each other's safety. Experiences where these schemes are in use show that road users, particularly motorists, undirected by signs, kerbs, or road markings, reduce their speed and establish eye contact with other users. Results from the thousands of such implementations worldwide all show casualty reductions and most also show reduced journey times. After the partial conversion of London's Kensington High Street to shared space, accidents decreased by 44% (the London average was 17%). However, in July 2018, the UK 'paused' all further shared space schemes over fears that a scheme dependent on eye-contact between drivers and pedestrians was unavoidably dangerous to pedestrians with visual impairments.
CFI argues for a marked lane width of 4.25 metres (13.9 ft). On undivided roads, this width provides cyclists with adequate clearance from passing HGVs while being narrow enough to deter drivers from "doubling up" to form two lanes. This "doubling up" effect may be related to junctions; at non-junction locations, greater width might be preferable if the effect can be avoided. The European Commission specifically endorses wide lanes in its policy document on cycling promotion, Cycling: the way ahead for towns and cities.
=== Shared bus and cycle lanes ===
Shared bus and cycle lanes are also a method for providing a more comfortable and safer space for cyclists. Depending on the width of the lane, the speeds and number of buses, and other local factors, the safety and popularity of this arrangement vary.
In the Netherlands mixed bus/cycle lanes are uncommon. According to the Sustainable Safety guidelines they would violate the principle of homogeneity and put road users of very different masses and speed behavior into the same lane, which is generally discouraged.
=== Road surface ===
Because bicycle tires are narrow, road surface quality matters more for cyclists than for other transport, for both comfort and safety. The type and placement of storm drains, manholes, surface markings, and the general quality of the road surface should all be taken into account by a bicycle transportation engineer. Drain grates, for example, must not catch bicycle wheels.
== Trip-end facilities ==
=== Bicycle parking/storage arrangements ===
As secure and convenient bicycle parking is a key factor in influencing a person's decision to cycle, decent parking infrastructure must be provided to encourage the uptake of cycling. Decent bicycle parking involves weather-proof infrastructure such as lockers, stands, staffed or unstaffed bicycle parks, as well as bike parking facilities within workplaces to facilitate bicycle commuting. It also will help if certain legal arrangements are put into place to enable legitimate ad hoc parking, for example to allow people to lock their bicycles to railings, signs and other street furniture when individual proper bike stands are unavailable.
=== Other trip end facilities ===
Some people need to wear special clothes such as business suits or uniforms in their daily work. In some cases the nature of the cycling infrastructure and the prevailing weather conditions may make it very hard to both cycle and maintain the work clothes in a presentable condition. It is argued that such workers can be encouraged to cycle by providing lockers, changing rooms and shower facilities where they can change before starting work.
== Theft reduction measures ==
The theft of bicycles is one of the major problems that slow the development of urban cycling. Bicycle theft discourages regular cyclists from buying new bicycles, as well as putting off people who might want to invest in a bicycle.
Several measures can help reduce bicycle theft:
Bicycle parking stations - buildings or structures designed for use as bicycle parking facilities, primarily for bicycle security
Bicycle registration to enable recovery if stolen
Danish bicycle VIN-system, a law requiring all bicycles in Denmark to have a vehicle identification number (VIN) with the bike's manufacturer code, a serial number, and a construction year code
Making cyclists aware of antitheft devices and their effective use
Mounting sting operations to catch thieves
Secure bicycle parking: offering safe bicycle parking facilities such as guarded bicycle parking (staffed or with camera surveillance) or bicycle lockers
Promoting devices to enable remote tracking of a bicycle's location
Targeting cycle thieves
Using folding bicycles which can be safely stored (for example) in cloakrooms or under desks.
Certain European countries apply such measures with success, such as the Netherlands or certain German cities using registration and recovery. Since mid-2004, France has instituted a system of registration, in some places allowing stolen bicycles to be put on file in partnership with the urban cyclists' associations. This approach has reputedly increased the stolen bicycle recovery rate to more than 40%. By comparison, before the commencement of registration, the recovery rate in France was about 2%.
In some areas of the United Kingdom, bicycles fitted with location tracking devices are left poorly secured in theft hot-spots. When the bike is stolen, the police can locate it and arrest the thieves. This sometimes leads to the dismantling of organized bicycle theft rings, as bike theft generally enjoys a very low priority with the police.
== Bicycle lift ==
Bicycle lifts are used to haul bikes up stairs and steep hills. They are used to improve accessibility and encourage casual cycling.
Bike escalators are widely used in East Asia and in parts of Europe.
== Impact ==
According to a 2019 study, protected and separated bike infrastructure is associated with greater safety outcomes for all road users.
A 2021 review of existing research found that closing car lanes and replacing them with bike lanes or pedestrian lanes had positive or non-significant economic effects.
A 2021 case-control study of cities found that redistributing street space for cycling infrastructure, such as the "pop-up bike lanes" created during the COVID-19 pandemic, led to large additional increases in cycling. These increases may bring substantial environmental and health benefits, supporting goals that decision-makers have committed to, such as the EU's pledged 55% reduction in CO2 emissions by 2030, the climate change mitigation obligations of the Paris Agreement, and EU air quality rules.
== Integration with public transit ==
Cycling is often integrated with other transport. For example, in the Netherlands and Denmark a large number of train journeys may start by bicycle. In 1991, 44% of Dutch train travelers went to their local station by bicycle and 14% used a bicycle at their destinations. The key ingredients for this are claimed to be:
an efficient, attractive and affordable train service
secure bike parking at train stations
a quick and easy bicycle rental system for commuters, the OV-bicycle scheme, at train stations
a town planning policy that results in a sufficient proportion of the potential commuter population (e.g. 44%) living/working within a reasonable cycling distance of the train stations.
It has been argued in relation to this aspect of Dutch or Danish policy that ongoing investment in rail services is vital to maintaining their levels of cycle use.
Cycling and public transport are well integrated in Japan. Starting in 1978, Japan expanded bicycle parking supply at railway stations from 598,000 spaces in 1977 to 2,382,000 spaces in 1987. As of 1987, Japanese provisions included 516 multi-story garages for bicycle parking.
In some cities, bicycles may be carried on local trains, trams and buses so that they may be used at either end of the trip. The Rheinbahn transit company in Düsseldorf permits bicycle carriage on all its bus, tram and train services at any time of the day. In Munich, bicycles are allowed on the S-Bahn commuter trains outside of rush hours, and folding bikes are allowed on city buses. In Copenhagen, bicycles may be taken on the S-tog commuter trains at any time of day at no additional cost. In France, some first-class capacity on the prestigious TGV high-speed trains has even been converted to store bicycles. There have also been schemes, such as in Victoria, British Columbia, Acadia, and Canberra, Australia, to provide bicycle carriage on buses using externally mounted bike carriers.
In some Canadian cities, including Edmonton, Alberta, and Toronto, Ontario, buses on most city routes have externally mounted carriers for bicycles, and bikes are allowed on the light rail trains at no extra cost outside of rush hour. All public transit buses in Chicago and its suburbs allow up to two bikes at all times, as do Grand River Transit buses in the Region of Waterloo, Ontario, Canada. Trains allow bikes with some restrictions. Where such services are not available, some cyclists get around the restriction by removing their pedals and loosening their handlebars so that the bicycle fits into a box, or by using folding bikes that can be brought onto the train or bus like a piece of luggage. The article on buses in Christchurch, New Zealand, lists 27 routes with bike racks.
In the EU, regional train services must carry bikes, and from 2025 new and significantly upgraded trains are generally required to have space for at least four non-folding bikes; however, international services to countries outside the EU are exempt from these rules. In 2023, Eurostar cycle booking was described as “farcical”. Nevertheless, EU train operators are sometimes allowed to restrict bikes, for example on old rolling stock or during peak hours.
UK provision for bikes on trains varies considerably, with some train operating companies being criticised, for example for providing only vertical storage, which can be difficult or impossible to use. A 2021 UK Department for Transport (DfT) white paper said: “Bringing a bike on board makes a train journey even more convenient, yet even as cycling has grown in popularity, the railways have reduced space available for bikes on trains. Great British Railways will reverse that, increasing space on existing trains wherever practically possible, including on popular leisure routes.” A DfT train specification document issued in 2012 says: “Provision must be made for an excess luggage storage area which, as a minimum, is capable of accommodating two bicycles or luggage up to a minimum total volume of 2m3”, with a bicycle being defined as a “Full size ‘road’ bicycle with 25inch frame”. As of 2024, some UK train companies severely limit bikes; for example, GWR does not guarantee storage for bikes whose wheels have a rim diameter of more than 50 cm, which most bicycles exceed.
== Bikesharing systems ==
A bicycle sharing system, public bicycle system, or bike share scheme, is a service in which bicycles are made available for shared use to individuals on a very short-term basis. Bike share schemes allow people to borrow a bike from point "A" and return it at point "B". Many of the bicycle sharing systems are on a subscription basis.
== Examples of cycling infrastructure ==
== See also ==
Organizing bodies:
Adventure Cycling Association – American nonprofit member organization
National Association of City Transportation Officials – North American association
Multi-modal road safety:
Assured clear distance ahead – Safe driving distance between cars
== References ==
== External links ==
Bicycle Infrastructure Manuals, a compendium of infrastructure design manuals, cycling master plans and strategy guides
Urban Bikeway Design Guide from National Association of City Transportation Officials
Bicycle infrastructure in the Netherlands video and blog explaining the Dutch approach of addressing cycling infrastructure safety
UK cycle infrastructure design guide 2020
UK cycle rail toolkit 3 (2023)
CyclOSM and OpenCycleMap are global maps of cycling infrastructure
Bicycle Facilities is a world map and statistics of cycling infrastructure |
Dassault Mirage IV | The Dassault Mirage IV is a French supersonic strategic bomber and deep-reconnaissance aircraft. Developed by Dassault Aviation, the aircraft entered service with the French Air Force in October 1964. For many years it was a vital part of the nuclear triad of the Force de Frappe, France's nuclear deterrent striking force. The Mirage IV was retired from the nuclear strike role in 1996, and the type was entirely retired from operational service in 2005.
During the 1960s, there were plans to export the Mirage IV. In one proposal, Dassault would have entered a partnership with the British Aircraft Corporation to jointly produce a Mirage IV variant for the Royal Air Force and potentially for other export customers, but this project did not come to fruition. The Mirage IV was ultimately not adopted by any other operators.
== Development ==
=== Origins ===
During the 1950s, France embarked on an extensive military program to produce nuclear weapons; however, it was acknowledged that existing French aircraft were unsuitable for the task of delivering the weapons. Thus, the development of a supersonic bomber designed to carry out the delivery mission started in 1956 as a part of the wider development of France's independent nuclear deterrent. In May 1956, the Guy Mollet government drew up a specification for an aerially-refuelable supersonic bomber capable of carrying a 3 tonne, 5.2-metre-long nuclear bomb 2,000 km (without aerial refuelling). According to aviation authors Bill Gunston and Peter Gilchrist, the specification's inclusion of supersonic speed was "surprising" to many at the time.
The final specifications, jointly defined by government authorities and Dassault staff, were approved on 20 March 1957. Sud Aviation and Nord Aviation submitted competing proposals, both based on existing aircraft; Sud Aviation proposed the Super Vautour, a stretched Sud Aviation Vautour with 47 kilonewtons (10,500 lbf) thrust SNECMA Atar engines and a combat radius of 2,700 kilometres (1,700 mi) at Mach 0.9. Dassault's proposal for what became the Mirage IV was chosen on the basis of lower cost and anticipated simpler development, being based upon a proposed early-1956 twin-engined night fighter derived from the Dassault Mirage III fighter and the unbuilt Mirage II interceptor. In April 1957, Dassault were informed that they had won the design competition.
Dassault's resulting prototype, dubbed Mirage IV 01, closely resembled the Mirage IIIA, although it had double the wing surface, two engines instead of one, and twice the unladen weight; it also carried three times the internal fuel of the Mirage III. The aircraft's aerodynamics were very similar to the III's, but the design required an entirely new structure and layout. This prototype was 20 metres (67 ft) long, had an 11-metre (37 ft) wingspan, 62 square metres (670 sq ft) of wing area, and weighed approximately 25,000 kilograms (55,000 lb). It was considerably more advanced than the Mirage III, incorporating new features such as machined and chem-milled planks, tapered sheets, a small amount of titanium, and integral fuel tanks in many locations, including the leading portion of the tailfin.
The 01 was an experimental prototype built to explore and solve the problems stemming from prolonged supersonic flight. At the time, no aircraft had been designed to cruise at over Mach 1.8 for long periods, and there were sizable technological and operational uncertainties; weapons integration posed further challenges. Building the 01 in Dassault's Saint-Cloud plant near Paris took 18 months. In late 1958, the aircraft was transferred to the Melun-Villaroche flight testing area for finishing touches and ground tests. On 17 June 1959, French General Roland Glavany, on a five-year leave from the French Air Force since 1954, took the 01 for its maiden flight.
On 19 September 1960, René Bigand (who had replaced Glavany as test pilot) raised the world speed record over a 1,000 km closed circuit to 1,822 km/h (1,132 mph), flying around Paris and the Melun base. Flight 138, on 23 September, corroborated the initial performance and pushed the record over a 500 km closed circuit to an average of 1,972 km/h (1,225 mph), flying between Mach 2.08 and Mach 2.14. The Mirage IV 01 prototype had undergone minor modifications during testing in the autumn of 1959; most noticeably, the tail was enlarged (a slight reduction in height with a large increase in chord).
=== Production ===
In order to increase range, studies were made of a significantly larger Mirage IVB design, powered by two Snecma licence-built Pratt & Whitney J75 engines and having a wing area of 120 m² (1,290 sq ft) compared to 70 m² (750 sq ft) of the prototype IV, as well as a speed of Mach 2.4 and a gross weight of 64,000 kilograms (140,000 lb). The Mirage IVB proposal had been instigated as a response to interest by de Gaulle in ensuring that two-way (including the aircraft's return to France) strike missions could be flown. However, development of the aircraft was ultimately cancelled in July 1959 due to the greater cost involved, with the decision to rely upon aerial refuelling instead also being a factor.
With the Mirage IVB considered to be too expensive, the medium-sized Mirage IVA, slightly larger than the first prototype, was chosen, and three more prototypes were to be produced. This aircraft had a wing area of 77.9 square metres (839 sq ft) and weighed about 32,000 kilograms (70,000 lb). On 4 April 1960, a formal order for 50 production Mirage IVA aircraft was issued. The three prototype aircraft were built between 1961 and 1963, with first flights on 12 October 1961, 1 June 1962, and 23 January 1963. By 1962, the second prototype had conducted simulated nuclear bombing runs at the trials range at Colomb-Bechar in southern Algeria. The third prototype was equipped with the navigation/bombing systems and flight refuelling probe. The fourth prototype, Mirage IVA-04, was essentially representative of the production aircraft that would follow.
For production, various portions of the aircraft were subcontracted to Sud Aviation (wings and rear fuselage) and Breguet Aviation (tailfin), which was still a separate company from Dassault until 1967; Dassault manufactured the front fuselage and flight-control system internally. Manufacture of both the prototypes and subsequent production aircraft was often hindered by an explicit requirement that there would be no reliance upon foreign suppliers to maintain France's nuclear capabilities; due to this, the Mirage IV initially lacked an inertial navigation system as French industry could not yet produce this device.
On 7 December 1963, the first production Mirage IVA performed its maiden flight. A series of 62 aircraft was built, and they entered service between 1964 and 1968. Although Dassault had designed the Mirage IV for the low-level flight role right from the start, the final batch of 12 aircraft ordered in November 1964 differed from the earlier aircraft in several areas, including the flight controls, avionics, and structural details, for the purpose of providing improved low-level performance. It had been planned for this batch to be powered by the newer Pratt & Whitney/SNECMA TF106 turbofan engine. The improvements featured upon the last 12 Mirage IVs were later retroactively applied to the whole fleet.
In December 1963, Dassault proposed a Mirage IV-106 variant with two Snecma TF106 (licence-built Pratt & Whitney) engines, an enlarged fuselage with a 105,000 lb gross weight, terrain-avoidance radar, and armament consisting of a proposed French version of the American Douglas GAM-87 Skybolt air-launched ballistic missile. This version would have been very costly, and ultimately was not ordered.
=== Proposed export variants ===
In 1963, the Australian government sought a replacement for the Royal Australian Air Force's fleet of English Electric Canberra bombers, largely in response to the Indonesian Air Force's purchase of missile-armed Tupolev Tu-16 bombers. Dassault proposed a version of the Mirage IVA with Rolls-Royce Avon engines. Australian Air Marshal Frederick Scherger had seriously considered purchase of the IVA in 1961 because it was proven hardware already in service (in contrast to the BAC TSR-2, which was still in development). The IVA was one of five aircraft types shortlisted for the role, but the General Dynamics F-111C was eventually selected.
In April 1965, the British Government cancelled the TSR-2 reconnaissance-strike aircraft. The operational requirement still existed, however, so in response Hawker Siddeley offered the Buccaneer S.2 and the Americans the General Dynamics F-111K, while in July 1965 Dassault and the British Aircraft Corporation (BAC) jointly proposed a modified Mirage IV. The Dassault/BAC aircraft, known as the Mirage IV* or Mirage IVS (S for Spey), would be re-engined with more powerful Rolls-Royce Spey turbofan engines delivering a total of 185 kilonewtons (41,700 lbf) of thrust, enlarged (fuselage depth increased by 7.6 centimetres (3 in)) with an approximately 0.61-metre (2 ft) forward fuselage extension, and was to weigh 36,000 kilograms (80,000 lb); it would use avionics planned for the TSR-2, although BAC preferred the French Antilope radar. Although designed by Dassault, production was to be shared between Dassault and its subcontractors (wing, mid-fuselage, and tail) and BAC (front and rear fuselage). The final assembly location was not determined before the proposal was rejected. The Mirage IV* was to carry a bombload of up to 9,100 kilograms (20,000 lb).
The Mirage IV* was claimed to meet nearly every RAF requirement except for field length; some claimed that it slightly exceeded the F-111 in speed and had at least equal range. The estimated cost was £2.321 million per airframe (for 50) or £2.067 million (for 110), less than the price of the F-111K. However, Air Ministry and RAF studies of the proposal identified further modifications needed to meet the RAF's low-level performance requirements, including airframe strengthening and revised cockpit glazing to improve visibility for both the pilot and crew member. There was also a significant shortfall in range, despite the lengthened fuselage. BAC claimed that the British government's evaluation of the Mirage IV* was "relatively superficial". RAF pilots who test-flew the Mirage IV were "favourably impressed" with its low-altitude handling. However, some British government officials, including Members of Parliament Julian Ridsdale and Roy Jenkins, questioned the Mirage IV*'s capacity to operate from unprepared airstrips or at low level, or claimed that the F-111 was a superior aircraft "in a class of its own". Air historian Bill Gunston notes that low-level Mirage IV missions had been planned since 1963 and that Mirage IVs had operated regularly at low level since 1965, and argues that the ability of a strategic bomber to operate from unprepared airstrips is historically unimportant.
Ultimately, as much for strategic political reasons as for technical ones, the F-111K was preferred (only to be cancelled later in its turn) and the Spey-engined Mirage abandoned.
BAC and Dassault had also hoped to sell the Mirage IV* to France and to export the Mirage IV* to various nations, such as India, possibly Israel, and others; the lack of a British sale put an end to such possibilities. Some aviation journalists claim that the rejection of the Mirage IV* may have dampened French interest in joint Anglo-French cooperation.
== Design ==
The Mirage IV shares design features and a visual resemblance with the Mirage III fighter, featuring a tailless delta wing and a single square-topped vertical fin. However, the wing is significantly thinner to allow better high-speed performance, with a thickness/chord ratio of only 3.8% at the root and 3.2% at the tip; this was the thinnest wing built in Europe at that time and one of the thinnest in the world. Although significantly smaller than an expensive medium bomber proposed for the role, the Mirage IV was roughly three times the weight of the preceding Mirage III.
The Mirage IV is powered by two Snecma Atar turbojets, fed by two air intakes on either side of the fuselage fitted with half-cone shock diffusers, known as souris ("mice"), which moved forward as speed increased to trim the inlet for the shock wave angle. It can reach high supersonic speeds: the aircraft is redlined at Mach 2.2 at altitude because of airframe temperature restrictions, although it is capable of higher speeds. While broadly similar to the model used on the Mirage III, the Mirage IV's Atar engine had a greater airflow and an overspeed limit raised from 8,400 rpm to 8,700 rpm for greater thrust during high-altitude supersonic flight. While the first Mirage IV prototype was fitted with double-eyelid engine nozzles, production aircraft featured a complicated variable-geometry nozzle that varied automatically in response to descent rate and airspeed.
The aircraft has 14,000 litres (3,700 US gal) of internal fuel, and its engines consume fuel rapidly, especially when the afterburner is active. Fuel was contained within integral tanks in the wings, as well as in a double-skinned section of the fuselage between and outboard of the inlet ducts, underneath the ducts and engines, and forward of the main spar of the tail fin; this provided a total internal capacity of 6,400 kilograms (14,000 lb). A refuelling probe is built into the nose; aerial refuelling was often necessary in operations, as the Mirage IV only had the fuel capacity, even with external drop tanks, to reach the Soviet Union's borders, so refuelling was required to allow for a 'round trip'. In the event of nuclear war between the major powers, it was thought that there would be little point in having the fuel to return, as the host air bases would have been destroyed; instead, surviving Mirage IVs would have diverted to bases in nearby neutral countries following the delivery of their ordnance.
The two-man crew, pilot and navigator, were seated in tandem cockpits, each housed under a separate clamshell canopy. A bombing/navigation radar is housed within an oblique-facing radome underneath the fuselage, between the intakes and aft of the cockpit; much of the Mirage IV's onboard avionics, such as the radar, communications, navigational instrumentation, and bombing equipment, were produced by Thomson-CSF. Other avionics elements were provided by Dassault itself and SFENA; one of the only major subsystems not of French origin onboard was the Marconi-built AD.2300 doppler radar. Free-falling munitions could also be aimed using a ventral blister-mounted periscope from the navigator's position.
The Mirage IV has two pylons under each wing, the inboard pylons normally being used for large drop tanks of 2,500-litre (660 US gal) capacity. The outer pylons typically carried ECM and chaff/flare dispenser pods to supplement the internal jamming and countermeasures systems; on later aircraft, this equipment typically included a Barax NG jammer pod under the port wing and a BOZ expendables dispenser under the starboard wing. No cannon armament was ever fitted to the type. The early Mirage IVA had a fuselage recess under the engines which could hold a single AN-11 or AN-22 nuclear weapon of 60 kt yield. For rocket-assisted take-off (RATO), the Mirage IV can carry 12 solid-fuel rockets mounted diagonally below the wing flaps.
From 1972 onward, 12 aircraft were also equipped to carry the CT52 reconnaissance pod in the bomb recess. These aircraft were designated Mirage IVR for reconnaissance. The CT52 was available in either BA (Basse Altitude, low-level) or HA (Haute Altitude, high-altitude) versions with three or four long-range cameras; a third configuration used an infrared line scanner. The CT52 had no digital systems, relying on older wet-film cameras. The first operational use of the system took place during missions in Chad in September 1974.
During the 1980s, a total of 18 Mirage IVs were retrofitted with a centreline pylon and associated equipment to carry and launch the nuclear Air-Sol Moyenne Portée (ASMP) stand-off missile. The Mirage IVA could theoretically carry up to six large conventional bombs at the cost of drop tanks and ECM pods; such armament was rarely fitted in practice.
== Operational history ==
=== Introduction and early operations ===
In February 1964, deliveries of the Mirage IV to the French Air Force began, and the first French Mirage IV squadron was declared operational on 1 October that year. When fully built up, the force consisted of three wings, each divided into three bomber squadrons of four aircraft; the aircraft operated in pairs, one carrying the nuclear bomb and the other acting as a buddy-refuelling tanker. Each squadron was deployed at a different base to minimise the potential for an enemy strike to knock out the entire bomber force. These squadrons were:
1/91 'Gascogne' based at Mont de Marsan
2/91 'Bretagne' based at Cazaux
3/91 'Beauvaisis' based at Creil
1/93 'Guyenne' based at Istres
2/93 'Cevennes' based at Orange
3/93 'Sambre' based at Cambrai
1/94 'Bourbonnais' based at Avord
2/94 'Marne' based at St-Dizier
3/94 'Arbois' based at Luxeuil
After establishing its own deterrent force, the Force de Dissuasion, more commonly known as the Force de frappe, France withdrew from the military command structure of NATO in 1966. De Gaulle held that an independent nuclear deterrent was necessary to ensure France's independence as a nation, and the operational establishment of the Mirage IV fleet, a critical component of the Force de frappe, was highly influential in his decision to withdraw from NATO. From 1964 to 1971, the Mirage IV was France's sole means of delivering nuclear ordnance, each aircraft being armed with a single 60 kiloton nuclear bomb.
Alert status consisted of an active inventory of 36 Mirage IVs. At any one time 12 aircraft would be in the air, with a further 12 on the ground kept at four minutes' readiness and the final 12 at 45 minutes' readiness, each equipped with an onboard functional nuclear weapon. These 36 active aircraft would be rotated with 26 reserve aircraft; the latter were kept in an airworthy condition or were otherwise subject to maintenance activities. Within the first decade of the type entering service, in excess of 200,000 hours were flown and 40,000 aerial refuelling operations were performed by the Mirage IV fleet alone; at one point, Mirage IV operations were consuming up to 44 per cent of the French Air Force's total spare parts budget.
The primary objectives of the Mirage IVA force were major Soviet cities and bases. With aerial refueling, the plane was able to attack Moscow, Murmansk or various Ukrainian cities when sortieing from French bases. A justification of the Mirage IV given by Armée de l'air Brigadier General Pierre Marie Gallois, an architect of the French nuclear deterrent, was that: "France is not a prize worthy of ten Russian cities".
In order to refuel the Mirage IVA fleet, France purchased 14 (12 plus 2 spares) U.S. Boeing C-135F tankers. Mirage IVAs also often operated in pairs, with one aircraft carrying a weapon and the other carrying fuel tanks and a buddy-refuelling pack, allowing it to refuel its partner en route to the target. Even so, some sources state that some of the mission profiles envisioned were actually one-way, with the crew having no chance of returning after bombing a Soviet city. The Mirage IV's inability to return from such missions had been a point of controversy during the aircraft's early development.
Both flight and ground crews were trained principally by Strategic Air Forces Command 328, stationed at Bordeaux. Several Nord Noratlas aircraft were specially modified with the Mirage IV's radar, control consoles, and additional electrical generators to train navigators; these were later replaced by a pair of customised Dassault Falcon 20s outfitted with much of the Mirage IVP's avionics.
=== Transition and upgrades ===
Initially, the basic attack flight profile was "high-high-high" at a speed of Mach 1.85, engaging targets up to a maximum radius of 3,500 km (2,175 mi). In the late 1960s, when the threat of surface-to-air missile defences made high-altitude flight too hazardous, the Mirage IVA was modified for low-altitude penetration. Flying low, the maximum attack speed was reduced to 1,100 km/h (680 mph) and the combat radius was also decreased. By 1963, the majority of missions involving the Mirage IV were being planned as low-level flights. By 1964, Mirage IVAs were conducting training penetration runs at an altitude of 200 ft, without the assistance of terrain-following radar, which subjected pilots to considerable workload and those on board to high levels of turbulence.
To improve survivability, the French Air Force began dispersing Mirage IVs to pre-prepared rough strips during the 1960s; while the use of hardened bunkers had been assessed, it was found to be financially impractical. By the 1970s, it had become clear that vulnerability of the Mirage IV to air defences, even while flying at low altitudes, had made the delivery of gravity bombs such as the AN-11 or AN-22 impractical. Thus, it was decided to pass a greater share of the deterrent role onto land-based missiles and submarine-launched ballistic missile alternatives; as a result, a single wing of Mirage IVs was stood down in 1976, partially due to fleet-wide attrition losses.
In 1973, it was reported that a force of 40 Mirage IVs would continue to serve as part of France's nuclear deterrent until the 1980s, and that steady improvements were to be undertaken. In 1975, all Mirage IVs were progressively repainted in a new green/grey camouflage scheme. In 1979, in response to the decreasing effectiveness of the free-fall bombs used by both its strategic and tactical nuclear forces, France initiated development of the ASMP stand-off missile; the ASMP would possess a range of up to 400 km (250 mi) and was armed with either a single 150 or 300 kiloton nuclear warhead. Various test launches of dummy and later live ASMPs were performed using the Mirage IV as the launch platform between 1981 and 1983.
In July 1984, a contract was formally issued for the upgrade of a total of 18 Mirage IVAs to carry the ASMP missile in place of traditional bombs; these aircraft were redesignated Mirage IVP (Penetration). The conversion involved a large number of modifications and re-workings of the aircraft: a deep centreline pylon was added, which could accommodate either a single ASMP missile or a CT52 reconnaissance pod; the main radar and electronics suite were replaced by newer counterparts; and the navigation system, flight control system, and various elements of the cockpit were also modified. The first modernised Mirage IVP had performed its maiden flight on 12 October 1982, and the type re-entered active service on 1 May 1986.
In August 1985, a French proposal that would have seen Mirage IVPs stationed at air bases inside neighbouring West Germany was made public; this deployment would have marked a significant philosophical departure from traditional French nuclear defence policy. Aviation authors Bill Gunston and Peter Gilchrist allege that French officials had historically discounted the option of recovering Mirage IVs in friendly territory as unduly optimistic, as those nations might become unfriendly or hostile in the aftermath of a French nuclear attack.
=== Phase-out ===
On 31 July 1996, the Mirage IVP was formally retired from its bomber role, the nuclear mission having been transferred to the newer Dassault Mirage 2000N. EB 2/91 was disbanded and EB 1/91 was redesignated ERS 1/91 (Escadron de Reconnaissance Stratégique, Strategic Reconnaissance Squadron), operating five remaining Mirage IVPs based at Mont-de-Marsan; the remaining aircraft were stored at Chateaudun. In the reconnaissance role, the Mirage IVP saw service over Bosnia, Iraq, Kosovo and Afghanistan.
ERS 1/91 Gascogne's surviving Mirage IVPs were retired in 2005 and are conserved and stored at the Centre d'Instruction Forces Aériennes Stratégiques (CIFAS) at Bordeaux Mérignac. The retirement of all reconnaissance-configured Mirage IVPs in 2005 left the French Air Force's Mirage F1CRs for some time as the only aircraft capable of carrying out aerial reconnaissance missions. The long-term replacement for the Mirage IVP was the Mirage 2000N outfitted with the modern Pod de Reconnaissance Nouvelle Génération (New Generation Reconnaissance Pod), equipped with digital cameras.
The Mirage IV had been popular with its crews, who found it enjoyable to fly. In addition, it required surprisingly little maintenance considering its age and complexity.
== Operators ==
France
French Air Force
== Aircraft on display ==
=== Mirage IV A ===
Mirage IV A s/n:2-AB is on display at the Musée de l'Air et de l'Espace at Paris-Le Bourget.
Mirage IV A s/n:4-AC is on display at Rochefort airbase.
Mirage IV A s/n:6-AG is on display at Savigny-les-Beaune.
Mirage IV A s/n:9-AH is on display at the Musée de l'Air et de l'Espace at Paris-Le Bourget. This aircraft dropped a live nuclear bomb during the Tamouré test.
Mirage IV A s/n:16-AO is on display at St Dizier airbase.
Mirage IV A s/n:18-AQ is on display at Savigny-les-Beaune Museum.
Mirage IV A s/n:31-BC is on display at Musée Européen de l'Aviation de Chasse.
Mirage IV A s/n:32-BE is on display at Orange airbase.
Mirage IV A s/n:43-BP is on display at Mont-de-Marsan Air Base.
Mirage IV A s/n:45-BR was formerly displayed in the Paris Science Museum, but was donated to the Yorkshire Air Museum in 2016; the aircraft arrived in March 2017.
=== Mirage IV P ===
Mirage IV P 1 "AP" is on display at Châteaudun Air Base (CANOPEE Museum).
Mirage IV P 11-AJ is on display at Bordeaux airbase.
Mirage IV P 23-AV is on display at Cazaux airbase.
Mirage IV P 25-AX is on display at Musée de l'Epopée et de l'Industrie Aéronautique.
Mirage IV P 26-AY is on display at Ailes Anciennes Toulouse.
Mirage IV P 28-BA is on display at the Musée de l'Aviation Clément Ader.
Mirage IV P 29-BB is on display at Avord airbase. The display recreates the aircraft in flight, with the wheels retracted and a support attached to the engine nozzle.
Mirage IV P 36 "BI" is on display at Istres airbase.
Mirage IV P 59 "CF" is on display at Creil airbase.
Mirage IV P 61 "CH" is on display at St Dizier Aero retro Museum.
Mirage IV P 62 "CI" is on display at the Musée de l'Air et de l'Espace at Paris-Le Bourget.
== Specifications (Mirage IVA) ==
Data from Pénétration Augmentation

General characteristics
Crew: 2 (pilot & navigator/bombardier)
Length: 23.49 m (77 ft 1 in)
Wingspan: 11.85 m (38 ft 11 in)
Height: 5.4 m (17 ft 9 in)
Wing area: 78 m2 (840 sq ft)
Airfoil: root: 3.8%; tip: 3.2%
Empty weight: 14,500 kg (31,967 lb)
Gross weight: 31,600 kg (69,666 lb)
Max takeoff weight: 33,475 kg (73,800 lb)
Powerplant: 2 × SNECMA Atar 9K-50 afterburning turbojet engines, 49.03 kN (11,020 lbf) thrust each dry, 70.61 kN (15,870 lbf) with afterburner
Performance
Maximum speed: 2,340 km/h (1,450 mph, 1,260 kn) at 13,125 m (43,100 ft)
Maximum speed: Mach 2.2
Combat range: 1,240 km (770 mi, 670 nmi)
Ferry range: 4,000 km (2,500 mi, 2,200 nmi)
Service ceiling: 20,000 m (66,000 ft)
Time to altitude: 11,000 m (36,000 ft) in 4 min 15 sec
Armament
Bombs:
1 × AN-11 free-fall nuclear bomb or
1 × AN-22 free-fall nuclear bomb or
1 × Air-Sol Moyenne Portée nuclear missile (Mirage IVP)
16 × 454 kg (1,000 lb) free-fall conventional bombs
Avionics
CT-52 sensor pod for strategic reconnaissance
== See also ==
Avro Vulcan
Convair B-58 Hustler
North American A-5 Vigilante
Tupolev Tu-22
== References ==
=== Citations ===
=== Bibliography ===
Donald, David and John Lake. Encyclopedia of World Military Aircraft. London: Aerospace Publishing, 1994. ISBN 1-874023-95-6.
Gunston, Bill. Bombers of the West. New York: Charles Scribner's Sons, 1973. ISBN 0-7110-0456-0.
Gunston, Bill and Peter Gilchrist. Jet Bombers: From the Messerschmitt Me 262 to the Stealth B-2. Osprey, 1993. ISBN 1-85532-258-7.
Jackson, Paul. Modern Combat Aircraft 23: Mirage. Shepperton, UK: Ian Allan, 1985. ISBN 0-7110-1512-0.
Jackson, Paul. "Pénétration Augmentation". Air International, April 1987, Vol. 32 No. 4. pp. 163–171. ISSN 0306-5634.
Lake, Jon. The Great Book of Bombers: The World's Most Important Bombers from World War I to the Present Day. Zenith Imprint, 2002. ISBN 0-7603-1347-4.
Michell, Simon. Jane's Civil and Military Upgrades 1994–95. Coulsdon, UK: Jane's Information Group, 1994. ISBN 0-7106-1208-7.
Sokolski, Henry D. Getting MAD: Nuclear Mutual Assured Destruction, Its Origins and Practice. DIANE Publishing, November 2004. ISBN 1-4289-1033-6.
Tucker, Spencer. The Encyclopedia of Middle East Wars [5 Volumes]: The United States in the Persian Gulf, Afghanistan, and Iraq Conflicts. ABC-CLIO, 2010. ISBN 1-8510-9947-6.
"Tangible Mirages: The Most Successful Family of European Aeroplanes." Flight International, 4 January 1962. pp. 18–21.
Wagner, Paul J. "Air Force Tac Recce Aircraft: NATO and Non-aligned Western European Air Force Tactical Reconnaissance Aircraft of the Cold War (1949–1989)." Dorrance Publishing, 2009. ISBN 1-4349-9458-9.
Withington, Thomas. "A Grand Illusion? The RAF & the Spey Mirage IV, 1965". The Aviation Historian. No. 33. 2020. pp. 38–45.
Łaz, Marek; Senkowski, Robert (1999). "Strategiczny samolot bombowy Mirage IV". Lotnictwo Wojskowe (in Polish). No. 2(5)/1999. Magnum-X. ISSN 1505-1196.
== Further reading ==
Cuny, Jean (1989). Les avions de combat français, 2: Chasse lourde, bombardement, assaut, exploration [French Combat Aircraft 2: Heavy Fighters, Bombers, Attack, Reconnaissance]. Docavia (in French). Vol. 30. Ed. Larivière. OCLC 36836833.
== External links ==
Mirage IV information and photos by Yves Fauconnier (French)
Mirage IV data from former Forces Aériennes Stratégiques website Archived 17 July 2007 at the Wayback Machine (French)
AirForceWorld.com Mirage IV bomber page (English) |
David Schwarz (aviation inventor) | David Schwarz (Hungarian: Schwarz Dávid; Croatian: David Švarc, pronounced [dǎʋit ʃʋârt͡s]; 20 December 1850 – 13 January 1897) was a Hungarian aviation pioneer. He is known for creating an airship with a rigid envelope made entirely of metal. Schwarz died only months before the airship was flown. Some sources have claimed that Count Ferdinand von Zeppelin purchased Schwarz's airship patent from his widow, a claim which has been disputed. He was the father of the opera and operetta soprano Vera Schwarz (1888–1964).
== Personal life ==
Schwarz was born in Keszthely, Kingdom of Hungary, then part of the Austrian Empire. He was a timber merchant, raised in Županja, but he spent most of his life in Zagreb, Kingdom of Croatia-Slavonia. He was Jewish.
Sources for his date of birth vary. The OCLC, citing Rotem, gives it as 7 December 1850, while Brockhaus gives it as 20 December 1850. Both the OCLC and Brockhaus show Schwarz's place of birth as Zalaegerszeg, Hungary.
Although Schwarz had no special technical training, he became interested in technology and developed improvements for woodcutting machinery.
== First airship thoughts ==
Schwarz first became interested in airships during the 1880s. This occurred while working away from home supervising the felling of some forest land. As the work took longer than planned, he had his wife send him books to while away the evenings. These included a mechanics textbook. Although Schwarz became excited, it is not certain that this inspired him to build his own airship. His lumber business suffered due to his obsession and, like other aviation pioneers, his project attracted mockery. Nevertheless, his wife Melanie supported him. Schwarz proposed aluminium, then a very new material, for construction.
Having worked out the design of an all-metal airship, Schwarz then offered his ideas to the Austro-Hungarian war minister. Some interest was shown, but the government was not ready to provide financial support.
The Russian military attaché, a technically educated man, advised Schwarz to demonstrate his airship in St. Petersburg. Schwarz began construction there in late 1892, with the industrialist Carl Berg supplying the aluminium and necessary funding, and an airship using Schwarz's ideas was built in 1893. Schwarz, and later his widow, assumed that test flights would also be made there, but this did not happen.
Problems arose during gas-filling: on inflation, the framework collapsed. Schwarz apparently intended the metal skin to contain the gas directly, without internal gas bags. The Russian engineer Kowanko pointed out that the lack of a ballonet would cause stresses on the skin during ascent and descent; the skin was also not airtight.
The first airship's specifications were:
Power: four-cylinder engine weighing 298 kg (657 lb) producing 10 horsepower (7.5 kW) at 480 rpm
Volume: 3,280 m3 (116,000 cu ft)
Empty weight: 2,525 kg (5,567 lb)
Gross lift: 958 kg (2,112 lb)
Ballast and fuel: 170 kg (370 lb)
Equipment and three people: 385 kg (849 lb)
Net lift: 85 kg (187 lb)
The circumstances of Schwarz's return are unclear; there were reports of a hasty departure from Russia.
== Second airship in Berlin ==
In 1894, Carl Berg procured a contract to build an airship for the Royal Prussian government, referring to Schwarz as the originator of the idea. Berg already had experience working with the then-novel aluminium, and would later manufacture components for Zeppelin's first airship. With financial and technical help from Berg and his firm, the airship was designed and built.
Construction began in 1895 at the Tempelhof field in Berlin. For a time the Prussian Airship Battalion placed its grounds and personnel at Schwarz's disposal. The components were produced in Carl Berg's Eveking Westphalia factory and, under the direction of Schwarz, assembled in Berlin. A gondola, also of aluminium, was fixed to the framework. Attached to the gondola was a 12 hp (8.9 kW) Daimler engine that drove aluminium propellers. One of the propellers was used to steer the craft.
In June 1896 Carl Berg sent a card to his stepfather from Moscow, apparently indicating that he had sought information on Schwarz, had grown cynical about the delays, and was nearly convinced he had been swindled.
Due to delays, the airship was first filled with gas and tested on 9 October 1896 (some sources claim the test was performed on 8 October 1896), but the results were not satisfactory because the hydrogen delivered by the Vereinigte Chemische Fabriken from Leopoldshall (part of Staßfurt) was not of the required purity and so did not provide enough lift. It was determined that gas providing a lift of 1.15 kg per cubic metre was needed. Gas of that quality could not be obtained for some time, and a test flight could not be made until November 1897, roughly ten months after Schwarz's death.
== Death and maiden flight ==
Schwarz did not live to see his airship fly. Between 1892 and 1896 he traveled frequently, which affected his health. Shortly before his death he received news that his airship was ready to be filled with gas. On 13 January 1897 he collapsed outside the "Zur Linde" restaurant in Vienna, and died minutes later from heart failure, aged 44. Historical sources speak of a Blutsturz (a term meaning either hemoptysis or hematemesis).
David Schwarz was buried in Zentralfriedhof, Vienna.
Carl Berg required confirmation of Schwarz's death, suspecting he had fled to sell his secrets. Nevertheless, Berg resumed the work with Melanie, Schwarz's widow, and together with the Airship Battalion they completed the airship with the addition of a gas relief valve.
This second airship had these specifications:
Volume: 3,250 m3 (114,700 cu ft)
Length: 47.55 m (156.0 ft)
Diameter: 13.49 metres (44 feet 3 inches)
Engine: 16 horsepower (12 kW) Daimler
Four propellers: one of 2.6 metres (8 feet 6 inches) diameter between the gondola and the envelope, two of 2 metres (6 feet 7 inches) diameter mounted on brackets either side of the envelope, and a fourth of 2 metres (6 feet 7 inches) diameter revolving in the horizontal plane mounted below the gondola to drive the craft up or down.
Envelope: 0.2 mm aluminium plates riveted to framework.
A later structural analysis based on the drawings concluded that it was defective, with the skin taking most of the shear stresses: distortions of the skin can be seen in a photo of the craft in flight.
The second airship was tested with partial success at Tempelhof near Berlin, Germany, on 3 November 1897. Airship Battalion mechanic Ernst Jägels climbed into the gondola and lifted off at 3 p.m. However, the airship broke free of the ground crew and, because it rose quickly, Jägels disengaged the vertical axis 'lift' propeller. At an altitude of about 130 m (430 ft) the drive belt slipped off the left propeller, resulting in the ship "...[turning] broadside to the wind, [and with the result that] the forward tether broke free." As the ship rose to 510 m (1,670 ft) the drive belt slipped off the right propeller, the airship thus losing all propulsion. Jägels then opened the newly fitted gas release valve and landed safely, but the ship turned over and collapsed and was damaged beyond repair.
== Legacy ==
About the time of the trial flight and for decades after, various accounts, sometimes conflicting or misleading, were written of the events. Later, Berg, as well as his son, would write negatively of their experiences with Schwarz.
Some sources state that Count Ferdinand von Zeppelin purchased Schwarz's patent from his widow in 1898, while others claim that the count used the design. However, Hugo Eckener, who worked with Count Zeppelin, dismissed these claims:
"Count Zeppelin negotiated with Herr Berg's firm for the purchase of the aluminium for his own ship. The firm, however, was under contract to supply aluminium for airships exclusively to the Schwarz undertaking. It had to obtain release from this contract by an arrangement with Schwarz' heirs before it could deliver aluminium to Count Zeppelin. That is the origin of the legend."
Cvi Rotem (1903–1980) wrote the only known biography of Schwarz, titled David Schwarz: Tragödie des Erfinders. Zur Geschichte des Luftschiffes. Rotem wrote that both Berg and Schwarz wished to keep their work secret.
From 3 December 2000 to 20 April 2001 the Museen der Stadt Lüdenscheid held an exhibition which covered Berg, Schwarz and Zeppelin history from 1892 to 1932, with displays of documents, photographs and airship remnants.
== Notes ==
== References ==
== Bibliography ==
== External links ==
David Schwarz in the German National Library catalogue
Library of Congress Linked Authority File for David Schwarz
Sucur, Ante. "The Airship of David Schwarz". Retrieved 19 October 2009.
Sucur, Ante (2008). "The Airship of David Schwarz / The Construction and Testing of the Airship". Archived from the original on 6 May 2008.
Schwarz's Airship |
De Havilland Comet | The de Havilland DH.106 Comet is the world's first commercial jet airliner. Developed and manufactured by de Havilland in the United Kingdom, the Comet 1 prototype first flew in 1949. It features an aerodynamically clean design with four de Havilland Ghost turbojet engines buried in the wing roots, a pressurised cabin, and large windows. For the era, it offered a relatively quiet, comfortable passenger cabin and was commercially promising at its debut in 1952.
Within a year of the airliner's entry into service, three Comets were lost in highly publicised accidents after suffering catastrophic mishaps mid-flight. Two of these were found to be caused by structural failure resulting from metal fatigue in the airframe, a phenomenon not fully understood at the time; the other was due to overstressing of the airframe during flight through severe weather. The Comet was withdrawn from service and extensively tested. Design and construction flaws, including improper riveting and dangerous stress concentrations around square cut-outs for the ADF (automatic direction finder) antennas were ultimately identified. As a result, the Comet was extensively redesigned, with structural reinforcements and other changes. Rival manufacturers heeded the lessons learned from the Comet when developing their own aircraft.
Although sales never fully recovered, the improved Comet 2 and the prototype Comet 3 culminated in the redesigned Comet 4 series which debuted in 1958 and remained in commercial service until 1981. The Comet was also adapted for a variety of military roles such as VIP, medical and passenger transport, as well as surveillance; the last Comet 4, used as a research platform, made its final flight in 1997. The most extensive modification resulted in a specialised maritime patrol derivative, the Hawker Siddeley Nimrod, which remained in service with the Royal Air Force until 2011, over 60 years after the Comet's first flight.
== Development ==
=== Origins ===
On 11 March 1943, the Cabinet of the United Kingdom formed the Brabazon Committee, which was tasked with determining the UK's airliner needs after the conclusion of the Second World War. One of its recommendations was for the development and production of a pressurised, transatlantic mailplane that could carry 1 long ton (2,200 lb; 1,000 kg) of payload at a cruising speed of 400 mph (640 km/h) non-stop.
Aviation company de Havilland was interested in this requirement, but chose to challenge the then widely held view that jet engines were too fuel-hungry and unreliable for such a role. As a result, committee member Sir Geoffrey de Havilland, head of the de Havilland company, used his personal influence and his company's expertise to champion the development of a jet-propelled aircraft; proposing a specification for a pure turbojet-powered design.
The committee accepted the proposal, calling it the "Type IV" (of five designs), and in 1945 awarded a development and production contract to de Havilland under the designation Type 106. The type and design were to be so advanced that de Havilland had to undertake the design and development of both the airframe and the engines. This was because in 1945 no turbojet engine manufacturer in the world was drawing up a design specification for an engine with the thrust and specific fuel consumption that could power an aircraft at the proposed cruising altitude (40,000 ft (12,000 m)), speed, and transatlantic range as was called for by the Type 106. First-phase development of the DH.106 focused on short- and intermediate-range mailplanes with small passenger compartments and as few as six seats, before being redefined as a long-range airliner with a capacity of 24 seats. Out of all the Brabazon designs, the DH.106 was seen as the riskiest: both in terms of introducing untried design elements and for the financial commitment involved. Nevertheless, the British Overseas Airways Corporation (BOAC) found the Type IV's specifications attractive, and initially proposed a purchase of 25 aircraft; in December 1945, when a firm contract was created, the order total was revised to 10.
A design team was formed in 1946 under the leadership of chief designer Ronald Bishop, who had been responsible for the Mosquito fighter-bomber. Several unorthodox configurations were considered, ranging from canard to tailless designs; all were rejected. The Ministry of Supply was interested in the most radical of the proposed designs, and ordered two experimental tailless DH 108s to serve as proof-of-concept aircraft for testing swept-wing configurations in both low-speed and high-speed flight. During flight tests, the DH 108 gained a reputation for being accident-prone and unstable, leading de Havilland and BOAC to gravitate to conventional configurations and, necessarily, designs with less technical risk. The DH 108s were later modified to test the DH.106's power controls.
In September 1946, before completion of the DH 108s, BOAC requests necessitated a redesign of the DH.106 from its previous 24-seat configuration to a larger 36-seat version. With no time to develop the technology necessary for a proposed tailless configuration, Bishop opted for a more conventional 20-degree swept-wing design with unswept tail surfaces, married to an enlarged fuselage accommodating 36 passengers in a four-abreast arrangement with a central aisle. Replacing previously specified Halford H.1 Goblin engines, four new, more-powerful Rolls-Royce Avons were to be incorporated in pairs buried in the wing roots; Halford H.2 Ghost engines were eventually applied as an interim solution while the Avons cleared certification. The redesigned aircraft was named the DH.106 Comet in December 1947. Revised first orders from BOAC and British South American Airways totalled 14 aircraft, with delivery projected for 1952.
=== Testing and prototypes ===
As the Comet represented a new category of passenger aircraft, more rigorous testing was a development priority. From 1947 to 1948, de Havilland conducted an extensive research and development phase, including the use of several stress test rigs at Hatfield Aerodrome for small components and large assemblies alike. Sections of pressurised fuselage were subjected to high-altitude flight conditions via a large decompression chamber on-site and tested to failure. Tracing fuselage failure points proved difficult with this method, and de Havilland ultimately switched to conducting structural tests with a water tank that could be safely configured to increase pressures gradually. The entire forward fuselage section was tested for metal fatigue by repeatedly pressurising to 2.75 pounds per square inch (19.0 kPa) overpressure and depressurising through more than 16,000 cycles, equivalent to about 40,000 hours of airline service. The windows were also tested under a pressure of 12 psi (83 kPa), 4.75 psi (32.8 kPa) above expected pressures at the normal service ceiling of 36,000 ft (11,000 m). One window frame survived 100 psi (690 kPa), about 1,250 per cent over the maximum pressure it was expected to encounter in service.
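The test figures quoted above lend themselves to a quick consistency check. The following sketch is an editorial illustration only, not de Havilland's test procedure; every number in it is taken directly from the paragraph above. It derives the implied flight hours represented by each pressurisation cycle, and the cabin differential pressure implied by the window-test margin:

```python
# Back-of-the-envelope check of the quoted Comet test figures
# (illustrative only; all inputs come from the text above).

cycles = 16_000          # pressurisation cycles applied to the forward fuselage
service_hours = 40_000   # stated equivalent airline service, in flight hours

hours_per_cycle = service_hours / cycles
print(f"Each cycle represented about {hours_per_cycle:.1f} flight hours")

window_test_psi = 12.0   # pressure the windows were tested under
margin_psi = 4.75        # stated margin above expected pressure at 36,000 ft

expected_psi = window_test_psi - margin_psi
print(f"Implied cabin differential at the service ceiling: {expected_psi:.2f} psi")
```

The 2.5 hours per cycle figure corresponds to a typical sector length of the era, which is presumably why that equivalence was chosen.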
The first prototype DH.106 Comet (carrying Class B markings G-5-1) was completed in 1949 and was initially used to conduct ground tests and brief early flights. The prototype's maiden flight, out of Hatfield Aerodrome, took place on 27 July 1949 and lasted 31 minutes. At the controls was de Havilland chief test pilot John "Cats Eyes" Cunningham, a famous night-fighter pilot of the Second World War, along with co-pilot Harold "Tubby" Waters, engineers John Wilson (electrics) and Frank Reynolds (hydraulics), and flight test observer Tony Fairbrother.
The prototype was registered G-ALVG just before it was publicly displayed at the 1949 Farnborough Airshow before the start of flight trials. A year later, the second prototype G-5-2 made its maiden flight. The second prototype was registered G-ALZK in July 1950 and it was used by the BOAC Comet Unit at Hurn from April 1951 to carry out 500 flying hours of crew training and route-proving. Australian airline Qantas also sent its own technical experts to observe the performance of the prototypes, seeking to quell internal uncertainty about its prospective Comet purchase. Both prototypes could be externally distinguished from later Comets by the large single-wheeled main landing gear, which was replaced on production models starting with G-ALYP by four-wheeled bogies.
== Design ==
=== Overview ===
The Comet was an all-metal low-wing cantilever monoplane powered by four jet engines; it had a four-place cockpit occupied by two pilots, a flight engineer, and a navigator. The clean, low-drag design of the aircraft featured many design elements that were fairly uncommon at the time, including a swept-wing leading edge, integral wing fuel tanks, and four-wheel bogie main undercarriage units designed by de Havilland. Two pairs of turbojet engines (on the Comet 1s, Halford H.2 Ghosts, subsequently known as de Havilland Ghost 50 Mk1s) were buried in the wings.
The original Comet was the approximate length of, but not as wide as, the later Boeing 737-100, and carried fewer people in a significantly more-spacious environment. BOAC installed 36 reclining "slumberseats" with 45 in (1,100 mm) centres on its first Comets, allowing for greater leg room in front and behind; Air France had 11 rows of seats with four seats to a row installed on its Comets. Large picture window views and table seating accommodations for a row of passengers afforded a feeling of comfort and luxury unusual for transportation of the period. Amenities included a galley that could serve hot and cold food and drinks, a bar, and separate men's and women's toilets. Provisions for emergency situations included several life rafts stored in the wings near the engines, and individual life vests were stowed under each seat.
One of the most striking aspects of Comet travel was the quiet, "vibration-free flying" as touted by BOAC. For passengers used to propeller-driven airliners, smooth and quiet jet flight was a novel experience.
=== Avionics and systems ===
For ease of training and fleet conversion, de Havilland designed the Comet's flight deck layout with a degree of similarity to the Lockheed Constellation, an aircraft that was popular at the time with key customers such as BOAC. The cockpit included full dual-controls for the captain and first officer, and a flight engineer controlled several key systems, including fuel, air conditioning and electrical systems. The navigator occupied a dedicated station, with a table across from the flight engineer.
Several of the Comet's avionics systems were new to civil aviation. One such feature was irreversible, powered flight controls, which increased the pilot's ease of control and the safety of the aircraft by preventing aerodynamic forces from changing the directed positions and placement of the aircraft's control surfaces. Many of the control surfaces, such as the elevators, were equipped with a complex gearing system as a safeguard against accidentally over-stressing the surfaces or airframe at higher speed ranges.
The Comet had a total of four hydraulic systems: two primaries, one secondary, and a final emergency system for basic functions such as lowering the undercarriage. The undercarriage could also be lowered by a combination of gravity and a hand-pump. Power was syphoned from all four engines for the hydraulics, cabin air conditioning, and the de-icing system; these systems had operational redundancy in that they could keep working even if only a single engine was active. The majority of hydraulic components were centred in a single avionics bay. A pressurised refuelling system, developed by Flight Refuelling Ltd, allowed the Comet's fuel tanks to be refuelled at a far greater rate than by other methods.
The cockpit was significantly altered for the Comet 4's introduction, on which an improved layout focusing on the onboard navigational suite was introduced. An EKCO E160 radar unit was installed in the Comet 4's nose cone, providing search functions as well as ground and cloud-mapping capabilities, and a radar interface was built into the Comet 4 cockpit along with redesigned instruments.
Sud-Est's design bureau, while working on the Sud Aviation Caravelle in 1953, licensed several design features from de Havilland, building on previous collaborations on earlier licensed designs, including the DH 100 Vampire; the nose and cockpit layout of the Comet 1 was grafted onto the Caravelle. In 1969, when the Comet 4's design was modified by Hawker Siddeley to become the basis for the Nimrod, the cockpit layout was completely redesigned and bore little resemblance to its predecessors except for the control yoke.
=== Fuselage ===
The Comet's diverse geographic destinations and its cabin pressurisation alike demanded the use of a high proportion of alloys, plastics, and other materials new to civil aviation throughout the aircraft in order to meet certification requirements. The Comet's high cabin pressure and high operating speeds were unprecedented in commercial aviation, making its fuselage design an experimental process. At its introduction, Comet airframes would be subjected to an intense, high-speed operating schedule which included simultaneous extreme heat from desert airfields and frosty cold from the kerosene-filled fuel tanks, still cold from cruising at high altitude.
The Comet's thin metal skin was composed of advanced new alloys and was both riveted and chemically bonded, which saved weight and reduced the risk of fatigue cracks spreading from the rivets. The chemical bonding process was accomplished using a new adhesive, Redux, which was liberally used in the construction of the wings and the fuselage of the Comet; it also had the advantage of simplifying the manufacturing process.
When several of the fuselage alloys were discovered to be vulnerable to weakening via metal fatigue, a detailed routine inspection process was introduced. As well as thorough visual inspections of the outer skin, mandatory structural sampling was routinely conducted by both civil and military Comet operators. The need to inspect areas not easily viewable by the naked eye led to the introduction of widespread radiography examination in aviation; this also had the advantage of detecting cracks and flaws too small to be seen otherwise.
Operationally, the design of the cargo holds led to considerable difficulty for the ground crew, especially baggage handlers at the airports. The cargo hold had its doors located directly underneath the aircraft, so each item of baggage or cargo had to be loaded vertically upward from the top of the baggage truck, then slid along the hold floor to be stacked inside. The individual pieces of luggage and cargo also had to be retrieved in a similarly slow manner at the arriving airport.
=== Propulsion ===
The Comet was powered by two pairs of turbojet engines buried in the wings close to the fuselage. Chief designer Bishop chose the Comet's embedded-engine configuration because it avoided the drag of podded engines and allowed for a smaller fin and rudder since the hazards of asymmetric thrust were reduced. The engines were outfitted with baffles to reduce noise emissions, and extensive soundproofing was also implemented to improve passenger conditions.
Placing the engines within the wings had the advantage of a reduction in the risk of foreign object damage, which could seriously damage jet engines. The low-mounted engines and good placement of service panels also made aircraft maintenance easier to perform. The Comet's buried-engine configuration increased its structural weight and complexity. Armour had to be placed around the engine cells to contain debris from any serious engine failures; also, placing the engines inside the wing required a more complicated wing structure.
The Comet 1 featured 5,050 lbf (22.5 kN) de Havilland Ghost 50 Mk1 turbojet engines. Two hydrogen peroxide-powered de Havilland Sprite booster rockets were originally intended to be installed to boost takeoff under hot and high altitude conditions from airports such as Khartoum and Nairobi. These were tested on 30 flights, but the Ghosts alone were considered powerful enough and some airlines concluded that rocket motors were impractical. Sprite fittings were retained on production aircraft. Comet 1s subsequently received more powerful 5,700 lbf (25 kN) Ghost DGT3 series engines.
From the Comet 2 onward, the Ghost engines were replaced by the newer and more powerful 7,000 lbf (31 kN) Rolls-Royce Avon AJ.65 engines. To achieve optimum efficiency with the new powerplants, the air intakes were enlarged to increase mass air flow. Upgraded Avon engines were introduced on the Comet 3, and the Avon-powered Comet 4 was highly praised for its takeoff performance from high-altitude locations such as Mexico City, where it was operated by Mexicana de Aviación, a major scheduled passenger air carrier.
== Operational history ==
=== Introduction ===
The earliest production aircraft, registered G-ALYP ("Yoke Peter"), first flew on 9 January 1951 and was subsequently lent to BOAC for development flying by its Comet Unit. On 22 January 1952, the fifth production aircraft, registered G-ALYS, received the first Certificate of Airworthiness awarded to a Comet, six months ahead of schedule. On 2 May 1952, as part of BOAC's route-proving trials, G-ALYP took off on the world's first jetliner flight with fare-paying passengers and inaugurated scheduled service from London to Johannesburg. The final Comet from BOAC's initial order, registered G-ALYZ, began flying in September 1952 and carried cargo along South American routes while simulating passenger schedules.
Prince Philip returned from the Helsinki Olympic Games aboard G-ALYS on 4 August 1952. Queen Elizabeth, the Queen Mother and Princess Margaret were guests on a special flight of the Comet on 30 June 1953, hosted by Sir Geoffrey and Lady de Havilland. Flights on the Comet were about twice as fast as those of advanced piston-engined aircraft such as the Douglas DC-6 (490 mph (790 km/h) vs 315 mph (507 km/h), respectively), and a faster rate of climb further cut flight times. In August 1953 BOAC scheduled the nine-stop London to Tokyo flight by Comet at 36 hours, compared with 86 hours and 35 minutes on its Argonaut (a DC-4 variant) piston airliner. (Pan Am's DC-6B was scheduled for 46 hours 45 minutes.) The five-stop flight from London to Johannesburg was scheduled at 21 hr 20 min.
In their first year, Comets carried 30,000 passengers. As the aircraft could be profitable with a load factor as low as 43 per cent, commercial success was expected. The Ghost engines allowed the Comet to fly above weather that competitors had to fly through. They ran smoothly and were less noisy than piston engines, had low maintenance costs and were fuel-efficient above 30,000 ft (9,100 m). In summer 1953, eight BOAC Comets left London each week: three to Johannesburg, two to Tokyo, two to Singapore and one to Colombo.
In 1953, the Comet appeared to have achieved success for de Havilland. Popular Mechanics wrote that Britain had a lead of three to five years on the rest of the world in jetliners. As well as the sales to BOAC, two French airlines, Union Aéromaritime de Transport and Air France, each acquired three Comet 1As, an upgraded variant with greater fuel capacity, for flights to West Africa and the Middle East. A slightly longer version of the Comet 1 with more powerful engines, the Comet 2, was being developed, and orders were placed by Air India, British Commonwealth Pacific Airlines, Japan Air Lines, Linea Aeropostal Venezolana, and Panair do Brasil. American carriers Capital Airlines, National Airlines and Pan Am placed orders for the planned Comet 3, an even-larger, longer-range version for transatlantic operations. Qantas was interested in the Comet 1 but concluded that a version with more range and better takeoff performance was needed for the London to Canberra route.
=== Early hull losses ===
On 26 October 1952, the Comet suffered its first hull loss when a BOAC flight departing Rome's Ciampino airport failed to become airborne and ran into rough ground at the end of the runway. Two passengers sustained minor injuries, but the aircraft, G-ALYZ, was a write-off. On 3 March 1953, a new Canadian Pacific Airlines Comet 1A, registered CF-CUN and named Empress of Hawaii, failed to become airborne while attempting a night takeoff from Karachi, Pakistan, on a delivery flight to Australia. The aircraft plunged into a dry drainage canal and collided with an embankment, killing all five crew and six passengers on board. The accident was the first fatal jetliner crash. In response, Canadian Pacific cancelled its remaining order for a second Comet 1A and never operated the type in commercial service.
Both early accidents were originally attributed to pilot error, as overrotation had led to a loss of lift from the leading edge of the aircraft's wings. It was later determined that the Comet's wing profile experienced a loss of lift at a high angle of attack, and its engine inlets also suffered a lack of pressure recovery in the same conditions. As a result, de Havilland re-profiled the wings' leading edge with a pronounced "droop", and wing fences were added to control spanwise flow. A fictionalised investigation into the Comet's takeoff accidents was the subject of the novel Cone of Silence (1959) by Arthur David Beaty, a former BOAC captain. Cone of Silence was made into a film in 1960, and Beaty also recounted the story of the Comet's takeoff accidents in a chapter of his non-fiction work, Strange Encounters: Mysteries of the Air (1984).
The Comet's second fatal accident occurred on 2 May 1953, when BOAC Flight 783, a Comet 1, registered G-ALYV, crashed in a severe thundersquall six minutes after taking off from Calcutta-Dum Dum (now Netaji Subhash Chandra Bose International Airport), India, killing all 43 on board. Witnesses observed the wingless Comet on fire plunging into the village of Jagalgori, leading investigators to suspect structural failure.
==== India Court of Inquiry ====
After the loss of G-ALYV, the Government of India convened a court of inquiry to examine the cause of the accident. Professor Natesan Srinivasan joined the inquiry as the main technical expert. A large portion of the aircraft was recovered and reassembled at Farnborough, during which the break-up was found to have begun with a left elevator spar failure in the horizontal stabilizer. The inquiry concluded that the aircraft had encountered extreme negative g-forces during takeoff; severe turbulence generated by adverse weather was determined to have induced down-loading, leading to the loss of the wings. Examination of the cockpit controls suggested that the pilot may have inadvertently over-stressed the aircraft when pulling out of a steep dive by over-manipulation of the fully powered flight controls. Investigators did not consider metal fatigue as a contributory cause.
The inquiry's recommendations revolved around the enforcement of stricter speed limits during turbulence, and two significant design changes also resulted: all Comets were equipped with weather radar, and the "Q feel" system was introduced, which ensured that control column forces (commonly called stick forces) would be proportional to control loads. This artificial feel was the first of its kind in any aircraft. The Comet 1 and 1A had been criticised for a lack of "feel" in their controls, and investigators suggested that this might have contributed to the pilot's alleged over-stressing of the aircraft; Comet chief test pilot John Cunningham contended that the jetliner flew smoothly and was highly responsive, in a manner consistent with other de Havilland aircraft.
=== Comet disasters of 1954 ===
Just over a year later, Rome's Ciampino airport, the site of the first Comet hull loss, was the origin of a more-disastrous Comet flight. On 10 January 1954, 20 minutes after taking off from Ciampino, the first production Comet, G-ALYP, broke up in mid-air while operating BOAC Flight 781 and crashed into the Mediterranean off the Italian island of Elba with the loss of all 35 on board. With no witnesses to the disaster and only partial radio transmissions as incomplete evidence, no obvious reason for the crash could be deduced. Engineers at de Havilland immediately recommended 60 modifications aimed at any possible design flaw, while the Abell Committee met to determine potential causes of the crash. BOAC also voluntarily grounded its Comet fleet pending investigation into the causes of the accident.
==== Abell Committee Court of Inquiry ====
Media attention centred on potential sabotage; other speculation ranged from clear-air turbulence to an explosion of vapour in an empty fuel tank. The Abell Committee focused on six potential aerodynamic and mechanical causes: control flutter (which had led to the loss of DH 108 prototypes), structural failure due to high loads or metal fatigue of the wing structure, failure of the powered flight controls, failure of the window panels leading to explosive decompression, or fire and other engine problems. The committee concluded that fire was the most likely cause of the problem, and changes were made to the aircraft to protect the engines and wings from damage that might lead to another fire.
During the investigation, the Royal Navy conducted recovery operations. The first pieces of wreckage were discovered on 12 February 1954 and the search continued until September 1954, by which time 70 per cent by weight of the main structure, 80 per cent of the power section, and 50 per cent of the aircraft's systems and equipment had been recovered. The forensic reconstruction effort had just begun when the Abell Committee reported its findings. No apparent fault in the aircraft was found, and the British government decided against opening a further public inquiry into the accident. The prestigious nature of the Comet project, particularly for the British aerospace industry, and the financial impact of the aircraft's grounding on BOAC's operations both served to pressure the inquiry to end without further investigation. Comet flights resumed on 23 March 1954.
On 8 April 1954, Comet G-ALYY ("Yoke Yoke"), on charter to South African Airways, was on a leg from Rome to Cairo (of a longer route, SA Flight 201 from London to Johannesburg), when it crashed in the Mediterranean near Naples with the loss of all 21 passengers and crew on board. The Comet fleet was immediately grounded once again and a large investigation board was formed under the direction of the Royal Aircraft Establishment (RAE). Prime Minister Winston Churchill tasked the Royal Navy with helping to locate and retrieve the wreckage so that the cause of the accident could be determined. The Comet's Certificate of Airworthiness was revoked, and Comet 1 line production was suspended at the Hatfield factory while the BOAC fleet was permanently grounded, cocooned and stored.
==== Cohen Committee Court of Inquiry ====
On 19 October 1954, the Cohen Committee was established to examine the causes of the Comet crashes. Chaired by Lord Cohen, the committee tasked an investigation team led by Sir Arnold Hall, Director of the RAE at Farnborough, to perform a more-detailed investigation. Hall's team began considering fatigue as the most likely cause of both accidents and initiated further research into measurable strain on the aircraft's skin. With the recovery of large sections of G-ALYP from the Elba crash and BOAC's donation of an identical airframe, G-ALYU, for further examination, an extensive "water torture" test eventually provided conclusive results. This time, the entire fuselage was tested in a dedicated water tank that was built specifically at Farnborough to accommodate its full length.
In water-tank testing, engineers subjected G-ALYU to repeated pressurisation and over-pressurisation, and on 24 June 1954, after 3,057 flight cycles (1,221 actual and 1,836 simulated), G-ALYU burst open. Hall, Geoffrey de Havilland and Bishop were immediately called to the scene, where the water tank was drained to reveal that the fuselage had ripped open at a bolt hole forward of the forward-left escape hatch cut-out. The failure then ran longitudinally along a fuselage stringer at the widest point of the fuselage and through an escape hatch cut-out. The skin thickness was found to be insufficient to distribute the load across the structure, leading to overloading of the fuselage frames adjacent to cut-outs (Cohen Inquiry accident report, Fig. 7), and those frames lacked the strength to arrest the propagating crack. Although the fuselage failed after a number of cycles representing three times the life of G-ALYP at the time of its accident, this was still much earlier than expected, and a further test reproduced the same result. Based on these findings, Comet 1 structural failures could be expected at anywhere from 1,000 to 9,000 cycles. Before the Elba accident, G-ALYP had made 1,290 pressurised flights, while G-ALYY had made 900 pressurised flights before crashing. Dr P. B. Walker, Head of the Structures Department at the RAE, said he was not surprised by this, noting that the difference between test and service life was about three to one, and that previous experience with metal fatigue suggested the total spread between experiment and failure in the field could be as wide as nine to one.
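The 1,000-to-9,000-cycle estimate follows from the scatter-factor reasoning Walker described. A minimal Python sketch of that arithmetic (illustrative only; the factor of three either side of the test result is the figure quoted above, not a general engineering rule):

```python
# Illustrative sketch of the fatigue "scatter factor" reasoning described above.
# The tank-test airframe (G-ALYU) failed after 3,057 pressurisation cycles;
# experience suggested in-service failures could fall anywhere within roughly
# a factor of three either side of a test result -- a total nine-to-one spread
# between experiment and failure in the field.

TEST_FAILURE_CYCLES = 1_221 + 1_836  # actual + simulated cycles = 3,057
SCATTER = 3                          # one-sided factor; total spread = 9:1

low = TEST_FAILURE_CYCLES // SCATTER   # earliest expected failure (~1,000)
high = TEST_FAILURE_CYCLES * SCATTER   # latest expected failure (~9,000)

print(f"Expected failure range: {low:,} to {high:,} cycles")
# G-ALYP had made 1,290 pressurised flights and G-ALYY about 900 --
# both at or near the lower end of this range.
```

This yields a range of 1,019 to 9,171 cycles, consistent with the roughly 1,000-to-9,000-cycle figure cited by the inquiry.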
The RAE also reconstructed about two-thirds of G-ALYP at Farnborough and found fatigue crack growth from a rivet hole at the low-drag fibreglass forward aperture around the Automatic Direction Finder (ADF), which had caused a catastrophic break-up of the aircraft in high-altitude flight. The exact origin of the fatigue failure could not be identified but was localised to the ADF antenna cut-out. A countersunk bolt hole and manufacturing damage, repaired at the time of construction using methods that were common practice but likely insufficient given the stresses involved, were both located along the failure crack. Once the crack initiated, the skin failed from the ADF cut-out and the failure propagated downward and rearward along a stringer, resulting in an explosive decompression.
It was also found that the punch-rivet construction technique employed in the Comet's design had exacerbated its structural fatigue problems; the aircraft's ADF antenna windows had been engineered to be glued and riveted, but had been punch-riveted only. Unlike drill riveting, the imperfect nature of the hole created by punch-riveting could cause fatigue cracks to start developing around the rivet. Principal investigator Hall accepted the RAE's conclusion of design and construction flaws as the likely explanation for G-ALYU's structural failure after 3,060 pressurisation cycles.
==== Earlier structural indications ====
The lightness of the Comet 1's construction (intended not to overtax the relatively low-thrust de Havilland Ghost engines) had been noted by de Havilland test pilot John Wilson while flying the prototype during a Farnborough flypast in 1949. On the flight he was accompanied by Chris Beaumont, Chief Test Pilot of the de Havilland Engine Company, who stood in the entrance to the cockpit behind the Flight Engineer. Wilson stated: "Every time we pulled 2 1/2-3G to go around the corner, Chris found that the floor on which he was standing, bulging up and there was a loud bang at that speed from the nose of the aircraft where the skin 'panted' (flexed), so when we heard this bang we knew without checking the airspeed indicator, that we were doing 340 knots. In later years we realised that these were the indications of how flimsy the structure really was."
==== Square window myths ====
Despite the findings of the Cohen Inquiry, a number of myths have evolved around the cause of the Comet 1's accidents, the most commonly quoted being the 'square' passenger windows. While the report noted that stress around fuselage cut-outs, emergency exits and windows was much higher than expected owing to de Havilland's assumptions and testing methods, the shape of the passenger windows has been widely misunderstood and cited as the cause of the fuselage failure. In fact, the mention of 'windows' in the Cohen report's conclusion refers specifically to the origin point of failure in the ADF antenna cut-out 'windows' located above the cockpit, not the passenger windows. The shape of the passenger windows was not implicated in any failure mode detailed in the accident report and was not viewed as a contributing factor. A number of other pressurised airliners of the period, including the Boeing 377 Stratocruiser, Douglas DC-7 and DC-8, had larger and more nearly square windows than the Comet 1, yet experienced no such failures. In general shape, the Comet 1's windows resemble a slightly larger Boeing 737 window mounted horizontally: they are rectangular rather than square, have rounded corners within 5% of the corner radius of the Boeing 737's windows, and are virtually identical to those of modern airliners. Paul Withey, Professor of Casting at the University of Birmingham School of Metallurgy, analysing all available data in a 2019 video presentation, states: "The fact that DeHavilland put oval windows into later marks, is not because of any 'squareness' of the windows that caused failure. DeHavilland went to oval windows on the subsequent Marks because it was easier to Redux them in (use adhesive) – nothing to do with the stress concentration and it's purely to remove rivets" (i.e. from the structure).
Surviving Comet 1s are on display at the Royal Air Force Museum Midlands and the De Havilland Museum at Salisbury Hall, London Colney.
==== Response ====
In responding to the report de Havilland stated: "Now that the danger of high level fatigue in pressure cabins has been generally appreciated, de Havillands will take adequate measures to deal with this problem. To this end we propose to use thicker gauge materials in the pressure cabin area and to strengthen and redesign windows and cut outs and so lower the general stress to a level at which local stress concentrations either at rivets and bolt holes or as such may occur by reason of cracks caused accidentally during manufacture or subsequently, will not constitute a danger."
The Cohen inquiry closed on 24 November 1954, having "found that the basic design of the Comet was sound", and made no observations or recommendations regarding the shape of the windows. De Havilland nonetheless began a refit programme to strengthen the fuselage and wing structure, employing thicker-gauge skin and replacing the rectangular windows and panels with rounded versions, although this was not related to the erroneous 'square' window claim, as can be seen by the fact that the fuselage escape hatch cut-outs (the source of the failure in test aircraft G-ALYU) retained their rectangular shape.
Following the Comet inquiry, aircraft were designed to "fail-safe" or "safe-life" standards, though several subsequent catastrophic fatigue failures, such as that of Aloha Airlines Flight 243 on 28 April 1988, have occurred.
=== Resumption of service ===
With the discovery of the structural problems of the early series, all remaining Comets were withdrawn from service, while de Havilland launched a major effort to build a new version that would be both larger and stronger. All outstanding orders for the Comet 2 were cancelled by airline customers. All production Comet 2s were also modified with thicker gauge skin to better distribute loads and alleviate the fatigue problems (most of these served with the RAF as the Comet C2); a programme to produce a Comet 2 with more powerful Avons was delayed. The prototype Comet 3 first flew in July 1954 and was tested in an unpressurised state pending completion of the Cohen inquiry. Comet commercial flights would not resume until 1958.
Development flying and route proving with the Comet 3 allowed accelerated certification of what was destined to be the most successful variant of the type, the Comet 4. All airline customers for the Comet 3 subsequently cancelled their orders and switched to the Comet 4, which was based on the Comet 3 but with improved fuel capacity. BOAC ordered 19 Comet 4s in March 1955, and American operator Capital Airlines ordered 14 Comets in July 1956. Capital's order included 10 Comet 4As, a variant modified for short-range operations with a stretched fuselage and short wings, lacking the pinion (outboard wing) fuel tanks of the Comet 4. Financial problems and a takeover by United Airlines meant that Capital would never operate the Comet.
The Comet 4 first flew on 27 April 1958 and received its Certificate of Airworthiness on 24 September 1958; the first was delivered to BOAC the next day. The base price of a new Comet 4 was roughly £1.14 million (£29.95 million in 2023). The Comet 4 enabled BOAC to inaugurate the first regular jet-powered transatlantic services on 4 October 1958 between London and New York (albeit still requiring a fuel stop at Gander International Airport, Newfoundland, on westward North Atlantic crossings). While BOAC gained publicity as the first to provide transatlantic jet service, by the end of the month rival Pan American World Airways was flying the Boeing 707 on the New York-Paris route, with a fuel stop at Gander in both directions, and in 1960 began flying Douglas DC-8s on its transatlantic routes as well. The American jets were larger, faster, longer-ranged and more cost-effective than the Comet. After analysing route structures for the Comet, BOAC had reluctantly cast about for a successor, entering into an agreement with Boeing in 1956 to purchase the 707.
The Comet 4 was ordered by two other airlines: Aerolíneas Argentinas took delivery of six Comet 4s from 1959 to 1960, using them between Buenos Aires and Santiago, New York and Europe, and East African Airways received three new Comet 4s from 1960 to 1962 and operated them to the United Kingdom and to Kenya, Tanzania, and Uganda. The Comet 4A ordered by Capital Airlines was instead built for BEA as the Comet 4B, with a further fuselage stretch of 38 in (970 mm) and seating for 99 passengers. The first Comet 4B flew on 27 June 1959 and BEA began Tel Aviv to London-Heathrow services on 1 April 1960. Olympic Airways was the only other customer to order the type. The last Comet 4 variant, the Comet 4C, first flew on 31 October 1959 and entered service with Mexicana in 1960. The Comet 4C had the Comet 4B's longer fuselage and the longer wings and extra fuel tanks of the original Comet 4, which gave it a longer range than the 4B. Ordered by Kuwait Airways, Middle East Airlines, Misrair (later Egyptair), and Sudan Airways, it was the most popular Comet variant.
=== Later service ===
In 1959, BOAC began shifting its Comets from transatlantic routes and released the Comet to associate companies, making the Comet 4's ascendancy as a premier airliner brief. Besides the 707 and DC-8, the introduction of the Vickers VC10 allowed competing aircraft to assume the high-speed, long-range passenger service role pioneered by the Comet. In 1960, as part of a government-backed consolidation of the British aerospace industry, de Havilland itself was acquired by Hawker Siddeley, within which it became a wholly owned division.
In the 1960s, orders declined, with a total of 76 Comet 4s delivered from 1958 to 1964. In November 1965, BOAC retired its Comet 4s from revenue service; other operators continued commercial passenger flights with the Comet until 1981. Dan-Air played a significant role in the fleet's later history and at one time owned all 49 remaining airworthy civil Comets. On 14 March 1997, Comet 4C XS235, named Canopus, which had been acquired by the British Ministry of Technology and used for radio, radar and avionics trials, made the last documented flight of a production Comet.
== Legacy ==
The Comet is widely regarded as both an adventurous step forward and a supreme tragedy; the aircraft's legacy includes advances in aircraft design and in accident investigations. The inquiries into the accidents that plagued the Comet 1 were perhaps some of the most extensive and revolutionary that have ever taken place, establishing precedents in accident investigation; many of the deep-sea salvage and aircraft reconstruction techniques employed have remained in use within the aviation industry. In spite of the Comet being subjected to what was then the most rigorous testing of any contemporary airliner, pressurisation and the dynamic stresses involved were not thoroughly understood at the time of the aircraft's development, nor was the concept of metal fatigue. Though these lessons could be implemented on the drawing board for future aircraft, corrections could only be retroactively applied to the Comet.
According to de Havilland's chief test pilot John Cunningham, who had flown the prototype's first flight, representatives from American manufacturers such as Boeing and Douglas privately disclosed that if de Havilland had not experienced the Comet's pressurisation problems first, it would have happened to them. Cunningham likened the Comet to the later Concorde and added that he had assumed that the aircraft would change aviation, which it subsequently did. Aviation author Bill Withuhn concluded that the Comet had pushed "'the state-of-the-art' beyond its limits."
Aeronautical-engineering firms were quick to respond to the Comet's commercial advantages and technical flaws alike; other aircraft manufacturers learned from, and profited by, the hard-earned lessons embodied by de Havilland's Comet. The Comet's buried engines were used on some other early jet airliners, such as the Tupolev Tu-104, but later aircraft, such as the Boeing 707 and Douglas DC-8, differed by employing podded engines held on pylons beneath the wings. Boeing stated that podded engines were selected for their passenger airliners because buried engines carried a higher risk of catastrophic wing failure in the event of engine fire. In response to the Comet tragedies, manufacturers also developed ways of pressurisation testing, often going so far as to explore rapid depressurisation; subsequent fuselage skins were of a greater thickness than the skin of the Comet.
== Variants ==
=== Comet 1 ===
The Comet 1 was the first model produced, with a total of 12 aircraft built for service and testing. Closely following the design of the two prototypes, its only noticeable change was the adoption of four-wheel bogie main undercarriage units, replacing the single main wheels. Four Ghost 50 Mk 1 engines were fitted (later replaced by more powerful Ghost DGT3 series engines). The span was 115 ft (35 m) and overall length 93 ft (28 m); the maximum takeoff weight was over 105,000 lb (48,000 kg), and over 40 passengers could be carried.
An updated Comet 1A was offered with higher-allowed weight, greater fuel capacity, and water-methanol injection; 10 were produced. In the wake of the 1954 disasters, all Comet 1s and 1As were brought back to Hatfield, placed in a protective cocoon and retained for testing. All were substantially damaged in stress testing or were scrapped entirely.
Comet 1X: Two RCAF Comet 1As were rebuilt with heavier-gauge skins to a Comet 2 standard for the fuselage, and renamed Comet 1X.
Comet 1XB: Four Comet 1As were upgraded to a 1XB standard with a reinforced fuselage structure and oval windows. Both 1X series were limited in number of pressurisation cycles.
The DH 111 Comet Bomber, a nuclear bomb-carrying variant developed to Air Ministry specification B35/46, was submitted to the Air Ministry on 27 May 1948. It had been originally proposed in 1948 as the "PR Comet", a high-altitude photo reconnaissance adaptation of the Comet 1. The Ghost DGT3-powered airframe featured a narrowed fuselage, a bulbous nose with H2S Mk IX radar, and a four-crewmember pressurised cockpit under a large bubble canopy. Fuel tanks carrying 2,400 imperial gallons (11,000 L) were added to attain a range of 3,350 miles (5,390 km). The proposed DH 111 received a negative evaluation from the Royal Aircraft Establishment over serious concerns regarding weapons storage; this, along with the redundant capability offered by the RAF's proposed V bomber trio, led de Havilland to abandon the project on 22 October 1948.
=== Comet 2 ===
The Comet 2 had a slightly larger wing, higher fuel capacity and more-powerful Rolls-Royce Avon engines, all of which improved the aircraft's range and performance; its fuselage was 3 ft 1 in (0.94 m) longer than the Comet 1's. Design changes had been made to make the aircraft more suitable for transatlantic operations. Following the Comet 1 disasters, these models were rebuilt with heavier-gauge skin and rounded windows, with the Avon engines given larger air intakes and outward-curving jet tailpipes. A total of 12 of the 44-seat Comet 2s were ordered by BOAC for the South Atlantic route. The first production aircraft (G-AMXA) flew on 27 August 1953. Although these aircraft performed well on South Atlantic test flights, their range was still insufficient for the North Atlantic. All but four Comet 2s were allocated to the RAF, with deliveries beginning in 1955. Modifications to the interiors allowed the Comet 2s to be used in several roles. For VIP transport, the seating and accommodations were altered and provisions for carrying medical equipment, including iron lungs, were incorporated. Specialised signals intelligence and electronic surveillance capability was later added to some airframes.
Comet 2X: Limited to a single Comet Mk 1 powered by four Rolls-Royce Avon 502 turbojet engines and used as a development aircraft for the Comet 2.
Comet 2E: Two Comet 2 airliners were fitted with Avon 504s in the inner nacelles and Avon 524s in the outer ones. These aircraft were used by BOAC for proving flights during 1957–1958.
Comet T2: The first two of 10 Comet 2s for the RAF were fitted out as crew trainers, the first aircraft (XK669) flying initially on 9 December 1955.
Comet C2: Eight Comet 2s originally destined for the civil market were completed for the RAF and assigned to No. 216 Squadron.
Comet 2R: Three Comet 2s were modified for use in radar and electronic systems development, initially assigned to No. 90 Group (later Signals Command) for the RAF. In service with No. 192 and No. 51 Squadrons, the 2R series was equipped to monitor Warsaw Pact signal traffic and operated in this role from 1958.
=== Comet 3 ===
The Comet 3, which flew for the first time on 19 July 1954, was a Comet 2 lengthened by 15 ft 5 in (4.70 m) and powered by Avon M502 engines developing 10,000 lbf (44 kN). The variant added wing pinion tanks and offered greater capacity and range. The Comet 3 remained a development series, since it did not incorporate the fuselage-strengthening modifications of the later series aircraft and could not be fully pressurised. Only two Comet 3s were built: G-ANLO, the sole airworthy example, was demonstrated at the Farnborough SBAC Show in September 1954, while the other airframe was not completed to production standard and was used primarily for ground-based structural and technology testing during development of the similarly sized Comet 4. Another nine Comet 3 airframes were left incomplete, their construction abandoned at Hatfield.
In BOAC colours, G-ANLO was flown by John Cunningham on a marathon round-the-world promotional tour in December 1955. As a flying testbed, it was later fitted with Avon RA29 engines and had its original long-span wings replaced with reduced-span wings as the Comet 3B, in which form it was demonstrated in British European Airways (BEA) livery at the Farnborough Airshow in September 1958. Assigned in 1961 to the Blind Landing Experimental Unit (BLEU) at RAE Bedford, G-ANLO played its final testbed role in automatic landing system experiments. When retired in 1973, the airframe was used for foam-arrester trials before the fuselage was salvaged at BAE Woodford to serve as the mock-up for the Nimrod.
=== Comet 4 ===
The Comet 4 was a further improvement on the stretched Comet 3 with even greater fuel capacity. The design had progressed significantly from the original Comet 1, growing by 18 ft 6 in (5.64 m) and typically seating 74 to 81 passengers compared to the Comet 1's 36 to 44 (119 passengers could be accommodated in a special charter seating package in the later 4C series). The Comet 4 was considered the definitive series, having a longer range, higher cruising speed and higher maximum takeoff weight. These improvements were possible largely because of Avon engines, with twice the thrust of the Comet 1's Ghosts. Deliveries to BOAC began on 30 September 1958 with two 48-seat aircraft, which were used to initiate the first scheduled transatlantic services.
Comet 4B: Originally developed for Capital Airlines as the 4A, the 4B featured greater capacity through a 6 ft 7 in (2 m) longer fuselage and a shorter wingspan; 18 were produced.
Comet 4C: This variant featured the Comet 4's wings and the 4B's longer fuselage; 28 were produced.
The last two Comet 4C fuselages were used to build prototypes of the Hawker Siddeley Nimrod maritime patrol aircraft. A Comet 4C (SA-R-7) was ordered by Saudi Arabian Airlines and eventually passed to the Saudi Royal Flight for the exclusive use of King Saud bin Abdul Aziz. Extensively modified at the factory, the aircraft included a VIP front cabin, a bed and special toilets with gold fittings, and was distinguished by a green, gold and white colour scheme with polished wings and lower fuselage commissioned from aviation artist John Stroud. Following its first flight, this special-order Comet 4C was described as "the world's first executive jet."
=== Comet 5 proposal ===
The Comet 5 was proposed as an improvement over previous models, including a wider fuselage with five-abreast seating, a wing with greater sweep and podded Rolls-Royce Conway engines. Without support from the Ministry of Transport, the proposal languished as a hypothetical aircraft and was never realised.
=== Hawker Siddeley Nimrod ===
The last two Comet 4C aircraft produced were modified as prototypes (XV148 and XV147) to meet a British requirement for a maritime patrol aircraft for the Royal Air Force; initially named "Maritime Comet", the design was designated Type HS 801. This variant became the Hawker Siddeley Nimrod and production aircraft were built at the Hawker Siddeley factory at Woodford Aerodrome. Entering service in 1969, five Nimrod variants were produced. The final Nimrod aircraft were retired in June 2011.
== Operators ==
The original operators of the early Comet 1 and the Comet 1A were BOAC, Union Aéromaritime de Transport and Air France. All early Comets were withdrawn from service for accident inquiries, during which orders from British Commonwealth Pacific Airlines, Japan Air Lines, Linea Aeropostal Venezolana, National Airlines, Pan American World Airways and Panair do Brasil were cancelled. When the redesigned Comet 4 entered service, it was flown by customers BOAC, Aerolíneas Argentinas, and East African Airways, while the Comet 4B variant was operated by customers BEA and Olympic Airways and the Comet 4C model was flown by customers Kuwait Airways, Mexicana, Middle East Airlines, Misrair Airlines and Sudan Airways.
Other operators used the Comet either through leasing arrangements or through second-hand acquisitions. BOAC's Comet 4s were leased out to Air Ceylon, Air India, AREA Ecuador, Central African Airways and Qantas; after 1965 they were sold to AREA Ecuador, Dan-Air, Mexicana, Malaysian Airways, and the Ministry of Defence. BEA's Comet 4Bs were chartered by Cyprus Airways, Malta Airways and Transportes Aéreos Portugueses. Channel Airways obtained five Comet 4Bs from BEA in 1970 for inclusive tour charters. Dan-Air bought all of the surviving flyable Comet 4s from the late 1960s into the 1970s; some were for spares reclamation, but most were operated on the carrier's inclusive-tour charters; a total of 48 Comets of all marks were acquired by the airline.
In military service, the United Kingdom's Royal Air Force was the largest operator, with 51 Squadron (1958–1975; Comet C2, 2R), 192 Squadron (1957–1958; Comet C2, 2R), 216 Squadron (1956–1975; Comet C2 and C4), and the Royal Aircraft Establishment using the aircraft. The Royal Canadian Air Force also operated Comet 1As (later retrofitted to 1XB) through its 412 Squadron from 1953 to 1963.
== Accidents and incidents ==
The Comet was involved in 25 hull-loss accidents, including 13 fatal crashes which resulted in 492 fatalities. Pilot error was blamed for the type's first fatal accident, which occurred during takeoff at Karachi, Pakistan, on 3 March 1953 and involved a Canadian Pacific Airlines Comet 1A. Three fatal Comet 1 crashes were due to structural problems: British Overseas Airways Corporation Flight 783 on 2 May 1953, British Overseas Airways Corporation Flight 781 on 10 January 1954, and South African Airways Flight 201 on 8 April 1954. These accidents led to the grounding of the entire Comet fleet. After design modifications were implemented, Comet services resumed on 4 October 1958 with the Comet 4.
Pilot error resulting in controlled flight into terrain was blamed for five fatal Comet 4 accidents: an Aerolíneas Argentinas crash near Asunción, Paraguay, on 27 August 1959, Aerolíneas Argentinas Flight 322 at Campinas near São Paulo, Brazil, on 23 November 1961, United Arab Airlines Flight 869 in Thailand's Khao Yai mountains on 19 July 1962, a Saudi Arabian Government crash in the Italian Alps on 20 March 1963, and United Arab Airlines Flight 844 in Tripoli, Libya, on 2 January 1971. The Dan-Air de Havilland Comet crash in Spain's Montseny range on 3 July 1970 was attributed to navigational errors by air traffic control and pilots. Other fatal Comet 4 accidents included a British European Airways crash in Ankara, Turkey, following instrument failure on 21 December 1961, a United Arab Airlines Flight 869 crash during inclement weather near Bombay, India, on 28 July 1963, and the terrorist bombing of Cyprus Airways Flight 284 off the Turkish coast on 12 October 1967.
Nine Comets, including Comet 1s operated by BOAC and Union Aeromaritime de Transport and Comet 4s flown by Aerolíneas Argentinas, Dan-Air, Malaysian Airlines and United Arab Airlines, were irreparably damaged during takeoff or landing accidents that were survived by all on board. A hangar fire damaged a No. 192 Squadron RAF Comet 2R beyond repair on 13 September 1957, and three Middle East Airlines Comet 4Cs were destroyed by Israeli troops at Beirut, Lebanon, on 28 December 1968.
== Aircraft on display ==
Since retirement, three early-generation Comet airframes have survived in museum collections. The only complete remaining Comet 1, a Comet 1XB with the registration G-APAS, the last Comet 1 built, is displayed at the Royal Air Force Museum Midlands. Though painted in BOAC colours, it never flew for the airline, having been first delivered to Air France and then to the Ministry of Supply after conversion to 1XB standard; this aircraft also served with the RAF as XM823.
The sole surviving Comet fuselage with the original square-shaped windows, part of a Comet 1A registered F-BGNX, has undergone restoration and is on display at the de Havilland Aircraft Museum near St Albans in Hertfordshire, England. A Comet C2 Sagittarius with serial XK699, later maintenance serial 7971M, was on display at the gate of RAF Lyneham in Wiltshire, England from 1987. In 2012, with the planned closure of RAF Lyneham, the aircraft was slated to be dismantled and shipped to the RAF Museum Cosford where it was to be re-assembled for display. The move was cancelled due to the level of corrosion and the majority of the airframe was scrapped in 2013, the cockpit section going to the Boscombe Down Aviation Collection at Old Sarum Airfield.
Six complete Comet 4s are housed in museum collections. The Imperial War Museum Duxford has a Comet 4 (G-APDB), originally in Dan-Air colours as part of its Flight Line Display, and later in BOAC livery at its AirSpace building. A Comet 4B (G-APYD) is stored in a facility at the Science Museum at Wroughton in Wiltshire, England. Comet 4Cs are exhibited at the Flugausstellung Peter Junior at Hermeskeil, Germany (G-BDIW), the Museum of Flight Restoration Center near Everett, Washington (N888WA), and the National Museum of Flight near Edinburgh, Scotland (G-BDIX).
The last Comet to fly, Comet 4C Canopus (XS235), is kept in running condition at Bruntingthorpe Aerodrome, where fast taxi-runs are regularly conducted. Since the 2000s, several parties have proposed restoring Canopus, which is maintained by a staff of volunteers, to airworthy condition. The Bruntingthorpe Aerodrome also displays a related Hawker Siddeley Nimrod MR2 aircraft.
== Specifications ==
== In popular culture ==
== See also ==
Arnold Alexander Hall
Seymour Collection, an aerophilately collection relating to the Comet, held in the British Library.
Related development
Hawker Siddeley Nimrod
Hawker Siddeley Nimrod R1
British Aerospace Nimrod AEW3
BAE Systems Nimrod MRA4
Aircraft of comparable role, configuration, and era
Avro Canada C102 Jetliner
Boeing 707
Convair 880
Douglas DC-8
Tupolev Tu-104
Sud Aviation Caravelle
Related lists
List of civil aircraft
List of jet airliners
== References ==
Notes
Citations
Bibliography
== External links ==
De Havilland DH106 Comet 1 & 2 from BAE Systems site
"The Comet Emerges"—A 1949 Flight article on the unveiling of the Comet prototype
"Comet in the Sky"—A 1949 Flight article on the Comet's maiden flight
Film of BOAC De Havilland Comet 3 G-ANLO at Vancouver International Airport in December 1955
The de Havilland Comet in RCAF Service
"Comet Construction Methods"—A 1951 Flight article
"The Tale of the Comet"—A 1952 Flight article on the design and development of the Comet
"Conversion to Comets"—A 1953 Flight article on the Comet's handling
"Comet Engineering"—A 1953 Flight article by Bill Gunston
"The Comet Accidents: History of Events"—A 1954 Flight article of Sir Lionel Heald's summary of the enquiry
"Report of the Comet Inquiry"—A 1955 Flight article on the publishing of the enquiry into the Comet design
Project Comet—Documentary produced by Full Focus
"The Comet Is Twenty"—A 1969 Flight article
"Jet Transport's Next 40 Years"—A 1989 Flight article on the Comet's influence

Delta Airlines

Delta Air Lines, Inc. is a major airline in the United States headquartered in Atlanta, Georgia, operating nine hubs, with Hartsfield–Jackson Atlanta International Airport being its largest in terms of total passengers and number of departures. With its regional subsidiaries and contractors operating under the brand name Delta Connection, Delta operates over 5,400 flights daily and serves 325 destinations in 52 countries on six continents. Delta is a founding member of the SkyTeam airline alliance, which helps to extend its global network. It is the oldest operating U.S. airline and the seventh-oldest operating worldwide.
Delta ranks first in revenue and brand value among the world's largest airlines, and second by number of passengers carried, passenger miles flown, and fleet size. Listed 70th on the Fortune 500 list, Delta has topped The Wall Street Journal's annual rankings of airlines in 2022, 2023, and 2024 and earned first place in the 2024 Readers’ Choice Awards for Best Airlines in the U.S. by Condé Nast Traveler.
== History ==
=== Early history ===
The history of Delta Air Lines began with the world's first aerial crop dusting operation called Huff Daland Dusters, Inc. The company was founded on March 2, 1925, in Macon, Georgia, before moving to Monroe, Louisiana, in the summer of 1925. It flew a Huff-Daland Duster, the first true crop duster, designed to combat the boll weevil infestation of cotton crops. C.E. Woolman, general manager and later Delta's first CEO, led a group of local investors to acquire the company's assets. Delta Air Service was incorporated on December 3, 1928, and was named after the Mississippi Delta region.
Passenger operations began on June 17, 1929, from Dallas, Texas, to Jackson, Mississippi, with stops at Shreveport and Monroe, Louisiana. By June 1930, service had extended east to Atlanta and west to Fort Worth, Texas. Passenger service ceased in October 1930 when the airmail contract for the route Delta had pioneered was awarded to another airline, which purchased the assets of Delta Air Service. Local banker Travis Oliver, acting as a trustee, C.E. Woolman, and other local investors purchased back the crop-dusting assets of Delta Air Service and incorporated as Delta Air Corporation on December 31, 1930.
Delta Air Corporation secured an airmail contract in 1934, and began doing business as Delta Air Lines over Mail Route 24, stretching from Fort Worth, Texas, to Charleston, South Carolina. Delta moved its headquarters from Monroe, Louisiana, to its current location in Atlanta in 1941. The company name officially became Delta Air Lines in 1945. In 1946, the company commenced regularly scheduled freight transport. In 1949, the company launched the first discounted fares between Chicago and Miami. In 1953, the company launched its first international routes after the acquisition of Chicago and Southern Air Lines. In 1959, it was the first airline to fly the Douglas DC-8. In 1960, it was the first airline to fly Convair 880 jets. In 1964, it launched the Deltamatic reservation systems using computers in the IBM 7070 series. In 1965, Delta was the first airline to fly the McDonnell Douglas DC-9.
=== Growth and acquisitions ===
By 1970, Delta had an all-jet fleet, and in 1972 it acquired Northeast Airlines. Trans-Atlantic service began in 1978 with the first nonstop flights from Atlanta to London. In 1981, Delta launched a frequent-flyer program. In 1987, it acquired Western Airlines, and that same year Delta began trans-Pacific service (Atlanta to Portland, Oregon, to Tokyo). In 1990, Delta was the first airline in the United States to fly McDonnell Douglas MD-11 jets. In 1991, it acquired substantially all of Pan Am's trans-Atlantic routes and the Pan Am Shuttle, rebranded as the Delta Shuttle. This made Delta the leading airline across the Atlantic.
In 1997, Delta was the first airline to board more than 100 million passengers in a calendar year. Also that year, Delta began an expansion of its international routes into Latin America. In 2003, the company launched Song, a low-cost carrier.
=== Bankruptcy and restructuring (2005–2007) ===
On September 14, 2005, the company filed for bankruptcy, citing rising fuel costs. It emerged from bankruptcy in April 2007 after fending off a hostile takeover from US Airways and its shares were re-listed on the New York Stock Exchange.
=== Acquisition of Northwest Airlines (2008–2010) ===
The acquisition of Northwest Airlines was announced on April 14, 2008. It was approved and consummated on October 29, 2008. Northwest continued to operate as a wholly-owned subsidiary of Delta until December 31, 2009, when the Northwest Airlines operating certificate was merged into that of Delta. Delta completed integration with Northwest on January 31, 2010, when their computer reservations system and websites were combined, and the Northwest Airlines brand was officially retired.
== Network ==
=== Destinations ===
Delta and its worldwide alliance partners operate more than 15,000 flights per day. As of December 31, 2021, Delta's mainline aircraft fly to 242 destinations, serving 52 countries across six continents. The airline operates nine domestic hubs. In the summer of 2024, Delta operated 893 daily flights out of its Atlanta main hub.
=== Hubs ===
Delta currently has nine hubs:
Atlanta: The airline's largest hub serving the Southern and Eastern United States and as its main gateway to Latin America and the Caribbean. Home to Delta's corporate headquarters, as well as Delta TechOps, the airline's primary maintenance base.
Boston: Delta's secondary transatlantic hub. It offers service to destinations in Europe and North America.
Detroit: One of Delta's two Midwest hubs. It is the primary Asian gateway for the Eastern United States and it also provides service to many destinations in the Americas and Europe.
Los Angeles: Delta's secondary hub for the West Coast. It offers service to cities in Latin America, Asia, Australia, Europe, and major domestic cities and West Coast regional destinations.
Minneapolis/St. Paul: One of Delta's two Midwest hubs. It is the primary Canadian gateway for the airline and also serves many American metropolitan destinations, many regional destinations in the upper Midwest, and some select destinations in Europe and Asia.
New York–JFK: Delta's primary transatlantic hub. The hub also offers service on transcontinental "prestige routes" to Los Angeles and San Francisco.
New York–LaGuardia: Delta's second New York hub. Delta's service at LaGuardia covers numerous East Coast U.S. cities and several regional destinations in the U.S. and Canada.
Salt Lake City: Delta's hub for the Rocky Mountain region of the United States. Delta's service covers most major U.S. destinations and several regional destinations in the U.S., emphasizing the Rocky Mountains and select destinations in Canada and Mexico, and select cities in Europe, Hawaii and Asia.
Seattle/Tacoma: Delta's primary West Coast hub. The hub serves as an international gateway to Asia for the Western United States. Delta service also includes many major U.S. destinations as well as regional destinations in the Pacific Northwest.
=== Delta Connection ===
=== Alliance and codeshare agreements ===
Delta is a member of the SkyTeam alliance and has codeshare agreements with the following airlines:
== Fleet ==
== Cabin ==
Delta underwent a cabin branding upgrade in 2015. Availability and exact details vary by route and aircraft type.
Delta One
Delta One is the airline's premier business class product, available on long-haul international flights, as well as transcontinental service from New York–Kennedy to Los Angeles and San Francisco.
Delta One features lie-flat seating on all aircraft types and direct aisle access from every seat on all types except the Boeing 757-200 (in which only a special sub-fleet of approximately 20 aircraft feature lie-flats) and its 'type 35L' ex-LATAM A350s (which use a 2-2-2 layout). The Boeing 767-300ER seats, designed by James Thompson, feature a space-saving design whereby the seats are staggered such that when in the fully flat position, the foot of each bed extends under the armrests of the seat in front of it. On the Airbus A330 cabins, Delta One features the Cirrus flat-bed sleeper suite by Zodiac Seats U.S., configured in a reverse herringbone pattern.
All seats are also equipped with a personal, on-demand in-flight-entertainment (IFE) system, universal power-ports, a movable reading light, and a folding work table. Passengers also receive meals, alcoholic beverages, an amenity kit, bedding, and pre-flight Delta Sky Club access.
In August 2016, Delta announced the introduction of Delta One Suites on select widebody fleets. The suites feature a door to the aisle for enhanced privacy, as well as improved storage space, a larger IFE screen, and an updated design. The suites rolled out on the Airbus A350 fleet, first delivered in July 2017, followed by installation within the Boeing 777 fleet. Delta's Airbus A330-900, which began revenue service for the airline in July 2019, also features Delta One Suites. Also in July 2019, Delta began retrofitting a new seat on the 767-400ER, which featured increased privacy and a design similar to Delta One Suites, though without a privacy door. These seats lack a door due to the 767's smaller cabin width.
First Class
First Class is offered on mainline domestic flights (except those featuring Delta One service), select short- and medium-haul international flights, and Delta Connection aircraft. Seats range from 18.5 to 20.75 inches (47.0 to 52.7 cm) wide and have between 37 and 40 inches (94 and 102 cm) of pitch. Passengers in this class receive a wider variety of free snacks compared to Main Cabin, as well as free drinks and alcohol, and full meal service on flights 900 miles (1,400 km) and longer. Certain aircraft also feature power ports at each seat and free entertainment products from Delta Studio. First Class passengers are also eligible for priority boarding.
Premium Select
In April 2016, Delta CEO Ed Bastian announced that a new Premium Economy cabin would be added. Since renamed Premium Select, this cabin features extra legroom; adjustable leg rests; extra seat pitch, width, and recline; and a new premium service. Delta introduced it on its new Airbus A350, first delivered in fall 2017, followed by the now-retired Boeing 777. In October 2018, Delta announced that it would be selling first class seats on domestically configured Boeing 757 aircraft flying transatlantic routes as Premium Select. Delta's A330-900, delivered in 2019, also offers Premium Select. In 2021, Delta began retrofitting many of its 767-300ER and older A330 aircraft with Premium Select.
Delta Comfort+
Delta Comfort+ seats are installed on all aircraft and feature 34–36 inches (860–910 mm) of pitch; on all Delta One configured aircraft, 35–36 inches (890–910 mm) of pitch and 50 percent more recline over standard Main Cabin seats. Additional amenities include: priority boarding, dedicated overhead space, complimentary beer, wine, and spirits on flights 250 miles (400 km) or more, and complimentary premium snacks on flights 900 miles (1,400 km) or more. Complimentary premium entertainment is available via Delta Studio, with free headsets available on most flights. On transcontinental flights between JFK-LAX/SFO, Delta Comfort+ passengers also get Luvo snack wraps. Certain Medallion members can upgrade from Main Cabin to Comfort+ for free right after booking, while other customers can upgrade for a fee or with SkyMiles.
Main Cabin
Main Cabin (Economy Class) is available on all aircraft with seats ranging from 17 to 18.6 inches (43 to 47 cm) wide and 30 to 33 inches (76 to 84 cm) of pitch. The main cabin on some aircraft has an articulating seat bottom where the seat bottom moves forward in addition to the seat back tilting backwards when reclining.
Main Cabin passengers receive complimentary snacks and non-alcoholic drinks on all flights 250 miles (400 km) or longer. Alcoholic beverages are also available for purchase. Complimentary meals and alcoholic drinks are provided on long-haul international flights as well as selected transcontinental domestic flights, such as between New York–JFK and Seattle, San Francisco, Los Angeles, and San Diego. As part of Delta's Flight Fuel buy on board program, meals are available for purchase on other North American flights 900 miles (1,400 km) or longer.
Delta operated a different buy-on-board program between 2003 and 2005. The previous program had items from differing providers, depending on the origin and destination of the flight. Prices ranged up to $10 ($16.65 when adjusted for inflation). The airline started the service on a few selected flights in July 2003, and the meal service was initially offered on 400 flights. Delta ended this buy-on-board program in 2005; instead, Delta began offering snacks at no extra charge on flights over 90 minutes to most U.S. domestic flights and some flights to the Caribbean and Latin America. Beginning in mid-March 2005 the airline planned to stop providing pillows on flights within the 48 contiguous U.S. states, Bermuda, Canada, the Caribbean, and Central America. In addition, the airline increased the price of alcoholic beverages on Delta mainline flights from $4 ($6.44 when adjusted for inflation) to $5 ($8.05 when adjusted for inflation); the increase in alcohol prices did not occur on Song flights.
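The inflation adjustments quoted above each imply a fixed multiplier (adjusted price divided by original price). A minimal sketch of that arithmetic, with the factors back-computed from the article's own figures rather than taken from official CPI data:

```python
def inflation_adjust(price_usd: float, factor: float) -> float:
    """Scale a historical price by a cumulative inflation factor."""
    return round(price_usd * factor, 2)

# Factors implied by the article's figures (adjusted price / original price):
factor_2003 = 16.65 / 10.00  # buy-on-board meals, priced up to $10 in 2003
factor_2005 = 8.05 / 5.00    # alcoholic drinks, raised to $5 in 2005

# The article's other 2005 figure follows from the same factor:
assert inflation_adjust(4.00, factor_2005) == 6.44  # the old $4 drink price
assert inflation_adjust(10.00, factor_2003) == 16.65
```

Note the two factors differ (1.665 vs. 1.61) because the base years differ; prices from 2003 have accumulated two more years of inflation than prices from 2005.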
Basic Economy
Basic Economy is a basic version of Main Cabin, offering the same services with fewer flexibility options for a lower price. Examples of fewer flexibility options include no ticket changes, no paid or complimentary upgrades regardless of frequent-flier status, and only having a seat assigned at check-in. As of December 2021, Basic Economy travelers no longer earn award miles (used for redeeming free travel, for example) or medallion qualifying miles (which count towards elite status).
== Reward programs ==
=== SkyMiles ===
SkyMiles is the frequent flyer program for Delta Air Lines. Miles do not expire but accounts may be deactivated by Delta in certain cases, such as the death of a program member or fraudulent activity.
As part of its efforts to improve customer experience, Delta introduced several service upgrades in 2025. These included free Wi-Fi access for SkyMiles members on most domestic flights, expanded Delta Sky Club lounge facilities, and new premium dining options featuring branded offerings such as Shake Shack.
=== Delta Sky Club ===
Delta Sky Club is the branding name of Delta's airport lounges. Membership is available through an annual membership that can be purchased with either money or miles. International passengers travelling in Delta One class get free access. Membership can also be granted through top-level Delta status or by being an American Express cardholder with certain exceptions. As of January 2019, Delta no longer offered single-day passes.
Originally, Delta's membership-based airport clubs were called Crown Room lounges, with Northwest's called WorldClubs.
Exclusive Delta One Clubs for customers travelling in business class are slated to open at New York–Kennedy, Los Angeles, and Boston in 2024.
In February 2024, Delta announced a new, more exclusive or premium level of Sky Club lounge aimed at high-spending travellers. The first would be at New York's John F. Kennedy International Airport, followed by those in Boston's Logan International Airport and Los Angeles International Airport later in the year. In addition to wellness areas, the lounge would offer a full-service brasserie and a marketplace influenced or assisted by a chef that would feature an open kitchen. The move represented a shift away from a standard offering to something closer to a unique experience for each airport and the city in which the lounge was located.
=== SkyBonus ===
On November 27, 2001, Delta Air Lines launched SkyBonus, a program aimed at small-to-medium businesses spending between $5,000 and $500,000 annually on air travel. Businesses can earn points toward free travel and upgrades, as well as Sky Club memberships and SkyMiles Silver Medallion status. Points are earned on paid travel based on fare amount, booking code, and origin or destination. While enrolled businesses earn points toward free travel, the travelling passenger remains eligible to earn SkyMiles on the same trip.
In early 2010, Delta Air Lines merged its SkyBonus program with Northwest's similar Biz Perks program.
== Corporate affairs ==
=== Business trends ===
The key trends for Delta Air Lines are (as of the financial year ending December 31):
=== Personnel ===
Between its mainline operation and subsidiaries, and as of December 2024, Delta employs nearly 103,000 people.
Delta's 17,500 mainline pilots are represented by the Air Line Pilots Association, International and are the union's second-largest pilot group. The company's approximately 180 flight dispatchers are represented by the Professional Airline Flight Control Association (PAFCA). Aside from its pilots and flight dispatchers, Delta is the only one of the five largest airlines in the United States, and one of only two in the top nine (the other being JetBlue), whose non-pilot U.S. domestic staff is entirely non-union.
=== Delta Global Staffing ===
Delta Global Staffing (DGS) was a temporary employment firm located in Atlanta, Georgia. Delta Global Staffing was a wholly owned subsidiary of Delta Air Lines, Inc., and a division of the internal company DAL Global Services.
Delta Air Lines sold majority ownership of DAL Global Services to Argenbright Holdings on December 21, 2018. As part of the sale, Delta dissolved the staffing division of DGS.
It was founded in 1995 as a provider of temporary staffing for Delta, primarily in Atlanta. DGS later expanded to include customers and businesses outside the airline and aviation industries, supporting customers in major US metropolitan areas.
Delta Global Staffing provided contract workers for short- and long-term assignments, VMS partnering, VOP on-site management, temp-to-hire, direct placements, and payroll services. DGS served markets such as call centers, customer service and administrative placements, IT and professional recruiting, logistics, finance and accounting, hospitality, and the aviation/airline industry.
=== Headquarters and offices ===
Delta's corporate headquarters is located on a corporate campus on the northern boundary of Hartsfield–Jackson Atlanta International Airport, within the city limits of Atlanta. This location has served as Delta's headquarters since 1941, when the company relocated its corporate offices from Monroe, Louisiana, to Greater Atlanta. The crop dusting division of Delta remained headquartered in Monroe until Delta ceased crop dusting in 1966. Before 1981, the Delta corporate campus, an 80-acre (32 ha) plot of land near the old Hartsfield Airport terminal, was outside the City of Atlanta limits in unincorporated Fulton County. On August 3, 1981, the Atlanta City Council approved the annexation of 141 acres (57 ha) of land, an area containing the Delta headquarters, which would have required Delta to begin paying about $200,000 annually to the City of Atlanta in taxes. In September 1981, the airline sued the city, challenging the annexation on the basis of the constitutionality of the 1960 City of Atlanta annexation of the old Hartsfield terminal; the City of Atlanta was only permitted to annex areas adjacent to land already within the Atlanta city limits.
In addition to hosting Delta's corporate headquarters, Hartsfield–Jackson is also the home of Delta TechOps, the airline's primary maintenance, repair, and overhaul arm and the largest full-service airline MRO in North America, specializing in engines, components, airframe, and line maintenance.
Delta maintains a large presence in the Twin Cities, with over 12,000 employees in the region as well as significant corporate support functions housed in the Minneapolis area, including the company's information technology divisional offices.
=== Corporate identity ===
Delta's logo, often called the "widget", was originally unveiled in 1959. Its triangle shape is taken from the Greek letter delta, and recalls the airline's early history operating in the Mississippi Delta. It is also said to be reminiscent of the swept-wing design of the DC-8, Delta's first jet aircraft.
Delta's current livery is called "Upward & Onward". It features a white fuselage with the company's name in blue lettering and a widget on the vertical stabilizer. Delta introduced its current livery in 2007 as part of a re-branding after it emerged from bankruptcy. The new livery consists of four colors, while the old one (called "colors in motion") used eight; the switch saved the airline money by removing one day from each aircraft's painting cycle. The airline took four years to repaint all of its aircraft into the current scheme, including aircraft inherited from Northwest Airlines.
=== Environmental initiatives ===
In 2008, Delta Air Lines was given an award from the United States Environmental Protection Agency's Design for the Environment (DfE) program for its use of PreKote, a more environmentally friendly, non-hexavalent chromium surface pretreatment on its aircraft, replacing hazardous chemicals formerly used to improve paint adhesion and prevent corrosion. In addition, PreKote reduces water usage by two-thirds and reduces wastewater treatment.
PreKote also saves money by reducing the time needed to paint each airplane; with time savings of eight to ten percent, it was estimated to save more than $1 million annually.
Although Delta purchased 9.7 million metric tonnes of carbon offsets in 2022, by the end of March of that year it was moving away from such investments and focusing instead on reducing emissions from company operations. In May 2023, Delta Air Lines received a consumer class action lawsuit, filed in the U.S. District Court for the Central District of California, over marketing claims that the company is the world's first carbon-neutral airline.
== In popular culture ==
=== Deltalina ===
As part of the re-branding project, a safety video featuring a flight attendant was posted on YouTube in early 2008, attracting over 1 million views and the attention of news outlets, particularly for the video's playful tone paired with its serious safety message. The flight attendant, Katherine Lee, was dubbed "Deltalina" by a member of FlyerTalk for her resemblance to Angelina Jolie. Delta had considered several styles for its current safety video, including animation, before opting for a video presenting a flight attendant speaking to the audience. The video was filmed on a former Song Airlines Boeing 757-200.
== On-time performance ==
In 2023, Delta flights arrived at their destination on time 84.72% of the time, compared to the North American industry average of 74.45% per Cirium. Delta completed 98.82% of its scheduled flights.
== Award and recognition ==
On June 24, 2024, Delta Air Lines was voted 2024 Best Airline in North America and Best Airline Staff Service in North America by Skytrax.
== Accidents and incidents ==
The following are major accidents and incidents that occurred on Delta mainline aircraft. For Northwest Airlines incidents, see Northwest Airlines accidents and incidents. For Delta Connection incidents, see Delta Connection incidents and accidents.
All told, 299 passengers and crew died in Delta's 14 fatal accidents; another 11 people died aboard two other aircraft (in two collision accidents), and 16 people on the ground died (in four accidents).
== Controversies and passenger incidents ==
In July 2024, Delta canceled over 7,000 flights during a disruption following the 2024 CrowdStrike incident. The incident closely resembled the 2022 Southwest Airlines scheduling crisis, in which that airline canceled thousands of flights. On Tuesday July 23, 2024, United States Secretary of Transportation Pete Buttigieg announced that the Department of Transportation would launch an investigation into the events that prevented Delta Air Lines from swiftly recovering, as other airlines had. Over the course of the event, more than 500,000 passengers were inconvenienced, according to Delta CEO Ed Bastian, and more than 3,000 complaints had been lodged with the government, according to the Department of Transportation.
Delta has claimed to have lost $500 million due to the outages and associated costs. The airline has hired David Boies in preparation for litigation against Microsoft and CrowdStrike.
=== Safety and aircraft maintenance ===
In April 2025, two Delta Air Lines flights experienced incidents in which ceiling panels detached mid-flight, injuring at least one passenger. The events occurred on a Boeing 757 and a Boeing 717, prompting scrutiny of Delta’s maintenance practices and the condition of its older aircraft. Emergency personnel assessed the injured upon landing.
That same month, three Delta flights made emergency landings within five days due to cabin pressurization issues. The aircraft either diverted or returned to their departure airports, with crews following established emergency protocols. Although no serious injuries were reported, the incidents raised concerns about the airline’s operational oversight. Delta stated that it provided accommodations for affected passengers and reaffirmed its focus on safety.
== See also ==
Air transportation in the United States
Delta Flight Museum
Delta Global Staffing
Delta Ship 41
List of airlines of the United States
List of airports in the United States
Transportation in the United States
== Notes ==
== References ==
Notes
Bibliography
Green, William; Swanborough, Gordon; Mowinski, John (1987). Modern Commercial Aircraft. New York: Crown Publishers, Inc. ISBN 0-517-63369-8.
== External links ==
Official website
Delta Air Lines companies grouped at OpenCorporates
Business data for Delta Air Lines: |
Doi (identifier) | A digital object identifier (DOI) is a persistent identifier or handle used to uniquely identify various objects, standardized by the International Organization for Standardization (ISO). DOIs are an implementation of the Handle System; they also fit within the URI system (Uniform Resource Identifier). They are widely used to identify academic, professional, and government information, such as journal articles, research reports, data sets, and official publications.
A DOI aims to resolve to its target, the information object to which the DOI refers. This is achieved by binding the DOI to metadata about the object, such as a URL where the object is located. Thus, by being actionable and interoperable, a DOI differs from ISBNs or ISRCs which are identifiers only. The DOI system uses the indecs Content Model to represent metadata.
The DOI for a document remains fixed over the lifetime of the document, whereas its location and other metadata may change. Referring to an online document by its DOI should provide a more stable link than directly using its URL. But if its URL changes, the publisher must update the metadata for the DOI to maintain the link to the URL. It is the publisher's responsibility to update the DOI database. If they fail to do so, the DOI resolves to a dead link, leaving the DOI useless.
The developer and administrator of the DOI system is the International DOI Foundation (IDF), which introduced it in 2000. Organizations that meet the contractual obligations of the DOI system and are willing to pay to become a member of the system can assign DOIs. The DOI system is implemented through a federation of registration agencies coordinated by the IDF. The cumulative number of DOIs has increased exponentially over time, from 50 million registrations in 2011 to 391 million in 2025. The rate of registering organizations ("members") has also increased over time from 4,000 in 2011 to 9,500 in 2013, but the federated nature of the system means it is not immediately clear how many members there are in total today. Fake registries have even appeared.
== Nomenclature and syntax ==
A DOI is a type of Handle System handle, which takes the form of a character string divided into two parts, a prefix and a suffix, separated by a slash.
prefix/suffix
The prefix identifies the registrant of the identifier and the suffix is chosen by the registrant and identifies the specific object associated with that DOI. Most legal Unicode characters are allowed in these strings, which are interpreted in a case-insensitive manner. The prefix usually takes the form 10.NNNN, where NNNN is a number greater than or equal to 1000, whose limit depends only on the total number of registrants. The prefix may be further subdivided with periods, like 10.NNNN.N.
For example, in the DOI name 10.1000/182, the prefix is 10.1000 and the suffix is 182. The "10" part of the prefix distinguishes the handle as part of the DOI namespace, as opposed to some other Handle System namespace, and the characters 1000 in the prefix identify the registrant; in this case the registrant is the International DOI Foundation itself. 182 is the suffix, or item ID, identifying a single object (in this case, the latest version of the DOI Handbook).
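The prefix/suffix split described above can be sketched in a few lines of Python. This is an illustrative helper (`split_doi` is a hypothetical name, not part of any official DOI library), assuming only the rules stated here: the first slash separates prefix from suffix, the prefix begins with "10.", and DOI names are case-insensitive.

```python
def split_doi(doi: str) -> tuple[str, str]:
    """Split a DOI name into its (prefix, suffix) parts.

    The first slash separates prefix from suffix; the suffix itself
    may contain further slashes. DOI names are interpreted
    case-insensitively, so the name is normalized to lowercase.
    """
    doi = doi.lower()
    # Strip the optional "doi:" display prefix, if present.
    if doi.startswith("doi:"):
        doi = doi[4:]
    prefix, sep, suffix = doi.partition("/")
    if not sep or not prefix.startswith("10."):
        raise ValueError(f"not a valid DOI name: {doi!r}")
    return prefix, suffix

# The handbook's example: prefix 10.1000 (the IDF itself), suffix 182.
print(split_doi("doi:10.1000/182"))  # ('10.1000', '182')
```

Note that validating the registrant code itself (whether `10.1000` is actually registered) requires querying the Handle System; the sketch only checks the syntactic shape.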
DOI names can identify creative works (such as texts, images, audio or video items, and software) in both electronic and physical forms, performances, and abstract works such as licenses, parties to a transaction, etc.
The names can refer to objects at varying levels of detail: thus DOI names can identify a journal, an individual issue of a journal, an individual article in the journal, or a single table in that article. The choice of level of detail is left to the assigner, but in the DOI system it must be declared as part of the metadata that is associated with a DOI name, using a data dictionary based on the indecs Content Model.
=== Display ===
The official DOI Handbook explicitly states that DOIs should be displayed on screens and in print in the format doi:10.1000/182.
Contrary to the DOI Handbook, Crossref, a major DOI registration agency, recommends displaying a URL (for example, https://doi.org/10.1000/182) instead of the officially specified format. This URL is persistent (a contract ensures persistence in the doi.org domain), so it is a PURL—providing the location of a name resolver which will redirect HTTP requests to the correct online location of the linked item.
The Crossref recommendation is primarily based on the assumption that the DOI is being displayed without being hyperlinked to its appropriate URL. Without the hyperlink, it is not easy to copy and paste the full URL to actually bring up the page for the DOI; displaying the entire URL allows people viewing the page containing the DOI to copy and paste the URL, by hand, into a new window or tab in their browser in order to go to the appropriate page for the document the DOI represents.
== Content ==
Major content of the DOI system currently includes:
Scholarly materials (journal articles, books, ebooks, etc.) through Crossref, a consortium of around 3,000 publishers; Airiti, a leading provider of Chinese and Taiwanese electronic academic journals; and the Japan Link Center (JaLC) an organization providing link management and DOI assignment for electronic academic journals in Japanese.
Research datasets through DataCite, a consortium of leading research libraries, technical information providers, and scientific data centers;
European Union official publications through the EU publications office;
The Chinese National Knowledge Infrastructure project at Tsinghua University and the Institute of Scientific and Technical Information of China (ISTIC), two initiatives sponsored by the Chinese government.
Permanent global identifiers for both commercial and non-commercial audio/visual content titles, edits, and manifestations through the Entertainment ID Registry, commonly known as EIDR.
In the Organisation for Economic Co-operation and Development's publication service OECD iLibrary, each table or graph in an OECD publication is shown with a DOI name that leads to an Excel file of data underlying the tables and graphs. Further development of such services is planned.
Other registries include Crossref and the multilingual European DOI Registration Agency (mEDRA). Since 2015, RFCs can be referenced as doi:10.17487/rfc....
== Features and benefits ==
The IDF designed the DOI system to provide a form of persistent identification, in which each DOI name permanently and unambiguously identifies the object to which it is associated (although when the publisher of a journal changes, sometimes all the DOIs will be changed, with the old DOIs no longer working). It also associates metadata with objects, allowing it to provide users with relevant pieces of information about the objects and their relationships. Included as part of this metadata are network actions that allow DOI names to be resolved to web locations where the objects they describe can be found. To achieve its goals, the DOI system combines the Handle System and the indecs Content Model with a social infrastructure.
The Handle System ensures that the DOI name for an object is not based on any changeable attributes of the object such as its physical location or ownership, that the attributes of the object are encoded in its metadata rather than in its DOI name, and that no two objects are assigned the same DOI name. Because DOI names are short character strings, they are human-readable, may be copied and pasted as text, and fit into the URI specification. The DOI name-resolution mechanism acts behind the scenes, so that users communicate with it in the same way as with any other web service; it is built on open architectures, incorporates trust mechanisms, and is engineered to operate reliably and flexibly so that it can be adapted to changing demands and new applications of the DOI system. DOI name-resolution may be used with OpenURL to select the most appropriate among multiple locations for a given object, according to the location of the user making the request. However, despite this ability, the DOI system has drawn criticism from librarians for directing users to non-free copies of documents, that would have been available for no additional fee from alternative locations.
The indecs Content Model as used within the DOI system associates metadata with objects. A small kernel of common metadata is shared by all DOI names and can be optionally extended with other relevant data, which may be public or restricted. Registrants may update the metadata for their DOI names at any time, such as when publication information changes or when an object moves to a different URL.
The International DOI Foundation (IDF) oversees the integration of these technologies and operation of the system through a technical and social infrastructure. The social infrastructure of a federation of independent registration agencies offering DOI services was modelled on existing successful federated deployments of identifiers such as GS1 and ISBN.
== Comparison with other identifier schemes ==
A DOI name differs from commonly used Internet pointers to material, such as the Uniform Resource Locator (URL), in that it identifies an object itself as a first-class entity, rather than the specific place where the object is located at a certain time. It implements the Uniform Resource Identifier (Uniform Resource Name) concept and adds to it a data model and social infrastructure.
A DOI name also differs from standard identifier registries such as the ISBN, ISRC, etc. The purpose of an identifier registry is to manage a given collection of identifiers, whereas the primary purpose of the DOI system is to make a collection of identifiers actionable and interoperable, where that collection can include identifiers from many other controlled collections.
The DOI system offers persistent, semantically interoperable resolution to related current data and is best suited to material that will be used in services outside the direct control of the issuing assigner (e.g., public citation or managing content of value). It uses a managed registry (providing both social and technical infrastructure). It does not assume any specific business model for the provision of identifiers or services and enables other existing services to link to it in defined ways. Several approaches for making identifiers persistent have been proposed.
The comparison of persistent identifier approaches is difficult because they are not all doing the same thing. Imprecisely referring to a set of schemes as "identifiers" does not mean that they can be compared easily. Other "identifier systems" may be enabling technologies with low barriers to entry, providing an easy to use labeling mechanism that allows anyone to set up a new instance (examples include Persistent Uniform Resource Locator (PURL), URLs, Globally Unique Identifiers (GUIDs), etc.), but may lack some of the functionality of a registry-controlled scheme and will usually lack accompanying metadata in a controlled scheme.
The DOI system does not have this approach and should not be compared directly to such identifier schemes. Various applications using such enabling technologies with added features have been devised that meet some of the features offered by the DOI system for specific sectors (e.g., ARK).
A DOI name does not depend on the object's location and, in this way, is similar to a Uniform Resource Name (URN) or PURL but differs from an ordinary URL. URLs are often used as substitute identifiers for documents on the Internet although the same document at two different locations has two URLs. By contrast, persistent identifiers such as DOI names identify objects as first class entities: two instances of the same object would have the same DOI name.
== Resolution ==
To resolve a DOI name, it may be input to a DOI resolver, such as one at the official website https://doi.org/.
DOI name resolution is provided through the Handle System, which is an infrastructure developed and operated by CNRI (Corporation for National Research Initiatives), and is freely available to any user encountering a DOI name. Resolution redirects the user from a DOI name to one or more pieces of typed data: URLs representing instances of the object, services such as e-mail, or one or more items of metadata. To the Handle System, a DOI name is a handle, and so has a set of values assigned to it and may be thought of as a record that consists of a group of fields. Each handle value must have a data type specified in its <type> field, which defines the syntax and semantics of its data. While a DOI persistently and uniquely identifies the object to which it is assigned, DOI resolution may not be persistent, due to technical and administrative issues.
Another approach, which avoids typing or copying and pasting into a resolver is to include the DOI in a document as a URL which uses the resolver as an HTTP proxy, such as https://doi.org/ (preferred) or http://dx.doi.org/, both of which support HTTPS. For example, the DOI 10.1000/182 can be included in a reference or hyperlink as https://doi.org/10.1000/182. This approach allows users to click on the DOI as a normal hyperlink. Indeed, as previously mentioned, this is how Crossref recommends that DOIs always be represented (preferring HTTPS over HTTP), so that if they are cut-and-pasted into other documents, emails, etc., they will be actionable.
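The proxy form described above can be generated mechanically. The sketch below (the function name `doi_to_url` is illustrative, not a standard API) uses Python's standard library to prepend the https://doi.org/ proxy and percent-encode characters that are unsafe in URLs, while leaving slashes and parentheses, which legitimately occur in suffixes, intact.

```python
from urllib.parse import quote

def doi_to_url(doi: str, proxy: str = "https://doi.org/") -> str:
    """Build an actionable resolver URL for a DOI name.

    Prepends an HTTP(S) proxy (https://doi.org/ by default, as
    Crossref recommends) and percent-encodes unsafe characters.
    '/' and parentheses are kept literal, since DOI suffixes may
    legitimately contain them.
    """
    # Strip the optional "doi:" display prefix, if present.
    if doi.lower().startswith("doi:"):
        doi = doi[4:]
    return proxy + quote(doi, safe="/()")

print(doi_to_url("doi:10.1000/182"))
# https://doi.org/10.1000/182
print(doi_to_url("10.1016/S0021-9258(19)52451-6"))
# https://doi.org/10.1016/S0021-9258(19)52451-6
```

Passing `proxy="https://hdl.handle.net/"` produces the interoperable Handle System form of the same link.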
A consequence of the fact that DOIs depend entirely on CNRI's Handle System infrastructure (CNRI operates the global root servers and wrote the protocol) is that the proxy services doi.org/<#> and hdl.handle.net/<#> are interoperable. For example, the following URIs resolve to the same publication:
https://doi.org/10.1016/S0021-9258(19)52451-6
https://hdl.handle.net/10.1016/S0021-9258(19)52451-6
There are other DOI resolvers and HTTP proxies apart from CNRI's Handle System. At the beginning of 2016, a new class of alternative DOI resolvers was started by http://doai.io/ (now discontinued). This service was unusual in that it tried to find a non-paywalled (often author-archived) version of a title and redirected the user to that instead of the publisher's version. Since then, other open-access-favoring DOI resolvers have been created, notably https://oadoi.org/ in October 2016 (rebranded in 2017 as https://unpaywall.org/). While traditional DOI resolvers rely solely on the Handle System, alternative DOI resolvers first consult multiple open-access resources, such as institutional libraries using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), or indexing services based on OAI-PMH, such as BASE (Bielefeld Academic Search Engine).
An alternative to HTTP proxies is to use one of a number of add-ons and plug-ins for browsers, thereby avoiding the conversion of the DOIs to URLs, which depend on domain names and may be subject to change, while still allowing the DOI to be treated as a normal hyperlink. A disadvantage of this approach for publishers is that, at least at present, most users will be encountering the DOIs in a browser, mail reader, or other software which does not have one of these plug-ins installed.
== IDF organizational structure ==
The International DOI Foundation (IDF), a non-profit organization created in 1997, is the governance body of the DOI system. It safeguards all intellectual property rights relating to the DOI system, manages common operational features, and supports the development and promotion of the DOI system. The IDF ensures that any improvements made to the DOI system (including creation, maintenance, registration, resolution and policymaking of DOI names) are available to any DOI registrant. It also prevents third parties from imposing additional licensing requirements beyond those of the IDF on users of the DOI system.
The IDF is controlled by a Board elected by the members of the Foundation, with an appointed Managing Agent who is responsible for co-ordinating and planning its activities. Membership is open to all organizations with an interest in electronic publishing and related enabling technologies. The IDF holds annual open meetings on the topics of DOI and related issues.
Registration agencies, appointed by the IDF, provide services to DOI registrants: they allocate DOI prefixes, register DOI names, and provide the necessary infrastructure to allow registrants to declare and maintain metadata and state data. Registration agencies are also expected to actively promote the widespread adoption of the DOI system, to cooperate with the IDF in the development of the DOI system as a whole, and to provide services on behalf of their specific user community. A list of current RAs is maintained by the International DOI Foundation. The IDF is recognized as one of the federated registrars for the Handle System by the DONA Foundation (of which the IDF is a board member), and is responsible for assigning Handle System prefixes under the top-level 10 prefix.
Registration agencies generally charge a fee to assign a new DOI name; parts of these fees are used to support the IDF. The DOI system overall, through the IDF, operates on a not-for-profit cost recovery basis.
== Standardization ==
The DOI system is an international standard developed by the International Organization for Standardization in its technical committee on identification and description, TC46/SC9. The Draft International Standard ISO/DIS 26324, Information and documentation – Digital Object Identifier System met the ISO requirements for approval. The relevant ISO Working Group later submitted an edited version to ISO for distribution as an FDIS (Final Draft International Standard) ballot, which was approved by 100% of those voting in a ballot closing on 15 November 2010. The final standard was published on 23 April 2012.
DOI is a registered URI under the info URI scheme specified by IETF RFC 4452. info:doi/ is the infoURI Namespace of Digital Object Identifiers.
The DOI syntax is a NISO standard, first standardized in 2000, ANSI/NISO Z39.84-2005 Syntax for the Digital Object Identifier.
The maintainers of the DOI system have registered a DOI namespace for URNs.
== See also ==
== Notes ==
== References ==
== External links ==
Official website
DOI Resources from DOI.org, including Factsheets, FAQs, and more
shortDOI Service — A shortening service offered by the DOI Foundation that creates aliases for existing DOI® names of the form 10/abcde
Crossref Metadata Search from CrossRef.org
Crossref Simple Text Query from CrossRef.org |
Douglas DC-3 | The Douglas DC-3 is a propeller-driven airliner manufactured by the Douglas Aircraft Company. It had a lasting effect on the airline industry from the 1930s through the 1940s and on World War II.
It was developed as a larger, improved 14-bed sleeper version of the Douglas DC-2.
It is a low-wing metal monoplane with conventional landing gear, powered by two radial piston engines of 1,000–1,200 hp (750–890 kW). Although the DC-3s originally built for civil service had the Wright R-1820 Cyclone, later civilian DC-3s used the Pratt & Whitney R-1830 Twin Wasp engine.
The DC-3 has a cruising speed of 207 mph (333 km/h), a capacity of 21 to 32 passengers or 6,000 lbs (2,700 kg) of cargo, and a range of 1,500 mi (2,400 km), and can operate from short runways.
The DC-3 had many exceptional qualities compared to previous aircraft. It was fast, had a good range, was more reliable, and carried passengers in greater comfort. Before World War II, it pioneered many air travel routes. It was able to cross the continental United States from New York to Los Angeles in 18 hours, with only three stops.
It is one of the first airliners that could profitably carry only passengers without relying on mail subsidies. In 1939, at the peak of its dominance in the airliner market, around ninety percent of airline flights on the planet were by a DC-3 or some variant.
Following the war, the airliner market was flooded with surplus transport aircraft, and the DC-3 was no longer competitive because it was smaller and slower than aircraft built during the war. It was made obsolete on main routes by more advanced types such as the Douglas DC-4 and Convair 240, but the design proved adaptable and was still useful on less commercially demanding routes.
Civilian DC-3 production ended in 1943 at 607 aircraft. Military versions, including the C-47 Skytrain (the Dakota in British RAF service), and Soviet- and Japanese-built versions, brought total production to over 16,000.
Many continued to be used in a variety of niche roles; 2,000 DC-3s and military derivatives were estimated to be still flying in 2013; by 2017 more than 300 were still flying. As of 2023, it is estimated about 150 are still flying.
== Design and development ==
"DC" stands for Douglas Commercial. The DC-3 was the culmination of a development effort that began after an inquiry from Transcontinental and Western Airlines (TWA) to Donald Douglas. TWA's rival in transcontinental air service, United Airlines, was starting service with the Boeing 247, and Boeing refused to sell any 247s to other airlines until United's order for 60 aircraft had been filled. TWA asked Douglas to design and build an aircraft that would allow TWA to compete with United. Douglas' design, the 1933 DC-1, was promising, and led to the DC-2 in 1934. The DC-2 was a success, but with room for improvement.
The DC-3 resulted from a marathon telephone call from American Airlines CEO C. R. Smith to Donald Douglas, when Smith persuaded a reluctant Douglas to design a sleeper aircraft based on the DC-2 to replace American's Curtiss Condor II biplanes. The DC-2's cabin was 66 inches (1.7 m) wide, too narrow for side-by-side berths. Douglas agreed to go ahead with development only after Smith informed him of American's intention to purchase 20 aircraft. The new aircraft was engineered by a team led by chief engineer Arthur E. Raymond over the next two years, and the prototype DST (Douglas Sleeper Transport) first flew on December 17, 1935 (the 32nd anniversary of the Wright Brothers' flight at Kitty Hawk) with Douglas chief test pilot Carl Cover at the controls. Its cabin was 92 in (2,300 mm) wide, and a version with 21 seats instead of the 14–16 sleeping berths of the DST was given the designation DC-3. No prototype was built, and the first DC-3 built followed seven DSTs off the production line for delivery to American Airlines.
The DC-3 and DST popularized air travel in the United States. Eastbound transcontinental flights could cross the U.S. in about 15 hours with three refueling stops, while westbound trips against the wind took 17½ hours. A few years earlier, such a trip entailed short hops in slower and shorter-range aircraft during the day, coupled with train travel overnight.
Several radial engines were offered for the DC-3. Early-production civilian aircraft used either the 9-cylinder Wright R-1820 Cyclone 9 or the 14-cylinder Pratt & Whitney R-1830 Twin Wasp, but the Twin Wasp was chosen for most military versions and was also used by most DC-3s converted from military service. Five DC-3S Super DC-3s with Pratt & Whitney R-2000 Twin Wasps were built in the late 1940s, three of which entered airline service.
=== Production ===
Total production including all military variants was 16,079. More than 400 remained in commercial service in 1998. Production was:
607 civilian variants
10,048 military C-47 and C-53 derivatives built at Santa Monica, California, Long Beach, California, and Oklahoma City
4,937 built under license in the Soviet Union (1939–1950) as the Lisunov Li-2 (NATO reporting name: Cab)
487 Mitsubishi Kinsei-engined aircraft built by Showa and Nakajima in Japan (1939–1945), as the L2D Type 0 transport (Allied codename Tabby)
Production of DSTs ended in mid-1941 and civilian DC-3 production ended in early 1943, although dozens of the DSTs and DC-3s ordered by airlines that were produced between 1941 and 1943 were pressed into the US military service while still on the production line. Military versions were produced until the end of the war in 1945. A larger, more powerful Super DC-3 was launched in 1949 to positive reviews. The civilian market was flooded with second-hand C-47s, many of which were converted to passenger and cargo versions. Only five Super DC-3s were built, and three of them were delivered for commercial use. The prototype Super DC-3 served the US Navy with the designation YC-129 alongside 100 R4Ds that had been upgraded to the Super DC-3 specifications.
=== Turboprop conversions ===
From the early 1950s, some DC-3s were modified to use Rolls-Royce Dart engines, as in the Conroy Turbo Three. Other conversions featured Armstrong Siddeley Mamba or Pratt & Whitney PT6A turbines.
The Greenwich Aircraft Corp DC-3-TP is a conversion with an extended fuselage and with Pratt & Whitney Canada PT6A-65AR or PT6A-67R engines fitted.
The Basler BT-67 is a conversion of the DC-3/C-47. Basler refurbishes C-47s and DC-3s at Oshkosh, Wisconsin, fitting them with Pratt & Whitney Canada PT6A-67R turboprop engines, lengthening the fuselage by 40 in (1,000 mm) with a fuselage plug ahead of the wing, and some local strengthening of the airframe.
South Africa-based Braddick Specialised Air Services International (commonly referred to as BSAS International) has also performed Pratt & Whitney PT6 turboprop conversions (65ARTP, 67RTP, and 67FTP variants), having modified over 50 DC-3/C-47s.
== Operational history ==
American Airlines inaugurated passenger service on June 26, 1936, with simultaneous flights from Newark, New Jersey and Chicago, Illinois. Early U.S. airlines like American, United, TWA, Eastern, and Delta ordered over 400 DC-3s. These fleets paved the way for the modern American air travel industry, which eventually replaced trains as the most common means of long-distance travel across the United States. A nonprofit group, Flagship Detroit Foundation, continues to operate the only original American Airlines Flagship DC-3 with air show and airport visits throughout the U.S.
In 1936, KLM Royal Dutch Airlines received its first DC-3, which replaced the DC-2 in service from Amsterdam via Batavia (now Jakarta) to Sydney, by far the world's longest scheduled route at the time. In total, KLM bought 23 DC-3s before the war broke out in Europe. In 1941, a China National Aviation Corporation (CNAC) DC-3 pressed into wartime transportation service was bombed on the ground at Suifu Airfield in China, destroying the outer right wing. The only spare available was that of a smaller Douglas DC-2 in CNAC's workshops. The DC-2's right wing was removed, flown to Suifu under the belly of another CNAC DC-3, and bolted up to the damaged aircraft. After a single test flight, in which it was discovered that it pulled to the right due to the difference in wing sizes, the so-called DC-2½ was flown to safety.
During World War II, many civilian DC-3s were requisitioned for the war effort and more than 10,000 U.S. military versions of the DC-3 were built, under the designations C-47, C-53, R4D, and Dakota. Peak production was reached in 1944, with 4,853 being delivered. The armed forces of many countries used the DC-3 and its military variants for the transport of troops, cargo, and wounded. Licensed copies of the DC-3 were built in Japan as the Showa L2D (487 aircraft); and in the Soviet Union as the Lisunov Li-2 (4,937 aircraft).
After the war, thousands of cheap ex-military DC-3s became available for civilian use. Cubana de Aviación became the first Latin American airline to offer a scheduled service to Miami when it started its first scheduled international service from Havana in 1945 with a DC-3. Cubana used DC-3s on some domestic routes well into the 1960s.
Douglas developed an improved version, the Super DC-3, with more power, greater cargo capacity, and an improved wing, but with surplus aircraft available for cheap, they failed to sell well in the civilian aviation market. Only five were delivered, three of them to Capital Airlines. The U.S. Navy had 100 of its early R4Ds converted to Super DC-3 standard during the early 1950s as the Douglas R4D-8/C-117D. The last U.S. Navy C-117 was retired on July 12, 1976. The last U.S. Marine Corps C-117, serial 50835, was retired from active service during June 1982. Several remained in service with small airlines in North and South America in 2006.
The United States Forest Service used the DC-3 for smoke jumping and general transportation until the last example was retired in December 2015.
A number of aircraft companies attempted to design a "DC-3 replacement" over the next three decades (including the very successful Fokker F27 Friendship), but no single type could match the versatility, rugged reliability, and economy of the DC-3. While newer airliners soon replaced it on longer high-capacity routes, it remained a significant part of air transport systems well into the 1970s as a regional airliner before being replaced by early regional jets.
=== DC-3 in the 21st century ===
Perhaps unique among prewar aircraft, the DC-3 continues to fly in active commercial and military service as of 2025, ninety years after the type's first flight in 1935, although numbers are dwindling due to expensive maintenance and a lack of spare parts. Small operators still fly DC-3s in revenue passenger service and as cargo aircraft. Applications of the DC-3 have included passenger service, aerial spraying, freight transport, military transport, missionary flying, skydiver shuttling, and sightseeing. So many civil and military operators have flown the DC-3/C-47 and related types that a comprehensive listing of all operators is impracticable.
A common saying among aviation enthusiasts and pilots is "the only replacement for a DC-3 is another DC-3".
Its ability to use grass or dirt runways makes it popular in developing countries or remote areas, where runways may be unpaved.
The oldest surviving DST is N133D, the sixth Douglas Sleeper Transport built, manufactured in 1936. This aircraft was delivered to American Airlines on 12 July 1936 as NC16005. In 2011 it was at Shell Creek Airport, Punta Gorda, Florida. It has been repaired and has been flying again, with a recent flight on 25 April 2021. The oldest DC-3 still flying is the original American Airlines Flagship Detroit (c/n 1920, the 43rd aircraft off the Santa Monica production line, delivered on 2 March 1937), which appears at airshows around the United States and is owned and operated by the Flagship Detroit Foundation.
The base price of a new DC-3 in 1936 was around $60,000–$80,000, and by 1960 used aircraft were available for $75,000. In 2023, airworthy DC-3s could be bought for $400,000–$700,000.
As of 2024, Basler BT-67 conversions, modified to handle cold weather and snow runways, are used in Antarctica, including regular landings at the South Pole during the austral summer.
=== Original operators ===
== Variants ==
=== Civil ===
DST
Douglas Sleeper Transport; the initial variant with two 1,000–1,200-horsepower (750–890 kW) Wright R-1820 Cyclone engines and standard sleeper accommodation for up to 16 with small upper windows, convertible to carry up to 24 day passengers.
DST-A
DST with 1,000–1,200 hp (750–890 kW) Pratt & Whitney R-1830 Twin Wasp engines
DC-3
Initial non-sleeper variant; with 21 day-passenger seats, 1,000–1,200 hp (750–890 kW) Wright R-1820 Cyclone engines, no upper windows.
DC-3A
DC-3 with 1,000–1,200 hp (750–890 kW) Pratt & Whitney R-1830 Twin Wasp engines.
DC-3B
Version of DC-3 for TWA, with two 1,100–1,200 hp (820–890 kW) Wright R-1820 Cyclone engines and smaller convertible sleeper cabin forward with fewer upper windows than DST.
DC-3C
Designation for ex-military C-47, C-53, and R4D aircraft rebuilt by Douglas Aircraft in 1946, given new manufacturer numbers, and sold on the civil market; Pratt & Whitney R-1830 engines.
DC-3D
Designation for 28 new aircraft completed by Douglas in 1946 with unused components from the cancelled USAAF C-117 production line; Pratt & Whitney R-1830 engines.
DC-3S
Also known as Super DC-3, substantially redesigned DC-3 with fuselage lengthened by 39 inches (1.0 m); outer wings of a different shape with squared-off wingtips and shorter span; distinctive taller rectangular tail; and fitted with more powerful Pratt & Whitney R-2000 or 1,475 hp (1,100 kW) Wright R-1820 Cyclone engines. Five completed by Douglas for civil use using existing surplus secondhand airframes. Three Super DC-3s were operated by Capital Airlines 1950–1952. Designation also used for examples of the 100 R4Ds that had been converted by Douglas to this standard for the U.S. Navy as R4D-8s (later designated C-117Ds), all fitted with more powerful Wright R-1820 Cyclone engines, some of which entered civil use after retirement from military service.
=== Military ===
C-41, C-41A
The C-41 was the first DC-3 to be ordered by the United States Army Air Corps (USAAC) and was powered by two 1,200 hp (890 kW) Pratt & Whitney R-1830-21 engines. It was delivered in October 1938 for use by USAAC chief General Henry H. Arnold, with the passenger cabin fitted out in a 14-seat VIP configuration. The C-41A was a single VIP DC-3A supplied to the USAAC in September 1939, also powered by R-1830-21 engines and used by the Secretary of War. The forward cabin was convertible to sleeper configuration with upper windows similar to the DC-3B.
C-48
Various DC-3A and DST models; 36 impressed as C-48, C-48A, C-48B, and C-48C.
C-48 - 1 impressed ex-United Airlines DC-3A.
C-48A - 3 impressed DC-3As with 18-seat interiors.
C-48B - 16 impressed ex-United Airlines DST-A air ambulances with 16-berth interiors.
C-48C - 16 impressed DC-3As with 21-seat interiors.
C-49
Various DC-3 and DST models; 138 impressed into service as C-49, C-49A, C-49B, C-49C, C-49D, C-49E, C-49F, C-49G, C-49H, C-49J, and C-49K.
C-50
Various DC-3 models, fourteen impressed as C-50, C-50A, C-50B, C-50C, and C-50D.
C-51
One impressed aircraft originally ordered by Canadian Colonial Airlines, had starboard-side door.
C-52
DC-3A aircraft with R-1830 engines, five impressed as C-52, C-52A, C-52B, C-52C, and C-52D.
C-68
Two DC-3As impressed with 21-seat interiors.
C-84
One impressed DC-3B aircraft.
Dakota II
British Royal Air Force designation for impressed DC-3s.
LXD1
A single DC-3 supplied for evaluation by the Imperial Japanese Navy Air Service (IJNAS).
R4D-2
Two Eastern Air Lines DC-3-388s impressed into United States Navy (USN) service as VIP transports, later designated R4D-2F and later R4D-2Z.
R4D-4
Ten DC-3As impressed for use by the USN.
R4D-4R
Seven DC-3s impressed as staff transports for the USN.
R4D-4Q
Radar countermeasures version of R4D-4 for the USN.
XCG-17
Experimental assault glider, one converted.
=== Conversions ===
Dart-Dakota
Conversion for BEA test services, powered by two Rolls-Royce Dart turboprop engines.
Mamba-Dakota
A single conversion for the Ministry of Supply, powered by two Armstrong-Siddeley Mamba turboprop engines.
Airtech DC-3/2000
DC-3/C-47 engine conversion by Airtech Canada, first offered in 1987. Powered by two PZL ASz-62IT radial engines.
Basler BT-67
DC-3/C-47 conversion with a stretched fuselage, strengthened structure, modern avionics, and powered by two Pratt & Whitney Canada PT-6A-67R turboprop engines.
BSAS C-47TP Turbo Dakota
A South African C-47 conversion for the South African Air Force by Braddick Specialised Air Services, with two Pratt & Whitney Canada PT6A-65R turboprop engines, revised systems, stretched fuselage, and modern avionics.
Conroy Turbo-Three
One DC-3/C-47 converted by Conroy Aircraft with two Rolls-Royce Dart Mk. 510 turboprop engines.
Conroy Super-Turbo-Three
Same as the Turbo Three but converted from a Super DC-3. One converted.
Conroy Tri-Turbo-Three
Conroy Turbo Three further modified by the removal of the two Rolls-Royce Dart engines and their replacement by three Pratt & Whitney Canada PT6s (one mounted on each wing and one in the nose).
Greenwich Aircraft Corp Turbo Dakota DC-3
DC-3/C-47 conversion with a stretched fuselage, strengthened wing center section, updated systems, and powered by two Pratt & Whitney Canada PT6A-65AR turboprop engines.
TS-62
Douglas-built C-47s fitted with Russian Shvetsov ASh-62 radial engines after World War II due to shortage of American engines in the Soviet Union. Some TS-62s featured a small extra cockpit window on the left side.
TS-82
Similar to the TS-62, but with 1,650 hp (1,230 kW) Shvetsov ASh-82 radial engines.
USAC DC-3 Turbo Express
A turboprop conversion by the United States Aircraft Corporation, fitting Pratt & Whitney Canada PT6A-45R turboprop engines with an extended forward fuselage to maintain center of gravity. First flight of the prototype conversion, (N300TX), was on July 29, 1982.
=== Military and foreign derivatives ===
Douglas C-47 Skytrain and C-53 Skytrooper
Production military DC-3A variants.
Showa and Nakajima L2D
Developments manufactured under license in Japan by Nakajima and Showa for the IJNAS; 487 built.
Lisunov Li-2 and PS-84
Developments manufactured under license in the USSR; 4,937 built.
== Accidents and incidents ==
== Aircraft on display ==
Douglas C-47-DL serial number 41-7723 is on display at Pima Air & Space Museum near Tucson, Arizona. The aircraft was previously displayed at the United States Air Force Museum.
A decommissioned DC-3 is part of the seating area at a McDonald's in Taupō, New Zealand.
A DC-3 has been converted into an exhibit at Madurodam, The Netherlands.
A DC-3 was deliberately submerged in July 2009 for divers in Kaş, Antalya.
== Specifications (DC-3A-S1C3G) ==
Data from McDonnell Douglas Aircraft since 1920

General characteristics
Crew: two
Capacity: 21–32 passengers
Length: 64 ft 5 in (19.7 m)
Wingspan: 95 ft 0 in (29.0 m)
Height: 16 ft 9 in (5.16 m) in level attitude; 23 ft 6 in (7.16 m)
Wing area: 987 sq ft (91.7 m2)
Aspect ratio: 9.17
Airfoil: NACA2215 / NACA2206
Empty weight: 16,865 lb (7,650 kg)
Gross weight: 25,200 lb (11,431 kg); payload with full fuel: 3,446 lb (1,563 kg)
Fuel capacity: 822 US gal (3,111.6 L)
Powerplant: 2 × Pratt & Whitney R-1830-S1C3G Twin Wasp 14-cyl. air-cooled two row radial piston engine, 1,200 hp (890 kW) each
Propellers: 3-bladed Hamilton Standard 23E50 series, 11 ft 6 in (3.5 m) diameter hydraulically controlled constant speed, feathering
Performance
Maximum speed: 223 kn (257 mph, 413 km/h) at 8,500 ft (2,590 m)
Cruise speed: 183 kn (211 mph, 339 km/h)
Stall speed: 68.0 kn (78.2 mph, 125.9 km/h)
Never exceed speed: 223 kn (257 mph, 413 km/h)
Minimum control speed: 77 kn (89 mph, 143 km/h) with one engine inoperative
Range: 1,370 nmi (1,580 mi, 2,540 km) with maximum fuel and 3,500 lb (1,590 kg) payload; 1,740 nmi (3,220 km) at 10,000 ft (3,050 m) with cruise fuel consumption of 94 gph at 50% power and 157 kn IAS
Service ceiling: 23,200 ft (7,100 m); single-engine ceiling: 9,000 ft (2,700 m)
Rate of climb: 1,140 ft/min (5.8 m/s); 200 ft/min (1.0 m/s) at sea level on one engine
Wing loading: 25.5 lb/sq ft (125 kg/m2)
Power/mass: 0.0952 hp/lb (156.5 W/kg)
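The wing loading and power-to-mass figures above are derived from the primary values in this specification (987 sq ft wing area, 25,200 lb gross weight, two 1,200 hp engines). A minimal sketch cross-checking them, using standard unit-conversion constants (the small difference in the last decimal of the W/kg figure is rounding):

```python
# Cross-check of the derived figures in the DC-3A specification
# from the primary values given in the list above.

HP_TO_W = 745.7        # watts per mechanical horsepower
LB_TO_KG = 0.45359237  # kilograms per pound

gross_lb = 25_200.0        # gross weight, lb
wing_area_sqft = 987.0     # wing area, sq ft
total_hp = 2 * 1_200.0     # two R-1830s at 1,200 hp each

wing_loading = gross_lb / wing_area_sqft               # lb/sq ft
power_to_mass = total_hp / gross_lb                    # hp/lb
power_to_mass_si = power_to_mass * HP_TO_W / LB_TO_KG  # W/kg

print(f"Wing loading: {wing_loading:.1f} lb/sq ft")    # 25.5
print(f"Power/mass:   {power_to_mass:.4f} hp/lb")      # 0.0952
print(f"              {power_to_mass_si:.1f} W/kg")    # 156.6
```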
== Notable appearances in media ==
Due to the large number produced, its Golden Age of Aviation and World War II significance, and nearly a century of service in passenger, cargo, and military roles throughout the world, the aircraft maintains significant popular interest and has appeared in numerous works of fiction.
== See also ==
Related development
Basler BT-67
Douglas AC-47 Spooky
Douglas C-47 Skytrain
Douglas R4D-8/C-117D
Douglas DC-2
Lisunov Li-2
Showa/Nakajima L2D
Conroy Turbo-Three
Conroy Tri-Turbo-Three
Aircraft of comparable role, configuration, and era
Boeing 247
Curtiss C-46 Commando
Douglas DC-5
Focke-Wulf Fw 206
Junkers Ju 52
Lockheed Model 18 Lodestar
Saab 90 Scandia
Vickers VC.1 Viking
Related lists
List of aircraft of World War II
List of civil aircraft
== References ==
=== Notes ===
=== Bibliography ===
== External links ==
DC-3/Dakota Historical Society
The DC-3 Hangar – Douglas DC-3 specific site
Centennial of flight Commission on the DC-3
Douglas DC-3 at the Aviation History Online Museum
Instruction manual: DC-3, DST – The Museum of Flight Digital Collections
E-8 Joint STARS

The Northrop Grumman E-8 Joint Surveillance Target Attack Radar System (Joint STARS) is a retired United States Air Force (USAF) airborne ground surveillance, battle management and command and control aircraft. It tracked ground vehicles and some aircraft, collected imagery, and relayed tactical pictures to ground and air theater commanders. Until its retirement in 2023, the aircraft was operated by both active-duty USAF and Air National Guard units, with specially trained U.S. Army personnel as additional flight crew.
== Development ==
Joint STARS evolved from separate U.S. Army and Air Force (USAF) programs to develop technology to detect, locate and attack enemy armor at ranges beyond the front line of a battle. In 1982, the programs were merged and the USAF became the lead agent. The concept and sensor technology for the E-8 was developed and tested on the Tacit Blue experimental aircraft. The prime contract was awarded to Grumman Aerospace Corporation in September 1985 for two E-8A development systems.
In late 2005, Northrop Grumman was awarded a contract for upgrading engines and other systems. Pratt & Whitney, in a joint venture with Seven Q Seven (SQS), was contracted to produce and deliver JT8D-219 engines for the E-8s. Their greater efficiency would have allowed the Joint STARS to spend more time on station, take off from a wider range of runways, climb faster, and fly higher, all at a much reduced cost per flying hour.
In December 2008, an E-8C test aircraft took its first flight with the new engines. In 2009, the company began engine replacement and additional upgrade efforts. The re-engining funding was halted in 2009 as the Air Force began to consider other options for performing the JSTARS mission.
== Design ==
The E-8C is an aircraft modified from the Boeing 707-300 series commercial airliner. The E-8 carries specialized radar, communications, operations and control subsystems. The most prominent external feature is the 40 ft (12 m) canoe-shaped radome under the forward fuselage that houses the 24 ft (7.3 m) APY-7 active electronically scanned array side looking airborne radar antenna.
The E-8C can respond quickly and effectively to support worldwide military contingency operations. It is a jam-resistant system capable of operating while experiencing heavy electronic countermeasures. The E-8C can fly a mission profile for 9 hours without refueling. Its range and on-station time can be substantially increased through in-flight refueling.
=== Radar and systems ===
The AN/APY-7 radar can operate in wide area surveillance, ground moving target indicator (GMTI), fixed target indicator (FTI) target classification, and synthetic aperture radar (SAR) modes.
To pick up moving targets, the radar measures the Doppler frequency shift of the returned signal. It can look from long range, which the military refers to as a high standoff capability. The antenna can be tilted to either side of the aircraft for a 120-degree field of view covering some 19,305 square miles (50,000 km2), and can simultaneously track 600 targets at ranges of more than 152 miles (250 km). The GMTI modes cannot pick up objects that are too small, insufficiently dense, or stationary. Data processing allows the APY-7 to differentiate between armored vehicles (tracked tanks) and trucks, allowing targeting personnel to better select the appropriate ordnance for various targets.
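The Doppler principle above can be illustrated with a short sketch. This is a generic textbook relation for a monostatic radar, not the APY-7's actual processing; the carrier frequency and target speed used are hypothetical round numbers:

```python
# Illustrative two-way Doppler shift for a ground moving target:
# f_d = 2 * v * f / c, where v is the target's radial speed,
# f the radar carrier frequency, and c the speed of light.

C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(radial_speed_ms: float, carrier_hz: float) -> float:
    """Doppler shift of the echo from a target closing at radial_speed_ms."""
    return 2.0 * radial_speed_ms * carrier_hz / C

# A truck closing at 15 m/s (~54 km/h) seen by a hypothetical
# 10 GHz (X-band) radar:
fd = doppler_shift_hz(15.0, 10e9)
print(f"Doppler shift: {fd:.0f} Hz")  # 1000 Hz
```

A stationary object returns zero shift, which is why GMTI modes cannot see targets that are not moving.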
The system's SAR modes can produce images of stationary objects. Objects with many angles (for example, the interior of a pick-up bed) will give a much better radar signature, or specular return. In addition to being able to detect, locate and track large numbers of ground vehicles, the radar has a limited capability to detect helicopters, rotating antennas and low, slow-moving fixed-wing aircraft.
The radar and computer subsystems on the E-8C can gather and display broad and detailed battlefield information. Data is collected as events occur. This includes position and tracking information on enemy and friendly ground forces. The information is relayed in near-real time to the US Army's common ground stations via the secure jam-resistant surveillance and control data link (SCDL) and to other ground C4I nodes beyond line-of-sight via ultra high-frequency satellite communications.
Other major E-8C prime mission equipment are the communications/datalink (COMM/DLX) and operations and control (O&C) subsystems. Eighteen operator workstations display computer-processed data in graphic and tabular format on video screens. Operators and technicians perform battle management, surveillance, weapons, intelligence, communications and maintenance functions.
Northrop Grumman has tested the installation of a MS-177 camera on an E-8C to provide real time visual target confirmation.
The Multi-Platform Radar Technology Insertion Program (MP-RTIP) radar system was proposed as a more capable replacement of the AN/APY-7. The USAF ended up pursuing cheaper ways to modernize the E-8, though the MP-RTIP receiver technology did see use in the form of JSTARS Radar Modernization (JSRM).
=== Battle management ===
In missions from peacekeeping operations to major theater war, the E-8C can provide targeting data and intelligence for attack aviation, naval surface fire, field artillery and friendly maneuver forces. The information helps air and land commanders to control the battlespace.
The E-8's ground-moving radar can tell approximate number of vehicles, location, speed, and direction of travel. It cannot identify exactly what type of vehicle a target is, tell what equipment it has, or discern whether it is friendly, hostile, or a bystander, so commanders often crosscheck the JSTARS data against other sources. In the Army, JSTARS data is analyzed in and disseminated from a Ground Station Module (GSM).
Other improvement programs that have been applied to the E-8C include JSTARS Net Enabled Weapons (JNEW) and Joint Surface Warfare (JSuW); Blue Force Tracker (BFT); and Battlefield Airborne Communications Node (BACN) compatibility.
== Operational history ==
The two E-8A development aircraft were deployed in 1991 to participate in Operation Desert Storm under the direction of USAF Colonel Harry H. Heimple, Program Director, even though they were still in development. The joint program accurately tracked mobile Iraqi forces, including tanks and Scud missiles. Crews flew developmental aircraft on 49 combat sorties, accumulating more than 500 combat hours and a 100% mission effectiveness rate.
These Joint STARS developmental aircraft also participated in Operation Joint Endeavor, a NATO peacekeeping mission, in December 1995. While flying in friendly air space, the test-bed E-8A and pre-production E-8C aircraft monitored ground movements to confirm compliance with the Dayton Peace Accords agreements. Crews flew 95 consecutive operational sorties and more than 1,000 flight hours with a 98% mission effectiveness rate.
The 93d Air Control Wing, which activated 29 January 1996, accepted its first aircraft on 11 June 1996, and deployed in support of Operation Joint Endeavor in October. The provisional 93d Air Expeditionary Group monitored treaty compliance while NATO rotated troops through Bosnia and Herzegovina. The first production E-8C and a pre-production E-8C flew 36 operational sorties and more than 470 flight hours with a 100% effectiveness rate. The wing declared initial operational capability 18 December 1997 after receiving the second production aircraft. Operation Allied Force saw Joint STARS in action again from February to June 1999, accumulating more than 1,000 flight hours and a 94.5% mission-effectiveness rate in support of the U.S.-led Kosovo War.
The twelfth production aircraft, outfitted with an upgraded operations and control subsystem, was delivered to the USAF on 5 November 2001.
On 1 October 2002, the 93d Air Control Wing (93 ACW) was "blended" with the 116th Bomb Wing in a ceremony at Robins Air Force Base (AFB), Georgia. The 116 BW was an Air National Guard wing equipped with B-1B Lancer bombers at Robins. As a result of a USAF reorganization of the B-1B force, all B-1Bs were assigned to active duty wings, resulting in the 116 BW lacking a current mission. The newly created wing was designated 116th Air Control Wing (116 ACW) and the 93 ACW was inactivated the same day. The 116 ACW constituted the first fully blended wing of active duty and Air National Guard airmen. The wing took delivery of the 17th and final E-8C on 23 March 2005.
The E-8C Joint STARS routinely supports various taskings of the Combined Force Command Korea during the North Korean winter exercise cycle and for the United Nations enforcing resolutions on Iraq.
In March 2009, a Joint STARS aircraft was damaged beyond economical repair when a test plug was left on a fuel tank vent, subsequently causing the fuel tank to rupture during in-flight refueling. There were no casualties but the aircraft sustained $25 million in damage.
In September 2009, Loren B. Thompson of the Lexington Institute raised the question of why most of the Joint STARS fleet was sitting idle instead of being used to track insurgents in Afghanistan. Thompson states that the Joint STARS' radar has an inherent capacity to find what the Army calls 'dismounted' targets—insurgents walking around or placing roadside bombs. Thompson's neutrality has been questioned by some since Lexington Institute has been heavily funded by defense contractors, including Northrop Grumman.
Trials of Joint STARS in Afghanistan were intended to develop tactics, techniques, and procedures for tracking dismounted, moving groups of Taliban.
In January 2011, Northrop Grumman's E-8C Joint Surveillance Target Attack Radar System (Joint STARS) test bed aircraft completed the second of two deployments to Naval Air Station Point Mugu, California, in support of the U.S. Navy Joint Surface Warfare Joint Capability Technology Demonstration to test its network-enabled weapon architecture. The Joint STARS aircraft executed three Operational Utility Assessment flights and demonstrated its ability to guide anti-ship weapons against surface combatants at a variety of standoff distances in the NEW architecture.
From 2001 to January 2011 the Joint STARS fleet flew more than 63,000 hours in 5,200 combat missions in support of Operations Iraqi Freedom, Enduring Freedom and New Dawn.
On 1 October 2011, the "blended" wing construct of the 116th Air Control Wing (116 ACW), combining Air National Guard and Regular Air Force personnel in a single unit was discontinued. On this date, the 461st Air Control Wing (461 ACW) was established at Robins AFB as the Air Force's sole active duty E-8 Joint STARS wing while the 116 ACW reverted to a traditional Air National Guard wing within the Georgia Air National Guard. Both units share the same E-8 aircraft and will often fly with mixed crews, but now function as separate units.
On 1 October 2019, JSTARS ended its continuous presence in the United States Central Command (USCENTCOM) areas of responsibility. The 18-year deployment was the second-longest in U.S. Air Force history. In that time, the crews and aircraft flew 10,938 sorties and 114,426.6 combat hours.
On 11 February 2022, the first of four aircraft out of the remaining 16 operational JSTARS was retired, as detailed in the Fiscal Year 2022 National Defense Authorization Act (NDAA). The aircraft (serial number 92-3289/GA), which was the first to arrive at Robins AFB in 1996, was transferred to the 309th Aerospace Maintenance and Regeneration Group at Davis–Monthan Air Force Base.
From late 2021 to early 2022, E-8C JSTARS aircraft deployed to Europe during the prelude to the Russian invasion of Ukraine. Thirty years after entering service, the type was performing the kind of mission it had originally been intended for: monitoring Russian military activity in Eastern Europe, which it did while operating over Ukrainian airspace until the start of the invasion in late February 2022.
=== Retirement ===
The USAF began an analysis of alternatives (AOA) in March 2010 for its next-generation GMTI radar aircraft fleet. The study was completed in March 2012 and recommended buying a new business jet-based ISR aircraft, such as a version of the Boeing 737 or the Gulfstream G550. The Air Force said Joint STARS was expected to remain in operation through 2030.
On 23 January 2014, the USAF revealed a plan for the acquisition of a new business jet-class replacement for the E-8C Joint STARS. The program was called Joint STARS Recap and planned for the aircraft to reach initial operating capability (IOC) by 2022. The aircraft would be more efficient, and separate contracts would be awarded for developing the aircraft, airborne sensor, battle management command and control (BMC2) system, and communications subsystem.
On 8 April 2014, the Air Force held an industry day for companies interested in competing for JSTARS Recap; attendees included Boeing, Bombardier Aerospace, and Gulfstream Aerospace. Air Force procurement documents called for the replacement for the Boeing 707-based E-8C to be a "business jet class" aircraft that is "significantly smaller and more efficient." Indicative specifications called for an aircraft with a 10–13 person crew, a 3.96–6.1 m (13.0–20.0 ft) radar array, and the ability to fly at 38,000 ft for eight hours. In August 2015, the Air Force issued contracts to Boeing, Lockheed Martin, and Northrop Grumman for a one-year pre-engineering and manufacturing development effort to mature and test competing designs ahead of a downselect in late 2017.
During the fiscal 2019 budget rollout briefing, it was announced that the Air Force would not move forward with an E-8C replacement aircraft. Funding for the JSTARS recapitalization program was instead diverted to pay for development of an Advanced Battle Management System.
The E-8C JSTARS began to be retired in February 2022, and flew its last operational sortie on 21 September 2023. Rather than procure a replacement aircraft, the USAF intends to use a network of satellites, aircraft sensors and ground radars as a cheaper and more resilient approach to collecting similar targeting and tracking data. The JSTARS performed its last flight on 15 November 2023. The aircraft conducted some 14,000 operational sorties, flying more than 141,000 hours over 32 years of service.
== Variants ==
E-8A
Original platform configuration
TE-8A
Single aircraft with mission equipment removed, used for flight crew training.
YE-8B
Single aircraft; originally to be a U.S. Navy Boeing E-6 Mercury, but transferred to the U.S. Air Force as a development aircraft before it was decided to convert second-hand Boeing 707s (one from a Canadian Boeing CC-137) for the JSTARS role.
E-8C
Production Joint STARS platform configuration, converted from second-hand Boeing 707s (1 from a CC-137).
== Operators ==
United States
United States Air Force 1991-2023
93d Air Control Wing - Robins Air Force Base, Georgia 1996–2002
12th Airborne Command and Control Squadron
16th Airborne Command and Control Squadron
461st Air Control Wing - Robins Air Force Base, Georgia 2002–2023
12th Airborne Command and Control Squadron
16th Airborne Command and Control Squadron
Air National Guard - 2006–2023
116th Air Control Wing - Robins Air Force Base, Georgia
128th Airborne Command and Control Squadron
== Aircraft on display ==
E-8C 00-2000 is preserved at the Museum of Aviation at Robins Air Force Base, Georgia. It was transported from the base to the museum's facilities in July 2023.
TE-8A 86-0416 was transferred to the Sowela Technical Community College in Lake Charles, Louisiana on 19 September 2023. It will be used as a ground aircraft maintenance training tool as part of the college's Aviation Maintenance Technology program. This aircraft was one of the original two pre-production E-8A which took part in Operation Desert Storm in 1991, and also saw action during Operation Joint Endeavor in 1995. Afterward, it was converted into a TE-8A training aircraft and used to qualify E-8C pilots, navigators, and flight engineers.
E-8C 02-9111, the last JSTARS aircraft in service, was transferred to Kelly Field, San Antonio, Texas, on 15 November 2023, where it serves as a ground training aircraft in the 37th Training Wing.
== Accidents ==
One E-8C was damaged beyond economical repair during an operational sortie.
On 13 March 2009, E-8C tail 93-0597, while assigned to the USAF 379th Air Expeditionary Wing, experienced a near-catastrophic fuel tank over-pressurization during aerial refueling. While the aircraft was refueling from a Boeing KC-135T Stratotanker, a test plug left in the fuel vent system caused overpressure, resulting in severe internal damage to the number two fuel tank and surrounding wing structure. The JSTARS crew made a successful emergency landing at Al Udeid Air Base, but the aircraft was written off.
== Specifications ==
Data from USAF Factsheet

General characteristics
Crew: 4 flight crew (pilot, co-pilot, navigator, flight engineer)
Capacity: 18 specialists (crew size varies according to mission)
Length: 152 ft 11 in (46.61 m)
Wingspan: 145 ft 9 in (44.42 m)
Height: 42 ft 6 in (12.95 m)
Empty weight: 171,000 lb (77,564 kg)
Max takeoff weight: 336,000 lb (152,407 kg)
Powerplant: 4 × Pratt & Whitney TF33-PW-102 low-bypass turbofan engines, 19,200 lbf (85 kN) thrust each
Performance
Cruise speed: 390 kn (450 mph, 720 km/h) to 510 kn (587 mph, 945 km/h)
Optimum orbit speed: 449 mph (723 km/h) to 587 mph (945 km/h)
Endurance: 9 hours
Service ceiling: 42,000 ft (13,000 m)
Avionics
AN/APY-7 synthetic aperture radar
12 AN/ARC-225 UHF radios w/ HAVE QUICK
2 AN/ARC-190 HF radios
4 VHF radios (2 x AN/ARC-210, 1 x AN/ARC-186, 1 x AN/ARC-201D)
3 AN/ARC-231 SATCOM radios
== See also ==
Airborne early warning and control – Airborne system of surveillance radar plus command and control functions
Related development
Boeing C-137 Stratoliner – VIP transport aircraft derived from the Boeing 707
Boeing CC-137 – Boeing 707 transport of the Canadian Forces; parts from most of the ex-Canadian Forces 707s were obtained as spares for the E-8 JSTARS program, and two ex-CF airframes were converted as E-8 and E-8C
Boeing E-3 Sentry – Airborne early warning and control aircraft based on Boeing 707 airframe
Boeing E-6 Mercury – Airborne command post aircraft by Boeing based on 707 airframe
Aircraft of comparable role, configuration, and era
Northrop Grumman E-2 Hawkeye – Airborne early warning and control aircraft
Embraer R-99B – Airborne early warning and reconnaissance aircraft based on the ERJ-145
Raytheon Sentinel – Airborne battlefield and ground surveillance aircraft formerly operated by the Royal Air Force
Related lists
List of active military aircraft of the United States
List of military electronics of the United States
== References ==
=== Citations ===
=== Bibliography ===
Eden, Paul, ed. (2004). The Encyclopedia of Modern Military Aircraft. London, UK: Amber Books. ISBN 1-904687-84-9.
== External links ==
Northrop Grumman Joint STARS System Information
Northrop Grumman Joint STARS Radar Information
Boeing Integrated Defense Systems
Northrop Grumman ISR overview
Joint STARS Re-engine Program Info |
Early flying machines

Early flying machines include all forms of aircraft studied or constructed before the development of the modern aeroplane by 1910. The story of modern flight begins more than a century before the first successful manned aeroplane, and the earliest aircraft thousands of years before.
== Primitive beginnings ==
=== Legends ===
Some ancient mythologies feature legends of men using flying devices. One of the earliest known is the Greek legend of Daedalus; in Ovid's version, Daedalus fastens feathers together with thread and wax to mimic the wings of a bird. Other ancient legends include the Indian Vimana flying palace or chariot, the biblical Ezekiel's Chariot, the Irish roth rámach built by blind druid Mug Ruith and Simon Magus, various stories about magic carpets, and the mythical British King Bladud, who conjured up flying wings. The Flying Throne of Kay Kāvus was a legendary eagle-propelled craft built by the mythical Shah of Persia, Kay Kāvus, used for flying him all the way to China.
=== Early attempts ===
According to Aulus Gellius, the Ancient Greek philosopher, mathematician, astronomer, statesman, and strategist Archytas (428–347 BC) was reputed to have designed and built the first artificial, self-propelled flying device, a bird-shaped model propelled by a jet of what was probably steam, said to have actually flown some 200 metres around 400 BC. According to Gellius, this machine, which its inventor called The Pigeon (Greek: Περιστέρα "Peristera"), was suspended on a wire or pivot for its "flight" and was powered by a "concealed aura or spirit".
Eventually some tried to build flying devices, such as birdlike wings, and to fly by jumping off a tower, hill, or cliff. During this early period physical issues of lift, stability, and control were not understood, and most attempts ended in serious injury or death. In the 1st century AD, Chinese Emperor Wang Mang recruited a specialist scout to be bound with bird feathers; he is claimed to have glided about 100 meters. In 559 AD, Yuan Huangtou is said to have landed safely from an enforced tower jump.
The Andalusian scientist Abbas ibn Firnas (810–887 AD) reportedly made a glide in Córdoba, Spain, covering his body with vulture feathers and attaching two wings to his arms. The flight attempt was reported by the 17th-century Algerian historian Ahmed Mohammed al-Maqqari, who linked it to a 9th-century poem by one of Muhammad I of Córdoba's court poets. Al-Maqqari stated that Firnas flew some distance, before landing with some injuries, attributed to his lacking a tail (as birds use to land). The historian Lynn Townsend White, Jr. concluded that ibn Firnas made the first successful flight in history.
In the twelfth century, William of Malmesbury stated that the 11th-century Benedictine monk Eilmer of Malmesbury attached wings to his hands and feet and flew a short distance, but broke both legs while landing, also having neglected to make himself a tail.
=== Early kites ===
The kite was invented in China, possibly as far back as the 5th century BC by Mozi (also Mo Di) and Lu Ban (also Gongshu Ban). These leaf kites were constructed by stretching silk over a split bamboo framework. The earliest known Chinese kites were flat (not bowed) and often rectangular. Later, tailless kites incorporated a stabilizing bowline. Designs often emulated flying insects, birds, and other beasts, both real and mythical. Some were fitted with strings and whistles to make musical sounds while flying.
In 549 AD, a paper kite was used to carry a message for a rescue mission. Ancient and medieval Chinese sources list other uses of kites for measuring distances, testing the wind, lifting men, signalling, and communication for military operations.
After its introduction into India, the kite further evolved into the fighter kite. Traditionally these are small, unstable single line flat kites where line tension alone is used for control, and an abrasive line is used to cut down other kites.
Kites also spread throughout Polynesia, as far as New Zealand. Anthropomorphic kites made from cloth and wood were used in religious ceremonies to send prayers to the gods. By 1634, kites had reached the West, with an illustration of a diamond kite with a tail appearing in Bate's Mysteries of nature and art.
==== Man-carrying kites ====
Man-carrying kites are believed to have been used extensively in ancient China, for both civil and military purposes and sometimes enforced as a punishment. Stories of man-carrying kites also occur in Japan, following the introduction of the kite from China around the seventh century AD. It is said that at one time there was a Japanese law against man-carrying kites.
In 1282, the European explorer Marco Polo described the Chinese techniques then current and commented on the hazards and cruelty involved. To foretell whether a ship should sail, a man would be strapped to a kite having a rectangular grid framework and the subsequent flight pattern used to divine the outlook.
=== Rotor wings ===
The use of a rotor for vertical flight has existed since the 4th century AD in the form of the bamboo-copter, an ancient Chinese toy. The bamboo-copter is spun by rolling a stick attached to a rotor. The spinning creates lift, and the toy flies when released. The philosopher Ge Hong's book, the Baopuzi (Master Who Embraces Simplicity), written around 317, describes the apocryphal use of a possible rotor in aircraft: "Some have made flying cars [feiche 飛車] with wood from the inner part of the jujube tree, using ox-leather (straps) fastened to returning blades so as to set the machine in motion."
The similar "moulinet à noix" (rotor on a nut), as well as string-pull toys with four blades, appeared in Europe in the 14th century.
=== Hot air balloons ===
From ancient times the Chinese have understood that hot air rises and have applied the principle to a type of small hot air balloon called a sky lantern. A sky lantern consists of a paper balloon under or just inside which a small lamp is placed. Sky lanterns are traditionally launched for pleasure and during festivals. According to Joseph Needham, such lanterns were known in China from the 3rd century BC. Their military use is attributed to the general Zhuge Liang, who is said to have used them to scare the enemy troops.
There is evidence the Chinese also "solved the problem of aerial navigation" using balloons, hundreds of years before the 18th century.
=== The Renaissance ===
Eventually some investigators began to discover and define some of the basics of scientific aircraft design. Powered designs were either still driven by man-power or used a metal spring. In his 1250 book De mirabili potestate artis et naturae (Secrets of Art and Nature), the Englishman Roger Bacon predicted future designs for a balloon filled with an unspecified aether as well as a man-powered ornithopter, claiming to know someone who had invented the latter.
==== Leonardo da Vinci ====
Leonardo da Vinci studied bird flight for many years, analyzing it rationally and anticipating many principles of aerodynamics. He understood that "An object offers as much resistance to the air as the air does to the object", anticipating Isaac Newton's third law of motion (published in 1687). From the last years of the 15th century onwards, Leonardo wrote about and sketched many designs for flying machines and mechanisms, including ornithopters, fixed-wing gliders, rotorcraft and parachutes. His early designs were man-powered types including rotorcraft and ornithopters (improving on Bacon's proposal by adding a stabilizing tail). He eventually came to realise the impracticality of these and turned to controlled gliding flight, also sketching some designs powered by a spring.
In 1488, Leonardo drew a hang glider design in which the inner parts of the wings are fixed, and some control surfaces are provided towards the tips (as in the gliding flight of birds). His drawings survive and are deemed flight-worthy in principle, but he himself never flew in such a craft. In an essay titled Sul volo (On flight), he describes a flying machine called "the bird" which he built from starched linen, leather joints, and raw silk thongs. In the Codex Atlanticus, he wrote, "Tomorrow morning, on the second day of January, 1496, I will make the thong and the attempt." Some of Leonardo's other designs, such as the four-person aerial screw, similar to a helicopter, have severe flaws. He drew and wrote about a design for an ornithopter around 1490. Leonardo's work remained unknown until 1797, and so had no influence on developments over the next three hundred years.
=== Other attempts ===
In 1496, a man named Seccio broke both arms in Nuremberg while attempting flight. In 1507, John Damian strapped on wings covered with chicken feathers and jumped from the walls of Stirling Castle in Scotland, breaking his thigh; he later blamed it on not using eagle feathers.
The earliest report of an attempted rocket flight dates back to the Ottoman Empire. In 1633, the aviator Lagâri Hasan Çelebi reportedly used a cone-shaped rocket to make the first such attempt.
Francis Willughby's suggestion, published in 1676, that human legs were more comparable to birds' wings in strength than arms, had occasional influence. On 15 May 1793, the Spanish inventor Diego Marín Aguilera jumped with his glider from the highest part of the castle of Coruña del Conde, reaching a height of about 5 or 6 m, and gliding for about 360 metres. As late as 1811, Albrecht Berblinger constructed an ornithopter and jumped into the Danube at Ulm.
== Lighter than air ==
=== Balloons ===
The modern era of lighter-than-air flight began early in the 17th century with Galileo Galilei's experiments in which he showed that air has weight. Around 1650, Cyrano de Bergerac wrote some fantasy novels in which he described the principle of ascent using a substance (dew) he supposed to be lighter than air, and descending by releasing a controlled amount of the substance. Francesco Lana de Terzi measured the pressure of air at sea level and in 1670 proposed the first scientifically credible lifting medium in the form of hollow metal spheres from which all the air had been pumped out. These would be lighter than the displaced air and able to lift an airship. His proposed methods of controlling height are still in use today: carrying ballast which may be dropped overboard to gain height, and venting the lifting containers to lose height. In practice de Terzi's spheres would have collapsed under air pressure, and further developments had to wait for more practicable lifting gases.
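De Terzi's scheme can be checked with a rough modern calculation (the densities below are present-day standard values, not figures from the source): an evacuated sphere gains lift equal to the mass of the air it displaces, but any shell light enough to leave net lift is far too thin to resist atmospheric pressure.

```python
import math

AIR_DENSITY = 1.225       # kg/m^3, sea-level air (modern standard value)
COPPER_DENSITY = 8960.0   # kg/m^3, used here as an illustrative shell material

def vacuum_sphere_lift(radius_m: float) -> float:
    """Gross lift (kg) of a fully evacuated sphere: by Archimedes'
    principle it equals the mass of the air it displaces."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return AIR_DENSITY * volume

def max_shell_thickness(radius_m: float, shell_density: float) -> float:
    """Thickest shell (m) whose own mass still leaves net lift >= 0,
    approximating shell mass as surface area * thickness * density."""
    area = 4.0 * math.pi * radius_m ** 2
    return vacuum_sphere_lift(radius_m) / (area * shell_density)

lift = vacuum_sphere_lift(3.0)                    # ~139 kg of displaced air
t_max = max_shell_thickness(3.0, COPPER_DENSITY)  # ~0.14 mm of copper
```

A sphere of 3 m radius displaces roughly 139 kg of air, yet a copper shell could be at most about 0.14 mm thick before its own weight cancelled the lift, illustrating why de Terzi's spheres would have collapsed.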
The first documented balloon flight in Europe was of a model made by the Brazilian-born Portuguese priest Bartolomeu de Gusmão. On 8 August 1709, in Lisbon, he made a small hot-air balloon of paper with a fire burning beneath it, lifting it about 4 metres (13 ft) in front of king John V and the Portuguese court.
In the mid-18th century the Montgolfier brothers began experimenting with parachutes and balloons in France. Their balloons were made of paper, and early experiments using steam as the lifting gas were short-lived due to its effect on the paper as it condensed. Mistaking smoke for a kind of steam, they began filling their balloons with hot smoky air which they called "electric smoke". Despite not fully understanding the principles at work, they made some successful launches and in December 1782 flew a 20 m3 (710 cu ft) balloon to a height of 300 m (980 ft). The French Académie des Sciences soon invited them to Paris to give a demonstration.
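The lift available to such a small hot-air balloon can be estimated with the ideal gas law (the temperatures and density here are illustrative modern values, not measurements from 1782): at constant pressure, the density of the heated air falls in proportion to its absolute temperature.

```python
AIR_DENSITY = 1.225  # kg/m^3 ambient air at about 15 C (288 K), assumed value

def hot_air_lift(volume_m3: float, t_ambient_k: float, t_inside_k: float) -> float:
    """Net lift (kg) of a hot-air balloon. At constant pressure the ideal
    gas law gives inside density = ambient density * T_ambient / T_inside;
    net lift is the density difference times the envelope volume."""
    rho_inside = AIR_DENSITY * t_ambient_k / t_inside_k
    return (AIR_DENSITY - rho_inside) * volume_m3

# A 20 m^3 envelope heated to roughly 100 C (373 K) in 15 C air:
lift = hot_air_lift(20.0, 288.0, 373.0)  # about 5.6 kg
```

About 5.6 kg of lift is enough for a light paper envelope and little else, which is consistent with the small unmanned balloons of these early experiments.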
Meanwhile, the discovery of hydrogen led Joseph Black to propose its use as a lifting gas in about 1780, though practical demonstration awaited a gastight balloon material. On hearing of the Montgolfier Brothers' invitation, the French Academy member Jacques Charles offered a similar demonstration of a hydrogen balloon and this was accepted. Charles and two craftsmen, the Robert brothers, developed a gastight material of rubberised silk and set to work.
1783 was a watershed year for ballooning. Between 4 June and 1 December five separate French balloons achieved important aviation firsts:
4 June: The Montgolfier brothers' unmanned hot air balloon lifted a sheep, a duck and a chicken in a basket hanging beneath at Annonay.
27 August: Professor Jacques Charles and the Robert brothers flew an unmanned hydrogen balloon. The hydrogen gas was generated by chemical reaction during the filling process.
19 October: The Montgolfiers launched the first manned flight, a tethered balloon with humans on board, at the Folie Titon in Paris. The aviators were the scientist Jean-François Pilâtre de Rozier, the wallpaper manufacturer Jean-Baptiste Réveillon, and Giroud de Villette.
21 November: The Montgolfiers launched the first free (untethered) balloon flight with human passengers. King Louis XVI had originally decreed that condemned criminals would be the first pilots, but Rozier, along with the Marquis François d'Arlandes, successfully petitioned for the honor. They drifted about 9 kilometres (5.6 mi) in 25 minutes in a balloon powered by a wood fire.
1 December: Jacques Charles and Nicolas-Louis Robert launched a manned hydrogen balloon from the Jardin des Tuileries in Paris. They ascended to a height of about 1,800 feet (550 m) and landed at sunset in Nesles-la-Vallée after a flight of 2 hours and 5 minutes, covering 22 miles (35 km). After Robert alighted Charles decided to ascend alone. This time he ascended rapidly to an altitude of about 3,000 metres (9,800 ft), where he saw the sun again but also suffered extreme pain in his ears.
The Montgolfier designs had several shortcomings, not least the need for dry weather and a tendency for sparks from the fire to set light to the paper balloon. The manned design had a gallery around the base of the balloon rather than the hanging basket of the first, unmanned design, which brought the paper closer to the fire. On their free flight, De Rozier and d'Arlandes took buckets of water and sponges to douse these fires as they arose. On the other hand, the manned design of Charles was essentially modern. As a result of these exploits, the hot air balloon became known as the Montgolfière type and the gas balloon the Charlière.
The next balloon of Charles and the Robert brothers was a Charlière that followed Jean Baptiste Meusnier's proposals for an elongated dirigible balloon, and was notable for having an outer envelope with the gas contained in a second, inner ballonet. On 19 September 1784, it completed the first flight of over 100 kilometres (62 mi), between Paris and Beuvry, despite the man-powered propulsive devices proving useless.
In January the next year, Jean Pierre Blanchard and John Jeffries crossed the English Channel from Dover to the Bois de Felmores in a Charlière. But a similar attempt the other way ended in tragedy. To try to provide both endurance and controllability, de Rozier developed a balloon with both hot air and hydrogen gas bags, a design which was soon named after him as the Rozière. His idea was to use the hydrogen section for constant lift and to navigate vertically by heating, or allowing to cool, the hot air section, in order to catch the most favourable wind at whatever altitude it was blowing. The balloon envelope was made of goldbeater's skin. Shortly after the flight began, de Rozier was seen to be venting hydrogen when it was ignited by a spark and the balloon went up in flames, killing those on board. The source of the spark is not known, but suggestions include static electricity or the brazier for the hot air section.
Ballooning quickly became a major "rage" in Europe in the late 18th century, providing the first detailed understanding of the relationship between altitude and the atmosphere. By the early 1900s, ballooning was a popular sport in Britain. These privately owned balloons usually used coal gas as the lifting gas. This has about half the lifting power of hydrogen, so the balloons had to be larger; however, coal gas was far more readily available, and the local gas works sometimes provided a special lightweight formula for ballooning events.
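The "about half the lifting power" comparison is easy to verify with rough gas densities (typical modern values; coal gas composition varied from one gas works to another, so these are assumptions for illustration):

```python
AIR = 1.225  # kg/m^3, sea-level air
# Approximate densities in kg/m^3; coal gas varied widely in practice.
DENSITY = {"hydrogen": 0.090, "coal gas": 0.58}

# Lift per cubic metre is the density deficit relative to air.
lift_per_m3 = {gas: AIR - rho for gas, rho in DENSITY.items()}
ratio = lift_per_m3["coal gas"] / lift_per_m3["hydrogen"]  # roughly 0.5-0.6
```

With these figures a coal-gas balloon delivers a little over half the lift of hydrogen per unit volume, so it needed nearly twice the envelope volume for the same payload, matching the larger balloons described above.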
Tethered balloons were used during the American Civil War by the Union Army Balloon Corps. In 1863, the young Ferdinand von Zeppelin, who was acting as a military observer with the Union Army of the Potomac, first flew as a balloon passenger in a balloon that had been in service with the Union army. Later that century, the British Army would make use of observation balloons during the Boer War.
=== Dirigibles or airships ===
Work on developing a dirigible (steerable) balloon, nowadays called an airship, continued sporadically throughout the 19th century. The first sustained powered, controlled flight in history is believed to have taken place on 24 September 1852 when Henri Giffard flew about 17 miles (27 km) in France from Paris to Trappes with the Giffard dirigible, a non-rigid airship filled with hydrogen and powered by a 3 horsepower (2.2 kW) steam engine driving a 3-bladed propeller.
In 1863, Solomon Andrews flew his aereon design, an unpowered, controllable dirigible in Perth Amboy, New Jersey. He flew a later design in 1866 around New York City and as far as Oyster Bay, New York. His technique of gliding under gravity works by changing the lift to provide propulsive force as the airship alternately rises and sinks, and so does not need a powerplant.
A further advance was made on 9 August 1884, when the first fully controllable free flight was made by Charles Renard and Arthur Constantin Krebs in a French Army electric-powered airship, La France. The 170-foot (52 m) long, 66,000-cubic-foot (1,900 m3) airship covered 8 km (5 mi) in 23 minutes with the aid of an 8.5 horsepower (6.3 kW) electric motor, returning to its starting point. This was the first flight over a closed circuit.
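As a quick sanity check on the reported figures, La France's average speed over the closed circuit works out to just under 6 m/s:

```python
# Reported figures for La France's closed-circuit flight of 9 August 1884.
distance_m = 8000.0   # 8 km circuit
time_s = 23 * 60      # 23 minutes

avg_speed_ms = distance_m / time_s  # ~5.8 m/s
avg_speed_kmh = avg_speed_ms * 3.6  # ~21 km/h
```

That is close to the roughly 6 m/s figure later cited as La France's record, so the reported distance and duration are mutually consistent.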
These aircraft were not practical. Besides being generally frail and short-lived, they were non-rigid or at best semi-rigid. Consequently, it was difficult to make them large enough to carry a commercial load.
Count Ferdinand von Zeppelin realised that a rigid outer frame would allow a much bigger airship. He founded the Zeppelin firm, whose rigid Luftschiff Zeppelin 1 (LZ 1) first flew from the Bodensee on the Swiss border on 2 July 1900. The flight lasted 18 minutes. Two further flights followed in October 1900, the third, on 24 October, beating the 6 m/s (13 mph) speed record of the French airship La France by 3 m/s (7 mph).
The Brazilian Alberto Santos-Dumont became famous by designing, building, and flying dirigibles. He built and flew the first fully practical dirigible capable of routine, controlled flight. With his dirigible No.6 he won the Deutsch de la Meurthe prize on 19 October 1901 with a flight that took off from Saint-Cloud, rounded the Eiffel Tower and returned to its starting point. By this point, the airship had been established as the first practicable form of air travel.
== Heavier than air ==
=== Parachutes ===
Leonardo da Vinci's design for a pyramid-shaped parachute remained unpublished for centuries. The first published design was the Croatian Fausto Veranzio's homo volans (flying man) which appeared in his book Machinae novae (New machines) in 1595. Based on a ship's sail, it comprised a square of material stretched across a square frame and retained by ropes. The parachutist was suspended by ropes from each of the four corners.
Louis-Sébastien Lenormand is considered the first human to make a witnessed descent with a parachute. On 26 December 1783, he jumped from the tower of the Montpellier observatory in France, in front of a crowd that included Joseph Montgolfier, using a 14-foot (4.3 m) parachute with a rigid wooden frame.
Between 1853 and 1854, Louis Charles Letur developed a parachute-glider comprising an umbrella-like parachute with smaller, triangular wings and vertical tail beneath. Letur died after it crashed in 1854.
=== Kites ===
Kites are most notable in the recent history of aviation primarily for their man-carrying or man-lifting capabilities, although they have also been important in other areas such as meteorology.
The Frenchman Gaston Biot developed a man-lifting kite in 1868. Later, in 1880, Biot demonstrated to the French Society for Aerial Navigation a kite based on an open-ended cone, similar to a windsock but attached to a flat surface. The man-carrying kite was developed a stage further in 1894 by Captain Baden Baden-Powell, brother of Lord Baden-Powell, who strung a chain of hexagonal kites on a single line. A significant development came in 1893 when the Australian Lawrence Hargrave invented the box kite and some man-carrying experiments were carried out both in Australia and in the United States. On 27 December 1905, Neil MacDearmid was carried aloft in Baddeck, Nova Scotia, Canada by a large box kite named the Frost King, designed by Alexander Graham Bell.
Balloons were by then in use for both meteorology and military observation. Balloons can only be used in light winds, while kites can only be used in stronger winds. The American Samuel Franklin Cody, working in England, realised that the two types of craft between them allowed operation over a wide range of weather conditions. He developed Hargrave's basic design, adding additional lifting surfaces to create powerful man-lifting systems using multiple kites on a single line. Cody made many demonstrations of his system and would later sell four of his "war kite" systems to the Royal Navy. His kites also found use in carrying meteorological instruments aloft and he was made a fellow of the Royal Meteorological Society. In 1905, Sapper Moreton of the British Army's balloon section was lifted 2,600 feet (790 m) by a kite at Aldershot under Cody's supervision. In 1906, Cody was appointed Chief Instructor in Kiting at the Army School of Ballooning in Aldershot. He soon also joined the newly established Army Balloon Factory at Farnborough and continued developing his war kites for the British Army. In his own time, he developed a manned "glider-kite" which was launched on a tether like a kite and then released to glide freely. In 1907, Cody next fitted an aircraft engine to a modified unmanned "power-kite", the precursor to his later aeroplanes, and flew it inside the Balloon Shed, along a wire suspended from poles, before the Prince and Princess of Wales. The British Army officially adopted his war kites for their Balloon Companies in 1908.
=== 17th and 18th centuries ===
Da Vinci's realisation that manpower alone was not sufficient for sustained flight was rediscovered independently in the 17th century by Giovanni Alfonso Borelli and Robert Hooke. Hooke realised that some form of engine would be necessary and in 1655 made a spring-powered ornithopter model which was apparently able to fly.
Attempts to design or construct a true flying machine began, typically comprising a gondola with a supporting canopy and spring- or man-powered flappers for propulsion. Among the first were Hautsch and Burattini (1648). Others included de Gusmão's "Passarola" (1709 on), Swedenborg (1716), Desforges (1772), Bauer (1764), Meerwein (1781), and Blanchard (1781) who would later have more success with balloons. Rotary-winged helicopters likewise appeared, notably from Lomonosov (1754) and Paucton. A few model gliders flew successfully although some claims are contested, but in any event no full-size craft succeeded.
Italian inventor Tito Livio Burattini, invited by the Polish King Władysław IV to his court in Warsaw, built a model aircraft with four fixed glider wings in 1647. Described as "four pairs of wings attached to an elaborate 'dragon'", it was said to have successfully lifted a cat in 1648 but not Burattini himself. He promised that "only the most minor injuries" would result from landing the craft. His "Dragon Volant" is considered "the most elaborate and sophisticated aeroplane to be built before the 19th Century".
Bartolomeu de Gusmão's "Passarola" was a hollow, vaguely bird-shaped glider of similar concept but with two wings. In 1709, he presented a petition to King John V of Portugal, begging for support for his invention of an "airship", in which he expressed the greatest confidence. The public test of the machine, which was set for 24 June 1709, did not take place. According to contemporary reports, however, Gusmão appears to have made several less ambitious experiments with this machine, descending from eminences. It is certain that Gusmão was working on this principle at the public exhibition he gave before the Court on 8 August 1709, in the hall of the Casa da Índia in Lisbon, when he propelled a ball to the roof by combustion. He also demonstrated a small airship model before the Portuguese court, but never succeeded with a full-scale model.
Both understanding and a power source were still lacking. This was recognised by Emanuel Swedenborg in his "Sketch of a Machine for Flying in the Air" (1716). His flying machine consisted of a light frame covered with strong canvas and provided with two large oars or wings moving on a horizontal axis, arranged so that the upstroke met with no resistance while the downstroke provided lifting power. Swedenborg knew that the machine would not fly, but suggested it as a start and was confident that the problem would be solved. He wrote: "It seems easier to talk of such a machine than to put it into actuality, for it requires greater force and less weight than exists in a human body. The science of mechanics might perhaps suggest a means, namely, a strong spiral spring. If these advantages and requisites are observed, perhaps in time to come some one might know how better to utilize our sketch and cause some addition to be made so as to accomplish that which we can only suggest". The Editor of the Royal Aeronautical Society journal wrote in 1910 that Swedenborg's design was "...the first rational proposal for a flying machine of the aeroplane [heavier-than-air] type..."
Meanwhile, rotorcraft were not wholly forgotten. In July 1754, Mikhail Lomonosov demonstrated a small coaxial twin-rotor system, powered by a spring, to the Russian Academy of Sciences. The rotors were arranged one above the other and spun in opposite directions, principles still used in modern twin-rotor designs. In his 1768 Théorie de la vis d'Archimède, Alexis-Jean-Pierre Paucton suggested the use of one airscrew for lift and a second for propulsion, nowadays called a gyrodyne. In 1784, Launoy and Bienvenu demonstrated a flying model with coaxial, contra-rotating rotors powered by a simple spring similar to a bow saw, now accepted as the first powered helicopter.
Attempts at man-powered flight still persisted. Paucton's rotorcraft was man-powered, while another approach, also originally studied by Leonardo, was the use of flap valves. The flap valve is a simple hinged flap over a hole in the wing. In one direction it opens to allow air through and in the other it closes to allow an increased pressure difference. An early example was designed by Bauer in 1764. Later in 1808, Jacob Degen built an ornithopter with flap valves, in which the pilot stood on a rigid frame and worked the wings with a movable horizontal bar. His 1809 attempt at flight failed, so he then added a small hydrogen balloon and the combination achieved some short hops. Popular illustrations of the day depicted his machine without the balloon, leading to confusion as to what had actually flown. In 1811, Albrecht Berblinger built an ornithopter based on Degen's design but omitted the balloon, plunging instead into the Danube. The fiasco did have an upside: George Cayley, also taken in by the illustrations, was spurred to publish his findings to date "for the sake of giving a little more dignity to a subject bordering upon the ludicrous in public estimation", and the modern era of aviation was born.
=== 19th century ===
Throughout the 19th century, tower jumping was replaced in popularity by the equally-fatal balloon jumping as a way to demonstrate the continued uselessness of man-power and flapping wings. Meanwhile, the scientific study of heavier-than-air flight began in earnest.
==== Sir George Cayley and the first modern aircraft ====
Sir George Cayley was first called the "father of the aeroplane" in 1846. During the last years of the previous century he had begun the first rigorous study of the physics of flight and would later design the first modern heavier-than-air craft. Among his many achievements, his most important contributions to aeronautics include:
Clarifying our ideas and laying down the principles of heavier-than-air flight.
Reaching a scientific understanding of the principles of bird flight.
Conducting scientific aerodynamic experiments demonstrating drag and streamlining, movement of the centre of pressure, and the increase in lift from curving the wing surface.
Defining the modern aeroplane configuration comprising a fixed-wing, fuselage and tail assembly.
Demonstrations of manned, gliding flight.
Setting out the principles of power-to-weight ratio in sustaining flight.
From the age of ten, Cayley began studying the physics of bird flight and his school notebooks contained sketches in which he was developing his ideas on the theories of flight. It has been claimed that these sketches show that Cayley modeled the principles of a lift-generating inclined plane as early as 1792 or 1793.
In 1796, Cayley made a model helicopter of the form commonly known as a Chinese flying top, unaware of Launoy and Bienvenu's model of similar design. He regarded the helicopter as the best design for simple vertical flight, and later in his life in 1854 he made an improved model. He gave a Mr. Cooper credit for being the first person to improve on "the clumsy structure of the toy" and reports Cooper's model as ascending twenty or thirty feet. Cayley made one and a Mr. Coulson made a copy, described by Cayley as "a very beautiful specimen of the screw propeller in the air" and capable of flying over ninety feet high.
Cayley's next innovations were twofold: adopting the whirling arm test rig for aircraft research, and mounting simple aerodynamic models on the arm rather than attempting to fly a model of a complete design. The whirling arm had been invented in the previous century by Benjamin Robins to investigate aerodynamic drag, and was soon afterwards used by John Smeaton to measure the forces on rotating windmill blades. Cayley initially used a simple flat plate fixed to the arm and inclined at an angle to the airflow.
In 1799, he set down the concept of the modern aeroplane as a fixed-wing flying machine with separate systems for lift, propulsion, and control. On a small silver disc dated that year, he engraved on one side the forces acting on an aircraft and on the other a sketch of an aircraft design incorporating such modern features as a cambered wing, separate tail comprising a horizontal tailplane and vertical fin, and fuselage for the pilot suspended below the center of gravity to provide stability. The design is not yet wholly modern, incorporating as it does two pilot-operated paddles or oars which appear to work as flap valves.
He continued his research, and in 1804 constructed a model glider which was the first modern heavier-than-air flying machine, having the layout of a conventional modern aircraft with an inclined wing towards the front and adjustable tail at the back with both tailplane and fin. The wing was just a toy paper kite, flat, and uncambered. A movable weight allowed adjustment of the model's center of gravity. It was "very pretty to see" when flying down a hillside, and sensitive to small adjustments of the tail.
By the end of 1809, he had constructed the world's first full-size glider and flown it as an unmanned tethered kite. In the same year, goaded by the farcical antics of his contemporaries (see above), he began the publication of a landmark three-part treatise titled "On Aerial Navigation" (1809–1810). In it he wrote the first scientific statement of the problem, "The whole problem is confined within these limits, viz. to make a surface support a given weight by the application of power to the resistance of air". He identified the four vector forces that influence an aircraft: thrust, lift, drag and weight and distinguished stability and control in his designs. He argued that manpower alone was insufficient, and while no suitable power source was yet available he discussed the possibilities and even described the operating principle of the internal combustion engine using a gas and air mixture. However he was never able to make a working engine and confined his flying experiments to gliding flight. He also identified and described the importance of the cambered aerofoil, dihedral, diagonal bracing and drag reduction, and contributed to the understanding and design of ornithopters and parachutes.
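Cayley's four-force decomposition leads directly to the modern treatment of the steady glide, the regime his experiments were confined to. A minimal sketch (the lift-to-drag value below is hypothetical, chosen only for illustration, and not a figure for any of Cayley's gliders):

```python
import math

def glide_distance(height_m: float, lift_to_drag: float) -> float:
    """In a steady glide the four forces (lift, drag, weight) balance,
    so ground distance covered equals release height times L/D."""
    return height_m * lift_to_drag

def glide_angle_deg(lift_to_drag: float) -> float:
    """Glide-path angle below the horizon: tan(theta) = D/L."""
    return math.degrees(math.atan(1.0 / lift_to_drag))

# Hypothetical numbers: a glider with L/D = 5 released from a 30 m slope
dist = glide_distance(30.0, 5.0)  # 150 m of ground covered
angle = glide_angle_deg(5.0)      # a descent of about 11 degrees
```

The same balance shows why Cayley stressed power-to-weight ratio: sustained level flight requires a powerplant that can continuously supply the energy a glider instead draws from losing height.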
In 1848, he had progressed far enough to construct a glider in the form of a triplane large and safe enough to carry a child. A local boy was chosen but his name is not known.
He went on to publish the design for a full-size manned glider or "governable parachute" to be launched from a balloon in 1852 and then to construct a version capable of launching from the top of a hill, which carried the first adult aviator across Brompton Dale in 1853. The identity of the aviator is not known. It has been suggested variously as Cayley's coachman, footman or butler, John Appleby who may have been the coachman or another employee, or even Cayley's grandson George John Cayley. What is known is that he was the first to fly in a glider with distinct wings, fuselage and tail, and featuring inherent stability and pilot-operated controls: the first fully modern and functional heavier-than-air craft.
Minor inventions included the rubber-powered motor, which provided a reliable power source for research models. By 1808, he had even re-invented the wheel, devising the tension-spoked wheel in which all compression loads are carried by the rim, allowing a lightweight undercarriage.
==== The age of steam ====
Drawing directly from Cayley's work, William Samuel Henson's 1842 design for an aerial steam carriage broke new ground. Henson proposed a 150 feet (46 m) span high-winged monoplane, with a steam engine driving two pusher configuration propellers. Although it remained only a design (scale models were built in 1843 or 1848 and flew 3 or 40 metres (10 or 130 ft)), it was the first design in history for a propeller-driven fixed-wing aircraft. Henson and his collaborator John Stringfellow even dreamed of the first Aerial Transit Company.
In 1856, Frenchman Jean-Marie Le Bris made the first flight higher than his point of departure, by having his glider "L'Albatros artificiel" pulled by a horse on a beach. He reportedly achieved a height of 100 meters, over a distance of 200 meters.
The British advances had galvanised French researchers. Starting in 1857, Félix du Temple and his brother Louis built several models using a clockwork mechanism as a power source and later a small steam engine. In 1857 or 1858, a 680-gram (1.5 lb) model was able to fly briefly and land.
In 1866, Francis Herbert Wenham presented the first paper to the newly formed Aeronautical Society (later the Royal Aeronautical Society), On Aerial Locomotion. He took Cayley's work on cambered wings further, making important findings about both the wing aerofoil section and lift distribution. To test his ideas, from 1858 he constructed several gliders, both manned and unmanned, and with up to five stacked wings. He concluded correctly that long, thin wings would be better than the bat-like ones suggested by many, because they would have more leading edge for their area. Today this relationship is known as the aspect ratio of a wing.
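Wenham's conclusion can be made concrete with a small illustrative calculation (the wing dimensions below are hypothetical, not Wenham's own figures): aspect ratio is defined as the square of the span divided by the planform area, so for equal area a long, thin wing scores far higher than a short, broad one.

```python
# Illustrative only: two hypothetical wings of equal area, showing why
# Wenham favoured long, thin planforms. Aspect ratio AR = span^2 / area.

def aspect_ratio(span_m: float, area_m2: float) -> float:
    """Aspect ratio of a wing: span squared divided by planform area."""
    return span_m ** 2 / area_m2

# Both hypothetical wings have the same 10 m^2 of lifting area:
bat_like = aspect_ratio(span_m=4.0, area_m2=10.0)    # short, broad wing
long_thin = aspect_ratio(span_m=10.0, area_m2=10.0)  # Wenham-style wing

print(f"bat-like wing AR:  {bat_like:.1f}")   # 1.6
print(f"long thin wing AR: {long_thin:.1f}")  # 10.0
```

The longer wing has more leading edge for the same area, which is the intuition Wenham arrived at experimentally.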
The latter part of the 19th century became a period of intense study, characterized by the "gentleman scientists" who represented most research efforts until the 20th century. Among them was the British scientist-philosopher and inventor Matthew Piers Watt Boulton, who wrote an important paper in 1864, On Aërial Locomotion, which also described lateral flight control. He was the first to patent an aileron control system in 1868.
In 1864, Le Comte Ferdinand Charles Honore Phillipe d'Esterno published a study On the Flight of Birds (Du Vol des Oiseaux), and the next year Louis Pierre Mouillard published an influential book The Empire of the Air (l'Empire de l'Air).
1866 saw the founding of the Aeronautical Society of Great Britain and two years later the world's first aeronautical exhibition was held at the Crystal Palace, London, where Stringfellow was awarded a £100 prize for the steam engine with the best power-to-weight ratio.
In 1871, Wenham and Browning made the first wind tunnel. Members of the Society used the tunnel and learned that cambered wings generated considerably more lift than expected by Cayley's Newtonian reasoning, with lift-to-drag ratios of about 5:1 at 15 degrees. This clearly demonstrated the possibility of building practical heavier-than-air flying machines: what remained were the problems of controlling and powering the craft.
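The scale of the remaining power problem can be sketched with a back-of-the-envelope calculation (the craft weight and airspeed here are hypothetical; only the roughly 5:1 lift-to-drag ratio comes from the wind-tunnel results above): in steady level flight, drag equals weight divided by the lift-to-drag ratio, and the power required is drag times airspeed.

```python
# Rough sketch (illustrative figures): what a 5:1 lift-to-drag ratio
# implied for the problem of powering a heavier-than-air craft.

def required_power_w(weight_n: float, lift_to_drag: float, speed_ms: float) -> float:
    """Power for level flight: drag force times airspeed,
    where drag = weight / (lift-to-drag ratio)."""
    drag_n = weight_n / lift_to_drag
    return drag_n * speed_ms

# A hypothetical 100 kg craft (~981 N) at 10 m/s with a 5:1 ratio:
power = required_power_w(weight_n=981.0, lift_to_drag=5.0, speed_ms=10.0)
print(f"{power:.0f} W")  # 1962 W
```

Even at these modest figures the answer is on the order of kilowatts, far beyond sustained human power output — consistent with Cayley's earlier argument that manpower alone was insufficient.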
The Frenchman Alphonse Pénaud (1850–1880) made significant contributions to aeronautics. He advanced the theory of wing contours and aerodynamics and constructed successful models of aeroplanes, helicopters and ornithopters. In 1871, he flew the first aerodynamically stable fixed-wing aeroplane, a model monoplane he called the "Planophore", a distance of 40 metres (130 ft). Pénaud's model incorporated several of Cayley's discoveries, including the use of a tail, wing dihedral for inherent stability, and rubber power. The planophore also had longitudinal stability, being trimmed such that the tailplane was set at a smaller angle of incidence than the wings, an original and important contribution to the theory of aeronautics.
By the 1870s, lightweight steam engines had been developed enough for their experimental use in aircraft.
Félix du Temple eventually achieved a short hop with a full-size manned craft in 1874. His "Monoplane" was a large aircraft made of aluminium, with a wingspan of 42 ft 8 in (13 m) and a weight of only 176 pounds (80 kg) without the pilot. Several trials were made with the aircraft, and it achieved lift-off under its own power after launching from a ramp, glided for a short time and returned safely to the ground, making it the first successful powered hop in history, a year ahead of Moy's flight.
The Aerial Steamer, made by Thomas Moy, sometimes called the Moy-Shill Aerial Steamer, was an unmanned tandem wing aircraft driven by a 3 horsepower (2.2 kW) steam engine using methylated spirits as fuel. It was 14 feet (4.3 m) long and weighed about 216 pounds (98 kg), of which the engine accounted for 80 pounds (36 kg), and ran on three wheels. It was tested in June 1875 on a circular rolled gravel track of nearly 300 feet (91 m) diameter. It never exceeded 12 miles per hour (19 km/h), whereas a speed of around 35 miles per hour (56 km/h) would have been necessary to lift off. Nevertheless, the historian Charles Gibbs-Smith credited it with being the first steam-powered aircraft to have left the ground under its own power.
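The gap between 12 mph achieved and roughly 35 mph required is larger than it first appears, because lift grows with the square of airspeed. A rough sketch (treating lift as proportional to v² and ignoring every other factor):

```python
# Rough sketch (illustrative): lift scales with the square of airspeed,
# so the Aerial Steamer's 12 mph fell far short of the ~35 mph needed.

def lift_fraction(achieved_mph: float, required_mph: float) -> float:
    """Fraction of take-off lift generated, assuming lift proportional to v^2."""
    return (achieved_mph / required_mph) ** 2

frac = lift_fraction(12.0, 35.0)
print(f"{frac:.0%} of the lift needed to fly")  # about 12%
```

Under this simplification the machine was generating only about an eighth of the lift needed for sustained flight, which puts Gibbs-Smith's "left the ground" credit in perspective.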
Pénaud's later project for an amphibian aeroplane, although never built, incorporated other modern features. A tailless monoplane with a single vertical fin and twin tractor airscrews, it also featured hinged rear elevator and rudder surfaces, retractable undercarriage and a fully enclosed, instrumented cockpit.
Equally authoritative as a theorist was Pénaud's fellow countryman Victor Tatin. In 1879, he flew a model which, like Pénaud's project, was a monoplane with twin tractor propellers but also had a separate horizontal tail. It was powered by compressed air, with the air tank forming the fuselage.
In Russia Alexander Mozhaiski constructed a steam-powered monoplane driven by one large tractor and two smaller pusher propellers. In 1884, it was launched from a ramp and remained airborne for 98 feet (30 m).
That same year in France, Alexandre Goupil published his work La Locomotion Aérienne (Aerial Locomotion), although the flying machine he later constructed failed to fly.
Sir Hiram Maxim was an American who moved to England and adopted English nationality. He chose to largely ignore his contemporaries and built his own whirling arm rig and wind tunnel. In 1889, he built a hangar and workshop in the grounds of Baldwyn's Manor at Bexley, Kent, and made many experiments. He developed a biplane design which he patented in 1891 and completed as a test rig three years later. It was an enormous machine, with a wingspan of 105 feet (32 m), a length of 145 feet (44 m), fore and aft horizontal surfaces and a crew of three. Twin propellers were powered by two lightweight compound steam engines each delivering 180 horsepower (130 kW). Overall weight was 7,000 pounds (3,200 kg). Later modifications would add more wing surfaces as shown in the illustration. Its purpose was for research and it was neither aerodynamically stable nor controllable, so it ran on a 1,800 feet (550 m) track with a second set of restraining rails to prevent it from lifting off, somewhat in the manner of a roller coaster. In 1894, the machine developed enough lift to take off, breaking one of the restraining rails and being damaged in the process. Maxim then abandoned work on it but would return to aeronautics in the 20th century to test a number of smaller designs powered by internal combustion engines.
One of the last of the steam-powered pioneers, like Maxim ignoring his contemporaries who had moved on (see next section), was Clément Ader. His Éole of 1890 was a bat-winged tractor monoplane which achieved a brief, uncontrolled hop, thus becoming the first heavier-than-air machine to take off under its own power. However his similar but larger Avion III of 1897, notable only for having twin steam engines, failed to fly. After a short run the machine was caught by a gust of wind, slewed off the track, and came to a stop. After this the French army withdrew its funding but kept the results secret; in November 1910 the commission released its official reports on Ader's attempted flights, stating that they had been unsuccessful.
==== Learning to glide ====
The glider constructed with the help of Massia and flown briefly by Biot in 1879 was based on the work of Mouillard and was still bird-like in form. It is preserved at the Musée de l'Air, France, and is claimed to be the earliest man-carrying flying machine still in existence.
In the last decade or so of the 19th century, a number of key figures were refining and defining the modern aeroplane. The Englishman Horatio Phillips made key contributions to aerodynamics. The German Otto Lilienthal and the American Octave Chanute worked independently on gliding flight. Lilienthal published a book on bird flight and went on, from 1891 to 1896, to construct a series of gliders, of various monoplane, biplane and triplane configurations, to test his theories. He made thousands of flights and at the time of his death was working on motor-powered gliders.
Phillips conducted extensive wind tunnel research on aerofoil sections, using steam as the working fluid. He proved the principles of aerodynamic lift foreseen by Cayley and Wenham and, from 1884, took out several patents on aerofoils. His findings underpin all modern aerofoil design. Phillips would later develop theories on the design of multiplanes, which he went on to show were unfounded.
Starting in the 1880s, advances were made in construction that led to the first truly practical gliders. Four people in particular were active: John J. Montgomery, Otto Lilienthal, Percy Pilcher and Octave Chanute. One of the first modern gliders was built by John J. Montgomery in 1883; Montgomery later claimed to have made a single successful flight with it in 1884 near San Diego, and Montgomery's activities were documented by Chanute in his book Progress in Flying Machines. Montgomery discussed his flying during the 1893 Aeronautical Conference in Chicago and Chanute published Montgomery's comments in December 1893 in the American Engineer & Railroad Journal. Short hops with Montgomery's second and third gliders in 1885 and 1886 were also described by Montgomery. Between 1886 and 1896 Montgomery focused on understanding the physics of aerodynamics rather than experimenting with flying machines. Another hang-glider had been constructed by Wilhelm Kress as early as 1877 near Vienna.
Otto Lilienthal was known as the "Glider King" or "Flying Man" of Germany. He duplicated Wenham's work and greatly expanded on it in 1884, publishing his research in 1889 as Birdflight as the Basis of Aviation (Der Vogelflug als Grundlage der Fliegekunst). He also produced a series of gliders of a type now known as the hang glider, including bat-wing, monoplane and biplane forms, such as the Derwitzer Glider and Normal soaring apparatus. Starting in 1891 he became the first person to make controlled untethered glides routinely, and the first to be photographed flying a heavier-than-air machine, stimulating interest around the world. He rigorously documented his work, including photographs, and for this reason is one of the best known of the early pioneers. He also promoted the idea of "jumping before you fly", suggesting that researchers should start with gliders and work their way up, instead of simply designing a powered machine on paper and hoping it would work. Lilienthal made over 2,000 glides until his death in 1896 from injuries sustained in a glider crash. Lilienthal had also been working on small engines suitable for powering his designs at the time of his death.
Picking up where Lilienthal left off, Octave Chanute took up aircraft design after an early retirement and funded the development of several gliders. In the summer of 1896, his team flew several of their designs many times at Miller Beach, Indiana, eventually deciding that the best was a biplane design. Like Lilienthal, he documented his work and also photographed it, and was busy corresponding with like-minded researchers around the world. Chanute was particularly interested in solving the problem of aerodynamic instability of the aircraft in flight, which birds compensate for by instant corrections, but which humans would have to address either with stabilizing and control surfaces or by moving the center of gravity of the aircraft, as Lilienthal did. The most disconcerting problem was longitudinal instability (divergence), because as the angle of attack of a wing increases, the center of pressure moves forward and makes the angle increase yet more. Without immediate correction, the craft will pitch up and stall. Much more difficult to understand was the relationship between lateral and directional control.
In Britain, Percy Pilcher, who had worked for Maxim and had built and successfully flown several gliders during the mid to late 1890s, constructed a prototype powered aircraft in 1899 which, recent research has shown, would have been capable of flight. However, like Lilienthal he died in a glider accident before he was able to test it.
Publications, particularly Octave Chanute's Progress in Flying Machines of 1894 and James Means' The Problem of Manflight (1894) and Aeronautical Annuals (1895–1897) helped bring current research and events to a wider audience.
The invention of the box kite during this period by the Australian Lawrence Hargrave led to the development of the practical biplane. In 1894, Hargrave linked four of his kites together, added a sling seat, and flew 16 feet (4.9 m). By demonstrating to a sceptical public that it was possible to build a safe and stable flying machine, Hargrave opened the door to other inventors and pioneers. Hargrave devoted most of his life to constructing a machine that would fly. He believed passionately in open communication within the scientific community and would not patent his inventions. Instead, he scrupulously published the results of his experiments in order that a mutual interchange of ideas may take place with other inventors working in the same field, so as to expedite joint progress. By 1889, he had constructed a rotary engine driven by compressed air.
Octave Chanute became convinced that multiple wing planes were more effective than a monoplane and introduced the "strut-wire" braced wing structure which, with its combination of rigidity and lightness, would in the form of the biplane come to dominate aircraft design for decades to come.
Even balloon-jumping began to succeed. In 1905, Daniel Maloney was carried by balloon in a tandem-wing glider designed by John Montgomery to an altitude of 4,000 feet (1,200 m) before being released, gliding down and landing at a predetermined location as part of a large public demonstration of aerial flight at Santa Clara, California. However, after several successful flights, during an ascension in July 1905, a rope from the balloon struck the glider, and the glider suffered structural failure after release, resulting in Maloney's death.
=== Adding power ===
==== Whitehead ====
Gustave Weißkopf was a German who emigrated to the U.S., where he soon changed his name to Whitehead. From 1897 to 1915 he designed and built flying machines and engines. On 14 August 1901 Whitehead claimed to have carried out a controlled, powered flight in his Number 21 monoplane at Fairfield, Connecticut. An account of the alleged flight was published in the Bridgeport Sunday Herald and was repeated in many U.S. newspapers and a few overseas. Whitehead claimed two more flights on 17 January 1902, using his Number 22 monoplane. He described it as having a 40 horsepower (30 kW) motor with twin tractor propellers and controlled by differential propeller speed and rudder. He claimed to have flown a 10 kilometres (6.2 mi) circle.
Whitehead's claims are generally rejected by aviation historians. The Smithsonian Institution and Royal Aeronautical Society are among those who do not accept that Whitehead flew as reported. In March 2013 Jane's All the World's Aircraft published an editorial by Paul Jackson which accepted Whitehead's flight as the first manned, powered, controlled flight of a heavier-than-air craft. The corporate owner of Jane's subsequently distanced itself from the editorial, stating "the article reflected Mr. Jackson's opinion on the issue and not that of IHS Jane's".
==== Langley ====
After a distinguished career in astronomy and shortly before becoming Secretary of the Smithsonian Institution, Samuel Pierpont Langley started a serious investigation into aerodynamics at what is today the University of Pittsburgh. In 1891, he published Experiments in Aerodynamics detailing his research, and then turned to building his designs. He hoped to achieve automatic aerodynamic stability, so he gave little consideration to in-flight control. On 6 May 1896, Langley's Aerodrome No. 5 made the first successful sustained flight of an unpiloted, engine-driven heavier-than-air craft of substantial size. It was launched from a spring-actuated catapult mounted on top of a houseboat on the Potomac River near Quantico, Virginia. Two flights were made that afternoon, one of 1,005 metres (3,297 ft) and a second of 700 metres (2,300 ft), at a speed of approximately 25 miles per hour (40 km/h). On both occasions, the Aerodrome No. 5 landed in the water as planned, because in order to save weight, it was not equipped with landing gear. On 28 November 1896, another successful flight was made with the Aerodrome No. 6. This flight, of 1,460 metres (4,790 ft), was witnessed and photographed by Alexander Graham Bell. The Aerodrome No. 6 was actually Aerodrome No. 4 greatly modified. So little remained of the original aircraft that it was given a new designation.
With the successes of the Aerodrome No. 5 and No. 6, Langley started looking for funding to build a full-scale man-carrying version of his designs. Spurred by the Spanish–American War, the U.S. government granted him $50,000 to develop a man-carrying flying machine for aerial reconnaissance. Langley planned on building a scaled-up version known as the Aerodrome A, and started with the smaller Quarter-scale Aerodrome, which flew twice on 18 June 1901, and then again with a newer and more powerful engine in 1903.
With the basic design apparently successfully tested, he then turned to the problem of a suitable engine. He contracted Stephen Balzer to build one, but was disappointed when it delivered only 8 horsepower (6.0 kW) instead of the 12 horsepower (8.9 kW) he expected. Langley's assistant, Charles M. Manly, then reworked the design into a five-cylinder water-cooled radial that delivered 52 horsepower (39 kW) at 950 rpm, a feat that took years to duplicate. Now with both power and a design, Langley put the two together with great hopes.
To his dismay, the resulting aircraft proved to be too fragile. Simply scaling up the original small models resulted in a design that was too weak to hold itself together. Two launches in late 1903 both ended with the Aerodrome immediately crashing into the water. The pilot, Manly, was rescued each time. The aircraft's control system was also inadequate for quick pilot responses, it had no method of lateral control, and its aerial stability was marginal.
Langley's attempts to gain further funding failed, and his efforts ended. Nine days after his second abortive launch on 8 December, the Wright brothers successfully flew their Flyer. Glenn Curtiss made 93 modifications to the Aerodrome and flew this very different aircraft in 1914. Without acknowledging the modifications, the Smithsonian Institution asserted that Langley's Aerodrome was the first machine "capable of flight". The Smithsonian eventually retracted this claim in 1928.
==== Richard Pearse ====
Richard William Pearse (3 December 1877 – 29 July 1953) was a New Zealand farmer and inventor who performed pioneering aviation experiments. Witnesses interviewed many years afterwards describe observing Pearse flying and landing a powered heavier-than-air machine on 31 March 1903.
==== The Wright brothers ====
The Wrights solved both the control and power problems that confronted aeronautical pioneers. They invented roll control using wing warping and combined roll with simultaneous yaw control using a steerable rear rudder. Although wing-warping as a means of roll control was used only briefly during the early history of aviation, the innovation of combining roll and yaw control was a fundamental advance in flight control. For pitch control, the Wrights used a forward elevator (canard), another design element that later became outmoded.
The Wrights made rigorous wind-tunnel tests of airfoils and flight tests of full-size gliders. They not only built a working powered aircraft, the Wright Flyer, but also significantly advanced the science of aeronautical engineering.
They concentrated on the controllability of unpowered aircraft before attempting to fly a powered design. From 1900 to 1902, they built and flew a series of three gliders. The first two were much less efficient than the Wrights expected, based on experiments and writings of their 19th-century predecessors. Their 1900 glider had only about half the lift they anticipated, and the 1901 glider performed even more poorly, until makeshift modifications made it serviceable.
Seeking answers, the Wrights constructed their own wind tunnel and equipped it with a sophisticated measuring device to calculate lift and drag of 200 different model-size wing designs they created. As a result, the Wrights corrected earlier mistakes in calculations of lift and drag and used this knowledge to construct their 1902 glider, third in the series. It became the first manned, heavier-than-air flying machine that was mechanically controllable in all three axes: pitch, roll and yaw. Its pioneering design also included wings with a higher aspect ratio than the previous gliders. The brothers successfully flew the 1902 glider hundreds of times, and it performed far better than their earlier two versions.
To obtain adequate power for their engine-driven Flyer, the Wrights designed and built a low-powered internal combustion engine. Using their wind tunnel data, they designed and carved wooden propellers that were more efficient than any before, enabling them to gain adequate performance from their low engine power. The Flyer's design was also influenced by the desire of the Wrights to teach themselves to fly safely without unreasonable risk to life and limb, and to make crashes survivable. The limited engine power resulted in low flying speeds and the need to take off into a headwind.
According to the Smithsonian Institution and Fédération Aéronautique Internationale (FAI), the Wrights made the first sustained, controlled, powered heavier-than-air manned flight at Kill Devil Hills, North Carolina, 4 miles (6.4 km) south of Kitty Hawk, North Carolina, on 17 December 1903. The first flight by Orville Wright, of 120 feet (37 m) in 12 seconds, was recorded in a famous photograph. In the fourth flight of the same day, Wilbur Wright flew 852 feet (260 m) in 59 seconds. Modern analysis by Professor Fred E. C. Culick and Henry R. Rex (1985) has demonstrated that the 1903 Wright Flyer was so unstable as to be almost unmanageable by anyone but the Wrights, who had trained themselves in the 1902 glider.
The Wrights continued developing their flying machines and flying at Huffman Prairie near Dayton, Ohio, in 1904–05. After a crash in 1905, they rebuilt the Flyer III and made important design changes. They almost doubled the size of the elevator and rudder and moved them about twice the distance from the wings. They added two fixed vertical vanes (called "blinkers") between the elevators, and gave the wings a very slight dihedral. They disconnected the rudder from the wing-warping control, and as in all future aircraft, placed it on a separate control handle. The Flyer III became the first practical aircraft (though without wheels and using a launching device), flying consistently under full control and bringing its pilot back to the starting point safely and landing without damage. On 5 October 1905, Wilbur flew 24 miles (39 km) in 39 minutes 23 seconds.
Eventually the Wrights would abandon the foreplane altogether, with the Model B of 1910 instead having a tail plane in the manner which was by then becoming conventional.
According to the April 1907 issue of the Scientific American magazine, the Wright brothers seemed to have the most advanced knowledge of heavier-than-air navigation at the time. However, the same issue also claimed that no public flight had been made in the United States before April 1907. The magazine therefore devised the Scientific American Aeronautic Trophy in order to encourage the development of a heavier-than-air flying machine.
=== The first practical aircraft ===
Once powered, controlled flight had been achieved, progress was still needed to create a practical flying machine for general use. This period leading up to World War I is sometimes called the pioneer era of aviation.
==== Reliable power ====
The history of early powered flight is very much the history of early engine construction. The Wrights designed their own engines. They used a single flight engine, a 12 horsepower (8.9 kW) water-cooled four-cylinder inline type with five main bearings and fuel injection. Whitehead's craft was powered by two engines of his design: a ground engine of 10 horsepower (7.5 kW) which drove the front wheels in an effort to reach takeoff speed and a 20 horsepower (15 kW) acetylene engine powering the propellers. Whitehead was an experienced machinist, and he is reported to have raised funds for his aircraft by making and selling engines to other aviators. Most early engines were neither powerful nor reliable enough for practical use, and the development of improved engines went hand-in-hand with improvements in the airframes themselves.
In Europe, Léon Levavasseur's Antoinette 8V, a pioneering example of the V-8 engine format that he first patented in 1902, dominated flight for several years after its introduction in 1906, powering many notable craft of that era. Incorporating direct fuel injection, evaporative water cooling and other advanced features, it generated around 50 horsepower (37 kW).
The British Green C.4 of 1908 followed the Wrights' pattern of a four-cylinder inline water-cooled design but produced 52 horsepower (39 kW). It powered many successful pioneer aircraft including those of A.V. Roe.
Horizontally opposed designs were also produced. The four-cylinder water-cooled de Havilland Iris achieved 45 horsepower (34 kW) but was little used, while the successful two-cylinder Nieuport design achieved 28 hp (21 kW) in 1910.
1909 saw radial engine forms rise to significance. The Anzani 3-cylinder semi-radial or fan engine of 1909 (also built in a true, 120° cylinder angle radial form) developed only 25 horsepower (19 kW) but was much lighter than the Antoinette, and was chosen by Louis Blériot for his cross-Channel flight. More radical was the Seguin brothers' series of Gnome rotary radial engines, starting with the Gnome Omega 50 horsepower (37 kW) air-cooled seven-cylinder rotary engine. In a rotary engine, the crankshaft is fixed to the airframe and the whole engine casing and cylinders rotate with the propeller. Although this type had been introduced as long ago as 1887 by Lawrence Hargrave, improvements made to the Gnome created a robust, relatively reliable and lightweight design which revolutionised aviation and would see continuous development over the next ten years. Fuel was introduced into each cylinder direct from the crankcase, meaning that only an exhaust valve was required. The larger and more powerful nine-cylinder, 80 horsepower Le Rhône 9C rotary was introduced in 1913 and was widely adopted for military use.
Inline and vee types remained popular, with the German company Mercedes producing a series of water-cooled six-cylinder models. In 1913, they introduced the highly successful 75 kilowatts (101 hp) D.I series.
==== Lift and efficiency ====
The lightness and strength of the biplane is offset by the inefficiency inherent in placing two wings so close together. Biplane and monoplane designs vied with each other, with both still in production by the outbreak of war in 1914.
A notable development, although a failure, was the first cantilever monoplane ever built. The Antoinette Monobloc of 1911 had a fully enclosed cockpit and faired undercarriage but its V-8 engine's 50 horsepower (37 kW) output was not enough for it to fly for more than a few feet at most. More successful was the Deperdussin braced monoplane, which won the inaugural 1913 Schneider Trophy race flown by Maurice Prévost, completing 28 circuits of the 10 km (6.2 mi) course with an average speed of 73.63 kilometres per hour (45.75 mph).
Triplanes too were experimented with, notably a series built between 1909 and 1910 by the British pioneer A.V. Roe. Going one better with four wings, the quadruplane too made rare appearances. The multiplane, having large numbers of very thin wings, was also experimented with, most successfully by Horatio Phillips. His final prototype confirmed the inefficiency and poor performance of the idea.
Other radical approaches to wing design were also being tried. The Scottish-born inventor Alexander Graham Bell devised a cellular octahedral wing form which, like the multiplane, proved disappointingly inefficient. Other lacklustre performers included the Edwards Rhomboidal, the Lee-Richards annular wing, and tandem arrangements with varying numbers of wings set one behind the other.
Many of these early experimental forms were in principle quite practical and have since reappeared.
==== Stability and control ====
Early work had focused primarily on making a craft stable enough to fly but failed to offer full controllability, while the Wrights had sacrificed stability in order to make their Flyer fully controllable. A practical aircraft requires both. Although stability had been achieved by several designs, the principles were not fully understood and progress was erratic. The aileron slowly replaced wing warping for lateral control although designers sometimes, as with the Blériot XI, returned briefly to wing warping. Similarly, all-flying tail surfaces gave way to fixed stabilizers with hinged control surfaces attached. The canard pusher configuration of the early Wright Flyers was supplanted by tractor propeller aircraft designs.
In France, progress was relatively rapid.
On October 23 and November 12, 1906, the Brazilian Alberto Santos-Dumont made public flights in France with his 14-bis. A canard pusher biplane with pronounced wing dihedral, it had a Hargrave-style box-cell wing with a forward-mounted "boxkite" assembly which was movable to act as both elevator and rudder. His flight was the first made by a powered heavier-than-air machine to be verified by the Aéro-Club de France, and won the Deutsch-Archdeacon Prize for the first officially observed flight of more than 25 metres (82 ft). It later set the first world record recognized by the Fédération Aéronautique Internationale by flying 220 metres (720 ft) in 21.5 seconds. It had no lateral control, so after these flights, in late November, he added auxiliary surfaces between the wings as primitive ailerons, and made a few more flights.
The next year Louis Blériot flew the Blériot VII, a tractor monoplane with full three-axis control using the horizontal tail surfaces as combined elevators and ailerons. Its immediate descendant, the Blériot VIII, was the very first airframe to bring together the recognizable elements of the modern aircraft flight control system in April 1908. Where Horatio Phillips and Traian Vuia had failed, Blériot's was the first practical tractor monoplane and marked the start of a trend in French aviation. By 1909, he had developed this configuration to the point where the Blériot XI was able to cross the English Channel, among other refinements using the tail surfaces only as elevators and using wing warping for lateral control. Another design that appeared in 1907 was the Voisin biplane. This lacked any provision for lateral control, and could only make shallow turns using only rudder control, but was flown with increasing success during the year by Henri Farman, and on 13 January 1908 he won the 50,000 francs Deutsch de la Meurthe-Archdeacon Grand Prix de l'Aviation for being the first aviator to complete an officially observed 1 kilometre closed circuit flight, including taking off and landing under the aircraft's own power.
The designs of the French pioneer Léon Levavasseur are better known by the name of the Antoinette company which he founded. His Antoinette IV of 1908 was a monoplane of what is now the conventional configuration, with tailplane and fin each bearing movable control surfaces, and ailerons on the wings. The ailerons were not sufficiently effective and on later models were replaced by wing warping.
At the end of 1908, the Voisin brothers sold an aircraft ordered by Henri Farman to J. T. C. Moore-Brabazon. Angered, Farman built his own aircraft, adapting the Voisin design by adding ailerons. Following further modifications to the tail surfaces and ailerons, the Farman III became the most popular aeroplane sold between 1909 and 1911, and was widely imitated. In Britain, the American expatriate Samuel Cody flew an aircraft similar in layout to the Wright flyer in 1908, incorporating a tailplane as well as a large front elevator. In 1910, an improved model fitted with between-wing ailerons won the Michelin Cup competition, while Geoffrey de Havilland's second Farman-style aircraft had ailerons on the upper wing and became the Royal Aircraft Factory F.E.1. The Bristol Boxkite, a copy of the Farman III, was manufactured in quantity. In the USA Glenn Curtiss had flown first the AEA June Bug and then his Golden Flyer, which in 1910 achieved the first naval deck landing and takeoff. Meanwhile, the Wrights themselves had also been wrestling with the problem of achieving both stability and control, experimenting further with the foreplane before first adding a second small plane at the tail and then finally removing the foreplane altogether. They announced their two-seat Model B in 1910 and licensed it for production in 1911 as the Burgess Model F.
Many other more radical layouts were tried, with only a few showing any promise. In the United Kingdom, J. W. Dunne developed a series of tailless pusher designs having swept wings with a conical upper surface. His D.5 biplane flew in 1910 and proved fully stable. Dunne deliberately avoided full three-axis control, devising instead a system which was easier to operate and which he regarded as far safer in practice. Dunne's system would not be widely adopted. His tailless design reached its peak with the D.8, which was manufactured under license in France by Nieuport and in the US as the Burgess-Dunne. However, it was rejected as a practical warplane by the British Army, in which Dunne was an officer, because it was too stable and hence not manoeuvrable enough in battle.
==== Seaplanes ====
1901: In Austria, Wilhelm Kress failed to take off in his underpowered Drachenflieger, a floatplane featuring twin aluminium pontoons and three wings in tandem.
1910: In France, Henri Fabre made the first seaplane flight in his Hydravion, a monoplane with a biplane foreplane and three short floats in tricycle layout.
1912: The world's first seaplane carrier, the French Navy's Foudre, embarked her first floatplane, a Voisin Canard.
A problem with early seaplanes was the tendency for suction between the water and the aircraft as speed increased, holding the aircraft down and hindering takeoff. The British designer John Cyril Porte invented the technique of placing a step in the bottom of the aircraft to break the suction, and this was incorporated in the 1914 Curtiss Model H.
=== Military use ===
In 1909, aeroplanes remained frail and of little practical use. The limited engine power available meant that the effective payload was extremely limited. The basic structural and materials technology of the airframes mostly consisted of hardwood or steel tubing, braced with steel wires and covered in linen fabric doped with a flammable stiffener and sealant. The need to save weight meant that most aircraft were structurally fragile, and not infrequently broke up in flight, especially when performing violent manoeuvres such as pulling out of a steep dive, which would be required in combat.
Even so, these evolving flying machines were recognised to be not just toys, but weapons in the making. In 1909, the Italian staff officer Giulio Douhet remarked:
The sky is about to become another battlefield no less important than the battlefields on land and sea. ... In order to conquer the air, it is necessary to deprive the enemy of all means of flying, by striking at him in the air, at his bases of operation, or at his production centers. We had better get accustomed to this idea, and prepare ourselves.
In 1911, Captain Bertram Dickson, the first British military officer to fly and, during the 1910 army manoeuvres, the first to perform an aerial reconnaissance mission in a fixed-wing aircraft, predicted the military use of aircraft and the ensuing development and escalation of aerial combat in a submission to the UK Technical Sub-Committee for Imperial Defence.
Projectiles were first dropped from an aeroplane when United States Army Lieutenant Paul W. Beck released sandbags simulating bombs over Los Angeles, California.
Aeroplanes were first used in warfare during the Italo-Turkish War of 1911–1912. The first operational use took place on 23 October 1911, when Captain Carlo Piazza made a flight near Benghazi in a Blériot XI. The first aerial bombardment followed shortly afterwards on 1 November, when Second Lieutenant Giulio Gavotti dropped four bombs on two oases held by the Turks. The first photographic reconnaissance flight took place in March 1912, also flown by Captain Piazza.
Some types developed during this period would see military service into, or even throughout, World War I. These include the Etrich Taube of 1910, Fokker Spin of 1911, Royal Aircraft Factory BE.2, Sopwith Tabloid/Schneider and a variety of obsolescent types that would be used for pilot training. The Sikorsky Ilya Muromets (also known as Sikorsky S-22) was the first four-engined aircraft to ever enter production and the largest of its day, the prototype first flying in 1913 just before the outbreak of war. The type would go on to see service in both bomber and transport roles.
=== Helicopters ===
The early work on powered rotor lift was followed up by later investigators, independently of the development of fixed-wing aircraft.
In 19th-century France an association was set up to collaborate on helicopter designs, of which there were many. In 1863 Gustave de Ponton d'Amécourt constructed a model using the established counter-rotating rotors. Initially powered by steam it failed, but a clockwork version did fly. Other designs, covering a wide variety of forms, included those of Pomés and De la Pauze (1871), Pénaud, Achenbach (1874), Dieuaide (1877), Melikoff (1877), Forlanini (1877), Castel (1878), and Dandrieux (1878–79). Of these, Forlanini's steam-powered contra-rotating model flew for 20 seconds, reaching a height of 13 metres (43 ft), and Dandrieux's rubber-powered model also flew.
Hiram Maxim's father conceived of a helicopter powered by two counter-rotating rotors, but was unable to find a powerful enough engine to build it. Hiram himself sketched out plans for a helicopter in 1872 before turning his attention to fixed-wing flight.
In 1907, the French Breguet-Richet Gyroplane No. 1 lifted off in a "tethered" test flight, becoming the first manned helicopter to rise from the ground. It rose about 60 centimetres (24 in) and hovered for a minute. However, the flight proved to be extremely unsteady.
Two months later at Lisieux, France, Paul Cornu made the first free flight in a manned rotary-winged craft in his Cornu helicopter, lifting to 30 centimetres (12 in) and remaining aloft for 20 seconds.
== See also ==
Aviation accidents and incidents
Aviation in the pioneer era (1903–1914)
Aviation in World War I
Claims to the first powered flight
Timeline of aviation
== Notes ==
== References ==
=== Bibliography ===
Walker, P. (1971). Early Aviation at Farnborough, Volume I: Balloons, Kites and Airships, Macdonald.
== External links ==
Aerospaceweb – Who was the first to fly?
Aerospaceweb – Why do Brazilians consider Alberto Santos-Dumont the first man to fly if he didn't fly until 1906 and the Wright brothers did so in 1903?
Pre-Wright flying machines
Aviation Pioneers: An Anthology
The Early Birds of Aviation
Plane truth: list of greatest technical breakthroughs in manned flight by Jürgen Schmidhuber, Nature 421, 689, 2003
Eilmer of Malmesbury

Eilmer of Malmesbury (also known as Oliver due to a scribe's miscopying, or Elmer, or Æthelmær) was an 11th-century English Benedictine monk best known for his early attempt at a gliding flight using wings.
== Life ==
Eilmer was a monk of Malmesbury Abbey who wrote on astrology. All that is known of him is from the Gesta regum Anglorum (Deeds of the English Kings), written by the eminent medieval historian William of Malmesbury in about 1125. Being a fellow monk of the same abbey, William almost certainly obtained his account directly from people who knew Eilmer when he was an old man.
Later scholars, such as the American historian of technology Lynn White, have attempted to estimate Eilmer's date of birth based on a quotation in William's Deeds about Halley's Comet, which appeared in 1066. However, William recorded Eilmer's quotation not to establish his age, but to show that a prophecy was fulfilled when the Normans invaded England.
You've come, have you? – You've come, you source of tears to many mothers. It is long since I saw you; but as I see you now you are much more terrible, for I see you brandishing the downfall of my country.
If Eilmer had seen Halley's Comet 76 years earlier in 989, he could have been born about 984, making him about five or six years old when he first saw the comet, and therefore old enough to remember it. However the periodicity of comets was probably unknown in Eilmer's time, and so his remark "It is long since I saw you" could have referred to a different, later comet. Since it is known that Eilmer was an "old man" in 1066, and that he had made the flight attempt "in his youth", the event is placed some time during the early 11th century, perhaps in its first decade.
Beyond those based on William's account, there are no other known sources documenting Eilmer's life.
== The flight ==
William records that, in Eilmer's youth, he had read and believed the Greek myth of Daedalus. Thus, Eilmer fixed wings to his hands and feet and launched himself from the top of a tower at Malmesbury Abbey:
He was a man learned for those times, of ripe old age, and in his early youth had hazarded a deed of remarkable boldness. He had by some means, I scarcely know what, fastened wings to his hands and feet so that, mistaking fable for truth, he might fly like Daedalus, and, collecting the breeze upon the summit of a tower, flew for more than a furlong [201 metres]. But agitated by the violence of the wind and the swirling of air, as well as by the awareness of his rash attempt, he fell, broke both his legs and was lame ever after. He used to relate as the cause of his failure, his forgetting to provide himself a tail.
Given the geography of the abbey, his landing site, and the account of his flight, to travel "more than a furlong" (220 yards, 201 metres) he would have had to be airborne for about 15 seconds. His exact flightpath is not known, nor how long he was in the air, because today's abbey is not the abbey of the 11th century; the earlier building was probably smaller, although its tower was probably close to the present height. "Olivers Lane", off the present-day High Street and about 200 metres (660 ft) from the abbey, is reputed locally to be the site where Eilmer landed, but a flight to that spot would have carried him over many buildings. Maxwell Woosnam's study concluded that he is more likely to have descended the steep hill to the southwest of the abbey rather than towards the town centre to the south.
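The 15-second estimate can be checked with simple kinematics. A minimal sketch follows; the tower height of 24 m is purely an illustrative assumption (the account gives only the distance and the inferred airborne time), so the derived glide ratio and sink rate are rough indications, not reconstructions.

```python
# Rough kinematics behind the "more than a furlong in about 15 seconds" estimate.
FURLONG_M = 201.0      # "more than a furlong" (from the account)
airborne_s = 15.0      # inferred airborne time (from the account)
tower_height_m = 24.0  # hypothetical: the 11th-century tower's height is unknown

ground_speed = FURLONG_M / airborne_s      # average horizontal speed, m/s
glide_ratio = FURLONG_M / tower_height_m   # distance covered per unit height lost
sink_rate = tower_height_m / airborne_s    # average descent rate, m/s

print(f"ground speed ~ {ground_speed:.1f} m/s")  # about 13.4 m/s (~48 km/h)
print(f"glide ratio ~ {glide_ratio:.1f}:1")      # about 8.4:1 under these assumptions
print(f"sink rate   ~ {sink_rate:.1f} m/s")      # about 1.6 m/s
```

Even on these generous assumptions the implied glide ratio is high for a crude fabric wing, which is consistent with Woosnam's suggestion that Eilmer descended along the hillside rather than gliding level over the town.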
Eilmer used a bird-like apparatus to glide downwards against the breeze. However, being unable to balance himself forward and backwards, as does a bird by slight movements of its wings, head and legs, he would have needed a large tail to maintain equilibrium. Eilmer could not have achieved true soaring flight, but he might have glided down safely with a tail. Eilmer said he had "forgotten to provide himself with a tail."
== Historical traditions and influence ==
Other than William's account of the flight, nothing has survived of Eilmer's lifetime work as a monk, although his astrological treatises apparently still circulated as late as the 16th century.
Based on William's account, the story of Eilmer's flight has been retold many times through the centuries by scholars, encyclopaedists, and proponents of man-powered flight, keeping the idea of human flight alive. These include over the years: Helinand of Froidmont (before 1229), Alberic of Trois-Fontaines (before 1241), Vincent of Beauvais (1250s), Roger Bacon (c. 1260), Ranulf Higden (before 1352, and the first to misname him "Oliver") and the English translators of his work: Henry Knighton (before 1367), John Nauclerus of Tübingen (c. 1500), John Wilkins (1648), John Milton (1670), and John Wise (1850).
More recently, Maxwell Woosnam in 1986 examined in more detail the technical aspects such as materials, glider angles, and wind effects.
Contemporaries had developed small drawstring toy helicopters, windmills, and sails for boats while church artists increasingly showed angels with more accurate bird-like wings, detailing the camber (curvature) that would help develop lift for heavier-than-air flight. Air was accepted as something that could be "worked", and some people believed that humans could fly with physical effort and the right equipment. Still, for the monk Eilmer the idea of flight must have had a spiritual significance as well: he would not have been ignorant of the need to guard and stabilize the soul for its flight in the afterlife; the differences between angelic and human bodies; the weight of sin and the unnaturalness of ascending mortal flesh.
== Legacy ==
The School of Mechanical and Mining Engineering at the University of Queensland in Brisbane, Australia, has developed a Computational Fluid Dynamics simulation code named Eilmer4. The short film "Eilmer the Flying Monk" recounts Eilmer's attempt to emulate Icarus.
== See also ==
List of firsts in aviation
== Notes ==
== References ==
This article incorporates public domain material from Richard Hallion. Pioneers of Flight: Eilmer of Malmesbury. United States Air Force. Retrieved 10 May 2008.
Lacey, Robert (2004). Great Tales From English History. New York: Little, Brown. ISBN 0-316-10910-X.
Scott, P. (1995). "1". The Shoulders of Giants: A History of Human Flight to 1919. Reading MA: Addison Wesley Publishing Co. ISBN 9780201627220.
White, Lynn (1961). "Eilmer of Malmesbury, an Eleventh Century Aviator: A Case Study of Technological Innovation, Its Context and Tradition". Technology and Culture. 2 (2). Technology and Culture, Vol. 2, No. 2: 97–111. doi:10.2307/3101411. JSTOR 3101411.
Woosnam, Maxwell (1986). Eilmer, The Flight and The Comet. Malmesbury, UK: Friends of Malmesbury Abbey. ISBN 0-9513798-0-1.
== External links ==
Eilmer of Malmesbury, from the Malmesbury Abbey website.
Ejection seat

In aircraft, an ejection seat or ejector seat is a system designed to rescue the pilot or other crew of an aircraft (usually military) in an emergency. In most designs, the seat is propelled out of the aircraft by an explosive charge or rocket motor, carrying the pilot with it. The concept of an ejectable escape crew capsule has also been tried (see B-58 Hustler). Once clear of the aircraft, the ejection seat deploys a parachute. Ejection seats are common on certain types of military aircraft.
== History ==
A bungee-assisted escape from an aircraft took place in 1910. In 1916, Everard Calthrop, an early inventor of parachutes, patented an ejector seat using compressed air. Seats using compression springs installed beneath them were also tested.
The modern layout for an ejection seat was first introduced by Romanian inventor Anastase Dragomir in the late 1920s. The design featured a parachuted cell (a dischargeable chair from an aircraft or other vehicle). It was successfully tested on 25 August 1929 at the Paris-Orly Airport near Paris and in October 1929 at Băneasa, near Bucharest. Dragomir patented his "catapult-able cockpit" at the French Patent Office.
The design was perfected during World War II. Prior to this, the only means of escape from an incapacitated aircraft was to jump clear ("bail out"), and in many cases this was difficult due to injury, the difficulty of egress from a confined space, g forces, the airflow past the aircraft, and other factors.
The first ejection seats were developed independently during World War II by Heinkel and SAAB. Early models were powered by compressed air, and the first aircraft to be fitted with such a system was the Heinkel He 280 prototype jet-engined fighter in 1940. One of the He 280 test pilots, Helmut Schenk, became the first person to escape from a stricken aircraft with an ejection seat on 13 January 1942 after his control surfaces iced up and became inoperative. The fighter was being used in tests of the Argus As 014 pulsejets for V-1 flying bomb development. It had its usual Heinkel HeS 8A turbojets removed, and was towed aloft from the Erprobungsstelle Rechlin central test facility of the Luftwaffe in Germany by a pair of Messerschmitt Bf 110C tugs in a heavy snow-shower. At 7,875 ft (2,400 m), Schenk found he had no control, jettisoned his towline, and ejected. The He 280 was never put into production. The first operational type built anywhere to provide ejection seats for the crew was the Heinkel He 219 Uhu night fighter in 1942.
In Sweden, a version using compressed air was tested in 1941. A gunpowder ejection seat was developed by Bofors and tested in 1943 for the Saab 21. The first test in the air was on a Saab 17 on 27 February 1944, and the first emergency use was made by Lt. Bengt Johansson on 29 July 1946 after a mid-air collision between a J 21 and a J 22.
The lightweight Heinkel He 162A Spatz, winner of the German Volksjäger "people's fighter" home defense jet fighter design competition and, in late 1944, the first operational military jet ever to feature one, introduced a new type of ejection seat, this time fired by an explosive cartridge. In this system, the seat rode on wheels set between two pipes running up the back of the cockpit. When lowered into position, caps at the top of the seat fitted over the pipes to close them. Cartridges, basically identical to shotgun shells, were placed in the bottom of the pipes, facing upward. When fired, the gases would fill the pipes, "popping" the caps off the end and forcing the seat to ride up the pipes on its wheels and out of the aircraft. By the end of the war the Dornier Do 335 Pfeil, whose rear-mounted engine (one of the two powering the design) drove a pusher propeller at the aft end of the fuselage and made a conventional "bailout" hazardous, was also fitted with an ejection seat, as were a few late-war prototype aircraft.
After World War II, the need for such systems became pressing, as aircraft speeds were getting ever higher, and it was not long before the sound barrier was broken. Manual escape at such speeds would be impossible. The United States Army Air Forces experimented with downward-ejecting systems operated by a spring, but it was the work of James Martin and his company Martin-Baker that proved crucial.
The first live flight test of the Martin-Baker system took place on 24 July 1946, when fitter Bernard Lynch ejected from a Gloster Meteor Mk III jet. Shortly afterward, on 17 August 1946, 1st Sgt. Larry Lambert was the first live U.S. ejectee. Lynch demonstrated the ejection seat at the Daily Express Air Pageant in 1948, ejecting from a Meteor. Martin-Baker ejector seats were fitted to prototype and production aircraft from the late 1940s, and the first emergency use of such a seat occurred in 1949 during testing of the jet-powered Armstrong Whitworth A.W.52 experimental flying wing.
Early seats used a solid propellant charge to eject the pilot and seat by igniting the charge inside a telescoping tube attached to the seat. As aircraft speeds increased still further, this method proved inadequate to get the pilot sufficiently clear of the airframe. Increasing the amount of propellant risked damaging the occupant's spine, so experiments with rocket propulsion began. In 1958, the Convair F-102 Delta Dagger was the first aircraft to be fitted with a rocket-propelled seat. Martin-Baker developed a similar design, using multiple rocket units feeding a single nozzle. The greater thrust from this configuration had the advantage of being able to eject the pilot to a safe height even if the aircraft was on or very near the ground.
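The trade-off described above, where a larger propellant charge risks the occupant's spine while a rocket spreads the impulse over a longer distance, can be sketched with basic kinematics. The separation speed and stroke lengths below are illustrative assumptions, not figures for any particular seat.

```python
# Why rocket propulsion supplanted the pure cannon charge: for a given
# seat-separation speed, the mean acceleration felt by the occupant scales
# inversely with the distance over which that speed is gained (v^2 = 2*a*s).
# All numbers here are illustrative assumptions, not real seat data.
G = 9.81  # m/s^2

def mean_accel_g(delta_v: float, stroke: float) -> float:
    """Mean acceleration in g needed to reach delta_v (m/s) over stroke (m)."""
    return delta_v ** 2 / (2 * stroke * G)

v = 18.0  # m/s, assumed separation speed needed to clear the airframe
print(mean_accel_g(v, 1.0))  # ~1 m telescoping-tube stroke -> about 16.5 g
print(mean_accel_g(v, 4.0))  # rocket thrust applied over ~4 m -> about 4.1 g
```

The same separation speed reached over four times the distance cuts the mean load to roughly a quarter, which is why rocket packs allowed faster ejections without increasing spinal injury risk.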
In the early 1960s, deployment of rocket-powered ejection seats designed for use at supersonic speeds began in such planes as the Convair F-106 Delta Dart. Six pilots have ejected at speeds exceeding 700 knots (1,300 km/h; 810 mph). The highest altitude at which a Martin-Baker seat was deployed was 57,000 ft (17,400 m) (from a Canberra bomber in 1958). Following an accident on 30 July 1966 in the attempted launch of a D-21 drone, two Lockheed M-21 crew members ejected at Mach 3.25 at an altitude of 80,000 ft (24,000 m). The pilot was recovered successfully, but the launch control officer drowned after a water landing. Despite these records, most ejections occur at fairly low speeds and altitudes, when the pilot can see that there is no hope of regaining aircraft control before impact with the ground.
Late in the Vietnam War, the U.S. Air Force and U.S. Navy became concerned about their pilots ejecting over hostile territory, those pilots being captured or killed, and the losses in men and aircraft incurred in attempts to rescue them. Both services began a program titled Air Crew Escape/Rescue Capability or Aerial Escape and Rescue Capability (AERCAB) ejection seats (both terms have been used by the US military and defence industry), in which, after the pilot ejected, the ejection seat would fly them far enough from the ejection point that they could safely be picked up. A Request for Proposals for AERCAB ejection seat concepts was issued in the late 1960s. Three companies submitted papers for further development: a Rogallo wing design by Bell Systems; a gyrocopter design by Kaman Aircraft; and a miniature conventional fixed-wing aircraft employing a Princeton Wing (a wing made of flexible material that rolls out and then becomes rigid by means of internal struts or supports deploying) by Fairchild Hiller. All three, after ejection, would be propelled by a small turbojet engine developed for target drones. With the exception of the Kaman design, the pilot would still be required to parachute to the ground after reaching a safe point for rescue. The AERCAB project was terminated in the 1970s with the end of the Vietnam War. The Kaman design, in early 1972, was the only one to reach the hardware stage. It came close to being tested, with a special landing-gear platform attached to the AERCAB ejection seat for first-stage ground take-offs and landings with a test pilot.
== Pilot safety ==
The purpose of an ejection seat is pilot survival. The pilot typically experiences an acceleration of about 12–14g. Western seats usually impose lighter loads on the pilots; 1960s–70s era Soviet technology often goes up to 20–22 g (with SM-1 and KM-1 gunbarrel-type ejection seats). Compression fractures of vertebrae are a recurrent side effect of ejection.
It was theorised early on that ejection at supersonic speeds would be unsurvivable; extensive tests, including Project Whoosh with chimpanzee test subjects, were undertaken to determine that it was feasible.
The capabilities of the NPP Zvezda K-36 were unintentionally demonstrated at the Fairford Air Show on 24 July 1993 when the pilots of two MiG-29 fighters ejected after a mid-air collision.
The minimum ejection altitude for the ACES II seat in inverted flight is about 140 feet (43 m) above ground level at 150 KIAS, while its Russian counterpart, the K-36DM, has a minimum ejection altitude from inverted flight of 100 feet (30 m) AGL.
When an aircraft is equipped with the NPP Zvezda K-36DM ejection seat and the pilot is wearing the КО-15 protective gear, they are able to eject at airspeeds from 0 to 1,400 kilometres per hour (870 mph) and altitudes of 0 to 25 km (16 mi or about 82,000 ft). The K-36DM ejection seat features drag chutes and a small shield that rises between the pilot's legs to deflect air around the pilot.
Pilots have successfully ejected from underwater in a handful of instances after being forced to ditch in water. The first recorded case was Lieutenant B. D. Macfarlane of the Royal Navy Fleet Air Arm, who successfully ejected underwater using his Martin-Baker Mk.1 ejection seat after his Westland Wyvern had ditched on launch and been cut in two by the carrier on 13 October 1954. Documented evidence exists that pilots of the US and Indian navies have also performed this feat.
As of 20 June 2011 – when two Spanish Air Force pilots ejected over San Javier airport – the number of lives saved by Martin-Baker products was 7,402 from 93 air forces. The company runs a club called the "Ejection Tie Club" and gives survivors a unique tie and lapel pin. The total figure for all types of ejection seats is unknown, but may be considerably higher.
Early models of the ejection seat were equipped with only an overhead ejection handle, which doubled in function by forcing the pilot to assume the correct posture and by having them pull a screen down to protect both their face and oxygen mask from the subsequent air blast. Martin-Baker added a secondary handle at the front of the seat to allow ejection even when pilots were unable to reach upwards because of high g-forces. Later (e.g. in the Martin-Baker Mk9) the top handle was discarded, because the lower handle had proven easier to operate and helmet technology had advanced enough to protect against the air blast.
== Egress systems ==
The "standard" ejection system operates in two stages. First, the entire canopy or hatch above the aviator is opened, shattered, or jettisoned, and the seat and occupant are launched through the opening. In most earlier aircraft this required two separate actions by the aviator, while later egress system designs, such as the Advanced Concept Ejection Seat model 2 (ACES II), perform both functions as a single action.
The ACES II ejection seat is used in most American-built fighters. The A-10 uses connected firing handles that activate both the canopy jettison systems, followed by the seat ejection. The F-15 has the same connected system as the A-10 seat. Both handles accomplish the same task, so pulling either one suffices. The F-16 has only one handle located between the pilot's knees, since the cockpit is too narrow for side-mounted handles.
Non-standard egress systems include Downward Track (used for some crew positions in bomber aircraft, including the B-52 Stratofortress), Canopy Destruct (CD) and Through-Canopy Penetration (TCP), Drag Extraction, Encapsulated Seat, and even Crew Capsule.
Early models of the F-104 Starfighter were equipped with a Downward Track ejection seat due to the hazard of the T-tail. In order to make this work, the pilot was equipped with "spurs" which were attached to cables that would pull the legs inward so the pilot could be ejected. Following this development, some other egress systems began using leg retractors as a way to prevent injuries to flailing legs, and to provide a more stable center of gravity. Some models of the F-104 were equipped with upward-ejecting seats.
Similarly, two of the six ejection seats on the B-52 Stratofortress fire downward, through hatch openings on the bottom of the aircraft; the downward hatches are released from the aircraft by a thruster that unlocks the hatch, while gravity and wind remove the hatch and arm the seat. The four seats on the forward upper deck (two of them, EWO and Gunner, facing the rear of the airplane) fire upwards as usual. Any such downward-firing system is of no use on or near the ground if the aircraft is in level flight at the time of the ejection.
Aircraft designed for low-level use sometimes have ejection seats which fire through the canopy, as waiting for the canopy to be ejected is too slow. Many aircraft types (e.g., the BAE Hawk and the Harrier line of aircraft) use Canopy Destruct systems, which have an explosive cord (MDC – Miniature Detonation Cord or FLSC – Flexible Linear Shaped Charge) embedded within the acrylic plastic of the canopy. The MDC is initiated when the eject handle is pulled, and shatters the canopy over the seat a few milliseconds before the seat is launched. This system was developed for the Hawker Siddeley Harrier family of VTOL aircraft as ejection may be necessary while the aircraft was in the hover, and jettisoning the canopy might result in the pilot and seat striking it. This system is also used in the T-6 Texan II and F-35 Lightning II.
Through-Canopy Penetration is similar to Canopy Destruct, but a sharp spike on the top of the seat, known as the "shell tooth", strikes the underside of the canopy and shatters it. The A-10 Thunderbolt II is equipped with canopy breakers on either side of its headrest in the event that the canopy fails to jettison. The T-6 is also equipped with such breakers if the MDC fails to detonate. In ground emergencies, a ground crewman or pilot can use a breaker knife attached to the inside of the canopy to shatter the transparency. The A-6 Intruder and EA-6B Prowler seats were capable of ejecting through the canopy, with canopy jettison a separate option if there is enough time.
CD and TCP systems cannot be used with canopies made of flexible materials, such as the Lexan polycarbonate canopy used on the F-16.
Soviet VTOL naval fighter planes such as the Yakovlev Yak-38 were equipped with ejection seats which were automatically activated during at least some part of the flight envelope.
Drag Extraction is the lightest and simplest egress system available, and has been used on many experimental aircraft. Halfway between simply "bailing out" and using explosive-eject systems, Drag Extraction uses the airflow past the aircraft (or spacecraft) to move the aviator out of the cockpit and away from the stricken craft on a guide rail. Some versions operate like a standard ejector seat, jettisoning the canopy and then deploying a drag chute into the airflow. That chute pulls the occupant, either with the seat or after release of the seat straps, out of the aircraft and off the end of a rail that extends far enough out to help clear the structure. In the case of the Space Shuttle, the astronauts would have ridden a long, curved rail, blown by the wind against their bodies, then deployed their chutes after free-falling to a safe altitude.
Encapsulated Seat egress systems were developed for use in the B-58 Hustler and B-70 Valkyrie supersonic bombers. These seats were enclosed in an air-operated clamshell, which permitted the aircrew to escape at airspeeds and altitudes high enough to otherwise cause bodily harm. These seats were designed to allow the pilot to control the plane even with the clamshell closed, and the capsule would float in case of water landings.
Some aircraft designs, such as the General Dynamics F-111, do not have individual ejection seats, but instead, the entire section of the airframe containing the crew can be ejected as a single capsule. In this system, very powerful rockets are used, and multiple large parachutes are used to bring the capsule down, in a manner similar to the Launch Escape System of the Apollo spacecraft. On landing, an airbag system is used to cushion the landing, and this also acts as a flotation device if the Crew Capsule lands in water.
=== Zero-zero ejection seat ===
A zero-zero ejection seat is designed to safely extract its occupant upward from the cockpit and land them even from a stationary position on the ground (i.e., zero altitude and zero airspeed). The zero-zero capability was developed to help aircrews escape upward from unrecoverable emergencies during low-altitude and/or low-speed flight, as well as ground mishaps. Parachutes require a minimum altitude for opening, to give time for deceleration to a safe landing speed. Thus, prior to the introduction of zero-zero capability, ejections could only be performed above minimum altitudes and airspeeds. For the seat to work from zero (aircraft) altitude, it has to lift itself to a sufficient altitude.
These early seats were fired from the aircraft by a cannon, providing the high impulse needed over the very short length of the cannon barrel within the seat. This limited the total energy, and thus the additional height possible, as otherwise the high forces needed would crush the pilot.
Modern zero-zero technology uses small rockets to propel the seat upward to an adequate altitude and a small explosive charge to open the parachute canopy quickly for a successful parachute descent, so that proper deployment of the parachute no longer relies on airspeed and altitude. The seat cannon clears the seat from the aircraft, then the under-seat rocket pack fires to lift the seat to altitude. Because the rockets fire for longer than the cannon, they do not require the same high forces. Zero-zero rocket seats also reduced forces on the pilot during any ejection, reducing injuries and spinal compression.
== Other vehicles ==
The Kamov Ka-50 and its successor, the Kamov Ka-52, were the first and only serial-production helicopters with ejection seats. The system is similar to that of a conventional fixed-wing aircraft; however, the main rotor blades are fitted with explosive bolts that jettison them moments before the seat is fired, preventing the pilots from being struck and killed by them.
The only commercial jetliner ever fitted with ejection seats was the Soviet Tupolev Tu-144. However, the seats were present in the prototype only, and were only available for the crew and not the passengers. The Tu-144 that crashed at the Paris Air Show in 1973 was a production model, and did not have ejection seats.
The Lunar Landing Research Vehicle (LLRV) and its successor, the Lunar Landing Training Vehicle (LLTV), used ejection seats. Neil Armstrong ejected on 6 May 1968, following Joe Algranti and Stuart M. Present.
The only spacecraft ever flown with installed ejection seats were Vostok, Gemini, and the Space Shuttle.
Early flights of the Space Shuttle, which used Columbia, were with a crew of two, both provided with ejector seats (STS-1 to STS-4), but the seats were disabled and then removed as the crew size was increased. Columbia and Enterprise were the only two Space Shuttle orbiters fitted with ejection seats. The Buran-class orbiters were planned to be fitted with K-36RB (K-36M-11F35) seats, but as the program was canceled, the seats were never used.
No land vehicle has ever been fitted with an ejection seat, though it is a common trope in fiction. A notable example is the Aston Martin DB5 from the James Bond films, which has an ejecting passenger seat.
== See also ==
Attacks on parachutists – discusses the 1949 Geneva Conventions on War, declaring it illegal to attack ejecting aircrew until they land
Caterpillar Club
Dynamic response index
Escape pod
Launch escape system
Airborne lifeboat
Lifeboat
Pressure suit
Ejection Tie Club
== Notes ==
=== Explanatory notes ===
=== Citations ===
== General and cited references ==
Terry, Gerard (August–November 1984). "Talkback". Air Enthusiast. No. 25. p. 79. ISSN 0143-5450.
== External links ==
"Up, Out and Down". Flight International. 14 November 1952.
"Safety for Service Aircrew". Flight International. 16 December 1965.
Coyne, Kevin (1996–2003). "The Ejection Site".
Kalei, Kalikiano (February 27, 2008). "A History of Military Aircraft Egress Systems (Part One of Three)". Authorsden.com.
"IN PICTURES: Lethbridge CF-18 jet fighter crash". The Globe and Mail. July 23, 2010.
Bennett, Michael (1980–2014). "Ejection History".
"Mk10 seat". Martin-Baker. Archived from the original on 2007-10-07.
"In Pictures: A Potted History Of Ejection Seats". Aviation Week. Oct 28, 2016.
Bull, John O.; Serocki, Edward L.; McDowell, Howard L. (September 1966). "Compilation of Data on Crew Emergency Escape Systems" (PDF). Air Force Flight Dynamics Laboratory. Retrieved 1 April 2023. |
Electric battery | An electric battery is a source of electric power consisting of one or more electrochemical cells with external connections for powering electrical devices. When a battery is supplying power, its positive terminal is the cathode and its negative terminal is the anode. The terminal marked negative is the source of electrons. When a battery is connected to an external electric load, those negatively charged electrons flow through the circuit and reach the positive terminal, thus causing a redox reaction by attracting positively charged ions, or cations. Thus, higher energy reactants are converted to lower energy products, and the free-energy difference is delivered to the external circuit as electrical energy. Historically the term "battery" specifically referred to a device composed of multiple cells; however, the usage has evolved to include devices composed of a single cell.
Primary (single-use or "disposable") batteries are used once and discarded, as the electrode materials are irreversibly changed during discharge; a common example is the alkaline battery used for flashlights and a multitude of portable electronic devices. Secondary (rechargeable) batteries can be discharged and recharged multiple times using an applied electric current; the original composition of the electrodes can be restored by reverse current. Examples include the lead–acid batteries used in vehicles and lithium-ion batteries used for portable electronics such as laptops and mobile phones.
Batteries come in many shapes and sizes, from miniature cells used to power hearing aids and wristwatches to, at the largest extreme, huge battery banks the size of rooms that provide standby or emergency power for telephone exchanges and computer data centers. Batteries have much lower specific energy (energy per unit mass) than common fuels such as gasoline. In automobiles, this is somewhat offset by the higher efficiency of electric motors in converting electrical energy to mechanical work, compared to combustion engines.
== History ==
=== Invention ===
Benjamin Franklin first used the term "battery" in 1749 when he was doing experiments with electricity using a set of linked Leyden jar capacitors. Franklin grouped a number of the jars into what he described as a "battery", using the military term for weapons functioning together. By multiplying the number of holding vessels, a stronger charge could be stored, and more power would be available on discharge.
Italian physicist Alessandro Volta built and described the first electrochemical battery, the voltaic pile, in 1800. This was a stack of copper and zinc plates, separated by brine-soaked paper disks, that could produce a steady current for a considerable length of time. Volta did not understand that the voltage was due to chemical reactions. He thought that his cells were an inexhaustible source of energy, and that the associated corrosion effects at the electrodes were a mere nuisance, rather than an unavoidable consequence of their operation, as Michael Faraday showed in 1834.
Although early batteries were of great value for experimental purposes, in practice their voltages fluctuated and they could not provide a large current for a sustained period. The Daniell cell, invented in 1836 by British chemist John Frederic Daniell, was the first practical source of electricity, becoming an industry standard and seeing widespread adoption as a power source for electrical telegraph networks. It consisted of a copper pot filled with a copper sulfate solution, in which was immersed an unglazed earthenware container filled with sulfuric acid and a zinc electrode.
These wet cells used liquid electrolytes, which were prone to leakage and spillage if not handled correctly. Many used glass jars to hold their components, which made them fragile and potentially dangerous. These characteristics made wet cells unsuitable for portable appliances. Near the end of the nineteenth century, the invention of dry cell batteries, which replaced the liquid electrolyte with a paste, made portable electrical devices practical.
Batteries in vacuum tube devices historically used a wet cell for the "A" battery (to provide power to the filament) and a dry cell for the "B" battery (to provide the plate voltage).
=== Ongoing developments ===
Between 2010 and 2018, battery demand grew by 30% annually, reaching a total of 180 GWh in 2018. Conservatively, the growth rate is expected to be maintained at an estimated 25%, culminating in demand reaching 2600 GWh in 2030. In addition, cost reductions are expected to further increase the demand to as much as 3562 GWh.
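As a sanity check on these figures, the 2030 estimate follows from simple compound growth of the 2018 figure at the conservative 25% rate (a minimal sketch; the function name and rounding are illustrative only):

```python
# Compound-growth sketch of the demand projection quoted above:
# 180 GWh in 2018 growing at roughly 25% per year through 2030.
def projected_demand(base_gwh, annual_growth, years):
    """Demand after compounding annual_growth for the given number of years."""
    return base_gwh * (1 + annual_growth) ** years

# 2018 -> 2030 is 12 years of growth.
print(round(projected_demand(180, 0.25, 12)))  # ~2619, consistent with the ~2600 GWh estimate
```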
Important reasons for this high rate of growth of the electric battery industry include the electrification of transport, and large-scale deployment in electricity grids, supported by decarbonization initiatives.
Distributed electric batteries, such as those in battery electric vehicles (vehicle-to-grid) and in home energy storage systems with smart metering, can be connected to smart grids for demand response, making them active participants in smart power supply grids.
Secondary use of partially depleted batteries can add to the overall utility of electric batteries by reducing energy storage costs and emission impact due to longer service life. In this use, vehicle electric batteries that have their battery capacity reduced to less than 80% (usually after 5–8 years of service) are repurposed for use in backup supplies or renewable energy storage systems.
Grid-scale energy storage envisages the large-scale use of batteries to collect and store energy from the grid or a power plant and then discharge that energy at a later time to provide electricity or other grid services when needed. Grid-scale energy storage (either turnkey or distributed) is an important component of smart power supply grids.
== Chemistry and principles ==
Batteries convert chemical energy directly to electrical energy. In many cases, the electrical energy released is the difference in the cohesive or bond energies of the metals, oxides, or molecules undergoing the electrochemical reaction. For instance, energy can be stored in Zn or Li, which are high-energy metals because they are not stabilized by d-electron bonding, unlike transition metals. Batteries are designed so that the energetically favorable redox reaction can occur only when electrons move through the external part of the circuit.
A battery consists of some number of voltaic cells. Each cell consists of two half-cells connected in series by a conductive electrolyte containing metal cations. One half-cell includes electrolyte and the negative electrode, the electrode to which anions (negatively charged ions) migrate; the other half-cell includes electrolyte and the positive electrode, to which cations (positively charged ions) migrate. Cations are reduced (electrons are added) at the cathode, while metal atoms are oxidized (electrons are removed) at the anode. Some cells use different electrolytes for each half-cell; then a separator is used to prevent mixing of the electrolytes while allowing ions to flow between half-cells to complete the electrical circuit.
Each half-cell has an electromotive force (emf, measured in volts) relative to a standard. The net emf of the cell is the difference between the emfs of its half-cells. Thus, if the electrodes have emfs E₁ and E₂, then the net emf is E₂ − E₁; in other words, the net emf is the difference between the reduction potentials of the half-reactions.
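For a concrete illustration, the emf of the Daniell cell described earlier can be computed from the textbook standard reduction potentials of its two half-reactions (a minimal sketch; the constants are the standard values for copper and zinc):

```python
# Net emf of a cell as the difference of half-cell reduction potentials.
# Standard reduction potentials (volts) for the Daniell cell:
E_COPPER = +0.34   # Cu2+ + 2e- -> Cu (cathode)
E_ZINC = -0.76     # Zn2+ + 2e- -> Zn (anode)

def net_emf(e_cathode, e_anode):
    """Net cell emf: reduction potential of the cathode minus that of the anode."""
    return e_cathode - e_anode

# Daniell cell: copper is the positive electrode, zinc the negative.
print(round(net_emf(E_COPPER, E_ZINC), 2))  # 1.1
```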
The electrical driving force or ΔV_bat across the terminals of a cell is known as the terminal voltage (difference) and is measured in volts. The terminal voltage of a cell that is neither charging nor discharging is called the open-circuit voltage and equals the emf of the cell. Because of internal resistance, the terminal voltage of a cell that is discharging is smaller in magnitude than the open-circuit voltage, and the terminal voltage of a cell that is charging exceeds the open-circuit voltage. An ideal cell has negligible internal resistance, so it would maintain a constant terminal voltage equal to its emf until exhausted, then drop to zero. If such a cell maintained 1.5 volts and produced a charge of one coulomb then on complete discharge it would have performed 1.5 joules of work. In actual cells, the internal resistance increases under discharge and the open-circuit voltage also decreases under discharge. If the voltage and resistance are plotted against time, the resulting graphs typically are a curve; the shape of the curve varies according to the chemistry and internal arrangement employed.
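The effect of internal resistance on terminal voltage can be sketched as follows (an illustrative model only; the cell values are hypothetical):

```python
def terminal_voltage(emf, current, r_internal, charging=False):
    """Terminal voltage of a cell with internal resistance r_internal (ohms).

    Discharging: V = emf - I*r (below the open-circuit voltage).
    Charging:    V = emf + I*r (above the open-circuit voltage).
    """
    drop = current * r_internal
    return emf + drop if charging else emf - drop

# A 1.5 V cell with 0.2 ohm internal resistance carrying 1 A:
print(terminal_voltage(1.5, 1.0, 0.2))                  # 1.3 (discharging)
print(terminal_voltage(1.5, 1.0, 0.2, charging=True))   # 1.7 (charging)
```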
The voltage developed across a cell's terminals depends on the energy release of the chemical reactions of its electrodes and electrolyte. Alkaline and zinc–carbon cells have different chemistries, but approximately the same emf of 1.5 volts; likewise NiCd and NiMH cells have different chemistries, but approximately the same emf of 1.2 volts. The high electrochemical potential changes in the reactions of lithium compounds give lithium cells emfs of 3 volts or more.
Almost any liquid or moist object that has enough ions to be electrically conductive can serve as the electrolyte for a cell. As a novelty or science demonstration, it is possible to insert two electrodes made of different metals into a lemon, potato, etc. and generate small amounts of electricity.
A voltaic pile can be made from two coins (such as a nickel and a penny) and a piece of paper towel dipped in salt water. Such a pile generates a very low voltage but, when many are stacked in series, they can replace normal batteries for a short time.
== Types ==
=== Primary and secondary batteries ===
Batteries are classified into primary and secondary forms:
Primary batteries are designed to be used until exhausted of energy then discarded. Their chemical reactions are generally not reversible, so they cannot be recharged. When the supply of reactants in the battery is exhausted, the battery stops producing current and is useless.
Secondary batteries can be recharged; that is, they can have their chemical reactions reversed by applying electric current to the cell. This regenerates the original chemical reactants, so they can be used, recharged, and used again multiple times.
Some types of primary batteries used, for example, for telegraph circuits, were restored to operation by replacing the electrodes. Secondary batteries are not indefinitely rechargeable due to dissipation of the active materials, loss of electrolyte and internal corrosion.
Primary batteries, or primary cells, can produce current immediately on assembly. These are most commonly used in portable devices that have low current drain, are used only intermittently, or are used well away from an alternative power source, such as in alarm and communication circuits where other electric power is only intermittently available. Disposable primary cells cannot be reliably recharged, since the chemical reactions are not easily reversible and active materials may not return to their original forms. Battery manufacturers recommend against attempting to recharge primary cells. In general, these have higher energy densities than rechargeable batteries, but disposable batteries do not fare well under high-drain applications with loads under 75 ohms. Common types of disposable batteries include zinc–carbon batteries and alkaline batteries.
Secondary batteries, also known as secondary cells, or rechargeable batteries, must be charged before first use; they are usually assembled with active materials in the discharged state. Rechargeable batteries are (re)charged by applying electric current, which reverses the chemical reactions that occur during discharge/use. Devices to supply the appropriate current are called chargers. The oldest form of rechargeable battery is the lead–acid battery, which is widely used in automotive and boating applications. This technology contains liquid electrolyte in an unsealed container, requiring that the battery be kept upright and the area be well ventilated to ensure safe dispersal of the hydrogen gas it produces during overcharging. The lead–acid battery is relatively heavy for the amount of electrical energy it can supply. Its low manufacturing cost and its high surge current levels make it common where its capacity (over approximately 10 Ah) is more important than weight and handling issues. A common application is the modern car battery, which can, in general, deliver a peak current of 450 amperes.
=== Composition ===
Many types of electrochemical cells have been produced, with varying chemical processes and designs, including galvanic cells, electrolytic cells, fuel cells, flow cells and voltaic piles.
A wet cell battery has a liquid electrolyte. Other names are flooded cell, since the liquid covers all internal parts, or vented cell, since gases produced during operation can escape to the air. Wet cells were a precursor to dry cells and are commonly used as a learning tool for electrochemistry. They can be built with common laboratory supplies, such as beakers, for demonstrations of how electrochemical cells work. A particular type of wet cell known as a concentration cell is important in understanding corrosion. Wet cells may be primary cells (non-rechargeable) or secondary cells (rechargeable). Originally, all practical primary batteries such as the Daniell cell were built as open-top glass jar wet cells. Other primary wet cells are the Leclanché cell, Grove cell, Bunsen cell, chromic acid cell, Clark cell, and Weston cell. The Leclanché cell chemistry was adapted to the first dry cells. Wet cells are still used in automobile batteries and in industry for standby power for switchgear, telecommunications or large uninterruptible power supplies, but in many places batteries with gel cells have been used instead. These applications commonly use lead–acid or nickel–cadmium cells. Molten salt batteries are primary or secondary batteries that use a molten salt as electrolyte. They operate at high temperatures and must be well insulated to retain heat.
A dry cell uses a paste electrolyte, with only enough moisture to allow current to flow. Unlike a wet cell, a dry cell can operate in any orientation without spilling, as it contains no free liquid, making it suitable for portable equipment. By comparison, the first wet cells were typically fragile glass containers with lead rods hanging from the open top and needed careful handling to avoid spillage. Lead–acid batteries did not achieve the safety and portability of the dry cell until the development of the gel battery. A common dry cell is the zinc–carbon battery, sometimes called the dry Leclanché cell, with a nominal voltage of 1.5 volts, the same as the alkaline battery (since both use the same zinc–manganese dioxide combination). A standard dry cell comprises a zinc anode, usually in the form of a cylindrical pot, with a carbon cathode in the form of a central rod. The electrolyte is ammonium chloride in the form of a paste next to the zinc anode. The remaining space between the electrolyte and carbon cathode is taken up by a second paste consisting of ammonium chloride and manganese dioxide, the latter acting as a depolariser. In some designs, the ammonium chloride is replaced by zinc chloride.
A reserve battery can be stored unassembled (unactivated and supplying no power) for a long period (perhaps years). When the battery is needed, then it is assembled (e.g., by adding electrolyte); once assembled, the battery is charged and ready to work. For example, a battery for an electronic artillery fuze might be activated by the impact of firing a gun. The acceleration breaks a capsule of electrolyte that activates the battery and powers the fuze's circuits. Reserve batteries are usually designed for a short service life (seconds or minutes) after long storage (years). A water-activated battery for oceanographic instruments or military applications becomes activated on immersion in water.
On 28 February 2017, the University of Texas at Austin issued a press release about a new type of solid-state battery, developed by a team led by lithium-ion battery inventor John Goodenough, "that could lead to safer, faster-charging, longer-lasting rechargeable batteries for handheld mobile devices, electric cars and stationary energy storage". The solid-state battery is also said to have "three times the energy density", increasing its useful life in electric vehicles, for example. It should also be more ecologically sound since the technology uses less expensive, earth-friendly materials such as sodium extracted from seawater. They also have much longer life.
Sony has developed a biological battery that generates electricity from sugar in a way that is similar to the processes observed in living organisms. The battery generates electricity through the use of enzymes that break down carbohydrates.
The sealed valve-regulated lead–acid battery (VRLA battery) is popular in the automotive industry as a replacement for the lead–acid wet cell. The VRLA battery uses an immobilized sulfuric acid electrolyte, reducing the chance of leakage and extending shelf life. The two types are:
Gel batteries (or "gel cell") use a semi-solid electrolyte.
Absorbed Glass Mat (AGM) batteries absorb the electrolyte in a special fiberglass matting.
Other portable rechargeable batteries include several sealed "dry cell" types, that are useful in applications such as mobile phones and laptop computers. Cells of this type (in order of increasing power density and cost) include nickel–cadmium (NiCd), nickel–zinc (NiZn), nickel–metal hydride (NiMH), and lithium-ion (Li-ion) cells. Li-ion has by far the highest share of the dry cell rechargeable market. NiMH has replaced NiCd in most applications due to its higher capacity, but NiCd remains in use in power tools, two-way radios, and medical equipment.
In the 2000s, developments include batteries with embedded electronics such as USBCELL, which allows charging an AA battery through a USB connector, nanoball batteries that allow for a discharge rate about 100x greater than current batteries, and smart battery packs with state-of-charge monitors and battery protection circuits that prevent damage on over-discharge. Low self-discharge (LSD) allows secondary cells to be charged prior to shipping.
Lithium–sulfur batteries were used on the longest and highest solar-powered flight.
=== Consumer and industrial grades ===
Batteries of all types are manufactured in consumer and industrial grades. Costlier industrial-grade batteries may use chemistries that provide higher power-to-size ratio, have lower self-discharge and hence longer life when not in use, more resistance to leakage and, for example, ability to handle the high temperature and humidity associated with medical autoclave sterilization.
=== Combination and management ===
Standard-format batteries are inserted into a battery holder in the device that uses them. When a device does not use standard-format batteries, they are typically combined into a custom battery pack, which holds multiple batteries in addition to features such as a battery management system and a battery isolator, which ensure that the batteries within are charged and discharged evenly.
=== Sizes ===
Primary batteries readily available to consumers range from tiny button cells used for electric watches, to the No. 6 cell used for signal circuits or other long duration applications. Secondary cells are made in very large sizes; very large batteries can power a submarine or stabilize an electrical grid and help level out peak loads.
As of 2017, the world's largest battery was built in South Australia by Tesla. It can store 129 MWh. A battery in Hebei Province, China, which can store 36 MWh of electricity was built in 2013 at a cost of $500 million. Another large battery, composed of Ni–Cd cells, was in Fairbanks, Alaska. It covered 2,000 square metres (22,000 sq ft)—bigger than a football pitch—and weighed 1,300 tonnes. It was manufactured by ABB to provide backup power in the event of a blackout. The battery can provide 40 MW of power for up to seven minutes. Sodium–sulfur batteries have been used to store wind power. A 4.4 MWh battery system that can deliver 11 MW for 25 minutes stabilizes the output of the Auwahi wind farm in Hawaii.
=== Comparison ===
Many important cell properties, such as voltage, energy density, flammability, available cell constructions, operating temperature range and shelf life, are dictated by battery chemistry.
== Performance, capacity and discharge ==
A battery's characteristics may vary over load cycle, over charge cycle, and over lifetime due to many factors including internal chemistry, current drain, and temperature. At low temperatures, a battery cannot deliver as much power. As such, in cold climates, some car owners install battery warmers, which are small electric heating pads that keep the car battery warm.
A battery's capacity is the amount of electric charge it can deliver at a voltage that does not drop below the specified terminal voltage. The more electrode material contained in the cell the greater its capacity. A small cell has less capacity than a larger cell with the same chemistry, although they develop the same open-circuit voltage. Capacity is usually stated in ampere-hours (A·h) (mAh for small batteries). The rated capacity of a battery is usually expressed as the product of 20 hours multiplied by the current that a new battery can consistently supply for 20 hours at 20 °C (68 °F), while remaining above a specified terminal voltage per cell. For example, a battery rated at 100 A·h can deliver 5 A over a 20-hour period at room temperature. The fraction of the stored charge that a battery can deliver depends on multiple factors, including battery chemistry, the rate at which the charge is delivered (current), the required terminal voltage, the storage period, ambient temperature and other factors.
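The 20-hour rating convention above can be sketched in a few lines (the helper names are illustrative; real runtime also depends on discharge rate, as described next):

```python
def rated_capacity_ah(rating_current_a, hours=20):
    """Capacity implied by the 20-hour rating convention: current * hours."""
    return rating_current_a * hours

def runtime_hours(capacity_ah, load_current_a):
    """Ideal runtime at a given load, ignoring rate effects (see Peukert's law)."""
    return capacity_ah / load_current_a

# A battery that sustains 5 A for 20 hours is rated 100 A·h:
print(rated_capacity_ah(5))    # 100
print(runtime_hours(100, 5))   # 20.0
```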
The higher the discharge rate, the lower the capacity. The relationship between current, discharge time and capacity for a lead acid battery is approximated (over a typical range of current values) by Peukert's law:
t = Q_P / I^k

where:

Q_P is the capacity when discharged at a rate of 1 amp.
I is the current drawn from the battery (A).
t is the amount of time (in hours) that the battery can sustain that current.
k is a constant around 1.3.
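Peukert's law can be sketched as follows (the 100 A·h capacity and the currents are hypothetical examples; k = 1.3 is the typical lead–acid value quoted above):

```python
def peukert_time(capacity_1a, current, k=1.3):
    """Discharge time t = Q_P / I**k (hours) for a lead-acid battery."""
    return capacity_1a / current ** k

# Doubling the current more than halves the runtime when k > 1:
t1 = peukert_time(100, 5)    # ~12.3 hours at 5 A
t2 = peukert_time(100, 10)   # ~5.0 hours at 10 A
print(round(t1, 1), round(t2, 1))
print(t2 < t1 / 2)  # True
```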
Charged batteries (rechargeable or disposable) lose charge through internal self-discharge over time, even when not in use, due to generally irreversible side reactions that consume charge carriers without producing current. The rate of self-discharge depends upon battery chemistry and construction, with significant loss typically taking months to years. When batteries are recharged, additional side reactions reduce capacity for subsequent discharges. After enough recharges, essentially all capacity is lost and the battery stops producing power. Internal energy losses and limitations on the rate at which ions pass through the electrolyte cause battery efficiency to vary. Above a minimum threshold, discharging at a low rate delivers more of the battery's capacity than discharging at a higher rate. Installing batteries with varying A·h ratings changes operating time, but not device operation, unless load limits are exceeded. High-drain loads such as digital cameras can reduce the total capacity delivered by rechargeable or disposable batteries. For example, a battery rated at 2 A·h for a 10- or 20-hour discharge would not sustain a current of 1 A for a full two hours as its stated capacity suggests.
The C-rate is a measure of the rate at which a battery is being charged or discharged. It is defined as the current through the battery divided by the theoretical current draw under which the battery would deliver its nominal rated capacity in one hour. It has the units h−1. Because of internal resistance loss and the chemical processes inside the cells, a battery rarely delivers nameplate rated capacity in only one hour. Typically, maximum capacity is found at a low C-rate, and charging or discharging at a higher C-rate reduces the usable life and capacity of a battery. Manufacturers often publish datasheets with graphs showing capacity versus C-rate curves. C-rate is also used as a rating on batteries to indicate the maximum current that a battery can safely deliver in a circuit. Standards for rechargeable batteries generally rate the capacity and charge cycles over a 4-hour (0.25C), 8 hour (0.125C) or longer discharge time. Types intended for special purposes, such as in a computer uninterruptible power supply, may be rated by manufacturers for discharge periods much less than one hour (1C) but may suffer from limited cycle life.
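The C-rate definition can be illustrated as follows (helper names and the 2 A·h cell are illustrative):

```python
def current_from_c_rate(capacity_ah, c_rate):
    """Current (A) corresponding to a given C-rate: I = C-rate * capacity."""
    return c_rate * capacity_ah

def hours_at_c_rate(c_rate):
    """Nominal discharge time in hours at a given C-rate (units of 1/h)."""
    return 1 / c_rate

# A 2 A·h cell discharged at 0.25C draws 0.5 A over a nominal 4 hours:
print(current_from_c_rate(2.0, 0.25))  # 0.5
print(hours_at_c_rate(0.25))           # 4.0
```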
In 2009 experimental lithium iron phosphate (LiFePO4) battery technology provided the fastest charging and energy delivery, discharging all its energy into a load in 10 to 20 seconds. In 2024 a prototype battery for electric cars that could charge from 10% to 80% in five minutes was demonstrated, and a Chinese company claimed that car batteries it had introduced charged 10% to 80% in 10.5 minutes—the fastest batteries available—compared to Tesla's 15 minutes to half-charge.
== Lifespan and endurance ==
Battery life (or lifetime) has two meanings for rechargeable batteries but only one for non-chargeables. It can be used to describe the length of time a device can run on a fully charged battery—this is also unambiguously termed "endurance". For a rechargeable battery it may also be used for the number of charge/discharge cycles possible before the cells fail to operate satisfactorily—this is also termed "lifespan". The term shelf life is used to describe how long a battery will retain its performance between manufacture and use. Available capacity of all batteries drops with decreasing temperature. In contrast to most of today's batteries, the Zamboni pile, invented in 1812, offers a very long service life without refurbishment or recharge, although it can supply very little current (nanoamps). The Oxford Electric Bell has been ringing almost continuously since 1840 on its original pair of batteries, thought to be Zamboni piles.
Disposable batteries typically lose 8–20% of their original charge per year when stored at room temperature (20–30 °C). This is known as the "self-discharge" rate, and is due to non-current-producing "side" chemical reactions that occur within the cell even when no load is applied. The rate of side reactions is reduced for batteries stored at lower temperatures, although some can be damaged by freezing and storing in a fridge will not meaningfully prolong shelf life and risks damaging condensation. Old rechargeable batteries self-discharge more rapidly than disposable alkaline batteries, especially nickel-based batteries; a freshly charged nickel cadmium (NiCd) battery loses 10% of its charge in the first 24 hours, and thereafter discharges at a rate of about 10% a month. However, newer low self-discharge nickel–metal hydride (NiMH) batteries and modern lithium designs display a lower self-discharge rate (but still higher than for primary batteries).
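The compounding effect of self-discharge can be sketched with the approximate NiCd figures quoted above (an idealized model; real rates vary with temperature, chemistry, and age):

```python
def remaining_charge(initial_fraction, monthly_rate, months):
    """Fraction of charge left after compound monthly self-discharge."""
    return initial_fraction * (1 - monthly_rate) ** months

# NiCd: ~10% lost in the first 24 hours, then ~10% per month.
# After 6 months on the shelf, roughly half the charge is gone:
left = remaining_charge(0.90, 0.10, 6)
print(round(left, 2))  # 0.48
```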
The active material on the battery plates changes chemical composition on each charge and discharge cycle; active material may be lost due to physical changes of volume, further limiting the number of times the battery can be recharged. Most nickel-based batteries are partially discharged when purchased, and must be charged before first use. Newer NiMH batteries are ready to be used when purchased, and have only 15% discharge in a year.
Some deterioration occurs on each charge–discharge cycle. Degradation usually occurs because electrolyte migrates away from the electrodes or because active material detaches from the electrodes. Low-capacity NiMH batteries (1,700–2,000 mA·h) can be charged some 1,000 times, whereas high-capacity NiMH batteries (above 2,500 mA·h) last about 500 cycles. NiCd batteries tend to be rated for 1,000 cycles before their internal resistance permanently increases beyond usable values. Fast charging accelerates these physical changes, shortening battery lifespan. If a charger cannot detect when the battery is fully charged then overcharging is likely, damaging it.
NiCd cells, if used in a particular repetitive manner, may show a decrease in capacity called "memory effect". The effect can be avoided with simple practices. NiMH cells, although similar in chemistry, suffer less from memory effect.
Automotive lead–acid rechargeable batteries must endure stress due to vibration, shock, and temperature range. Because of these stresses and sulfation of their lead plates, few automotive batteries last beyond six years of regular use. Automotive starting (SLI: Starting, Lighting, Ignition) batteries have many thin plates to maximize current. In general, the thicker the plates the longer the life. They are typically discharged only slightly before recharge. "Deep-cycle" lead–acid batteries such as those used in electric golf carts have much thicker plates to extend longevity. The main benefit of the lead–acid battery is its low cost; its main drawbacks are large size and weight for a given capacity and voltage. Lead–acid batteries should never be discharged to below 20% of their capacity, because internal resistance will cause heat and damage when they are recharged. Deep-cycle lead–acid systems often use a low-charge warning light or a low-charge power cut-off switch to prevent the type of damage that will shorten the battery's life.
Battery life can be extended by storing the batteries at a low temperature, as in a refrigerator or freezer, which slows the side reactions. Such storage can extend the life of alkaline batteries by about 5%; rechargeable batteries can hold their charge much longer, depending upon type. To reach their maximum voltage, batteries must be returned to room temperature; discharging an alkaline battery at 250 mA at 0 °C is only half as efficient as at 20 °C. Alkaline battery manufacturers such as Duracell do not recommend refrigerating batteries.
== Hazards ==
A battery explosion is generally caused by misuse or malfunction, such as attempting to recharge a primary (non-rechargeable) battery, or a short circuit.
When a battery is recharged at an excessive rate, an explosive gas mixture of hydrogen and oxygen may be produced faster than it can escape from within the battery (e.g. through a built-in vent), leading to pressure build-up and eventual bursting of the battery case. In extreme cases, battery chemicals may spray violently from the casing and cause injury. An expert summary of the problem indicates that lithium-ion batteries use "liquid electrolytes to transport lithium ions between the anode and the cathode. If a battery cell is charged too quickly, it can cause a short circuit, leading to explosions and fires". Car batteries are most likely to explode when a short circuit generates very large currents. Such batteries produce hydrogen, which is very explosive, when they are overcharged (because of electrolysis of the water in the electrolyte). During normal use, the amount of overcharging is usually very small and generates little hydrogen, which dissipates quickly. However, when "jump starting" a car, the high current can cause the rapid release of large volumes of hydrogen, which can be ignited explosively by a nearby spark, e.g. when disconnecting a jumper cable.
Overcharging (attempting to charge a battery beyond its electrical capacity) can also lead to a battery explosion, in addition to leakage or irreversible damage. It may also cause damage to the charger or device in which the overcharged battery is later used.
Disposing of a battery via incineration may cause an explosion as steam builds up within the sealed case.
Many battery chemicals are corrosive, poisonous or both. If leakage occurs, either spontaneously or through accident, the chemicals released may be dangerous. For example, disposable batteries often use a zinc "can" both as a reactant and as the container to hold the other reagents. If this kind of battery is over-discharged, the reagents can emerge through the cardboard and plastic that form the remainder of the container. The active chemical leakage can then damage or disable the equipment that the batteries power. For this reason, many electronic device manufacturers recommend removing the batteries from devices that will not be used for extended periods of time.
Many types of batteries employ toxic materials such as lead, mercury, and cadmium as an electrode or electrolyte. When each battery reaches end of life it must be disposed of properly to prevent environmental damage. Batteries are one form of electronic waste (e-waste). E-waste recycling services recover toxic substances, which can then be used for new batteries. Of the nearly three billion batteries purchased annually in the United States, about 179,000 tons end up in landfills across the country.
Batteries may be harmful or fatal if swallowed. Small button cells can be swallowed, in particular by young children. While in the digestive tract, the battery's electrical discharge may lead to tissue damage; such damage is occasionally serious and can lead to death. Ingested disk batteries do not usually cause problems unless they become lodged in the gastrointestinal tract. The most common place for disk batteries to become lodged is the esophagus, resulting in clinical sequelae. Batteries that successfully traverse the esophagus are unlikely to lodge elsewhere. The likelihood that a disk battery will lodge in the esophagus is a function of the patient's age and battery size. Older children do not have problems with batteries smaller than 21–23 mm. Liquefaction necrosis may occur because sodium hydroxide is generated by the current produced by the battery (usually at the anode). Perforation has occurred as rapidly as 6 hours after ingestion.
Some battery manufacturers have added a bitter-tasting coating to batteries to discourage swallowing.
== Legislation and regulation ==
Legislation around electric batteries includes such topics as safe disposal and recycling.
In the United States, the Mercury-Containing and Rechargeable Battery Management Act of 1996 banned the sale of mercury-containing batteries, enacted uniform labeling requirements for rechargeable batteries and required that rechargeable batteries be easily removable. California and New York City prohibit the disposal of rechargeable batteries in solid waste. The rechargeable battery industry operates nationwide recycling programs in the United States and Canada, with dropoff points at local retailers.
The Battery Directive of the European Union has similar requirements, in addition to requiring increased recycling of batteries and promoting research on improved battery recycling methods. In accordance with this directive all batteries to be sold within the EU must be marked with the "collection symbol" (a crossed-out wheeled bin). This must cover at least 3% of the surface of prismatic batteries and 1.5% of the surface of cylindrical batteries. All packaging must be marked likewise.
In response to reported accidents and failures, occasionally ignition or explosion, recalls of devices using lithium-ion batteries have become more common in recent years.
On 9 December 2022, the European Parliament reached an agreement to force, from 2026, manufacturers to design all electrical appliances sold in the EU (and not used predominantly in wet conditions) so that consumers can easily remove and replace batteries themselves.
== See also ==
Battery simulator
Nanowire battery
Search for the Super Battery
== References ==
== Bibliography ==
Dingrando, Laurel; et al. (2007). Chemistry: Matter and Change. New York: Glencoe/McGraw-Hill. ISBN 978-0-07-877237-5. Ch. 21 (pp. 662–695) is on electrochemistry.
Fink, Donald G.; H. Wayne Beaty (1978). Standard Handbook for Electrical Engineers, Eleventh Edition. New York: McGraw-Hill. ISBN 978-0-07-020974-9.
Knight, Randall D. (2004). Physics for Scientists and Engineers: A Strategic Approach. San Francisco: Pearson Education. ISBN 978-0-8053-8960-9. Chs. 28–31 (pp. 879–995) contain information on electric potential.
Linden, David; Thomas B. Reddy (2001). Handbook of Batteries. New York: McGraw-Hill. ISBN 978-0-07-135978-8.
Saslow, Wayne M. (2002). Electricity, Magnetism, and Light. Toronto: Thomson Learning. ISBN 978-0-12-619455-5. Chs. 8–9 (pp. 336–418) have more information on batteries.
Turner, James Morton. Charged: A History of Batteries and Lessons for a Clean Energy Future (University of Washington Press, 2022). online review
== External links ==
Media related to Electric batteries at Wikimedia Commons
Non-rechargeable batteries (archived 22 October 2013)
HowStuffWorks: How batteries work
Other Battery Cell Types
DoITPoMS Teaching and Learning Package – "Batteries"
Electricity

Electricity is the set of physical phenomena associated with the presence and motion of matter possessing an electric charge. Electricity is related to magnetism, both being part of the phenomenon of electromagnetism, as described by Maxwell's equations. Common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others.
The presence of either a positive or negative electric charge produces an electric field. The motion of electric charges is an electric current and produces a magnetic field. In most applications, Coulomb's law determines the force acting on an electric charge. Electric potential is the work done to move an electric charge from one point to another within an electric field, typically measured in volts.
Electricity plays a central role in many modern technologies, serving in electric power where electric current is used to energise equipment, and in electronics dealing with electrical circuits involving active components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies.
The study of electrical phenomena dates back to antiquity, with theoretical understanding progressing slowly until the 17th and 18th centuries. The development of the theory of electromagnetism in the 19th century marked significant progress, leading to electricity's industrial and residential application by electrical engineers by the century's end. This rapid expansion in electrical technology at the time was the driving force behind the Second Industrial Revolution, with electricity's versatility driving transformations in both industry and society. Electricity is integral to applications spanning transport, heating, lighting, communications, and computation, making it the foundation of modern industrial society.
== History ==
Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE described them as the "protectors" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients with ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them.
Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artefact was electrical in nature.
Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word electricus ("of amber" or "like amber", from ἤλεκτρον, elektron, the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646. Isaac Newton made early investigations into electricity, with an idea of his written down in his book Opticks arguably the beginning of the field theory of the electric force.
Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges.
In 1775, Hugh Williamson reported a series of experiments to the Royal Society on the shocks delivered by the electric eel; that same year the surgeon and anatomist John Hunter described the structure of the fish's electric organs. In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his "On Physical Lines of Force" in 1861 and 1862.: 148
While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life.
In 1887, Heinrich Hertz discovered that electrodes illuminated with ultraviolet light create electric sparks more easily.: 843–44 In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for "his discovery of the law of the photoelectric effect". The photoelectric effect is also employed in photocells such as can be found in solar panels.
The first solid-state device was the "cat's-whisker detector" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor.
Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, followed by the bipolar junction transistor in 1948.
== Concepts ==
=== Electric charge ===
By modern convention, the charge carried by electrons is defined as negative, and that by protons is positive. Before these particles were discovered, Benjamin Franklin had defined a positive charge as being the charge acquired by a glass rod when it is rubbed with a silk cloth. A proton by definition carries a charge of exactly 1.602176634×10^−19 coulombs. This value is also defined as the elementary charge. No object can have a charge smaller than the elementary charge, and any amount of charge an object may carry is a multiple of the elementary charge. An electron has an equal negative charge, i.e. −1.602176634×10^−19 coulombs. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle.
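Because every charge is a whole-number multiple of the elementary charge, counting the carriers behind a macroscopic charge is simple arithmetic; a minimal sketch:

```python
E = 1.602176634e-19  # elementary charge in coulombs (exact by SI definition)

def carriers_for(charge):
    """Number of elementary charges that make up a given charge in coulombs."""
    return charge / E

# One coulomb corresponds to roughly 6.24e18 elementary charges.
n = carriers_for(1.0)
```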
The presence of charge gives rise to an electrostatic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity.: 457 A lightweight ball suspended by a fine thread can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms. This discovery led to the well-known axiom: like-charged objects repel and opposite-charged objects attract.
The force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb's law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them.: 35 The electromagnetic force is very strong, second only in strength to the strong interaction, but unlike that force it operates over all distances. In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is 10^42 times that of the gravitational attraction pulling them together.
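The quoted ratio of about 10^42 can be checked directly from Coulomb's law and Newton's law of gravitation; the separation cancels because both forces are inverse-square. A sketch using rounded CODATA constants:

```python
K = 8.9875517e9      # Coulomb constant, N·m²/C² (rounded)
G = 6.6743e-11       # gravitational constant, N·m²/kg²
E = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837e-31  # electron mass, kg

def force_ratio():
    """Electrostatic repulsion / gravitational attraction for two electrons.

    The distance term cancels because both forces follow an inverse-square law.
    """
    return (K * E ** 2) / (G * M_E ** 2)
```

Evaluating this gives a ratio of order 10^42, consistent with the text.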
Charge originates from certain types of subatomic particles, the most familiar carriers of which are the electron and proton. Electric charge gives rise to and interacts with the electromagnetic force, one of the four fundamental forces of nature. Experiment has shown charge to be a conserved quantity, that is, the net charge within an electrically isolated system will always remain constant regardless of any changes taking place within that system. Within the system, charge may be transferred between bodies, either by direct contact or by passing along a conducting material, such as a wire.: 2–5 The informal term static electricity refers to the net presence (or 'imbalance') of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other.
Charge can be measured by a number of means, an early instrument being the gold-leaf electroscope, which although still in use for classroom demonstrations, has been superseded by the electronic electrometer.: 2–5
=== Electric current ===
The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some materials, called electrical conductors, but will not flow through an electrical insulator.
By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation.
The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second,: 17 the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.
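The slow drift of the carriers follows from the relation I = n·A·q·v. The sketch below uses assumed illustrative values: 1 A through a 1 mm² copper wire, with copper's conventional free-electron density of about 8.5×10^28 per m³:

```python
def drift_velocity(current, carrier_density, area, charge=1.602176634e-19):
    """Average carrier drift speed v = I / (n * A * q), in metres per second."""
    return current / (carrier_density * area * charge)

v = drift_velocity(current=1.0, carrier_density=8.5e28, area=1e-6)
# v comes out on the order of 0.07 mm/s: "fractions of a millimetre per second"
```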
Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840.: 23–24 One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass.: 370 He had discovered electromagnetism, a fundamental interaction between electricity and magnetism. The level of electromagnetic emissions generated by electric arcing is high enough to produce electromagnetic interference, which can be detrimental to the workings of adjacent equipment.
In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced, for example, by a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative.: 11 If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave.: 206–07 Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady state direct current, such as inductance and capacitance.: 223–25 These properties however can become important when circuitry is subjected to transients, such as when first energised.
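The claim that an alternating current averages to zero yet still delivers energy can be verified numerically; a minimal sketch sampling one full cycle of a sine wave:

```python
import math

# Sample one full cycle of a sinusoidal current i(t) proportional to sin(2*pi*t/T).
N = 100_000
samples = [math.sin(2 * math.pi * k / N) for k in range(N)]

mean_current = sum(samples) / N               # ~0: no net charge is transported
mean_power = sum(s * s for s in samples) / N  # ~0.5: energy is still delivered
```

The mean of the current is zero, but the mean of its square (proportional to the power dissipated in a resistor) is not, which is why AC can do useful work.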
=== Electric field ===
The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker.
An electric field generally varies in space, and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point.: 469–70 The conceptual charge, termed a 'test charge', must be vanishingly small to prevent its own electric field disturbing the main field and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, having both magnitude and direction, it follows that an electric field is a vector field.: 469–70
The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday, whose term 'lines of force' still sometimes sees use. The field lines are the paths that a point positive charge would seek to make as it was forced to move within the field; they are however an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines. Field lines emanating from stationary charges have several key properties: first, they originate at positive charges and terminate at negative charges; second, they must enter any good conductor at right angles, and third, they may never cross nor close in on themselves.: 479
A hollow conducting body carries all its charge on its outer surface. The field is therefore 0 at all places inside the body.: 88 This is the operating principle of the Faraday cage, a conducting metal shell that isolates its interior from outside electrical effects.
The principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre.: 2 The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, and raises the electric field in the air to greater than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV and have discharge energies as great as 250 kWh.: 201–02
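A simple flashover check for small air gaps follows from the 30 kV/cm figure above; a sketch (treating the field as uniform across the gap, which is an idealisation):

```python
BREAKDOWN_SMALL_GAP = 30e3  # V per cm for small air gaps, from the text

def arcs_across(voltage, gap_cm, breakdown=BREAKDOWN_SMALL_GAP):
    """True if the average field across the gap exceeds the breakdown strength."""
    return voltage / gap_cm > breakdown

# 10 kV across a 1 mm (0.1 cm) gap gives 100 kV/cm: well past breakdown,
# so an arc is expected; 1 kV across a 1 cm gap (1 kV/cm) is not.
```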
The field strength is greatly affected by nearby conducting objects, and it is particularly intense when it is forced to curve around sharply pointed objects. This principle is exploited in the lightning conductor, the sharp spike of which acts to encourage the lightning strike to develop there, rather than to the building it serves to protect.: 155
=== Electric potential ===
The concept of electric potential is closely linked to that of the electric field. A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity.: 494–98 This definition of potential, while formal, has little practical application, and a more useful concept is that of electric potential difference: the energy required to move a unit charge between two specified points. The electric field is conservative, which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated.: 494–98 The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage.
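For a point charge, the potential (with infinity as the zero reference) and the path-independent work it implies can be sketched as follows; the charge values are illustrative:

```python
K = 8.9875517e9  # Coulomb constant, N·m²/C² (rounded)

def potential(q, r):
    """Potential of a point charge q at distance r, with zero at infinity."""
    return K * q / r

def work_to_move(q_test, q_source, r_from, r_to):
    """Work in joules to move q_test between two distances from q_source.

    The field is conservative, so only the endpoints matter, not the path.
    """
    return q_test * (potential(q_source, r_to) - potential(q_source, r_from))

# Bringing a 1 C test charge in from (effectively) infinite distance to a
# point at ~8.99 V costs ~8.99 J: one joule per coulomb per volt.
w = work_to_move(1.0, 1e-9, 1e12, 1.0)
```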
For practical purposes, defining a common reference point to which potentials may be expressed and compared is useful. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. Earth is assumed to be an infinite source of equal amounts of positive and negative charge and is therefore electrically uncharged—and unchargeable.
Electric potential is a scalar quantity. That is, it has only magnitude and not direction. It may be viewed as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will 'fall' across the voltage caused by an electric field. As relief maps show contour lines marking points of equal height, a set of lines marking points of equal potential (known as equipotentials) may be drawn around an electrostatically charged object. The equipotentials cross all lines of force at right angles. They must also lie parallel to a conductor's surface, since otherwise there would be a force along the surface of the conductor that would move the charge carriers to even the potential across the surface.
The electric field was formally defined as the force exerted per unit charge, but the concept of potential allows for a more useful and equivalent definition: the electric field is the local gradient of the electric potential. Usually expressed in volts per metre, the vector direction of the field is the line of greatest slope of potential, and where the equipotentials lie closest together.: 60
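This gradient relationship can be checked numerically for a point charge, where a finite-difference slope of the potential kq/r should recover the analytic field kq/r²; a minimal sketch:

```python
K = 8.9875517e9  # Coulomb constant, N·m²/C² (rounded)

def potential(q, r):
    """Potential of a point charge q at distance r."""
    return K * q / r

def field_from_gradient(q, r, h=1e-6):
    """Field strength as the negative local gradient of the potential."""
    return -(potential(q, r + h) - potential(q, r - h)) / (2 * h)

q, r = 1e-9, 0.5
analytic = K * q / r ** 2            # Coulomb field of a point charge
numeric = field_from_gradient(q, r)  # should agree closely
```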
=== Electromagnets ===
Ørsted's discovery in 1820 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it.: 370 Ørsted's words were that "the electric conflict acts in a revolving manner." The force also depended on the direction of the current, for if the flow was reversed, then the force did too.
Ørsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires carrying currents in opposite directions are forced apart. The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere.
This relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday's invention of the electric motor in 1821. Faraday's homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained.
Experimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday's law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy. Faraday's disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work.
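Faraday's law as stated above can be sketched as a one-line calculation; the flux values below are purely illustrative, not drawn from the source:

```python
# Hedged sketch of Faraday's law: the average induced EMF equals the
# (negative) rate of change of magnetic flux through the circuit.

def induced_emf(delta_flux_wb, delta_t_s):
    """Average EMF in volts for a flux change of delta_flux_wb webers over delta_t_s seconds."""
    return -delta_flux_wb / delta_t_s

# A flux rising by 0.02 Wb over 0.1 s induces an average EMF of -0.2 V.
print(induced_emf(0.02, 0.1))
```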
=== Electric circuits ===
An electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task.
The components in an electric circuit can take many forms, which can include elements such as resistors, capacitors, switches, transformers and electronics. Electronic circuits contain active components, usually semiconductors, and typically exhibit non-linear behaviour, requiring complex analysis. The simplest electric components are those that are termed passive and linear: while they may temporarily store energy, they contain no sources of it, and exhibit linear responses to stimuli.
The resistor is perhaps the simplest of passive circuit elements: as its name suggests, it resists the current through it, dissipating its energy as heat. The resistance is a consequence of the motion of charge through a conductor: in metals, for example, resistance is primarily due to collisions between electrons and ions. Ohm's law is a basic law of circuit theory, stating that the current passing through a resistance is directly proportional to the potential difference across it. The resistance of most materials is relatively constant over a range of temperatures and currents; materials under these conditions are known as 'ohmic'. The ohm, the unit of resistance, was named in honour of Georg Ohm, and is symbolised by the Greek letter Ω. 1 Ω is the resistance that will produce a potential difference of one volt in response to a current of one amp.
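The definition of the ohm translates directly into code; a minimal Python sketch (the function name is illustrative):

```python
def current(voltage_v, resistance_ohm):
    """Ohm's law: I = V / R."""
    return voltage_v / resistance_ohm

# One ohm gives one amp per volt, as in the definition above;
# here, 12 V across a 4-ohm resistor drives a current of 3 A.
print(current(12.0, 4.0))  # 3.0
```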
The capacitor is a development of the Leyden jar: a device that can store charge, and thereby store electrical energy in the resulting field. It consists of two conducting plates separated by a thin insulating dielectric layer; in practice, thin metal foils are coiled together, increasing the surface area per unit volume and therefore the capacitance. The unit of capacitance is the farad, named after Michael Faraday, and given the symbol F: one farad is the capacitance that develops a potential difference of one volt when it stores a charge of one coulomb. A capacitor connected to a voltage supply initially causes a current as it accumulates charge; this current will however decay in time as the capacitor fills, eventually falling to zero. A capacitor will therefore not permit a steady state current, but instead blocks it.
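The decaying charging current described above follows I(t) = (V/R) exp(-t/RC) for a simple series RC circuit. A hedged sketch with illustrative component values (not from the source):

```python
import math

def charging_current(v_supply, r_ohm, c_farad, t_s):
    """Current into a charging series RC circuit: I(t) = (V/R) * exp(-t / (R*C))."""
    return (v_supply / r_ohm) * math.exp(-t_s / (r_ohm * c_farad))

# With V = 5 V, R = 1 kilohm and C = 1000 microfarads, the time constant RC is 1 s:
i0 = charging_current(5.0, 1000.0, 1e-3, 0.0)  # initial current: 5 mA
i1 = charging_current(5.0, 1000.0, 1e-3, 1.0)  # after one time constant: about 1.84 mA
```

After a few time constants the current is negligible, matching the statement that a capacitor blocks steady-state current.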
The inductor is a conductor, usually a coil of wire, that stores energy in a magnetic field in response to the current through it. When the current changes, the magnetic field does too, inducing a voltage between the ends of the conductor. The induced voltage is proportional to the time rate of change of the current. The constant of proportionality is termed the inductance. The unit of inductance is the henry, named after Joseph Henry, a contemporary of Faraday. One henry is the inductance that will induce a potential difference of one volt if the current through it changes at a rate of one ampere per second. The inductor's behaviour is in some regards converse to that of the capacitor: it will freely allow an unchanging current but opposes a rapidly changing one.
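The definition of the henry can be written out the same way (function name illustrative):

```python
def induced_voltage(inductance_h, di_dt_amps_per_s):
    """V = L * dI/dt: voltage induced across an inductor."""
    return inductance_h * di_dt_amps_per_s

# One henry with the current changing at one ampere per second induces one volt,
# exactly as in the definition above.
print(induced_voltage(1.0, 1.0))  # 1.0
```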
=== Electric power ===
Electric power is the rate at which electric energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second.
Electric power, like mechanical power, is the rate of doing work, measured in watts, and represented by the letter P. The term wattage is used colloquially to mean "electric power in watts." The electric power in watts produced by an electric current I consisting of a charge of Q coulombs every t seconds passing through an electric potential (voltage) difference of V is
P = work done per unit time = QV/t = IV
where
Q is electric charge in coulombs
t is time in seconds
I is electric current in amperes
V is electric potential or voltage in volts
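The two forms of the power formula can be checked against each other in a short sketch (the numerical values are illustrative):

```python
def power_from_charge(q_coulombs, v_volts, t_seconds):
    """P = QV / t, power from charge moved through a potential difference."""
    return q_coulombs * v_volts / t_seconds

def power_from_current(i_amps, v_volts):
    """P = IV, the same power expressed via current."""
    return i_amps * v_volts

# 10 C moved through 230 V in 5 s is a current of 2 A, so both forms agree:
assert power_from_charge(10, 230, 5) == power_from_current(2, 230) == 460  # watts
```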
Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ) which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency.
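The kilowatt-hour billing rule described above is simple multiplication; a hedged sketch with made-up appliance figures:

```python
def energy_kwh(power_kw, hours):
    """Energy as billed by a utility: power in kilowatts times running time in hours."""
    return power_kw * hours

# A 2 kW heater running for 3 hours uses 6 kWh, i.e. 6 * 3.6 MJ = 21.6 MJ.
e = energy_kwh(2.0, 3.0)
print(e, e * 3.6)  # kWh and megajoules
```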
=== Electronics ===
Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, sensors and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes digital switching possible, and electronics is widely used in information processing, telecommunications, and signal processing. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a regular working system.
Today, most electronic devices use semiconductor components to perform electron control. The underlying principles that explain how semiconductors work are studied in solid state physics, whereas the design and construction of electronic circuits to solve practical problems are part of electronics engineering.
=== Electromagnetic wave ===
Faraday's and Ampère's work showed that a time-varying magnetic field created an electric field, and a time-varying electric field created a magnetic field. Thus, when either field is changing in time, a field of the other is always induced. These variations are an electromagnetic wave. Electromagnetic waves were analysed theoretically by James Clerk Maxwell in 1864. Maxwell developed a set of equations that could unambiguously describe the interrelationship between electric field, magnetic field, electric charge, and electric current. He could moreover prove that in a vacuum such a wave would travel at the speed of light, and thus light itself was a form of electromagnetic radiation. Maxwell's equations, which unify light, fields, and charge, are one of the great milestones of theoretical physics.
The work of many researchers enabled the use of electronics to convert signals into high frequency oscillating currents and, via suitably shaped conductors, electricity permits the transmission and reception of these signals via radio waves over very long distances.
== Production, storage and uses ==
=== Generation and transmission ===
In the 6th century BC the Greek philosopher Thales of Miletus experimented with amber rods: these were the first studies into the production of electricity. While this method, now known as the triboelectric effect, can lift light objects and generate sparks, it is extremely inefficient. It was not until the invention of the voltaic pile in the eighteenth century that a viable source of electricity became available. The voltaic pile, and its modern descendant, the electrical battery, store energy chemically and make it available on demand in the form of electricity.
Electrical power is usually generated by electro-mechanical generators. These can be driven by steam produced from fossil fuel combustion or the heat released from nuclear reactions, but also more directly from the kinetic energy of wind or flowing water. The steam turbine invented by Sir Charles Parsons in 1884 is still used to convert the thermal energy of steam into a rotary motion that can be used by electro-mechanical generators. Such generators bear no resemblance to Faraday's homopolar disc generator of 1831, but they still rely on his electromagnetic principle that a conductor linking a changing magnetic field induces a potential difference across its ends. Electricity generated by solar panels relies on a different mechanism: solar radiation is converted directly into electricity using the photovoltaic effect.
Demand for electricity grows with great rapidity as a nation modernises and its economy develops. The United States showed a 12% increase in demand during each year of the first three decades of the twentieth century, a rate of growth that is now being experienced by emerging economies such as those of India or China.
Environmental concerns with electricity generation, in particular the contribution of fossil fuel burning to climate change, have led to an increased focus on generation from renewable sources. In the power sector, wind and solar have become cost effective, speeding up an energy transition away from fossil fuels.
=== Transmission and storage ===
The invention in the late nineteenth century of the transformer meant that electrical power could be transmitted more efficiently at a higher voltage but lower current. Efficient electrical transmission meant in turn that electricity could be generated at centralised power stations, where it benefited from economies of scale, and then be despatched relatively long distances to where it was needed.
Normally, demand for electricity must match the supply, as storage of electricity is difficult. A certain amount of generation must always be held in reserve to cushion an electrical grid against inevitable disturbances and losses. With increasing levels of variable renewable energy (wind and solar energy) in the grid, it has become more challenging to match supply and demand. Storage plays an increasing role in bridging that gap. There are four types of energy storage technologies, each in varying states of technological readiness: batteries (electrochemical storage), chemical storage (such as hydrogen), thermal storage, and mechanical storage (such as pumped hydropower).
=== Applications ===
Electricity is a very convenient way to transfer energy, and it has been adapted to a huge, and growing, number of uses. The invention of a practical incandescent light bulb in the 1870s led to lighting becoming one of the first publicly available applications of electrical power. Although electrification brought with it its own dangers, replacing the naked flames of gas lighting greatly reduced fire hazards within homes and factories. Public utilities were set up in many cities targeting the burgeoning market for electrical lighting. In the late 20th century and in modern times, the trend has started to flow in the direction of deregulation in the electrical power sector.
The resistive Joule heating effect employed in filament light bulbs also sees more direct use in electric heating. While this is versatile and controllable, it can be seen as wasteful, since most electrical generation has already required the production of heat at a power station. A number of countries, such as Denmark, have issued legislation restricting or banning the use of resistive electric heating in new buildings. Electricity is however still a highly practical energy source for heating and refrigeration, with air conditioning/heat pumps representing a growing sector for electricity demand for heating and cooling, the effects of which electricity utilities are increasingly obliged to accommodate. Electrification is expected to play a major role in the decarbonisation of sectors that rely on direct fossil fuel burning, such as transport (using electric vehicles) and heating (using heat pumps).
The effects of electromagnetism are most visibly employed in the electric motor, which provides a clean and efficient means of motive power. A stationary motor such as a winch is easily provided with a supply of power, but a motor that moves with its application, such as an electric vehicle, is obliged to either carry along a power source such as a battery or to collect current from a sliding contact such as a pantograph. Electrically powered vehicles are used in public transportation, such as electric buses and trains, and an increasing number of battery-powered electric cars in private ownership.
Electricity is used within telecommunications, and indeed the electrical telegraph, demonstrated commercially in 1837 by Cooke and Wheatstone, was one of its earliest applications. With the construction of first transcontinental, and then transatlantic, telegraph systems in the 1860s, electricity had enabled communications in minutes across the globe. Optical fibre and satellite communication have taken a share of the market for communications systems, but electricity can be expected to remain an essential part of the process.
Electronic devices make use of the transistor, perhaps one of the most important inventions of the twentieth century, and a fundamental building block of all modern circuitry. A modern integrated circuit may contain many billions of miniaturised transistors in a region only a few centimetres square.
== Electricity and the natural world ==
=== Physiological effects ===
A voltage applied to a human body causes an electric current through the tissues, and although the relationship is non-linear, the greater the voltage, the greater the current. The threshold for perception varies with the supply frequency and with the path of the current, but is about 0.1 mA to 1 mA for mains-frequency electricity, though a current as low as a microamp can be detected as an electrovibration effect under certain conditions. If the current is sufficiently high, it will cause muscle contraction, fibrillation of the heart, and tissue burns. The lack of any visible sign that a conductor is electrified makes electricity a particular hazard. The pain caused by an electric shock can be intense, leading electricity at times to be employed as a method of torture. Death caused by an electric shock—electrocution—is still used for judicial execution in some US states, though its use had become very rare by the end of the 20th century.
=== Electrical phenomena in nature ===
Electricity is not a human invention, and may be observed in several forms in nature, notably lightning. Many interactions familiar at the macroscopic level, such as touch, friction or chemical bonding, are due to interactions between electric fields on the atomic scale. The Earth's magnetic field is due to the natural dynamo of circulating currents in the planet's core. Certain crystals, such as quartz, or even sugar, generate a potential difference across their faces when pressed. This phenomenon is known as piezoelectricity, from the Greek piezein (πιέζειν), meaning to press, and was discovered in 1880 by Pierre and Jacques Curie. The effect is reciprocal: when a piezoelectric material is subjected to an electric field it changes size slightly.
Some organisms, such as sharks, are able to detect and respond to changes in electric fields, an ability known as electroreception, while others, termed electrogenic, are able to generate voltages themselves to serve as a predatory or defensive weapon; these are electric fish in different orders. The order Gymnotiformes, of which the best-known example is the electric eel, detect or stun their prey via high voltages generated from modified muscle cells called electrocytes. All animals transmit information along their cell membranes with voltage pulses called action potentials, whose functions include communication by the nervous system between neurons and muscles. An electric shock stimulates this system and causes muscles to contract. Action potentials are also responsible for coordinating activities in certain plants.
== Cultural perception ==
It is said that in the 1850s, British politician William Ewart Gladstone asked the scientist Michael Faraday why electricity was valuable. Faraday answered, "One day sir, you may tax it." However, according to Snopes.com "the anecdote should be considered apocryphal because it isn't mentioned in any accounts by Faraday or his contemporaries (letters, newspapers, or biographies) and only popped up well after Faraday's death."
In the 19th and early 20th centuries, electricity was not part of the everyday life of many people, even in the industrialised Western world. The popular culture of the time accordingly often depicted it as a mysterious, quasi-magical force that can slay the living, revive the dead or otherwise bend the laws of nature. This attitude began with the 1771 experiments of Luigi Galvani in which the legs of dead frogs were shown to twitch on application of animal electricity. "Revitalization" or resuscitation of apparently dead or drowned persons was reported in the medical literature shortly after Galvani's work. These results were known to Mary Shelley when she authored Frankenstein (1818), although she does not name the method of revitalization of the monster. The revitalization of monsters with electricity later became a stock theme in horror films.
As public familiarity with electricity as the lifeblood of the Second Industrial Revolution grew, its wielders were more often cast in a positive light, such as the workers who "finger death at their gloves' end as they piece and repiece the living wires" in Rudyard Kipling's 1907 poem Sons of Martha. Electrically powered vehicles of every sort featured large in adventure stories such as those of Jules Verne and the Tom Swift books. The masters of electricity, whether fictional or real—including scientists such as Thomas Edison, Charles Steinmetz or Nikola Tesla—were popularly conceived of as having wizard-like powers.
With electricity ceasing to be a novelty and becoming a necessity of everyday life in the latter half of the 20th century, it came to attract particular attention from popular culture only when it stops flowing, an event that usually signals disaster. The people who keep it flowing, such as the nameless hero of Jimmy Webb's song "Wichita Lineman" (1968), are still often cast as heroic, wizard-like figures.
== See also ==
Ampère's circuital law, relating the magnetic field around a closed loop to the electric current passing through it
Electric potential energy, the potential energy of a system of charges
Electricity market, the sale of electrical energy
Etymology of electricity, the origin of the word electricity and its current different usages
Hydraulic analogy, an analogy between the flow of water and electric current
Developmental bioelectricity – Electric current produced in living cells
== Notes ==
== References ==
Benjamin, Park (1898), A history of electricity: (The intellectual rise in electricity) from antiquity to the days of Benjamin Franklin, New York: J. Wiley & Sons
Hammond, Percy (1981), "Electromagnetism for Engineers", Nature, 168 (4262), Pergamon: 4–5, Bibcode:1951Natur.168....4G, doi:10.1038/168004b0, ISBN 0-08-022104-1, S2CID 27576009
Morely, A.; Hughes, E. (1994), Principles of Electricity (5th ed.), Longman, ISBN 0-582-22874-3
Nahvi, Mahmood; Joseph, Edminister (1965), Electric Circuits, McGraw-Hill, ISBN 978-0071422413
Naidu, M.S.; Kamataru, V. (1982), High Voltage Engineering, Tata McGraw-Hill, ISBN 0-07-451786-4
Nilsson, James; Riedel, Susan (2007), Electric Circuits, Prentice Hall, ISBN 978-0-13-198925-2
Patterson, Walter C. (1999), Transforming Electricity: The Coming Generation of Change, Earthscan, ISBN 1-85383-341-X
== External links ==
Basic Concepts of Electricity chapter from Lessons In Electric Circuits Vol 1 DC book and series.
"One-Hundred Years of Electricity", May 1931, Popular Mechanics
Socket and plug standards
Electricity Misconceptions
Electricity and Magnetism
Understanding Electricity and Electronics in about 10 Minutes |
Environmental impact of aviation

Aircraft engines produce gases, noise, and particulates from fossil fuel combustion, raising environmental concerns over their global effects and their effects on local air quality.
Jet airliners contribute to climate change by emitting carbon dioxide (CO2), the best understood greenhouse gas, and, with less scientific understanding, nitrogen oxides, contrails and particulates.
Their radiative forcing is estimated at 1.3 to 1.4 times that of CO2 alone, excluding induced cirrus cloud, for which there is a very low level of scientific understanding.
In 2018, global commercial operations generated 2.4% of all CO2 emissions.
Jet airliners became 70% more fuel efficient between 1967 and 2007, and CO2 emissions per revenue ton-kilometer (RTK) in 2018 were 47% of those in 1990. In 2018, CO2 emissions averaged 88 grams per revenue passenger kilometre.
While the aviation industry is more fuel efficient, overall emissions have risen as the volume of air travel has increased. By 2020, aviation emissions were 70% higher than in 2005 and they could grow by 300% by 2050.
Aircraft noise pollution disrupts sleep and children's education, and could increase cardiovascular risk.
Airports can generate water pollution due to their extensive handling of jet fuel and deicing chemicals if not contained, contaminating nearby water bodies.
Aviation activities emit ozone and ultrafine particles, both of which are health hazards. Piston engines used in general aviation burn Avgas, releasing toxic lead.
Aviation's environmental footprint can be reduced through better aircraft fuel economy, and by optimising air traffic control and flight routes to lower the non-CO2 climate effects of NOx, particulates and contrails.
Aviation biofuel, emissions trading and carbon offsetting, part of the ICAO's CORSIA, can lower CO2 emissions. Aviation usage can be lowered by short-haul flight bans, train connections, personal choices and aviation taxation and subsidies. Fuel-powered aircraft may be replaced by hybrid electric aircraft and electric aircraft or by hydrogen-powered aircraft.
Since 2021, IATA members have planned for net-zero carbon emissions by 2050; the ICAO followed in 2022.
== Climate change ==
=== Factors ===
Airplanes emit gases (carbon dioxide, water vapor, nitrogen oxides and carbon monoxide, which bonds with oxygen to become CO2 upon release) and atmospheric particulates (incompletely burned hydrocarbons, sulfur oxides, black carbon), which interact among themselves and with the atmosphere.
While the main greenhouse gas emission from powered aircraft is CO2, jet airliners contribute to climate change in four ways as they fly in the tropopause:
Carbon dioxide (CO2)
CO2 emissions are the most significant and best understood contribution to climate change. The effects of CO2 emissions are similar regardless of altitude. Airport ground vehicles, those used by passengers and staff to access airports, emissions generated by airport construction and aircraft manufacturing also contribute to the greenhouse gas emissions from the aviation industry.
Nitrogen oxides (NOx, nitric oxide and nitrogen dioxide)
In the tropopause, emissions of NOx favor ozone (O3) formation in the upper troposphere. At altitudes from 8 to 13 km (26,000 to 43,000 ft), NOx emissions result in greater concentrations of O3 than surface NOx emissions and these in turn have a greater global warming effect. The effects of O3 on surface concentrations are regional and local, but O3 becomes well mixed globally at mid and upper tropospheric levels. NOx emissions also reduce ambient levels of methane, another greenhouse gas, resulting in a climate cooling effect, though not offsetting the O3 forming effect. Aircraft sulfur and water emissions in the stratosphere tend to deplete O3, partially offsetting the NOx-induced O3 increases, although these effects have not been quantified. Light aircraft and small commuter aircraft fly lower in the troposphere, not in the tropopause.
Contrails and cirrus clouds
Fuel burning produces water vapor, which condenses at high altitude, under cold and humid conditions, into visible line clouds: condensation trails (contrails). They are thought to have a global warming effect, though less significant than CO2 emissions. Contrails are uncommon from lower-altitude aircraft. Cirrus clouds can develop after the formation of persistent contrails and can have an additional global warming effect. Their global warming contribution is uncertain and estimating aviation's overall contribution often excludes cirrus cloud enhancement.
Particulates
Compared with other emissions, sulfate and soot particles have a smaller direct effect: sulfate particles reflect radiation and have a cooling effect, while soot absorbs heat and has a warming effect; particles also influence the formation and properties of clouds. Contrails and cirrus clouds evolving from particles may have a greater radiative forcing effect than CO2 emissions. As soot particles are large enough to serve as condensation nuclei, they are thought to cause the most contrail formation. Soot production may be decreased by reducing the aromatic content of jet fuel.
In 1999, the IPCC estimated aviation's radiative forcing in 1992 to be 2.7 (2 to 4) times that of CO2 alone − excluding the potential effect of cirrus cloud enhancement.
This was updated for 2000, with aviation's radiative forcing estimated at 47.8 mW/m2, 1.9 times the effect of CO2 emissions alone, 25.3 mW/m2.
In 2005, research by David S. Lee, et al., published in the scientific journal Atmospheric Environment estimated the cumulative radiative forcing effect of aviation as 55 mW/m2, which is twice the 28 mW/m2 radiative forcing effect of the cumulative CO2 emissions alone, excluding induced cirrus clouds.
In 2012, research from Chalmers University of Technology estimated this weighting factor at 1.3–1.4 if aviation-induced cirrus is not included, and 1.7–1.8 if it is (within a range of 1.3–2.9). This ratio depends on how aviation activity grows: if the growth is exponential, the ratio is constant, but if the growth stops, the ratio will fall, because the CO2 in the atmosphere due to aviation will continue to rise while the other effects stagnate.
Uncertainties remain on the NOx–O3–CH4 interactions, aviation-produced contrails formation, the effects of soot aerosols on cirrus clouds and measuring non-CO2 radiative forcing.
In 2018, CO2 represented 34.3 mW/m2 of aviation's effective radiative forcing (ERF, on the surface), with a high confidence level (± 6 mW/m2), NOx 17.5 mW/m2 with a low confidence level (± 14) and contrail cirrus 57.4 mW/m2, also with a low confidence level (± 40).
All factors combined represented 43.5 mW/m2 (1.27 that of CO2 alone) excluding contrail cirrus and 101 mW/m2 (±45) including them, 3.5% of the anthropogenic ERF of 2290 mW/m2 (± 1100). Again, it must be remembered that the effect of CO2 accumulates from year to year, unlike the effect of contrails and cirrus clouds.
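The quoted weighting factor can be reproduced from the 2018 ERF figures above (note that the combined total also includes smaller, partly negative terms not listed individually):

```python
# Arithmetic check of the weighting factor, using the ERF values from the text.
co2_erf = 34.3            # mW/m2, CO2 alone
total_excl_cirrus = 43.5  # mW/m2, all factors excluding contrail cirrus

print(round(total_excl_cirrus / co2_erf, 2))  # 1.27, the factor quoted above
```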
=== Volume ===
By 2018, airline traffic reached 4.3 billion passengers with 37.8 million departures, an average of 114 passengers per flight and 8.26 trillion RPKs, an average journey of 1,920 km (1,040 nmi), according to ICAO.
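The per-flight and per-journey averages follow directly from the ICAO totals:

```python
# Deriving the averages quoted above from the 2018 ICAO traffic totals.
passengers = 4.3e9   # passengers carried
departures = 37.8e6  # flights
rpk = 8.26e12        # revenue passenger kilometres

print(round(passengers / departures))  # ~114 passengers per flight
print(round(rpk / passengers))         # ~1,920 km average journey
```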
Traffic was growing continuously, doubling every 15 years despite external shocks, at a 4.3% average yearly rate, and Airbus forecasts expected the growth to continue.
While the aviation industry is more fuel efficient, halving the amount of fuel burned per flight compared to 1990 through technological advancement and operations improvements, overall emissions have risen as the volume of air travel has increased.
Between 1960 and 2018, RPKs increased from 109 to 8,269 billion.
In 1992, aircraft emissions represented 2% of all man-made CO2 emissions, having accumulated a little more than 1% of the total man-made CO2 increase over 50 years.
By 2015, aviation accounted for 2.5% of global CO2 emissions.
In 2018, global commercial operations emitted 918 million tonnes (Mt) of CO2, 2.4% of all CO2 emissions: 747 Mt for passenger transport and 171 Mt for freight operations.
Between 1960 and 2018, CO2 emissions increased 6.8 times from 152 to 1,034 million tonnes per year.
Emissions from flights rose by 32% between 2013 and 2018.
Between 1990 and 2006, greenhouse gas emissions from aviation increased by 87% in the European Union.
In 2010, about 60% of aviation emissions came from international flights, which are outside the emission reduction targets of the Kyoto Protocol. International flights are not covered by the Paris Agreement, either, to avoid a patchwork of individual country regulations. An agreement was adopted by the International Civil Aviation Organization, however, capping airlines' carbon emissions at the year 2020 level, while allowing airlines to buy carbon credits from other industries and projects.
In 1992, aircraft radiative forcing was estimated by the IPCC at 3.5% of the total man-made radiative forcing.
=== Per passenger ===
As fuel accounts for a large share of airline costs (28% by 2007), airlines have a strong incentive to lower their fuel consumption, reducing their environmental footprint.
Jet airliners have become 70% more fuel efficient between 1967 and 2007.
Jetliner fuel efficiency improves continuously, with about 40% of the improvement coming from engines and 30% from airframes.
Efficiency gains were larger early in the jet age than later, with a 55–67% gain from 1960 to 1980 and a 20–26% gain from 1980 to 2000.
The average fuel burn of new aircraft fell 45% from 1968 to 2014, a compounded annual reduction of 1.3% with variable reduction rate.
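The compounded annual reduction quoted above can be verified with the compound-decline formula (1 - r)^years:

```python
# Check: a 45% overall fall in average fuel burn from 1968 to 2014
# corresponds to a compounded annual reduction of about 1.3%.
years = 2014 - 1968          # 46 years
remaining = 1 - 0.45         # 55% of the 1968 fuel burn remains
annual = 1 - remaining ** (1 / years)

print(round(annual * 100, 1))  # ~1.3 (% per year)
```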
By 2018, CO2 emissions per revenue ton-kilometer (RTK) had more than halved compared to 1990, falling to 47% of the 1990 level.
The aviation energy intensity went from 21.2 to 12.3 MJ/RTK between 2000 and 2019, a 42% reduction.
In 2018, CO2 emissions totalled 747 million tonnes for passenger transport, against 8.5 trillion revenue passenger kilometres (RPK), giving an average of 88 grams of CO2 per RPK.
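The per-RPK figure, and the energy-intensity reduction quoted just above, both follow from simple division of the stated totals:

```python
# Deriving the quoted averages from the figures in the text.
co2_tonnes = 747e6  # passenger-transport CO2 in 2018, tonnes
rpk = 8.5e12        # revenue passenger kilometres in 2018

grams_per_rpk = co2_tonnes * 1e6 / rpk  # tonnes -> grams, then per RPK
print(round(grams_per_rpk))             # ~88 g CO2 per RPK

# Energy intensity fell from 21.2 to 12.3 MJ/RTK between 2000 and 2019:
print(round((21.2 - 12.3) / 21.2 * 100))  # ~42 (% reduction)
```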
The UK's Department for Business, Energy and Industrial Strategy (BEIS) calculates that a long-haul flight releases 102 g of CO2 per passenger kilometre, and that a domestic flight in Britain releases 254 g of CO2 equivalent per passenger kilometre, including non-CO2 greenhouse gas emissions such as water vapour.
The ICAO targets a 2% efficiency improvement per year between 2013 and 2050, while the IATA targets 1.5% for 2009–2020 and to cut net CO2 emissions in half by 2050 relative to 2005.
=== Evolution ===
In 1999, the IPCC estimated aviation's radiative forcing may represent 190 mW/m2 or 5% of the total man-made radiative forcing in 2050, with the uncertainty ranging from 100 to 500 mW/m2. If other industries achieve significant reductions in greenhouse gas emissions over time, aviation's share, as a proportion of the remaining emissions, could rise.
Alice Bows-Larkin estimated that, to keep the climate change temperature increase below 2 °C, aviation emissions alone would consume the entire annual global CO2 budget by mid-century: growth projections indicate that aviation will generate 15% of global CO2 emissions even with the most advanced technology forecast, exceeding the entire carbon budget of conventional scenarios that hold the risk of dangerous climate change under 50% by 2050.
In 2013, the National Center for Atmospheric Science at the University of Reading forecast that increasing CO2 levels will result in a significant increase in in-flight turbulence experienced by transatlantic airline flights by the middle of the 21st century. This prediction is supported by data showing that incidents of severe turbulence increased by 55% between 1979 and 2020, attributed to changes in wind velocity at high altitudes.
Aviation CO2 emissions keep growing despite efficiency innovations in aircraft, powerplants and flight operations, as air travel continues to grow.
In 2015, the Center for Biological Diversity estimated that aircraft could generate 43 Gt of carbon dioxide emissions through 2050, consuming almost 5% of the remaining global carbon budget. Without regulation, global aviation emissions may triple by mid-century and could emit more than 3 Gt of carbon annually under a high-growth, business-as-usual scenario.
Many countries have pledged emissions reductions under the Paris Agreement, but the sum of these efforts and pledges remains insufficient, and leaving airplane pollution unaddressed would be a failure despite technological and operational advancements.
The International Energy Agency projects aviation share of global CO2 emissions may grow from 2.5% in 2019 to 3.5% by 2030.
By 2020, global international aviation emissions were around 70% higher than in 2005, and the ICAO forecasts they could grow by a further 300% by 2050 in the absence of additional measures.
By 2050, aviation's negative effects on climate could be decreased by a 2% annual increase in fuel efficiency and a decrease in NOx emissions, as advanced aircraft technologies, operational procedures and renewable alternative fuels reduce the radiative forcing due to sulfate aerosol and black carbon.
== Noise ==
Air traffic causes aircraft noise, which disrupts sleep, adversely affects children's school performance and could increase cardiovascular risk for airport neighbours. Sleep disruption can be reduced by banning or restricting flying at night, but disturbance progressively decreases and legislation differs across countries.
The ICAO Chapter 14 noise standard applies to aeroplanes submitted for certification after 31 December 2017, and after 31 December 2020 for aircraft below 55 t (121,000 lb): 7 EPNdB (cumulative) quieter than Chapter 4. The FAA Stage 5 noise standards are equivalent. Higher-bypass-ratio engines produce less noise; the PW1000G is presented as 75% quieter than previous engines. Serrated edges or 'chevrons' on the back of the nacelle also reduce noise.
A Continuous Descent Approach (CDA) is quieter as less noise is produced while the engines are near idle power. CDA can reduce noise on the ground by ~1–5 dB per flight.
== Water pollution ==
Airports can generate significant water pollution due to their extensive use and handling of jet fuel, lubricants and other chemicals. Chemical spills can be mitigated or prevented by spill containment structures and clean-up equipment such as vacuum trucks, portable berms and absorbents.
Deicing fluids used in cold weather can pollute water, as most of them fall to the ground and surface runoff can carry them to nearby streams, rivers or coastal waters. Deicing fluids are based on ethylene glycol or propylene glycol. Airports use pavement deicers on paved surfaces including runways and taxiways, which may contain potassium acetate, glycol compounds, sodium acetate, urea or other chemicals.
During degradation in surface waters, ethylene and propylene glycol exert high levels of biochemical oxygen demand, consuming oxygen needed by aquatic life. Microbial populations decomposing propylene glycol consume large quantities of dissolved oxygen (DO) in the water column.
Fish, macroinvertebrates and other aquatic organisms need sufficient dissolved oxygen levels in surface waters. Low oxygen concentrations reduce usable aquatic habitat because organisms die if they cannot move to areas with sufficient oxygen levels. Bottom-feeder populations can be reduced or eliminated by low DO levels, changing a community's species profile or altering critical food-web interactions.
Glycol-based deicing fluids are toxic to humans and other mammals. Research into non-toxic alternative deicing fluids is ongoing.
== Air pollution ==
Aviation is the main human source of ozone, a respiratory health hazard, causing an estimated 6,800 premature deaths per year.
Aircraft engines emit ultrafine particles (UFPs) in and near airports, as does ground support equipment. During takeoff, 3 to 50 × 10¹⁵ particles were measured per kg of fuel burned, with significant differences observed depending on the engine. Other estimates include 4 to 200 × 10¹⁵ particles for 0.1–0.7 g, or 14 to 710 × 10¹⁵ particles, or 0.1–10 × 10¹⁵ black carbon particles for 0.046–0.941 g.
In the United States, 167,000 piston aircraft engines, representing three-quarters of private airplanes, burn Avgas, releasing lead into the air. The Environmental Protection Agency estimated this released 34,000 tons of lead into the atmosphere between 1970 and 2007. The Federal Aviation Administration recognizes inhaled or ingested lead leads to adverse effects on the nervous system, red blood cells, and cardiovascular and immune systems. Lead exposure in infants and young children may contribute to behavioral and learning problems and lower IQ.
== Private jet travel ==
A 2024 study published in Communications Earth & Environment revealed that carbon dioxide emissions from private jet travel surged to 15.6 million tonnes in 2023, a 46% increase compared to 2019. Despite serving only 256,000 individuals—approximately 0.003% of the global population—the industry contributes significantly to greenhouse gas emissions.
The research further highlights that nearly half of these flights covered distances shorter than 500 kilometers. Moreover, many flights involved empty legs, where aircraft traveled without passengers, often for repositioning or ferry flights.
The private jet industry is poised for further growth, with projections indicating a 33% increase in the global fleet to 26,000 aircraft by 2033.
== Mitigation ==
Aviation's environmental footprint can be mitigated by reducing air travel, optimizing flight routes, capping emissions, restricting short-distance flights, increasing taxation and decreasing subsidies to the aviation industry. Technological innovation could also mitigate damage to the environment and climate, for example, through the development of electric aircraft, biofuels, and increased fuel efficiency.
In 2016, the International Civil Aviation Organization (ICAO) committed to improve aviation fuel efficiency by 2% per year and to keeping the carbon emissions from 2020 onwards at the same level as those from 2010.
To achieve these goals, multiple measures were identified: more fuel-efficient aircraft technology; development and deployment of sustainable aviation fuels (SAFs); improved air traffic management (ATM); market-based measures like emission trading, levies, and carbon offsetting, the Carbon Offsetting and Reduction Scheme for International Aviation (CORSIA).
In December 2020, the UK Climate Change Committee said that: "Mitigation options considered include demand management, improvements in aircraft efficiency (including use of hybrid electric aircraft), and use of sustainable aviation fuels (biofuels, biowaste to jet and synthetic jet fuels) to displace fossil jet fuel."
In February 2021, Europe's aviation sector unveiled its Destination 2050 sustainability initiative towards zero CO2 emissions by 2050:
aircraft technology improvements for 37% emission reductions;
SAFs for 34%;
economic measures for 8%;
ATM and operations improvements for 6%;
while air traffic should grow by 1.4% per year between 2018 and 2050.
The initiative is led by ACI Europe, ASD Europe, A4E, CANSO and ERA.
This would apply to flights within and departing the European single market and the UK.
In October 2021, the IATA committed to net-zero carbon emissions by 2050. In 2022, the ICAO agreed to support a net-zero carbon emission target for 2050.
The aviation sector could be decarbonized by 2050 with moderate demand growth, continuous efficiency improvements, new short-haul engines, higher SAF production and CO2 removal to compensate for non-CO2 forcing.
With constant air transport demand and aircraft efficiency, decarbonizing aviation would require nearly five times the 2019 worldwide biofuel production, competing with other hard-to-decarbonize sectors, and 0.2 to 3.4 Gt of CO2 removal to compensate for non-CO2 forcing.
Carbon offsets would be preferred if carbon credits are less expensive than SAFs, but they may be unreliable, while specific routing could avoid contrails.
As of 2023, fuel represents 20–30% of the airlines' operating costs, while SAF is 2–4 times more expensive than fossil jet fuel.
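A first-order illustration (not from the source) of why the SAF price premium matters: if fuel is a fraction of operating costs and SAF costs a multiple of fossil jet fuel, switching entirely to SAF multiplies operating costs by a factor that can be computed from the article's own ranges.

```python
# Hypothetical first-order estimate, using the ranges stated in the text:
# fuel at 20-30% of operating costs, SAF at 2-4x the price of fossil jet fuel.
def cost_multiplier(fuel_share: float, price_ratio: float) -> float:
    """Operating-cost multiplier after a full switch to SAF."""
    return 1 + fuel_share * (price_ratio - 1)

print(f"{cost_multiplier(0.20, 2):.2f}")  # low end: 1.20 (+20% operating costs)
print(f"{cost_multiplier(0.30, 4):.2f}")  # high end: 1.90 (+90% operating costs)
```

This is a sketch that ignores fuel hedging, blending mandates and pass-through to fares; it only shows the order of magnitude implied by the quoted figures.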
Projected cost decreases of green hydrogen and carbon capture could make synthetic fuels more affordable, and lower feedstock costs and higher conversion efficiencies would help FT and HEFA biofuels.
Policy incentives like cleaner aviation fuel tax credits and low-carbon fuel standards could induce improvements, and carbon pricing could render SAFs more competitive, accelerating their deployment and reducing their costs through learning and economies of scale.
According to a 2023 Royal Society study, reaching net zero would need replacing fossil aviation fuel with a low or zero carbon energy source, as battery technologies are unlikely to give enough specific energy.
Biofuels can be introduced quickly and with little aircraft modification, but are restricted by scale and feedstock availability, and few are low-carbon.
Producing enough renewable electricity to produce green hydrogen would be a costly challenge and would need substantial aircraft and infrastructure modification.
Synthetic fuels would need little aircraft modification, but necessitates green hydrogen feedstock and large scale direct CO2 air capture at high costs.
Low-carbon Ammonia would also need costly green hydrogen at scale, and would need substantial aircraft and infrastructure modifications.
In its Sixth Assessment Report, the IPCC notes that sustainable biofuels, low-emissions hydrogen, and derivatives (including ammonia and synthetic fuels) can support mitigation of CO2 emissions but some hard-to-abate residual GHG emissions remain and would need to be counterbalanced by deployment of carbon dioxide removal methods.
On 29 March 2023, during a Senate hearing, hydrogen propulsion proponents like ZeroAvia and Universal Hydrogen bemoaned that incumbents like GE Aerospace and Boeing were supporting sustainable aviation fuel (SAF) because it does not require major changes to existing infrastructure.
An April 2023 report by the Sustainable Aero Lab estimates that current in-production aircraft will make up the vast majority of the 2050 fleet, as electric aircraft will not have enough range and hydrogen aircraft will not be available soon enough: the main decarbonisation drivers will be SAF, replacing regional jets with turboprop aircraft, and incentives to replace older jets with new-generation ones.
The airline industry faces a significant climate challenge due to the scarcity of clean fuel options. LanzaJet Inc.'s recently established $200 million facility in Georgia, the first to convert ethanol into jet-engine-compatible fuel, targets an annual production of 9 million gallons of sustainable aviation fuel (SAF). This volume is minuscule compared to global demand: the world's airlines consumed 90 billion gallons of jet fuel last year, and even major airlines like IAG SA (parent company of British Airways) used SAF for only 0.66% of their total fuel consumption, with a goal of 10% by 2030. Incentives such as the $1.75-per-gallon SAF credit offered by the US Inflation Reduction Act, set to expire in 2027, aim to boost SAF usage, and L.E.K. Consulting forecasts that alcohol-to-jet technology will become the dominant source of SAF by the middle of the next decade. Meanwhile, emerging technologies like e-kerosene, though potentially reducing climate impacts significantly, face economic challenges as they cost nearly seven times more than traditional jet fuel, and the future of 45 proposed power-to-liquids plants in Europe remains uncertain, according to Transport & Environment.
=== Technology improvements ===
==== Electric aircraft ====
Electric aircraft operations produce no emissions, and their electricity can be generated from renewable energy. Lithium-ion batteries, including packaging and accessories, give an energy density of 160 Wh/kg, while aviation fuel gives 12,500 Wh/kg. Because electric machines and converters are more efficient, the shaft power available is closer to 145 Wh/kg of battery, while a gas turbine delivers 6,555 Wh/kg of fuel: a 45:1 ratio. For Collins Aerospace, a ratio of this magnitude rules out electric propulsion for long-range aircraft. By November 2019, the German Aerospace Center estimated large electric planes could be available by 2040. Large, long-haul aircraft are unlikely to become electric before 2070, or even within the 21st century, whilst smaller aircraft can be electrified. As of May 2020, the largest electric airplane was a modified Cessna 208B Caravan.
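The usable-energy gap described above follows from the quoted shaft-level figures:

```python
# Ratio of usable shaft energy per kilogram, using the figures in the text:
# ~145 Wh/kg from a battery through efficient electric machines, versus
# ~6,555 Wh/kg from aviation fuel through a gas turbine.
battery_wh_per_kg = 145
fuel_wh_per_kg = 6555
ratio = fuel_wh_per_kg / battery_wh_per_kg
print(f"fuel advantage: {ratio:.0f}:1")  # ≈ 45:1
```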
For the UK's Committee on Climate Change (CCC), huge technology shifts are uncertain, but consultancy Roland Berger points to 80 new electric aircraft programmes in 2016–2018: all-electric for the smaller two-thirds and hybrid for the larger aircraft, with forecast commercial service dates in the early 2030s on short-haul routes like London to Paris, and all-electric aircraft not expected before 2045. Berger predicts a 24% CO2 share for aviation by 2050 if fuel efficiency improves by 1% per year and no electric or hybrid aircraft enter service, dropping to 3–6% if, starting in 2030, regulatory constraints force the replacement of 10-year-old aircraft by electric or hybrid ones, reaching 70% of the 2050 fleet. This would, however, greatly reduce the value of the existing fleet.
Limits to the supply of battery cells could hamper their aviation adoption, as they compete with other industries like electric vehicles.
Lithium-ion batteries have proven fragile and fire-prone and their capacity deteriorates with age. However, alternatives are being pursued, such as sodium-ion batteries.
==== Hydrogen-powered aircraft ====
In 2020, Airbus unveiled liquid-hydrogen-powered aircraft concepts as zero-emissions airliners, poised for 2035.
Aviation, like industrial processes that cannot be electrified, could primarily use hydrogen-based fuel.
A 2020 study by the EU Clean Sky 2 and Fuel Cells and Hydrogen 2 Joint Undertakings found that hydrogen could power short-range aircraft by 2035. A short-range aircraft (< 2,000 km, 1,100 nmi) with hybrid fuel-cell/turbine propulsion could reduce climate impact by 70–80% for a 20–30% additional cost; a medium-range airliner with H2 turbines could have a 50–60% reduced climate impact for a 30–40% additional cost; and a long-range aircraft (> 7,000 km, 3,800 nmi), also with H2 turbines, could reduce climate impact by 40–50% for a 40–50% additional cost. Research and development would be required in aircraft technology and into hydrogen infrastructure, regulations and certification standards.
==== Sustainable aviation fuels (SAF) ====
An aviation biofuel (also known as bio-jet fuel, sustainable aviation fuel (SAF), or bio-aviation fuel (BAF)) is a biofuel used to power aircraft. The International Air Transport Association (IATA) considers it a key element in reducing the environmental impact of aviation. Aviation biofuel is used to decarbonize medium and long-haul air travel. These types of travel generate the most emissions and could extend the life of older aircraft types by lowering their carbon footprint. Synthetic paraffinic kerosene (SPK) refers to any non-petroleum-based fuel designed to replace kerosene jet fuel, which is often, but not always, made from biomass.
Biofuels are biomass-derived fuels from plants, animals, or waste; depending on which type of biomass is used, they could lower CO2 emissions by 20–98% compared to conventional jet fuel.
The first test flight using blended biofuel was in 2008, and in 2011, blended fuels with 50% biofuels were allowed on commercial flights. In 2023 SAF production was 600 million liters, representing 0.2% of global jet fuel use. By 2024, SAF production was to increase to 1.3 billion liters (1 million tonnes), representing 0.3% of global jet fuel consumption and 11% of global renewable fuel production. This increase came as major US production facilities delayed their ramp-up until 2025, having initially been expected to reach 1.9 billion liters.
Aviation biofuel can be produced from plant or animal sources such as Jatropha, algae, tallows, waste oils, palm oil, Babassu, and Camelina (bio-SPK); from solid biomass using pyrolysis processed with a Fischer–Tropsch process (FT-SPK); with an alcohol-to-jet (ATJ) process from waste fermentation; or from synthetic biology through a solar reactor. Small piston engines can be modified to burn ethanol.
Sustainable biofuels are an alternative to electrofuels. Sustainable aviation fuel is certified as being sustainable by a third-party organisation.
==== Electrofuels (e-fuels) ====
The Potsdam Institute for Climate Impact Research reported a €800–1,200 mitigation cost per ton of CO2 for hydrogen-based e-fuels.
Those could be reduced to €20–270 per ton of CO2 in 2050, but maybe not early enough to replace fossil fuels.
Climate policies could bear the risk of e-fuel uncertain availability, and Hydrogen and e-fuels may be prioritised when direct electrification is inaccessible.
==== Aircraft with lower design speed and altitude ====
According to a research project focusing on short- to medium-range passenger aircraft, designing for subsonic instead of transonic speed (about 15% lower) would save 21% of fuel compared to an aircraft of conventional design speed with similar size, range and expected general technology improvements. The lower Mach number and turboprop rather than turbofan propulsion lead to a lower flight altitude, with a disproportionately large reduction in non-CO2 emissions. Such advanced turboprop aircraft could thus potentially achieve over 60% climate impact reduction compared to current short- to medium-range passenger aircraft, even before switching to synthetic fuels.
=== Reducing air travel ===
==== Measures ====
The ICCT estimates that 3% of the global population take regular flights.
Stefan Gössling of the Western Norway Research Institute estimates that 1% of the world population emits half of commercial aviation's CO2, while close to 90% do not fly in a given year.
In early 2022, the European Investment Bank published the results of its 2021–2022 Climate Survey, showing that 52% of Europeans under 30, 37% of those aged 30 to 64 and 25% of those aged 65 and above planned to travel by air for their summer holidays in 2022; and that 27% of those under 30, 17% of those aged 30–64 and 12% of those aged 65 and above planned to travel by air to a faraway destination.
Short-haul flight ban
A short-haul flight ban is a prohibition imposed by governments on airlines to establish and maintain a flight connection over a certain distance, or by organisations or companies on their employees for business travel using existing flight connections over a certain distance, in order to mitigate the environmental impact of aviation (most notably to reduce anthropogenic greenhouse gas emissions, the leading cause of climate change). In the 21st century, several governments, organisations and companies have imposed restrictions and even prohibitions on short-haul flights, stimulating or pressuring travellers to opt for more environmentally friendly means of transportation, especially trains.
Flight shame
In Sweden the concept of "flight shame" or "flygskam" has been cited as a cause of falling air travel. Swedish rail company SJ AB reports that twice as many Swedish people chose to travel by train instead of by air in summer 2019 compared with the previous year. Swedish airports operator Swedavia reported 4% fewer passengers across its 10 airports in 2019 compared to the previous year: a 9% drop for domestic passengers and 2% for international passengers.
Personal allowances
Climate change mitigation can be backed by Personal carbon allowances (PCAs) where all adults receive "an equal, tradable carbon allowance that reduces over time in line with national targets." Everyone would have a share of allowed carbon emissions and would need to trade further emissions allowances. An alternative would be rationing everyone's flights: an "individual cap on air travel, that people can trade with each other".
=== Economic measures ===
==== Emissions trading ====
ICAO has endorsed emissions trading to reduce aviation CO2 emissions, with guidelines to be presented to the 2007 ICAO Assembly. Within the European Union, the European Commission has included aviation in the European Union Emissions Trading Scheme, operated since 2012, capping airline emissions and providing incentives to lower emissions through more efficient technology or by buying carbon credits from other companies. The Centre for Aviation, Transport and Environment at Manchester Metropolitan University estimates the only way to lower emissions is to put a price on carbon and to use market-based measures like the EU ETS.
==== Taxation and subsidies ====
Financial measures can discourage airline passengers, promote other transportation modes and motivate airlines to improve fuel efficiency. Aviation taxation includes:
air passenger taxes, paid by passengers for environmental reasons, which may vary by distance and include domestic flights;
departure taxes, paid by passengers leaving the country, which sometimes also apply outside aviation;
jet fuel taxes, paid by airlines on the consumed jet fuel; jet fuel taxation is applied in the United States but banned in the European Union.
Consumer behavior can be influenced by cutting subsidies for unsustainable aviation and subsidising the development of sustainable alternatives.
In a September–October 2019 poll conducted for the European Investment Bank, 72% of EU citizens said they would support a carbon tax on flights.
Aviation taxation could reflect all its external costs and could be included in an emissions trading scheme.
International aviation emissions escaped international regulation until the ICAO triennial conference in 2016 agreed on the CORSIA offset scheme.
Due to low or nonexistent taxes on aviation fuel, air travel has a competitive advantage over other transportation modes.
=== Carbon offsetting ===
A carbon offset is a means of compensating aviation emissions by saving enough carbon or absorbing carbon back into plants through photosynthesis (for example, by planting trees through reforestation or afforestation) to balance the carbon emitted by a particular action.
However, carbon credits permanence and additionality can be questionable. More than 90% of rainforest offset credits certified by Verra's Verified Carbon Standard may not represent genuine carbon reductions.
==== Consumer option ====
Some airlines offer carbon offsets to passengers to cover the emissions created by their flight, invested in green technology such as renewable energy and research into future technology. Airlines offering carbon offsets include British Airways, Continental Airlines, easyJet, Air Canada, Air New Zealand, Delta Air Lines, Emirates Airlines, Gulf Air, Jetstar, Lufthansa, Qantas, United Airlines and Virgin Australia. Consumers can also purchase offsets on the individual market. There are certification standards for these, including the Gold Standard and Green-e.
==== National carbon budgets ====
In the UK, transportation has replaced power generation as the largest emissions source, including aviation's 4% contribution. This is expected to expand until 2050, and passenger demand may need to be reduced. For the UK Committee on Climate Change (CCC), the UK target of an 80% reduction from 1990 to 2050 was still achievable as of 2019, but the committee suggests the Paris Agreement should tighten its emission targets.
Their position is that emissions in problematic sectors, like aviation, should be offset by greenhouse gas removal, carbon capture and storage and reforestation.
The UK will include international aviation and shipping in their carbon budgets and hopes other countries will too.
==== Airline offsets ====
Some airlines have been carbon-neutral like Costa Rican Nature Air, or claim to be, like Canadian Harbour Air Seaplanes. Long-haul low-cost venture Fly POP aims to be carbon neutral.
In 2019, Air France announced it would offset CO2 emissions on its 450 daily domestic flights, that carry 57,000 passengers, from January 2020, through certified projects.
The company will also offer its customers the option to voluntarily offset all their flights, and aims to reduce its emissions per passenger-kilometre by 50% by 2030, compared to 2005.
Starting in November 2019, UK budget carrier EasyJet decided to offset carbon emissions for all its flights, through investments in atmospheric carbon reduction projects.
It claims to be the first major operator to be carbon neutral, at a cost of £25 million for its 2019–2020 financial year.
Its CO2 emissions were 77 g per passenger-kilometre in its 2018–2019 financial year, down from 78.4 g the previous year.
From January 2020, British Airways began offsetting its 75 daily domestic flights emissions through carbon-reduction project investments.
The airline seeks to become carbon neutral by 2050 with fuel-efficient aircraft, sustainable fuels and operational changes.
Passengers flying overseas can offset their flights for £1 to Madrid in economy or £15 to New York in business-class.
US low-cost carrier JetBlue planned to use offsets for its emissions from domestic flights starting in July 2020, the first major US airline to do so. It also plans to use sustainable aviation fuel made from waste by Finnish refiner Neste starting in mid-2020. In August 2020, JetBlue became entirely carbon-neutral for its U.S. domestic flights, using efficiency improvements and carbon offsets. Delta Air Lines pledged to do the same within ten years.
To become carbon neutral by 2050, United Airlines is investing in 1PointFive, a company jointly owned by Occidental Petroleum and Rusheen Capital Management, to build the largest carbon capture and storage facility in the US using Carbon Engineering technology, aiming to offset nearly 10% of its emissions.
=== Air traffic management improvements ===
An improved air traffic management system, with more direct routes than suboptimal air corridors and optimized cruising altitudes, would allow airlines to reduce their emissions by up to 18%. In the European Union, a Single European Sky has been proposed since 1999 to avoid overlapping airspace restrictions between EU countries and to reduce emissions. By 2007, 12 million tons of CO2 emissions per year were caused by the lack of a Single European Sky. As of September 2020, the Single European Sky has still not been completely achieved, costing 6 billion euros in delays and causing 11.6 million tonnes of excess CO2 emissions.
=== Operations improvements ===
Non-CO2 emissions
Besides carbon dioxide, aviation produces nitrogen oxides (NOx), particulates, unburned hydrocarbons (UHC) and contrails. Flight routes can be optimized: modelling the CO2, H2O and NOx effects of transatlantic flights in winter shows that climate forcing can be lowered by up to 60% for westbound flights and about 25% for jet-stream-following eastbound flights, at a 10–15% cost increase because the longer distances and lower altitudes consume more fuel; a 0.5% cost increase can already reduce climate forcing by up to 25%. Cruising 2,000 ft (about 600 m) below the optimal altitude gives 21% lower radiative forcing, while cruising 2,000 ft above it gives 9% higher radiative forcing.
Nitrogen oxides (NOx)
NOx emissions from jet engines fell by over 40% between 1997 and 2003 as designers worked to reduce them. Cruising at a 2,000 ft (610 m) lower altitude could reduce NOx-caused radiative forcing from 5 mW/m2 to about 3 mW/m2.
Particulates
Particulates and smoke were a problem with early jet engines at high power settings; modern engines are designed so that no smoke is produced at any point in the flight.
Unburned hydrocarbons (UHC)
Produced by incomplete combustion, unburned hydrocarbons increase with low compressor pressures and/or relatively low combustor temperatures. Like particulates, they have been eliminated in modern jet engines through improved design and technology.
Contrails
Contrail formation could be reduced by lowering the cruise altitude at the cost of slightly increased flight times, but this would be limited by airspace capacity, especially in Europe and North America, and by the increased fuel burn from lower efficiency at lower altitudes, increasing CO2 emissions by 4%. Contrail radiative forcing could be minimized through scheduling: night flights cause 60–80% of the forcing for only 25% of the air traffic, and winter flights contribute half of the forcing for only 22% of the air traffic. As 2% of flights are responsible for 80% of contrail radiative forcing, changing flight altitude by 2,000 ft (610 m) to avoid high humidity for 1.7% of flights would reduce contrail formation by 59%. DLR's ECLIF3 study, flying an Airbus A350, shows that sustainable aviation fuel reduces contrail ice-crystal formation by 56% and soot particles by 35%, possibly due to its lower sulphur content as well as its low aromatic and naphthalene content.
== See also ==
== References ==
=== Works cited ===
IPCC (2022). Shukla P, Skea J, Slade R, Al Khourdajie A, et al. (eds.). Climate Change 2022: Mitigation of Climate Change. Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press.
== Further reading ==
Institutional
"Aviation Emissions, Impacts & Mitigation: A Primer" (PDF). FAA Office of Environment and Energy. January 2015.
"Strategic Research & Innovation Agenda" (PDF). Advisory Council for Aviation Research and Innovation in Europe. 2017. Archived from the original (PDF) on 18 July 2020.
"European Aviation Environmental Report" (PDF). EASA. 2019. Archived from the original (PDF) on 16 March 2019.
"Environmental Report". ICAO. 2019.
Concerns
"airportwatch.org.uk". AirportWatch. oppose any expansion of aviation and airports likely to damage the human or natural environment, and to promote an aviation policy for the UK which is in full accordance with the principles of sustainable development
Industry
"Aviation: Benefits Beyond Borders". Air Transport Action Group. information on the many industry measures underway to limit the impact of aviation on the environment
"sustainableaviation.co.uk". Sustainable Aviation. collective approach of UK aviation to tackling the challenge of ensuring a sustainable future
"The aviation sector's climate action framework" (PDF). Air Transport Action Group. November 2015.
"Making Net-Zero Aviation Possible" (PDF). Mission Possible Partnership. July 2022. An industry-backed, 1.5°C-aligned transition strategy
Research
"Aviation Sustainability Center". Washington State University and the Massachusetts Institute of Technology.
"Laboratory for Aviation and the Environment". Massachusetts Institute of Technology.
"Partnership for Air Transportation Noise and Emissions Reduction". Massachusetts Institute of Technology.
"Sustainable Sky Institute". Sustainable Sky Institute.
Studies
Kivits R, Charles MB, Ryan N (2010). "A post-carbon aviation future: Airports and the transition to a cleaner aviation sector". Futures. 42 (3): 199–211. doi:10.1016/j.futures.2009.11.005.
The Heinrich Böll Foundation and the Airbus Group (May 2016). "Aloft – An Inflight Review" (PDF).
Antoine Gelain (10 August 2016). "Opinion: The Uncomfortable Truth About Aviation Emissions". Aviation Week & Space Technology.
"Tracking report: Aviation". International Energy Agency. June 2020.
Hannah Ritchie (22 October 2020). "Climate change and flying: what share of global CO2 emissions come from aviation?". Our World in Data.
"The aviation industry wants to be net zero—but not soon". The Economist. 14 May 2023.
Ethanol (also called ethyl alcohol, grain alcohol, drinking alcohol, or simply alcohol) is an organic compound with the chemical formula CH3CH2OH. It is an alcohol, with its formula also written as C2H5OH, C2H6O or EtOH, where Et stands for ethyl. Ethanol is a volatile, flammable, colorless liquid with a characteristic wine-like odor and pungent taste. As a psychoactive depressant, it is the active ingredient in alcoholic beverages, and the second most consumed drug globally behind caffeine.
Ethanol is naturally produced by the fermentation process of sugars by yeasts or via petrochemical processes such as ethylene hydration. Historically it was used as a general anesthetic, and has modern medical applications as an antiseptic, disinfectant, solvent for some medications, and antidote for methanol poisoning and ethylene glycol poisoning. It is used as a chemical solvent and in the synthesis of organic compounds, and as a fuel source for lamps, stoves, and internal combustion engines. Ethanol also can be dehydrated to make ethylene, an important chemical feedstock. As of 2023, world production of ethanol fuel was 112.0 gigalitres (2.96×1010 US gallons), coming mostly from the U.S. (51%) and Brazil (26%).
The term "ethanol" originates from the ethyl group, coined in 1834, and was officially adopted in 1892, while "alcohol", which now refers broadly to a class of similar compounds, originally described a powdered cosmetic and only later came to mean ethanol specifically. Ethanol occurs naturally as a byproduct of yeast metabolism in environments like overripe fruit and palm blossoms, during plant germination under anaerobic conditions, in interstellar space, and in human breath; in rare cases it is produced internally due to auto-brewery syndrome.
Ethanol has been used since ancient times as an intoxicant. Production through fermentation and distillation evolved over centuries across various cultures. Chemical identification and synthetic production began by the 19th century.
== Name ==
Ethanol is the systematic name defined by the International Union of Pure and Applied Chemistry for a compound consisting of an alkyl group with two carbon atoms (prefix "eth-"), having a single bond between them (infix "-an-") and an attached −OH functional group (suffix "-ol").
The "eth-" prefix and the qualifier "ethyl" in "ethyl alcohol" originally came from the name "ethyl" assigned in 1834 to the group C2H5− by Justus Liebig. He coined the word from the German name Aether of the compound C2H5−O−C2H5 (commonly called "ether" in English, more specifically called "diethyl ether"). According to the Oxford English Dictionary, Ethyl is a contraction of the Ancient Greek αἰθήρ (aithḗr, "upper air") and the Greek word ὕλη (hýlē, "wood, raw material", hence "matter, substance"). Ethanol was coined as a result of a resolution on naming alcohols and phenols that was adopted at the International Conference on Chemical Nomenclature that was held in April 1892 in Geneva, Switzerland.
The term alcohol now refers to a wider class of substances in chemistry nomenclature, but in common parlance it remains the name of ethanol. It is a medieval loan from Arabic al-kuḥl, a powdered ore of antimony used since antiquity as a cosmetic, and retained that meaning in Middle Latin. The use of 'alcohol' for ethanol (in full, "alcohol of wine") was first recorded in 1753. Before the late 18th century the term alcohol generally referred to any sublimated substance.
== Uses ==
=== Recreational drug ===
As a central nervous system depressant, ethanol is one of the most commonly consumed psychoactive drugs. Despite alcohol's psychoactive, addictive, and carcinogenic properties, it is readily available and legal for sale in many countries. There are laws regulating the sale, exportation/importation, taxation, manufacturing, consumption, and possession of alcoholic beverages. The most common regulation is prohibition for minors.
In mammals, ethanol is primarily metabolized in the liver and stomach by ADH enzymes. These enzymes catalyze the oxidation of ethanol into acetaldehyde (ethanal):
CH3CH2OH + NAD+ → CH3CHO + NADH + H+
When present in significant concentrations, this metabolism of ethanol is additionally aided by the cytochrome P450 enzyme CYP2E1 in humans, while trace amounts are also metabolized by catalase. The resulting intermediate, acetaldehyde, is a known carcinogen, and poses significantly greater toxicity in humans than ethanol itself. Many of the symptoms typically associated with alcohol intoxication—as well as many of the health hazards typically associated with the long-term consumption of ethanol—can be attributed to acetaldehyde toxicity in humans.
The subsequent oxidation of acetaldehyde into acetate is performed by aldehyde dehydrogenase (ALDH) enzymes. A mutation in the ALDH2 gene that encodes for an inactive or dysfunctional form of this enzyme affects roughly 50% of east Asian populations, contributing to the characteristic alcohol flush reaction that can cause temporary reddening of the skin as well as a number of related, and often unpleasant, symptoms of acetaldehyde toxicity. This mutation is typically accompanied by another mutation in the ADH enzyme ADH1B in roughly 80% of east Asians, which improves the catalytic efficiency of converting ethanol into acetaldehyde.
=== Medical ===
Ethanol is the oldest known sedative, used as an oral general anesthetic during surgery in ancient Mesopotamia and in medieval times. Mild intoxication starts at a blood alcohol concentration of 0.03–0.05%; a concentration of about 0.4% induces anesthetic coma. This use carried a high risk of deadly alcohol intoxication, pulmonary aspiration, and vomiting, which led to the use of alternatives in antiquity, such as opium and cannabis, and later diethyl ether, starting in the 1840s.
Ethanol is used as an antiseptic in medical wipes and hand sanitizer gels for its bactericidal and anti-fungal effects. Ethanol kills microorganisms by dissolving their membrane lipid bilayer and denaturing their proteins, and is effective against most bacteria, fungi and viruses. It is ineffective against bacterial spores, which can be treated with hydrogen peroxide.
A solution of 70% ethanol is more effective than pure ethanol because ethanol relies on water molecules for optimal antimicrobial activity. Absolute ethanol may inactivate microbes without destroying them because the alcohol is unable to fully permeate the microbe's membrane. Ethanol can also be used as a disinfectant and antiseptic by inducing cell dehydration through disruption of the osmotic balance across the cell membrane, causing water to leave the cell, leading to cell death.
Ethanol may be administered as an antidote to ethylene glycol poisoning and methanol poisoning. It does so by acting as a competitive inhibitor against methanol and ethylene glycol for alcohol dehydrogenase (ADH). Though it has more side effects, ethanol is less expensive and more readily available than fomepizole for this role.
Ethanol is used to dissolve many water-insoluble medications and related compounds. Liquid preparations of pain medications, cough and cold medicines, and mouth washes, for example, may contain up to 25% ethanol and may need to be avoided in individuals with adverse reactions to ethanol such as alcohol-induced respiratory reactions. Ethanol is present mainly as an antimicrobial preservative in over 700 liquid preparations of medicine including acetaminophen, iron supplements, ranitidine, furosemide, mannitol, phenobarbital, trimethoprim/sulfamethoxazole and over-the-counter cough medicine.
Some medicinal solutions of ethanol are also known as tinctures.
=== Energy source ===
The largest single use of ethanol is as an engine fuel and fuel additive. Brazil in particular relies heavily upon the use of ethanol as an engine fuel, due in part to its role as one of the world's leading producers of ethanol. Gasoline sold in Brazil contains at least 25% anhydrous ethanol. Hydrous ethanol (about 95% ethanol and 5% water) can be used as fuel in more than 90% of new gasoline-fueled cars sold in the country.
The US and many other countries primarily use E10 (10% ethanol, sometimes known as gasohol) and E85 (85% ethanol) ethanol/gasoline mixtures. It is believed that, over time, a material portion of the ≈150-billion-US-gallon (570,000,000 m3) per year gasoline market will be replaced with fuel ethanol.
Australian law limits the use of pure ethanol from sugarcane waste to 10% in automobiles. Older cars (and vintage cars designed to use a slower burning fuel) should have the engine valves upgraded or replaced.
According to an industry advocacy group, ethanol as a fuel reduces harmful tailpipe emissions of carbon monoxide, particulate matter, oxides of nitrogen, and other ozone-forming pollutants. Argonne National Laboratory analyzed greenhouse gas emissions of many different engine and fuel combinations, and found that biodiesel/petrodiesel blend (B20) showed a reduction of 8%, conventional E85 ethanol blend a reduction of 17% and cellulosic ethanol 64%, compared with pure gasoline. Ethanol has a much greater research octane number (RON) than gasoline, meaning it is less prone to pre-ignition; this allows greater ignition advance, yielding more torque and efficiency in addition to lower carbon emissions.
Ethanol combustion in an internal combustion engine yields many of the products of incomplete combustion produced by gasoline and significantly larger amounts of formaldehyde and related species such as acetaldehyde. This leads to significantly greater photochemical reactivity and more ground-level ozone. These data have been assembled in The Clean Fuels Report's comparison of fuel emissions, which shows that ethanol exhaust generates 2.14 times as much ozone as gasoline exhaust. When this is added into the custom Localized Pollution Index of The Clean Fuels Report, the local pollution of ethanol (pollution that contributes to smog) is rated 1.7, where gasoline is 1.0 and higher numbers signify greater pollution. The California Air Resources Board formalized this issue in 2008 by recognizing control standards for formaldehyde as an emissions control group, much like the conventional NOx and reactive organic gases (ROGs).
More than 20% of Brazilian cars are able to use 100% ethanol as fuel, which includes ethanol-only engines and flex-fuel engines. Flex-fuel engines in Brazil are able to work with all ethanol, all gasoline or any mixture of both. In the United States, flex-fuel vehicles can run on 0% to 85% ethanol (15% gasoline) since higher ethanol blends are not yet allowed or efficient. Brazil supports this fleet of ethanol-burning automobiles with large national infrastructure that produces ethanol from domestically grown sugarcane.
Ethanol's high miscibility with water makes it unsuitable for shipping through modern pipelines the way liquid hydrocarbons are. Mechanics have seen increased cases of damage to small engines (in particular, the carburetor) and attribute the damage to the increased water retention of ethanol-containing fuel.
Ethanol was commonly used as fuel in early bipropellant rocket (liquid-propelled) vehicles, in conjunction with an oxidizer such as liquid oxygen. The German A-4 ballistic rocket of World War II (better known by its propaganda name V-2), which is credited as having begun the space age, used ethanol as the main constituent of B-Stoff. Under such nomenclature, the ethanol was mixed with 25% water to reduce the combustion chamber temperature. The V-2's design team helped develop U.S. rockets following World War II, including the ethanol-fueled Redstone rocket, which launched the first U.S. astronaut on suborbital spaceflight. Alcohols fell into general disuse as more energy-dense rocket fuels were developed, although ethanol was used in recent experimental lightweight rocket-powered racing aircraft.
Commercial fuel cells operate on reformed natural gas, hydrogen or methanol. Ethanol is an attractive alternative due to its wide availability, low cost, high purity and low toxicity. There is a wide range of fuel cell concepts that have entered trials including direct-ethanol fuel cells, auto-thermal reforming systems and thermally integrated systems. The majority of work is being conducted at a research level although there are a number of organizations at the beginning of the commercialization of ethanol fuel cells.
Ethanol fireplaces can be used for home heating or for decoration. Ethanol can also be used as stove fuel for cooking.
=== Other uses ===
Ethanol is an important industrial ingredient. It has widespread use as a precursor for other organic compounds such as ethyl halides, ethyl esters, diethyl ether, acetic acid, and ethyl amines. It is considered a universal solvent, as its molecular structure allows for the dissolving of both polar, hydrophilic and nonpolar, hydrophobic compounds. As ethanol also has a low boiling point, it is easy to remove from a solution that has been used to dissolve other compounds, making it a popular extracting agent for botanical oils. Cannabis oil extraction methods often use ethanol as an extraction solvent, and also as a post-processing solvent to remove oils, waxes, and chlorophyll from solution in a process known as winterization.
Ethanol is found in paints, tinctures, markers, personal care products such as mouthwashes, perfumes and deodorants, and wet specimen preservatives. Polysaccharides precipitate from aqueous solution in the presence of alcohol, and ethanol precipitation is used for this reason in the purification of DNA and RNA. Because of its low freezing point of −114 °C (−173 °F) and low toxicity, ethanol is sometimes used in laboratories (with dry ice or other coolants) as a cooling bath to keep vessels at temperatures below the freezing point of water. For the same reason, it is also used as the active fluid in alcohol thermometers.
== Chemistry ==
Ethanol is a 2-carbon alcohol. Its molecular formula is CH3CH2OH. The structure of the molecule of ethanol is CH3−CH2−OH (an ethyl group linked to a hydroxyl group), which indicates that the carbon of a methyl group (−CH3) is attached to the carbon of a methylene group (−CH2−), which is attached to the oxygen of a hydroxyl group (−OH). It is a constitutional isomer of dimethyl ether. Ethanol is sometimes abbreviated as EtOH, using the common organic chemistry notation of representing the ethyl group (−CH2CH3) with Et.
=== Physical properties ===
Ethanol is a volatile, colorless liquid that has a slight odor. It burns with a smokeless blue flame that is not always visible in normal light. The physical properties of ethanol stem primarily from the presence of its hydroxyl group and the shortness of its carbon chain. Ethanol's hydroxyl group is able to participate in hydrogen bonding, rendering it more viscous and less volatile than less polar organic compounds of similar molecular weight, such as propane. Ethanol's adiabatic flame temperature for combustion in air is 2082 °C or 3779 °F.
Ethanol is slightly more refractive than water, having a refractive index of 1.36242 (at λ=589.3 nm and 18.35 °C or 65.03 °F). The triple point for ethanol is 150 ± 20 K.
=== Solvent properties ===
Ethanol is a versatile solvent, miscible with water and with many organic solvents, including acetic acid, acetone, benzene, carbon tetrachloride, chloroform, diethyl ether, ethylene glycol, glycerol, nitromethane, pyridine, and toluene. Its main use as a solvent is in making tincture of iodine, cough syrups, etc. It is also miscible with light aliphatic hydrocarbons, such as pentane and hexane, and with aliphatic chlorides such as trichloroethane and tetrachloroethylene.
Ethanol's miscibility with water contrasts with the immiscibility of longer-chain alcohols (five or more carbon atoms), whose water miscibility decreases sharply as the number of carbons increases. The miscibility of ethanol with alkanes is limited to alkanes up to undecane: mixtures with dodecane and higher alkanes show a miscibility gap below a certain temperature (about 13 °C for dodecane). The miscibility gap tends to get wider with higher alkanes, and the temperature for complete miscibility increases.
Ethanol-water mixtures have less volume than the sum of their individual components at the given fractions. Mixing equal volumes of ethanol and water results in only 1.92 volumes of mixture. Mixing ethanol and water is exothermic, with up to 777 J/mol being released at 298 K.
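The contraction on mixing can be expressed as a simple percentage. A minimal sketch, using only the figures quoted above (equal volumes of ethanol and water yielding 1.92 volumes of mixture):

```python
# Volume contraction when mixing equal volumes of ethanol and water.
# The 1.92 figure is the observed mixture volume quoted in the text.
ethanol_vol = 1.0
water_vol = 1.0
ideal_total = ethanol_vol + water_vol  # 2.00 volumes if mixing were ideal
actual_total = 1.92                    # observed non-ideal mixture volume
contraction = (ideal_total - actual_total) / ideal_total
print(f"Volume contraction: {contraction:.1%}")  # prints "Volume contraction: 4.0%"
```

The 4% shrinkage reflects the hydrogen-bonded packing of ethanol and water molecules, consistent with the exothermic mixing noted above.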
Hydrogen bonding causes pure ethanol to be hygroscopic to the extent that it readily absorbs water from the air. The polar nature of the hydroxyl group causes ethanol to dissolve many ionic compounds, notably sodium and potassium hydroxides, magnesium chloride, calcium chloride, ammonium chloride, ammonium bromide, and sodium bromide. Sodium and potassium chlorides are slightly soluble in ethanol. Because the ethanol molecule also has a nonpolar end, it will also dissolve nonpolar substances, including most essential oils and numerous flavoring, coloring, and medicinal agents.
The addition of even a few percent of ethanol to water sharply reduces the surface tension of water. This property partially explains the "tears of wine" phenomenon. When wine is swirled in a glass, ethanol evaporates quickly from the thin film of wine on the wall of the glass. As the wine's ethanol content decreases, its surface tension increases and the thin film "beads up" and runs down the glass in channels rather than as a smooth sheet.
=== Azeotrope with water ===
At atmospheric pressure, mixtures of ethanol and water form an azeotrope at about 89.4 mol% ethanol (95.6% ethanol by mass, 97% alcohol by volume), with a boiling point of 351.3 K (78.1 °C). At lower pressure, the composition of the ethanol-water azeotrope shifts to more ethanol-rich mixtures. The minimum-pressure azeotrope has an ethanol fraction of 100% and a boiling point of 306 K (33 °C), corresponding to a pressure of roughly 70 torr (9.333 kPa). Below this pressure, there is no azeotrope, and it is possible to distill absolute ethanol from an ethanol-water mixture.
=== Flammability ===
An ethanol–water solution will catch fire if heated above a temperature called its flash point and an ignition source is then applied to it. For 20% alcohol by mass (about 25% by volume), this will occur at about 25 °C (77 °F). The flash point of pure ethanol is 13 °C (55 °F), but may be influenced very slightly by atmospheric composition such as pressure and humidity. Ethanol mixtures can ignite below average room temperature. Ethanol is considered a flammable liquid (Class 3 Hazardous Material) in concentrations above 2.35% by mass (3.0% by volume; 6 proof). Dishes using burning alcohol for culinary effects are called flambé.
== Natural occurrence ==
Ethanol is a byproduct of the metabolic process of yeast. As such, ethanol will be present in any yeast habitat. Ethanol can commonly be found in overripe fruit. Ethanol produced by symbiotic yeast can be found in bertam palm blossoms. Although some animal species, such as the pentailed treeshrew, exhibit ethanol-seeking behaviors, most show no interest or avoidance of food sources containing ethanol. Ethanol is also produced during the germination of many plants as a result of natural anaerobiosis.
Ethanol has been detected in outer space, forming an icy coating around dust grains in interstellar clouds.
Minute quantities (average 196 ppb) of endogenous ethanol and acetaldehyde have been found in the exhaled breath of healthy volunteers. Auto-brewery syndrome, also known as gut fermentation syndrome, is a rare medical condition in which intoxicating quantities of ethanol are produced through endogenous fermentation within the digestive system.
== Production ==
Ethanol is produced both as a petrochemical, through the hydration of ethylene and, via biological processes, by fermenting sugars with yeast. Which process is more economical depends on prevailing prices of petroleum and grain feed stocks.
=== Sources ===
World production of ethanol in 2006 was 51 gigalitres (1.3×1010 US gal), with 69% of the world supply coming from Brazil and the U.S. Brazilian ethanol is produced from sugarcane, which has relatively high yields (830% more fuel than the fossil fuels used to produce it) compared to some other energy crops. Sugarcane not only has a greater concentration of sucrose than corn (by about 30%), but is also much easier to extract. The bagasse generated by the process is not discarded, but burned by power plants to produce electricity. Bagasse burning accounts for around 9% of the electricity produced in Brazil.
In the 1970s most industrial ethanol in the U.S. was made as a petrochemical, but in the 1980s the U.S. introduced subsidies for corn-based ethanol. According to the Renewable Fuels Association, as of 30 October 2007, 131 grain ethanol bio-refineries in the U.S. have the capacity to produce 7×10^9 US gal (26,000,000 m3) of ethanol per year. An additional 72 construction projects underway in the U.S. could add 6.4 billion US gallons (24,000,000 m3) of new capacity in the next 18 months.
In India ethanol is made from sugarcane. Sweet sorghum is another potential source of ethanol, and is suitable for growing in dryland conditions. The International Crops Research Institute for the Semi-Arid Tropics is investigating the possibility of growing sorghum as a source of fuel, food, and animal feed in arid parts of Asia and Africa. Sweet sorghum has one-third the water requirement of sugarcane over the same time period. It also requires about 22% less water than corn. The world's first sweet sorghum ethanol distillery began commercial production in 2007 in Andhra Pradesh, India.
Ethanol has been produced in the laboratory by converting carbon dioxide via biological and electrochemical reactions.
=== Hydration ===
Ethanol can be produced from petrochemical feed stocks, primarily by the acid-catalyzed hydration of ethylene. It is often referred to as synthetic ethanol.
C2H4 + H2O → C2H5OH
The catalyst is most commonly phosphoric acid, adsorbed onto a porous support such as silica gel or diatomaceous earth. This catalyst was first used for large-scale ethanol production by the Shell Oil Company in 1947. The reaction is carried out in the presence of high pressure steam at 300 °C (572 °F) where a 5:3 ethylene to steam ratio is maintained. This process was used on an industrial scale by Union Carbide Corporation and others. It is no longer practiced in the US as fermentation ethanol produced from corn is more economical.
In an older process, first practiced on the industrial scale in 1930 by Union Carbide but now almost entirely obsolete, ethylene was hydrated indirectly by reacting it with concentrated sulfuric acid to produce ethyl sulfate, which was hydrolyzed to yield ethanol and regenerate the sulfuric acid:
C2H4 + H2SO4 → C2H5HSO4
C2H5HSO4 + H2O → H2SO4 + C2H5OH
=== Fermentation ===
Ethanol in alcoholic beverages and fuel is produced by fermentation. Certain species of yeast (e.g., Saccharomyces cerevisiae) metabolize sugars such as glucose, fructose, and sucrose, producing ethanol and carbon dioxide. The chemical equation below summarizes the conversion of glucose:
C6H12O6 → 2 CH3CH2OH + 2 CO2
Fermentation is the process of culturing yeast under favorable thermal conditions to produce alcohol. This process is carried out at around 35–40 °C (95–104 °F). Toxicity of ethanol to yeast limits the ethanol concentration obtainable by brewing; higher concentrations, therefore, are obtained by fortification or distillation. The most ethanol-tolerant yeast strains can survive up to approximately 18% ethanol by volume.
To produce ethanol from starchy materials such as cereals, the starch must first be converted into sugars. In brewing beer, this has traditionally been accomplished by allowing the grain to germinate, or malt, which produces the enzyme amylase. When the malted grain is mashed, the amylase converts the remaining starches into sugars.
Sugars for ethanol fermentation can be obtained from cellulose. Deployment of this technology could turn a number of cellulose-containing agricultural by-products, such as corncobs, straw, and sawdust, into renewable energy resources. Other agricultural residues such as sugarcane bagasse and energy crops such as switchgrass may also be fermentable sugar sources.
=== Testing ===
Breweries and biofuel plants employ two methods for measuring ethanol concentration. Infrared ethanol sensors measure the vibrational frequency of dissolved ethanol using the C−H band at 2900 cm−1. This method uses a relatively inexpensive solid-state sensor that compares the C−H band with a reference band to calculate the ethanol content. The calculation makes use of the Beer–Lambert law. Alternatively, by measuring the density of the starting material and the density of the product, using a hydrometer, the change in specific gravity during fermentation indicates the alcohol content. This inexpensive and indirect method has a long history in the beer brewing industry.
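The hydrometer method described above can be sketched in a few lines. The linear conversion factor 131.25 is a common brewing approximation (an assumption here, not stated in the text) that holds reasonably well for typical beverage strengths:

```python
# Sketch of the hydrometer (specific-gravity) method: the drop in density
# during fermentation indicates how much sugar was converted to ethanol.
# The factor 131.25 is a standard brewing rule of thumb, not an exact law.
def abv_from_gravity(original_gravity: float, final_gravity: float) -> float:
    """Estimate alcohol by volume (%) from before/after hydrometer readings."""
    return (original_gravity - final_gravity) * 131.25

print(f"{abv_from_gravity(1.050, 1.010):.2f}% ABV")  # prints "5.25% ABV"
```

This is the inexpensive, indirect method; the infrared C−H band measurement gives ethanol content directly via the Beer–Lambert law.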
== Purification ==
Ethylene hydration or brewing produces an ethanol–water mixture. For most industrial and fuel uses, the ethanol must be purified. Fractional distillation at atmospheric pressure can concentrate ethanol to 95.6% by weight (89.5 mole%). This mixture is an azeotrope with a boiling point of 78.1 °C (172.6 °F), and cannot be further purified by distillation. Addition of an entraining agent, such as benzene, cyclohexane, or heptane, allows a new ternary azeotrope comprising the ethanol, water, and the entraining agent to be formed. This lower-boiling ternary azeotrope is removed preferentially, leading to water-free ethanol.
Apart from distillation, ethanol may be dried by addition of a desiccant, such as molecular sieves, cellulose, or cornmeal. The desiccants can be dried and reused. Molecular sieves can be used to selectively absorb the water from the 95.6% ethanol solution. Molecular sieves of pore-size 3 Å, a type of zeolite, effectively sequester water molecules while excluding ethanol molecules. Heating the wet sieves drives out the water, allowing regeneration of their desiccant capability.
Membranes can also be used to separate ethanol and water. Membrane-based separations are not subject to the limitations of the water-ethanol azeotrope because the separations are not based on vapor-liquid equilibria. Membranes are often used in the so-called hybrid membrane distillation process. This process uses a pre-concentration distillation column as the first separating step. The further separation is then accomplished with a membrane operated either in vapor permeation or pervaporation mode. Vapor permeation uses a vapor membrane feed and pervaporation uses a liquid membrane feed.
A variety of other techniques have been discussed, including the following:
Salting out using potassium carbonate, which is insoluble in ethanol, causes a phase separation between the ethanol and the water. This leaves only a very small potassium carbonate impurity in the alcohol, which can be removed by distillation. The method is useful in purifying ethanol because ethanol forms an azeotrope with water;
Direct electrochemical reduction of carbon dioxide to ethanol under ambient conditions using copper nanoparticles on a carbon nanospike film as the catalyst;
Extraction of ethanol from grain mash by supercritical carbon dioxide;
Pervaporation;
Fractional freezing is also used to concentrate fermented alcoholic solutions, such as traditionally made Applejack (beverage);
Pressure swing adsorption.
=== Grades of ethanol ===
Pure ethanol and alcoholic beverages are heavily taxed as psychoactive drugs, but ethanol has many uses that do not involve its consumption. To relieve the tax burden on these uses, most jurisdictions waive the tax when an agent has been added to the ethanol to render it unfit to drink. These include bittering agents such as denatonium benzoate and toxins such as methanol, naphtha, and pyridine. Products of this kind are called denatured alcohol.
Absolute or anhydrous alcohol refers to ethanol with a low water content. There are various grades with maximum water contents ranging from 1% to a few parts per million (ppm). If azeotropic distillation is used to remove water, it will contain trace amounts of the material separation agent (e.g. benzene). Absolute alcohol is not intended for human consumption. Absolute ethanol is used as a solvent for laboratory and industrial applications, where water will react with other chemicals, and as fuel alcohol. Spectroscopic ethanol is an absolute ethanol with a low absorbance in ultraviolet and visible light, fit for use as a solvent in ultraviolet-visible spectroscopy. Pure ethanol is classed as 200 proof in the US, equivalent to 175 degrees proof in the UK system. Rectified spirit, an azeotropic composition of 96% ethanol containing 4% water, is used instead of anhydrous ethanol for various purposes. Spirits of wine are about 94% ethanol (188 proof). The impurities are different from those in 95% (190 proof) laboratory ethanol.
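The proof figures quoted above follow simple linear conversions: US proof is twice the percentage of alcohol by volume, and UK degrees proof are 7/4 of it (so 100% ABV is 200 US proof and 175 UK degrees proof). A small sketch:

```python
# Converting percent alcohol by volume (ABV) to US and UK proof,
# using the relationships implied by the figures in the text.
def us_proof(abv: float) -> float:
    """US proof is exactly twice the ABV."""
    return 2.0 * abv

def uk_proof(abv: float) -> float:
    """UK degrees proof are 7/4 (1.75x) the ABV."""
    return 1.75 * abv

print(us_proof(100))  # prints 200.0 (pure ethanol)
print(uk_proof(100))  # prints 175.0
print(us_proof(95))   # prints 190.0 (laboratory ethanol)
```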
== Reactions ==
Ethanol is classified as a primary alcohol, meaning that the carbon that its hydroxyl group attaches to has at least two hydrogen atoms attached to it as well. Many ethanol reactions occur at its hydroxyl group.
=== Ester formation ===
In the presence of acid catalysts, ethanol reacts with carboxylic acids to produce ethyl esters and water:
RCOOH + HOCH2CH3 → RCOOCH2CH3 + H2O
This reaction, which is conducted on large scale industrially, requires the removal of the water from the reaction mixture as it is formed. Esters react in the presence of an acid or base to give back the alcohol and a salt. This reaction is known as saponification because it is used in the preparation of soap. Ethanol can also form esters with inorganic acids. Diethyl sulfate and triethyl phosphate are prepared by treating ethanol with sulfur trioxide and phosphorus pentoxide respectively. Diethyl sulfate is a useful ethylating agent in organic synthesis. Ethyl nitrite, prepared from the reaction of ethanol with sodium nitrite and sulfuric acid, was formerly used as a diuretic.
=== Dehydration ===
In the presence of acid catalysts, alcohols can be converted to alkenes; ethanol, for example, is dehydrated to ethylene. Typically solid acids such as alumina are used.
CH3CH2OH → H2C=CH2 + H2O
Since water is removed from the same molecule, the reaction is known as intramolecular dehydration. Intramolecular dehydration of an alcohol requires a high temperature and the presence of an acid catalyst such as sulfuric acid. Ethylene produced from sugar-derived ethanol (primarily in Brazil) competes with ethylene produced from petrochemical feedstocks such as naphtha and ethane. At a lower temperature than that of intramolecular dehydration, intermolecular alcohol dehydration may occur producing a symmetrical ether. This is a condensation reaction. In the following example, diethyl ether is produced from ethanol:
2 CH3CH2OH → CH3CH2OCH2CH3 + H2O
=== Combustion ===
Complete combustion of ethanol forms carbon dioxide and water:
C2H5OH (l) + 3 O2 (g) → 2 CO2 (g) + 3 H2O (l); −ΔcH = 1371 kJ/mol = 29.8 kJ/g = 327 kcal/mol = 7.1 kcal/g
C2H5OH (l) + 3 O2 (g) → 2 CO2 (g) + 3 H2O (g); −ΔcH = 1236 kJ/mol = 26.8 kJ/g = 295.4 kcal/mol = 6.41 kcal/g
Specific heat = 2.44 kJ/(kg·K)
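The various unit forms of the combustion enthalpies above are mutually consistent, as a short script can verify (assuming the standard molar mass of ethanol, about 46.07 g/mol, and 4.184 kJ per kcal, values not stated in the text):

```python
# Verify the per-gram and kcal forms of the combustion enthalpies quoted above.
M_ETHANOL = 46.07    # g/mol, molar mass of C2H5OH (assumed standard value)
KJ_PER_KCAL = 4.184  # thermochemical calorie conversion (assumed)

def forms(dh_kj_per_mol):
    """Return (kJ/g, kcal/mol, kcal/g) for a molar enthalpy given in kJ/mol."""
    kj_per_g = dh_kj_per_mol / M_ETHANOL
    return kj_per_g, dh_kj_per_mol / KJ_PER_KCAL, kj_per_g / KJ_PER_KCAL

for dh in (1371, 1236):  # liquid-water and gaseous-water products respectively
    print([round(x, 1) for x in forms(dh)])
# → [29.8, 327.7, 7.1] then [26.8, 295.4, 6.4]
```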
=== Acid-base chemistry ===
Ethanol is a neutral molecule and the pH of a solution of ethanol in water is nearly 7.00. Ethanol can be quantitatively converted to its conjugate base, the ethoxide ion (CH3CH2O−), by reaction with an alkali metal such as sodium:
2 CH3CH2OH + 2 Na → 2 CH3CH2ONa + H2
or a very strong base such as sodium hydride:
CH3CH2OH + NaH → CH3CH2ONa + H2
The acidities of water and ethanol are nearly the same, as indicated by their pKa of 15.7 and 16 respectively. Thus, sodium ethoxide and sodium hydroxide exist in an equilibrium that is closely balanced:
CH3CH2OH + NaOH ⇌ CH3CH2ONa + H2O
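That this equilibrium is "closely balanced" follows directly from the pKa values quoted above; a quick sketch of the calculation:

```python
# Equilibrium constant for CH3CH2OH + OH- <=> CH3CH2O- + H2O.
# K = Ka(ethanol) / Ka(water) = 10**(pKa_water - pKa_ethanol),
# using the pKa values quoted in the text (water 15.7, ethanol 16).
pKa_water, pKa_ethanol = 15.7, 16.0
K = 10 ** (pKa_water - pKa_ethanol)
print(round(K, 2))  # 0.5 — neither side of the equilibrium is strongly favoured
```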
=== Halogenation ===
Ethanol is not used industrially as a precursor to ethyl halides, but the reactions are illustrative. Ethanol reacts with hydrogen halides to produce ethyl halides such as ethyl chloride and ethyl bromide via an SN2 reaction:
CH3CH2OH + HCl → CH3CH2Cl + H2O
The reaction with HCl requires a catalyst such as zinc chloride, while that with HBr requires refluxing with a sulfuric acid catalyst. Ethyl halides can, in principle, also be produced by treating ethanol with more specialized halogenating agents, such as thionyl chloride or phosphorus tribromide.
CH3CH2OH + SOCl2 → CH3CH2Cl + SO2 + HCl
Upon treatment with halogens in the presence of base, ethanol gives the corresponding haloform (CHX3, where X = Cl, Br, I). This conversion is called the haloform reaction.
An intermediate in the reaction with chlorine is the aldehyde called chloral, which forms chloral hydrate upon reaction with water:
4 Cl2 + CH3CH2OH → CCl3CHO + 5 HCl
CCl3CHO + H2O → CCl3C(OH)2H
=== Oxidation ===
Ethanol can be oxidized to acetaldehyde and further oxidized to acetic acid, depending on the reagents and conditions. This oxidation is of no importance industrially, but in the human body, these oxidation reactions are catalyzed by the enzyme alcohol dehydrogenase (ADH) in the liver. The oxidation product of ethanol, acetic acid, is a nutrient for humans, being a precursor to acetyl CoA, whose acetyl group can be spent as energy or used for biosynthesis.
=== Metabolism ===
Ethanol is similar to macronutrients such as proteins, fats, and carbohydrates in that it provides calories. When consumed and metabolized, it contributes 7 kilocalories per gram via ethanol metabolism.
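As an illustration of the 7 kcal/g figure (assuming the common US definition of a standard drink as about 14 g of pure ethanol, a figure not given in this article):

```python
# Calories contributed by the ethanol alone in one US standard drink.
KCAL_PER_GRAM = 7      # ethanol's caloric density, as stated above
STANDARD_DRINK_G = 14  # grams of ethanol per US standard drink (assumed external value)
print(KCAL_PER_GRAM * STANDARD_DRINK_G)  # 98 kcal from the ethanol alone
```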
== Safety ==
Ethanol is very flammable and should not be used around an open flame.
Pure ethanol will irritate the skin and eyes. Nausea, vomiting, and intoxication are symptoms of ingestion. Long-term ingestion can result in serious liver damage. Atmospheric concentrations above one part per thousand exceed the European Union occupational exposure limits.
== History ==
The fermentation of sugar into ethanol is one of the earliest biotechnologies employed by humans. Ethanol has historically been identified variously as spirit of wine or ardent spirits, and as aqua vitae (Latin for "water of life") or aqua vita. The intoxicating effects of its consumption have been known since ancient times. Ethanol has been used by humans since prehistory as the intoxicating ingredient of alcoholic beverages. Dried residue on 9,000-year-old pottery found in China suggests that Neolithic people consumed alcoholic beverages.
The inflammable nature of the exhalations of wine was already known to ancient natural philosophers such as Aristotle (384–322 BCE), Theophrastus (c. 371–287 BCE), and Pliny the Elder (23/24–79 CE). However, this did not immediately lead to the isolation of ethanol, despite the development of more advanced distillation techniques in second- and third-century Roman Egypt. An important recognition, first found in one of the writings attributed to Jābir ibn Ḥayyān (ninth century CE), was that by adding salt to boiling wine, which increases the wine's relative volatility, the flammability of the resulting vapors may be enhanced. The distillation of wine is attested in Arabic works attributed to al-Kindī (c. 801–873 CE) and to al-Fārābī (c. 872–950), and in the 28th book of al-Zahrāwī's (Latin: Abulcasis, 936–1013) Kitāb al-Taṣrīf (later translated into Latin as Liber servatoris). In the twelfth century, recipes for the production of aqua ardens ("burning water", i.e., ethanol) by distilling wine with salt started to appear in a number of Latin works, and by the end of the thirteenth century it had become a widely known substance among Western European chemists.
The works of Taddeo Alderotti (1223–1296) describe a method for concentrating ethanol involving repeated fractional distillation through a water-cooled still, by which an ethanol purity of 90% could be obtained. The medicinal properties of ethanol were studied by Arnald of Villanova (1240–1311 CE) and John of Rupescissa (c. 1310–1366), the latter of whom regarded it as a life-preserving substance able to prevent all diseases (the aqua vitae or "water of life", also called by John the quintessence of wine). In China, archaeological evidence indicates that the true distillation of alcohol began during the Jin (1115–1234) or Southern Song (1127–1279) dynasties. A still has been found at an archaeological site in Qinglong, Hebei, dating to the 12th century. In India, the true distillation of alcohol was introduced from the Middle East, and was in wide use in the Delhi Sultanate by the 14th century.
In 1796, German-Russian chemist Johann Tobias Lowitz obtained pure ethanol by mixing partially purified ethanol (the alcohol-water azeotrope) with an excess of anhydrous alkali and then distilling the mixture over low heat. French chemist Antoine Lavoisier described ethanol as a compound of carbon, hydrogen, and oxygen, and in 1807 Nicolas-Théodore de Saussure determined ethanol's chemical formula. Fifty years later, Archibald Scott Couper published the structural formula of ethanol, one of the first structural formulas determined.
Ethanol was first prepared synthetically in 1825 by Michael Faraday. He found that sulfuric acid could absorb large volumes of coal gas. He gave the resulting solution to Henry Hennell, a British chemist, who found in 1826 that it contained "sulphovinic acid" (ethyl hydrogen sulfate). In 1828, Hennell and the French chemist Georges-Simon Serullas independently discovered that sulphovinic acid could be decomposed into ethanol. Thus, in 1825 Faraday had unwittingly discovered that ethanol could be produced from ethylene (a component of coal gas) by acid-catalyzed hydration, a process similar to current industrial ethanol synthesis.
Ethanol was used as lamp fuel in the U.S. as early as 1840, but a tax levied on industrial alcohol during the Civil War made this use uneconomical. The tax was repealed in 1906. Use as an automotive fuel dates back to 1908, with the Ford Model T able to run on petrol (gasoline) or ethanol. Ethanol still fuels some spirit lamps.
Ethanol intended for industrial use is often produced from ethylene. Ethanol has widespread use as a solvent of substances intended for human contact or consumption, including scents, flavorings, colorings, and medicines. In chemistry, it is both a solvent and a feedstock for the synthesis of other products. It has a long history as a fuel for heat and light, and more recently as a fuel for internal combustion engines.
== See also ==
== References ==
== Further reading ==
Boyce JM, Pittet D (2003). "Hand Hygiene in Healthcare Settings". Atlanta, GA: Centers for Disease Control.
Onuki S, Koziel JA, van Leeuwen J, Jenks WS, Grewell D, Cai L (June 2008). Ethanol production, purification, and analysis techniques: a review. 2008 ASABE Annual International Meeting. Providence, RI. Retrieved 16 February 2013.
"Explanation of US denatured alcohol designations". Sci-toys.
Lange, Norbert Adolph (1967). John Aurie Dean (ed.). Lange's Handbook of Chemistry (10th ed.). McGraw-Hill.
Schmidt, Eckart W. (2022). "Ethanol". Alcohols. Encyclopedia of Liquid Fuels. De Gruyter. pp. 12–32. doi:10.1515/9783110750287-001. ISBN 978-3-11-075028-7.
== External links ==
Alcohol (Ethanol) at The Periodic Table of Videos (University of Nottingham)
International Labour Organization ethanol safety information
National Pollutant Inventory – Ethanol Fact Sheet
CDC – NIOSH Pocket Guide to Chemical Hazards – Ethyl Alcohol
National Institute of Standards and Technology chemical data on ethanol
Chicago Board of Trade news and market data on ethanol futures
Calculation of vapor pressure, liquid density, dynamic liquid viscosity, surface tension of ethanol
Ethanol History A look into the history of ethanol
ChemSub Online: Ethyl alcohol
Industrial ethanol production process flow diagram using ethylene and sulphuric acid |
Eurofighter Typhoon | The Eurofighter Typhoon is a European multinational twin-engine, supersonic, canard delta wing, multirole fighter. The Typhoon was designed originally as an air-superiority fighter and is manufactured by a consortium of Airbus, BAE Systems and Leonardo that conducts the majority of the project through a joint holding company, Eurofighter Jagdflugzeug GmbH. The NATO Eurofighter and Tornado Management Agency, representing the UK, Germany, Italy and Spain, manages the project and is the prime customer.
The aircraft's development effectively began in 1983 with the Future European Fighter Aircraft programme, a multinational collaboration among the UK, Germany, France, Italy and Spain. Previously, Germany, Italy and the UK had jointly developed and deployed the Panavia Tornado combat aircraft and desired to collaborate on a new project, with additional participating EU nations. However, disagreements over design authority and operational requirements led France to leave the consortium to develop the Dassault Rafale independently. A technology demonstration aircraft, the British Aerospace EAP, first flew on 6 August 1986; a Eurofighter prototype made its maiden flight on 27 March 1994. The aircraft's name, Typhoon, was adopted in September 1998, and the first production contracts were also signed that year.
The sudden end of the Cold War reduced European demand for fighter aircraft, led to debate over the aircraft's cost and work share, and protracted the Typhoon's development: the Typhoon entered operational service in 2003 and is now in service with the air forces of Austria, Italy, Germany, the United Kingdom, Spain, Saudi Arabia and Oman. Kuwait and Qatar have also ordered the aircraft, bringing the procurement total to 680 aircraft as of November 2023.
The Eurofighter Typhoon is a highly agile aircraft, designed to be an effective dogfighter in combat. Later production aircraft have been increasingly better equipped to undertake air-to-surface strike missions and to be compatible with an increasing number of different armaments and equipment, including Storm Shadow, Brimstone and Marte ER missiles. The Typhoon had its combat debut during the 2011 military intervention in Libya with the UK's Royal Air Force (RAF) and the Italian Air Force, performing aerial reconnaissance and ground-strike missions. The type has also taken primary responsibility for air-defence duties for the majority of customer nations.
== Development ==
=== Origins ===
In the UK, work commenced as early as 1971 on the development of a manoeuvrable, tactical aircraft to replace the SEPECAT Jaguar (which was then about to enter service with the RAF). This work soon expanded to include an air-superiority capability. In 1972, a specification titled Air Staff Target 403 (AST 403) led to the Hawker P.96 of the late 1970s, an unbuilt design with a relatively conventional planform including a separate tail structure.
Simultaneously, in West Germany, the requirement for a new fighter had resulted in competition between Dornier, VFW-Fokker and Messerschmitt-Bölkow-Blohm (MBB) for a future Luftwaffe contract known as Taktisches Kampfflugzeug 90 ("Tactical Combat Aircraft 90"; TKF-90). Dornier collaborated with Northrop in the US on an acclaimed, but unsuccessful design, known as the Northrop-Dornier ND-102. MBB was successful, with a design including a cranked delta wing, close-coupled-canard controls, and artificial stability.
In 1979, MBB and British Aerospace (BAe) presented a formal proposal to their respective governments for a collaboration, to be known as the European Collaborative Fighter, or European Combat Fighter (ECF). In October 1979, French firm Dassault joined the ECF project. It was at this stage of development the Eurofighter name was first attached to the aircraft. The development of three separate prototypes continued however: MBB continued to refine its TKF-90 concept, and Dassault produced a design known as the ACX.
In the meantime, while the P.96 would have met the original UK specification, it had been cancelled because it was considered to offer little potential for future upgrades and redevelopment. In addition, there was a feeling within the UK aircraft industry that the P.96 would have been too similar to the McDonnell Douglas F/A-18 Hornet, which was then known to be at an advanced stage of development. The P.96 would not have been available until long after the Hornet, which would therefore likely have met and closed off most potential export markets for the P.96. BAe then produced two new proposals: the P.106B, a single-engined lightweight fighter, superficially resembling the JAS 39 Gripen, and the twin-engine P.110. The RAF rejected the P.106 concept on the grounds it had "half the effectiveness of the two-engined aircraft at two-thirds of the cost".
The ECF project collapsed in 1981 for several reasons, including differing requirements, Dassault's insistence on "design leadership" and the British preference for a new version of the RB199 to power the aircraft versus the French preference for the new Snecma M88.
Consequently, the Panavia partners (MBB, BAe and Aeritalia) launched the Agile Combat Aircraft (ACA) programme in April 1982. BAe designers agreed with the overall configuration of the proposed MBB TKF-90, although they rejected some of its more ambitious features such as engine vectoring nozzles and vented trailing edge controls – a form of boundary layer control. The ACA, like the BAe P.110, had a cranked delta wing, canards and a twin tail. One major external difference was the replacement of the side-mounted engine intakes with a chin intake. The ACA was to be powered by a modified version of the RB199. The German and Italian governments withdrew funding, and the UK Ministry of Defence (MoD) agreed to fund 50% of the cost with the remaining 50% to be provided by industry. MBB and Aeritalia signed up and it was agreed that the aircraft would be produced at two sites: BAe Warton and an MBB factory in Germany. In May 1983, BAe announced a contract with the MoD for the development and production of an ACA demonstrator, the Experimental Aircraft Programme.
In 1983, Italy, Germany, France, the UK and Spain launched the "Future European Fighter Aircraft" (FEFA) programme. The aircraft was to have short take off and landing (STOL) and beyond visual range (BVR) capabilities. In 1984, France reiterated its requirement for a carrier-capable version and demanded a leading role. Italy, West Germany and the UK opted out and established a new EFA programme. In Turin on 2 August 1985, West Germany, the UK and Italy agreed to go ahead with the Eurofighter, and confirmed that France, along with Spain, had chosen not to proceed as a member of the project. Despite pressure from France, Spain rejoined the Eurofighter project in early September 1985. France officially withdrew from the project to pursue its own ACX project, which was to become the Dassault Rafale.
By 1986, the programme's cost had reached £180 million. When the EAP programme had started, the cost was supposed to be equally shared by government and industry, but the West German and Italian governments wavered on the agreement and the British government and private finance had to provide £100 million to keep the programme from ending. In April 1986, the British Aerospace EAP was rolled out at BAe Warton. The EAP first flew on 6 August 1986. The Eurofighter bears a strong resemblance to the EAP. Design work continued over the next five years using data from the EAP. Initial requirements were: UK: 250 aircraft, Germany: 250, Italy: 165 and Spain: 100. The share of the production work was divided among the countries in proportion to their projected procurement – BAe (33%), DASA (33%), Aeritalia (21%), and Construcciones Aeronáuticas SA (CASA) (13%).
The Munich-based Eurofighter Jagdflugzeug GmbH was established in 1986 to manage development of the project and EuroJet Turbo GmbH, the alliance of Rolls-Royce, MTU Aero Engines, FiatAvio (now Avio) and ITP for development of the EJ200. The aircraft was known as Eurofighter EFA from the late 1980s until it was renamed EF 2000 in 1992.
By 1990, the selection of the aircraft's radar had become a major stumbling-block. The UK, Italy and Spain supported the Ferranti Defence Systems-led ECR-90, while Germany preferred the APG-65-based MSD2000 (a collaboration between Hughes, AEG and GEC-Marconi). An agreement was reached after UK Defence Secretary Tom King assured his West German counterpart Gerhard Stoltenberg that the British government would approve the project and allow the GEC subsidiary Marconi Electronic Systems to acquire Ferranti Defence Systems from its parent, the Ferranti Group, which was in financial and legal difficulties. GEC thus withdrew its support for the MSD2000.
=== Delays ===
The financial burdens placed on Germany by reunification caused Helmut Kohl to make an election promise to cancel the Eurofighter. In early to mid-1991 German Defence Minister Volker Rühe sought to withdraw Germany from the project in favour of using Eurofighter technology in a cheaper, lighter plane. Because of the amount of money already spent on development, the number of jobs dependent on the project, and the binding commitments on each partner government, Kohl was unable to withdraw; "Rühe's predecessors had locked themselves into the project by a punitive penalty system of their own devising."
In 1995 concerns over workshare appeared. Since the formation of Eurofighter, the workshare split had been agreed at 33/33/21/13 (United Kingdom/Germany/Italy/Spain), based on the number of units being ordered by each contributing nation. All the nations then reduced their orders: the UK cut its order from 250 to 232, Germany from 250 to 140, Italy from 165 to 121 and Spain from 100 to 87. At these order levels the workshare split should have been 39/24/22/15 UK/Germany/Italy/Spain; however, Germany was unwilling to give up such a large amount of work. In January 1996, after much negotiation between the German and UK partners, a compromise was reached whereby Germany would purchase another 40 aircraft. The workshare split was therefore UK 37.42%, Germany 29.03%, Italy 19.52% and Spain 14.03%.
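The compromise percentages quoted above are simply each nation's final order taken as a share of the 620-aircraft total, which can be verified directly:

```python
# Recompute the 1996 workshare split from the final order quantities
# (Germany's 140 aircraft plus the additional 40 agreed in the compromise).
orders = {"UK": 232, "Germany": 140 + 40, "Italy": 121, "Spain": 87}
total = sum(orders.values())  # 620 aircraft
shares = {nation: round(100 * qty / total, 2) for nation, qty in orders.items()}
print(shares)
# → {'UK': 37.42, 'Germany': 29.03, 'Italy': 19.52, 'Spain': 14.03}
```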
At the 1996 Farnborough Airshow the UK announced funding for the construction phase of the project. On 22 December 1997 the defence ministers of the four partner nations signed the contract for production of the Eurofighter.
=== Testing ===
The maiden flight of the Eurofighter prototype took place in Bavaria on 27 March 1994, flown by DASA chief test pilot Peter Weger. In December 2004, Eurofighter Typhoon IPA4 began three months of Cold Environmental Trials (CET) at the Vidsel Air Base in Sweden, the purpose of which was to verify the operational behaviour of the aircraft and its systems in temperatures between −25 and 31 °C. The maiden flight of Instrumented Production Aircraft 7 (IPA7), the first fully equipped Tranche 2 aircraft, took place from EADS' Manching airfield on 16 January 2008.
=== Procurement, production and costs ===
The first production contract was signed on 30 January 1998 between Eurofighter GmbH, Eurojet and NETMA. The procurement totals were as follows: the UK 232, Germany 180, Italy 121, and Spain 87. Production was again allotted according to procurement: BAe (37.42%), DASA (29.03%), Aeritalia (19.52%), and CASA (14.03%).
On 2 September 1998, a naming ceremony was held at Farnborough, United Kingdom. This saw the Typhoon name formally adopted, initially for export aircraft only. The name continues the storm theme started by the Panavia Tornado. It was reportedly resisted by Germany, since the Hawker Typhoon was a fighter-bomber aircraft used by the RAF during the Second World War to attack German targets. The name "Spitfire II" (after the famous British Second World War fighter, the Supermarine Spitfire) had also been considered and rejected for the same reason early in the development programme. In September 1998, contracts were signed for production of 148 Tranche 1 aircraft and procurement of long lead-time items for Tranche 2 aircraft. In March 2008, the final Tranche 1 aircraft was delivered to the German Air Force. On 21 October 2008, the first two of the RAF's 91 Tranche 2 aircraft were delivered to RAF Coningsby.
In July 2009, after almost two years of negotiations, the planned Tranche 3 purchase was split into two parts and the Tranche 3A contract was signed by the partner nations. The "Tranche 3B" order did not go ahead.
The Eurofighter Typhoon is unique in modern combat aircraft in that there are four separate assembly lines. Each partner company assembles its own national aircraft, but builds the same parts for all aircraft (including exports); Premium AEROTEC (main centre fuselage), EADS CASA (right wing, leading edge slats), BAE Systems (BAE) (front fuselage (including foreplanes), canopy, dorsal spine, tail fin, inboard flaperons, rear fuselage section) and Leonardo (left wing, outboard flaperons, rear fuselage sections).
Production is divided into three tranches (see table below). Tranches are a production/funding distinction, and do not imply an incremental increase in capability with each tranche. Tranche 3 aircraft are based on late Tranche 2 aircraft with improvements added. Tranche 3 was split into A and B parts. Tranches were further divided into production standard/capability blocks and funding/procurement batches, though these did not coincide and are not the same thing; e.g., the Eurofighter designated FGR4 by the RAF is a Tranche 1, block 5 aircraft. Batch 1 covered block 1, but batch 2 covered blocks 2, 2B and 5. On 25 May 2011 the 100th production aircraft, ZK315, rolled off the production line at Warton.
In 1985 the estimated cost of 250 UK aircraft was £7 billion. By 1997 the estimated cost was £17 billion; by 2003, £20 billion, and the in-service date (2003, defined as the date of delivery of the first aircraft to the RAF) was 54 months late. After 2003, the MoD refused to release updated cost-estimates on the grounds of commercial sensitivity. However, in 2011, the National Audit Office estimated the UK's "assessment, development, production and upgrade costs eventually hit £22.9 billion" and total programme costs would reach £37 billion.
By 2007, Germany estimated the system cost (aircraft and training, plus spare parts) at €120 million and said it was in perpetual increase. On 17 June 2009, Germany ordered 31 aircraft of Tranche 3A for €2.8 billion, leading to a system cost of €90 million per aircraft. The UK's Committee of Public Accounts reported that mismanagement of the project had helped increase the cost of each aircraft by seventy-five percent. The Spanish MoD put the cost of their Typhoon project up to December 2010 at €11.718 billion, up from an original €9.255 billion and implying a system cost for their 73 aircraft of €160 million.
On 31 March 2009, a Eurofighter Typhoon fired an AIM-120 AMRAAM whilst having its radar in passive mode for the first time; the necessary target data for the missile was acquired by the radar of a second Eurofighter Typhoon and transmitted using the Multifunctional Information Distribution System (MIDS). The entire Typhoon fleet passed the 500,000 flying hours milestone in 2018. As of August 2019, a total of 623 orders had been received.
In July 2016, the ten-year Typhoon Total Availability Enterprise (TyTAN) support deal between the RAF and industry partners BAE and Leonardo was announced that aims to reduce the Typhoon's per-hour operating cost by 30 to 40 percent. This should equate to a saving of at least £550 million ($712 million), which "will be recycled into the programme" and, according to BAE, will result in the Typhoon having a per-hour operating cost "equivalent to a F-16". By 2022 it was estimated that savings would be "over £500 million."
=== Upgrades ===
In 2000, the UK selected the Meteor from MBDA as the long range air-to-air missile armament for its Typhoons with an in-service date (ISD) of December 2011. In December 2002, France, Germany, Spain and Sweden joined the British in a $1.9bn contract for Meteor on Typhoon, the Dassault Rafale and the Saab Gripen. The protracted contract negotiations pushed the ISD to August 2012, and it was further put back by Eurofighter's failure to make trials aircraft available to the Meteor partners. In 2014 the "second element of the Phase 1 Enhancements package known as 'P1Eb'" was announced, allowing "Typhoon to realise both its air-to-air and air-to-ground capability to full effect".
In 2011 Flight International reported that budgetary pressures being encountered by the four original partner nations were limiting upgrades. For example, the four original partner nations were reluctant at that stage to fund enhancements that extend the aircraft's air-to-ground capability, such as integration of the MBDA Storm Shadow cruise missile.
Tranche 3 aircraft ESM/ECM enhancements have focused on improving radiated jamming power through antenna modifications, while EuroDASS is reported to offer a range of new capabilities, including the addition of a digital receiver, extending band coverage to low frequencies (VHF/UHF) and introducing an interferometric receiver with extremely precise geolocation functionality. On the jamming side, EuroDASS is looking at low-band (VHF/UHF) jamming, more capable antennae and new ECM techniques, while protection against missiles is to be enhanced through a new passive missile warning system in addition to the active devices already on board the aircraft. The latest support for self-protection, however, will come from the new active electronically scanned array (AESA) radar that is to replace the Captor system, providing passive, active and cyberwarfare RF capabilities in a spiralled programme. Selex ES has developed a self-contained expendable Digital Radio Frequency Memory (DRFM) jammer for fast jet aircraft known as BriteCloud, which is being studied for integration on the Typhoon.
Eurojet is attempting to find funding to test thrust vectoring control (TVC) nozzles on a flight demonstrator. In April 2014, BAE announced new wind tunnel tests to assess the aerodynamic characteristics of conformal fuel tanks (CFTs). The CFTs, which can be fitted to any Tranche 3 aircraft, could carry 1,500 litres each, increasing the Typhoon's combat radius by about 25% to 1,500 nautical miles (2,778 km).
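The CFT range figures quoted above are mutually consistent, as a quick check shows (assuming the standard conversion of 1.852 km per nautical mile, which is not stated in the text):

```python
# A 25% increase to 1,500 nmi implies a baseline combat radius of 1,200 nmi,
# and converting 1,500 nmi at 1.852 km/nmi reproduces the quoted 2,778 km.
KM_PER_NMI = 1.852          # assumed standard nautical-mile conversion
extended_nmi = 1500
print(extended_nmi / 1.25)       # 1200.0 nmi implied baseline radius
print(extended_nmi * KM_PER_NMI) # 2778.0 km, matching the quoted figure
```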
BAE has completed development of its Striker II Helmet-Mounted Display, which builds on the capabilities of the original Striker Helmet-Mounted Display already in service on the Typhoon. Striker II features a new display with more colour and can transition between day and night seamlessly, eliminating the need for separate night-vision goggles. In addition, the helmet monitors the pilot's exact head position, so the system always knows exactly what information to display. It is compatible with active noise reduction (ANR), a 3-D audio threat-warning system and 3-D communications; these are available as customer options. In 2015, BAE was awarded a £1.7 million contract to study the feasibility of a common weapon launcher capable of carrying multiple weapons and weapon types on a single pylon.
Also in 2015, Airbus flight tested a package of aerodynamic upgrades for the Eurofighter known as the Aerodynamic Modification Kit (AMK) consisting of reshaped (delta) fuselage strakes, extended trailing-edge flaperons and leading-edge root extensions. This increases wing lift by 25% resulting in an increased turn rate, tighter turning radius, and improved nose-pointing ability at low speed with angle of attack values around 45% greater and roll rates up to 100% higher. Eurofighter's Laurie Hilditch said these improvements should increase subsonic turn rate by 15% and give the Eurofighter the sort of "knife-fight in a phone box" turning capability enjoyed by rivals such as Boeing's F/A-18E/F or the Lockheed Martin F-16, without sacrificing the transonic and supersonic high-energy agility inherent to its delta wing-canard configuration. Eurofighter Project Pilot Germany Raffaele Beltrame said: "The handling qualities appeared to be markedly improved, providing more manoeuvrability, agility and precision while performing tasks representative of in-service operations. And it is extremely interesting to consider the potential benefits in the air-to-surface configuration thanks to the increased variety and flexibility of stores that can be carried."
In April 2016, Finmeccanica (now Leonardo) demonstrated the air-to-ground capabilities of its Mode 5 Reverse-Identification friend or foe (IFF) system, which showed that it is possible to give pilots the ability to distinguish between friendly and enemy platforms in a simple fashion using the aircraft's existing transponder. Finmeccanica said NATO is considering the system as a short- to mid-term solution for air-to-surface identification of friendly forces, and thus a way to avoid collateral damage due to friendly fire during close air support operations.
==== UK Project Centurion upgrades ====
With the confirmed retirement date of March 2019 for RAF Tornado GR4s, in 2014 the UK commenced an upgrade programme that would eventually become the £425 million Project Centurion to ensure the Typhoon was able to assume the precision strike duties of the ageing Tornado. The upgrade was delivered under different phases:
Phase 0 – initial multirole upgrades.
Phase 1/P2EA – MBDA Meteor integration and initial Storm Shadow Capability.
Phase 2/P3EA – Full Storm Shadow capability as well as Brimstone integration.
Phase 1 standard aircraft were used operationally for the first time as part of Operation Shader over Iraq and Syria in 2018. On 18 December 2018 the RAF approved release to service for the full Project Centurion package.
==== Proposed upgrade for German Tornado replacement ====
On 24 April 2018, Airbus announced its offer to replace Germany's Panavia Tornado fleet, proposing the integration of new weaponry, performance enhancements and additional capabilities to the Eurofighter Typhoon. This is similar to the upgrade performed as part of the UK's Project Centurion. Integration of air-to-ground weapons has already begun on German Typhoons as part of Project Odin. Among the weapons being offered are the Kongsberg Joint Strike Missile for the anti-ship mission and the Taurus cruise missile.
The consortium is keen to make use of the engine's growth potential to boost thrust by around 15% as well as improve fuel efficiency and range. This will be combined with a redesigned, enlarged 1,800-litre fuel tank; the aircraft is currently fitted with 1,000-litre fuel tanks. Other modifications will include the Aerodynamic Modification Kit, flight tested in 2015, to improve manoeuvrability and handling, particularly with heavy weapon loads. Eurofighter says it is comfortable with delivering integration of the U.S. B61 nuclear weapon onto the aircraft, a process that requires U.S. certification. Eurofighter chief executive Volker Paltzo said he was confident the U.S. government would not use the certification requirements of the weapon as "leverage" to force Germany towards a U.S. platform. A next-generation electronic warfare suite has been planned by the four-country consortium.
In November 2019, Airbus proposed a SEAD capability for the aircraft, a role which is currently performed by the Tornado ECR in German service. The Typhoon ECR would be configured with two Escort Jammer pods under the wings and two Emitter Location Systems at the wing tips. Armament configuration would include four MBDA Meteor, two IRIS-T and six SPEAR-EW in addition to three drop tanks.
On 5 November 2020, the German government approved an order for 38 Tranche 4 aircraft with ground-attack capabilities to replace the Tranche 1 units in German service.
In March 2022, the Luftwaffe ordered 15 ECR electronic warfare conversions to meet the Luftgestützte Wirkung im Elektromagnetischen Spektrum (luWES) requirement. The 15 Typhoon EK aircraft will be converted from existing German Typhoons and equipped with AGM-88E AARGM anti-radiation missiles. The aircraft are expected to be NATO-certified by 2030.
The Tranche 4PE is a further development package aimed at integrating improved weapons (Meteor, Taurus, AMRAAM, GBU, JDAM).
=== Replacement ===
Germany is to replace the Eurofighter with the New Generation Fighter (NGF), co-developed with France and Spain. The RAF and Italian Air Force (AM) plan to replace theirs through the Global Combat Air Programme, a "sixth-generation" fighter that forms part of the UK's wider Future Combat Air System.
== Design ==
=== Airframe overview ===
The Typhoon is a highly agile aircraft at both subsonic and supersonic speeds, achieved through intentionally relaxed stability. The quadruplex digital fly-by-wire control system manages the inherent instability, allowing better manoeuvrability than direct pilot control could achieve. The system is described as "carefree", preventing the pilot from exceeding the permitted manoeuvre envelope. Roll control is primarily achieved by use of the ailerons. Pitch control is by operation of the canards and ailerons, as the canards disturb the airflow to the inner elevons (flaps). Yaw control is provided by a single large rudder. The engines are fed by a chin double intake ramp situated below a splitter plate.
The Typhoon uses lightweight construction (82% composites consisting of 70% carbon fibre composite materials and 12% glass fibre reinforced composites) with an estimated lifespan of 6,000 flying hours.
==== Radar signature reduction features ====
Although the Typhoon was not designed as a stealth fighter, measures were taken to reduce its radar cross-section (RCS), especially from the frontal aspect. For example, the Typhoon has jet inlets that conceal the front of the engines, a strong radar target, from radar. Many important potential radar targets, such as the leading edges of the wing, canards and fin, are highly swept so that they reflect radar energy well away from the front. Some external weapons are mounted semi-recessed into the aircraft, partially shielding them from incoming radar. In addition, radar-absorbent materials (RAM), developed primarily by EADS/DASA, coat many of the most significant reflectors, such as the wing leading edges, the intake edges and interior, the rudder surrounds, and strakes.
The manufacturers carried out tests on the early Eurofighter prototypes from the early 1990s to optimise the aircraft's low-observability characteristics. Testing at Warton on the DA4 prototype measured the aircraft's RCS and investigated the effects of a variety of RAM coatings and composites. Passive sensors such as the PIRATE IRST, which minimise revealing electronic emissions, also reduce the likelihood of detection. While canards generally degrade the RCS from the side because of the corner they form with the fuselage, the flight control system is designed to maintain the elevon trim and the canards at an angle at which they have the smallest RCS.
=== Cockpit ===
The Typhoon features a glass cockpit without any conventional instruments. It incorporates three full colour multi-function head-down displays (MHDDs). The display formats on these MHDDs are manipulated by means of dedicated controls, softkeys, XY cursor, and voice (Direct Voice Input or DVI) command. There is a wide-angle head-up display (HUD) with forward-looking infrared (FLIR), a voice and hands-on throttle and stick (Voice+HOTAS), a Helmet Mounted Symbology System (HMSS), a manual data-entry facility (MDEF) located on the left glareshield and a fully integrated aircraft warning system with a dedicated warnings panel (DWP). There is also an interactive display panel for the MIDS. Reversionary flying instruments, lit by LEDs, are located under a hinged right glareshield. Access to the cockpit is normally via either a telescopic integral ladder or an external version. The integral ladder is stowed in the port side of the fuselage, below the cockpit.
User needs were given a high priority in the cockpit's design; both layout and functionality were developed with feedback and assessments from military pilots and a specialist testing facility. The aircraft is controlled by means of a centre stick (or control stick) and left-hand throttles, designed on the Hands on Throttle and Stick (HOTAS) principle to lower pilot workload. Emergency escape is provided by a Martin-Baker Mk.16A ejection seat, with the canopy being jettisoned by two rocket motors. The HMSS was delayed by years but should have been operational by late 2011. Standard g-force protection is provided by the full-cover anti-g trousers (FCAGTs), a specially developed g suit providing sustained protection up to nine g. German and Austrian Air Force pilots instead wear a hydrostatic g-suit called Libelle (dragonfly) Multi G Plus, which also protects the arms, theoretically giving more complete g tolerance.
In the event of pilot disorientation, the Flight Control System allows for rapid and automatic recovery by the simple press of a button. On selection of this cockpit control the FCS takes full control of the engines and flying controls, and automatically stabilises the aircraft in a wings level, gentle climbing attitude at 300 knots, until the pilot is ready to retake control. The aircraft also has an Automatic Low-Speed Recovery system (ALSR) which prevents it from departing from controlled flight at very low speeds and high angle of attack. The FCS system is able to detect a developing low-speed situation and to raise an audible and visual low-speed cockpit warning. This gives the pilot sufficient time to react and to recover the aircraft manually. If the pilot does not react, however, or if the warning is ignored, the ALSR takes control of the aircraft, selects maximum dry power for the engines and returns the aircraft to a safe flight condition. Depending on the attitude, the FCS employs an ALSR "push", "pull" or "knife-over" manoeuvre.
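The warn-then-intervene sequence described above can be illustrated with a toy sketch. This is purely illustrative: the function name, thresholds and return strings below are invented for the example and do not reflect the actual FCS logic, which is not public.

```python
# Toy illustration of an automatic low-speed recovery sequence:
# warn first, then take over if the pilot does not respond.
# All names and threshold values here are invented for illustration.

def alsr_step(airspeed_kt, aoa_deg, pilot_responded):
    """Return the action the toy system would take this cycle."""
    LOW_SPEED_KT = 120   # invented warning threshold
    HIGH_AOA_DEG = 25    # invented angle-of-attack threshold

    if airspeed_kt >= LOW_SPEED_KT or aoa_deg < HIGH_AOA_DEG:
        return "normal"               # no developing low-speed situation
    if pilot_responded:
        return "manual recovery"      # pilot reacted to the warning in time
    # Pilot did not react: select maximum dry power and recover automatically.
    return "auto recovery: max dry power"

print(alsr_step(300, 5, False))   # routine flight
print(alsr_step(100, 30, True))   # warning issued, pilot recovers manually
print(alsr_step(100, 30, False))  # warning ignored, toy ALSR takes control
```

The key design point the sketch captures is that automatic intervention is a last resort, entered only after the audible and visual warning has gone unanswered.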
The Typhoon Direct Voice Input (DVI) system uses a speech recognition module (SRM), developed by Smiths Aerospace and Computing Devices. It was the first production DVI system used in a military cockpit. DVI provides the pilot with an additional natural mode of command and control over approximately 26 non-critical cockpit functions, to reduce pilot workload, improve aircraft safety, and expand mission capabilities. An important step in the development of the DVI occurred in 1987 when Texas Instruments completed the TMS-320-C30, a digital signal processor, enabling reductions in the size and system complexity required. The project was given the go-ahead in July 1997, with development carried out on the Eurofighter Active Cockpit Simulator at Warton. The DVI system is speaker-dependent, requiring each pilot to create a template. It is not used for safety-critical or weapon-critical tasks, such as weapon release or lowering of the undercarriage. Voice commands are confirmed by visual or aural feedback and serve to reduce pilot workload. All functions are also achievable by means of a conventional button-press or soft-key selections; functions include display management, communications, and management of various systems. EADS Defence and Security in Spain has worked on a new non-template DVI module to allow for continuous speech recognition, speaker voice recognition with common databases (e.g. British English, American English, etc.) and other improvements.
BAE Systems has been awarded a contract to develop new touchscreen cockpit displays and to enhance data-processing capability for the Eurofighter Typhoon.
=== Avionics ===
Navigation is via both GPS and an inertial navigation system. The Typhoon can use Instrument Landing System (ILS) for landing in poor weather. The aircraft also features an enhanced ground proximity warning system (GPWS) based on the TERPROM Terrain Referenced Navigation (TRN) system used by the Panavia Tornado. MIDS provides a Link 16 data link.
The aircraft employs a sophisticated and highly integrated defensive aids sub-system named Praetorian (formerly Euro-DASS). Praetorian monitors and responds automatically to air and surface threats, provides an all-round prioritised assessment, and can respond to multiple threats simultaneously. Threat detection methods include a radar warning receiver (RWR), a missile warning system (MWS) and a laser warning receiver (LWR, only on UK Typhoons). Protective countermeasures consist of chaff, flares, an electronic countermeasures (ECM) suite and a towed radar decoy (TRD). The ESM-ECM and MWS consist of 16 antenna array assemblies and 10 radomes.
Historically, each sensor in an aircraft is treated as a discrete source of information; however this can result in conflicting data and limits the scope for the automation of systems, hence increasing pilot workload. To overcome this, the Typhoon employs sensor fusion techniques. In the Typhoon, fusion of all data sources is achieved through the Attack and Identification System, or AIS. This combines data from the major on-board sensors along with any information obtained from off-board platforms such as AWACS and MIDS. Additionally the AIS integrates all the other major offensive and defensive systems (e.g. DASS & communications). The AIS physically comprises two essentially separate units: the Attack Computer (AC) and the Navigation Computer (NC).
By having a single source of information, pilot workload should be reduced by removing the possibility of conflicting data and the need for cross-checking, improving situational awareness and increasing systems automation. In practice the AIS should allow the Eurofighter to identify targets at distances in excess of 150 nmi (280 km; 170 mi) and acquire and auto-prioritise them at over 100 nmi (190 km; 120 mi). In addition the AIS offers the ability to automatically control emissions from the aircraft, so called EMCON (from EMissions CONtrol). This should aid in limiting the detectability of the Typhoon by opposing aircraft further reducing pilot workload.
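The single-picture idea can be sketched in miniature: track reports from several sources are merged by target identity and auto-prioritised. This is a toy sketch only; the real AIS fusion and prioritisation algorithms are not public, and all data, field names and the nearest-first ranking rule below are invented for illustration.

```python
# Toy sensor-fusion sketch: merge track reports from several sources into
# one entry per target, then auto-prioritise with a simple nearest-first rule.
# All data and field names are invented for illustration.

def fuse_tracks(reports):
    """Merge (sensor, target_id, range_nmi) reports into one picture,
    keeping the shortest reported range per target in this toy model."""
    picture = {}
    for sensor, target_id, range_nmi in reports:
        entry = picture.setdefault(
            target_id, {"sensors": set(), "range_nmi": range_nmi})
        entry["sensors"].add(sensor)                       # one fused entry,
        entry["range_nmi"] = min(entry["range_nmi"], range_nmi)  # no duplicates
    # Auto-prioritise: nearest targets first.
    return sorted(picture.items(), key=lambda kv: kv[1]["range_nmi"])

reports = [
    ("radar",  "T1", 95.0),
    ("irst",   "T1", 93.5),   # same target seen by an on-board passive sensor
    ("link16", "T2", 60.0),   # off-board report received over the data link
]
for target_id, info in fuse_tracks(reports):
    print(target_id, sorted(info["sensors"]), info["range_nmi"])
```

The sketch shows why fusion reduces workload: two reports of target "T1" collapse into one track instead of presenting the pilot with conflicting contacts to cross-check.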
In 2017 a RAF Eurofighter Typhoon demonstrated interoperability with the F-35B using its Multifunction Advanced Data Link (MADL) in a two-week trial known as Babel Fish III, in the Mojave Desert. This was achieved by translating the MADL messages into Link 16 format, thus allowing an F-35 in stealth mode to communicate directly with the Typhoon.
=== Radar and sensors ===
==== Captor radar ====
The Euroradar Captor is a mechanically scanned multi-mode pulse Doppler radar designed for the Eurofighter Typhoon. The Eurofighter operates automatic emission controls (EMCON) to reduce the electromagnetic emissions of the current Captor-M mechanically scanned radar. The Captor-M has three working channels, one intended for classification of jammers and for jamming suppression. A succession of radar software upgrades has enhanced the radar's air-to-air capability. These upgrades have included the R2P programme (initially UK-only, and known as T2P when 'ported' to the Tranche 2 aircraft), which is being followed by R2Q/T2Q. R2P was applied to eight German Typhoons deployed on Red Flag Alaska in 2012.
===== Captor-E AESA variant =====
The Captor-E is an AESA derivative of the original Captor radar, also known as CAESAR (from Captor Active Electronically Scanned Array Radar) being developed by the Euroradar Consortium, led by Selex ES.
Synthetic aperture radar capability is expected to be fielded as part of the AESA radar upgrade, which will give the Eurofighter an all-weather ground-attack capability. The conversion to AESA will also give the Eurofighter a low-probability-of-intercept radar with improved jam resistance. The design includes a gimbal to meet RAF requirements for a wider scan field than a fixed AESA; the coverage of a fixed AESA is limited to 120° in azimuth and elevation. A senior EADS radar expert has claimed that Captor-E is capable of detecting an F-35 from roughly 59 kilometres (37 mi) away.
The first flight of a Eurofighter equipped with a "mass model" of the Captor-E occurred in late February 2014, with flight tests of the actual radar beginning in July of that year. On 19 November 2014 the contract to upgrade to the Captor-E was signed at the offices of EuroRadar lead Selex ES in Edinburgh, in a deal worth €1bn. Kuwait became the launch customer for the Captor-E active electronically scanned array radar in April 2016. Germany has announced the intention to integrate the AESA Captor-E into their Typhoons, beginning in 2022.
In January 2024, it was announced that the first European Common Radar System (ECRS) Mk2 had been fitted to an RAF-operated test and evaluation Typhoon, ZK355 (BS116), at BAE Systems' Warton site. Leonardo and DE&S announced that the initial flight was scheduled to take place later in 2024.
The AESA radar program for the Eurofighter is now split into three European Common Radar System (ECRS) variants:
ECRS Mk0: also called Radar One Plus, this is the baseline Captor-E model which was developed by Leonardo. Hardware development is complete and it is fitted to aircraft delivered to Kuwait and Qatar.
ECRS Mk1: an upgrade of the Mk0 being developed by Hensoldt/Indra, for Germany and Spain. It will be retrofitted to their Tranche 2 and 3 aircraft, and also fitted to both countries' new Tranche 4 models.
ECRS Mk2: also known as Radar Two, a different version developed from the ARTS and Bright Adder demonstrators, and from the Gripen E's ES-05 Raven radar. With electronic warfare/attack capabilities, it is being developed by Leonardo for the RAF, and integrated by BAE Systems. It will initially be applied to Tranche 3 aircraft, but the RAF may upgrade Tranche 2 later. Italy has joined development of the ECRS Mk2, which was part of the Typhoon offer to Finland for its HX Fighter Program.
==== IRST ====
The Passive Infra-Red Airborne Track Equipment (PIRATE) system is an infrared search and track (IRST) system mounted on the port side of the fuselage, forward of the windscreen. Selex ES is the lead contractor which, along with Thales Optronics (system technical authority) and Tecnobit of Spain, make up the EUROFIRST consortium responsible for the system's design and development. Eurofighters starting with Tranche 1 block 5 have the PIRATE. The first Eurofighter Typhoon with PIRATE-IRST was delivered to the Italian Aeronautica Militare in August 2007. More advanced targeting capabilities can be provided with the addition of a targeting pod such as the Litening pod.
When used with the radar in an air-to-air role, it functions as an infrared search and track system, providing passive target detection and tracking. The system can detect variations in temperature at a long range. It also provides a navigation and landing aid. PIRATE is linked to the pilot's helmet-mounted display. It allows the detection of both hot exhaust plumes of jet engines and surface heating caused by friction; processing techniques further enhance the output, giving a near-high resolution image of targets. The output can be directed to any of the Multi-function Head Down Displays, and can also be overlaid on both the Helmet Mounted Sight and the Head Up Display.
Up to 200 targets can be simultaneously tracked using one of several modes: Multiple Target Track (MTT), Single Target Track (STT), Single Target Track Ident (STTI), Sector Acquisition and Slaved Acquisition. In MTT mode the system scans a designated volume of space looking for potential targets. In STT mode PIRATE tracks a single designated target. An extension of this mode, STT Ident, allows for visual identification of the target, the resolution being superior to CAPTOR's. In Sector Acquisition mode PIRATE scans a volume of space under the direction of another onboard sensor such as CAPTOR. In Slaved Acquisition, PIRATE is commanded by data obtained from off-board sensors such as an AWACS. When a target is found in either of these modes, PIRATE automatically designates it and switches to STT.
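The mode behaviour described above, in which the acquisition modes hand off automatically to single-target track once a target is found, can be sketched as a simple state machine. This is illustrative only; the mode names come from the text, but the transition function is an invented simplification of logic that is not public.

```python
from enum import Enum

class Mode(Enum):
    """PIRATE tracking modes named in the text (STTI omitted for brevity)."""
    MTT = "multiple target track"
    STT = "single target track"
    SECTOR_ACQ = "sector acquisition"   # volume cued by another onboard sensor
    SLAVED_ACQ = "slaved acquisition"   # cued by off-board data (e.g. AWACS)

def next_mode(mode, target_found):
    """Invented transition rule: in either acquisition mode, finding a
    target automatically designates it and switches to single-target track."""
    if mode in (Mode.SECTOR_ACQ, Mode.SLAVED_ACQ) and target_found:
        return Mode.STT
    return mode

print(next_mode(Mode.SECTOR_ACQ, True).name)   # STT
print(next_mode(Mode.MTT, True).name)          # MTT (no automatic hand-off)
```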
Once a target has been tracked and identified, PIRATE can be used to cue an appropriately equipped short range missile, i.e. a missile with a high off-boresight tracking capability such as ASRAAM. Additionally the data can be used to augment that of Captor or off-board sensor information via the AIS. This should enable the Typhoon to overcome severe ECM environments and still engage its targets. PIRATE also has a passive ranging capability although the system remains limited when providing passive firing solutions, as it does not have a laser rangefinder.
=== Engines ===
The Eurofighter Typhoon is fitted with two Eurojet EJ200 engines, each capable of providing up to 60 kN (13,500 lbf) of dry thrust and more than 90 kN (20,230 lbf) with afterburners. Using the "war" setting, dry thrust increases by 15% to 69 kN per engine and afterburning thrust by 5% to 95 kN per engine; for a few seconds, up to 102 kN of thrust can be produced without damaging the engine. The EJ200 engine combines leading technologies from each of the four European partner companies: advanced digital control and health monitoring; wide-chord aerofoils and single-crystal turbine blades; and a convergent/divergent exhaust nozzle. Together these give a high thrust-to-weight ratio, multi-mission capability, supercruise performance, low fuel consumption, low cost of ownership, modular construction and growth potential.
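The quoted "war setting" figures follow from simple percentage increases on the baseline thrust; a worked check of the arithmetic:

```python
# Check the "war setting" thrust figures against the quoted percentages.
dry_kn = 60.0      # baseline dry thrust per engine, from the text
reheat_kn = 90.0   # baseline afterburning thrust per engine, from the text

war_dry_kn = round(dry_kn * 1.15, 1)       # +15% -> 69.0 kN
war_reheat_kn = round(reheat_kn * 1.05, 1)  # +5% -> 94.5 kN, quoted as ~95 kN

print(war_dry_kn)      # 69.0
print(war_reheat_kn)   # 94.5
```

Note that the 5% increase yields 94.5 kN exactly, which the text rounds to 95 kN.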
The Typhoon is capable of supersonic cruise without using afterburners (referred to as supercruise). Air Forces Monthly gives a maximum supercruise speed of Mach 1.1 for the RAF FGR4 multirole version; however, in a Singaporean evaluation, a Typhoon managed to supercruise at Mach 1.21 on a hot day with a combat load. Eurofighter states that the Typhoon can supercruise at Mach 1.5. As with the F-22, the Eurofighter can launch weapons while supercruising, extending their range via this "running start". By 2007, the EJ200 engine had accumulated 50,000 engine flying hours in service with the air forces of the four partner nations (Germany, the UK, Spain and Italy).
The EJ200 engine has the potential to be fitted with a thrust vectoring control (TVC) nozzle, which the Eurofighter and Eurojet consortium have been actively developing and testing, primarily for export but also for future upgrades of the fleet. TVC could reduce fuel burn on a typical Typhoon mission by up to 5%, as well as increase available thrust in supercruise by up to 7% and take-off thrust by 2%. Clemens Linden, Eurojet TURBO GmbH CEO, speaking at the 2018 Farnborough International Air Show, said "15 per cent more thrust would allow pilots to operate with a heavily loaded aircraft in the battlespace with the same performance levels as they have today. The technology insertion also provides more persistence – giving aircraft longer range or longer loitering time. To achieve more thrust we would increase the airflow and pressure ratios of the high and low pressure compressors and run higher temperatures in the turbines by using the latest generation single crystal turbine blade materials. And with higher aerodynamic efficiencies we can achieve a lower fuel burn. A third area of improvement would be the engine exhaust nozzle which would be upgraded with the installation of a 2-parametric version allowing independent and optimized adjustment of the throat and exit area at all flight conditions, providing fuel burn advantages. The technologies for the different components are at a Technology readiness level of between 7 and 9. The nozzle has been at ITP in Spain on a test bed for 400 hours."
=== Performance ===
The Typhoon's combat performance, compared to the F-22 Raptor and F-35 Lightning II fighters and the French Dassault Rafale, has been the subject of much discussion. In March 2005, United States Air Force Chief of Staff General John P. Jumper, then the only person to have flown both the Eurofighter Typhoon and the Raptor, said:
The Eurofighter is both agile and sophisticated, but is still difficult to compare to the F/A-22 Raptor. They are different kinds of airplanes to start with; it's like asking us to compare a NASCAR car with a Formula One car. They are both exciting in different ways, but they are designed for different levels of performance. ... The Eurofighter is certainly, as far as smoothness of controls and the ability to pull (and sustain high G forces), very impressive. That is what it was designed to do, especially the version I flew, with the avionics, the color moving map displays, etc. — all absolutely top notch. The maneuverability of the airplane in close-in combat was also very impressive. The F/A-22 performs in much the same way as the Eurofighter. But it has additional capabilities that allow it to perform the [U.S.] Air Force's unique missions. ... The F/A-22 Raptor has stealth and supercruise. It has the ability to penetrate virtually undetected.
In the 2005 Singapore evaluation, the Typhoon won all three combat tests, including one in which a single Typhoon defeated three RSAF F-16s, and reliably completed all planned flight tests. In July 2009, the former Chief of the Air Staff, Air Chief Marshal Sir Glenn Torpy, said that "The Eurofighter Typhoon is an excellent aircraft. It will be the backbone of the Royal Air Force along with the JSF."
In July 2007, Indian Air Force Su-30MKI fighters participated in the Indra-Dhanush exercise with the RAF's Typhoon. This was the first time the two fighters had taken part in such an exercise. The IAF did not allow their pilots to use the MKI's radar during the exercise to protect the highly classified Russian N011M Bars. The IAF pilots were impressed by the Typhoon's agility. In 2015, Indian Air Force Su-30MKIs again participated in an Indra-Dhanush exercise with RAF Typhoons.
=== Armament ===
==== Air to ground ====
The Typhoon is a multi-role fighter with maturing air-to-ground capabilities. The initial absence of air-to-ground capability is believed to have been a factor in the type's rejection from Singapore's fighter competition in 2005. At the time it was claimed that Singapore was concerned about the delivery timescale and the ability of the Eurofighter partner nations to fund the required capability packages. Tranche 1 aircraft could drop laser-guided bombs in conjunction with third-party designators but the anticipated deployment of Typhoon to Afghanistan meant that the UK required self-contained bombing capabilities before the other partners. In 2006 the UK embarked on the £73m Change Proposal 193 (CP193) to give an "austere" air-to-surface capability using GBU-16 Paveway II and Rafael/Ultra Electronics Litening III laser designator for Tranche 1 Block 5 aircraft. Aircraft with this upgrade were designated Typhoon FGR4 by the RAF.
Similar capability was added to Tranche 2 aircraft on the main development pathway as part of the Phase 1 Enhancements. P1Ea (SRP10) entered service in 2013 Q1 and added the use of Paveway IV, EGBU16 and the cannon against surface targets. P1Eb (SRP12) added full integration of precision-guided bombs such as the GBU-10 Paveway II, GBU-16 Paveway II and Paveway IV, and a new real-time operating system that allows multiple targets to be attacked in a single run. This new system will form the basis for future weapons integration by individual countries under the Phase 2 Enhancements. The Storm Shadow and KEPD 350 (Taurus) cruise missiles, together with the Meteor beyond-visual-range air-to-air missile flight trials, had been successfully completed by January 2016. The Storm Shadow and Meteor firings are part of the Phase 2 Enhancement (P2E) programme, which introduced a range of new and improved long-range attack capabilities to the Typhoon. In addition to Meteor and Storm Shadow, the first live firing of MBDA's Brimstone air-to-surface missile, part of the Phase 3 Enhancements (P3E) programme, was successfully completed in July 2017.
German aircraft can carry four GBU-48 1000 lb bombs.
An anti-ship capability has been studied but has not yet been contracted. Weapon options for this role could include Boeing Harpoon, MBDA Marte, "Sea Brimstone", and RBS-15.
==== Air to air ====
The Typhoon also carries a specially developed variant of the Mauser BK-27 27 mm cannon, originally developed for the Panavia Tornado. This is a single-barrel, electrically fired, gas-operated revolver cannon with a new linkless feed system; it is located in the starboard wing root and is capable of firing up to 1,700 rounds per minute. In 1999 it was proposed, on cost grounds, to fit the gun only to the first 53 batch-1 UK aircraft and not to use it operationally, but this decision was reversed in 2006. The aircraft carries 150 rounds.
In his 2022 book Typhoon, former RAF pilot Mike Sutton reported that his 27 mm cannon had jammed during a strafing run against ISIS targets in Syria, while supporting Allied ground units. According to his book, the Typhoon was originally intended to be built without an internal gun, like the F-4 Phantom and the Harrier jump jet. The decision to install an internal gun had led to "manufacturing issues". Sutton claimed that, during his strafing run, the gun jammed after 26 rounds, with the HUD showing a "GUN FAIL" warning legend. During the debrief it transpired that the problem was well known to both pilots and ground crews.
In addition to its air-to-ground armament, the Typhoon can carry a mixture of air-to-air weaponry to fulfil its role as an air superiority fighter. This includes the ASRAAM, IRIS-T and AIM-9 Sidewinder heat-seeking missiles, and the AIM-120 AMRAAM and MBDA Meteor beyond-visual-range radar-guided missiles. Under Tranche 2 Block 15 EOC (Enhanced Operational Capability) 2, the Meteor was integrated into the Typhoon's arsenal. A similar capability was achieved in the RAF under Project Centurion, with 107 Tranche 2 and 3 Typhoons modified to use the Meteor along with the Brimstone and Storm Shadow air-to-ground missiles.
== Operational history ==
=== Austrian Air Force (Luftstreitkräfte) ===
In 2002, Austria selected the Typhoon as its new air defence aircraft, it having beaten the F-16 and the Saab Gripen in competition. The purchase of 18 Typhoons was agreed on 1 July 2003; however, this was reduced to 15 in June 2007. The first aircraft (7L-WA) was delivered on 12 July 2007 to Zeltweg Air Base and formally entered service with the Austrian Air Force. A 2008 report by the Austrian Court of Audit calculated that instead of getting 18 Tranche 2 jets at a price of €109 million each, as stipulated by the original contract, the revised deal, agreed to by Minister Norbert Darabos, meant that Austria was paying an increased unit price of €114 million for 15 partially used Tranche 1 jets. In July 2008, the Luftstreitkräfte assigned the Eurofighter to quick reaction alert (QRA) duties; by the end of the year they had been scrambled 73 times.
Austrian prosecutors are investigating allegations that up to €100 million was made available to lobbyists to influence the original purchase decision in favour of the Eurofighter. By October 2013, all Typhoons in service with Austria had been upgraded to the latest Tranche 1 standard. In 2014, due to defence budget restrictions, there were only 12 pilots available to fly the 15 aircraft in Austria's Air Force. In February 2017, Austrian defence minister Hans Peter Doskozil accused Airbus of fraudulent intent following a probe that allegedly unveiled corruption linked to the order of Typhoon jets.
In July 2017, the Austrian Defence Ministry announced that it would be replacing all its Typhoon aircraft by 2020. The ministry said continued use of its Typhoons over their 30-year lifespan would cost about €5 billion, with the bulk going to maintenance. By comparison, it estimated that buying and operating a new fleet of 15 single-seat and three twin-seat fighters would save €2 billion over that period. Austria plans to explore a government-to-government sale or lease agreement to avoid a lengthy and costly tender process with a manufacturer. Possible replacements include the Gripen and the F-16.
On 20 July 2020, a letter written by Indonesia's defence minister, Prabowo Subianto, was published by Indonesian news outlets expressing interest in acquiring Austria's entire fleet of Typhoon jets.
=== German Air Force (Luftwaffe) ===
On 4 August 2003, the German Air Force accepted its first series production Eurofighter (30+03) starting the replacement process of the Mikoyan MiG-29s inherited from the East German Air Force. The first Luftwaffe Wing to accept the Eurofighter was Jagdgeschwader 73 "Steinhoff" on 30 April 2004 at Rostock–Laage Airport. The second Wing was Jagdgeschwader 74 (JG74) on 25 July 2006, with four Eurofighters arriving at Neuburg Air Base, beginning the replacement of JG74's McDonnell Douglas F-4F Phantom IIs.
The Luftwaffe assigned their Eurofighters to QRA on 3 June 2008, taking over from the F-4F Phantom II.
On 28 October 2014, while deployed to Ämari Air Base in Estonia as part of the NATO Baltic Air Policing mission, German Eurofighters scrambled and intercepted seven Russian Air Force aircraft over the Baltic Sea.
The Luftwaffe once again provided Baltic Air Policing at Ämari Air Base between 31 August 2020 and April 2021, having taken over from Dassault Mirage 2000-5Fs of the French Air and Space Force.
On 5 June 2024, the German chancellor announced plans to purchase another twenty Eurofighters.
German Eurofighters took part in Exercise Tarang Shakti held by the Indian Air Force from 6 August 2024.
=== Italian Air Force (Aeronautica Militare) ===
On 16 December 2005, the F-2000 Typhoon reached initial operational capability (IOC) with the Italian Air Force (Aeronautica Militare). Its F-2000 Typhoons were put into service as air defence fighters at Grosseto Air Base, where they were immediately assigned to QRA.
On 17 July 2009, Italian Air Force F-2000A Typhoons were deployed to protect Albania's airspace. On 29 March 2011, Italian Air Force Eurofighter Typhoons began flying combat air patrol missions in support of NATO's Operation Unified Protector in Libya.
Between January and August 2015, four Aeronautica Militare F-2000A Typhoons (from 36º and 37º Stormo) were deployed to Šiauliai Air Base in northern Lithuania as part of the Baltic Air Policing mission.
=== Kuwait Air Force ===
On 11 September 2015, Eurofighter confirmed that an agreement had been reached to supply Kuwait with 28 aircraft. On 1 March 2016, the Kuwaiti National Assembly approved the procurement of 22 single-seat and six twin-seat Typhoons. On 5 April 2016, Kuwait signed a contract with Leonardo valued at €7.957 billion ($9.062 billion) for the supply of the 28 aircraft, all to tranche 3 standard. The Kuwaiti aircraft will be the first Typhoons to receive the Captor-E AESA radar, with two instrumented production aircraft from the UK and Germany currently undergoing ground-based integration trials. The Typhoons will be fitted with Leonardo's Praetorian defensive aids suite and PIRATE infrared search and track system. The contract involves the production of aircraft in Italy and covers logistics, operational support and the training of flight crews and ground personnel. It also encompasses infrastructure work at the Ali Al Salem Air Base, where the Typhoons will be based. Aircraft deliveries will begin in 2020.
=== Qatar Emiri Air Force ===
From January 2011 the Qatar Emiri Air Force (QEAF) evaluated the Typhoon, alongside the Boeing F/A-18E/F Super Hornet, the McDonnell Douglas F-15E Strike Eagle, the Dassault Rafale, and the Lockheed Martin F-35 Lightning II, to replace its then inventory of Dassault Mirage 2000-5s. On 30 April 2015 Qatar announced that it would order 24 Rafales.
In December 2017 a deal for Qatar to buy 24 jets and a support and training package from BAE was announced, scheduled to begin in 2022. In September 2018, Qatar made the first payment for the procurement of 24 Eurofighter Typhoons and nine BAE Systems Hawk aircraft to BAE.
=== Royal Air Force (UK) ===
The UK's first Typhoon Development Aircraft (DA-2) ZH588 made its maiden flight on 6 April 1994 from Warton. On 1 September 2002, No. XVII (Reserve) Squadron was reformed at Warton as the Typhoon Operational Evaluation Unit (TOEU), receiving its first aircraft on 18 December 2003. The first RAF production aircraft to take to the air was ZJ800 (BT001) on 14 February 2003, completing a 21-minute flight. The next Typhoon squadron to be formed was No. 29 (R) Squadron which formed as the Typhoon Operational Conversion Unit (OCU). The first operational RAF Typhoon squadron to be formed was No. 3 (Fighter) Squadron on 31 March 2006, when it moved to RAF Coningsby.
No. 3 (F) Squadron Typhoon F2s took over QRA responsibilities from the Panavia Tornado F3 on 29 June 2007, initially alternating with the Tornado F3 every month. On 9 August 2007, the UK's MoD reported that No. XI (F) Squadron of the RAF, which stood up as a Typhoon squadron on 29 March 2007, had taken delivery of its first two multi-role Typhoons. Two of No. XI (F) Squadron's Typhoons were sent to intercept a Russian Tupolev Tu-95 approaching British airspace on 17 August 2007. The RAF Typhoons, projected to be ready to deploy for operations by mid-2008, were declared combat ready in the air-to-ground role on 1 July 2008.
In late 2009, four RAF Typhoons were deployed to RAF Mount Pleasant, replacing the Tornado F3s of No. 1435 Flight defending the Falkland Islands. No. 6 Squadron stood up at RAF Leuchars on 6 September 2010, making Leuchars the second RAF base to operate the Typhoon.
On 20 March 2011, ten Typhoons from RAF Coningsby and RAF Leuchars arrived at the Gioia del Colle airbase in southern Italy to enforce a no-fly zone in Libya alongside Panavia Tornado GR4s. On 21 March, RAF Typhoons flew their first-ever combat mission while patrolling the no-fly zone. On 29 March, it was revealed that the RAF was having to divert personnel from Typhoon training to meet the shortfall in pilots available to fly the required number of sorties over Libya. On 12 April 2011, a RAF Typhoon and a Tornado GR4 dropped precision-guided bombs on ground vehicles operated by Gaddafi forces. The RAF said that each aircraft dropped one GBU-16 Paveway II 454 kg (1,000 lb) laser-guided bomb which struck "very successfully and very accurately [and this] represented a significant milestone in the delivery of multi-role Typhoon." Target designation was provided by the Tornados with their Litening III targeting pods, owing to the lack of Typhoon pilots trained in air-to-ground missions.
The National Audit Office observed in 2011 that the distribution of the Eurofighter's parts supply and repairs over several countries had led to parts shortages, long timescales for repairs, and the cannibalisation of some aircraft to keep others flying. The UK's then Defence Secretary Liam Fox admitted on 14 April 2011 that Britain's Eurofighter Typhoon jets had been grounded in 2010 due to a shortage of spare parts. The RAF "cannibalised" aircraft for spare parts in a bid to keep the maximum number of Typhoons operational on any given day. The MoD warned that the problems were likely to continue until 2015.
On 15 September 2012, No. 1 (F) Squadron stood up at RAF Leuchars, joining No. 6 Squadron to become the second Typhoon unit operating in Scotland. On 22 April 2013, No. 41 (R) Test and Evaluation Squadron (TES) began operating the Typhoon from RAF Coningsby.
By July 2014, a dozen RAF Tranche 2 Typhoons had been upgraded with Phase 1 Enhancement (P1E) capability to enable them to use the Paveway IV guided bomb; the Tranche 1 version had used the GBU-12 Paveway II in combat over Libya, but the Paveway IV can be set to explode above or beneath a target and to hit at a set angle.
No. II (AC) Squadron became the fifth RAF Typhoon squadron on 12 January 2015 at RAF Lossiemouth. In July 2015, it was reported that Typhoons from No. II (AC) Squadron were training with Type 45 destroyers in an Air-Maritime Integration (AMI) role, a role the service conceded it had recently neglected following the retirement of the Nimrod maritime patrol aircraft. In the 2015 Strategic Defence and Security Review (SDSR), the UK decided to retain some of the Tranche 1 aircraft to increase the number of front-line squadrons from five to seven, to extend the out-of-service date from 2030 to 2040, and to implement the Captor-E AESA radar in later tranches. In 2015, Typhoons were deployed to Malta to provide security for the Commonwealth Heads of Government Meeting. On 3 December 2015, six Typhoon FGR4s deployed to RAF Akrotiri to support operations against ISIL. The following evening the Typhoons, accompanied by Tornados, attacked targets in Syria.
In October 2016, four Typhoon FGR4s from No. II (AC) Squadron, supported by an Airbus Voyager KC3 aerial tanker and a Boeing C-17 Globemaster III, deployed to Misawa Air Base in Japan for the first bilateral exercises with non-US forces hosted by the JASDF.
On 14 December 2017, it was announced that No. 12 (B) Squadron would stand up as a joint RAF/Qatari Air Force squadron, with Qatari crews temporarily operating Typhoons to prepare for their own Typhoon deliveries in 2022. On 29 January 2018, the RAF announced that 16 twin-seat Typhoons would undergo the Reduce to Produce (RTP) process in an effort to save £800 million, with each airframe yielding £50 million of spare parts. This move also reflected the switch from two-seat trainer to single-seat pilot training and greater use of training simulators. In addition, the two-seat airframes were primarily from Tranche 1 and could not be equipped with Tranche 3 and later upgrades such as Captor-E.
On 1 April 2019, No. IX (B) Squadron officially converted from the Tornado GR4 to the Typhoon FGR4, becoming an aggressor and air defence squadron at Lossiemouth. In April, four Typhoons of No. XI (F) Squadron deployed from RAF Coningsby to Ämari Airbase, Estonia, for a four-month NATO Baltic air policing mission (Operation Azotize). Five Typhoons of No. 6 Squadron participated in the Arctic Challenge Exercise (ACE) in Sweden from 22 May to 4 June. No. 12 Squadron was assigned its first Typhoon FGR4 in July 2019. The 160th, and last, Typhoon (ZK437) was delivered to the RAF on 27 September 2019. Between November and December 2019, No. 1 (F) Squadron deployed to Keflavik Airbase in Iceland as part of NATO's Icelandic Air Policing Mission. During this one-month deployment the aircraft conducted more than 180 practice intercepts and 59 training sorties.
Between April and September 2020, No. 6 Squadron deployed to Šiauliai Air Base, Lithuania, as part of Operation Azotize. While deployed the squadron participated in Exercise BALTOPs 2020. In July 2020, No. 12 Squadron began operating as a joint RAF-QEAF unit at RAF Coningsby.
On 22 March 2021, the 2021 Defence Command Paper announced the retirement of all Tranche 1 Typhoons by 2025, with the remaining fleet being upgraded. Also in 2021, the UK launched the P3Ec package, due for delivery in 2024, which includes several upgrades, among them the replacement of the multifunction displays with a Large Area Display (LAD). On 14 December 2021, the RAF executed its first operational air-to-air engagement with a Typhoon, shooting down a small hostile drone with an ASRAAM near the Al-Tanf coalition base in Syria.
On 7 September 2022 during the joint UK/US SinkEx 'Atlantic Thunder' a 41 Squadron Typhoon successfully hit the ex-USS Boone with Paveway IVs, becoming the first RAF Typhoon to strike a naval target with live ordnance.
Between 18 and 22 September 2023, Typhoons from 41 Squadron took part in the Finnish-led Exercise 'Baana 23'. During this exercise, the aircraft performed landings and takeoffs from a highway in Tervo, marking a first for any Eurofighter operator.
On 12 January 2024, at 2:30 am local time, four RAF Typhoons dropped Paveway IV bombs on two military facilities used by the Houthis to launch drone and missile strikes on ships in the Red Sea, as part of the January 2024 strikes in Yemen. On 13 April 2024, RAF Typhoons shot down an unspecified number of unmanned aerial vehicles during the 2024 Iranian strikes on Israel. The Typhoons, based in Cyprus and Romania, were operating in Iraqi and Syrian airspace as part of Operation Shader.
=== Royal Air Force of Oman ===
During the 2008 Farnborough Airshow it was announced that Oman was in an "advanced stage" of discussions to order Typhoons as a replacement for its SEPECAT Jaguar aircraft. On 21 December 2012, the Royal Air Force of Oman (RAFO) became the Typhoon's seventh customer when BAE and Oman announced an order for 12 Typhoons to enter service in 2017. The first of the Typhoons (plus Hawk Mk 166) ordered by Oman were "formally presented to the customer" on 15 May 2017. This included a flypast by a RAFO Typhoon.
=== Royal Saudi Air Force ===
In August 2006, Saudi Arabia confirmed it had agreed to purchase 72 Typhoons for the Royal Saudi Air Force (RSAF). In December 2006, it was reported in The Guardian that Saudi Arabia had threatened to buy Rafales because of a UK Serious Fraud Office (SFO) investigation into the Al Yamamah defence deals which commenced in the 1980s.
On 14 December 2006, Britain's attorney general, Lord Goldsmith, ordered the SFO to discontinue its investigation into BAE Systems' alleged bribery of senior Saudi officials in the Al-Yamamah contracts, citing "the need to safeguard national and international security". The Times raised the possibility that RAF production aircraft would be diverted as early Saudi Arabian deliveries, with the RAF forced to wait for its full complement of aircraft; this arrangement would mirror the earlier diversion of RAF Tornados to the RSAF. The Times also reported that such an arrangement would make the UK purchase of its Tranche 3 commitments more likely. On 17 September 2007, Saudi Arabia confirmed it had signed a £4.43 billion contract for 72 aircraft. The first 24 aircraft, previously destined for the RAF, would be built to Tranche 2 standard, with the first delivered in 2008. The remaining 48 aircraft were to be assembled in Saudi Arabia and delivered from 2011; however, following contract renegotiations in 2011, it was agreed that all 72 aircraft would be assembled by BAE Systems in the UK, with the last 24 built to Tranche 3 capability.
On 29 September 2008, the United States Department of State approved the Typhoon sale; approval was required because the Eurofighter's Multifunctional Information Distribution System (MIDS) incorporates technology governed by the International Traffic in Arms Regulations (ITAR).
On 22 October 2008, the first RSAF Typhoon made its maiden flight at Warton. Since 2010, BAE has been training Saudi Arabian personnel at Warton.
By 2011, 24 Tranche 2 Eurofighter Typhoons had been delivered to Saudi Arabia, consisting of 18 single-seat and six two-seat aircraft. After that, BAE and Riyadh entered into discussions over the configuration and price of the remaining aircraft in the 72-plane order. On 19 February 2014, BAE announced that the Saudis had agreed to a price increase. BAE announced that the last of the original 72 Typhoons had been delivered to Saudi Arabia in June 2017.
RSAF Typhoons have played a central role in the Saudi-led bombing campaign in Yemen. In February 2015, Saudi Typhoons attacked ISIS targets over Syria using Paveway IV bombs for the first time.
On 9 March 2018, a memorandum of intent for the additional 48 Typhoons was signed during Saudi Crown Prince Mohammed bin Salman's visit to the United Kingdom; however, the deal was not completed owing to German arms export restrictions imposed in November 2018 in response to the assassination of Jamal Khashoggi.
=== Spanish Air and Space Force ===
The first Spanish production Eurofighter Tifón to fly was CE.16-01 (ST001) on 17 February 2003, flying from Getafe Air Base. The Spanish Air and Space Force assigned their Typhoons to QRA responsibilities in July 2008.
On 7 August 2018, a Spanish Air and Space Force Typhoon on a training exercise near Otepää in Estonia accidentally fired an AMRAAM missile. There were no casualties, but a ten-day search for the missile's remains was unsuccessful; it is unknown whether the missile self-destructed in the air or landed unexploded, leaving a hazard to the public. The pilot was disciplined for negligence but received only the minimum penalty in light of undisclosed mitigating circumstances.
== Sales and marketing ==
=== Germany ===
Germany placed an order for an additional 38 Tranche 4 Typhoons on 11 November 2020 under the Quadriga Agreement. The aircraft are due to replace Tranche 1 aircraft currently in service, with the first airframe announced as being in production in November 2022. Deliveries are due to take place from 2025.
In March 2022, the German government announced the decision to purchase Typhoon EK over the Boeing EA-18G Growler to replace the ageing Tornado ECR variant from 2030. On 30 November 2023, the Budget Committee of the Bundestag formally announced the plans to convert 15 Typhoons to Electronic Warfare standard.
On 5 June 2024, it was announced that an additional 20 Typhoons would be ordered on top of the 38 already on order.
=== Italy ===
On 23 December 2024, an order worth €7.5 billion was placed for 24 aircraft.
=== Spain ===
The Spanish Air and Space Force has a requirement for a further 45 Typhoons split across two contracts.
Halcon I, signed in June 2022 for the purchase of 20 aircraft, will see deliveries begin in 2026. The contract covers 16 single-seat and four twin-seat airframes, all to Tranche 4 standard. These aircraft are expected to replace the EF-18 Hornets of Ala 46, based at Gando Air Base in the Canary Islands.
Halcon II followed on 12 September 2023 for the acquisition of a further 25 Typhoons. These aircraft will replace the rest of the EF-18 Hornet fleet, which is due to be decommissioned in 2030. The Spanish Government announced that these aircraft would be of Tranche 5 configuration.
=== Saudi Arabia ===
In October 2016, it was reported that BAE Systems was in talks with Saudi Arabia over an order for another 48 aircraft. On 9 March 2018, a memorandum of intent for the additional 48 Typhoons was signed during Saudi Crown Prince Mohammed bin Salman's visit to the United Kingdom.
In January 2024, the German government announced that it would no longer block the sale of 48 Typhoons to Saudi Arabia. As of February 2024, there has been no official confirmation that the sale will go ahead as other aircraft have been considered to strengthen the Royal Saudi Air Force's combat fleet.
=== Egypt ===
In January 2023, reports surfaced that Egypt would acquire 24 Typhoons as part of a wider $10–12 billion arms package from Italy.
=== Turkey ===
Turkey has also expressed interest, amid US hesitance over delivering the latest-block F-16s, and started negotiations with the UK. It was revealed in November 2023 that Turkey was in talks with the United Kingdom and Spain over procuring 40 Typhoons; any sale would require Germany's approval, which was not forthcoming. President Erdoğan visited Germany after the negotiations were revealed, but reportedly did not raise the issue with German Chancellor Olaf Scholz. Defence Minister Yaşar Güler underscored Turkey's continued interest in acquiring Typhoons, saying that they remained a compelling alternative despite the disagreements with Germany over the potential purchase. "If we can realize the issues we talked about with our friends, maybe we won't need it, but we do now. The Eurofighter is a very good alternative, and we want to buy it," Güler said in a televised interview with private broadcaster NTV on 11 December 2023. Turkey expected the United States to approve a proposed sale of new F-16 jets and modernization kits in return for Ankara finally green-lighting Sweden's admission into NATO.
In November 2024, Turkish Defence Minister Yaşar Güler said, "We will buy 40 Eurofighter Typhoon fighter jets", crediting Italy, Spain, and the UK for their support in persuading Germany, which had resisted the sale for years. In March 2025, the UK formally submitted a proposal from BAE Systems to Turkey for the purchase of 40 jets. In April 2025, German news sources close to the government claimed that the German government was blocking the export of the Typhoon to Turkey, expressing concerns over recent political developments, especially the arrest of opposition Istanbul Mayor Ekrem İmamoğlu. However, the Turkish Ministry of National Defense and the German government denied the allegations; the decision on the sale was reportedly left to the incoming German government, which viewed it positively.
=== Others ===
Other countries have expressed interest in the fighter, including Serbia, Bangladesh, Colombia, and Ukraine.
The following countries have formally eliminated the Typhoon from their fighter programs: Belgium, Denmark, Singapore, South Korea, Switzerland, and Finland.
== Variants ==
The Eurofighter is produced in single-seat and twin-seat variants. The twin-seat variant is not used operationally, but only for training, though it is combat capable. The aircraft has been manufactured in three major standards: seven Development Aircraft (DA), seven production-standard Instrumented Production Aircraft (IPA) for further system development, and a continuing number of Series Production Aircraft. The production aircraft are now operational with the partner nations' air forces.
The Tranche 1 aircraft were produced from 2000 onwards. Aircraft capabilities are being increased incrementally, with each software upgrade resulting in a different standard, known as blocks. With the introduction of the block 5 standard, the R2 retrofit programme began to bring all Tranche 1 aircraft to that standard.
== Operators ==
=== Summary ===
=== Current operators ===
Austria
Austrian Air Force – 15 delivered.
Zeltweg Air Base
Überwachungsgeschwader
Germany
German Air Force – 143 ordered and delivered; 138 in service as of 14 March 2025. A further 38 Tranche 4 aircraft are on order under Project Quadriga, and 15 aircraft are to be upgraded to Typhoon EW (Electronic Warfare) standard.
Nörvenich Air Base
Taktisches Luftwaffengeschwader 31 "Boelcke", 311 & 312 Staffel
Wittmundhafen Air Base
Taktisches Luftwaffengeschwader 71 "Richthofen", 711 Staffel
Laage Air Base
Taktisches Luftwaffengeschwader 73 "Steinhoff", 731 & 732 Staffel. (OCU formation)
Neuburg Air Base
Taktisches Luftwaffengeschwader 74, 741 & 742 Staffel
Ingolstadt Manching Airport
Wehrtechnische Dienststelle 61
Italy
Italian Air Force – 96 ordered with 96 delivered and 93 in operation as of August 2024. An additional 24 aircraft were ordered on 23 December 2024 for €7.5 billion.
Grosseto Air Base, 4º Stormo "Amedeo d'Aosta" (4th Wing)
9° Gruppo Caccia (9th Fighter Squadron)
20° Gruppo OCU Caccia (20th Fighter Operational Conversion Squadron)
Gioia del Colle Air Base, 36° Stormo "Riccardo Hellmuth Seidl" (36th Wing)
10° Gruppo Caccia (10th Fighter Squadron)
12° Gruppo Caccia (12th Fighter Squadron)
Trapani Air Base, 37° Stormo "Cesare Toschi" (37th Wing)
18° Gruppo Caccia (18th Fighter Squadron)
Istrana Air Base, 51° Stormo "Ferruccio Serafini" (51st Wing)
132° Gruppo Caccia (132nd Fighter Squadron)
Pratica di Mare Air Base, Reparto Sperimentale Volo
Kuwait
Kuwait Air Force – 28 ordered with 15 delivered as of 31 March 2024.
Ali Al Salem AB, Al Jahra District
7 Squadron
18 Squadron
Oman
Royal Air Force of Oman – 12 ordered in December 2012 with all delivered by June 2018.
RAFO Adam, Ad Dakhiliyah
No.8 Squadron
Qatar
Qatar Emiri Air Force – 24 ordered, 10 delivered as of March 2023.
Tamim Airbase, Dukhan
7 Squadron
12 Squadron
RAF Coningsby, Lincolnshire, United Kingdom (from July 2020)
No. 12 Squadron RAF, joint RAF/Qatar Emiri Air Force squadron
Saudi Arabia
Royal Saudi Air Force – 72 delivered, with 71 in operation as of June 2018.
King Fahad Air Base, Taif
No. 3 Squadron
No. 10 Squadron
No. 80 Squadron
Spain
Spanish Air and Space Force – 73 ordered, all delivered by October 2020, with 70 in operation at that date. A further 45 aircraft are on order as of 13 September 2023. On 20 December 2024, the Spanish government signed a contract with the Munich-based NATO Eurofighter and Tornado Management Agency (NETMA) for the acquisition of an additional 25 Eurofighter aircraft under the Halcon II programme.
Seville-Morón Air Base, Ala 11
111 Escuadrón
113 Escuadrón, OCU Tactical pilot training and evaluation
Albacete-Los Llanos Air Base, Ala 14
142 Escuadrón
Past Units
Armament and Experimentation Logistics Center
United Kingdom
Royal Air Force – 160 ordered, all of which had been delivered by September 2019. As of 21 August 2023, the RAF has 137 aircraft, with 102 in service.
RAF Coningsby, Lincolnshire, England
No. 3 (F) Squadron
No. XI (F) Squadron
No. 12 Squadron, joint RAF/Qatar Air Force squadron
No. 29 Squadron, OCU Tactical pilot training and evaluation
No. 41 Test and Evaluation Squadron
RAF Lossiemouth, Moray, Scotland
No. 1 (F) Squadron
No. II (AC) Squadron
No. 6 Squadron
No. IX (B) Squadron
RAF Mount Pleasant, East Falkland, Falkland Islands
No. 1435 Flight
Past Units
No. 17 (R) Test & Evaluation Squadron, Operational Evaluation Unit (Operated between 2003 and 2013)
== Accidents ==
On 21 November 2002, the Spanish twin-seat Typhoon prototype DA-6 crashed due to a double engine flameout, caused by surges in both engines at 45,000 ft. The two crew members escaped unhurt; the aircraft came down in a military test range near Toledo, some 110 kilometres (68 mi) from its base at Getafe Air Base.
On 23 April 2008, a RAF Typhoon FGR4 from 17 Squadron at RAF Coningsby (ZJ943) made a wheels-up landing at the US Navy's NAWS China Lake in the United States. The aircraft was severely damaged; however, the pilot did not sustain any significant injury. It is thought the pilot may have forgotten to deploy the undercarriage, or that for some reason he was not alerted to the undercarriage not having been deployed.
On 24 August 2010, a Spanish twin-seat Typhoon crashed at Spain's Morón Air Base moments after take-off for a routine training flight. It was being piloted by a RSAF pilot, who was killed, and a Spanish Air Force Major, who ejected safely. In September 2010, the German Air Force grounded its 55 aircraft and the RAF temporarily grounded all Typhoon training flights amid concerns that the pilot had fallen to his death after ejecting successfully. On 21 September, the RAF announced that the harness system had been sufficiently modified to enable routine flying from RAF Coningsby. The Austrian Air Force also said all its aircraft had been cleared for flight. On 24 August 2010, the ejection seat manufacturer Martin Baker commented: "... under certain conditions, the quick release fitting could be unlocked using the palm of the hand, rather than the thumb and fingers, and that this posed a risk of inadvertent release", adding that a modification had been rapidly developed and approved "to eliminate this risk" and was being fitted to all Typhoon seats.
On 9 June 2014, the Spanish Air Force announced that a Typhoon had crashed at Spain's Morón Air Base on landing after a routine training flight. The sole pilot, Captain Fernando Lluna Carrascosa of the Spanish Air Force, who had over 600 Eurofighter flying hours, died in the crash.
On 23 June 2014, a Typhoon of the German Air Force suffered a mid-air collision with a Learjet 35A, which crashed near Olsberg, Germany. The severely damaged Eurofighter made a safe landing at Nörvenich Air Base, while the Learjet crashed with the two onboard killed.
On 1 September 2017, a RAF Typhoon overran the runway on landing at Pardubice Airport, Czech Republic, after diverting for bad weather.
On 14 September 2017, a RSAF aircraft crashed on a combat mission in Yemen's Abhyan province, killing its pilot. According to the Saudi Government, the aircraft crashed due to technical reasons.
On 24 September 2017, an Italian Air Force aircraft crashed during an airshow in Terracina, Lazio, Italy. The pilot did not eject and died in the accident. The Italian Air Force said the jet completed a loop but then failed to get enough lift as it approached sea level and hit the water just a few hundred metres offshore.
On 12 October 2017, a Spanish Air Force Typhoon crashed near its base at Los Llanos Albacete, Spain, when returning from the military parade for the Spanish National Day. The pilot was killed.
On 24 June 2019, two German Air Force aircraft collided mid-air during an exercise in the region of Müritz in Mecklenburg-Vorpommern in northern Germany. Both aircraft were lost while the pilots ejected. The two planes were based at Laage, home to the "Steinhoff" Tactical Air Force Wing 73. Neither plane was carrying weapons. One of the pilots died.
On 14 December 2022, an Italian Air Force Typhoon of 37° Stormo crashed during the landing sequence into Trapani-Birgi Air Base in Sicily. The aircraft had been conducting a training mission with another Typhoon which landed safely. The pilot was killed during the crash.
On 24 July 2024, an Italian Air Force Typhoon crashed during a military training exercise in the Douglas Daly region of the Northern Territory, in outback Australia, during Exercise Pitch Black. The pilot ejected safely and was taken to Royal Darwin Hospital by helicopter.
== Aircraft on display ==
Germany
98+29 EF2000 Prototype DA-1 on display at the Deutsches Museum Flugwerft Schleissheim, Munich.
98+30 EF2000 Prototype DA-5 on display at the MHM Gatow, currently stored, Berlin-Gatow.
30+39 EF2000 GS0025 on display at the General-Steinhoff barracks, Berlin.
Italy
MMX602 EF2000 Prototype DA-3 on display at Leonardo Factory Museum, Caselle.
MMX603 EF2000 Prototype DA-7 on display at Italian Air Force Museum, Vigna di Valle.
United Kingdom
ZH588 EF2000 Prototype DA-2 on display at the Royal Air Force Museum London, Hendon, England.
ZH590 EF2000(T) Prototype DA-4 was on display at the Imperial War Museum Duxford, Cambridge, England, in Hangar 3: Air and Sea, and was due to be transferred to the Newark Air Museum in 2020. However, after the MOD decided to use it as an instructional airframe, it now resides at RAF Cosford.
== Specifications ==
Data from RAF Typhoon data, Air Forces Monthly, Superfighters, and Brassey's Modern Fighters

General characteristics
Crew: 1 or 2
Length: 15.96 m (52 ft 4 in)
Wingspan: 10.95 m (35 ft 11 in)
Height: 5.28 m (17 ft 4 in)
Wing area: 51.2 m2 (551 sq ft)
Empty weight: 11,000 kg (24,251 lb)
Gross weight: 16,000 kg (35,274 lb)
Max takeoff weight: 23,500 kg (51,809 lb)
Fuel capacity: 4,996 kg (11,010 lb) / 6,215 L (1,642 US gal; 1,367 imp gal) internal
Powerplant: 2 × Eurojet EJ200 afterburning turbofan engines, 60 kN (13,500 lbf) thrust each dry, 90 kN (20,200 lbf) with afterburner
Performance
Maximum speed: 2,500 km/h (1,600 mph, 1,300 kn) at 11 km altitude — or Mach 2.35
1,530 km/h (950 mph; 830 kn) at sea level — or Mach 1.25
Supercruise: Mach 1.5
Range: 2,900 km (1,800 mi, 1,600 nmi)
Combat range: 1,389 km (863 mi, 750 nmi) ground attack, hi-lo-hi
601 km (325 nmi; 373 mi) ground attack, lo-lo-lo
Ferry range: 3,790 km (2,350 mi, 2,050 nmi) with 3 × drop tanks
Endurance: 3 hours combat air patrol (air defence) at 185 km (100 nmi; 115 mi)
10 minutes air-defence loiter at 1,389 km (750 nmi; 863 mi)
Service ceiling: 16,764 m (55,000 ft)
Max flight altitude: 20 km (65,000 ft)
g limits: +9 / -3
Rate of climb: 315 m/s (62,000 ft/min)
Wing loading: 312 kg/m2 (64 lb/sq ft)
Thrust/weight: 1.15 (interceptor configuration)
Brakes-off to Take-off acceleration: <8 s
Brakes-off to supersonic acceleration: <30 s
Brakes-off to Mach 1.6 at 11,000 m (36,000 ft): <150 s
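The listed thrust/weight ratio and wing loading follow directly from the engine and weight figures above; a minimal sanity check (assuming the listed 16,000 kg gross weight, 51.2 m² wing area, 90 kN afterburning thrust per engine, and standard gravity) is:

```python
# Check the quoted thrust/weight (1.15) and wing loading (312 kg/m^2)
# against the engine and weight data listed above.

G = 9.80665                      # standard gravity, m/s^2

thrust_per_engine_n = 90e3       # EJ200 with afterburner, N (90 kN)
total_thrust_n = 2 * thrust_per_engine_n

gross_weight_kg = 16_000         # listed gross weight
wing_area_m2 = 51.2              # listed wing area

# Thrust/weight compares total thrust to the aircraft's weight force.
thrust_to_weight = total_thrust_n / (gross_weight_kg * G)

# Wing loading is simply mass over wing area.
wing_loading = gross_weight_kg / wing_area_m2

print(f"Thrust/weight: {thrust_to_weight:.2f}")    # -> 1.15, matching the listed figure
print(f"Wing loading: {wing_loading:.1f} kg/m^2")  # -> 312.5, listed as 312 kg/m^2
```

Both figures therefore refer to the 16,000 kg gross weight ("interceptor configuration"), not the maximum takeoff weight.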
Armament
Guns: 1 × 27 mm Mauser BK-27 revolver cannon with 150 rounds
Hardpoints: Total of 13 (8 × under-wing and 5 × under-fuselage pylon stations), holding in excess of 9,000 kg (19,800 lb) of payload. A typical multi-role configuration for a Tranche 2 P1E aircraft would be 4 × AMRAAM, 2 × ASRAAM/IRIS-T, 4 × EGBU-16/Paveway IV, 2 × 1,000-litre supersonic fuel tanks, and a targeting pod.
Missiles:
Air-to-air missiles:
AIM-120 AMRAAM
MBDA Meteor
IRIS-T
AIM-132 ASRAAM
AIM-9 Sidewinder
Air-to-surface missiles:
Storm Shadow/Scalp EG
Brimstone
AGM-88 HARM
Taurus KEPD 350
SPEAR 3 (in progress)
Anti-ship missiles:
Marte ER (up to 6 Marte ER anti-ship missiles at 6 hardpoints)
Joint Strike Missile (planned)
Bombs:
Paveway II/III/Enhanced Paveway series of laser-guided bombs (LGBs)
500-lb Paveway IV
Small Diameter Bomb (planned for P2E)
Joint Direct Attack Munition (JDAM), work started in 2018
HOPE/HOSBO (planned)
Spice 250
Others:
Up to 3 × drop tanks for ferry flight or extended range/loitering time
Conformal fuel tanks on Tranche 3 or later
Avionics
Euroradar CAPTOR:
Captor-M: Solid-state, mechanically scanned array radar
European Common Radar System (ECRS) Mk0: Active electronically scanned array (AESA) radar developed by the original 4 Eurofighter consortium members. Commonly referred to as the Captor-E. Fitted to Qatari and Kuwaiti Eurofighters.
ECRS Mk1: Upgraded Mk0, manufactured by Hensoldt and Airbus. To be fitted to existing German and Spanish Eurofighters.
ECRS Mk2: New (AESA) radar with additional electronic warfare capabilities. Manufactured in Edinburgh by Leonardo UK. To be fitted to existing Tranche 3 UK Eurofighters.
Passive Infra-Red Airborne Tracking Equipment (PIRATE)
Praetorian DASS
Damocles (targeting pod)
LITENING III laser targeting pod (LITENING 5 in RAF testing)
Sniper Advanced Targeting Pod
== See also ==
Timeline of the Eurofighter Typhoon
Fourth-generation jet fighter
Related development
British Aerospace EAP
Related lists
List of active United Kingdom military aircraft
List of aircraft of the Royal Air Force
List of military aircraft of Germany
List of active Italian military aircraft
List of megaprojects, Aerospace
== References ==
=== Notes ===
=== Citations ===
=== Bibliography ===
== External links ==
Official website
Extended reality

Extended reality (XR) is an umbrella term covering augmented reality (AR), mixed reality (MR), and virtual reality (VR), as well as interpolations between them and extrapolations beyond them, for example making sound waves, radio waves, and other otherwise invisible phenomena perceptible. The technology is intended to combine or mirror the physical world with a "digital twin" world able to interact with it, giving users an immersive experience in a virtual or augmented environment.
XR has grown rapidly beyond an academic discipline and now has real-world impact in fields such as medicine, architecture, education, and industry, with applications in entertainment, cinema, marketing, real estate, manufacturing, maintenance, and remote work. Extended reality can be used for workplace collaboration, training, education, therapeutic treatments, and data exploration and analysis.
Extended reality works by acquiring visual data that is either processed locally or transferred over a network, and relaying it to the human senses. By enabling real-time responses to virtual stimuli, these devices create customized experiences. Advances in 5G and edge computing – a type of computing done "at or near the source of data" – could increase data rates and user capacity and reduce latency, and will likely expand extended reality applications in the future.
Extended Reality can be applied not only to humans as a subject, but also to technology as a subject, where the subject (whether human or technology) can have its sensory capacity extended by placing it in a closed feedback loop. This form of Extended Intelligence is called veillametrics.
Around one-third of the global extended reality market is attributed to Europe.
In 2018 the BBC launched a research project to capture and document the barriers present in extended reality environments.
The International Institute of MetaNumismatics (INIMEN) studies the applications of extended reality technologies in numismatic research, with a dedicated department.
== See also ==
Computer-mediated reality – Ability to manipulate one's perception of reality through the use of a computer
Head-mounted display – Type of display device
Immersion (virtual reality) – Perception of being physically present in a non-physical world
Metaverse – Collective three-dimensional virtual shared space
On-set virtual production – Technology for television and film production
OpenXR – Standard for access to virtual reality and augmented reality platforms and devices
Reality–virtuality continuum – Concept in computer science
Smartglasses – Wearable computer glasses
Spatial computing – Computing paradigm emphasizing 3D spatial interaction with technology
Wearable computer – Small computing device worn on the body
WebXR – Experimental JavaScript API for augmented/virtual reality devices
== References ==
== Sources ==
Baya, Vinod, and Erik Sherman. "The road ahead for augmented reality". PwC.
Pereira, Fernando. "Deep Learning-Based Extended Reality: Making Humans and Machines Speak the Same Visual Language." In Proceedings of the 1st Workshop on Interactive eXtended Reality, 1–2. IXR ’22. New York, NY, USA: Association for Computing Machinery, 2022. https://doi.org/10.1145/3552483.3555366.
United States Government Accountability Office. Extended Reality Technologies. Science & Tech Spotlight. Washington, D.C: GAO, Science, Technology Assessment, and Analytics, 2022.
Boel, Carl, Kim Dekeyser, Fien Depaepe, Luis Quintero, Tom van Daele, and Brenda Wiederhold. Extended Reality: Opportunities, Success Stories and Challenges (Health, Education) : Executive Summary. Luxembourg: Publications Office, 2023. https://op.europa.eu/publication/manifestation_identifier/PUB_KK0722997ENN.
Sayler, Kelley M. "Military Applications of Extended Reality." IF 12010. Washington, D.C: Congressional Research Service, 2022. |
Externalities of cars | The externalities of automobiles, similar to other economic externalities, represent the measurable costs imposed on those who do not own the vehicle, in contrast to the costs borne by the vehicle owner. These externalities include factors such as air pollution, noise, traffic congestion, and road maintenance costs, which affect the broader community and environment. Additionally, these externalities contribute to social injustice, as disadvantaged communities often bear a disproportionate share of these negative impacts.
According to Harvard University, the main externalities of driving are local and global pollution, oil dependence, traffic congestion, and traffic collisions; according to a meta-study conducted by Delft University, these externalities are congestion and scarcity costs, accident costs, air pollution costs, noise costs, climate change costs, costs for nature and landscape, costs for water pollution, costs for soil pollution, and costs of energy dependency. An estimated 1,670,000 people die each year due to cars – about 1 in 34 of all deaths – many from air pollution rather than crashes.
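As a quick sanity check on the figures above (a back-of-the-envelope sketch, not a sourced calculation), the two quoted statistics together imply a global annual death toll of roughly 57 million:

```python
# Back-of-the-envelope check of the quoted statistics: ~1,670,000 deaths per
# year attributed to cars, stated to be about 1 in 34 of all deaths.
car_deaths_per_year = 1_670_000
deaths_ratio = 34  # "1 in 34 deaths"

implied_total_deaths = car_deaths_per_year * deaths_ratio
print(f"Implied total annual deaths: {implied_total_deaths:,}")  # 56,780,000
```

This is broadly consistent with the commonly cited figure of roughly 55–60 million deaths worldwide per year, which supports the internal consistency of the two quoted numbers.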
== Negative externalities ==
The negative externalities can be substantial, since the driver does not take into account, for example, the negative effects of air pollution on third parties when opting to drive. Legislators and regulators can internalize those external costs, for example through taxes on fuels or through limits on car usage such as parking meters or urban tolls. Nevertheless, drivers in some countries already pay some external costs through taxes: road taxes in the Netherlands, for instance, have a relatively high yearly value, which covers the maintenance of infrastructure. In the majority of western nations, however, the external costs of driving are not fully covered either by taxes or by any kind of limit on car usage.
=== Traffic congestion and scarcity ===
Increased reliance on the automobile leads to increased road congestion. While expansions in road capacity are often touted as relieving congestion, induced demand often means that any reductions in congestion are temporary.
=== Collisions ===
Cars are the leading cause of fatal collisions in many countries, and are the leading cause of death of youth and children. In 2010, car crashes in the United States resulted in 32,999 deaths and a projected $871 billion cost to society, around 6% of the United States' 2010 GDP.
Road traffic collisions cause social costs including material damage, administrative costs, medical costs, production losses, and immaterial costs. Immaterial costs include shortened lifetimes and suffering, such as pain or sorrow, arising from death or injury. Material costs are often covered by insurance, and market prices for them are available. This does not, however, hold for immaterial costs and proxy cost factors, because these costs are not sufficiently covered by private insurance systems.
=== Air pollution ===
Cars produce numerous harmful air pollutants in their exhaust, such as NOx, particulate matter, and (indirectly) ground-level ozone. Additionally, as car tires wear down, they shed the materials they are made of into the air as particulate pollution. These pollutants are known to cause various respiratory and other health problems, and cars are among the leading causes of smog in modern developed-world cities. The external costs of everyday car and truck use are of different kinds (including material costs such as damage to buildings and materials), but health costs are the most common: cars can cause cardiovascular and respiratory diseases. Such costs have to be paid by society as a whole.
Numerous studies are available on the methodology of estimating air pollution costs, as well as on applications of these methods.
=== Noise ===
Cars significantly contribute to noise pollution. While a common perception is that the engine is the main source of noise, tire noise becomes the dominant source above 20–30 miles per hour (30–50 km/h) for passenger vehicles. Although aerodynamic noise increases at highway speeds, it contributes less than tire noise except at very high speeds.
Persistent traffic noise above 40 dB(A) is known to disrupt sleep, and above 55 dB(A) it is known to increase the risk of cardiovascular disease. In Germany, 2.9% of myocardial infarction cases can be attributed to road traffic noise, with the 1.5% of the population exposed to more than 75 dB(A) accounting for 27.13% of that. In total, an estimated 800,023 disability-adjusted life years are lost in urban populations due to road traffic noise in the EU. In the United States, 13.2% of the population is potentially exposed to road noise above 45 dB(A), with 5.5% exposed to road noise above 55 dB(A).
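The German figures above combine a population attributable fraction with the share contributed by the most-exposed group. Taken at face value (an illustrative reading, not an additional source), they imply that the 1.5% of the population exposed to more than 75 dB(A) accounts for just under 0.8% of all myocardial infarction cases:

```python
# Illustrative combination of the German figures quoted above.
attributable_fraction = 0.029  # share of all MI cases attributed to road noise
top_band_share = 0.2713        # share of those cases from the >75 dB(A) group
top_band_population = 0.015    # share of population exposed to >75 dB(A)

# Share of ALL MI cases attributable to the most-exposed 1.5% of people:
mi_from_top_band = attributable_fraction * top_band_share
print(f"{mi_from_top_band:.4%} of all MI cases")  # 0.7868% of all MI cases
```

In other words, a small, heavily exposed group carries a disproportionate share of the total noise-attributable disease burden.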
=== Climate change ===
Climate change is significantly caused by human activity, particularly the production of greenhouse gases and their release into the atmosphere. About 16% of manmade carbon dioxide comes from road transport, mostly passenger vehicles. Gasoline cars with fewer than two passengers produce more carbon dioxide per passenger-kilometre than any other form of land transport.
A vehicle's changing speed is a factor in measuring greenhouse gas emissions. Traffic congestion is harmful to society: besides the increased risk of injuries, arising primarily on high-grade roads, and high noise levels, its main consequence is an increased level of greenhouse gas emissions. In addition, nitrogen oxides from cars have a minor indirect greenhouse effect.
=== Costs for nature and landscape ===
Roads and parking spaces, as well as the suburban sprawl caused by cars, require a significant amount of space. Typically, once-agricultural or uncultivated land is turned into ever wider motorways and ever larger parking lots to accommodate the automobile, but induced demand means any relief is temporary, and more and more surfaces are sealed in the process.
=== Costs for water pollution ===
Lubricants and fuels used by automobiles are harmful when they leak into the groundwater. Oil refineries and particularly the mining of unconventional oil like oil shales and oil sands can be extremely harmful for the surrounding water resources and bodies of water.
In addition, runoff from impervious surfaces like roads and parking lots can be contaminated with all sorts of pollutants.
=== Costs for soil pollution ===
In addition to the fertile topsoil often "buried" under freeways and parking spaces, cars directly or indirectly release pollutants into the soil. Oil may leak into the groundwater, and the common practice of washing cars in the front yard releases surfactants and other chemicals in cleaning products into the ground. Similarly, salt is often used to keep roads and highways free of snow and ice, and chlorides cause major damage to vegetation as well as being an aggressive substance linked to rust and corrosion.
=== Costs of energy dependency ===
While trains and tramways often run on electricity, which can be generated from renewable sources or locally available fuels, cars by and large run on petroleum-derived fuels. Only a handful of countries are net exporters of petroleum. For developed countries, this creates a political dependence on a reliable petroleum supply and has been cited as a reason for foreign policy decisions of the United States, among others. For developing countries, petroleum products can be among the chief imports, and reliance on automobiles can significantly worsen the trade deficit and public debt of such nations.
=== Obesity ===
Some research indicates a correlation between urban sprawl and obesity. Car centric development and lack of walkability lead to less use of active modes of transportation such as utility cycling and walking which is linked to various health issues caused by a lack of exercise.
== Solutions to negative externalities ==
=== Pigovian taxes ===
Pigovian taxes are one solution used for correcting negative externalities caused by automobiles. By increasing the cost of using automobiles, it is possible to reduce consumption to an economically optimal level while raising tax revenue. This could be achieved through fuel taxes and road taxes, which might be used for infrastructure investment and repair. However, covering all negative externalities through fuel taxes is politically difficult. In the case of carbon taxes, revenue could be used for investment in environmentally friendly initiatives. Fuel and carbon taxes have been criticized as regressive taxes that affect low-income individuals more than high earners. As a result, the Canadian government has used a portion of carbon tax revenue to rebate lower-income households.
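The logic of a Pigovian tax can be sketched with a toy linear-demand model (all numbers here are hypothetical): when each unit of fuel imposes a constant external cost on third parties, a tax equal to that marginal external cost moves consumption from the market level to the socially optimal level while raising revenue.

```python
# Toy Pigovian tax model with linear demand and constant marginal costs.
a, b = 10.0, 0.5   # inverse demand: price = a - b * quantity (assumed)
c = 2.0            # private marginal cost per unit of fuel (assumed)
e = 1.5            # marginal external cost per unit, borne by third parties (assumed)

q_market = (a - c) / b       # consumers ignore the external cost
q_optimal = (a - c - e) / b  # quantity when the full social cost is priced in
tax = e                      # Pigovian tax = marginal external cost
revenue = tax * q_optimal    # revenue raised at the corrected quantity

print(q_market, q_optimal, revenue)  # 16.0 13.0 19.5
```

Setting the tax exactly equal to the marginal external cost makes drivers face the full social cost of each unit, so the taxed market equilibrium coincides with the social optimum.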
=== Congestion pricing ===
Major cities such as London and Stockholm have introduced congestion pricing to reduce traffic and pollution in their city centres. This is implemented as a toll on automobiles entering the city centre during peak hours. The toll aims to correct the negative externalities and change consumer behaviour by making consumers more aware of the costs their consumption imposes. Congestion pricing is an efficient way of reducing traffic externalities, as monitoring technology allows prices to adapt to changes in traffic levels. The toll reduces congestion, encourages the use of public transit, and raises revenue.
=== Subsidizing alternatives ===
Many governments have begun subsidizing electric vehicles, with the intention of internalizing the positive externality they provide relative to petrol vehicles. This has been implemented through tax credits, purchase rebates, and tax exemptions. These subsidies reduce the cost of zero-emission vehicles and as a result increase demand. By incentivizing consumers to substitute petrol vehicles with electric cars, the negative externalities associated with emissions are reduced. There has been backlash against the equity of these subsidies, with critics stating that they favour the wealthy; it has been suggested that subsidizing e-bikes and car charging stations would be fairer.
=== Regulation ===
The use of emission standards on automobiles reduces the amount of pollutants emitted by new automobiles, thus reducing negative environmental externalities. This is an important piece in regulating automobile externalities, as emission levels per litre of gasoline consumed are not reduced by fuel taxes. The European Union set a target of 95 g of CO2 per kilometre by 2021. Emission limits are based on the mass of automobiles, with heavier vehicles having higher limits. Manufacturers who miss the target are charged escalating fees for each gram of excess emissions. This policy serves to regulate pollution while accounting for unmeasured costs placed by automobiles on the environment.
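How such a fleet-average standard bites financially can be illustrated with a toy calculation (the fee, fleet average, and fleet size below are hypothetical, not official EU figures): the manufacturer pays a per-gram fee on the amount by which its fleet average exceeds the target, multiplied by the number of vehicles registered.

```python
# Illustrative fleet-average emission penalty (all inputs hypothetical).
target_g_per_km = 95.0         # fleet-average CO2 target
fleet_average = 98.5           # assumed actual fleet average, g CO2/km
fee_per_gram = 95.0            # assumed fee per g/km of excess, per vehicle
vehicles_registered = 100_000  # assumed vehicles registered in the year

excess = max(0.0, fleet_average - target_g_per_km)  # g/km over the target
penalty = excess * fee_per_gram * vehicles_registered
print(f"Penalty: EUR {penalty:,.0f}")  # Penalty: EUR 33,250,000
```

Because the fee scales with both the excess and the fleet size, even a few grams over the target becomes expensive for a high-volume manufacturer, which is the incentive the regulation relies on.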
== Positive externalities ==
While the existence of negative externalities is broadly agreed upon, the existence of positive externalities of the automobile does not have consensus among economists and experts in the transportation sector. The creation of jobs, or the fact that related industries pay taxes, cannot as such be considered positive externalities, because any legal economic activity pays taxes and the great majority also generate employment. Time saved by the driver, and therefore eventually greater personal production, cannot be considered a positive externality either, because the driver has already taken those factors into account when opting to use the car; many authors therefore do not regard these factors as a pure externality.
=== Accessibility and land value ===
Notwithstanding the above objections, some authors enumerate positive externalities for the automobile like accessibility and land value. Where land is expensive, it is developed more intensively. Where it is more intensively developed, there are more activities and destinations that can be reached in a given time. Where there are more activities, accessibility is higher and where accessibility is higher, land is more expensive.
=== City growth ===
Economists have sought to understand why cities grow and why large cities seem to be at an advantage relative to others. One explanation that has received much attention emphasizes the role of agglomeration economies in facilitating and sustaining city growth. The clustering of firms and workers in cities generates positive externalities by allowing for labor market pooling, input sharing, and knowledge spillovers.
Nevertheless, some other economists mention urban decay and urban sprawl as a negative effect or cost of the automobile, when the city grows due to automobile dependency.
Most large cities currently require most of their food to be trucked in by motor vehicle. Historic Paris is a counterexample, having used up to one-sixth of its land area for growing food.
Furthermore, most large cities extensively rely on urban rail of some form and it is often argued that their functioning would be severely diminished without the existence of said urban rail system.
== See also ==
== References == |
F-16 | The General Dynamics F-16 Fighting Falcon is an American single-engine supersonic multirole fighter aircraft originally developed by General Dynamics for the United States Air Force (USAF). Designed as an air superiority day fighter, it evolved into a successful all-weather multirole aircraft with over 4,600 built since 1976. Although no longer purchased by the U.S. Air Force, improved versions are being built for export. In 1993, General Dynamics sold its aircraft manufacturing business to the Lockheed Corporation, which became part of Lockheed Martin after a 1995 merger with Martin Marietta.
The F-16's key features include a frameless bubble canopy for enhanced cockpit visibility, a side-mounted control stick to ease control while maneuvering, an ejection seat reclined 30 degrees from vertical to reduce the effect of g-forces on the pilot, and the first use of a relaxed static stability/fly-by-wire flight control system that helps to make it an agile aircraft. The fighter has a single turbofan engine, an internal M61 Vulcan cannon and 11 hardpoints. Although officially named "Fighting Falcon", the aircraft is commonly known by the nickname "Viper" among its crews and pilots.
In addition to active duty in the U.S. Air Force, Air Force Reserve Command, and Air National Guard units, the aircraft is also used by the U.S. Air Force Thunderbirds aerial demonstration team, the US Air Combat Command F-16 Viper Demonstration Team, and as an adversary/aggressor aircraft by the United States Navy. The F-16 has also been procured by the air forces of 25 other nations. As of 2025, it is the world's most common fixed-wing aircraft in military service, with 2,084 F-16s operational.
== Development ==
=== Lightweight Fighter program ===
US Vietnam War experience showed the need for air superiority fighters and better air-to-air training for fighter pilots. Based on his experience in the Korean War and as a fighter tactics instructor in the early 1960s, Colonel John Boyd with mathematician Thomas Christie developed the energy–maneuverability theory to model a fighter aircraft's performance in combat. Boyd's work called for a small, lightweight aircraft that could maneuver with the minimum possible energy loss and which also incorporated an increased thrust-to-weight ratio. In the late 1960s, Boyd gathered a group of like-minded innovators who became known as the Fighter Mafia, and in 1969, they secured Department of Defense funding for General Dynamics and Northrop to study design concepts based on the theory.
Air Force F-X proponents were opposed to the concept because they perceived it as a threat to the F-15 program, but the USAF's leadership understood that its budget would not allow it to purchase enough F-15 aircraft to satisfy all of its missions. The Advanced Day Fighter concept, renamed F-XX, gained civilian political support under the reform-minded Deputy Secretary of Defense David Packard, who favored the idea of competitive prototyping. As a result, in May 1971, the Air Force Prototype Study Group was established, with Boyd a key member, and two of its six proposals would be funded, one being the Lightweight Fighter (LWF). The request for proposals issued on 6 January 1972 called for a 20,000-pound (9,100 kg) class air-to-air day fighter with a good turn rate, acceleration, and range, and optimized for combat at speeds of Mach 0.6–1.6 and altitudes of 30,000–40,000 feet (9,100–12,000 m). This was the region where USAF studies predicted most future air combat would occur. The anticipated average flyaway cost of a production version was $3 million. This production plan was hypothetical as the USAF had no firm plans to procure the winner.
==== Selection of finalists and flyoff ====
Five companies responded, and in 1972, the Air Staff selected General Dynamics' Model 401 and Northrop's P-600 for the follow-on prototype development and testing phase. GD and Northrop were awarded contracts worth $37.9 million and $39.8 million to produce the YF-16 and YF-17, respectively, with the first flights of both prototypes planned for early 1974. To overcome resistance in the Air Force hierarchy, the Fighter Mafia and other LWF proponents successfully advocated the idea of complementary fighters in a high-cost/low-cost force mix. The "high/low mix" would allow the USAF to be able to afford sufficient fighters for its overall fighter force structure requirements. The mix gained broad acceptance by the time of the prototypes' flyoff, defining the relationship between the LWF and the F-15.
The YF-16 was developed by a team of General Dynamics engineers led by Robert H. Widmer. The first YF-16 was rolled out on 13 December 1973. Its 90-minute maiden flight was made at the Air Force Flight Test Center at Edwards AFB, California, on 2 February 1974. Its actual first flight occurred accidentally during a high-speed taxi test on 20 January 1974. While gathering speed, a roll-control oscillation caused a fin of the port-side wingtip-mounted missile and then the starboard stabilator to scrape the ground, and the aircraft then began to veer off the runway. The test pilot, Phil Oestricher, decided to lift off to avoid a potential crash, safely landing six minutes later. The slight damage was quickly repaired and the official first flight occurred on time. The YF-16's first supersonic flight was accomplished on 5 February 1974, and the second YF-16 prototype first flew on 9 May 1974. This was followed by the first flights of Northrop's YF-17 prototypes on 9 June and 21 August 1974, respectively. During the flyoff, the YF-16s completed 330 sorties for a total of 417 flight hours; the YF-17s flew 288 sorties, covering 345 hours.
=== Air Combat Fighter competition ===
Increased interest turned the LWF into a serious acquisition program. NATO allies Belgium, Denmark, the Netherlands, and Norway were seeking to replace their F-104G Starfighter fighter-bombers. In early 1974, they reached an agreement with the U.S. that if the USAF ordered the LWF winner, they would consider ordering it as well. The USAF also needed to replace its F-105 Thunderchief and F-4 Phantom II fighter-bombers. The U.S. Congress sought greater commonality in fighter procurements by the Air Force and Navy, and in August 1974 redirected Navy funds to a new Navy Air Combat Fighter program that would be a naval fighter-bomber variant of the LWF. The four NATO allies had formed the Multinational Fighter Program Group (MFPG) and pressed for a U.S. decision by December 1974; thus, the USAF accelerated testing.
To reflect this serious intent to procure a new fighter-bomber, the LWF program was rolled into a new Air Combat Fighter (ACF) competition in an announcement by U.S. Secretary of Defense James R. Schlesinger in April 1974. The ACF would not be a pure fighter, but multirole, and Schlesinger made it clear that any ACF order would be in addition to the F-15, which extinguished opposition to the LWF. ACF also raised the stakes for GD and Northrop because it brought in competitors intent on securing what was touted at the time as "the arms deal of the century". These were Dassault-Breguet's proposed Mirage F1M-53, the Anglo-French SEPECAT Jaguar, and the proposed Saab 37E "Eurofighter". Northrop offered the P-530 Cobra, which was similar to the YF-17. The Jaguar and Cobra were dropped by the MFPG early on, leaving two European and two U.S. candidates. On 11 September 1974, the U.S. Air Force confirmed plans to order the winning ACF design to equip five tactical fighter wings. Though computer modeling predicted a close contest, the YF-16 proved significantly quicker going from one maneuver to the next and was the unanimous choice of those pilots that flew both aircraft.
On 13 January 1975, Secretary of the Air Force John L. McLucas announced the YF-16 as the winner of the ACF competition. The chief reasons given by the secretary were the YF-16's lower operating costs, greater range, and maneuver performance that was "significantly better" than that of the YF-17, especially at supersonic speeds. Another advantage of the YF-16 – unlike the YF-17 – was its use of the Pratt & Whitney F100 turbofan engine, the same powerplant used by the F-15; such commonality would lower the cost of engines for both programs. Secretary McLucas announced that the USAF planned to order at least 650, possibly up to 1,400 production F-16s. In the Navy Air Combat Fighter competition, on 2 May 1975, the Navy selected the YF-17 as the basis for what would become the McDonnell Douglas F/A-18 Hornet.
=== Production ===
The U.S. Air Force initially ordered 15 full-scale development (FSD) aircraft (11 single-seat and four two-seat models) for its flight test program which was reduced to eight (six F-16A single-seaters and two F-16B two-seaters). The YF-16 design was altered for the production F-16. The fuselage was lengthened by 10.6 in (0.269 m), a larger nose radome was fitted for the AN/APG-66 radar, wing area was increased from 280 to 300 sq ft (26 to 28 m2), the tailfin height was decreased, the ventral fins were enlarged, two more stores stations were added, and a single door replaced the original nosewheel double doors. The F-16's weight was increased by 25% over the YF-16 by these modifications.
The FSD F-16s were manufactured by General Dynamics in Fort Worth, Texas, at United States Air Force Plant 4 in late 1975; the first F-16A rolled out on 20 October 1976 and first flew on 8 December. The initial two-seat model achieved its first flight on 8 August 1977. The initial production-standard F-16A flew for the first time on 7 August 1978 and its delivery was accepted by the USAF on 6 January 1979. The aircraft entered USAF operational service with the 34th Tactical Fighter Squadron, 388th Tactical Fighter Wing, at Hill AFB in Utah, on 1 October 1980.
The F-16 was given its name of "Fighting Falcon" on 21 July 1980. Its pilots and crews often use the name "Viper" instead, because of a perceived resemblance to a viper snake as well as to the fictional Colonial Viper starfighter from the television program Battlestar Galactica, which aired at the time the F-16 entered service.
On 7 June 1975, the four European partners, now known as the European Participation Group, signed up for 348 aircraft at the Paris Air Show. This was split among the European Participation Air Forces (EPAF) as 116 for Belgium, 58 for Denmark, 102 for the Netherlands, and 72 for Norway. Two European production lines, one in the Netherlands at Fokker's Schiphol-Oost facility and the other at SABCA's Gosselies plant in Belgium, would produce 184 and 164 units respectively. Norway's Kongsberg Vaapenfabrikk and Denmark's Terma A/S also manufactured parts and subassemblies for EPAF aircraft. European co-production was officially launched on 1 July 1977 at the Fokker factory. Beginning in November 1977, Fokker-produced components were sent to Fort Worth for fuselage assembly, then shipped back to Europe for final assembly of EPAF aircraft at the Belgian plant on 15 February 1978; deliveries to the Belgian Air Force began in January 1979. The first Royal Netherlands Air Force aircraft was delivered in June 1979. In 1980, the first aircraft were delivered to the Royal Norwegian Air Force by Fokker and to the Royal Danish Air Force by SABCA.
During the late 1980s and 1990s, Turkish Aerospace Industries (TAI) produced 232 Block 30/40/50 F-16s on a production line in Ankara under license for the Turkish Air Force. TAI also produced 46 Block 40s for Egypt in the mid-1990s and 30 Block 50s from 2010 onwards. Korean Aerospace Industries opened a production line for the KF-16 program, producing 140 Block 52s from the mid-1990s to the mid-2000s. If India had selected the F-16IN for its Medium Multi-Role Combat Aircraft procurement, a sixth F-16 production line would have been built in India. In May 2013, Lockheed Martin stated there were enough orders to keep producing the F-16 until 2017.
=== Improvements and upgrades ===
One change made during production was augmented pitch control to avoid deep stall conditions at high angles of attack. The stall issue had been raised during development but had originally been discounted. Model tests of the YF-16 conducted by the Langley Research Center revealed a potential problem, but no other laboratory was able to duplicate it. YF-16 flight tests were not sufficient to expose the issue; later flight testing on the FSD aircraft demonstrated a real concern. In response, the area of each horizontal stabilizer was increased by 25% on the Block 15 aircraft in 1981 and later retrofitted to earlier aircraft. In addition, a manual override switch to disable the horizontal stabilizer flight limiter was prominently placed on the control console, allowing the pilot to regain control of the horizontal stabilizers (which the flight limiters otherwise lock in place) and recover. Besides reducing the risk of deep stalls, the larger horizontal tail also improved stability and permitted faster takeoff rotation.
In the 1980s, the Multinational Staged Improvement Program (MSIP) was conducted to evolve the F-16's capabilities, mitigate risks during technology development, and ensure the aircraft's worth. The program upgraded the F-16 in three stages. The MSIP process permitted the quick introduction of new capabilities, at lower costs and with reduced risks compared to traditional independent upgrade programs. In 2012, the USAF had allocated $2.8 billion (~$3.67 billion in 2023) to upgrade 350 F-16s while waiting for the F-35 to enter service. One key upgrade has been an auto-GCAS (Ground collision avoidance system) to reduce instances of controlled flight into terrain. Onboard power and cooling capacities limit the scope of upgrades, which often involve the addition of more power-hungry avionics.
Lockheed won many contracts to upgrade foreign operators' F-16s. BAE Systems also offers various F-16 upgrades, receiving orders from South Korea, Oman, Turkey, and the US Air National Guard; BAE lost the South Korean contract because of a price breach in November 2014. In 2012, the USAF assigned the total upgrade contract to Lockheed Martin. Upgrades include Raytheon's Center Display Unit, which replaces several analog flight instruments with a single digital display.
In 2013, sequestration budget cuts cast doubt on the USAF's ability to complete the Combat Avionics Programmed Extension Suite (CAPES), a part of secondary programs such as Taiwan's F-16 upgrade. Air Combat Command's General Mike Hostage stated that if he only had money for a service life extension program (SLEP) or CAPES, he would fund SLEP to keep the aircraft flying. Lockheed Martin responded to talk of CAPES cancellation with a fixed-price upgrade package for foreign users. CAPES was not included in the Pentagon's 2015 budget request. The USAF said that the upgrade package would still be offered to Taiwan's Republic of China Air Force, and Lockheed said that some common elements with the F-35 would keep the radar's unit costs down. In 2014, the USAF issued an RFI to SLEP 300 F-16 C/Ds.
=== Production relocation ===
To make more room for assembly of its newer F-35 Lightning II fighter aircraft, Lockheed Martin moved the F-16 production from Fort Worth, Texas to its plant in Greenville, South Carolina. Lockheed delivered the last F-16 from Fort Worth to the Iraqi Air Force on 14 November 2017, ending 40 years of F-16 production there. The company resumed production in 2019, though engineering and modernization work will remain in Fort Worth. A gap in orders made it possible to stop production during the move; after completing orders for the last Iraqi purchase, the company was negotiating an F-16 sale to Bahrain that would be produced in Greenville. This contract was signed in June 2018, and the first planes rolled off the Greenville line in 2023.
== Design ==
=== Overview ===
The F-16 is a single-engine, highly maneuverable, supersonic, multirole tactical fighter aircraft. It is much smaller and lighter than its predecessors but uses advanced aerodynamics and avionics, including the first use of a relaxed static stability/fly-by-wire (RSS/FBW) flight control system, to achieve enhanced maneuver performance. Highly agile, the F-16 was the first fighter aircraft purpose-built to pull 9-g maneuvers and can reach a maximum speed of over Mach 2. Innovations include a frameless bubble canopy for better visibility, a side-mounted control stick, and a reclined seat to reduce g-force effects on the pilot. It is armed with an internal 20 mm M61 Vulcan cannon in the left wing root and has multiple locations for mounting various missiles, bombs and pods. It has a thrust-to-weight ratio greater than one, giving it the power to climb and accelerate vertically.
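The thrust-to-weight claim can be checked with simple arithmetic. The sketch below uses the F110 engine's maximum thrust figure quoted later in this article; the combat weight is an illustrative public approximation, not a value from the text.

```python
# Illustrative thrust-to-weight sketch for a lightly loaded F-16.
# Both figures are assumptions for illustration, not authoritative data.
AFTERBURNING_THRUST_LBF = 28_984   # F110-class thrust in full afterburner
LOADED_WEIGHT_LB = 26_500          # assumed air-to-air combat weight, approx.

thrust_to_weight = AFTERBURNING_THRUST_LBF / LOADED_WEIGHT_LB
print(f"T/W ~ {thrust_to_weight:.2f}")  # a ratio above 1.0 permits vertical acceleration
assert thrust_to_weight > 1.0
```

At heavier multirole loadouts the ratio drops below one, which is why the "greater than one" figure applies to a lightly loaded aircraft.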
The F-16 was designed to be relatively inexpensive to build and simpler to maintain than earlier-generation fighters. The airframe is built with about 80% aviation-grade aluminum alloys, 8% steel, 3% composites, and 1.5% titanium. The leading-edge flaps, stabilators, and ventral fins make use of bonded aluminum honeycomb structures and graphite epoxy laminate skins. The number of lubrication points, fuel line connections, and replaceable modules is significantly lower than in preceding fighters; 80% of the access panels can be reached without work stands. The air intake was placed rearward of the nose but far enough forward to minimize air flow losses and reduce aerodynamic drag.
The LWF program called for a structural life of 4,000 flight hours and the ability to achieve 7.33 g with 80% internal fuel; GD's engineers nevertheless designed the F-16's airframe for an 8,000-hour life and 9-g maneuvers on full internal fuel. This proved advantageous when the aircraft's mission changed from solely air-to-air combat to multirole operations. Changes in operational use and additional systems have increased weight, necessitating multiple structural strengthening programs.
=== General configuration ===
The F-16 has a cropped-delta wing incorporating wing-fuselage blending and forebody vortex-control strakes; a fixed-geometry, underslung air intake (with splitter plate) to the single turbofan jet engine; a conventional tri-plane empennage arrangement with all-moving horizontal "stabilator" tailplanes; a pair of ventral fins beneath the fuselage aft of the wing's trailing edge; and a tricycle landing gear configuration with the aft-retracting, steerable nose gear deploying a short distance behind the inlet lip. There is a boom-style aerial refueling receptacle located behind the single-piece "bubble" canopy of the cockpit. Split-flap speedbrakes are located at the aft end of the wing-body fairing, and a tailhook is mounted underneath the fuselage. A fairing beneath the rudder often houses ECM equipment or a drag chute. Later F-16 models feature a long dorsal fairing along the fuselage's "spine", housing additional equipment or fuel.
Aerodynamic studies in the 1960s demonstrated that the "vortex lift" phenomenon could be harnessed by highly swept wing configurations to reach higher angles of attack, using leading edge vortex flow off a slender lifting surface. As the F-16 was being optimized for high combat agility, GD's designers chose a slender cropped-delta wing with a leading-edge sweep of 40° and a straight trailing edge. To improve maneuverability, a variable-camber wing with a NACA 64A-204 airfoil was selected; the camber is adjusted by leading-edge and trailing edge flaperons linked to a digital flight control system regulating the flight envelope. The F-16 has a moderate wing loading, reduced by fuselage lift. The vortex lift effect is increased by leading-edge extensions, known as strakes. Strakes act as additional short-span, triangular wings running from the wing root (the junction with the fuselage) to a point further forward on the fuselage. Blended into the fuselage and along the wing root, the strake generates a high-speed vortex that remains attached to the top of the wing as the angle of attack increases, generating additional lift and allowing greater angles of attack without stalling. Strakes allow a smaller, lower-aspect-ratio wing, which increases roll rates and directional stability while decreasing weight. Deeper wing roots also increase structural strength and internal fuel volume.
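As a rough illustration of the "moderate wing loading, reduced by fuselage lift" point above, the sketch below divides an assumed combat weight by an assumed reference wing area; the weight, area, and body-lift fraction are illustrative approximations, not values from this article.

```python
# Hypothetical wing-loading sketch; figures are assumptions for illustration.
WING_AREA_SQFT = 300.0        # assumed reference wing area
COMBAT_WEIGHT_LB = 26_500.0   # assumed loaded weight

wing_loading = COMBAT_WEIGHT_LB / WING_AREA_SQFT  # lb per sq ft
print(f"nominal wing loading ~ {wing_loading:.0f} lb/sq ft")

# Lift from the blended fuselage and strakes carries part of the weight,
# so the effective loading borne by the wing itself is lower.
BODY_LIFT_FRACTION = 0.20     # assumed, for illustration only
effective_loading = wing_loading * (1 - BODY_LIFT_FRACTION)
print(f"effective wing loading ~ {effective_loading:.0f} lb/sq ft")
```

The design point is qualitative: any lift generated by the body and strakes lowers the load the wing must carry, letting a smaller wing deliver the turn performance of a nominally larger one.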
=== Armament ===
Early F-16s could be armed with up to six AIM-9 Sidewinder heat-seeking short-range air-to-air missiles (AAM) by employing rail launchers on each wingtip, as well as radar-guided AIM-7 Sparrow medium-range AAMs in a weapons mix. More recent versions support the AIM-120 AMRAAM, and US aircraft often mount that missile on their wingtips to reduce wing flutter. The aircraft can carry various other AAMs, a wide variety of air-to-ground missiles, rockets or bombs; electronic countermeasures (ECM), navigation, targeting or weapons pods; and fuel tanks on 9 hardpoints – six under the wings, two on wingtips, and one under the fuselage. Two other locations under the fuselage are available for sensor or radar pods. The F-16 carries a 20 mm (0.79 in) M61A1 Vulcan cannon, which is mounted inside the fuselage to the left of the cockpit.
=== Relaxed stability and fly-by-wire ===
The F-16 is the first production fighter aircraft intentionally designed to be slightly aerodynamically unstable, also known as relaxed static stability (RSS), to both reduce drag and improve maneuverability. Most aircraft are designed with positive static stability, which induces the aircraft to return to a straight and level flight attitude if the pilot releases the controls. This reduces maneuverability, as the inherent stability has to be overcome, and increases a form of drag known as trim drag. An aircraft with relaxed stability instead relies on its flight control system to maintain controlled flight, trading inherent stability for increased lift and reduced drag, and thus greatly increased maneuverability. As the F-16 passes Mach 1, aerodynamic changes shift its behavior to positive static stability.
To counter the tendency to depart from controlled flight and avoid the need for constant trim inputs by the pilot, the F-16 has a quadruplex (four-channel) fly-by-wire (FBW) flight control system (FLCS). The flight control computer (FLCC) accepts pilot input from the stick and rudder controls and manipulates the control surfaces in such a way as to produce the desired result without inducing control loss. The FLCC conducts thousands of measurements per second on the aircraft's flight attitude to automatically counter deviations from the pilot-set flight path. The FLCC further incorporates limiters governing movement in the three main axes based on attitude, airspeed, and angle of attack (AOA)/g; these prevent control surfaces from inducing instability such as slips or skids, or a high AOA inducing a stall. The limiters also prevent maneuvers that would exert more than a 9-g load.
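The limiter behavior described above can be sketched as a simple command clamp. This is purely illustrative logic built from the 25° AOA and 9-g limits quoted in this article; it is not the actual FLCS control law.

```python
# Minimal sketch of flight-control command limiting, loosely modeled on
# the F-16's quoted limits (25 deg AOA, 9 g). Illustration only.
MAX_AOA_DEG = 25.0
MAX_G = 9.0

def limit_pitch_command(commanded_g: float, current_aoa_deg: float) -> float:
    """Clamp the pilot's pitch (g) command against the g and AOA limits."""
    g = min(commanded_g, MAX_G)           # never exceed the structural g limit
    if current_aoa_deg >= MAX_AOA_DEG:    # at the AOA limit, refuse further nose-up demand
        g = min(g, 1.0)                   # command roughly level flight instead
    return g

print(limit_pitch_command(12.0, 10.0))  # over-g demand clipped to 9.0
print(limit_pitch_command(6.0, 26.0))   # AOA limit active: demand clipped to 1.0
```

The real system blends these limits continuously with airspeed and attitude rather than switching abruptly, but the principle is the same: the computer, not the pilot, has the final say on control surface deflection.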
Flight testing revealed that "assaulting" multiple limiters at high AOA and low speed can result in an AOA far exceeding the 25° limit, colloquially referred to as "departing"; this causes a deep stall: a near-freefall at 50° to 60° AOA, either upright or inverted. At such a high AOA the aircraft's attitude is stable but its control surfaces are ineffective, and the pitch limiter, attempting to recover, locks the stabilators at an extreme pitch-up or pitch-down deflection. The limiter can be overridden so the pilot can "rock" the nose via pitch control to escape the stall.
Unlike the YF-17, which had hydromechanical controls serving as a backup to the FBW, General Dynamics took the innovative step of eliminating all mechanical linkages between the control stick and rudder pedals and the flight control surfaces: the F-16 relies entirely on its electrical systems to relay flight commands, leading to the early moniker of "the electric jet" and aphorisms among pilots such as "You don't fly an F-16; it flies you." The quadruplex design permits "graceful degradation" in flight control response, in that the loss of one channel renders the FLCS a "triplex" system. The FLCC began as an analog system on the A/B variants but was supplanted by a digital computer system beginning with the F-16C/D Block 40. The F-16's controls suffered from a sensitivity to static electricity or electrostatic discharge (ESD) and lightning; up to 70–80% of the C/D models' electronics were vulnerable to ESD.
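The "graceful degradation" of a quadruplex system can be illustrated with a toy majority-voting scheme: each channel computes the same surface command, outliers are voted out, and the system keeps flying on the surviving channels. This is a hypothetical sketch, not the FLCC's actual redundancy-management logic.

```python
# Toy redundancy-voting sketch for a quadruplex flight-control computer.
# Illustration only; the real FLCC's voting scheme is more sophisticated.
from statistics import median

def vote(channels: list[float], tolerance: float = 0.5) -> tuple[float, list[float]]:
    """Return the voted command and the channels still considered healthy."""
    mid = median(channels)
    healthy = [c for c in channels if abs(c - mid) <= tolerance]
    return median(healthy), healthy

# All four channels agree: normal quadruplex operation.
cmd, healthy = vote([2.0, 2.1, 1.9, 2.0])
print(cmd, len(healthy))   # consensus command from 4 healthy channels

# One channel fails hard: the system degrades to a "triplex" arrangement.
cmd, healthy = vote([2.0, 2.1, 1.9, 9.9])
print(cmd, len(healthy))   # same consensus command, now from 3 channels
```

The key property is that a single failed channel is outvoted rather than obeyed, so one fault does not corrupt the command sent to the actuators.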
=== Cockpit and ergonomics ===
A key feature of the F-16's cockpit is the exceptional field of view. The single-piece, bird-proof polycarbonate bubble canopy provides 360° all-round visibility, with a 40° look-down angle over the side of the aircraft, and 15° down over the nose (compared to the common 12–13° of preceding aircraft); the pilot's seat is elevated for this purpose. Additionally, the F-16's canopy omits the forward bow frame found on many fighters, which is an obstruction to a pilot's forward vision. The F-16's ACES II zero/zero ejection seat is reclined at an unusual tilt-back angle of 30°; most fighter seats are tilted back at 13–15°. The tilted seat can accommodate taller pilots and increases g-force tolerance; however, it has been associated with reports of neck aches, possibly caused by incorrect headrest usage. Subsequent U.S. fighters have adopted more modest tilt-back angles of 20°. Because of the seat angle and the canopy's thickness, the ejection seat lacks canopy-breakers for emergency egress; instead, the entire canopy is jettisoned before the seat's rocket fires.
The pilot flies primarily by means of an armrest-mounted side-stick controller (instead of a traditional center-mounted stick) and an engine throttle; conventional rudder pedals are also employed. To enhance the pilot's control of the aircraft during high-g combat maneuvers, various switches and function controls were centralized on hands-on throttle-and-stick (HOTAS) controls on both the stick and the throttle. Hand pressure on the side-stick controller is transmitted by electrical signals via the FBW system to adjust various flight control surfaces to maneuver the F-16. Originally, the side-stick controller was non-moving, but this proved uncomfortable and difficult for pilots to adjust to, sometimes resulting in a tendency to "over-rotate" during takeoffs, so the control stick was given a small amount of "play". Since the introduction of the F-16, HOTAS controls have become a standard feature on modern fighters.
The F-16 has a head-up display (HUD), which projects visual flight and combat information in front of the pilot without obstructing the view; being able to keep their head "out of the cockpit" improves the pilot's situational awareness. Further flight and systems information is displayed on multi-function displays (MFD). The left-hand MFD is the primary flight display (PFD), typically showing radar and moving maps; the right-hand MFD is the system display (SD), presenting information about the engine, landing gear, slat and flap settings, and fuel and weapons status. Initially, the F-16A/B had monochrome cathode-ray tube (CRT) displays; these were replaced by color liquid-crystal displays on the Block 50/52. The Mid-Life Update (MLU) introduced compatibility with night-vision goggles (NVG). The Boeing Joint Helmet Mounted Cueing System (JHMCS) is available from Block 40 onwards for targeting based on where the pilot's head faces, unrestricted by the HUD, using high-off-boresight missiles like the AIM-9X. The newer Scorpion Helmet Mounted Display is also available and would later replace the JHMCS in U.S. service.
In November 2024 it was announced that the US Air Force had awarded a $9 million contract to Danish defense company Terma A/S, to supply its 3-D audio system for the aircraft, with a program of upgrades over the following two years. The system will provide high-fidelity digital audio by spatially separating radio signals, aligning audio with threat directions, and integrating active noise reduction.
=== Fire-control radar ===
The F-16A/B was originally equipped with the Westinghouse AN/APG-66 fire-control radar. Its slotted planar array antenna was designed to be compact to fit into the F-16's relatively small nose. In uplook mode, the APG-66 uses a low pulse-repetition frequency (PRF) for medium- and high-altitude target detection in a low-clutter environment, and in look-down/shoot-down employs a medium PRF for heavy clutter environments. It has four operating frequencies within the X band, and provides four air-to-air and seven air-to-ground operating modes for combat, even at night or in bad weather. The Block 15's APG-66(V)2 model added more powerful signal processing, higher output power, improved reliability, and increased range in cluttered or jamming environments. The Mid-Life Update (MLU) program introduced a new model, APG-66(V)2A, which features higher speed and more memory.
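The trade-off between low and medium PRF described above follows from a basic pulse-radar relation: range can be measured unambiguously only out to the distance a pulse can travel and return before the next pulse is transmitted, R = c/(2·PRF). The PRF values below are illustrative round numbers, not actual APG-66 parameters.

```python
# Why a low pulse-repetition frequency suits long-range look-up search:
# echoes arriving after the next pulse fires become ambiguous in range.
# PRF values are illustrative assumptions, not APG-66 specifications.
C = 299_792_458.0  # speed of light, m/s

def max_unambiguous_range_km(prf_hz: float) -> float:
    """Maximum unambiguous range for a given pulse-repetition frequency."""
    return C / (2 * prf_hz) / 1000.0

for label, prf in [("low PRF", 1_000.0), ("medium PRF", 10_000.0)]:
    print(f"{label:>10} ({prf:>7.0f} Hz): {max_unambiguous_range_km(prf):7.1f} km")
```

A low PRF thus buys long unambiguous range for medium- and high-altitude search, while a medium PRF sacrifices some of that range in exchange for the Doppler performance needed to separate moving targets from ground clutter in look-down modes.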
The AN/APG-68, an evolution of the APG-66, was introduced with the F-16C/D Block 25. The APG-68 has greater range and resolution, as well as 25 operating modes, including ground-mapping, Doppler beam-sharpening, ground moving target indication, sea target, and track while scan (TWS) for up to 10 targets. The Block 40/42's APG-68(V)1 model added full compatibility with Lockheed Martin Low Altitude Navigation and Targeting Infrared for Night (LANTIRN) pods, and a high-PRF pulse-Doppler track mode to provide Interrupted Continuous Wave guidance for semi-active radar homing (SARH) missiles like the AIM-7 Sparrow. Block 50/52 F-16s initially used the more reliable APG-68(V)5 which has a programmable signal processor employing Very High Speed Integrated Circuit (VHSIC) technology. The Advanced Block 50/52 (or 50+/52+) is equipped with the APG-68(V)9 radar, with a 30% greater air-to-air detection range and a synthetic aperture radar (SAR) mode for high-resolution mapping and target detection-recognition. In August 2004, Northrop Grumman was contracted to upgrade the APG-68 radars of Block 40/42/50/52 aircraft to the (V)10 standard, providing all-weather autonomous detection and targeting for Global Positioning System (GPS)-aided precision weapons, SAR mapping, and terrain-following radar (TF) modes, as well as interleaving of all modes.
The F-16E/F is outfitted with Northrop Grumman's AN/APG-80 active electronically scanned array (AESA) radar. Northrop Grumman developed the latest AESA radar upgrade for the F-16 (selected for USAF and Taiwan's Republic of China Air Force F-16 upgrades), named the AN/APG-83 Scalable Agile Beam Radar (SABR). In July 2007, Raytheon announced that it was developing a Next Generation Radar (RANGR) based on its earlier AN/APG-79 AESA radar as a competitor to Northrop Grumman's AN/APG-68 and AN/APG-80 for the F-16. On 28 February 2020, Northrop Grumman received an order from USAF to extend the service lives of their F-16s to at least 2048 with AN/APG-83 as part of the service-life extension program (SLEP).
=== Propulsion ===
The initial powerplant selected for the single-engined F-16 was the Pratt & Whitney F100-PW-200 afterburning turbofan, a modified version of the F-15's F100-PW-100, rated at 23,830 lbf (106.0 kN) thrust. During testing, the engine was found to be prone to compressor stalls and "rollbacks", wherein the engine's thrust would spontaneously reduce to idle. Until resolved, the Air Force ordered F-16s to be operated within "dead-stick landing" distance of its bases. It was the standard F-16 engine through the Block 25, except for the newly built Block 15s with the Operational Capability Upgrade (OCU). The OCU introduced the 23,770 lbf (105.7 kN) F100-PW-220, later installed on Block 32 and 42 aircraft: the main advance being a Digital Electronic Engine Control (DEEC) unit, which improved reliability and reduced stall occurrence. Beginning production in 1988, the "-220" also supplanted the F-15's "-100", for commonality. Many of the "-220" engines on Block 25 and later aircraft were upgraded from 1997 onwards to the "-220E" standard, which enhanced reliability and maintainability; unscheduled engine removals were reduced by 35%.
The F100-PW-220/220E was the result of the USAF's Alternate Fighter Engine (AFE) program (colloquially known as "the Great Engine War"), which also saw the entry of General Electric as an F-16 engine provider. Its F110-GE-100 turbofan was limited by the original inlet to a thrust of 25,735 lbf (114.47 kN); the later Modular Common Inlet Duct allowed the F110 to achieve its maximum thrust of 28,984 lbf (128.93 kN). (To distinguish between aircraft equipped with these two engines and inlets, from the Block 30 series on, blocks ending in "0" (e.g., Block 30) are powered by GE, and blocks ending in "2" (e.g., Block 32) are fitted with Pratt & Whitney engines.)
The Increased Performance Engine (IPE) program led to the 29,588 lbf (131.61 kN) F110-GE-129 on the Block 50 and 29,160 lbf (129.7 kN) F100-PW-229 on the Block 52. F-16s began flying with these IPE engines in the early 1990s. Altogether, of the 1,446 F-16C/Ds ordered by the USAF, 556 were fitted with F100-series engines and 890 with F110s. The United Arab Emirates' Block 60 is powered by the General Electric F110-GE-132 turbofan with a maximum thrust of 32,500 lbf (145 kN), the highest thrust engine developed for the F-16.
== Operational history ==
=== United States ===
The F-16 is used by active-duty USAF, Air Force Reserve, and Air National Guard units, by the USAF aerial demonstration team, the U.S. Air Force Thunderbirds, and as an adversary-aggressor aircraft by the United States Navy at the Naval Strike and Air Warfare Center.
The U.S. Air Force, including the Air Force Reserve and the Air National Guard, flew the F-16 in combat during Operation Desert Storm in 1991 and in the Balkans later in the 1990s. During the NATO bombing of Yugoslavia, on 2 May 1999 an F-16 piloted by David L. Goldfein, later Chief of Staff of the United States Air Force, was shot down over western Serbia by the 250th Air Defence Missile Brigade. F-16s also patrolled the no-fly zones in Iraq during Operations Northern Watch and Southern Watch and served during the War in Afghanistan and the War in Iraq from 2001 and 2003 respectively. In 2011, Air Force F-16s took part in the intervention in Libya.
On 11 September 2001, during the terrorist attacks of that day, two unarmed F-16s were launched in an attempt to ram and down United Airlines Flight 93 before it reached Washington, D.C. The hijackers brought Flight 93 down after passengers stormed the cockpit, so the F-16s were retasked to patrol the local airspace and later escorted Air Force One back to Washington.
The F-16 had been scheduled to remain in service with the U.S. Air Force until 2025. Its planned replacement is the F-35A variant of the Lockheed Martin F-35 Lightning II, which is expected to gradually begin replacing several multirole aircraft among the program's member nations. However, owing to delays in the F-35 program, all USAF F-16s will receive service life extension upgrades, and in 2022 it was announced that the USAF would continue to operate the F-16 for another two decades.
=== Israel ===
The F-16's first air-to-air combat success was achieved by the Israeli Air Force (IAF) over the Bekaa Valley on 28 April 1981, against a Syrian Mi-8 helicopter, which was downed with cannon fire. On 7 June 1981, eight Israeli F-16s, escorted by six F-15s, executed Operation Opera, their first employment in a significant air-to-ground operation. This raid severely damaged Osirak, an Iraqi nuclear reactor under construction near Baghdad, to prevent the regime of Saddam Hussein from using the reactor for the creation of nuclear weapons.
The following year, during the 1982 Lebanon War Israeli F-16s engaged Syrian aircraft in one of the largest air battles involving jet aircraft, which began on 9 June and continued for two more days. Israeli Air Force F-16s were credited with 44 air-to-air kills during the conflict.
In January 2000, Israel completed a purchase of 102 new F-16I aircraft in a deal totaling $4.5 billion. F-16s were also used in their ground-attack role for strikes against targets in Lebanon. IAF F-16s participated in the 2006 Lebanon War and the 2008–09 Gaza War. During and after the 2006 Lebanon war, IAF F-16s shot down Iranian-made UAVs launched by Hezbollah, using Rafael Python 5 air-to-air missiles.
On 10 February 2018, an Israeli Air Force F-16I was shot down in northern Israel when it was hit by a relatively old model S-200 (NATO name SA-5 Gammon) surface-to-air missile of the Syrian Air Defense Force. The pilot and navigator ejected safely in Israeli territory. The F-16I was part of a bombing mission against Syrian and Iranian targets around Damascus after an Iranian drone entered Israeli airspace and was shot down. An Israeli Air Force investigation concluded on 27 February 2018 that the loss was due to pilot error, as the aircrew had not adequately defended themselves against the incoming missile.
Following the aftermath of the 7 October 2023 attacks, F-16Is have played a major role in Israel's Operation Swords of Iron, executing numerous airstrikes against Hamas targets in Gaza. The IAF has also employed F-16s in operations against Hezbollah in Lebanon and in strikes on Iranian-linked assets in Syria and Iraq, demonstrating the aircraft's versatility and reach.
On 16 July 2024, the last single-seat F-16C Barak ('Lightning' in Hebrew) jets were retired; the IAF continues to use the two-seat F-16D Brakeet and F-16I Sufa variants. In October 2024, during Operation Days of Repentance, F-16Is took part in significant operations against Iranian military infrastructure as Israeli forces launched coordinated strikes on Iranian air defense systems and missile production facilities, aiming to degrade Iran's military capabilities and deter further aggression.
Israeli F-16s have been instrumental in operations against Houthi targets in Yemen, taking advantage of the F-16's extended operational range and strategic reach, flying a distance of approximately 1,700 kilometers (about 1,056 miles). Notably, on December 26, 2024, as part of Operation Tzelilei HaKerem, the IAF conducted airstrikes targeting Sana'a International Airport and other strategic locations, responding to Houthi missile and drone attacks on Israeli territory.
=== Pakistan ===
During the Soviet–Afghan War, Pakistan Air Force (PAF) F-16As shot down between 20 and 30 Soviet and Afghan warplanes; however, the political situation resulted in the PAF officially recognizing only the nine kills made inside Pakistani airspace. From May 1986 to January 1989, PAF F-16s from the Tail Choppers and Griffin squadrons, using mostly AIM-9 Sidewinder missiles, shot down four Afghan Su-22s, two MiG-23s, one Su-25, and one An-26. Most of these kills were made with missiles, but at least one, an Su-22, was destroyed by cannon fire. One F-16 was lost in these battles, likely hit accidentally by the other F-16 in the engagement.
On 7 June 2002, a PAF F-16B Block 15 (S. No. 82-605) shot down an Indian Air Force unmanned aerial vehicle, an Israeli-made Searcher II, using an AIM-9L Sidewinder missile, during a night interception near Lahore.
The Pakistan Air Force has used its F-16s in various foreign and internal military exercises, such as the "Indus Vipers" exercise in 2008 conducted jointly with Turkey.
Between May 2009 and November 2011, the PAF F-16 fleet flew more than 5,500 sorties in support of the Pakistan Army's operations against the Taliban insurgency in the FATA region of North-West Pakistan. More than 80% of the dropped munitions were laser-guided bombs.
On 27 February 2019, following six Pakistan Air Force airstrikes in Jammu and Kashmir, India, Pakistani officials said that two of its fighter jets had shot down one MiG-21 and one Su-30MKI belonging to the Indian Air Force. Indian officials confirmed the loss of one MiG-21 but denied losing any Su-30MKI in the clash and dismissed the Pakistani claims as dubious. Indian officials also claimed to have shot down one Pakistan Air Force F-16. This claim was denied by the Pakistani side and considered dubious by neutral sources, and it was later undercut by a report in Foreign Policy magazine that the US had completed a physical count of Pakistan's F-16s and found none missing. A report by The Washington Post noted that the Pentagon and State Department refused public comment on the matter but did not deny the earlier report.
On 8 May 2025, Indian media reported that a PAF F-16 was shot down, which was not claimed by Indian authorities and subsequently refuted by Pakistan.
=== Turkey ===
The Turkish Air Force acquired its first F-16s in 1987, and F-16s were later produced in Turkey under the four phases of the Peace Onyx programs. In 2015, they were upgraded to the Block 50/52+ standard with the CCIP by Turkish Aerospace Industries. Turkish F-16s are being fitted with an indigenous AESA radar and an electronic warfare suite called SPEWS-II.
On 18 June 1992, a Greek Mirage F1 crashed during a dogfight with a Turkish F-16. On 8 February 1995, a Turkish F-16 crashed into the Aegean Sea after being intercepted by Greek Mirage F1 fighters.
Turkish F-16s have participated in operations over Bosnia-Herzegovina and Kosovo since 1993 in support of United Nations resolutions.
On 8 October 1996, seven months after the escalation, a Greek Mirage 2000 reportedly fired an R.550 Magic II missile and shot down a Turkish F-16D over the Aegean Sea. The Turkish pilot died, while the co-pilot ejected and was rescued by Greek forces. In August 2012, after the downing of an RF-4E on the Syrian coast, Turkish Defence Minister İsmet Yılmaz confirmed that the Turkish F-16D had been shot down by a Greek Mirage 2000 with an R.550 Magic II in 1996 near Chios island. Greece denies that the F-16 was shot down, although both Mirage 2000 pilots reported that the F-16 caught fire and that they saw one parachute.
On 23 May 2006, two Greek F-16s intercepted a Turkish RF-4 reconnaissance aircraft and two F-16 escorts off the coast of the Greek island of Karpathos, within the Athens FIR. A mock dogfight ensued between the two sides, resulting in a midair collision between a Turkish F-16 and a Greek F-16. The Turkish pilot ejected safely, but the Greek pilot died owing to damage caused by the collision.
Turkey used its F-16s extensively in its conflict with Kurdish insurgents in southeastern Turkey and in Iraq. Turkey launched its first cross-border raid, involving 50 fighters, on 16 December 2007 as a prelude to Operation Sun, the 2008 Turkish incursion into northern Iraq. This was the first time Turkey had mounted a night-bombing operation on such a massive scale, and it was the largest operation yet conducted by the Turkish Air Force.
During the Syrian Civil War, Turkish F-16s were tasked with airspace protection on the Syrian border. After the RF-4 downing in June 2012, Turkey changed its rules of engagement against Syrian aircraft, resulting in scrambles and downings of Syrian combat aircraft. On 16 September 2013, a Turkish Air Force F-16 shot down a Syrian Arab Air Force Mil Mi-17 helicopter near the Turkish border. On 23 March 2014, a Turkish Air Force F-16 shot down a Syrian Arab Air Force MiG-23 when it allegedly entered Turkish air space during a ground attack mission against Al Qaeda-linked insurgents. On 16 May 2015, two Turkish Air Force F-16s shot down a Syrian Mohajer 4 UAV with two AIM-9 missiles after it had trespassed into Turkish airspace for 5 minutes. A Turkish Air Force F-16 shot down a Russian Air Force Sukhoi Su-24 on the Turkey-Syria border on 24 November 2015.
On 1 March 2020, two Syrian Sukhoi Su-24s were shot down by Turkish Air Force F-16s using air-to-air missiles over Syria's Idlib Governorate. All four pilots safely ejected. On 3 March 2020, a Syrian Arab Army Air Force L-39 combat trainer was shot down by a Turkish F-16 over Syria's Idlib province. The pilot died.
As part of the Turkish F-16 modernization program, new air-to-air missiles are being developed and tested for the aircraft. The GÖKTUĞ program, led by TÜBİTAK SAGE, has presented two air-to-air missiles, Bozdoğan (Merlin) and Gökdoğan (Peregrine). Bozdoğan is categorized as a within-visual-range air-to-air missile (WVRAAM), while Gökdoğan is a beyond-visual-range air-to-air missile (BVRAAM). On 14 April 2021, the first live-fire test of Bozdoğan was successfully completed, and the first batch of missiles was expected to be delivered to the Turkish Air Force later that year.
=== Egypt ===
On 16 February 2015, Egyptian F-16s struck weapons caches and training camps of the Islamic State (ISIS) in Libya in retaliation for the murder of 21 Egyptian Coptic Christian construction workers by masked militants affiliated with ISIS. The airstrikes killed 64 ISIS fighters, including three leaders in Derna and Sirte on the coast.
=== Europe ===
The Royal Netherlands Air Force, Belgian Air Component, Royal Danish Air Force, and Royal Norwegian Air Force all fly the F-16. The F-16s of most European air forces are equipped with drag chutes, specifically to allow them to operate from automobile highways.
A Yugoslavian MiG-29 was shot down by a Dutch F-16AM during the Kosovo War in 1999. Belgian and Danish F-16s also participated in joint operations over Kosovo during the war. Dutch, Belgian, Danish, and Norwegian F-16s were deployed during the 2011 intervention in Libya and in Afghanistan. In Libya, Norwegian F-16s dropped almost 550 bombs and flew 596 missions, some 17% of the total strike missions, including the bombing of Muammar Gaddafi's headquarters.
In late March 2018, Croatia announced its intention to purchase 12 used Israeli F-16C/D "Barak"/"Brakeet" jets, pending U.S. approval. Acquiring these F-16s would have allowed Croatia to retire its aging MiG-21s. In January 2019, the deal was canceled because the U.S. would only allow the resale if Israel stripped the jets of all their modernized electronics, while Croatia insisted on the original deal with all the upgrades installed. At the end of November 2021, Croatia instead signed with France for 12 Rafales.
On 11 July 2018, Slovakia's government approved the purchase of 14 F-16 Block 70/72 to replace its aging fleet of Soviet-made MiG-29s. A contract was signed on 12 December 2018 in Bratislava.
=== Ukraine ===
In May 2023, an international coalition consisting of the United Kingdom, the Netherlands, Belgium, and Denmark announced its intention to train Ukrainian Air Force pilots on the F-16 ahead of possible future deliveries, to increase the Ukrainian Air Force's capabilities in the ongoing Russo-Ukrainian War, and the U.S. confirmed that it would approve the re-export of the aircraft from these countries to Ukraine. Denmark's acting Defence Minister Troels Lund Poulsen said that Denmark "will now be able to move forward for a collective contribution to train Ukrainian pilots to fly F-16s". On 6 July 2023, Romania announced that it would host the future training center, following a meeting of its Supreme Council of National Defense. During the 2023 Vilnius summit, a coalition was formed consisting of Denmark, the Netherlands, Belgium, Canada, Luxembourg, Norway, Poland, Portugal, Romania, Sweden, the United Kingdom, and Ukraine, and a number of Ukrainian pilots began training in Denmark and the U.S. The European F-16 Training Center, organized by Romania, the Netherlands, and Lockheed Martin through several subcontractors, officially opened on 13 November 2023; it is located at the Romanian Air Force's 86th Air Base, and Ukrainian pilots began training there in September 2024. On 17 August 2023, the U.S. approved the transfer of F-16s from the Netherlands and Denmark to Ukraine once Ukrainian pilots had completed their training. The Netherlands and Denmark have announced that together they will donate up to 61 F-16AM/BM Block 15 MLU fighters to Ukraine once pilot training is completed.
On 13 May 2024, Danish Prime Minister Mette Frederiksen said that "F-16 from Denmark will be in the air over Ukraine within months." Denmark is sending 19 F-16s in total. By the end of July 2024, the first F-16s were delivered to Ukraine.
On 4 August 2024, President Zelensky announced that the F-16 was in operational service with Ukraine, stating at an opening ceremony: "F-16s are in Ukraine. We did it. I am proud of our guys who are mastering these jets and have already started using them for our country."
On 26 August 2024, F-16s were reportedly used to intercept Russian cruise missiles for the first time. Also on 26 August, a Ukrainian F-16 crashed and the pilot, Oleksii Mes, was killed while intercepting Russian aerial targets during the cruise missile strikes. The cause is under investigation.
On 13 December 2024, the Ukrainian Air Force stated an F-16 shot down six Russian cruise missiles. Two were downed with "medium-range missiles", another two with "short-range missiles", and two were claimed to be downed by 20 mm cannon.
On 3 June 2025, Ukrainian Air Force spokesman Yuri Ignat stated that Ukraine's F-16s are outmatched by Russian jets, missiles, and air defenses.
==== Combat losses ====
Ukraine had confirmed the loss of three F-16 fighters as of May 2025. The first crash occurred in August 2024. On 30 August 2024, the Commander of the Ukrainian Air Force, Mykola Oleshchuk, was dismissed by President Zelenskyy and replaced by Lieutenant General Anatolii Kryvonozhko, a move partially attributed to "indications" that the F-16 that crashed on 26 August was shot down in a friendly-fire incident. Ukrainian parliamentarian Maryana Bezuhla and Oleshchuk had previously argued over the cause of the F-16 loss.
The second crash occurred on 12 April 2025. Ukraine stated that pilot Pavlo Ivanov was killed in action flying an F-16. BBC News Ukraine reported that Russian forces fired three missiles at the F-16, which was probably flying over the Sumy region, from either an S-400 surface-to-air system or an R-37 air-to-air missile.
The third crash occurred on 16 May 2025. Ukraine stated that a third F-16 was lost at around 3:30 am during a combat mission, after an emergency developed on board, but did not specify the cause. The pilot was said to have steered the aircraft away from populated areas before ejecting and was rescued in stable condition.
=== Others ===
The Venezuelan Air Force has flown the F-16 on combat missions. During the November 1992 Venezuelan coup attempt, two F-16As flown by government loyalists shot down two rebel OV-10 Broncos and an AT-27 Tucano, establishing air superiority for government forces.
Two F-16Bs of the Indonesian Air Force intercepted and engaged several US Navy F/A-18 Hornets over the Java Sea in the 2003 Bawean incident.
The Royal Moroccan Air Force and the Royal Bahraini Air Force each lost a single F-16C, both shot down by Houthi anti-aircraft fire during the Saudi Arabian-led intervention in Yemen, respectively on 11 May 2015 and on 30 December 2015.
On 11 October 2023, Deputy Assistant Secretary for Regional Security Mira Resnick confirmed to Jorge Argüello, the Argentine ambassador to the US, that the State Department had approved the transfer of 38 F-16s from Denmark. On 16 April 2024, defense minister Luis Petri announced that Argentina had gone through with the purchase of 24+1 Danish F-16s, which are to be brought up to date before being sent to Argentina. The 25th aircraft, an F-16B MLU Block 10 intended for mechanics' training, arrived disassembled aboard an Argentine C-130 in late December 2024. Denmark is to deliver six F-16s a year until the order is complete, with the first batch expected around November 2025.
In 2019, the US State Department approved the possible sale of 8 F-16 Block 70s to Bulgaria, and the deal was endorsed by the Bulgarian parliament and President Rumen Radev. In November 2022, the purchase of a further 8 F-16 Block 70 fighters, plus spares, weapons, and other systems, was approved for delivery in 2027. The Bulgarian Air Force expects delivery of the first eight new F-16 Block 70s by 2025 and the second batch of eight in 2027.
In 2024, Argentina selected the Danish bid for 24 F-16AM/BM aircraft over a rival offer of JF-17s from China/Pakistan. The first aircraft, an F-16B, was unveiled in Buenos Aires on 24 February 2025.
=== Potential operators ===
==== Philippines ====
In 2021, the Defense Security Cooperation Agency approved the Philippines' purchase of 12 F-16s worth an estimated US$2.43 billion. However, the Philippines has yet to complete the deal due to financial constraints, and negotiations are ongoing. In April 2025, the possible sale of 20 F-16s was approved, superseding the DSCA's earlier approval. It was reported in May 2025 that Lockheed Martin was interested in developing a facility similar to the Center for Innovation and Security Solutions in Abu Dhabi, depending on the success of the F-16 sale.
==== Vietnam ====
In 2025, multiple news channels reported that Vietnam is finalizing an agreement to purchase at least 24 F-16s, possibly the F-16V variant.
=== Civilian operators ===
==== Top Aces ====
In January 2021, Canadian defence contractor Top Aces announced that it had taken delivery of the first civilian-owned F-16s at its US headquarters in Mesa, Arizona. In an approval process that had taken years, the company had purchased a batch of 29 F-16A/B Netz from the Israeli Air Force, including several that had taken part in Operation Opera. A year later, the first of these aircraft had finished extensive AAMS mission-system upgrades, including AESA radar, HMCS, ECM, and a tactical datalink. In late 2022 they began regular operations as contracted aggressors for USAF F-22 and F-35 squadrons at Luke AFB and Eglin AFB, as well as supporting exercises at other USAF and USMC bases.
== Variants ==
F-16 models are denoted by increasing block numbers, which indicate upgrades. The blocks cover both single- and two-seat versions. A variety of software, hardware, systems, weapons-compatibility, and structural enhancements have been instituted over the years to gradually upgrade production models and retrofit delivered aircraft.
While many F-16s were produced according to these block designs, there have been many other variants with significant changes, usually because of modification programs. Other changes have resulted in role-specialization, such as the close air support and reconnaissance variants. Several models were also developed to test new technology. The F-16 design also inspired the design of other aircraft, which are considered derivatives. Older F-16s are being converted into QF-16 drone targets.
F-16A/B
The F-16A (single seat) and F-16B (two seat) were the initial production variants, covering the Block 1, 5, 10, 15, and 20 versions. Block 15 introduced the first major change to the F-16, larger horizontal stabilizers, and is the most numerous of all F-16 variants, with 983 produced. Around 300 earlier USAF F-16A/B aircraft were upgraded to the Block 15 Mid-Life Update (MLU) standard, giving them capability analogous to F-16C/D Block 50/52 aircraft. From 1987, a total of 214 Block 15 aircraft were upgraded to the OCU (Operational Capability Upgrade) standard, with engine, structural, and electronic improvements, and from 1988 all Block 15 aircraft were built directly to OCU specifications. Between 1989 and 1992, a total of 271 Block 15OCU airframes (246 F-16A and 25 F-16B) were converted at the Ogden Air Logistics Center to the ADF (Air Defense Fighter) variant, with an improved IFF system, radio, and radar, the ability to carry advanced beyond-visual-range missiles, and a side-mounted 150,000-candlepower spotlight for visual night identification of intruders. Originally intended for Cold War air defense of continental U.S. airspace, the ADF lost a clear mission with the fall of the Berlin Wall, and most were mothballed starting in 1994. Some mothballed ADFs were later exported to Jordan (12 -A and 4 -B models) and Thailand (15 -A and 1 -B), while 30 -A and 4 -B models were leased to Italy from 2003 to 2012.
F-16C/D
The F-16C (single seat) and F-16D (two seat) variants entered production in 1984. The first C/D version was the Block 25, with improved cockpit avionics and a radar that added all-weather capability with beyond-visual-range (BVR) AIM-7 and AIM-120 air-to-air missiles. Block 30/32, 40/42, and 50/52 were later C/D versions. The F-16C/D had a unit cost of US$18.8 million (1998). Operational cost per flight hour has been estimated at anywhere from $7,000 to $22,470–24,000, depending on the calculation method.
F-16E/F
The F-16E (single seat) and F-16F (two seat) are newer F-16 Block 60 variants based on the F-16C/D Block 50/52. The United Arab Emirates invested heavily in their development. They feature improved AN/APG-80 active electronically scanned array (AESA) radar, infrared search and track (IRST), avionics, conformal fuel tanks (CFTs), and the more powerful General Electric F110-GE-132 engine.
F-16IN
For the Indian MRCA competition for the Indian Air Force, Lockheed Martin offered the F-16IN Super Viper. The F-16IN is based on the F-16E/F Block 60 and features conformal fuel tanks; AN/APG-80 AESA radar; GE F110-GE-132A engine with FADEC controls; an electronic warfare suite and infrared search and track (IRST) unit; an updated glass cockpit; and a helmet-mounted cueing system. As of 2011, the F-16IN was no longer in the competition. In 2016, Lockheed Martin offered the new F-16 Block 70/72 version to India under the Make in India program, and the Indian government proposed purchasing 200 (potentially up to 300) fighters in a deal worth $13–15bn. In 2017, Lockheed Martin agreed to manufacture F-16 Block 70 fighters in India with the Indian defense firm Tata Advanced Systems Limited. The new production line could be used to build F-16s for India and for export.
F-16IQ
In September 2010, the Defense Security Cooperation Agency informed the United States Congress of a possible Foreign Military Sale of 18 F-16IQ aircraft, along with associated equipment and services, to the newly reformed Iraqi Air Force. The total value of the sale was estimated at US$4.2 billion. The Iraqi Air Force purchased those 18 jets in the second half of 2011, then later exercised an option to purchase 18 more for a total of 36 F-16IQs. As of 2021, the Iraqi Air Force had lost two in accidents. By 2023, the US government reported that these jets were Iraq's most capable airborne platforms, with a 66 percent mission-capable rate, and that their maintenance was being supported by private contractors. At the same time, Iraq's Russian-made systems were suffering from sanctions imposed in the wake of Russia's invasion of Ukraine.
F-16N
The F-16N was an adversary aircraft operated by the United States Navy. It was based on the standard F-16C/D Block 30, was powered by the General Electric F110-GE-100 engine, and was capable of supercruise. The F-16N had a strengthened wing and could carry an Air Combat Maneuvering Instrumentation (ACMI) pod on the starboard wingtip. Although the single-seat F-16Ns and twin-seat (T)F-16Ns were based on the early-production small-inlet Block 30 F-16C/D airframe, they retained the APG-66 radar of the F-16A/B. In addition, the aircraft's 20 mm cannon and airborne self-protection jammer (ASPJ) were removed, and they carried no missiles. Their EW fit consisted of an ALR-69 radar warning receiver (RWR) and an ALE-40 chaff/flare dispenser. The F-16Ns and (T)F-16Ns had the standard Air Force tailhook and undercarriage and were not aircraft carrier–capable. Production totaled 26 airframes: 22 single-seat F-16Ns and 4 twin-seat (T)F-16Ns. The initial batch was in service between 1988 and 1998, when hairline cracks were discovered in several bulkheads; as the Navy did not have the resources to replace them, the aircraft were retired, with one sent to the collection of the National Naval Aviation Museum at NAS Pensacola, Florida, and the remainder placed in storage at Davis-Monthan AFB. They were replaced by embargoed ex-Pakistani F-16s in 2003. The original F-16Ns were operated by adversary squadrons at NAS Oceana, Virginia; NAS Key West, Florida; and the former NAS Miramar, California. The current F-16A/B aircraft are operated by the Naval Strike and Air Warfare Center at NAS Fallon, Nevada.
F-16V
At the 2012 Singapore Air Show, Lockheed Martin unveiled plans for the new F-16V variant, the V suffix referring to its Viper nickname. It features an AN/APG-83 active electronically scanned array (AESA) radar, a new mission computer and electronic warfare suite, an automated ground collision avoidance system, and various cockpit improvements; this package is an option on current-production F-16s and can be retrofitted to most in-service F-16s. First flight took place on 21 October 2015. Taiwanese media reported that Taiwan and the U.S. both initially invested in the development of the F-16V. Upgrades to Taiwan's F-16 fleet began in January 2017. Bahrain was the first country to confirm a purchase of 16 new F-16 Block 70/72 aircraft. Greece announced the upgrade of 84 F-16C/D Block 52+ and Block 52+ Advanced (Block 52M) aircraft to the latest V (Block 70/72) standard in October 2017. Slovakia announced on 11 July 2018 that it intended to purchase 14 F-16 Block 70/72 aircraft. Lockheed Martin has redesignated the F-16V Block 70 as the "F-21" in its offering for India's fighter requirement. Taiwan's Republic of China Air Force announced on 19 March 2019 that it had formally requested the purchase of an additional 66 F-16V fighters; the Trump administration approved the sale on 20 August 2019. On 14 August 2020, Lockheed Martin was awarded a US$62 billion contract by the US DoD that includes 66 new F-16s at US$8 billion (~$9.28 billion in 2023) for Taiwan.
QF-16
In September 2013, Boeing and the U.S. Air Force tested an unmanned F-16, with two US Air Force pilots controlling the airplane from the ground as it flew from Tyndall AFB over the Gulf of Mexico.
=== Related developments ===
Vought Model 1600
Proposed naval variant
General Dynamics F-16XL
1980s technology demonstrator
General Dynamics NF-16D VISTA
1990s experimental fighter
Mitsubishi F-2
1990s Japanese multirole fighter based on the F-16
== Operators ==
As of 2024, 2,145 F-16s were in active service around the world.
=== Former operators ===
Denmark – Royal Danish Air Force sold 24 F-16s to the Argentine Air Force in 2024 and donated 19 F-16s to the Ukrainian Air Force.
Italy – Italian Air Force used up to 30 F-16As and 4 F-16Bs of the Block 15 ADF variant, leased from the United States Air Force, from 2003 to 2012.
Netherlands – Royal Netherlands Air Force sold 6 F-16s to the Royal Jordanian Air Force and 36 F-16s to the Chilean Air Force in 2005, and began donating the rest of its 42-aircraft fleet to Ukraine in 2024.
Norway – Royal Norwegian Air Force (RNoAF). On 6 January 2022, Norway announced that all F-16s had been retired and replaced with the F-35. The RNoAF sold 32 of its F-16s to the Romanian Air Force, with the remaining operational aircraft donated to Ukraine.
== Notable accidents and incidents ==
The F-16 has been involved in over 670 hull-loss accidents as of January 2020.
On 8 May 1975, while practicing a 9-g aerial display maneuver with the second YF-16 (tail number 72-1568) at Fort Worth, Texas, prior to being sent to the Paris Air Show, one of the main landing gears jammed. The test pilot, Neil Anderson, had to perform an emergency gear-up landing and chose to do so in the grass, hoping to minimize damage and avoid injuring any observers. The aircraft was only slightly damaged, but because of the mishap, the first prototype was sent to the Paris Air Show in its place.
On 15 November 1982, while on a training flight outside Kunsan Air Base in South Korea, USAF Captain Ted Harduvel died when he crashed inverted into a mountain ridge. In 1985, Harduvel's widow filed a lawsuit against General Dynamics claiming an electrical malfunction, not pilot error, as the cause; a jury awarded the plaintiff $3.4 million in damages. However, in 1989, the U.S. Court of Appeals ruled the contractor had immunity to lawsuits, overturning the previous judgment. The court remanded the case to the trial court "for entry of judgment in favor of General Dynamics". The accident and subsequent trial was the subject of the 1992 film Afterburn.
On 23 March 1994, during a joint Army-Air Force exercise at Pope AFB, North Carolina, F-16D (AF Serial No. 88-0171) of the 23d Fighter Wing / 74th Fighter Squadron was simulating an engine-out approach when it collided with a USAF C-130E. Both F-16 crew members ejected, but their aircraft, on full afterburner, continued on an arc towards Green Ramp and struck a USAF C-141 that was being boarded by US Army paratroopers. This accident resulted in 24 fatalities and at least 100 others injured. It has since been known as the "Green Ramp disaster".
On 15 September 2003, a USAF Thunderbirds F-16C crashed during an air show at Mountain Home AFB, Idaho. Captain Christopher Stricklin attempted a "split S" maneuver based on an incorrect mean-sea-level altitude of the airfield. Climbing to only 1,670 ft (510 m) above ground level instead of 2,500 ft (760 m), Stricklin had insufficient altitude to complete the maneuver, but was able to guide the aircraft away from spectators and ejected less than one second before impact. Stricklin survived with only minor injuries; the aircraft was destroyed. USAF procedure for demonstration "Split-S" maneuvers was changed, requiring both pilots and controllers to use above-ground-level (AGL) altitudes.
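The Mountain Home accident turns on the difference between mean-sea-level (MSL) and above-ground-level (AGL) altitude: an MSL entry figure computed for one airfield silently loses safety margin when reused at a higher-elevation field. A minimal sketch of that arithmetic, using hypothetical field elevations chosen only to reproduce the 2,500 ft vs 1,670 ft figures quoted above:

```python
def agl_from_msl(msl_altitude_ft: int, field_elevation_ft: int) -> int:
    """Height above the ground is the MSL altitude minus the field elevation."""
    return msl_altitude_ft - field_elevation_ft

REQUIRED_AGL_FT = 2_500  # minimum split-S entry height above ground (from the article)

# Hypothetical numbers: an MSL entry altitude valid for a field at ~1,000 ft
# elevation, reused at a field sitting 1,830 ft above sea level.
planned_msl_ft = 2_500 + 1_000
high_field_elevation_ft = 1_830

actual_agl_ft = agl_from_msl(planned_msl_ft, high_field_elevation_ft)
shortfall_ft = REQUIRED_AGL_FT - actual_agl_ft
print(actual_agl_ft, shortfall_ft)  # 1670 ft AGL, an 830 ft shortfall
```

This is why the USAF procedure change described above requires both pilots and controllers to brief demonstration maneuvers in AGL terms, removing the field-elevation subtraction from the cockpit workload.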
On 26 January 2015, a Greek F-16D crashed while performing a NATO training exercise in Albacete, Spain. Both crew members and nine French soldiers on the ground died when it crashed in the flight line, destroying or damaging two Italian AMXs, two French Alpha jets, and one French Mirage 2000. Investigations suggested that the accident was due to an erroneous rudder setting that was caused by loose papers in the cockpit.
On 7 July 2015, an F-16CJ collided with a Cessna 150M over Moncks Corner, South Carolina, U.S. The pilot of the F-16 ejected safely, but both people in the Cessna were killed.
On 11 October 2018, an F-16 MLU from the 2nd Tactical Wing of the Belgian Air Component, parked on the apron at Florennes Air Station, was hit by a gun burst from a nearby F-16 whose cannon was fired inadvertently during maintenance. The aircraft caught fire and burned out, two other F-16s were damaged, and two maintenance personnel were treated for aural trauma.
On 11 March 2020, a Pakistani F-16AM (Serial No. 92730) of No. 9 Squadron, Pakistan Air Force, crashed in the Shakarparian area of Islamabad during rehearsals for the Pakistan Day Parade, while executing an aerobatic loop. The pilot, Wing Commander Noman Akram, Commanding Officer of No. 9 Squadron "Griffins", was killed. A board of inquiry ordered by the Pakistan Air Force later found that the pilot had had every chance to eject but opted instead to try to save the aircraft and avoid civilian casualties on the ground; videos taken by locals show his F-16AM crashing into woods. He was hailed as a hero in Pakistan and also drew some international attention.
On 6 May 2023, a U.S. Air Force F-16C of the 8th Fighter Wing crashed in a field near Osan Air Base in South Korea during a daytime training sortie. The pilot safely ejected from the aircraft.
On 20 March 2024, an F-16 operated by the Hellenic Air Force crashed into the sea, close to the island of Psathoura in the northern Aegean Sea. The pilot ejected from the aircraft and was later rescued.
On 30 April 2024, a US Air Force General Dynamics F-16 crashed outside Holloman Air Force Base, near Alamogordo, New Mexico. The pilot ejected safely before impact.
On 8 May 2024, an F-16C of the Republic of Singapore Air Force crashed during takeoff at Tengah Air Base. The pilot ejected without major injuries. The cause was later identified as the malfunction of two of the aircraft's three primary pitch-rate gyroscopes. Lockheed Martin noted this as a "rare occurrence" because the two gyroscopes failed concurrently while giving similar erroneous inputs, which led the digital flight control computer to reject the inputs of the correctly functioning third gyroscope, and then of the backup gyroscope activated once a primary gyroscope had been rejected.
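The failure mode described above defeats simple redundancy: in a 2-of-3 voting scheme, two channels that fail *alike* agree with each other and outvote the single healthy channel. The following is a generic illustration of that logic, not the F-16's actual flight-control law (the function, tolerance, and values are hypothetical):

```python
def vote_two_of_three(a: float, b: float, c: float, tolerance: float = 1.0):
    """Return the averaged value backed by at least two channels that agree
    within `tolerance`, or None if no two channels agree.

    Generic 2-of-3 sensor voter for illustration only.
    """
    readings = [a, b, c]
    for x in readings:
        # Collect every channel (including x itself) that agrees with x.
        agreeing = [y for y in readings if abs(y - x) <= tolerance]
        if len(agreeing) >= 2:
            return sum(agreeing) / len(agreeing)
    return None  # no quorum: the voter cannot decide

# Normal case: one channel drifts, and the two healthy channels outvote it.
print(vote_two_of_three(5.0, 5.1, 9.0))  # close to 5.05

# Incident-like case: two channels fail with *similar* erroneous outputs,
# so the voter sides with the faulty pair and rejects the healthy channel.
print(vote_two_of_three(9.0, 9.1, 5.0))  # close to 9.05
```

The second call shows why the concurrent, similar failure was significant: majority voting is only sound when channel failures are independent and dissimilar.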
On 28 August 2024, an F-16 of the Ukrainian Air Force crashed at an undisclosed location in Ukraine during a Russian air strike. The pilot, Oleksii Mes, died in the accident.
== Aircraft on display ==
As newer variants have entered service, many examples of older F-16 models have been preserved for display worldwide, particularly in Europe and the United States.
== Specifications (F-16C Block 50 and 52) ==
Data from USAF sheet, International Directory of Military Aircraft, Flight Manual for F-16C/D Block 50/52+
General characteristics
Crew: 1
Length: 49 ft 5 in (15.06 m)
Wingspan: 32 ft 8 in (9.96 m)
Height: 16 ft (4.9 m)
Wing area: 300 sq ft (28 m2)
Airfoil: NACA 64A204
Empty weight: 18,900 lb (8,573 kg)
Gross weight: 26,500 lb (12,020 kg)
Max takeoff weight: 42,300 lb (19,187 kg)
Fuel capacity: 7,000 pounds (3,200 kg) internal
Powerplant: 1 × General Electric F110-GE-129 (Block 50 aircraft), 17,155 lbf (76.31 kN) thrust dry, 29,500 lbf (131 kN) with afterburner; or 1 × Pratt & Whitney F100-PW-229 (Block 52 aircraft), 17,800 lbf (79 kN) thrust dry, 29,160 lbf (129.7 kN) with afterburner
Performance
Maximum speed: Mach 2.05, 1,176 kn (1,353 mph; 2,178 km/h) at 40,000 feet, clean
Mach 1.2, 800 kn (921 mph; 1,482 km/h) at sea level
Cruise speed: 504 kn (580 mph, 933 km/h)
Combat range: 295 nmi (339 mi, 546 km) on a hi-lo-hi mission with 4 × 1,000 lb (454 kg) bombs
Ferry range: 2,277 nmi (2,620 mi, 4,217 km) with three drop tanks
Service ceiling: 50,000 ft (15,000 m)
g limits: +9
Roll rate: 324°/s
Wing loading: 88.3 lb/sq ft (431 kg/m2)
Thrust/weight: 1.095 (1.24 with loaded weight & 50% internal fuel)
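The wing-loading figure above follows directly from the quoted weights and wing area, and the thrust-to-weight ratio is simply afterburning thrust over a reference weight. A quick check using the table's own numbers (the exact fuel/stores loading behind the published 1.095 ratio is not stated, so it is back-calculated here rather than verified):

```python
# Figures taken from the specifications table above.
gross_weight_lb = 26_500   # quoted gross weight
wing_area_sqft = 300       # quoted wing area
ab_thrust_lbf = 29_500     # F110-GE-129 afterburning thrust

# Wing loading = weight / wing area; matches the quoted 88.3 lb/sq ft.
wing_loading = gross_weight_lb / wing_area_sqft
print(round(wing_loading, 1))  # 88.3

# The published thrust/weight of 1.095 implies a reference weight of
# thrust / 1.095 (assumption: the table does not state this weight).
implied_ref_weight_lb = ab_thrust_lbf / 1.095
print(round(implied_ref_weight_lb))  # about 26,941 lb
```

The implied reference weight sits just above the quoted gross weight, which is consistent with a thrust-to-weight ratio defined at a specific combat loading rather than at maximum takeoff weight.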
Armament
Guns: 1 × 20 mm (0.787 in) M61A1 Vulcan 6-barrel rotary cannon, 511 rounds
Hardpoints: 2 × wing-tip air-to-air missile launch rails, 6 × under-wing, and 3 × under-fuselage pylon (2 of 3 for sensors) stations with a capacity of up to 17,000 lb (7,700 kg) of stores
Rockets:
4 × LAU-61/LAU-68 rocket pods (each with 19/7 × Hydra 70 mm/APKWS rockets, respectively)
4 × LAU-5003 rocket pods (each with 19 × CRV7 70 mm rockets)
4 × LAU-10 rocket pods (each with 4 × Zuni 127 mm rockets)
Missiles:
Air-to-air missiles:
6 × AIM-9 Sidewinder
6 × AIM-120 AMRAAM
6 × IRIS-T
6 × Python-4
6 × Python-5
2 × AIM-7 Sparrow and 4 × AIM-9 Sidewinder
Air-to-surface missiles:
6 × AGM-65 Maverick
2 × AGM-88 HARM
AGM-158 JASSM (JASSM)
Anti-ship missiles:
2 × AGM-84 Harpoon
4 × AGM-119 Penguin
Joint Strike Missile (to be integrated)
Bombs:
8 × CBU-87 Combined Effects Munition
8 × CBU-89 Gator mine
8 × CBU-97 Sensor Fuzed Weapon
4 × Mark 84 general-purpose bombs
8 × Mark 83 GP bombs
12 × Mark 82 GP bombs
8 × GBU-39 Small Diameter Bomb (SDB)
4 × GBU-10 Paveway II
6 × GBU-12 Paveway II
4 × GBU-24 Paveway III
4 × GBU-27 Paveway III
4 × Joint Direct Attack Munition (JDAM) series
4 × AGM-154 Joint Standoff Weapon (JSOW)
Wind Corrected Munitions Dispenser (WCMD)
B61 nuclear bomb
B83 nuclear bomb
Others:
ADM-160 MALD
SUU-42A/A flares/infrared decoys dispenser pod and chaff pod or
AN/ALQ-131 & AN/ALQ-184 ECM pods on centerline or
LANTIRN, Lockheed Martin Sniper XR & Litening targeting pods or
AN/ASQ-213 HARM targeting system (HTS) Pod (typically configured on station 5L with Sniper XR pod on station 5R) or
Up to 3 × 300/330/370/600 US gallon (1,135, 1,250, 1,400, 2,270 L) Sargent Fletcher drop tanks for ferry flight/extended range/loitering time or
UTC Aerospace DB-110 long range EO/IR sensor pod on centerline
Avionics
AN/APG-83 / AN/APG-68 radar (depends on aircraft variant). The AN/APG-68 radar is being replaced on many US Air Force F-16C/D Block 40/42 and 50/52 aircraft by the AN/APG-83 AESA radar.
AN/ALR-56M radar warning receiver, being replaced on US Air Force F-16C/D Block 40/42 and 50/52 by AN/ALR-69A(V)
AN/ALQ-213 electronic warfare suite, being replaced on US Air Force F-16C/D Block 40/42 and 50/52 by AN/ALQ-257
MIL-STD-1553 bus
== See also ==
Aircraft in fiction#F-16 Fighting Falcon
Fourth-generation fighter
Green Ramp disaster
David S. Lewis (General Dynamics' CEO during formative period for F-16)
RSAF Black Knights – F-16 Aerobatic Team
Related development
Vought Model 1600
General Dynamics F-16XL
General Dynamics X-62 VISTA
AIDC F-CK-1 Ching-kuo
KAI T-50 Golden Eagle
Mitsubishi F-2
Aircraft of comparable role, configuration, and era
Chengdu J-10
Dassault Mirage 2000
McDonnell Douglas F/A-18 Hornet
Mikoyan MiG-29
PAC/CAC JF-17 Thunder
Saab JAS 39 Gripen
Related lists
List of active United States military aircraft
List of fighter aircraft
List of military electronics of the United States
== References ==
=== Notes ===
=== Citations ===
=== Bibliography ===
== Further reading ==
Drendel, Lou. F-16 Fighting Falcon – Walk Around No. 1. Carrollton, Texas: Squadron/Signal Books, 1993. ISBN 0-89747-307-8.
Gunston, Bill. United States Military Aircraft of the 20th century London: Salamander Books Ltd, 1984. ISBN 0-86101-163-5.
Jenkins, Dennis R. McDonnell Douglas F-15 Eagle, Supreme Heavy-Weight Fighter. Arlington, Texas: Aerofax, 1998. ISBN 1-85780-081-8.
Sweetman, Bill. Supersonic Fighters: The F-16 Fighting Falcons. Mankato, Minnesota: Capstone Press, 2008. ISBN 1-4296-1315-7.
Williams, Anthony G. and Emmanuel Gustin. Flying Guns: The Modern Era. Ramsbury, UK: The Crowood Press, 2004. ISBN 1-86126-655-3.
== External links ==
F-16 USAF fact sheet
F-16 page on LockheedMartin.com and F-16 articles on Code One magazine site
F-16.net Fighting Falcon resource |
F-22 | The Lockheed Martin/Boeing F-22 Raptor is an American twin-engine, jet-powered, all-weather, supersonic stealth fighter aircraft. As a product of the United States Air Force's Advanced Tactical Fighter (ATF) program, the aircraft was designed as an air superiority fighter, but also incorporates ground attack, electronic warfare, and signals intelligence capabilities. The prime contractor, Lockheed Martin, built most of the F-22 airframe and weapons systems and conducted final assembly, while program partner Boeing provided the wings, aft fuselage, avionics integration, and training systems.
First flown in 1997, the F-22 descended from the Lockheed YF-22 and was variously designated F-22 and F/A-22 before it formally entered service in December 2005 as the F-22A. Although the U.S. Air Force (USAF) had originally planned to buy a total of 750 ATFs to replace its F-15 Eagles, it later scaled down to 381, and the program was ultimately cut to 195 aircraft – 187 of them operational models – in 2009 due to political opposition from high costs, a perceived lack of air-to-air threats at the time of production, and the development of the more affordable and versatile F-35. The last aircraft was delivered in 2012.
The F-22 is a critical component of the USAF's high-end tactical airpower. While it had a protracted development and initial operational difficulties, the aircraft became the service's leading counter-air platform against peer adversaries. Although designed for air superiority operations, the F-22 has also performed strike and electronic surveillance, including missions in the Middle East against the Islamic State and Assad-aligned forces. The F-22 is expected to remain a cornerstone of the USAF's fighter fleet until its succession by the Boeing F-47.
== Development ==
=== Origins ===
The F-22 originated from the Advanced Tactical Fighter (ATF) program that the U.S. Air Force (USAF) initiated in 1981 to replace the F-15 Eagle and F-16 Fighting Falcon. Intelligence reports indicated that their effectiveness would be eroded by emerging worldwide threats emanating from the Soviet Union, including new developments in surface-to-air missile systems for integrated air defense networks, the introduction of the Beriev A-50 "Mainstay" airborne warning and control system (AWACS), and the proliferation of the Sukhoi Su-27 "Flanker" and Mikoyan MiG-29 "Fulcrum" class of fighter aircraft. Code-named "Senior Sky", the ATF became an air superiority fighter program shaped by these threats. In the potential scenario of a Soviet and Warsaw Pact invasion of Central Europe, the ATF was envisaged to support the air-land battle by spearheading offensive and defensive counter-air operations (OCA/DCA) in a highly contested environment, enabling following echelons of NATO strike and attack aircraft to perform air interdiction against ground formations. To do so, the ATF would make an ambitious leap in capability and survivability by taking advantage of new technologies in fighter design on the horizon, including composite materials, lightweight alloys, advanced flight control systems and avionics, more powerful propulsion systems for supersonic cruise (or supercruise) around Mach 1.5, and stealth technology for low observability.
The USAF published an ATF request for information (RFI) to the aerospace industry in May 1981, and following a period of concept and specification development, the ATF System Program Office (SPO) issued the demonstration and validation (Dem/Val) request for proposals (RFP) in September 1985, with requirements placing strong emphasis on stealth, supersonic cruise and maneuver. The RFP saw some alterations after its initial release, including more stringent signature reduction requirements in December 1985 and the addition of the requirement for flying technology demonstrator prototypes in May 1986. Owing to the immense investments required to develop the advanced technologies, teaming among companies was encouraged. Of the seven bidding companies, Lockheed and Northrop were selected on 31 October 1986. Lockheed, through its Skunk Works division at Burbank, California, teamed with Boeing and General Dynamics while Northrop teamed with McDonnell Douglas. These two contractor teams undertook a 50-month Dem/Val phase, culminating in the flight test of two technology demonstrator prototypes, the Lockheed YF-22 and Northrop YF-23; while they represented competing designs, the prototypes were meant for demonstrating concept viability and risk mitigation rather than a competitive flyoff. Concurrently, Pratt & Whitney and General Electric competed for the ATF engines.
Dem/Val was focused on system engineering, technology development plans, and risk reduction over point aircraft designs; in fact, after down-select, the Lockheed team completely redesigned the airframe configuration in summer 1987 due to weight analysis, with notable changes including the wing planform from swept trapezoidal to diamond-like delta and a reduction in forebody planform area. The team extensively used analytical and empirical methods including computational fluid dynamics and computer-aided design, wind tunnel testing (18,000 hours for Dem/Val), and radar cross-section (RCS) calculations and pole testing. Avionics were tested in ground prototypes and flying laboratories. During Dem/Val, the SPO used trade studies from both teams to review the ATF system specifications and adjust or delete requirements that were significant weight and cost drivers while having marginal value. The short takeoff and landing (STOL) requirement was relaxed to delete thrust-reversers, saving substantial weight. Side looking radars and the dedicated infrared search and track (IRST) system were eventually removed as well, although space and cooling provisions were retained to allow for their later addition. The ejection seat was downgraded from a fresh design to the existing ACES II. Despite efforts by both teams to rein in weight, the takeoff gross weight estimates grew from 50,000 to 60,000 lb (22,700 to 27,200 kg), resulting in engine thrust requirement increasing from 30,000 to 35,000 lbf (133 to 156 kN) class.
Each team built two prototype air vehicles for Dem/Val, one for each engine option. The YF-22 had its maiden flight on 29 September 1990 and, in testing, successfully demonstrated supercruise, high angle-of-attack maneuvers, and the firing of air-to-air missiles from internal weapons bays. After the flight test of the demonstrator prototypes at Edwards Air Force Base, the teams submitted the results and their full-scale development design proposals – or Preferred System Concept – in December 1990; on 23 April 1991, the Secretary of the USAF, Donald Rice, announced the Lockheed team and Pratt & Whitney as the winners of the ATF and engine competitions. Both designs met or exceeded all performance requirements; the YF-23 was considered stealthier and faster, but the YF-22, with its thrust vectoring nozzles, was more maneuverable as well as less expensive and risky, having flown considerably more test sorties and hours than its counterpart. The press also speculated that the Lockheed team's design was more adaptable to the Navy Advanced Tactical Fighter (NATF) for replacing the F-14 Tomcat, but by fiscal year (FY) 1992, the U.S. Navy had abandoned NATF due to cost.
=== Full-scale development ===
The program formally moved to full-scale development, or Engineering & Manufacturing Development (EMD), in August 1991. The production F-22 design (internally designated Configuration 645) had also evolved to have notable differences from the YF-22, which was immature due to being frozen relatively soon after the complete redesign in the summer of 1987. While the overall layout was similar, the external geometry saw significant alterations; the wing's leading edge sweep angle was decreased from 48° to 42°, while the vertical stabilizers were shifted rearward and decreased in area by 20%. The radome shape was changed for better radar performance, the wingtips were clipped for antennas, and the dedicated airbrake was eliminated. To improve pilot visibility and aerodynamics, the canopy was moved forward 7 inches (18 cm) and the engine inlets moved rearward 14 inches (36 cm). The shapes of the fuselage, wing, and stabilator trailing edges were refined to improve aerodynamics, strength, and stealth characteristics. The internal structural design was refined and reinforced, with the production airframe designed for a service life of 8,000 hours. The revised shaping was validated with over 17,000 additional hours of wind tunnel testing and further RCS testing at Helendale, California and the USAF RATSCAT range before the first flight. Increasing weight during EMD due to demanding ballistic survivability requirements and added capabilities caused slight reductions in projected range and maneuver performance.
Aside from advances in air vehicle and propulsion technology, the F-22's avionics were unprecedented in complexity and scale for a combat aircraft, with the integration of multiple sensor systems and antennas, including electronic warfare, communication/navigation/identification (CNI), and software of 1.7 million lines of code written in Ada. Avionics often became the pacing factor of the whole program. In light of rapidly advancing computing and semiconductor technology, the avionics were to employ the Department of Defense's (DoD) PAVE PILLAR systems architecture and Very High Speed Integrated Circuit (VHSIC) program technology; the computing and processing requirements to achieve sensor fusion were equivalent to multiple contemporary Cray supercomputers. To enable early looks and troubleshooting during mission software development, the software was ground-tested in Boeing's Avionics Integration Laboratory (AIL) and flight-tested on a Boeing 757 modified with F-22 avionics and sensors, called the Flying Test Bed (FTB). Because much of the F-22's avionics design occurred in the 1990s, as the electronics industry was shifting from military to commercial applications as its predominant market, avionics upgrade efforts were initially difficult and protracted due to changing industry standards; for instance, C/C++ rather than Ada became the predominant programming languages.
The roughly equal division of work amongst the team largely carried through from Dem/Val to EMD, with prime contractor Lockheed responsible for the forward fuselage and control surfaces, General Dynamics for the center fuselage, and Boeing for aft fuselage and wings. Lockheed acquired General Dynamics' fighter portfolio at Fort Worth, Texas in 1993 and thus had the majority of the airframe manufacturing, and merged with Martin Marietta in 1995 to form Lockheed Martin. While Lockheed primarily performed Dem/Val work at its Skunk Works sites in Burbank and Palmdale, California, it shifted its program office and EMD work from Burbank to Marietta, Georgia, where it performed final assembly; Boeing manufactured airframe components, performed avionics integration and developed the training systems in Seattle, Washington. The EMD contract originally ordered seven single-seat F-22As and two twin-seat F-22Bs, although the latter was canceled in 1996 to reduce development costs and the orders were converted to single seaters. The first F-22A, an EMD aircraft with tail number 91-4001, was unveiled at Air Force Plant 6 in Dobbins Air Reserve Base in Marietta on 9 April 1997 where it was officially named "Raptor". The aircraft first flew on 7 September 1997, piloted by chief test pilot Alfred "Paul" Metz. The Raptor's designation was briefly changed to F/A-22 starting in September 2002, mimicking the Navy's F/A-18 Hornet and intended to highlight a planned ground-attack capability amid debate over the aircraft's role and relevance. The F-22 designation was reinstated in December 2005, when the aircraft entered service.
The F-22 flight test program consisted of flight sciences, developmental test (DT), and initial operational test and evaluation (IOT&E) by the 411th Flight Test Squadron (FLTS) at Edwards AFB, California, as well as follow-on OT&E and development of tactics and operational employment by the 422nd Test and Evaluation Squadron (TES) at Nellis AFB, Nevada. Nine EMD jets assigned to the 411th FLTS would participate in the test program under the Combined Test Force (CTF) at Edwards. The first two aircraft conducted envelope expansion testing, such as flying qualities, air vehicle performance, propulsion, and stores separation. The third aircraft, the first to have production-level internal structure, tested flight loads, flutter, and stores separation, while two non-flying F-22s were built for testing static loads and fatigue. Subsequent EMD aircraft and the Boeing 757 FTB tested avionics, environmental qualifications, and observables, with the first combat-capable Block 3.0 software flying in 2001. Air vehicle testing resulted in several structural design modifications and retrofits for earlier lots, including tail fin strengthening to resolve buffeting in certain conditions. Raptor 4001 was retired from flight testing in 2000 and subsequently sent to Wright-Patterson AFB for survivability testing, including live fire testing and battle damage repair training. Other retired EMD F-22s have been used as maintenance trainers.
Because the F-22 had been designed to defeat contemporary and projected Soviet fighters, the end of the Cold War and the dissolution of the Soviet Union in 1991 had major impacts on program funding; the DoD reduced its urgency for new weapon systems and the following years would see successive reductions in its budget. This resulted in the F-22's EMD being rescheduled and lengthened multiple times. Furthermore, the aircraft's sophistication and numerous technological innovations required extensive testing, which exacerbated the cost overruns and delays, especially from mission avionics. Some capabilities were also deferred to post-service upgrades, reducing the upfront cost but increasing total program cost. The program transitioned to full-rate production in March 2005 and completed EMD that December, after which the test force had flown 3,496 sorties for over 7,600 flight hours. As the F-22 was designed for upgrades throughout its lifecycle, the 411th FLTS and 422nd TES continued the DT/OT&E and tactics development of these upgrades. Derivatives such as the X-44 thrust vectoring research aircraft and the FB-22 medium-range regional bomber were proposed in the late 1990s and early 2000s, although these were eventually abandoned. In 2006, the F-22 development team won the Collier Trophy, American aviation's most prestigious award. Due to the aircraft's sophisticated capabilities, contractors have been targeted by cyberattacks and technology theft.
=== Production and procurement ===
The USAF originally envisioned ordering 750 ATFs at a total program cost of $44.3 billion and procurement cost of $26.2 billion in FY 1985 dollars, with production beginning in 1994 and service entry in the mid-to-late 1990s. The 1990 Major Aircraft Review (MAR) led by Secretary of Defense Dick Cheney reduced this to 648 aircraft beginning in 1996, with service entry in the early-to-mid 2000s. After the end of the Cold War, this was further curtailed to 442 in the 1993 Bottom-Up Review, while the USAF eventually set its requirement at 381 to support its Air Expeditionary Force structure, with the last deliveries in 2013. Throughout development and production, the program was continually scrutinized for its costs, and less expensive alternatives such as modernized F-15 or F-16 variants were proposed, even though the USAF considered the F-22 to provide the greatest capability increase against peer adversaries for the investment. However, funding instability had reduced the total to 339 by 1997, and production was nearly halted by Congress in 1999. Although funds were eventually restored, the planned number continued to decline due to delays and cost overruns during EMD, slipping to 277 by 2003. In 2004, with its focus on asymmetric counterinsurgency warfare in Iraq and Afghanistan, the DoD under Secretary Donald Rumsfeld further cut procurement to 183 production aircraft, despite the USAF's requirement for 381; funding for this number was secured through a multi-year procurement contract awarded in 2006, with aircraft distributed to seven combat squadrons, and total program cost was projected to be $62 billion (equivalent to approximately $90.2 billion in 2023). In 2008, the Congressional defense spending bill raised the number to 187.
F-22 production supported over 1,000 subcontractors and suppliers from 46 states and up to 95,000 jobs, and spanned 15 years at a peak rate of roughly two airplanes per month, about half the rate initially planned in the 1990 MAR; after the EMD aircraft contracts, the first production lot was awarded in September 2000. As production wound down in 2011, the total program cost was estimated at about $67.3 billion (about $360 million for each production aircraft delivered), with $32.4 billion spent on Research, Development, Test, and Evaluation (RDT&E) and $34.9 billion on procurement and military construction in then-year dollars. The incremental cost for an additional F-22 was estimated at $138 million (equivalent to approximately $191 million in 2023) in 2009.
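The per-aircraft figure is simply the program total averaged over the delivered production fleet; as a rough back-of-the-envelope check using the numbers above (an illustrative calculation, not an official cost metric):

```latex
\underbrace{\$32.4\,\text{B}}_{\text{RDT\&E}}
+ \underbrace{\$34.9\,\text{B}}_{\text{procurement, milcon}}
= \$67.3\,\text{B},
\qquad
\frac{\$67.3\,\text{B}}{187\ \text{production aircraft}}
\approx \$360\,\text{M per aircraft}
```

Note that this average spreads the full development bill across the fleet, which is why it far exceeds the $138 million incremental cost of one additional airframe.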
In total, 195 F-22s were built. The first two were EMD aircraft in the Block 1.0 configuration for initial flight testing and envelope expansion, while the third was a Block 2.0 aircraft built with the internal structure of production airframes, enabling it to test full flight loads. Six more EMD aircraft were built in the Block 10 configuration for development and upgrade testing, with the last two considered essentially production-quality jets. Production for operational squadrons consisted of 74 Block 10/20 training aircraft and 112 Block 30/35 combat aircraft, for a total of 186 (or 187 when accounting for Production Representative Test Vehicles and certain EMD jets); one of the Block 30 aircraft is dedicated to flight sciences at Edwards AFB. By 2020, Block 20 aircraft from Lot 3 onward had been upgraded to Block 30 standards under the Common Configuration Plan, increasing the Block 30/35 fleet to 149 aircraft while 37 remained in the Block 20 configuration for training.
=== Ban on exports ===
In order to prevent the inadvertent disclosure of the aircraft's stealth technology and classified capabilities to U.S. adversaries, annual DoD appropriations acts since FY1998 have included a provision prohibiting the use of funds made available in each act to approve or license the sale of the F-22 to any foreign government. Customers for U.S. fighters are acquiring earlier designs such as the F-15 Eagle and F-16 Fighting Falcon or the newer F-35 Lightning II, which contains technology from the F-22 but was designed to be cheaper, more flexible, and available for export. In September 2006, Congress upheld the ban on foreign F-22 sales. Despite the ban, the 2010 defense authorization bill included provisions requiring the DoD to report on the costs and feasibility for an F-22 export variant, and another report on the effect of export sales on the U.S. aerospace industry.
Some Australian defense officials and politicians have expressed interest in procuring the F-22; in 2008, the Chief of the Defence Force, Air Chief Marshal Angus Houston, stated that the aircraft was being considered by the Royal Australian Air Force (RAAF) as a potential supplement to the F-35. Some defense commentators have even advocated for the purchase in lieu of the planned F-35s, citing the F-22's known capabilities and F-35's delays and developmental uncertainties. However, considerations for the F-22 were later dropped and the F/A-18E/F Super Hornet would serve as the RAAF's interim aircraft prior to the F-35's service entry.
The Japanese government also showed interest in the F-22. The Japan Air Self-Defense Force (JASDF) would reportedly require fewer fighters for its mission if it obtained the F-22, thus reducing engineering and staffing costs. With the end of F-22 production, Japan chose the F-35 in December 2011. At one point the Israeli Air Force had hoped to purchase up to 50 F-22s. In November 2003, however, Israeli representatives announced that after years of analysis and discussions with Lockheed Martin and the DoD, they had concluded that Israel could not afford the aircraft. Israel eventually purchased the F-35.
=== Production termination ===
Throughout the 2000s when the U.S. was primarily fighting counterinsurgency wars in Iraq and Afghanistan, the USAF's requirement for 381 F-22s was questioned over rising costs, initial reliability and availability problems, limited multirole versatility, and a lack of relevant adversaries for air combat missions. In 2006, Comptroller General of the United States David Walker found that "the DoD has not demonstrated the need" for more investment in the F-22, and further opposition was expressed by Bush Administration Secretary of Defense Rumsfeld and his successor Robert Gates, Deputy Secretary of Defense Gordon R. England, and Chairman of U.S. Senate Armed Services Committee (SASC) Senators John Warner and John McCain. Under Rumsfeld, procurement was severely cut to 183 aircraft. The F-22 lost influential supporters in 2008 after the forced resignations of Secretary of the Air Force Michael Wynne and the Chief of Staff of the Air Force General T. Michael Moseley. In November 2008, Gates stated that the F-22 lacked relevance in asymmetric post-Cold War conflicts, and in April 2009, under the Obama Administration, he called for production to end in FY 2011 after completing 187 F-22s.
The loss of staunch F-22 advocates in the upper DoD echelons resulted in the erosion of its political support. In July 2008, General James Cartwright, Vice Chairman of the Joint Chiefs of Staff, stated to the SASC his reasons for supporting the termination of F-22 production, including shifting resources to the multi-service F-35 and the electronic warfare EA-18G Growler. Although Russian and Chinese fighter developments fueled concern for the USAF, Gates dismissed this and in 2010 set the F-22 requirement at 187 aircraft by lowering the number of major regional conflicts to prepare for from two to one, despite an effort by Wynne's and Moseley's successors, Michael Donley and General Norton Schwartz, to raise the number to 243; according to Schwartz, he and Donley finally relented in order to convince Gates to preserve the Long Range Strike Bomber program. After President Barack Obama threatened to veto further production at Gates' urging, both the Senate and House agreed to abide by the 187 cap in July 2009. Gates highlighted the F-35's role in the decision, and believed that the U.S. would maintain its stealth fighter numbers advantage through 2025 even with F-35 delays. In December 2011, the 195th and final F-22 was completed, out of 8 test and 187 production aircraft built; the jet was delivered on 2 May 2012.
After production ended, F-22 tooling and associated documentation were retained and mothballed at the Sierra Army Depot to support repairs and maintenance throughout the fleet's life cycle, as well as the possibility of a production restart or a Service Life Extension Program (SLEP). The Marietta plant space was repurposed to support the C-130J and F-35, while engineering work for sustainment and upgrades continued at Fort Worth, Texas and Palmdale, California. The curtailed production forced the USAF to extend the service of 179 F-15C/Ds until 2026, well beyond their planned retirement, and replace them with new-build F-15EX aircraft, which had an active export production line that minimized non-recurring start-up costs, to maintain adequate air superiority fighter numbers.
In April 2016, Congress directed the USAF to conduct a cost study and assessment associated with resuming production of the F-22, citing advancing threats from Russia and China. On 9 June 2017, the USAF submitted its report stating it had no plans to restart the F-22 production line due to cost-prohibitive economic and logistical challenges; it estimated that procuring 194 additional F-22s would cost approximately $50 billion, or $206–216 million per aircraft, including approximately $9.9 billion for non-recurring start-up costs and $40.4 billion for acquisition, with the first delivery in the mid-to-late 2020s. The long gap since the end of production would have meant hiring new workers, sourcing replacement vendors, and finding new plant space, contributing to the high start-up costs and lead times. The USAF believed that the funding would be better invested in its next-generation Air Superiority 2030 effort, which evolved into the Next Generation Air Dominance (NGAD) program.
=== Modernization and upgrades ===
The F-22 and its subsystems were designed to be upgraded over the aircraft's life cycle via numbered Increments and Operational Flight Program (OFP) updates in anticipation of technological advances and evolving threats, although this initially proved difficult and costly due to the highly integrated avionics systems architecture. Amid debates over the airplane's relevance in asymmetric counterinsurgency warfare, the first upgrades primarily focused on ground attack, or strike, capabilities. Joint Direct Attack Munitions (JDAM) employment was added with Increment 2 in 2005, and the Small Diameter Bomb (SDB) was integrated with Increment 3.1 in 2011; the improved AN/APG-77(V)1 radar, which incorporates air-to-ground modes, was certified in March 2007 and fitted on airframes from Lot 5 onward. To address oxygen deprivation issues, F-22s were fitted with an automatic backup oxygen system (ABOS) and a modified life support system starting in 2012.
In contrast to prior upgrades, Increment 3.2 emphasized air combat capabilities, with updates to electronic warfare, CNI (including Link 16 receive), and geolocation as well as AIM-9X and AIM-120D integration. Fleet releases of the two-part process began in 2013 and 2019 respectively. Concurrently, OFP updates added the Automatic Ground Collision Avoidance System, cryptographic enhancements, and improved avionics stability, among others. A MIDS-JTRS terminal, which includes Mode 5 IFF and Link 16 transmit/receive capability, was installed starting in 2021. To address obsolescence and modernization difficulties, the F-22's mission computers were upgraded in 2021 with military-hardened commercial off-the-shelf (COTS) open mission system (OMS) processor modules built on a modular open systems architecture (MOSA). An agile software development process, in conjunction with an orchestration system, was implemented to enable faster upgrades from additional vendors, and software updates shifted away from Increments developed using the waterfall model to numbered annual releases.
Additional upgrades being tested include new sensors and antennas, integration of new weapons including the AIM-260 JATM, and reliability improvements such as more durable stealth coatings; the dedicated infrared search and track (IRST), originally deleted during Dem/Val, is one of the sensors being added. Other developments include all-aspect IRST functionality for the Missile Launch Detector (MLD), manned-unmanned teaming (MUM-T) capability with uncrewed collaborative combat aircraft (CCA) or "loyal wingmen", and integration of the Gentex/Raytheon (later Thales USA) Scorpion helmet-mounted display (HMD). To preserve the aircraft's stealth while enabling additional payload and fuel capacity, stealthy external carriage has been investigated since the early 2000s, with a low-drag, low-observable external tank and pylon under development to increase stealthy combat radius. The F-22 has also been used as a platform to test and apply technologies from the NGAD program.
Not all proposed upgrades have been implemented. The planned Multifunction Advanced Data Link (MADL) integration was cut due to development delays and lack of proliferation. While Block 20 aircraft from Lot 3 onwards have been upgraded to Block 30/35 under the Common Configuration Plan, Lockheed Martin in 2017 had also proposed upgrading all remaining Block 20 training aircraft to Block 30/35 as well to increase numbers available for combat; this was not pursued due to other budget priorities.
Aside from modernizations, the F-22's structural design and construction were improved over the course of the production run; for instance, aircraft from Lot 3 onwards had improved stabilators built by Vought. The fleet underwent a $350 million "structures repair/retrofit program" (SRP) to resolve problems identified during testing as well as to address improper titanium heat treatment in parts of early batches. By January 2021, all aircraft had gone through the SRP to ensure full service lives for the entire fleet. The F-22 has also been used to test and qualify alternative fuels, including a synthetic jet fuel consisting of a 50/50 mix of JP-8 and a Fischer–Tropsch-process, natural gas-based fuel in August 2008, and a 50% mixture of biofuel derived from camelina in March 2011.
== Design ==
=== Overview ===
The F-22 Raptor is a fifth-generation air superiority fighter that is considered fourth generation in stealth aircraft technology by the USAF. It is the first operational aircraft to combine supercruise, supermaneuverability, stealth, and integrated avionics (or sensor fusion) in a single weapons platform to enable it to survive and conduct missions, primarily offensive and defensive counter-air operations, in highly contested environments.
The F-22's shape combines stealth and aerodynamic performance. Planform and panel edges are aligned at common angular aspects, and the surfaces, also aligned accordingly, have continuous curvature to minimize the aircraft's radar cross-section. Its clipped diamond-like delta wings have a leading edge swept at 42°, a trailing edge swept at −17°, a slight anhedral, and a conical camber to reduce supersonic wave drag. The shoulder-mounted wings are smoothly blended into the fuselage with four empennage surfaces and leading edge root extensions running to the caret inlets' upper edges, where the forebody chines also meet. Flight control surfaces include leading-edge flaps, flaperons, ailerons, rudders on the canted vertical stabilizers, and all-moving horizontal tails (stabilators); for air braking, the ailerons deflect up, flaperons down, and rudders outwards to increase drag. Owing to the focus on supersonic performance, area rule is applied extensively to the airplane's shape, and nearly all of the fuselage volume lies ahead of the wing's trailing edge to reduce drag at supersonic speeds, with the stabilators pivoting from tail booms extending aft of the engine nozzles. Weapons are carried internally in the fuselage for stealth. The jet has retractable tricycle landing gear and an emergency tailhook. A fire suppression system and a fuel tank inerting system are installed for survivability.
The aircraft's dual Pratt & Whitney F119 augmented turbofan engines are closely spaced and incorporate rectangular two-dimensional thrust vectoring nozzles with a range of ±20 degrees in the pitch-axis; the nozzles are fully integrated into the F-22's flight controls and vehicle management system. Each engine has dual-redundant Hamilton Standard full-authority digital engine control (FADEC) and maximum thrust in the 35,000 lbf (156 kN) class. The F-22's thrust-to-weight ratio at typical combat weight is nearly at unity in maximum military power and 1.25 in full afterburner. The fixed shoulder-mounted caret inlets are offset from the forward fuselage to divert the turbulent boundary layer and generate oblique shocks with the upper inboard corners to ensure good total pressure recovery and efficient supersonic flow compression. Maximum speed without external stores is approximately Mach 1.8 in supercruise at military/intermediate power and greater than Mach 2 with afterburners. With 18,000 lb (8,165 kg) of internal fuel and an additional 8,000 lb (3,629 kg) in two 600-gallon external tanks, the jet has a ferry range of over 1,600 nmi (1,840 mi; 2,960 km). The aircraft has a refueling boom receptacle centered on its spine and an auxiliary power unit embedded in the left wing root.
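The thrust-to-weight figures above are mutually consistent; as a rough illustrative check (using only the quoted numbers, with the implied combat weight as a derived estimate rather than an official specification), the afterburning ratio of 1.25 with two engines in the 35,000 lbf class implies a typical combat weight around 56,000 lb:

```latex
T_{\max} \approx 2 \times 35{,}000\ \text{lbf} = 70{,}000\ \text{lbf},
\qquad
W_{\text{combat}} \approx \frac{T_{\max}}{T/W} = \frac{70{,}000\ \text{lbf}}{1.25} = 56{,}000\ \text{lb}
```

The near-unity ratio at maximum military (dry) power then implies a combined dry thrust of roughly the same 56,000 lbf, consistent with supercruise without afterburner.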
The F-22's high cruise speed and operating altitude compared to prior fighters improve the effectiveness of its sensors and weapon systems, and increase survivability against ground defenses such as surface-to-air missiles. Its ability to supercruise, or sustain supersonic flight without using afterburners, allows it to intercept targets that afterburner-dependent aircraft would lack the fuel to reach. The use of internal weapons bays permits the aircraft to maintain comparatively higher performance than most other combat-configured fighters due to the lack of parasitic drag from external stores. The F-22's thrust and aerodynamics enable regular combat speeds of Mach 1.5 at 50,000 feet (15,000 m), providing 50% greater employment range for air-to-air missiles and twice the effective range for JDAMs compared with prior platforms. Its structure contains a significant amount of high-strength materials to withstand the stress and heat of sustained supersonic flight: titanium alloys and bismaleimide/epoxy composites comprise 42% and 24% of the structural weight, respectively. The materials and multiple-load-path structural design also provide good ballistic survivability.
The airplane's aerodynamics, relaxed stability, and powerful thrust-vectoring engines give it excellent maneuverability and energy potential across its flight envelope, capable of 9-g maneuvers at takeoff gross weight with full internal fuel. Its large control surfaces, vortex-generating chines and LERX, and vectoring nozzles provide excellent high-alpha (angle of attack) characteristics; the aircraft is capable of flying at a trimmed alpha of over 60° while maintaining roll control and performing maneuvers such as the Herbst maneuver (J-turn) and Pugachev's Cobra. Vortex impingement on the vertical tail fins did cause more buffeting than initially anticipated, resulting in the strengthening of the fin structure by changing the rear spar from composite to titanium. The computerized triplex-redundant fly-by-wire control system and FADEC make the aircraft highly departure-resistant and controllable, thus giving the pilot carefree handling.
=== Stealth ===
The F-22 was designed to be highly difficult to detect and track by radar, with radio waves reflected, scattered, or diffracted away from the emitter source towards specific sectors, or absorbed and attenuated. Measures to reduce RCS include airframe shaping such as alignment of edges and continuous curvature of surfaces, internal carriage of weapons, fixed-geometry serpentine inlets and curved vanes that prevent line-of-sight of the engine fan faces and turbines from any exterior view, use of radar-absorbent material (RAM), and attention to detail such as hinges and pilot helmets that could provide a radar return. The F-22 was also designed to have decreased radio frequency emissions, infrared signature and acoustic signature as well as reduced visibility to the naked eye. The aircraft's rectangular thrust-vectoring nozzles flatten the exhaust plume and facilitate its mixing with ambient air through shed vortices, which reduces infrared emissions to mitigate the threat of infrared homing ("heat seeking") surface-to-air or air-to-air missiles. Additional measures to reduce the infrared signature include special topcoat and active cooling to manage the heat buildup from supersonic flight.
Compared to previous stealth designs, the F-22 is less reliant on RAM, which is maintenance-intensive and susceptible to adverse weather conditions; the aircraft can undergo repairs on the flight line or in a normal hangar without climate control. The F-22 incorporates a Signature Assessment System which delivers warnings when the radar signature is degraded and necessitates repair. While the F-22's exact RCS is classified, in 2009 Lockheed Martin released information indicating that from certain angles the airplane has an RCS of 0.0001 m2 or −40 dBsm – equivalent to the radar reflection of a "steel marble"; the aircraft can mount a Luneburg lens reflector to mask its RCS. For missions where stealth is required, the mission capable rate is 62–70%. Beginning in 2021, the F-22 has been seen testing a new chrome-like surface coating, speculated to help reduce its detectability by infrared tracking systems.
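The two forms of the released figure are equivalent: dBsm expresses RCS in decibels relative to a 1 m² reference, so

```latex
\sigma_{\text{dBsm}} = 10\log_{10}\!\left(\frac{\sigma}{1\ \text{m}^2}\right)
= 10\log_{10}(0.0001) = -40\ \text{dBsm}
```

i.e., the quoted −40 dBsm and 0.0001 m² describe the same radar return.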
The effectiveness of these stealth characteristics is difficult to gauge. The RCS value is a restrictive measurement of the aircraft's frontal or side area from the perspective of a static radar; when an aircraft maneuvers, it exposes a completely different set of angles and surface area, potentially increasing radar observability. Furthermore, the F-22's stealth contouring and radar-absorbent materials are chiefly effective against high-frequency radars, usually found on other aircraft. The effects of Rayleigh scattering and resonance mean that low-frequency radars, such as weather radars and early-warning radars, are more likely to detect the F-22 due to its physical size; however, such radars are also conspicuous, susceptible to clutter, and imprecise. Additionally, while faint or fleeting radar contacts can make defenders aware that a stealth aircraft is present, reliably vectoring an interception to attack the aircraft is much more challenging.
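The practical effect of a small RCS can be sketched with the standard monostatic radar range equation, in which maximum detection range scales with the fourth root of the target's RCS (a general radar result, not an F-22-specific figure):

```latex
R_{\max} \propto \sigma^{1/4}
\quad\Longrightarrow\quad
\frac{R_{\max}(\sigma = 10^{-4}\ \text{m}^2)}{R_{\max}(\sigma = 1\ \text{m}^2)}
= \left(10^{-4}\right)^{1/4} = 0.1
```

That is, a 10,000-fold reduction in RCS shrinks a given radar's detection range only about tenfold, which is why even heavily signature-reduced aircraft remain detectable at short range and why faint contacts can still occur.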
=== Avionics ===
The aircraft has an integrated avionics system in which, through sensor fusion, data from all onboard sensor systems as well as off-board inputs are filtered and processed into a combined tactical picture, enhancing the pilot's situational awareness and reducing workload. Key mission systems include the Sanders/General Electric AN/ALR-94 electronic warfare system, the Martin Marietta AN/AAR-56 infrared and ultraviolet Missile Launch Detector (MLD), the Westinghouse/Texas Instruments AN/APG-77 active electronically scanned array (AESA) radar, the TRW Communication/Navigation/Identification (CNI) suite, and a Raytheon advanced infrared search and track (IRST) under test.
The APG-77 radar has a low-observable, active-aperture, electronically scanned antenna with multiple target track-while-scan in all weather conditions; the antenna is tilted back for stealth. Its emissions can be focused to overload enemy sensors as an electronic attack capability. The radar changes frequencies more than 1,000 times per second to lower interception probability and has an estimated range of 125–150 mi (201–241 km) against an 11 sq ft (1 m2) target and 250 mi (400 km) or more in narrow beams. The upgraded APG-77(V)1 provides air-to-ground functionality through synthetic aperture radar (SAR) mapping, ground moving target indication/track (GMTI/GMTT), and strike modes. The ALR-94 electronic warfare system, among the most technically complex equipment on the F-22, integrates more than 30 antennas blended into the wings and fuselage for all-round radar warning receiver (RWR) coverage and threat geolocation. It can be used as a passive detector capable of searching for targets at ranges (250+ nmi) exceeding the radar's, and can provide enough information for a target lock and cue radar emissions to a narrow beam (down to 2° by 2° in azimuth and elevation). Depending on the detected threat, the defensive systems can prompt the pilot to release countermeasures such as flares or chaff. The MLD uses six sensors to provide full spherical infrared coverage, while the advanced IRST, housed in a stealthy wing pod, is a narrow field-of-view sensor for long-range passive identification and targeting. To ensure stealth in the radio frequency spectrum, CNI emissions are strictly controlled and confined to specific sectors, with tactical communication between F-22s performed using the directional Inter/Intra-Flight Data Link (IFDL); the integrated CNI system, which incorporates a MIDS-JTRS terminal, also manages TACAN, IFF (including Mode 5), and communication through various methods such as HAVE QUICK/SATURN and SINCGARS.
The aircraft was also upgraded with an automatic ground collision avoidance system (GCAS).
Information from radar, CNI, and other sensors is processed by two Hughes Common Integrated Processor (CIP) mission computers, each capable of processing up to 10.5 billion instructions per second. The F-22's baseline software has some 1.7 million lines of code, the majority involving mission systems such as processing radar data. The highly integrated nature of the avionics architecture, as well as the use of the programming language Ada, has made the development and testing of upgrades challenging. To enable more rapid upgrades, the CIPs were upgraded with Curtiss-Wright open mission systems (OMS) processor modules as well as a modular open systems architecture called the Open Systems Enclave (OSE) orchestration platform, allowing the avionics suite to interface with containerized software from third-party vendors.
The F-22's ability to operate close to the battlefield gives the aircraft threat detection and identification capability comparable to that of the RC-135 Rivet Joint, and the ability to function as a "mini-AWACS", though its radar is less powerful than those of dedicated platforms. This allows the F-22 to rapidly designate targets for allies and coordinate friendly aircraft. Although communication with other aircraft types was initially limited to voice, upgrades have enabled data to be transferred through a Battlefield Airborne Communications Node (BACN) or via JTIDS/Link 16 traffic through MIDS-JTRS. The IEEE 1394B bus developed for the F-22 was derived from the commercial IEEE 1394 "FireWire" bus system. In 2007, the F-22's radar was tested as a wireless data transceiver, transmitting data at 548 megabits per second and receiving at gigabit speed, far faster than the Link 16 system. The radio frequency receivers of the electronic support measures (ESM) system give the aircraft the ability to perform intelligence, surveillance, and reconnaissance (ISR) tasks.
=== Cockpit ===
The F-22 has a glass cockpit with all-digital flight instruments. The monochrome head-up display offers a wide field of view and serves as a primary flight instrument; information is also displayed upon six color liquid-crystal display (LCD) panels. The primary flight controls are a force-sensitive side-stick controller and a pair of throttles. The USAF initially wanted to implement direct voice input (DVI) controls, but this was judged to be too technically risky and was abandoned. The canopy is approximately 140 inches long, 45 inches wide, and 27 inches tall (355 cm × 115 cm × 69 cm) and weighs 360 pounds. The canopy was redesigned after the original design lasted an average of 331 hours instead of the required 800 hours. Although the F-22 was originally intended to have a helmet-mounted display (HMD), this was deferred during development to save costs; the aircraft is currently integrating the Scorpion HMD.
The F-22 has integrated radio functionality, with signal processing virtualized in software rather than implemented as separate hardware modules. The integrated control panel (ICP) is a keypad system for entering communications, navigation, and autopilot data. Two 3 in × 4 in (7.6 cm × 10.2 cm) up-front displays located around the ICP are used to display integrated caution advisory/warning (ICAW) data and CNI data, and also serve as the stand-by flight instrumentation group and fuel quantity indicator for redundancy. The stand-by flight group displays an artificial horizon for basic flight in instrument meteorological conditions. The 8 in × 8 in (20 cm × 20 cm) primary multi-function display (PMFD) is located under the ICP, and is used for navigation and situation assessment. Three 6.25 in × 6.25 in (15.9 cm × 15.9 cm) secondary multi-function displays are located around the PMFD for tactical information and stores management.
The ejection seat is a version of the ACES II commonly used in USAF aircraft, with a center-mounted ejection control. The F-22 has a complex life support system, which includes the onboard oxygen generation system (OBOGS), protective pilot garments, and a breathing regulator/anti-g (BRAG) valve controlling flow and pressure to the pilot's mask and garments. The pilot garments were developed under the Advanced Technology Anti-G Suit (ATAGS) project and protect against chemical/biological hazards and cold-water immersion, counter g-forces and low pressure at high altitudes, and provide thermal relief. Following a series of hypoxia-related issues, the life support system was revised to include an automatic backup oxygen system and a new flight vest valve. For combat environments, the ejection seat's survival kit includes a modified M4 carbine designated the GAU-5/A.
=== Armament ===
The F-22 has three internal weapons bays: a large main bay on the bottom of the fuselage, and two smaller bays on the sides of the fuselage, aft of the engine inlets; a small bay for countermeasures such as flares is located behind each side bay. The main bay is split along the centerline and can accommodate six LAU-142/A launchers for beyond-visual-range (BVR) missiles and each side bay has an LAU-141/A launcher for short-range missiles. The primary air-to-air missiles are the AIM-120 AMRAAM and the AIM-9 Sidewinder, with planned integration of the AIM-260 JATM. Missile launches require the bay doors to be open for less than a second, during which pneumatic or hydraulic arms push missiles clear of the aircraft; this is to reduce vulnerability to detection and to deploy missiles during high-speed flight. An internally mounted M61A2 Vulcan 20 mm rotary cannon is embedded in the airplane's right wing root with the muzzle covered by a retractable door, which remains closed when the cannon is not firing in order to minimize the negative effect of the exposed muzzle on the aircraft's radar signature. The projected path of the cannon's fire is displayed on the pilot's head-up display.
Although designed for air-to-air missiles, the main bay can replace four launchers with two bomb racks that can each carry one 1,000 lb (450 kg) or four 250 lb (110 kg) bombs for a total of 2,000 pounds (910 kg) of air-to-surface ordnance. In 2024, Lockheed Martin disclosed its proposed Mako hypersonic missile, a 1,300 lb (590 kg) weapon that can be carried internally in the F-22. While capable of carrying weapons with GPS guidance such as JDAMs and SDBs, the F-22 cannot self-designate laser-guided weapons.
While the F-22 typically carries weapons internally, the wings include four hardpoints, each rated to handle 5,000 lb (2,300 kg). Each hardpoint can accommodate a pylon that can carry a detachable 600-gallon (2,270 L) external fuel tank or a launcher holding two air-to-air missiles; the two inboard hardpoints are "plumbed" for external fuel tanks. The two outboard hardpoints have since been dedicated to a pair of stealthy pods housing the IRST and mission systems. The aircraft can jettison external tanks and their pylon attachments to restore its low observable characteristics and kinematic performance.
=== Maintenance ===
Each F-22 requires a three-week packaged maintenance plan (PMP) every 300 flight hours. Its stealth coatings were designed to be more robust and weather-resistant than those of earlier stealth aircraft, yet early coatings failed against rain and moisture when F-22s were initially posted to Guam in 2009. Stealth measures account for almost one third of maintenance, with coatings being particularly demanding. F-22 depot maintenance is performed at Ogden Air Logistics Complex at Hill AFB, Utah; considerable care is taken during maintenance due to the small fleet size and limited attrition reserve.
F-22s were available for missions 63% of the time on average in 2015, up from 40% when the type was introduced in 2005. Maintenance hours per flight hour also improved from 30 early in service to 10.5 by 2009, better than the requirement of 12; man-hours per flight hour were 43 in 2014. When introduced, the F-22 had a Mean Time Between Maintenance (MTBM) of 1.7 hours, short of the required 3.0; this rose to 3.2 hours in 2012. By fiscal year 2015, the cost per flight hour was $59,116, while the user reimbursement rate was approximately US$35,000 (~$41,145 in 2023) per flight hour in 2019.
== Operational history ==
=== Introduction into service ===
The F-22 underwent extensive testing before its service introduction. While the first production aircraft was delivered to Edwards AFB in October 2002 for IOT&E and the first jet for the 422nd TES at Nellis AFB arrived in January 2003, IOT&E was continually pushed back from its planned start in mid-2003, with mission avionics stability being particularly challenging. Following a preliminary assessment, called OT&E Phase 1, formal IOT&E began in April 2004 and was completed in December of that year. This milestone marked the successful demonstration of the jet's air-to-air mission capability, although the jet was more maintenance intensive than expected. A Follow-On OT&E (FOT&E) in 2005 cleared the F-22's air-to-ground mission capability.
The first combat ready F-22 of the 1st Fighter Wing arrived at Langley AFB, Virginia in January 2005 and that December, the USAF announced that the aircraft had achieved Initial Operational Capability (IOC) with the 94th Fighter Squadron. The unit subsequently participated in Exercise Northern Edge 06 in Alaska in June 2006 and Exercise Red Flag 07–2 at Nellis AFB in February 2007, where it demonstrated the F-22's greatly increased air combat capabilities when flying against Red Force Aggressor F-15s and F-16s with a simulated kill ratio of 108–0. These large force exercises also further refined the F-22's operational tactics and employment.
The F-22 achieved Full Operational Capability (FOC) in December 2007, when General John Corley of Air Combat Command (ACC) officially declared the F-22s of the integrated active duty 1st Fighter Wing and Virginia Air National Guard 192nd Fighter Wing fully operational. This was followed by an Operational Readiness Inspection (ORI) of the integrated wing in April 2008, in which it was rated "excellent" in all categories, with a simulated kill-ratio of 221–0. The fielding of the F-22 with its precision strike capability also contributed to the retirement of the F-117 from operational service in 2008, with the 49th Fighter Wing operating the F-22 for a brief period prior to a series of fleet consolidations to reduce long term operational costs; further consolidations to improve availability and pilot training were recommended by the Government Accountability Office in 2018.
=== Training ===
The 43rd Fighter Squadron was reactivated in 2002 as the F-22 Formal Training Unit (FTU) for the type's basic course at Tyndall AFB and the first aircraft for pilot training was delivered in September 2003. Following severe damage to the installation in the wake of Hurricane Michael in 2018, the squadron and its aircraft were relocated to nearby Eglin AFB; although it was initially feared that several jets were lost due to storm damage, all were later repaired and flown out. The FTU and its aircraft were reassigned to the 71st Fighter Squadron at Langley AFB in 2023.
As of 2014, B-Course students require 38 sorties to graduate (previously 43 sorties). Track 1 course pilots, those retraining from other aircraft, also saw a reduction in the number of sorties needed to graduate, from 19 to 12. F-22 students are first trained on the T-38 Talon trainer aircraft. Additional pilot training takes place on the F-16 because the aging T-38 is not rated to sustain higher G-forces and lacks modern avionics. Due to the lack of a modern trainer stand-in that can accurately emulate the F-22, the Air Force often uses F-22s to supplement training, which is costly as the F-22 costs almost 10 times more than the T-38 per flight hour. The upcoming T-7 Red Hawk features modern avionics that better approximate those of the F-22 and F-35; the T-7 is scheduled to enter initial operating capability in 2027, several years behind schedule. In 2014 the Air Force stood up the 2nd Fighter Training Squadron at Tyndall AFB, equipped with T-38s to serve as adversary aircraft and reduce adversary training flights on the F-22s. To reduce operating costs and prolong the F-22's service life, some pilot training sorties are performed using flight simulators. The advanced F-22 weapons instructor course at USAF Weapons School is conducted by the 433rd Weapons Squadron at Nellis AFB.
=== Initial operational problems ===
During the initial years of service, F-22 pilots experienced symptoms resulting from oxygen system issues, including loss of consciousness, memory loss, emotional lability, and neurological changes, as well as lingering respiratory problems and a chronic cough; the issues resulted in a fatal mishap in 2010, a four-month grounding in 2011, and subsequent altitude and distance flight restrictions. In August 2012, the DoD found that the BRAG valve, which inflated the pilot's vest during high-g maneuvers, was defective and restricted breathing, and that the OBOGS unexpectedly fluctuated oxygen levels at high g. A Raptor Aeromedical Working Group had recommended changes in 2005 regarding oxygen supply that were unfunded but received further consideration in 2012. The F-22 CTF and 412th Aerospace Medicine Squadron eventually determined breathing restrictions to be the root cause; coughing symptoms were attributed to acceleration atelectasis from high-g exposure and the OBOGS delivering excessive oxygen concentration. The presence of toxins and particles in some ground crew was deemed unrelated. Modifications to the life support and oxygen systems, including the installation of an automatic backup, allowed altitude and distance restrictions to be lifted in April 2013.
=== Operational service ===
Following IOC and large-scale exercises, the F-22 flew its first homeland defense mission in January 2007 under Operation Noble Eagle. In November 2007, F-22s of 90th Fighter Squadron at Elmendorf AFB, Alaska, performed their first North American Aerospace Defense Command (NORAD) interception of two Russian Tu-95MS bombers. Since then, F-22s have also escorted probing Tu-160 bombers.
The F-22 was first deployed overseas in February 2007 with the 27th Fighter Squadron to Kadena Air Base in Okinawa, Japan. This first overseas deployment was initially marred by problems when six F-22s flying from Hickam AFB, Hawaii, experienced multiple software-related system failures while crossing the International Date Line (180th meridian of longitude). The aircraft returned to Hawaii by following tanker aircraft. Within 48 hours, the error was resolved and the journey resumed. Kadena would be a frequent rotation for F-22 units; they have also been involved in training exercises in South Korea, Malaysia, and the Philippines.
Defense Secretary Gates initially refused to deploy F-22s to the Middle East in 2007; the type made its first deployment in the region at Al Dhafra Air Base in the UAE in 2009. Beginning in April 2012, F-22s have rotated into Al Dhafra, less than 200 miles from Iran. In March 2013, the USAF announced that an F-22 had intercepted an Iranian F-4 Phantom II that approached within 16 miles of an MQ-1 Predator flying off the Iranian coastline.
On 22 September 2014, F-22s performed the type's first combat sorties by conducting some of the opening strikes of Operation Inherent Resolve, the American-led intervention in Syria; aircraft dropped 1,000-pound GPS-guided bombs on Islamic State targets near Tishrin Dam. Between September 2014 and July 2015, F-22s flew 204 sorties over Syria, dropping 270 bombs at some 60 locations. Throughout their deployment, F-22s conducted close air support (CAS) and also deterred Syrian, Iranian, and Russian aircraft from attacking U.S.-backed Kurdish forces and disrupting U.S. operations in the region. F-22s also participated in the U.S. strikes that defeated pro-Assad and Russian Wagner Group paramilitary forces near Khasham in eastern Syria on 7 February 2018. These strikes notwithstanding, the F-22's main role in the operation was conducting intelligence, surveillance and reconnaissance. The aircraft also performed missions in other regions of the Middle East; in November 2017, F-22s operating alongside B-52s bombed opium production and storage facilities in Taliban-controlled regions of Afghanistan.
To increase deployment responsiveness and reduce logistical footprint in a peer or near-peer conflict, the USAF developed a deployment concept called Rapid Raptor which involves two to four F-22s and one C-17 for logistical support, first proposed in 2008 by two F-22 pilots. The goal was for the type to be able to set up and engage in combat within 24 hours in smaller and more austere environments that would enable more dispersed and survivable disposition of forces. This concept was tested at Wake Island in 2013 and Guam in late 2014. Four F-22s were deployed to Spangdahlem Air Base in Germany, Łask Air Base in Poland, and Ämari Air Base in Estonia in August and September 2015 to further test the concept and train with NATO allies in response to the Russian annexation of Crimea in 2014. The USAF would build on the principles of Rapid Raptor and eventually integrate it into its new operational concept called Agile Combat Employment, which shifts towards distributed operations during peer conflicts; for instance, detachments of F-22s have operated from austere airfields on Tinian and Iwo Jima during exercises.
On 4 February 2023, an F-22 of the 1st Fighter Wing shot down a suspected Chinese spy balloon within visual range off the coast of South Carolina at an altitude of 60,000 to 65,000 feet (20,000 m), marking the F-22's first air-to-air kill. The wreckage landed approximately 6 miles offshore and was subsequently secured by ships of the U.S. Navy and U.S. Coast Guard. F-22s shot down additional high-altitude objects near the coast of Alaska on 10 February and over Yukon on 11 February.
The USAF expects to begin retiring the F-22 in the 2030s as it is replaced by the Next Generation Air Dominance (NGAD) sixth-generation crewed fighter, the Boeing F-47. In May 2021, Air Force Chief of Staff Charles Q. Brown Jr. said that he envisioned a reduction in the future number of fighter fleets to "four plus one": the F-22 followed by NGAD, the F-35A, the F-15E followed by the F-15EX, the F-16 followed by "MR-X", and the A-10; the A-10 was later dropped from the plans due to that aircraft's accelerated retirement. In 2022 the Air Force requested that it be allowed to divest all but three of its Block 20 F-22s at Tyndall AFB. Congress denied the request to divest its 33 non-combat-coded Block 20 aircraft and passed language prohibiting the divestment through FY2026. While the Block 30/35 F-22 remains one of the USAF's top priorities and will be continually updated, the service believes the Block 20 aircraft are obsolescent and unsuitable even for training F-22 pilots, and that upgrading them to Block 30/35 standards would be cost-prohibitive at $3.5 billion.
== Variants ==
F-22A
Single-seat production version; designated F/A-22A in the early 2000s before reverting to F-22A in 2005. 195 built, consisting of 8 test and 187 operational aircraft.
F-22B
Planned two-seat version with the same combat capabilities as the single-seat version; cancelled in 1996 to save development costs, with test aircraft orders converted to F-22As.
Naval F-22 variant
Never formally designated; a planned carrier-borne variant/derivative for the U.S. Navy's Naval Advanced Tactical Fighter (NATF) program. Because the NATF needed lower landing speeds than the F-22 for aircraft carrier operations while still attaining Mach 2-class speeds, the design would have incorporated variable-sweep wings; it would also have had expanded weapons carriage, including the AIM-152 AAAM, AGM-88 HARM, and AGM-84 Harpoon. The program was cancelled in 1991 due to tightening budgets.
=== Proposed derivatives ===
The X-44 MANTA, or multi-axis, no-tail aircraft, was a planned experimental aircraft based on the F-22 with enhanced thrust vectoring controls and no aerodynamic surface backup. The aircraft was to be solely controlled by thrust vectoring, without featuring any rudders, ailerons, or elevators. Funding for this program was halted in 2000.
The FB-22 was proposed in the early 2000s as a supersonic stealth regional bomber for the USAF. The design went through several iterations; later ones combined an F-22 fuselage with greatly enlarged delta wings and were projected to carry up to 30 Small Diameter Bombs over 1,600 nmi (3,000 km), about twice the combat range of the F-22A. The FB-22 proposals were cancelled with the 2006 Quadrennial Defense Review and subsequent developments in favor of a larger subsonic strategic bomber with a much greater range; this became the Next-Generation Bomber, although it would be rescoped in 2009 as the Long Range Strike Bomber, resulting in the B-21 Raider.
In August 2018, Lockheed Martin proposed an F-22 derivative to the Japan Air Self-Defense Force (JASDF) for its 5th/6th generation F-X program. The design, which was later also proposed to the USAF, would combine a modified F-22 airframe with enlarged wings to increase fuel capacity and combat radius to 1,200 nmi (2,200 km) as well as the avionics and improved stealth coatings of the F-35. The proposal was ultimately not considered by the USAF or JASDF due to cost as well as existing export restrictions and industrial workshare concerns.
== Operators ==
The United States Air Force is the only operator of the F-22. As of August 2022, it has 183 aircraft in its inventory.
=== Air Combat Command ===
=== Pacific Air Forces ===
=== Air National Guard ===
=== Air Force Reserve Command ===
=== Air Force Materiel Command ===
== Accidents ==
The first F-22 crash occurred during takeoff at Nellis AFB on 20 December 2004, in which the pilot ejected safely before impact. The investigation revealed that a brief interruption in power during an engine shutdown prior to flight caused a flight-control system malfunction; consequently the aircraft design was corrected to avoid the problem. Following a brief grounding, F-22 operations resumed after a review.
On 25 March 2009, an EMD F-22 crashed 35 miles (56 km) northeast of Edwards AFB during a test flight, resulting in the death of Lockheed Martin test pilot David P. Cooley. An Air Force Materiel Command investigation found that Cooley momentarily lost consciousness during a high-G maneuver, or g-LOC, then ejected when he found himself too low to recover. Cooley was killed during ejection by blunt-force trauma from windblast due to the aircraft's speed. The investigation found no design issues.
On 16 November 2010, an F-22 from Elmendorf AFB crashed, killing the pilot, Captain Jeffrey Haney. F-22s were restricted to flying below 25,000 feet, then grounded during the investigation. The crash was attributed to a bleed air system malfunction after an engine overheat condition was detected, shutting down the Environmental Control System (ECS) and OBOGS. The accident review board ruled Haney was to blame, as he did not react properly to engage the emergency oxygen system. Haney's widow sued Lockheed Martin, claiming equipment defects, and later reached a settlement. After the ruling, the emergency oxygen system engagement handle was redesigned and the entire system was eventually replaced by an automatic backup. On 11 February 2013, the DoD's Inspector General released a report stating that the USAF had erred in blaming Haney, and that facts did not sufficiently support conclusions; the USAF stated that it stood by the ruling.
On 15 November 2012, an F-22 crashed to the east of Tyndall AFB during a training mission. The pilot ejected safely and no injuries were reported on the ground. The investigation determined that a "chafed" electrical wire ignited the fluid in a hydraulic line, causing a fire that damaged the flight controls.
On 15 May 2020, an F-22 from Eglin Air Force Base crashed during a routine training mission shortly after takeoff; the pilot ejected safely. The cause of the crash was attributed to a maintenance error after an aircraft wash resulting in faulty air data sensor readings.
== Aircraft on display ==
91-4002 – Hill Air Force Base Aerospace Museum in Ogden, Utah
91-4003 – National Museum of the United States Air Force in Dayton, Ohio
== Specifications (F-22A) ==
Data from USAF, manufacturers' data, Aerofax, Aviation Week, Air Forces Monthly, and Journal of Electronic Defense
General characteristics
Crew: 1
Length: 62 ft 1 in (18.92 m)
Wingspan: 44 ft 6 in (13.56 m)
Height: 16 ft 8 in (5.08 m)
Wing area: 840 sq ft (78.04 m2)
Aspect ratio: 2.36
Airfoil: NACA 6 series airfoil
Empty weight: 43,340 lb (19,700 kg)
Gross weight: 64,840 lb (29,410 kg)
Max takeoff weight: 83,500 lb (38,000 kg)
Fuel capacity: 18,000 lb (8,200 kg) internally, or 26,000 lb (12,000 kg) with 2× 600 U.S. gal tanks
Powerplant: 2 × Pratt & Whitney F119-PW-100 augmented turbofans, 26,000 lbf (120 kN) thrust each dry, 35,000 lbf (160 kN) with afterburner
Performance
Maximum speed: Mach 2.25, 1,500 mph (1,303 kn; 2,414 km/h) at altitude
Mach 1.21, 800 knots (921 mph; 1,482 km/h) at sea level
Supercruise: Mach 1.76, 1,162 mph (1,010 kn; 1,870 km/h) at altitude
Range: 1,600 nmi (1,800 mi, 3,000 km) or more with 2 external fuel tanks
Combat range: 460 nmi (530 mi, 850 km) clean with 100 nmi (115 mi; 185 km) in supercruise
595 nmi (685 mi; 1,102 km) clean subsonic
750 nmi (863 mi; 1,389 km) with 100 nmi in supercruise with 2× 600 U.S. gal tanks
Ferry range: 1,740 nmi (2,000 mi, 3,220 km)
Service ceiling: 65,000 ft (20,000 m)
g limits: +9.0/−3.0
Wing loading: 77.2 lb/sq ft (377 kg/m2)
Thrust/weight: 1.08 (1.25 with loaded weight and 50% internal fuel)
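The wing loading and thrust-to-weight figures above follow directly from the listed values: wing loading is gross weight divided by wing area, and thrust-to-weight is total afterburning thrust divided by gross weight. A quick Python sanity check using only the numbers listed in this table:

```python
gross_weight_lb = 64_840              # gross weight listed above
wing_area_sqft = 840                  # wing area listed above
afterburner_thrust_lbf = 2 * 35_000   # two F119 engines in afterburner

wing_loading = gross_weight_lb / wing_area_sqft          # lb/sq ft
thrust_to_weight = afterburner_thrust_lbf / gross_weight_lb

print(round(wing_loading, 1))      # 77.2
print(round(thrust_to_weight, 2))  # 1.08
```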
Armament
Guns: 1× 20 mm M61A2 Vulcan rotary cannon, 480 rounds
Internal weapons bays:
Air-to-air mission loadout:
6× AIM-120C/D AMRAAM or
4× AIM-120A/B
2× AIM-9M/X Sidewinder
Air-to-ground mission loadout:
2× 1,000 lb (450 kg) JDAM and 2× AIM-120 or
8× 250 lb (110 kg) GBU-39 SDB and 2× AIM-120 or
4× 250 lb GBU-39 and 4× AIM-120
2× AIM-9
Hardpoint (external):
4× under-wing pylon stations can be fitted to carry weapons, each with a capacity of 5,000 lb (2,270 kg) or 600 U.S. gallon (2,270 L) drop tanks
4x AIM-120 AMRAAM (external)
Avionics
AN/APG-77 or AN/APG-77(V)1 AESA radar: 125–150 miles (201–241 km) against 1 m2 (11 sq ft) targets (estimated range), more than 250 miles (400 km) in narrow beams
AN/AAR-56 Missile Launch Detector (MLD)
Advanced Infrared Search and Track (IRST)
AN/ALR-94 electronic warfare system: 250 nautical miles (460 km) or more detection range for radar warning receiver (RWR)
Integrated CNI Avionics including:
Inter/Intra-Flight Datalink (IFDL)
MIDS-JTRS
Link 16/JTIDS
IFF (Mode 5)
Embedded GPS/INS (EGI)
TACAN
HAVE QUICK/SATURN
SINCGARS
MJU-39/40 flares for protection against IR missiles
== See also ==
Advanced Tactical Fighter
Aircraft in fiction#F-22 Raptor
Related development
Lockheed YF-22
Lockheed Martin FB-22
Lockheed Martin X-44 MANTA
Aircraft of comparable role, configuration, and era
Chengdu J-20
Lockheed Martin F-35 Lightning II
Sukhoi Su-57
TAI TF Kaan
Related lists
List of fighter aircraft
List of Lockheed aircraft
List of active United States military aircraft
List of megaprojects, Aerospace
List of military electronics of the United States
== Notes ==
== References ==
=== Citations ===
=== Bibliography ===
== Further reading ==
Wallace, Mike; Holder, William G. (1998). Lockheed-Martin F-22 Raptor: An Illustrated History. Atglen, PA: Schiffer Publishing. ISBN 9780764305580. OCLC 39910177.
== External links ==
Official website
F-22 Demo at 2007 Capital Airshow in Sacramento – with narrative by F-22 pilot Paul "Max" Moga
Contracting Strategy for F-22 Modernization - U.S. Department of Defense
F-35
The Lockheed Martin F-35 Lightning II is an American family of single-seat, single-engine, supersonic stealth strike fighters. A multirole combat aircraft designed for both air superiority and strike missions, it also has electronic warfare and intelligence, surveillance, and reconnaissance capabilities. Lockheed Martin is the prime F-35 contractor with principal partners Northrop Grumman and BAE Systems. The aircraft has three main variants: the conventional takeoff and landing (CTOL) F-35A, the short take-off and vertical-landing (STOVL) F-35B, and the carrier variant (CV) catapult-assisted take-off but arrested recovery (CATOBAR) F-35C.
The aircraft descends from the Lockheed Martin X-35, which in 2001 beat the Boeing X-32 to win the Joint Strike Fighter (JSF) program intended to replace the F-16 Fighting Falcon, F/A-18 Hornet, and the McDonnell Douglas AV-8B Harrier II "jump jet", among others. Its development is principally funded by the United States, with additional funding from program partner countries from the North Atlantic Treaty Organization (NATO) and close U.S. allies, including Australia, Canada, Denmark, Italy, the Netherlands, Norway, the United Kingdom, and formerly Turkey. Several other countries have also ordered, or are considering ordering, the aircraft. The program has drawn criticism for its unprecedented size, complexity, ballooning costs, and delayed deliveries. The acquisition strategy of concurrent production of the aircraft while it was still in development and testing led to expensive design changes and retrofits. As of July 2024, the average flyaway costs per plane are: US$82.5 million for the F-35A, $109 million for the F-35B, and $102.1 million for the F-35C.
The F-35 first flew in 2006 and entered service with the U.S. Marine Corps F-35B in July 2015, followed by the U.S. Air Force F-35A in August 2016 and the U.S. Navy F-35C in February 2019. The aircraft was first used in combat in 2018 by the Israeli Air Force. The U.S. plans to buy 2,456 F-35s through 2044, which will represent the bulk of the crewed tactical aviation of the U.S. Air Force, Navy, and Marine Corps for several decades; the aircraft is planned to be a cornerstone of NATO and U.S.-allied air power and to operate to 2070.
== Development ==
=== Program origins ===
The F-35 was the product of the Joint Strike Fighter (JSF) program, which was the merger of various combat aircraft programs from the 1980s and 1990s. One progenitor program was the Defense Advanced Research Projects Agency (DARPA) Advanced Short Take-Off/Vertical Landing (ASTOVL) program, which ran from 1983 to 1994; ASTOVL aimed to develop a Harrier jump jet replacement for the U.S. Marine Corps (USMC) and the UK Royal Navy. Under one of ASTOVL's classified programs, the Supersonic STOVL Fighter (SSF), Lockheed's Skunk Works conducted research for a stealthy supersonic STOVL fighter intended for both the U.S. Air Force (USAF) and the USMC; among the key STOVL technologies explored was the shaft-driven lift fan (SDLF) system. Lockheed's concept was a single-engine canard delta aircraft weighing about 24,000 lb (11,000 kg) empty. ASTOVL was rechristened the Common Affordable Lightweight Fighter (CALF) in 1993 and involved Lockheed, McDonnell Douglas, and Boeing.
The end of the Cold War and the collapse of the Soviet Union in 1991 caused considerable reductions in Department of Defense (DoD) spending and subsequent restructuring. In 1993, the Joint Advanced Strike Technology (JAST) program emerged following the cancellation of the USAF's Multi-Role Fighter (MRF) and the U.S. Navy's (USN) Advanced Attack/Fighter (A/F-X) programs. MRF, a program for a relatively affordable F-16 Fighting Falcon replacement, was scaled back and delayed because the post–Cold War defense posture eased F-16 fleet usage, thereby extending the type's service life, and because of increasing budget pressure from the Lockheed Martin F-22 Advanced Tactical Fighter (ATF) program. The A/F-X, initially known as the Advanced-Attack (A-X), began in 1991 as the USN's follow-on to the Advanced Tactical Aircraft (ATA) program for a Grumman A-6 Intruder replacement; the ATA's resulting McDonnell Douglas A-12 Avenger II had been canceled in 1991 due to technical problems and cost overruns. In the same year, the termination of the Naval Advanced Tactical Fighter (NATF), a naval development of the USAF's ATF program to replace the Grumman F-14 Tomcat, resulted in additional fighter capability being added to the A-X, which was then renamed the A/F-X. Amid increased budget pressure, the DoD's Bottom-Up Review (BUR) in September 1993 announced the cancellations of the MRF and A/F-X, with applicable experience brought to the emerging JAST program. JAST was not meant to develop a new aircraft, but rather to develop requirements, mature technologies, and demonstrate concepts for advanced strike warfare.
As JAST progressed, the need for concept demonstrator aircraft by 1996 emerged, which would coincide with the full-scale flight demonstrator phase of ASTOVL/CALF. Because the ASTOVL/CALF concept appeared to align with the JAST charter, the two programs were eventually merged in 1994 under the JAST name, with the program now serving the USAF, USMC, and USN. JAST was subsequently renamed to Joint Strike Fighter (JSF) in 1995, with STOVL submissions by McDonnell Douglas, Northrop Grumman, Lockheed Martin, and Boeing. The JSF was expected to eventually replace large numbers of multi-role and strike fighters in the inventories of the US and its allies, including the Harrier, F-16, F/A-18, Fairchild A-10 Thunderbolt II, and Lockheed F-117 Nighthawk.
International participation is a key aspect of the JSF program, starting with United Kingdom participation in the ASTOVL program. Many international partners requiring modernization of their air forces were interested in the JSF. The United Kingdom joined JAST/JSF as a founding member in 1995 and thus became the only Tier 1 partner of the JSF program; Italy, the Netherlands, Denmark, Norway, Canada, Australia, and Turkey joined the program during the Concept Demonstration Phase (CDP), with Italy and the Netherlands being Tier 2 partners and the rest Tier 3. Consequently, the aircraft was developed in cooperation with international partners and available for export.
=== JSF competition ===
Boeing and Lockheed Martin were selected in early 1997 for CDP, with their concept demonstrator aircraft designated X-32 and X-35 respectively; the McDonnell Douglas team was eliminated, and Northrop Grumman and British Aerospace joined the Lockheed Martin team. Each firm would produce two prototype air vehicles to demonstrate conventional takeoff and landing (CTOL), carrier takeoff and landing (CV), and STOVL. Lockheed Martin's design would make use of the work on the SDLF system conducted under the ASTOVL/CALF program. The key feature of the X-35 that enabled STOVL operation, the SDLF system consists of a lift fan in the forward center fuselage that is activated by engaging a clutch connecting a driveshaft to the engine's turbines, augmenting the thrust from the engine's swivel nozzle. Research from prior aircraft incorporating similar systems, such as the Convair Model 200, Rockwell XFV-12, and Yakovlev Yak-141, was also taken into consideration. By contrast, Boeing's X-32 employed a direct lift system, in which the augmented turbofan would be reconfigured to provide lift for STOVL operation.
Lockheed Martin's commonality strategy was to replace the STOVL variant's SDLF, its patented shaft-driven LiftFan propulsion system, with a fuel tank, and the aft swivel nozzle with a two-dimensional thrust-vectoring nozzle for the CTOL variant. This enabled an identical aerodynamic configuration for the STOVL and CTOL variants, while the CV variant would have an enlarged wing to reduce landing speed for carrier recovery. Due to aerodynamic characteristics and carrier recovery requirements from the JAST merger, the design settled on a conventional tail rather than the canard delta of the ASTOVL/CALF; notably, the conventional tail configuration offers much lower risk for carrier recovery than the ASTOVL/CALF canard configuration, which was designed without carrier compatibility in mind. Commonality among the three variants was an important goal at this design stage. Lockheed Martin's prototypes would consist of the X-35A for demonstrating CTOL, which would then be converted to the X-35B for STOVL demonstration, and the larger-winged X-35C for CV compatibility demonstration.
The X-35A first flew on 24 October 2000 and conducted flight tests for subsonic and supersonic flying qualities, handling, range, and maneuver performance. After 28 flights, the aircraft was then converted into the X-35B for STOVL testing, with key changes including the addition of the SDLF, the three-bearing swivel module (3BSM), and roll-control ducts. The X-35B would successfully demonstrate the SDLF system by performing stable hover, vertical landing, and short takeoff in less than 500 ft (150 m). The X-35C first flew on 16 December 2000 and conducted field landing carrier practice tests.
On 26 October 2001, Lockheed Martin was declared the winner and was awarded the System Development and Demonstration (SDD) contract; Pratt & Whitney was separately awarded a development contract for the F135 engine for the JSF. The F-35 designation, which was out of sequence with standard DoD numbering, was allegedly determined on the spot by program manager Major General Mike Hough; this came as a surprise even to Lockheed Martin, which had expected the F-24 designation for the JSF.
=== Design and production ===
As the JSF program moved into the System Development and Demonstration phase, the X-35 demonstrator design was modified to create the F-35 combat aircraft. The forward fuselage was lengthened by 5 inches (13 cm) to make room for mission avionics, while the horizontal stabilizers were moved 2 inches (5.1 cm) aft to retain balance and control. The diverterless supersonic inlet changed from a four-sided to a three-sided cowl shape and was moved 30 inches (76 cm) aft. The fuselage section was fuller, the top surface raised by 1 inch (2.5 cm) along the centerline and the lower surface bulged to accommodate weapons bays. Following the designation of the X-35 prototypes, the three variants were designated F-35A (CTOL), F-35B (STOVL), and F-35C (CV), all with a design service life of 8,000 hours. Prime contractor Lockheed Martin performs overall systems integration and final assembly and checkout (FACO) at Air Force Plant 4 in Fort Worth, Texas, while Northrop Grumman and BAE Systems supply components for mission systems and airframe.
Adding the systems of a fighter aircraft added weight. The F-35B gained the most, largely due to a 2003 decision to enlarge the weapons bays for commonality between variants; the total weight growth was reportedly up to 2,200 pounds (1,000 kg), over 8%, causing all STOVL key performance parameter (KPP) thresholds to be missed. In December 2003, the STOVL Weight Attack Team (SWAT) was formed to reduce the weight increase; changes included thinned airframe members, smaller weapons bays and vertical stabilizers, less thrust fed to the roll-post outlets, and redesigning the wing-mate joint, electrical elements, and the airframe immediately aft of the cockpit. The inlet was also revised to accommodate more powerful, greater mass flow engines. Many changes from the SWAT effort were applied to all three variants for commonality. By September 2004, these efforts had reduced the F-35B's weight by over 3,000 pounds (1,400 kg), while the F-35A and F-35C were reduced in weight by 2,400 pounds (1,100 kg) and 1,900 pounds (860 kg) respectively. The weight reduction work cost $6.2 billion and caused an 18-month delay.
The first F-35A, designated AA-1, was rolled out at Fort Worth on 19 February 2006 and first flew on 15 December 2006 with chief test pilot Jon S. Beesley at the controls. In 2006, the F-35 was given the name "Lightning II" after the Lockheed P-38 Lightning of World War II. Some USAF pilots have nicknamed the aircraft "Panther" instead, and other nicknames include "Fat Amy" and "Battle Penguin".
The aircraft's software was developed as six releases, or Blocks, for SDD. The first two Blocks, 1A and 1B, readied the F-35 for initial pilot training and multi-level security. Block 2A improved the training capabilities, while 2B was the first combat-ready release planned for the USMC's Initial Operating Capability (IOC). Block 3i retains the capabilities of 2B while having new Technology Refresh 2 (TR-2) hardware and was planned for the USAF's IOC. The final release for SDD, Block 3F, would have full flight envelope and all baseline combat capabilities. Alongside software releases, each block also incorporates avionics hardware updates and air vehicle improvements from flight and structural testing. In what is known as "concurrency", some low rate initial production (LRIP) aircraft lots would be delivered in early Block configurations and eventually upgraded to Block 3F once development is complete. After 17,000 flight test hours, the final flight for the SDD phase was completed in April 2018. Like the F-22, the F-35 has been targeted by cyberattacks and technology theft efforts, as well as potential vulnerabilities in the integrity of the supply chain.
Testing found several major problems: early F-35B airframes were vulnerable to premature cracking, the F-35C arrestor hook design was unreliable, fuel tanks were too vulnerable to lightning strikes, the helmet display had problems, and more. Software was repeatedly delayed due to its unprecedented scope and complexity. In 2009, the DoD Joint Estimate Team (JET) estimated that the program was 30 months behind the public schedule. In 2011, the program was "re-baselined"; that is, its cost and schedule goals were changed, pushing the IOC from the planned 2010 to July 2015. The decision to simultaneously test, fix defects, and begin production was criticized as inefficient; in 2014, Under Secretary of Defense for Acquisition Frank Kendall called it "acquisition malpractice". The three variants shared just 25% of their parts, far below the anticipated commonality of 70%.
The program received considerable criticism for cost overruns and for the total projected lifetime cost, as well as quality management shortcomings by contractors. As of August 2023, the program was 80% over budget and 10 years late.
The JSF program was expected to cost about $200 billion for acquisition in base-year 2002 dollars when SDD was awarded in 2001. As early as 2005, the Government Accountability Office (GAO) had identified major program risks in cost and schedule. The costly delays strained the relationship between the Pentagon and contractors. By 2017, delays and cost overruns had pushed the F-35 program's expected acquisition costs to $406.5 billion, with the total lifetime cost (i.e., to 2070) at $1.5 trillion in then-year dollars, a figure that also includes operations and maintenance. The F-35A's unit cost (not including engine) for LRIP Lot 13 was $79.2 million in base-year 2012 dollars. Delays in development and operational test and evaluation, including integration into the Joint Simulation Environment, pushed the full-rate production decision from the end of 2019 to March 2024, although the actual production rate had already approached the full rate by 2020; the combined full rate at the Fort Worth, Italy, and Japan FACO plants is 156 aircraft annually.
=== Upgrades and further development ===
The F-35 is expected to be continually upgraded over its lifetime. The first combat-capable Block 2B configuration, which had basic air-to-air and strike capabilities, was declared ready by the USMC in July 2015. The Block 3F configuration began operational test and evaluation (OT&E) in December 2018; the completion of OT&E in late 2023 allowed SDD to conclude in March 2024. The F-35 program is also conducting sustainment and upgrade development, with early aircraft from LRIP lot 2 onwards gradually upgraded to the baseline Block 3F standard by 2021.
With Block 3F as the final build for SDD, the first major upgrade program is Block 4 which began development in 2019 and was initially captured under the Continuous Capability Development and Delivery (C2D2) program. Block 4 is expected to enter service in incremental steps from the late 2020s to early 2030s and integrates additional weapons, including those unique to international customers, improved sensor capabilities including the new AN/APG-85 AESA radar and additional ESM bandwidth, and adds Remotely Operated Video Enhanced Receiver (ROVER) support. C2D2 also places greater emphasis on agile software development to enable quicker releases.
The key enabler of Block 4 is the Technology Refresh 3 (TR-3) avionics hardware, which consists of new display, core processor, and memory modules to support increased processing requirements, as well as an engine upgrade that increases the amount of cooling available to support the additional mission systems. The engine upgrade effort explored both improvements to the F135 and significantly more powerful and efficient adaptive cycle engines. In 2018, General Electric and Pratt & Whitney were awarded contracts to develop adaptive cycle engines for potential application in the F-35, and in 2022, the F-35 Adaptive Engine Replacement program was launched to integrate them. However, in 2023 the USAF chose an improved F135 under the Engine Core Upgrade (ECU) program over an adaptive cycle engine due to cost as well as concerns over the risk of integrating the new engine, initially designed for the F-35A, on the B and C. Difficulties with the new TR-3 hardware, including regression testing, have caused delays to Block 4 as well as a halt in aircraft deliveries from July 2023 to July 2024.
Defense contractors have offered upgrades to the F-35 outside of official program contracts. In 2013, Northrop Grumman disclosed its development of a directional infrared countermeasures suite, named Threat Nullification Defensive Resource (ThNDR). The countermeasure system would share the same space as the Distributed Aperture System (DAS) sensors and acts as a laser missile jammer to protect against infrared-homing missiles.
Israel operates a unique subvariant of the F-35A, designated the F-35I, that is designed to better interface with and incorporate Israeli equipment and weapons. The Israeli Air Force also has its own F-35I test aircraft, which provides greater access to the core avionics for integrating Israeli equipment.
=== Procurement and international participation ===
The United States is the primary customer and financial backer, with planned procurement of 1,763 F-35As for the USAF, 353 F-35Bs and 67 F-35Cs for the USMC, and 273 F-35Cs for the USN. Additionally, the United Kingdom, Italy, the Netherlands, Turkey, Australia, Norway, Denmark and Canada have agreed to contribute US$4.375 billion towards development costs, with the United Kingdom contributing about 10% of the planned development costs as the sole Tier 1 partner. Britain supplies ejector seats, rear fuselage, active interceptor systems, targeting lasers and weapon release cables, mainly through BAE Systems, amounting to 15% of the value of the F-35, and is the largest supplier of spare parts for the jet after the US. The initial plan was that the U.S. and eight major partner countries would acquire over 3,100 F-35s through 2035. The three tiers of international participation generally reflect financial stake in the program, the amount of technology transfer and subcontracts open for bid by national companies, and the order in which countries can obtain production aircraft. Alongside program partner countries, Israel and Singapore have joined as Security Cooperative Participants (SCP). Sales to SCP and non-partner states, including Belgium, Japan, and South Korea, are made through the Pentagon's Foreign Military Sales program. Turkey was removed from the F-35 program in July 2019 over security concerns following its purchase of a Russian S-400 surface-to-air missile system.
As of July 2024, the average flyaway costs per plane are: $82.5 million for the F-35A, $109 million for the F-35B, and $102.1 million for the F-35C.
== Design ==
=== Overview ===
The F-35 is a family of single-engine, supersonic, stealth multirole strike fighters. The second fifth-generation fighter to enter US service and the first operational supersonic STOVL stealth fighter, the F-35 emphasizes low observables, advanced avionics, and sensor fusion that enable a high level of situational awareness and long-range lethality; the USAF considers the aircraft its primary strike fighter for conducting suppression of enemy air defense (SEAD) and air interdiction missions, owing to its advanced sensors and mission systems.
The F-35 has a wing-tail configuration with two vertical stabilizers canted for stealth. Flight control surfaces include leading-edge flaps, flaperons, rudders, and all-moving horizontal tails (stabilators); leading edge root extensions or chines also run forwards to the inlets. The relatively short 35-foot wingspan of the F-35A and F-35B is set by the requirement to fit inside USN amphibious assault ship parking areas and elevators; the F-35C's larger wing is more fuel efficient. The fixed diverterless supersonic inlets (DSI) use a bumped compression surface and forward-swept cowl to shed the boundary layer of the forebody away from the inlets, which form a Y-duct for the engine. Structurally, the F-35 drew upon lessons from the F-22; composites comprise 35% of airframe weight, with the majority being bismaleimide and composite epoxy materials as well as some carbon nanotube-reinforced epoxy in later production lots. The F-35 is considerably heavier than the lightweight fighters it replaces, with the lightest variant having an empty weight of 29,300 lb (13,300 kg); much of the weight can be attributed to the internal weapons bays and the extensive avionics carried.
While lacking the kinematic performance of the larger twin-engine F-22, the F-35 is competitive with fourth-generation fighters such as the F-16 and F/A-18, especially when the latter carry weapons, because the F-35's internal weapons bay eliminates drag from external stores. All variants have a top speed of Mach 1.6 (1,220 mph; 1,960 km/h), attainable with full internal payload. The Pratt & Whitney F135 engine gives good subsonic acceleration and energy, with supersonic dash in afterburner. The F-35, while not a "supercruising" aircraft, can fly at Mach 1.2 (913 mph; 1,470 km/h) for a dash of 150 miles (240 km) without afterburners. This ability can be useful in battlefield situations. The large stabilators, leading-edge extensions and flaps, and canted rudders provide excellent high-alpha (angle-of-attack) characteristics, with a trimmed alpha of 50°. Relaxed stability and triplex-redundant fly-by-wire controls provide excellent handling qualities and departure resistance. Having over double the internal fuel of the F-16, the F-35 has a considerably greater combat radius, while stealth also enables a more efficient mission flight profile.
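As a sanity check on the quoted figures, the mile-per-hour and kilometre-per-hour values correspond to the Mach numbers multiplied by the standard sea-level speed of sound. A minimal sketch, assuming ISA sea-level conditions (roughly 761 mph or 1,225 km/h; at altitude the speed of sound is lower, so these are nominal reference values only):

```python
# Hedged sketch: check the article's Mach-to-speed conversions assuming the
# ISA sea-level speed of sound. These are nominal reference values, not
# true airspeeds at altitude.
A_SL_MPH = 761.2   # speed of sound at sea level, mph (ISA)
A_SL_KMH = 1225.0  # speed of sound at sea level, km/h (ISA)

def mach_to_speeds(mach):
    """Return (mph, km/h) for a given Mach number at ISA sea level."""
    return mach * A_SL_MPH, mach * A_SL_KMH

mph, kmh = mach_to_speeds(1.6)
print(round(mph), round(kmh))  # 1218 1960 -- matches the quoted ~1,220 mph / 1,960 km/h
mph, kmh = mach_to_speeds(1.2)
print(round(mph), round(kmh))  # 913 1470 -- matches the quoted 913 mph / 1,470 km/h
```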
=== Sensors and avionics ===
The F-35's mission systems are among the most complex aspects of the aircraft. The avionics and sensor fusion are designed to improve the pilot's situational awareness and command-and-control capabilities and facilitate network-centric warfare. Key sensors include the Northrop Grumman AN/APG-81 active electronically scanned array (AESA) radar, BAE Systems AN/ASQ-239 Barracuda electronic warfare system, Northrop Grumman/Raytheon AN/AAQ-37 Electro-optical Distributed Aperture System (DAS), Lockheed Martin AN/AAQ-40 Electro-Optical Targeting System (EOTS) and Northrop Grumman AN/ASQ-242 Communications, Navigation, and Identification (CNI) suite. The F-35 was designed for its sensors to work together to provide a cohesive image of the local battlespace; for example, the APG-81 radar also acts as a part of the electronic warfare system.
Much of the F-35's software was developed in C and C++ programming languages, while Ada83 code from the F-22 was also used; the Block 3F software has 8.6 million lines of code. The Green Hills Software Integrity DO-178B real-time operating system (RTOS) runs on integrated core processors (ICPs); data networking includes the IEEE 1394b and Fibre Channel buses. The avionics use commercial off-the-shelf (COTS) components when practical to make upgrades cheaper and more flexible; for example, to enable fleet software upgrades for the software-defined radio (SDR) systems. The mission systems software, particularly for sensor fusion, was one of the program's most difficult parts and responsible for substantial program delays.
The APG-81 radar uses electronic scanning for rapid beam agility and incorporates passive and active air-to-air modes, strike modes, and synthetic aperture radar (SAR) capability, with multiple-target track-while-scan at ranges in excess of 80 nmi (150 km). The antenna is tilted backwards for stealth. Complementing the radar is the AAQ-37 DAS, which consists of six infrared sensors that provide all-aspect missile launch warning and target tracking; the DAS acts as a situational awareness infrared search-and-track (SAIRST) system and gives the pilot spherical infrared and night-vision imagery on the helmet visor. The ASQ-239 Barracuda electronic warfare system has ten radio-frequency antennas embedded into the edges of the wing and tail for all-aspect radar warning receiver (RWR) coverage. It also provides sensor fusion of radio-frequency and infrared tracking functions, geolocation threat targeting, and multispectral image countermeasures for self-defense against missiles. The electronic warfare system can detect and jam hostile radars. The AAQ-40 EOTS is mounted behind a faceted low-observable window under the nose and performs laser targeting, forward-looking infrared (FLIR), and long-range IRST functions. The ASQ-242 CNI suite uses a half dozen physical links, including the directional Multifunction Advanced Data Link (MADL), for covert CNI functions. Through sensor fusion, information from radio-frequency receivers and infrared sensors is combined to form a single tactical picture for the pilot. The all-aspect target direction and identification can be shared via MADL with other platforms without compromising low observability, while Link 16 enables communication with older systems.
The F-35 was designed to accept upgrades to its processors, sensors, and software over its lifespan. Technology Refresh 3, which includes a new core processor and a new cockpit display, is planned for Lot 15 aircraft. Lockheed Martin has offered the Advanced EOTS for the Block 4 configuration; the improved sensor fits into the same area as the baseline EOTS with minimal changes. In June 2018, Lockheed Martin selected Raytheon to develop an improved DAS. The USAF has studied the potential for the F-35 to orchestrate attacks by unmanned combat aerial vehicles (UCAVs) via its sensors and communications equipment.
A new radar called the AN/APG-85 is planned for Block 4 F-35s. According to the JPO, the new radar will be compatible with all three major F-35 variants. However, it is unclear if older aircraft will be retrofitted with the new radar.
=== Stealth and signatures ===
Stealth is a key aspect of the F-35's design, and radar cross-section (RCS) is minimized through careful shaping of the airframe and the use of radar-absorbent materials (RAM); visible measures to reduce RCS include alignment of edges and continuous curvature of surfaces, serration of skin panels, and the masking of the engine face and turbine. Additionally, the F-35's diverterless supersonic inlet (DSI) uses a compression bump and forward-swept cowl rather than a splitter gap or bleed system to divert the boundary layer away from the inlet duct, eliminating the diverter cavity and further reducing radar signature. The RCS of the F-35 has been characterized as lower than a metal golf ball at certain frequencies and angles; in some conditions, the F-35 compares favorably to the F-22 in stealth. For maintainability, the F-35's stealth design took lessons from earlier stealth aircraft such as the F-22; the F-35's radar-absorbent fibermat skin is more durable and requires less maintenance than older topcoats. The aircraft also has reduced infrared and visual signatures as well as strict controls of radio frequency emitters to prevent their detection. The F-35's stealth design is primarily focused on high-frequency X-band wavelengths; low-frequency radars can spot stealthy aircraft due to Rayleigh scattering, but such radars are also conspicuous, susceptible to clutter, and lack precision. To disguise its RCS, the aircraft can mount four Luneburg lens reflectors.
Noise from the F-35 caused concerns in residential areas near potential bases for the aircraft, and residents near two such bases—Luke Air Force Base, Arizona, and Eglin Air Force Base (AFB), Florida—requested environmental impact studies in 2008 and 2009 respectively. Although the noise levels, in decibels, were comparable to those of prior fighters such as the F-16, the F-35's sound power is stronger—particularly at lower frequencies. Subsequent surveys and studies have indicated that the noise of the F-35 was not perceptibly different from the F-16 and F/A-18E/F, though the greater low-frequency noise was noticeable for some observers.
=== Cockpit ===
The glass cockpit was designed to give the pilot good situational awareness. The main display is a 20-by-8-inch (50 by 20 cm) panoramic touchscreen, which shows flight instruments, stores management, CNI information, and integrated caution and warnings; the pilot can customize the arrangement of the information. Below the main display is a smaller stand-by display. The cockpit has a speech-recognition system developed by Adacel. The F-35 does not have a head-up display; instead, flight and combat information is displayed on the visor of the pilot's helmet in a helmet-mounted display system (HMDS). The one-piece tinted canopy is hinged at the front and has an internal frame for structural strength. The Martin-Baker US16E ejection seat is launched by a twin-catapult system housed on side rails. The aircraft uses a hands-on throttle-and-stick arrangement with a right-hand side-stick controller. For life support, an onboard oxygen-generation system (OBOGS) is fitted and powered by the Integrated Power Package (IPP), with an auxiliary oxygen bottle and a backup oxygen system for emergencies.
The Vision Systems International helmet display is a key piece of the F-35's human-machine interface. Instead of the head-up display mounted atop the dashboard of earlier fighters, the HMDS puts flight and combat information on the helmet visor, allowing the pilot to see it no matter which way they are facing. Infrared and night vision imagery from the Distributed Aperture System can be displayed directly on the HMDS and enables the pilot to "see through" the aircraft. The HMDS allows an F-35 pilot to fire missiles at targets even when the nose of the aircraft is pointing elsewhere by cuing missile seekers at high angles off-boresight. Each helmet costs $400,000. The HMDS weighs more than traditional helmets, and there is concern that it can endanger lightweight pilots during ejection.
Due to the HMDS's vibration, jitter, night-vision and sensor display problems during development, Lockheed Martin and Elbit issued a draft specification in 2011 for an alternative HMDS based on the AN/AVS-9 night vision goggles as backup, with BAE Systems chosen later that year. A cockpit redesign would be needed to adopt an alternative HMDS. Following progress on the baseline helmet, development on the alternative HMDS was halted in October 2013. In 2016, the Gen 3 helmet with improved night vision camera, new liquid crystal displays, automated alignment and software enhancements was introduced with LRIP lot 7.
=== Armament ===
To preserve its stealth shaping, the F-35 has two internal weapons bays each with two weapons stations. The two outboard weapon stations each can carry ordnance up to 2,500 lb (1,100 kg), or 1,500 lb (680 kg) for the F-35B, while the two inboard stations carry air-to-air missiles. Air-to-surface weapons for the outboard station include the Joint Direct Attack Munition (JDAM), Paveway series of bombs, Joint Standoff Weapon (JSOW), and cluster munitions (Wind Corrected Munitions Dispenser). The station can also carry multiple smaller munitions such as the GBU-39 Small Diameter Bombs (SDB), GBU-53/B StormBreaker and SPEAR 3; up to four SDBs can be carried per station for the F-35A and F-35C, and three for the F-35B. The F-35A achieved certification to carry the B61 Mod 12 nuclear bomb in October 2023. The inboard station can carry the AIM-120 AMRAAM and eventually the AIM-260 JATM. Two compartments behind the weapons bays contain flares, chaff, and towed decoys.
The aircraft can use six external weapons stations for missions that do not require stealth. The wingtip pylons can each carry an AIM-9X or AIM-132 ASRAAM and are canted outwards to reduce their radar cross-section. Additionally, each wing has a 5,000 lb (2,300 kg) inboard station and a 2,500 lb (1,100 kg) middle station, or 1,500 lb (680 kg) for the F-35B. The external wing stations can carry large air-to-surface weapons that would not fit inside the weapons bays, such as the AGM-158 Joint Air to Surface Standoff Missile (JASSM) or AGM-158C LRASM cruise missile. An air-to-air missile load of eight AIM-120s and two AIM-9s is possible using internal and external weapons stations; a configuration of six 2,000 lb (910 kg) bombs, two AIM-120s and two AIM-9s can also be arranged. The F-35 is armed with a 25 mm GAU-22/A rotary cannon, a lighter four-barrel variant of the GAU-12/U Equalizer. On the F-35A this is mounted internally near the left wing root with 182 rounds carried; the gun is more effective against ground targets than the 20 mm gun carried by other USAF fighters. In 2020, a USAF report noted "unacceptable" accuracy problems with the GAU-22/A on the F-35A. These were due to "misalignments" in the gun's mount, which was also susceptible to cracking. These problems were resolved by 2024. The F-35B and F-35C have no internal gun and instead can use a Terma A/S multi-mission pod (MMP) carrying the GAU-22/A and 220 rounds; the pod is mounted on the centerline of the aircraft and shaped to reduce its radar cross-section. In lieu of the gun, the pod can also be used for other equipment and purposes, such as electronic warfare, aerial reconnaissance, or a rear-facing tactical radar. The pod was not susceptible to the accuracy issues that once plagued the gun on the F-35A variant, though it was apparently not problem-free.
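The per-station payload limits described above can be summarized in a small lookup. This is an illustrative sketch only: the station names are descriptive labels invented here for clarity, not official hardpoint designations, and only the stations with stated weight limits are included.

```python
# Illustrative summary of F-35 weapon-station weight limits (lb) as described
# in the text. Station names are descriptive labels, not official designations.
STATION_LIMITS_LB = {
    "F-35A": {"internal_outboard": 2500, "external_inboard": 5000, "external_middle": 2500},
    "F-35B": {"internal_outboard": 1500, "external_inboard": 5000, "external_middle": 1500},
    "F-35C": {"internal_outboard": 2500, "external_inboard": 5000, "external_middle": 2500},
}

def fits(variant, station, weapon_weight_lb):
    """Check whether a weapon of the given weight is within a station's limit."""
    return weapon_weight_lb <= STATION_LIMITS_LB[variant][station]

print(fits("F-35A", "internal_outboard", 2000))  # True: within the 2,500 lb limit
print(fits("F-35B", "internal_outboard", 2000))  # False: F-35B bay limit is 1,500 lb
```

The lookup makes the variant differences explicit: the F-35B trades payload capacity at two stations for its lift-fan volume, as described in the text.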
Lockheed Martin is developing a weapon rack called Sidekick that would enable the internal outboard station to carry two AIM-120s, thus increasing the internal air-to-air payload to six missiles; it is currently offered for Block 4. Block 4 will also have a rearranged hydraulic line and bracket to allow the F-35B to carry four SDBs per internal outboard station; integration of the MBDA Meteor is also planned. The USAF and USN are planning to integrate the AGM-88G AARGM-ER internally in the F-35A and F-35C. Norway and Australia are funding an adaptation of the Naval Strike Missile (NSM) for the F-35; designated Joint Strike Missile (JSM), two missiles can be carried internally with an additional four externally. Both hypersonic missiles and directed-energy weapons such as solid-state lasers are currently being considered as future upgrades; in 2024, Lockheed Martin disclosed its proposed Mako hypersonic missile, which can be carried internally in the F-35A and C and externally on the B. Additionally, Lockheed Martin is studying integrating a fiber laser that uses spectral beam combining to merge multiple individual laser modules into a single high-power beam, which can be scaled to various power levels.
The USAF plans for the F-35A to take up the close air support (CAS) mission in contested environments; amid criticism that it is not as well suited as a dedicated attack platform, USAF chief of staff Mark Welsh placed a focus on weapons for CAS sorties, including guided rockets, fragmentation rockets that shatter into individual projectiles before impact, and more compact ammunition for higher capacity gun pods. Fragmentary rocket warheads create greater effects than cannon shells as each rocket creates a "thousand-round burst", delivering more projectiles than a strafing run.
=== Engine ===
The aircraft is powered by a single Pratt & Whitney F135 low-bypass augmented turbofan with rated thrust of 28,000 lbf (125 kN) at military power and 43,000 lbf (191 kN) with afterburner. Derived from the Pratt & Whitney F119 used by the F-22, the F135 has a larger fan and higher bypass ratio to increase subsonic thrust and fuel efficiency, and unlike the F119, is not optimized for supercruise. The engine contributes to the F-35's stealth by having a low-observable augmenter, or afterburner, that incorporates fuel injectors into thick curved vanes; these vanes are covered by ceramic radar-absorbent materials and mask the turbine. The stealthy augmenter had problems with pressure pulsations, or "screech", at low altitude and high speed early in its development. The low-observable axisymmetric nozzle consists of 15 partially overlapping flaps that create a sawtooth pattern at the trailing edge, which reduces radar signature and creates shed vortices that reduce the infrared signature of the exhaust plume. Due to the engine's large dimensions, the U.S. Navy had to modify its underway replenishment system to facilitate at-sea logistics support. The F-35's Integrated Power Package (IPP) performs power and thermal management and integrates environment control, auxiliary power unit, engine starting, and other functions into a single system.
The F135-PW-600 variant for the F-35B incorporates the Shaft-Driven Lift Fan (SDLF) to allow STOVL operations. Designed by Lockheed Martin and developed by Rolls-Royce, the SDLF, also known as the Rolls-Royce LiftSystem, consists of the lift fan, drive shaft, two roll posts, and a "three-bearing swivel module" (3BSM). The nozzle consists of three sections, each resembling a short cylinder with nonparallel bases; as the sections' toothed edges are rotated by motors, the nozzle swivels from pointing in line with the engine to pointing perpendicular to it. The thrust vectoring 3BSM nozzle allows the main engine exhaust to be deflected downward at the tail of the aircraft and is moved by a "fueldraulic" actuator that uses pressurized fuel as the working fluid. Unlike the Harrier's Pegasus engine, which relies entirely on direct engine thrust for lift, the F-35B's system augments the swivel nozzle's thrust with the lift fan; the fan is powered by the low-pressure turbine through a drive shaft when engaged with a clutch and is placed near the front of the aircraft to provide a torque countering that of the 3BSM nozzle. Roll control during slow flight is achieved by diverting unheated engine bypass air through wing-mounted thrust nozzles called roll posts.
An alternative engine, the General Electric/Allison/Rolls-Royce F136, was being developed in the 1990s and 2000s; originally, F-35 engines from Lot 6 onward were competitively tendered. Using technology from the General Electric YF120, the F136 was claimed to have a greater temperature margin than the F135 due to the higher mass flow design making full use of the inlet. The F136 was canceled in December 2011 due to lack of funding.
The F-35 is expected to receive propulsion upgrades over its lifecycle to adapt to emerging threats and enable additional capabilities. In 2016, the Adaptive Engine Transition Program (AETP) was launched to develop and test adaptive cycle engines, with one major potential application being the re-engining of the F-35; in 2018, both GE and P&W were awarded contracts to develop 45,000 lbf (200 kN) thrust class demonstrators, with the designations XA100 and XA101 respectively. In addition to potential re-engining, P&W is also developing improvements to the baseline F135; the Engine Core Upgrade (ECU) is an update to the power module, originally called Growth Option 1.0 and then Engine Enhancement Package, that improves engine thrust and fuel burn by 5% and bleed air cooling capacity by 50% to support Block 4. The F135 ECU was selected over AETP engines in 2023 to provide additional power and cooling for the F-35. Although GE had expected that the more revolutionary XA100 could enter service with the F-35A and C by 2027 and could be adapted for the F-35B, the increased cost and risk caused the USAF to choose the F135 ECU instead.
=== Maintenance and logistics ===
The F-35 is designed to require less maintenance than prior stealth aircraft. Some 95% of all field-replaceable parts are "one deep"—that is, nothing else needs to be removed to reach the desired part; for instance, the ejection seat can be replaced without removing the canopy. The F-35 has a fibermat radar-absorbent material (RAM) baked into the skin, which is more durable, easier to work with, and faster to cure than older RAM coatings; similar coatings are being considered for application on older stealth aircraft such as the F-22. Skin corrosion on the F-22 led to the F-35 using a less galvanic corrosion-inducing skin gap filler, fewer gaps in the airframe skin needing filler, and better drainage. The flight control system uses electro-hydrostatic actuators rather than traditional hydraulic systems; these controls can be powered by lithium-ion batteries in case of emergency. Commonality between variants led to the USMC's first aircraft maintenance Field Training Detachment, which applied USAF lessons to their F-35 operations.
The F-35 was initially supported by a computerized maintenance management system named Autonomic Logistics Information System (ALIS). In concept, any F-35 can be serviced at any maintenance facility and all parts can be globally tracked and shared as needed. Due to numerous problems, such as unreliable diagnoses, excessive connectivity requirements, and security vulnerabilities, ALIS is being replaced by the cloud-based Operational Data Integrated Network (ODIN). From September 2020, ODIN base kits (OBKs) ran both ALIS and ODIN software, first at Marine Corps Air Station (MCAS) Yuma, Arizona, then at Naval Air Station Lemoore, California, in support of Strike Fighter Squadron (VFA) 125 on 16 July 2021, and then at Nellis Air Force Base, Nevada, in support of the 422nd Test and Evaluation Squadron (TES) on 6 August 2021. In 2022, over a dozen more OBK sites were slated to replace ALIS's Standard Operating Unit unclassified (SOU-U) servers. OBK performance is double that of ALIS.
== Operational history ==
=== Testing ===
The first F-35A, AA-1, conducted its engine run in September 2006 and first flew on 15 December 2006. Unlike all subsequent aircraft, AA-1 did not have the weight optimization from SWAT; consequently, it mainly tested subsystems common to subsequent aircraft, such as the propulsion, electrical system, and cockpit displays. This aircraft was retired from flight testing in December 2009 and was used for live-fire testing at NAS China Lake.
The first F-35B, BF-1, flew on 11 June 2008, while the first weight-optimized F-35A and F-35C, AF-1 and CF-1, flew on 14 November 2009 and 6 June 2010 respectively. The F-35B's first hover was on 17 March 2010, followed by its first vertical landing the next day. The F-35 Integrated Test Force (ITF) consisted of 18 aircraft at Edwards Air Force Base and Naval Air Station Patuxent River. Nine aircraft at Edwards (five F-35As, three F-35Bs, and one F-35C) performed flight sciences testing, such as F-35A envelope expansion, flight loads, and stores separation, as well as mission systems testing. The other nine aircraft at Patuxent River (five F-35Bs and four F-35Cs) were responsible for F-35B and C envelope expansion and STOVL and CV suitability testing. Additional carrier suitability testing was conducted at Naval Air Warfare Center Aircraft Division at Lakehurst, New Jersey. Two non-flying aircraft of each variant were used to test static loads and fatigue. For testing avionics and mission systems, a modified Boeing 737-300 with a duplication of the cockpit, the Lockheed Martin CATBird, has been used. Field testing of the F-35's sensors was conducted during Exercise Northern Edge 2009 and 2011, serving as significant risk-reduction steps.
Flight tests revealed several serious deficiencies that required costly redesigns, caused delays, and resulted in several fleet-wide groundings. In 2011, the F-35C failed to catch the arresting wire in all eight landing tests; a redesigned tail hook was delivered two years later. By June 2009, many of the initial flight test targets had been accomplished but the program was behind schedule. Software and mission systems were among the biggest sources of delays for the program, with sensor fusion proving especially challenging. In fatigue testing, the F-35B suffered several premature cracks, requiring a redesign of the structure. A third non-flying F-35B was planned to test the redesigned structure. The F-35B and C also had problems with the horizontal tails suffering heat damage from prolonged afterburner use. Early flight control laws had problems with "wing drop" and also made the airplane sluggish, with high angle-of-attack tests in 2015 against an F-16 showing a lack of energy.
At-sea testing of the F-35B was first conducted aboard USS Wasp. In October 2011, two F-35Bs conducted three weeks of initial sea trials, called Development Test I. The second F-35B sea trials, Development Test II, began in August 2013, with tests including nighttime operations; two aircraft completed 19 nighttime vertical landings using DAS imagery. The first operational testing involving six F-35Bs was done on the Wasp in May 2015. The final Development Test III on USS America involving operations in high sea states was completed in late 2016. A Royal Navy F-35 conducted the first "rolling" landing on board HMS Queen Elizabeth in October 2018.
After the redesigned tail hook arrived, the F-35C's carrier-based Development Test I began in November 2014 aboard USS Nimitz and focused on basic day carrier operations and establishing launch and recovery handling procedures. Development Test II, which focused on night operations, weapons loading, and full power launches, took place in October 2015. The final Development Test III was completed in August 2016, and included tests of asymmetric loads and certifying systems for landing qualifications and interoperability. Operational test of the F-35C was conducted in 2018 and the first operational squadron achieved safe-for-flight milestone that December, paving the way for its introduction in 2019.
The F-35's reliability and availability have fallen short of requirements, especially in the early years of testing. The ALIS maintenance and logistics system was plagued by excessive connectivity requirements and faulty diagnoses. In late 2017, the GAO reported the time needed to repair an F-35 part averaged 172 days, which was "twice the program's objective", and that a shortage of spare parts was degrading readiness. In 2019, while individual F-35 units achieved mission-capable rates above the 80% target for short periods during deployed operations, fleet-wide rates remained below target. The fleet availability goal of 65% was also not met, although the trend showed improvement. Internal gun accuracy of the F-35A was unacceptable until misalignment issues were addressed by 2024. As of 2020, the number of the program's most serious issues had been cut by half.
Operational test and evaluation (OT&E) with Block 3F, the final configuration for SDD, began in December 2018, but its completion was delayed particularly by technical problems in integration with the DOD's Joint Simulation Environment (JSE); the F-35 finally completed all JSE trials in September 2023.
=== United States ===
==== Training ====
The F-35A and F-35B were cleared for basic flight training in early 2012, although there were concerns over safety and performance due to lack of system maturity at the time. During the Low Rate Initial Production (LRIP) phase, the three U.S. military services jointly developed tactics and procedures using flight simulators, testing effectiveness, discovering problems and refining design. On 10 September 2012, the USAF began an operational utility evaluation (OUE) of the F-35A, including logistical support, maintenance, personnel training, and pilot execution.
The USMC F-35B Fleet Replacement Squadron (FRS) was initially based at Eglin AFB in 2012 alongside USAF F-35A training units, before moving to MCAS Beaufort in 2014 while another FRS was stood up at MCAS Miramar in 2020. The USAF F-35A basic course is held at Eglin AFB and Luke AFB; in January 2013, training began at Eglin with capacity for 100 pilots and 2,100 maintainers at once. Additionally, the 6th Weapons Squadron of the USAF Weapons School was activated at Nellis AFB in June 2017 for F-35A weapons instructor curriculum while the 65th Aggressor Squadron was reactivated with the F-35A in June 2022 to expand training against adversary stealth aircraft tactics. The USN stood up its F-35C FRS in 2012 with VFA-101 at Eglin AFB, but operations would later be transferred and consolidated under VFA-125 at NAS Lemoore in 2019. The F-35C was introduced to the Strike Fighter Tactics Instructor course, or TOPGUN, in 2020 and the additional capabilities of the aircraft greatly revamped the course syllabus.
==== U.S. Marine Corps ====
On 16 November 2012, the USMC received the first F-35B of VMFA-121 at MCAS Yuma. The USMC declared Initial Operational Capability (IOC) for the F-35B in the Block 2B configuration on 31 July 2015 after operational trials, with some limitations in night operations, mission systems, and weapons carriage. USMC F-35Bs participated in their first Red Flag exercise in July 2016 with 67 sorties conducted. The first F-35B deployment occurred in 2017 at MCAS Iwakuni, Japan; combat employment began in July 2018 from the amphibious assault ship USS Essex, with the first combat strike on 27 September 2018 against a Taliban target in Afghanistan.
In addition to deploying F-35Bs on amphibious assault ships, the USMC plans to disperse the aircraft among austere forward-deployed bases with shelter and concealment to enhance survivability while remaining close to a battlespace. Known as distributed STOVL operations (DSO), F-35Bs would operate from temporary bases in allied territory within hostile missile engagement zones and displace inside the enemy's 24- to 48-hour targeting cycle; this strategy allows F-35Bs to rapidly respond to operational needs, with mobile forward arming and refueling points (M-FARPs) accommodating KC-130 and MV-22 Osprey aircraft to rearm and refuel the jets, and with littoral areas serving as sea links for mobile distribution sites. For higher echelons of maintenance, F-35Bs would return from M-FARPs to rear-area friendly bases or ships. Helicopter-portable metal planking is needed to protect unprepared roads from the F-35B's exhaust; the USMC is studying lighter heat-resistant options. These operations have become part of the larger USMC Expeditionary Advanced Base Operations (EABO) concept.
The first USMC F-35C squadron, VMFA-314, achieved Full Operational Capability in July 2021 and was first deployed on board USS Abraham Lincoln as a part of Carrier Air Wing 9 in January 2022.
In 2024, Lt. Gen. Sami Sadat of Afghanistan described an operation using F-35Bs from USS Essex which bombed a Taliban position through cloud cover. "The impact [the F-35] left on my soldiers was amazing. Like, whoa, you know, we have this technology", Sadat said. "But also the impact on the Taliban was quite crippling, because they have never seen Afghan forces move in the winter, and they have never seen planes that could bomb through the clouds."
On 9 November 2024, Marine F-35Cs carried out strikes on the Houthi movement in Yemen in the context of the Red Sea crisis, making it the first time the F-35C has been used in combat.
==== U.S. Air Force ====
USAF F-35A in the Block 3i configuration achieved IOC with the USAF's 34th Fighter Squadron at Hill Air Force Base, Utah on 2 August 2016. F-35As conducted their first Red Flag exercise in 2017; system maturity had improved and the aircraft scored a kill ratio of 15:1 against an F-16 aggressor squadron in a high-threat environment. The first USAF F-35A deployment occurred on 15 April 2019 to Al Dhafra Air Base, UAE. On 27 April 2019, USAF F-35As were first used in combat in an airstrike on an Islamic State tunnel network in northern Iraq.
For European basing, RAF Lakenheath in the UK was chosen as the first installation to station two F-35A squadrons, with 48 aircraft adding to the 48th Fighter Wing's existing F-15C and F-15E squadrons. The first aircraft of the 495th Fighter Squadron arrived on 15 December 2021.
The F-35's operating cost is higher than some older USAF tactical aircraft. In fiscal year 2018, the F-35A's cost per flight hour (CPFH) was $44,000, a number that was reduced to $35,000 in 2019. For comparison, in 2015 the CPFH of the A-10 was $17,716; the F-15C, $41,921; and the F-16C, $22,514. Lockheed Martin hopes to reduce it to $25,000 by 2025 through performance-based logistics and other measures.
==== U.S. Navy ====
The USN achieved operational status with the F-35C in Block 3F on 28 February 2019. On 2 August 2021, the F-35C of VFA-147, as well as the CMV-22 Osprey, embarked on their maiden deployments as part of Carrier Air Wing 2 on board USS Carl Vinson.
USN F-35Cs operating from USS Carl Vinson took part in the training exercise Pacific Stellar 2025 in February, along with the French and Japanese navies.
In April 2025, F-35Cs from VFA-97 shot down multiple Houthi drones in the Red Sea, marking the first time the Navy had used the jet in combat.
=== United Kingdom ===
The United Kingdom's Royal Air Force and Royal Navy operate the F-35B. Called Lightning in British service, it has replaced the Harrier GR9, retired in 2010, and Tornado GR4, retired in 2019. The F-35 is to be Britain's primary strike aircraft for the next three decades. One of the Royal Navy's requirements was a Shipborne Rolling and Vertical Landing (SRVL) mode to increase maximum landing weight by using wing lift during landing. Like the Italian Navy, British F-35Bs use ski-jumps to fly from their aircraft carriers, HMS Queen Elizabeth and HMS Prince of Wales. British F-35Bs are not intended to use the Brimstone 2 missile. In July 2013, Chief of the Air Staff Air Chief Marshal Sir Stephen Dalton announced that No. 617 Squadron would be the RAF's first operational F-35 squadron.
The first British F-35 squadron was No. 17 (Reserve) Test and Evaluation Squadron (TES), which stood up on 12 April 2013 as the aircraft's Operational Evaluation Unit. By June 2013, the RAF had received three F-35s of the 48 on order, initially based at Eglin Air Force Base. In June 2015, the F-35B undertook its first launch from a ski-jump at NAS Patuxent River. On 5 July 2017, it was announced the second UK-based RAF squadron would be No. 207 Squadron, which reformed on 1 August 2019 as the Lightning Operational Conversion Unit. No. 617 Squadron reformed on 18 April 2018 during a ceremony in Washington, D.C., becoming the first RAF front-line squadron to operate the type; it received its first four F-35Bs on 6 June, which flew from MCAS Beaufort to RAF Marham. On 10 January 2019, No. 617 Squadron and its F-35s were declared combat-ready.
April 2019 saw the first overseas deployment of a UK F-35 squadron when No. 617 Squadron went to RAF Akrotiri, Cyprus. This reportedly led on 25 June 2019 to the first combat use of an RAF F-35B: an armed reconnaissance flight searching for Islamic State targets in Iraq and Syria. In October 2019, F-35s of 617 Squadron and No. 17 TES were embarked on HMS Queen Elizabeth for the first time. No. 617 Squadron departed RAF Marham on 22 January 2020 for their first Exercise Red Flag with the Lightning. As of November 2022, 26 F-35Bs were based in the United Kingdom (with 617 and 207 Squadrons) and a further three were permanently based in the United States (with 17 Squadron) for testing and evaluation purposes.
The UK's second operational squadron is the Fleet Air Arm's 809 Naval Air Squadron, which stood up in December 2023.
=== Australia ===
Australia's first F-35, designated A35-001, was manufactured in 2014, with flight training provided through the international Pilot Training Centre (PTC) at Luke Air Force Base in Arizona. The first two F-35s were unveiled to the Australian public on 3 March 2017 at the Avalon Airshow. By 2021, the Royal Australian Air Force had accepted 26 F-35As, with nine in the US and 17 operating with No. 3 Squadron and No. 2 Operational Conversion Unit at RAAF Base Williamtown. With 41 trained RAAF pilots and 225 trained maintenance technicians, the fleet was declared ready to deploy on operations. Australia was originally expected to receive all 72 F-35s by 2023; the final nine aircraft, built to the TR-3 standard, arrived in Australia in December 2024.
=== Israel ===
The Israeli Air Force (IAF) declared the F-35 operationally capable on 6 December 2017. According to Kuwaiti newspaper Al Jarida, in July 2018, a test mission of at least three IAF F-35s flew to Iran's capital Tehran and back to Tel Aviv. While publicly unconfirmed, regional leaders acted on the report; Iran's supreme leader Ali Khamenei reportedly fired the air force chief and commander of Iran's Revolutionary Guard Corps over the mission.
On 22 May 2018, IAF chief Amikam Norkin said that the service had employed their F-35Is in two attacks on two battle fronts, marking the first combat operation of an F-35 by any country. Norkin said it had been flown "all over the Middle East", and showed photos of an F-35I flying over Beirut in daylight. In July 2019, Israel expanded its strikes against Iranian missile shipments; IAF F-35Is allegedly struck Iranian targets in Iraq twice.
In November 2020, the IAF announced the delivery of a unique F-35I testbed aircraft among a delivery of four aircraft received in August, to be used to test and integrate Israeli-produced weapons and electronic systems on F-35s received later. This is the only example of a testbed F-35 delivered to a non-US air force.
On 11 May 2021, eight IAF F-35Is took part in an attack on 150 targets in Hamas' rocket array, including 50–70 launch pits in the northern Gaza Strip, as part of Operation Guardian of the Walls.
On 6 March 2022, the IDF stated that on 15 March 2021, F-35Is shot down two Iranian drones carrying weapons to the Gaza Strip. This was the first operational shoot-down and interception carried out by the F-35. They were also used in the Gaza war.
On 2 November 2023, the IDF posted on social media that they used an F-35I to shoot down a Houthi cruise missile over the Red Sea that was fired from Yemen during the Gaza war.
The F-35I Adir was used in the 29 September 2024 Israeli attacks on Yemen. F-35Is were also reportedly involved in the October 2024 Israeli strikes on Iran.
Britain supplies Israel with parts for the F-35 through the global spares pool. Patrick Wintour wrote in The Guardian that, following criticism of Israel's role in the Gaza war, the legality of continuing this supply was questioned. In 2025, in a court case testing whether the law was broken by supplying Israel with F-35 parts that could be used to attack Palestinians in Gaza, the government argued that preserving the British role in the F-35 fighter programme overrode UK laws on arms export controls and any UK obligation to prevent genocide.
=== Italy ===
Italy's F-35As were declared to have reached initial operational capability (IOC) on 30 November 2018. At the time, Italy had taken delivery of 10 F-35As and one F-35B; two F-35As and the F-35B were stationed in the U.S. for training, while the remaining eight F-35As were based at Amendola. Italian Navy F-35Bs have been operating from the Italian aircraft carrier ITS Cavour, and conducted drills with the US in the Philippine Sea in 2024.
=== Japan ===
Japan's F-35As were declared to have reached initial operational capability (IOC) on 29 March 2019. At the time Japan had taken delivery of 10 F-35As stationed in Misawa Air Base. Japan plans to eventually acquire a total of 147 F-35s, which will include 42 F-35Bs. It plans to use the latter variant to equip Japan's Izumo-class multi-purpose destroyers.
=== Norway ===
On 6 November 2019, Norway declared initial operational capability (IOC) for its fleet of 15 F-35As, out of a planned 52. On 6 January 2022, Norway's F-35As replaced its older F-16A and B models for the NATO quick reaction alert mission in the high north. By April 2025, 49 of the 52 aircraft had been delivered.
On 22 September 2023, two F-35As from the Royal Norwegian Air Force landed on a motorway near Tervo, Finland, showing for the first time that F-35As can operate from paved roads; unlike the F-35B, they cannot land vertically. The fighters were also refueled with their engines running. The commander of the Royal Norwegian Air Force, Major General Rolf Folland, said: "Fighter jets are vulnerable on the ground, so by being able to use small airfields – and now motorways – (this) increases our survivability in war".
=== Netherlands ===
On 27 December 2021, the Netherlands declared initial operational capability (IOC) for the 24 F-35As it had received to date from its order of 46. In 2022, the Netherlands announced it would order six additional F-35s, bringing the total ordered to 52. As of September 2024, 40 of the 52 had been delivered, and the Netherlands was seeking to order another six jets to completely phase out its F-16 fleet.
== Variants ==
The F-35 was designed with three initial variants – the F-35A, a CTOL land-based version; the F-35B, a STOVL version capable of use either on land or on aircraft carriers; and the F-35C, a CATOBAR carrier-based version. Since then, there has been work on the design of nationally specific versions for Israel and Canada.
=== F-35A ===
The F-35A is the conventional take-off and landing (CTOL) variant intended for the USAF and other air forces. It is the smallest and lightest version and is capable of 9 g, the highest of all variants.
Although the F-35A currently conducts aerial refueling via boom and receptacle method, the aircraft can be modified for probe-and-drogue refueling if needed by the customer. A drag chute pod can be installed on the F-35A, with the Royal Norwegian Air Force being the first operator to adopt it.
=== F-35B ===
The F-35B is the short take-off and vertical landing (STOVL) variant of the aircraft. Similar in size to the A variant, the B sacrifices about a third of the A variant's fuel volume to accommodate the shaft-driven lift fan (SDLF). This variant is limited to 7 g. Unlike other variants, the F-35B has no landing hook. The "STOVL/HOOK" control instead engages conversion between normal and vertical flight. The F-35B is capable of Mach 1.6 (1,960 km/h; 1,220 mph) and can perform vertical and/or short take-off and landing (V/STOL).
=== F-35C ===
The F-35C is a carrier-based variant designed for catapult-assisted take-off and barrier-arrested recovery (CATOBAR) operations from aircraft carriers. Compared to the F-35A, the F-35C features larger wings with foldable wingtip sections, larger control surfaces for improved low-speed control, stronger landing gear for the stresses of carrier arrested landings, a twin-wheel nose gear, and a stronger tailhook for use with carrier arrestor cables. The larger wing area allows for decreased landing speed while increasing both range and payload. The F-35C is limited to 7.5 g.
=== F-35I "Adir" ===
The F-35I Adir (Hebrew: אדיר, meaning "Awesome" or "Mighty One") is an F-35A with unique Israeli modifications. The US initially refused to allow such changes before permitting Israel to integrate its own electronic warfare systems, including sensors and countermeasures. The main computer has a plug-and-play function for add-on systems; proposals include an external jamming pod, and new Israeli air-to-air missiles and guided bombs in the internal weapon bays. A senior IAF official said that the F-35's stealth may be partly overcome within 10 years despite a 30- to 40-year service life, hence Israel's insistence on using its own electronic warfare systems. In 2010, Israel Aerospace Industries (IAI) considered a two-seat F-35 concept; an IAI executive noted that there was a "known demand for two seats not only from Israel but from other air forces." In 2008, IAI planned to produce conformal fuel tanks as well as stealthy external fuel tanks.
Israel had ordered a total of 75 F-35Is by 2023, with 36 already delivered as of November 2022.
=== Proposed variants ===
==== CF-35 ====
The Canadian CF-35 was a proposed variant that would differ from the F-35A through the addition of a drogue parachute and the potential inclusion of an F-35B/C-style refueling probe. In 2012, it was revealed that the CF-35 would employ the same boom refueling system as the F-35A. One alternative proposal would have been the adoption of the F-35C for its probe refueling and lower landing speed; however, the Parliamentary Budget Officer's report cited the F-35C's limited performance and payload as being too high a price to pay. Following the 2015 federal election, the Liberal Party, whose campaign had included a pledge to cancel the F-35 procurement, formed a new government and commenced an open competition to replace the existing CF-18 Hornet. A bespoke CF-35 variant was deemed too expensive to develop and was not considered in the competition; the Canadian government decided not to pursue any other modifications in the Future Fighter Capability Project and instead focused on the potential procurement of the existing F-35A variant.
On 28 March 2022, the Canadian Government began negotiations with Lockheed Martin for 88 F-35As to replace the aging fleet of CF-18 fighters starting in 2025. The aircraft are reported to cost up to CA$19bn total with a life-cycle cost estimated at CA$77bn over the course of the F-35 program. On 9 January 2023, Canada formally confirmed the purchase of 88 aircraft. The initial delivery to the Royal Canadian Air Force in 2026 will be 4 aircraft, followed by 6 aircraft each in 2027–2028, and the rest to be delivered by 2032. The additional characteristics confirmed for the CF-35 included the drag chute pod for landings at short/icy arctic runways, as well as the 'sidekick' system, which allows the CF-35 to carry up to 6 x AIM-120D missiles internally (instead of the typical internal capacity of 4 x AIM-120 missiles on other variants).
==== New export variant ====
In December 2021, it was reported that Lockheed Martin was developing a new variant for an unspecified foreign customer. The Department of Defense released US$49 million in funding for this work.
== Operators ==
Australia
Royal Australian Air Force – 72 F-35A delivered as of December 2024.
Belgium
Belgian Air Component – 8 F-35As officially delivered (though none had left the US as of March 2024); 34 planned as of 2019.
Denmark
Royal Danish Air Force – 17 F-35As delivered (including 6 stationed at Luke AFB for training) of the 27 planned for the RDAF as of April 2025.
Israel
Israeli Air Force – 46 delivered as of April 2025 (F-35I "Adir"), including one F-35 testbed aircraft, designated AS-15, for indigenous Israeli weapons, electronics, and structural upgrades. A total of 75 ordered.
Italy
Italian Air Force – 24 F-35As and 8 F-35Bs delivered as of April 2025, of 75 F-35As and 20 F-35Bs ordered for the Italian Air Force.
Italian Navy – 6 delivered as of September 2024, out of 20 F-35Bs ordered for the Italian Navy.
Japan
Japan Air Self-Defense Force – 42 F-35As operational as of April 2025 with a total order of 147, including 105 F-35As and 42 F-35Bs.
Netherlands
Royal Netherlands Air Force – 42 F-35As delivered and operational, of which 8 are trainer aircraft based at Luke Air Force Base in the USA. 52 F-35As ordered in total. The RNLAF is the second air force to operate a 5th-generation-only fighter fleet, following the retirement of its F-16s.
Norway
Royal Norwegian Air Force – 52 F-35As delivered. They differ from other F-35As through the addition of a drogue parachute.
Poland
Polish Air Force – 32 F-35A “Husarz” Block 4 jets with "Technology Refresh 3" software and drogue parachutes were ordered on 31 January 2020. The first F-35s manufactured for Poland rolled off the assembly line in 2024, and deliveries are expected to conclude in 2030. There are plans for two more squadrons of 16 jets each, for a total of 32 additional F-35s. The first domestic flights of the F-35 by Polish pilots took place in February 2025, marking the start of the country's use of the aircraft.
South Korea
Republic of Korea Air Force – 40 F-35As ordered and delivered as of January 2022, with 25 more ordered in September 2023.
Republic of Korea Navy – about 20 F-35Bs planned; the purchase has not yet been approved by the South Korean parliament.
United Kingdom
Royal Air Force and Royal Navy (owned by the RAF but jointly operated) – 39 F-35Bs received, with 35 in the UK after the loss of one aircraft in November 2021; the other three are in the US, where they are used for testing and training. A total of 48 were on order as of 2021; 138 were originally planned, though the expectation in 2021 was to eventually reach around 60 or 80. In 2022, it was announced that the UK would acquire 74 F-35Bs, with a decision on whether to go beyond that number, including the possibility of reviving the original plan of 138 aircraft, to be made in the mid-2020s. In February 2024, the United Kingdom appeared to signal a reaffirmation of its commitment to procure 138 F-35B aircraft, as per the original plan.
United States
United States Air Force – 400+ delivered with 1,763 F-35As planned
United States Marine Corps – 112 F-35B/C delivered with 280 F-35Bs and 140 F-35Cs planned
United States Navy – 110+ delivered with 273 F-35Cs planned
=== Future operators ===
Canada
Royal Canadian Air Force – 88 F-35As (Block 4) ordered on 9 January 2023. The first four are to be delivered in 2026, six in 2027, another six in 2028, and the rest delivered by 2032. The aircraft are to replace CF-18s delivered in the 1980s.
Czech Republic
Czech Air Force – On 29 June 2023, the U.S. State Department announced the approval of a possible sale to the Czech Republic of F-35 aircraft, munitions and related equipment worth up to $5.62 billion. On 29 January 2024, the Czech government signed a memorandum of understanding with the U.S. to buy 24 F-35As. In September 2024, the Czech Republic signed a contract for F-35A logistics support.
Finland
Finnish Air Force – In 2022, ordered 64 F-35A Block 4s via the HX Fighter Program to replace F/A-18 Hornets.
Germany
German Air Force – In 2022, ordered 35 F-35As for delivery starting in 2026. As of 2024, an order for 10 more was being considered. German F-35s will also replace the older Panavia Tornados in carrying the B61 nuclear bomb.
Greece
Hellenic Air Force – In 2024, ordered 20 F-35As for delivery in late 2027 to early 2028, with an option to buy 20 more.
Romania
Romanian Air Force – Romania signed the contract for 32 F-35A worth $6.5 billion on 21 November 2024. The plan is to buy 48 F-35A aircraft in two phases – first phase of 32 and second phase of 16. The first F-35s will arrive after 2030 and will replace the current Romanian F-16 fleet between 2034 and 2040.
Singapore
Republic of Singapore Air Force – 12 F-35Bs on order as of February 2024, with the first 4 to be delivered in 2026 and the other 8 in 2028. 8 F-35As have also been ordered and are expected to arrive by 2030.
Switzerland
Swiss Air Force – 36 F-35A ordered to replace the current F-5E/F Tiger II and F/A-18C/D Hornet. Deliveries will begin in 2027 and conclude in 2030.
=== Potential sales ===
India
Indian Air Force – In February 2025, U.S. President Donald Trump offered the F-35 to Prime Minister Narendra Modi of India, which, as of March 2025, was also weighing a competing offer of Russia's Sukhoi Su-57.
=== Cancellations ===
Republic of China
Republic of China Air Force – Taiwan has repeatedly expressed interest in buying the F-35 to deter and fight off any Chinese attempt to seize the island by force. It is reportedly most interested in the F-35B STOVL variant, which could enable the Republic of China Air Force to continue operations if China bombed the island's runways. The U.S. has repeatedly rebuffed this interest, for example in March 2009, September 2011, early 2017, and March 2018, usually citing the need to avoid provoking Beijing. In April 2018, another reason for U.S. reluctance surfaced: concern that Chinese spies within the Taiwanese Armed Forces might gain classified data about the aircraft. In November 2018, it was reported that Taiwanese military leaders had abandoned efforts to buy the F-35 and would instead buy a larger number of F-16V Viper aircraft, a decision reportedly motivated by concerns about industrial independence, cost, and espionage.
Thailand
Royal Thai Air Force – 8 or 12 planned to replace the F-16A/B Block 15 ADF in service. On 12 January 2022, Thailand's cabinet approved a budget for the first four F-35As, estimated at 13.8 billion baht, in FY2023. On 22 May 2023, a Royal Thai Air Force source said the United States Department of Defense had implied it would turn down Thailand's bid to buy F-35 fighters and instead offer F-16 Block 70/72 Viper and F-15EX Eagle II fighters.
Turkey
Turkish Air Force – 30 were ordered, of up to 100 total planned. The U.S. banned future purchases and canceled the contracts by early 2020, following Turkey's decision to buy the S-400 missile system from Russia. Six of Turkey's 30 ordered F-35As had been completed as of 2019; they remained in a hangar in the United States as of 2023 and have not been transferred to the USAF, despite a provision added by the U.S. Congress to the Fiscal Year 2020 defense budget authorizing such a transfer if necessary. Two more were on the assembly line in 2020. The first four F-35As had been delivered to Luke Air Force Base in 2018 and 2019 for the training of Turkish pilots. On 20 July 2020, the U.S. government formally approved the seizure of eight F-35As originally bound for Turkey and their transfer to the USAF, together with a contract to modify them to USAF specifications. As of January 2023, the U.S. had not refunded the $1.4 billion payment made by Turkey for the fighters. On 1 February 2024, the United States expressed willingness to readmit Turkey into the F-35 program if Turkey agrees to give up its S-400 system. After a phone call between Trump and Erdoğan in March 2025, it was reported in the press that Trump could approve the sale of F-35s to Türkiye if Türkiye resolves the S-400 issue.
United Arab Emirates
United Arab Emirates Air Force – Up to 50 F-35As were planned, but on 27 January 2021 the Biden administration temporarily suspended F-35 sales to the UAE. After reviewing the sale, the administration confirmed on 13 April 2021 that it would move forward with the deal. In December 2021, the UAE withdrew from the purchase because it did not agree to the additional U.S. terms of the transaction. On 14 September 2024, a senior UAE official said that the United Arab Emirates does not expect to resume talks with the U.S. about the F-35.
== Accidents and notable incidents ==
The F-35 has been described as a relatively safe military aircraft. Still, since 2014, more than a dozen have crashed or otherwise been involved in incidents that have killed or severely injured people or destroyed the aircraft. Some were caused by operator error; others by mechanical problems, some of which set the entire program back.
== Specifications (F-35A) ==
Data from Lockheed Martin: F-35 specifications, Lockheed Martin: F-35 weaponry, Lockheed Martin: F-35 Program Status, F-35 Program brief, FY2019 Selected Acquisition Report (SAR), Director of Operational Test & Evaluation
General characteristics
Crew: 1
Length: 51.4 ft (15.7 m)
Wingspan: 35 ft (11 m)
Height: 14.4 ft (4.4 m)
Wing area: 460 sq ft (43 m2)
Aspect ratio: 2.66
Empty weight: 29,300 lb (13,290 kg)
Gross weight: 49,540 lb (22,471 kg)
Max takeoff weight: 65,918 lb (29,900 kg)
Fuel capacity: 18,250 lb (8,278 kg) internal
Powerplant: 1 × Pratt & Whitney F135-PW-100 afterburning turbofan, 28,000 lbf (120 kN) thrust dry, 43,000 lbf (190 kN) with afterburner
Performance
Maximum speed: Mach 1.6 at high altitude
Mach 1.06, 700 knots (806 mph; 1,296 km/h) at sea level
Range: 1,500 nmi (1,700 mi, 2,800 km)
Combat range: 669 nmi (770 mi, 1,239 km) interdiction mission (air-to-surface) on internal fuel
760 nmi (870 mi; 1,410 km), air-to-air configuration on internal fuel
Service ceiling: 50,000 ft (15,000 m)
g limits: +9.0
Wing loading: 107.7 lb/sq ft (526 kg/m2) at gross weight
Thrust/weight: 0.87 at gross weight (1.07 at loaded weight with 50% internal fuel)
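The derived figures above (wing loading and thrust-to-weight ratio) follow directly from the listed data. A quick arithmetic sanity check, using only the values in this table:

```python
# Values taken from the specification list above (F-35A).
gross_weight_lb = 49_540   # gross weight
wing_area_sqft = 460       # wing area
thrust_ab_lbf = 43_000     # afterburning thrust

wing_loading = gross_weight_lb / wing_area_sqft      # lb per sq ft
thrust_to_weight = thrust_ab_lbf / gross_weight_lb   # dimensionless

print(round(wing_loading, 1))      # 107.7 lb/sq ft, matching the table
print(round(thrust_to_weight, 2))  # 0.87 at gross weight, matching the table
```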
Armament
Guns: 1 × 25 mm GAU-22/A 4-barrel rotary cannon, 180 rounds
Hardpoints: 4 × internal stations, 6 × external stations on wings with a capacity of 5,700 pounds (2,600 kg) internal, 15,000 pounds (6,800 kg) external, 18,000 pounds (8,200 kg) total weapons payload, with provisions to carry combinations of:
Missiles:
Air-to-air missiles:
AIM-9X Sidewinder
AIM-120 AMRAAM
AIM-132 ASRAAM
AIM-260 JATM (being integrated)
MBDA Meteor (Block 4, for F-35B, not before 2027)
Air-to-surface missiles:
AGM-88G AARGM-ER (Block 4)
AGM-158 JASSM
AGM-179 JAGM
SPEAR 3 (Block 4, in development, integration contracted)
Stand-in Attack Weapon (SiAW)
Anti-ship missiles:
AGM-158C LRASM (being integrated)
Joint Strike Missile (being integrated)
Bombs:
Joint Direct Attack Munition
Paveway
Precision-guided glide bomb:
AGM-154 JSOW
GBU-39 Small Diameter Bomb
GBU-53/B StormBreaker
B61 mod 12 nuclear bomb
Avionics
AN/APG-81 or AN/APG-85 (Lot 17 onwards) AESA radar
AN/AAQ-40 Electro-Optical Targeting System
AN/AAQ-37 Electro-Optical Distributed Aperture System
AN/ASQ-239 Barracuda electronic warfare/electronic countermeasures system
AN/ASQ-242 CNI suite, which includes
Harris Corporation Multifunction Advanced Data Link (MADL) communication system
Link 16 data link
SINCGARS
An IFF interrogator and transponder
HAVE QUICK
AM, VHF, UHF AM, and UHF FM Radio
GUARD survival radio
A radar altimeter
An instrument landing system
A TACAN system
An instrument carrier landing system
A JPALS
TADIL-J JVMF/VMF
=== Differences between variants ===
== Appearances in media ==
== See also ==
Related development
Lockheed Martin X-35 – Concept demonstrator aircraft for Joint Strike Fighter program
Aircraft of comparable role, configuration, and era
Chengdu J-20 – Chinese fifth-generation fighter aircraft
HAL AMCA – Indian fifth-generation fighter under development by Aeronautical Development Agency and Hindustan Aeronautics Limited
KAI KF-21 Boramae – Advanced multirole fighter aircraft under development by South Korea and Indonesia
Lockheed Martin F-22 Raptor – American fifth-generation air superiority fighter
Shenyang J-35 – Chinese fifth-generation fighter aircraft
Sukhoi Su-57 – Russian fifth-generation fighter aircraft
Sukhoi Su-75 Checkmate – Russian single engine fifth-generation fighter under development by Sukhoi
TAI TF Kaan – Turkish fifth-generation fighter under development by Turkish Aerospace Industries
Related lists
List of fighter aircraft
List of active United States military aircraft
List of megaprojects, Aerospace
List of military electronics of the United States
== Notes ==
== References ==
=== Bibliography ===
Hamstra, Jeffrey (2019). Hamstra, Jeffrey W. (ed.). The F-35 Lightning II: From Concept to Cockpit. American Institute of Aeronautics and Astronautics. doi:10.2514/4.105678. ISBN 978-1-62410-566-1. S2CID 212996081.
Keijsper, Gerald (2007). Lockheed F-35 Joint Strike Fighter. London: Pen & Sword Aviation. ISBN 978-1-84415-631-3.
Lake, Jon. "The West's Great Hope". AirForces Monthly, December 2010.
Polmar, Norman (2005). The Naval Institute Guide to the Ships and Aircraft of the U.S. Fleet. Annapolis, MD: Naval Institute Press. ISBN 978-1-59114-685-8.
== Further reading ==
Borgu, Aldo (2004). A Big Deal: Australia's Future Air Combat Capability. Canberra: Australian Strategic Policy Institute. ISBN 1-920722-25-4.
Spick, Mike (2002). The Illustrated Directory of Fighters. London: Salamander. ISBN 1-84065-384-1.
Winchester, Jim (2005). Concept Aircraft: Prototypes, X-Planes, and Experimental Aircraft. San Diego, CA: Thunder Bay Press. ISBN 978-1-59223-480-6. OCLC 636459025.
== External links ==
Official JSF website. Archived 27 October 2007 at the Wayback Machine.
Official F-35 Team website
F35 Lightning II | Northrop Grumman
F-35 page on U.S. Naval Air Systems Command site. Archived 7 March 2010 at the Wayback Machine.
F-35 – Royal Air Force |
Facial recognition system | A facial recognition system is a technology potentially capable of matching a human face from a digital image or a video frame against a database of faces. Such a system is typically employed to authenticate users through ID verification services, and works by pinpointing and measuring facial features from a given image.
Development of similar systems began in the 1960s, starting as a form of computer application. Facial recognition systems have since seen wider use on smartphones and in other forms of technology, such as robotics. Because computerized facial recognition involves measuring a human's physiological characteristics, facial recognition systems are categorized as biometrics. Although the accuracy of facial recognition as a biometric technology is lower than that of iris recognition, fingerprint image acquisition, palm recognition, or voice recognition, it is widely adopted due to its contactless process. Facial recognition systems have been deployed in advanced human–computer interaction, video surveillance, law enforcement, passenger screening, decisions on employment and housing, and automatic indexing of images.
Facial recognition systems are employed throughout the world today by governments and private companies. Their effectiveness varies, and some systems have previously been scrapped because of their ineffectiveness. The use of facial recognition systems has also raised controversy, with claims that the systems violate citizens' privacy, commonly make incorrect identifications, encourage gender norms and racial profiling, and do not protect important biometric data. The appearance of synthetic media such as deepfakes has also raised concerns about its security. These claims have led to the ban of facial recognition systems in several cities in the United States. Growing societal concerns led social networking company Meta Platforms to shut down its Facebook facial recognition system in 2021, deleting the face scan data of more than one billion users. The change represented one of the largest shifts in facial recognition usage in the technology's history. IBM also stopped offering facial recognition technology due to similar concerns.
== History of facial recognition technology ==
Automated facial recognition was pioneered in the 1960s by Woody Bledsoe, Helen Chan Wolf, and Charles Bisson, whose work focused on teaching computers to recognize human faces. Their early facial recognition project was dubbed "man-machine" because a human first needed to establish the coordinates of facial features in a photograph before they could be used by a computer for recognition. Using a graphics tablet, a human would pinpoint facial feature coordinates, such as the pupil centers, the inside and outside corners of the eyes, and the widow's peak in the hairline. The coordinates were used to calculate 20 individual distances, including the width of the mouth and of the eyes. A human could process about 40 pictures an hour, building a database of these computed distances. A computer would then automatically compare the distances for each photograph, calculate the difference between the distances, and return the closest records as a possible match.
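The "man-machine" approach above can be sketched in a few lines: pairwise distances between manually entered landmark coordinates form a vector, and the database record with the nearest vector is returned as the match. All landmark names and coordinates below are illustrative stand-ins, not historical data.

```python
import math

def landmark_distances(points):
    """Compute all pairwise Euclidean distances between landmark points."""
    names = sorted(points)
    return [
        math.dist(points[a], points[b])
        for i, a in enumerate(names)
        for b in names[i + 1:]
    ]

def closest_match(probe, database):
    """Return the record whose distance vector is nearest the probe's."""
    probe_vec = landmark_distances(probe)
    def diff(record):
        vec = landmark_distances(record[1])
        return sum((p - q) ** 2 for p, q in zip(probe_vec, vec))
    return min(database, key=diff)[0]

# Hypothetical coordinates, as a human operator would have entered them.
db = [
    ("alice", {"pupil_l": (30, 40), "pupil_r": (70, 40), "mouth": (50, 80)}),
    ("bob",   {"pupil_l": (28, 42), "pupil_r": (75, 41), "mouth": (52, 90)}),
]
probe = {"pupil_l": (31, 40), "pupil_r": (71, 39), "mouth": (50, 81)}
print(closest_match(probe, db))  # → alice
```

The historical system used 20 such distances per face; the principle (distances as a comparable signature) is the same.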
In 1970, Takeo Kanade publicly demonstrated a face-matching system that located anatomical features such as the chin and calculated the distance ratio between facial features without human intervention. Later tests revealed that the system could not always reliably identify facial features. Nonetheless, interest in the subject grew and in 1977 Kanade published the first detailed book on facial recognition technology.
In 1993, the Defense Advanced Research Projects Agency (DARPA) and the Army Research Laboratory (ARL) established the face recognition technology program FERET to develop "automatic face recognition capabilities" that could be employed in a productive real-life environment "to assist security, intelligence, and law enforcement personnel in the performance of their duties." Face recognition systems that had been trialled in research labs were evaluated. The FERET tests found that while the performance of existing automated facial recognition systems varied, a handful of existing methods could viably be used to recognize faces in still images taken in a controlled environment. The FERET tests spawned three US companies that sold automated facial recognition systems. Vision Corporation and Miros Inc were founded in 1994 by researchers who used the results of the FERET tests as a selling point. Viisage Technology was established by an identification card defense contractor in 1996 to commercially exploit the rights to the facial recognition algorithm developed by Alex Pentland at MIT.
Following the 1993 FERET face-recognition vendor test, the Department of Motor Vehicles (DMV) offices in West Virginia and New Mexico became the first DMV offices to use automated facial recognition systems to prevent people from obtaining multiple driving licenses under different names. Driver's licenses in the United States were at that point a commonly accepted form of photo identification. DMV offices across the United States were undergoing a technological upgrade and were in the process of establishing databases of digital ID photographs. This enabled DMV offices to deploy the facial recognition systems on the market to search photographs for new driving licenses against the existing DMV database. DMV offices became one of the first major markets for automated facial recognition technology and introduced US citizens to facial recognition as a standard method of identification. The increase of the US prison population in the 1990s prompted U.S. states to establish connected and automated identification systems that incorporated digital biometric databases; in some instances these included facial recognition. In 1999, Minnesota incorporated the facial recognition system FaceIT by Visionics into a mug shot booking system that allowed police, judges and court officers to track criminals across the state.
Until the 1990s, facial recognition systems were developed primarily using photographic portraits of human faces. Research on reliably locating a face in an image that contains other objects gained traction in the early 1990s with principal component analysis (PCA). The PCA method of face detection is also known as Eigenface and was developed by Matthew Turk and Alex Pentland. Turk and Pentland combined the conceptual approach of the Karhunen–Loève theorem and factor analysis to develop a linear model. Eigenfaces are determined based on global and orthogonal features in human faces. A human face is calculated as a weighted combination of a number of Eigenfaces. Because only a few Eigenfaces were needed to encode the human faces of a given population, Turk and Pentland's PCA face detection method greatly reduced the amount of data that had to be processed to detect a face. Pentland in 1994 defined Eigenface features, including eigen eyes, eigen mouths and eigen noses, to advance the use of PCA in facial recognition. In 1997, the PCA Eigenface method of face recognition was improved upon using linear discriminant analysis (LDA) to produce Fisherfaces. LDA Fisherfaces became the dominant approach in PCA feature-based face recognition, while Eigenfaces continued to be used for face reconstruction. In these approaches no global structure of the face is calculated which links the facial features or parts.
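The Eigenface idea can be sketched with a few lines of linear algebra: center the training faces, take the top principal components, and encode each face as a handful of weights on those components. The "faces" here are random synthetic vectors standing in for real flattened images.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((20, 64))       # 20 training faces, each a flattened 8x8 image

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Principal components (the eigenfaces) via SVD of the centered data.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 5
eigenfaces = vt[:k]                # top-k eigenfaces, one per row

def encode(face):
    """Represent a face as k weights on the eigenfaces."""
    return eigenfaces @ (face - mean_face)

def reconstruct(weights):
    """Approximate a face as a weighted combination of eigenfaces."""
    return mean_face + eigenfaces.T @ weights

w = encode(faces[0])
print(w.shape)  # 64 pixel values compressed to 5 coefficients
```

This is the data reduction Turk and Pentland exploited: comparing 5 weights is far cheaper than comparing 64 pixels.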
Purely feature-based approaches to facial recognition were overtaken in the late 1990s by the Bochum system, which used Gabor filters to record the face features and computed a grid of the face structure to link the features. Christoph von der Malsburg and his research team at the University of Bochum developed Elastic Bunch Graph Matching in the mid-1990s to extract a face from an image using skin segmentation. By 1997, the face detection method developed by Malsburg outperformed most other facial detection systems on the market. The so-called "Bochum system" of face detection was sold commercially as ZN-Face to operators of airports and other busy locations. The software was "robust enough to make identifications from less-than-perfect face views. It can also often see through such impediments to identification as mustaches, beards, changed hairstyles and glasses—even sunglasses".
Real-time face detection in video footage became possible in 2001 with the Viola–Jones object detection framework for faces. Paul Viola and Michael Jones combined Haar-like features with a cascade of classifiers trained using AdaBoost to create the first real-time frontal-view face detector. By 2015, the Viola–Jones algorithm had been implemented using small low-power detectors on handheld devices and embedded systems. The Viola–Jones algorithm has thus not only broadened the practical application of face recognition systems but has also been used to support new features in user interfaces and teleconferencing.
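A key ingredient of the Viola–Jones framework is the integral image, which makes any rectangular pixel sum, and hence any Haar-like feature, computable in constant time. A minimal sketch of that trick on a toy 4×4 image:

```python
import numpy as np

def integral_image(img):
    """Each cell holds the sum of all pixels above and to the left of it."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of img[top:top+h, left:left+w] using at most 4 lookups."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_feature(ii, top, left, h, w):
    """Horizontal two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, top, left, h, half) - rect_sum(ii, top, left + half, h, half)

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))  # 0+1+4+5 = 10.0
```

The real detector evaluates thousands of such features per window, which is only feasible because each one costs a handful of lookups.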
Ukraine is using the US-based Clearview AI facial recognition software to identify dead Russian soldiers; it has conducted 8,600 searches and identified the families of 582 deceased soldiers. The IT volunteer section of the Ukrainian army using the software then contacts the families of the deceased to raise awareness of Russian activities in Ukraine, with the main goal of destabilising the Russian government; this can be seen as a form of psychological warfare. About 340 Ukrainian government officials in five government ministries are using the technology, which is also used to catch spies who might try to enter Ukraine.
Clearview AI's facial recognition database is available only to government agencies, which may use the technology only to assist in law enforcement investigations or in connection with national security.
The software, originally designed for US law enforcement, was donated to Ukraine by Clearview AI. Russia is thought to be using it to find anti-war activists. Using it in war raises new ethical concerns. One London-based surveillance expert, Stephen Hare, is concerned it might make the Ukrainians appear inhuman: "Is it actually working? Or is it making [Russians] say: 'Look at these lawless, cruel Ukrainians, doing this to our boys'?"
== Techniques for face recognition ==
While humans can recognize faces without much effort, facial recognition is a challenging pattern recognition problem in computing. Facial recognition systems attempt to identify a human face, which is three-dimensional and changes in appearance with lighting and facial expression, based on its two-dimensional image. To accomplish this computational task, facial recognition systems perform four steps. First, face detection is used to segment the face from the image background. In the second step, the segmented face image is aligned to account for face pose, image size, and photographic properties, such as illumination and grayscale. The purpose of the alignment process is to enable the accurate localization of facial features in the third step, facial feature extraction. Features such as the eyes, nose, and mouth are pinpointed and measured in the image to represent the face. The feature vector thus established is then, in the fourth step, matched against a database of faces.
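The four steps above can be sketched as a pipeline. Every stage here is a trivial stand-in (real systems plug in actual detectors, aligners, feature extractors, and matchers), so the point is the data flow, not the algorithms:

```python
def detect(image):
    """Step 1: segment the face from the background (stub: fixed crop)."""
    return [row[1:3] for row in image[1:3]]

def align(face):
    """Step 2: normalize photographic properties (stub: scale pixels to 0..1)."""
    peak = max(max(row) for row in face) or 1
    return [[v / peak for v in row] for row in face]

def extract(face):
    """Step 3: derive a feature vector (stub: flatten the pixels)."""
    return [v for row in face for v in row]

def match(vector, database):
    """Step 4: nearest database entry by squared distance."""
    def dist(entry):
        return sum((a - b) ** 2 for a, b in zip(vector, entry[1]))
    return min(database, key=dist)[0]

# Toy 4x4 "image" with a bright diagonal in its center region.
image = [[0, 0, 0, 0], [0, 9, 3, 0], [0, 3, 9, 0], [0, 0, 0, 0]]
vec = extract(align(detect(image)))
db = [("id-1", [1.0, 0.0, 0.0, 1.0]), ("id-2", [0.0, 1.0, 1.0, 0.0])]
print(match(vec, db))  # → id-1
```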
=== Traditional ===
Some face recognition algorithms identify facial features by extracting landmarks, or features, from an image of the subject's face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features.
Other algorithms normalize a gallery of face images and then compress the face data, only saving the data in the image that is useful for face recognition. A probe image is then compared with the face data. One of the earliest successful systems is based on template matching techniques applied to a set of salient facial features, providing a sort of compressed face representation.
Recognition algorithms can be divided into two main approaches: geometric, which looks at distinguishing features, and photometric, a statistical approach that distills an image into values and compares those values with templates to eliminate variances. Some classify these algorithms into two broad categories: holistic and feature-based models. The former attempt to recognize the face in its entirety, while feature-based models subdivide the face into components such as individual features and analyze each one, as well as its spatial location with respect to the other features.
Popular recognition algorithms include principal component analysis using eigenfaces, linear discriminant analysis (the Fisherface algorithm), elastic bunch graph matching, the hidden Markov model, multilinear subspace learning using tensor representation, and neurally motivated dynamic link matching. Modern facial recognition systems make increasing use of machine learning techniques such as deep learning.
=== Human identification at a distance (HID) ===
To enable human identification at a distance (HID), low-resolution images of faces are enhanced using face hallucination. In CCTV imagery faces are often very small, but because facial recognition algorithms that identify and plot facial features require high-resolution images, resolution enhancement techniques have been developed to enable facial recognition systems to work with imagery captured in environments with a low signal-to-noise ratio. Face hallucination algorithms applied to images before they are submitted to the facial recognition system use example-based machine learning with pixel substitution or nearest-neighbour distribution indexes that may also incorporate demographic and age-related facial characteristics. Use of face hallucination techniques improves the performance of high-resolution facial recognition algorithms and may be used to overcome the inherent limitations of super-resolution algorithms. Face hallucination techniques are also used to pre-treat imagery where faces are disguised: the disguise, such as sunglasses, is removed and the face hallucination algorithm is applied to the image. Such algorithms need to be trained on similar face images with and without the disguise. To fill in the area uncovered by removing the disguise, face hallucination algorithms need to correctly map the entire state of the face, which may not be possible given the momentary facial expression captured in the low-resolution image.
=== 3-dimensional recognition ===
Three-dimensional face recognition technique uses 3D sensors to capture information about the shape of a face. This information is then used to identify distinctive features on the surface of a face, such as the contour of the eye sockets, nose, and chin.
One advantage of 3D face recognition is that it is not affected by changes in lighting like other techniques. It can also identify a face from a range of viewing angles, including a profile view. Three-dimensional data points from a face vastly improve the precision of face recognition. 3D face recognition research is enabled by the development of sophisticated sensors that project structured light onto the face. Because 3D matching techniques are sensitive to expressions, researchers at Technion applied tools from metric geometry to treat expressions as isometries. A newer method of capturing 3D images of faces uses three tracking cameras pointed at different angles: one at the front of the subject, a second to the side, and a third at an angle. The cameras work together to track a subject's face in real time and perform face detection and recognition.
=== Thermal cameras ===
A different way of acquiring input data for face recognition is to use thermal cameras; with this approach the camera detects only the shape of the head, ignoring accessories such as glasses, hats, or makeup. Unlike conventional cameras, thermal cameras can capture facial imagery even in low-light and nighttime conditions without using a flash and exposing the position of the camera. However, databases for thermal face recognition are limited. Efforts to build databases of thermal face images date back to 2004. By 2016, several databases existed, including the IIITD-PSE and the Notre Dame thermal face database. Current thermal face recognition systems are not able to reliably detect a face in a thermal image taken of an outdoor environment.
In 2018, researchers from the U.S. Army Research Laboratory (ARL) developed a technique that would allow them to match facial imagery obtained using a thermal camera with those in databases that were captured using a conventional camera. Known as a cross-spectrum synthesis method due to how it bridges facial recognition from two different imaging modalities, this method synthesizes a single image by analyzing multiple facial regions and details. It consists of a non-linear regression model that maps a specific thermal image into a corresponding visible facial image, and an optimization problem that projects the latent representation back into the image space. ARL scientists have noted that the approach works by combining global information (i.e. features across the entire face) with local information (i.e. features regarding the eyes, nose, and mouth). According to performance tests conducted at ARL, the multi-region cross-spectrum synthesis model demonstrated a performance improvement of about 30% over baseline methods and about 5% over state-of-the-art methods.
== Application ==
=== Social media ===
Founded in 2013, Looksery went on to raise money for its face modification app on Kickstarter. After successful crowdfunding, Looksery launched in October 2014. The application allows video chat with others through a special filter for faces that modifies the look of users. Image augmenting applications already on the market, such as Facetune and Perfect365, were limited to static images, whereas Looksery brought augmented reality to live video. In late 2015 Snapchat purchased Looksery, which then became its landmark Lenses function. Snapchat filter applications use face detection technology, and on the basis of the facial features identified in an image, a 3D mesh mask is layered over the face. A variety of technologies attempt to fool facial recognition software by the use of anti-facial recognition masks.
DeepFace is a deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images. It employs a nine-layer neural net with over 120 million connection weights, and was trained on four million images uploaded by Facebook users. The system is said to be 97% accurate, compared to 85% for the FBI's Next Generation Identification system.
TikTok's algorithm has been regarded as especially effective, but many were left to wonder at the exact programming that caused the app to be so effective in guessing the user's desired content. In June 2020, TikTok released a statement regarding the "For You" page, and how they recommended videos to users, which did not include facial recognition. In February 2021, however, TikTok agreed to a $92 million settlement to a US lawsuit which alleged that the app had used facial recognition in both user videos and its algorithm to identify age, gender and ethnicity.
=== ID verification ===
An emerging use of facial recognition is in ID verification services. Many companies are now working in this market to provide such services to banks, ICOs, and other e-businesses. Face recognition has been leveraged as a form of biometric authentication for various computing platforms and devices; Android 4.0 "Ice Cream Sandwich" added facial recognition using a smartphone's front camera as a means of unlocking devices, while Microsoft introduced face recognition login to its Xbox 360 video game console through its Kinect accessory, as well as to Windows 10 via its "Windows Hello" platform (which requires an infrared-illuminated camera). In 2017, Apple's iPhone X smartphone introduced facial recognition to the product line with its "Face ID" platform, which uses an infrared illumination system.
==== Face ID ====
Apple introduced Face ID on the flagship iPhone X as a biometric authentication successor to Touch ID, a fingerprint-based system. Face ID has a facial recognition sensor that consists of two parts: a "Romeo" module that projects more than 30,000 infrared dots onto the user's face, and a "Juliet" module that reads the pattern. The pattern is sent to a local "Secure Enclave" in the device's central processing unit (CPU) to confirm a match with the phone owner's face.
The facial pattern is not accessible by Apple. The system will not work with eyes closed, in an effort to prevent unauthorized access. The technology learns from changes in a user's appearance, and therefore works with hats, scarves, glasses, many types of sunglasses, beards, and makeup. It also works in the dark. This is done by using a "Flood Illuminator", a dedicated infrared flash that projects invisible infrared light onto the user's face to capture a 2D picture in addition to the 30,000 facial points.
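At its core, this kind of matching compares a captured pattern against a stored enrollment template. The following is a highly simplified sketch with a made-up point-cloud template and threshold; Apple's actual Secure Enclave matching algorithm is not public.

```python
import numpy as np

rng = np.random.default_rng(2)

# Enrollment: a stored template of hypothetical 3D facial depth points.
enrolled = rng.normal(size=(300, 3))

def matches(captured, template, threshold=0.05):
    """Accept only if the mean point-wise deviation from the template is small."""
    return float(np.mean(np.linalg.norm(captured - template, axis=1))) < threshold

# The same face with small sensor noise is accepted; a different face is not.
same_face = enrolled + rng.normal(scale=0.005, size=enrolled.shape)
other_face = rng.normal(size=(300, 3))
print(matches(same_face, enrolled), matches(other_face, enrolled))
```

The "learning from changes in appearance" behavior would correspond to gradually updating the stored template after successful unlocks, which this sketch omits.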
=== Healthcare ===
Facial recognition algorithms can help in diagnosing some diseases using specific features of the nose, cheeks, and other parts of the human face. Relying on developed data sets, machine learning has been used to identify genetic abnormalities based solely on facial dimensions. FRT has also been used to verify patients' identities before surgical procedures.
In March 2022, according to a publication by Forbes, the AI development company FDNA claimed that over the space of 10 years it had worked with geneticists to develop a database of about 5,000 diseases, 1,500 of which can be detected with facial recognition algorithms.
=== Deployment of FRT for availing government services ===
==== India ====
In an interview, the National Health Authority chief Dr. R.S. Sharma said that facial recognition technology would be used in conjunction with Aadhaar to authenticate the identity of people seeking vaccines. Ten human rights and digital rights organizations and more than 150 individuals signed a statement by the Internet Freedom Foundation that raised alarm against the deployment of facial recognition technology in the central government's vaccination drive. The statement argued that implementing an error-prone system without adequate legislation containing mandatory safeguards would deprive citizens of essential services, and that linking this untested technology to the vaccination roll-out in India would only exclude persons from the vaccine delivery system.
In July, 2021, a press release by the Government of Meghalaya stated that facial recognition technology (FRT) would be used to verify the identity of pensioners to issue a Digital Life Certificate using "Pensioner's Life Certification Verification" mobile application. The notice, according to the press release, purports to offer pensioners "a secure, easy and hassle-free interface for verifying their liveness to the Pension Disbursing Authorities from the comfort of their homes using smart phones". Mr. Jade Jeremiah Lyngdoh, a law student, sent a legal notice to the relevant authorities highlighting that "The application has been rolled out without any anchoring legislation which governs the processing of personal data and thus, lacks lawfulness and the Government is not empowered to process data."
=== Deployment in security services ===
==== Commonwealth ====
The Australian Border Force and New Zealand Customs Service have set up an automated border processing system called SmartGate that uses face recognition, which compares the face of the traveller with the data in the e-passport microchip. All Canadian international airports use facial recognition as part of the Primary Inspection Kiosk program, which compares a traveller's face to the photo stored on their ePassport. This program first came to Vancouver International Airport in early 2017 and was rolled out to all remaining international airports in 2018–2019.
Police forces in the United Kingdom have been trialing live facial recognition technology at public events since 2015. In May 2017, a man was arrested using an automatic facial recognition (AFR) system mounted on a van operated by the South Wales Police. Ars Technica reported that "this appears to be the first time [AFR] has led to an arrest". However, a 2018 report by Big Brother Watch found that these systems were up to 98% inaccurate. The report also revealed that two UK police forces, South Wales Police and the Metropolitan Police, were using live facial recognition at public events and in public spaces.
In September 2019, South Wales Police's use of facial recognition was ruled lawful. Live facial recognition had been trialled since 2016 in the streets of London and was used on a regular basis by the Metropolitan Police from the beginning of 2020. In August 2020 the Court of Appeal ruled that the way the facial recognition system had been used by the South Wales Police in 2017 and 2018 violated human rights.
However, by 2024 the Metropolitan Police were using the technique with a database of 16,000 suspects, leading to over 360 arrests, including rapists and someone wanted for grievous bodily harm for 8 years. They claim a false positive rate of only 1 in 6,000. The photos of those not identified by the system are deleted immediately.
==== United States ====
The U.S. Department of State operates one of the largest face recognition systems in the world, with a database of 117 million American adults whose photos are typically drawn from driver's license records. Although the system is still far from complete, it is being used in certain cities to give clues as to who was in a photo. The FBI uses the photos as an investigative tool, not for positive identification. As of 2016, facial recognition was being used to identify people in photos taken by police in San Diego and Los Angeles (not on real-time video, and only against booking photos), and use was planned in West Virginia and Dallas.
In recent years Maryland has used face recognition by comparing people's faces to their driver's license photos. The system drew controversy when it was used in Baltimore to arrest unruly protesters after the death of Freddie Gray in police custody. Many other states are using or developing a similar system; however, some states have laws prohibiting its use.
The FBI has also instituted its Next Generation Identification program to include face recognition, as well as more traditional biometrics like fingerprints and iris scans, which can pull from both criminal and civil databases. The federal Government Accountability Office criticized the FBI for not addressing various concerns related to privacy and accuracy.
Starting in 2018, U.S. Customs and Border Protection deployed "biometric face scanners" at U.S. airports. Passengers taking outbound international flights can complete the check-in, security, and boarding process after having facial images captured and verified against the ID photos stored in CBP's database. Images captured for travelers with U.S. citizenship will be deleted within 12 hours. The Transportation Security Administration (TSA) has expressed its intention to adopt a similar program for domestic air travel during the security check process. The American Civil Liberties Union is one of the organizations opposing the program, out of concern that it will be used for surveillance purposes.
In 2019, researchers reported that Immigration and Customs Enforcement (ICE) uses facial recognition software against state driver's license databases, including for some states that provide licenses to undocumented immigrants.
In December 2022, 16 major domestic airports in the US started testing facial-recognition tech where kiosks with cameras are checking the photos on travelers' IDs to make sure that passengers are not impostors. In 2025, it was revealed that the New Orleans Police Department had rolled out what the ACLU's Freed Wessler called "the first known widespread effort by police in a major US city to use AI to identify people in live camera feeds for the purpose of making immediate arrests." in defiance of a 2022 city ordinance limiting the use of the technology.
==== China ====
In 2006, the "Skynet" (天網) Project was initiated by the Chinese government to implement CCTV surveillance nationwide, and as of 2018, 20 million cameras, many of them capable of real-time facial recognition, had been deployed across the country for this project. Some officials claim that the current Skynet system can scan the entire Chinese population in one second and the world population in two seconds.
In 2017, Qingdao police were able to identify twenty-five wanted suspects using facial recognition equipment at the Qingdao International Beer Festival, one of whom had been on the run for 10 years. The equipment works by recording a 15-second video clip and taking multiple snapshots of the subject. That data is compared and analyzed with images from the police department's database, and within 20 minutes the subject can be identified with 98.1% accuracy.
In 2018, Chinese police in Zhengzhou and Beijing were using smart glasses to take photos which are compared against a government database using facial recognition to identify suspects, retrieve an address, and track people moving beyond their home areas.
As of late 2017, China has deployed facial recognition and artificial intelligence technology in Xinjiang. Reporters visiting the region found surveillance cameras installed every hundred meters or so in several cities, as well as facial recognition checkpoints at areas like gas stations, shopping centers, and mosque entrances. In May 2019, Human Rights Watch reported finding Face++ code in the Integrated Joint Operations Platform (IJOP), a police surveillance app used to collect data on, and track the Uighur community in Xinjiang. Human Rights Watch released a correction to its report in June 2019 stating that the Chinese company Megvii did not appear to have collaborated on IJOP, and that the Face++ code in the app was inoperable. In February 2020, following the Coronavirus outbreak, Megvii applied for a bank loan to optimize the body temperature screening system it had launched to help identify people with symptoms of a Coronavirus infection in crowds. In the loan application Megvii stated that it needed to improve the accuracy of identifying masked individuals.
Many public places in China are equipped with facial recognition systems, including railway stations, airports, tourist attractions, expos, and office buildings. In October 2019, a professor at Zhejiang Sci-Tech University sued the Hangzhou Safari Park for abusing the private biometric information of customers. The safari park uses facial recognition technology to verify the identities of its Year Card holders. An estimated 300 tourist sites in China have installed facial recognition systems and use them to admit visitors. The lawsuit is reported to be the first concerning the use of facial recognition systems in China. In August 2020, Radio Free Asia reported that in 2019 Geng Guanjun, a citizen of Taiyuan City who had used the WeChat app by Tencent to forward a video to a friend in the United States, was subsequently convicted on the charge of "picking quarrels and provoking troubles". The court documents showed that the Chinese police used a facial recognition system to identify Geng Guanjun as an "overseas democracy activist" and that China's network management and propaganda departments directly monitor WeChat users.
In 2019, protesters in Hong Kong destroyed smart lampposts amid concerns that they could contain cameras and facial recognition systems used for surveillance by Chinese authorities. Human rights groups have criticized the Chinese government for using artificial intelligence facial recognition technology in its suppression of Uyghurs, Christians, and Falun Gong practitioners.
==== India ====
Even though facial recognition technology (FRT) is not fully accurate, it is being increasingly deployed for identification purposes by the police in India. FRT systems generate a probability match score, or a confidence score between the suspect who is to be identified and the database of identified criminals that is available with the police. The National Automated Facial Recognition System (AFRS) is already being developed by the National Crime Records Bureau (NCRB), a body constituted under the Ministry of Home Affairs. The project seeks to develop and deploy a national database of photographs which would comport with a facial recognition technology system by the central and state security agencies. The Internet Freedom Foundation has flagged concerns regarding the project. The NGO has highlighted that the accuracy of FRT systems are "routinely exaggerated and the real numbers leave much to be desired. The implementation of such faulty FRT systems would lead to high rates of false positives and false negatives in this recognition process."
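The probability match score described above can be illustrated with a toy sketch. The embeddings, names, and threshold below are invented for illustration and do not reflect any real police system: a probe face is compared against a database, and a match is reported only if the best similarity score clears a confidence threshold.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical face embeddings for a database of known persons.
database = {name: rng.normal(size=128)
            for name in ["person_a", "person_b", "person_c"]}

def best_match(probe, db, threshold=0.8):
    """Return (name, score) if the best cosine similarity clears the threshold."""
    scores = {n: float(v @ probe / (np.linalg.norm(v) * np.linalg.norm(probe)))
              for n, v in db.items()}
    name = max(scores, key=scores.get)
    return (name, scores[name]) if scores[name] >= threshold else (None, scores[name])

# A probe close to person_b's embedding clears the threshold; a random face does not.
hit = best_match(database["person_b"] + rng.normal(scale=0.1, size=128), database)
miss = best_match(rng.normal(size=128), database)
print(hit[0], miss[0])
```

The concerns about false positives and negatives come down to where this threshold is set and how well the scores separate genuine matches from everyone else.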
Under the Supreme Court of India's decision in Justice K.S. Puttaswamy vs Union of India ((2017) 10 SCC 1), any justifiable intrusion by the State into people's right to privacy, which is protected as a fundamental right under Article 21 of the Constitution, must conform to certain thresholds, namely: legality, necessity, proportionality, and procedural safeguards. As per the Internet Freedom Foundation, the National Automated Facial Recognition System (AFRS) proposal fails to meet any of these thresholds, citing "absence of legality," "manifest arbitrariness," and "absence of safeguards and accountability."
While the national-level AFRS project is still in the works, police departments in various Indian states are already deploying facial recognition technology systems, such as TSCOP + CCTNS in Telangana, the Punjab Artificial Intelligence System (PAIS) in Punjab, Trinetra in Uttar Pradesh, the Police Artificial Intelligence System in Uttarakhand, AFRS in Delhi, the Automated Multimodal Biometric Identification System (AMBIS) in Maharashtra, and FaceTagr in Tamil Nadu. The Crime and Criminal Tracking Network and Systems (CCTNS), a Mission Mode Project under the National e-Governance Plan (NeGP), is viewed as a system that would connect police stations across India and help them "talk" to each other. The project's objective is to digitize all FIR-related information, including FIRs registered, cases investigated, charge sheets filed, and suspects and wanted persons, in all police stations. This would constitute a national database of crime and criminals in India. CCTNS is being implemented without a data protection law in place. CCTNS is proposed to be integrated with the AFRS, a repository of all crime- and criminal-related facial data which can be deployed to purportedly identify or verify a person from a variety of inputs ranging from images to videos. This has raised privacy concerns from civil society organizations and privacy experts. Both projects have been censured as instruments of "mass surveillance" in the hands of the state. In Rajasthan, 'RajCop', a police app, has recently been integrated with a facial recognition module which can match the face of a suspect against a database of known persons in real time. The Rajasthan police are currently working to widen the ambit of this module by making it mandatory to upload photographs of all arrested persons to the CCTNS database, which will "help develop a rich database of known offenders."
Helmets fitted with cameras have been designed and are being used by Rajasthan police in law and order situations to capture police action and the activities of "the miscreants, which can later serve as evidence during the investigation of such cases." The PAIS (Punjab Artificial Intelligence System) app employs deep learning, machine learning, and face recognition for the identification of criminals to assist police personnel. The state of Telangana has installed 8 lakh (800,000) CCTV cameras, with its capital city Hyderabad slowly turning into a surveillance capital.
A false positive happens when facial recognition technology misidentifies a person as someone they are not; that is, it yields an incorrect positive result. False positives often result in discrimination and the strengthening of existing biases. For example, in 2018, Delhi Police reported that its FRT system had an accuracy rate of 2%, which sank to 1% in 2019. The FRT system even failed to distinguish accurately between different sexes.
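In concrete terms, a false positive rate is computed by comparing system verdicts against ground truth; the numbers below are invented purely for illustration.

```python
# Toy evaluation: compare system verdicts against ground truth to measure
# the false positive rate (people wrongly flagged as watchlist matches).
ground_truth = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # 1 = actually on the watchlist
flagged =      [1, 0, 1, 0, 0, 0, 0, 0, 1, 0]   # 1 = system reported a match

false_positives = sum(1 for gt, f in zip(ground_truth, flagged) if f and not gt)
negatives = ground_truth.count(0)
fpr = false_positives / negatives
print(fpr)  # 0.25: a quarter of the people not on the watchlist were wrongly flagged
```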
The government of Delhi in collaboration with Indian Space Research Organisation (ISRO) is developing a new technology called Crime Mapping Analytics and Predictive System (CMAPS). The project aims to deploy space technology for "controlling crime and maintaining law and order." The system will be connected to a database containing data of criminals. The technology is envisaged to be deployed to collect real-time data at the crime scene.
In a reply dated November 25, 2020 to a Right to Information request filed by the Internet Freedom Foundation seeking information about the facial recognition system being used by the Delhi Police (with reference number DEPOL/R/E/20/07128), the Office of the Deputy Commissioner of Police cum Public Information Officer: Crime stated that they cannot provide the information under section 8(d) of the Right to Information Act, 2005.
A Right to Information (RTI) request dated July 30, 2020 was filed with the Office of the Commissioner, Kolkata Police, seeking information about the facial recognition technology that the department was using. The information sought was denied stating that the department was exempted from disclosure under section 24(4) of the RTI Act.
==== Latin America ====
In the 2000 Mexican presidential election, the Mexican government employed face recognition software to prevent voter fraud. Some individuals had been registering to vote under several different names, in an attempt to place multiple votes. By comparing new face images to those already in the voter database, authorities were able to reduce duplicate registrations.
In Colombia, public transport buses are fitted with a facial recognition system by FaceFirst Inc to identify passengers that are sought by the National Police of Colombia. FaceFirst Inc also built the facial recognition system for Tocumen International Airport in Panama. The face recognition system is deployed to identify individuals among the travellers that are sought by the Panamanian National Police or Interpol. Tocumen International Airport operates an airport-wide surveillance system using hundreds of live face recognition cameras to identify wanted individuals passing through the airport. The face recognition system was initially installed as part of a US$11 million contract and included a computer cluster of sixty computers, a fiber-optic cable network for the airport buildings, as well as the installation of 150 surveillance cameras in the airport terminal and at about 30 airport gates.
At the 2014 FIFA World Cup in Brazil the Federal Police of Brazil used face recognition goggles. Face recognition systems "made in China" were also deployed at the 2016 Summer Olympics in Rio de Janeiro. Nuctech Company provided 145 inspection terminals for Maracanã Stadium and 55 terminals for the Deodoro Olympic Park.
==== European Union ====
Police forces in at least 21 countries of the European Union use, or plan to use, facial recognition systems, either for administrative or criminal purposes.
===== Greece =====
Greek police passed a contract with Intracom-Telecom for the provision of at least 1,000 devices equipped with live facial recognition system. The delivery is expected before the summer 2021. The total value of the contract is over 4 million euros, paid for in large part by the Internal Security Fund of the European Commission.
===== Italy =====
Italian police acquired a face recognition system in 2017, Sistema Automatico Riconoscimento Immagini (SARI). In November 2020, the Interior ministry announced plans to use it in real-time to identify people suspected of seeking asylum.
===== The Netherlands =====
The Netherlands has deployed facial recognition and artificial intelligence technology since 2016. The database of the Dutch police currently contains over 2.2 million pictures of 1.3 million Dutch citizens. This accounts for about 8% of the population. In The Netherlands, face recognition is not used by the police on municipal CCTV.
==== South Africa ====
In South Africa, in 2016, the city of Johannesburg announced it was rolling out smart CCTV cameras complete with automatic number plate recognition and facial recognition.
=== Deployment in retail stores ===
The US firm 3VR, now Identiv, is an example of a vendor which began offering facial recognition systems and services to retailers as early as 2007. In 2012, the company advertised benefits such as "dwell and queue line analytics to decrease customer wait times", "facial surveillance analytic[s] to facilitate personalized customer greetings by employees" and the ability to "[c]reate loyalty programs by combining Point of sale (POS) data with facial recognition".
==== United States ====
In 2018, the National Retail Federation Loss Prevention Research Council called facial recognition technology "a promising new tool" worth evaluating.
In July 2020, the Reuters news agency reported that during the 2010s the pharmacy chain Rite Aid had deployed facial recognition video surveillance systems and components from FaceFirst, DeepCam LLC, and other vendors at some retail locations in the United States. Cathy Langley, Rite Aid's vice president of asset protection, used the phrase "feature matching" to refer to the systems and said that usage of the systems resulted in less violence and organized crime in the company's stores, while former vice president of asset protection Bob Oberosler emphasized improved safety for staff and a reduced need for the involvement of law enforcement organizations. In a 2020 statement to Reuters in response to the reporting, Rite Aid said that it had ceased using the facial recognition software and switched off the cameras.
According to director Read Hayes of the National Retail Federation Loss Prevention Research Council, Rite Aid's surveillance program was either the largest or one of the largest programs in retail. The Home Depot, Menards, Walmart, and 7-Eleven are among other US retailers also engaged in large-scale pilot programs or deployments of facial recognition technology.
Of the Rite Aid stores examined by Reuters in 2020, those in communities where people of color made up the largest racial or ethnic group were three times as likely to have the technology installed, raising concerns related to the substantial history of racial segregation and racial profiling in the United States. Rite Aid said that the selection of locations was "data-driven", based on the theft histories of individual stores, local and national crime data, and site infrastructure.
==== Australia ====
In 2019, facial recognition to prevent theft was in use at Sydney's Star Casino and was also deployed at gaming venues in New Zealand.
In June 2022, consumer group CHOICE reported facial recognition was in use in Australia at Kmart, Bunnings, and The Good Guys. The Good Guys subsequently suspended the technology pending a legal challenge by CHOICE to the Office of the Australian Information Commissioner, while Bunnings kept the technology in use and Kmart maintained its trial of the technology.
=== Additional uses ===
At the American football championship game Super Bowl XXXV in January 2001, police in Tampa Bay, Florida used Viisage face recognition software to search for potential criminals and terrorists in attendance at the event. 19 people with minor criminal records were potentially identified.
Face recognition systems have also been used by photo management software to identify the subjects of photographs, enabling features such as searching images by person, as well as suggesting photos to be shared with a specific contact if their presence were detected in a photo. By 2008 facial recognition systems were typically used as access control in security systems.
The American pop and country music celebrity Taylor Swift surreptitiously employed facial recognition technology at a concert in 2018. The camera was embedded in a kiosk near a ticket booth and scanned concert-goers for known stalkers as they entered the facility.
On August 18, 2019, The Times reported that the UAE-owned Manchester City hired a Texas-based firm, Blink Identity, to deploy facial recognition systems in a driver program. The club planned a single super-fast lane for supporters at the Etihad Stadium. However, civil rights groups cautioned the club against introducing this technology, saying that it would risk "normalising a mass surveillance tool". Hannah Couchman, the policy and campaigns officer at Liberty, said that Man City's move was alarming, since fans would be obliged to share deeply sensitive personal information with a private company that could track and monitor them in their everyday lives.
In 2019, casinos in Australia and New Zealand rolled out facial recognition to prevent theft, and a representative of Sydney's Star Casino said they would also provide 'customer service' like welcoming a patron back to a bar.
In August 2020, amid the COVID-19 pandemic in the United States, American football stadiums of New York and Los Angeles announced the installation of facial recognition for upcoming matches. The purpose is to make the entry process as touchless as possible. Disney's Magic Kingdom, near Orlando, Florida, likewise announced a test of facial recognition technology to create a touchless experience during the pandemic; the test was originally slated to take place between March 23 and April 23, 2021, but the limited timeframe had been removed as of late April 2021.
Media companies have begun using face recognition technology to streamline their tracking, organizing, and archiving pictures and videos.
== Advantages and disadvantages ==
=== Compared to other biometric systems ===
In 2006, the performance of the latest face recognition algorithms was evaluated in the Face Recognition Grand Challenge (FRGC). High-resolution face images, 3-D face scans, and iris images were used in the tests. The results indicated that the new algorithms are 10 times more accurate than the face recognition algorithms of 2002 and 100 times more accurate than those of 1995. Some of the algorithms were able to outperform human participants in recognizing faces and could uniquely identify identical twins.
One key advantage of a facial recognition system is that it is able to perform mass identification, as it does not require the cooperation of the test subject to work. Properly designed systems installed in airports, multiplexes, and other public places can identify individuals in a crowd without passers-by even being aware of the system. However, compared to other biometric techniques, face recognition may not be the most reliable and efficient. Quality measures are very important in facial recognition systems, as large degrees of variation are possible in face images. Factors such as illumination, expression, pose, and noise during face capture can affect the performance of facial recognition systems. Among all biometric systems, facial recognition has the highest false acceptance and rejection rates; thus, questions have been raised about the effectiveness or bias of face recognition software in cases of railway and airport security, law enforcement, and housing and employment decisions.
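The false acceptance rate (FAR) and false rejection rate (FRR) trade off against each other as the decision threshold moves; the similarity scores below are invented for illustration.

```python
import numpy as np

# Invented similarity scores for genuine (same-person) and impostor
# (different-person) comparison attempts.
genuine = np.array([0.91, 0.85, 0.78, 0.95, 0.70, 0.88])
impostor = np.array([0.30, 0.55, 0.62, 0.20, 0.45, 0.75])

def far_frr(threshold):
    far = float(np.mean(impostor >= threshold))  # impostors wrongly accepted
    frr = float(np.mean(genuine < threshold))    # genuine users wrongly rejected
    return far, frr

# Raising the threshold trades false acceptances for false rejections.
print(far_frr(0.5))  # (0.5, 0.0)
print(far_frr(0.8))  # (0.0, 0.333...)
```

A system is often characterized by the threshold at which the two rates are equal (the equal error rate); face recognition's comparatively poor score separation is why both rates are high relative to other biometrics.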
=== Weaknesses ===
Ralph Gross, a researcher at the Carnegie Mellon Robotics Institute in 2008, describes one obstacle related to the viewing angle of the face: "Face recognition has been getting pretty good at full frontal faces and 20 degrees off, but as soon as you go towards profile, there've been problems." Besides the pose variations, low-resolution face images are also very hard to recognize. This is one of the main obstacles of face recognition in surveillance systems. It has also been suggested that camera settings can favour sharper imagery of white skin than of other skin tones.
Face recognition is less effective if facial expressions vary. A big smile can render the system less effective. For instance, in 2009 Canada began allowing only neutral facial expressions in passport photos.
There is also inconstancy in the datasets used by researchers. Researchers may use anywhere from several subjects to scores of subjects and a few hundred images to thousands of images. Data sets may be diverse and inclusive or mainly contain images of white males. It is important for researchers to make available the datasets they used to each other, or have at least a standard or representative dataset.
Although high degrees of accuracy have been claimed for some facial recognition systems, these outcomes are not universal. The consistently worst accuracy rate is for those who are 18 to 30 years old, Black and female.
=== Racial bias and skin tone ===
Studies have shown that facial recognition algorithms tend to perform better on individuals with lighter skin tones compared to those with darker skin tones. This disparity arises primarily because training datasets often overrepresent lighter-skinned individuals, leading to higher error rates for darker-skinned people. For example, a 2018 study found that leading commercial gender classification models, which are facial recognition models, have an error rate up to 7 times higher for those with darker skin tones compared to those with lighter skin tones.
Common image compression methods, such as JPEG chroma subsampling, have been found to disproportionately degrade performance for darker-skinned individuals. These methods inadequately represent color information, which adversely affects the ability of algorithms to recognize darker-skinned individuals accurately.
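The mechanism can be illustrated with a small, self-contained sketch. This is an illustrative model of 4:2:0 chroma subsampling as used by common JPEG encoders, not the pipeline of any particular system, and the pixel values below are hypothetical:

```python
# Illustrative sketch: 4:2:0 chroma subsampling keeps full-resolution
# brightness (Y) but stores only one color sample (Cb, Cr) per 2x2 pixel
# block. Fine color detail is averaged away; brightness detail is not.

def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return (r, g, b)

def subsample_420(pixels):
    """Average Cb and Cr over each 2x2 block (4:2:0), keep Y per pixel,
    and return the reconstructed RGB image. `pixels` is a 2D list of
    (r, g, b) tuples with even width and height."""
    h, w = len(pixels), len(pixels[0])
    ycc = [[rgb_to_ycbcr(*p) for p in row] for row in pixels]
    out = [[None] * w for _ in range(h)]
    for by in range(0, h, 2):
        for bx in range(0, w, 2):
            block = [ycc[by + dy][bx + dx] for dy in (0, 1) for dx in (0, 1)]
            cb = sum(p[1] for p in block) / 4   # one chroma pair per block
            cr = sum(p[2] for p in block) / 4
            for dy in (0, 1):
                for dx in (0, 1):
                    y = ycc[by + dy][bx + dx][0]  # luma kept per pixel
                    out[by + dy][bx + dx] = ycbcr_to_rgb(y, cb, cr)
    return out

# Two adjacent (hypothetical) skin-like tones that differ mainly in color,
# not in brightness -- exactly the detail chroma subsampling averages away.
a, b = (198, 134, 112), (172, 144, 124)
img = [[a, b], [b, a]]                # 2x2 checkerboard of the two tones
recon = subsample_420(img)
err = max(abs(c1 - c2) for row1, row2 in zip(img, recon)
          for p1, p2 in zip(row1, row2) for c1, c2 in zip(p1, p2))
print(round(err, 1))  # nonzero error introduced purely by chroma averaging
```

Because the two tones have nearly identical luma, every bit of the reconstruction error here comes from the discarded color information, which is the kind of detail the cited research found to matter disproportionately for darker skin tones.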
=== Cross-race effect bias ===
Facial recognition systems often demonstrate lower accuracy when identifying individuals with non-Eurocentric facial features. Known as the cross-race effect, this bias occurs when systems perform better on racial or ethnic groups that are overrepresented in their training data, resulting in reduced accuracy for underrepresented groups. The overrepresented group is generally the more populous group in the location where the model is being developed. For example, models developed in Asian cultures generally perform better on Asian facial features than on Eurocentric facial features, due to overrepresentation of the former in the developers' training dataset. The opposite is observed in models developed in Eurocentric cultures.
The systems used for facial recognition often lack sufficient training data to fully recognize features that are not of Eurocentric descent. When the training data for these machine learning (ML) models does not contain a diverse representation, the models fail to identify the underrepresented population, adding to their racial biases.
The cross-race effect is not exclusive to machines; humans also experience difficulty recognizing faces from racial or ethnic groups different from their own. This is an example of inherent human biases being perpetuated in training datasets.
=== Challenges for individuals with disabilities ===
Facial recognition technologies encounter significant challenges when identifying individuals with disabilities. For instance, systems have been shown to perform worse when recognizing individuals with Down syndrome, often leading to increased false match rates. This is due to distinct facial structures associated with the condition that are not adequately represented in training datasets.
More broadly, facial recognition systems tend to overlook diverse physical characteristics related to disabilities. The lack of representative data for individuals with varying disabilities further emphasizes the need for inclusive algorithmic designs to mitigate bias and improve accuracy.
Additionally, facial expression recognition technologies often fail to accurately interpret the emotional states of individuals with intellectual disabilities. This shortcoming can hinder effective communication and interaction, underscoring the necessity for systems trained on diverse datasets that include individuals with intellectual disabilities.
Furthermore, biases in facial recognition algorithms can lead to discriminatory outcomes for people with disabilities. For example, certain facial features or asymmetries may result in misidentification or exclusion, highlighting the importance of developing accessible and fair biometric systems.
=== Advancements in fairness and mitigation strategies ===
Efforts to address these biases include designing algorithms specifically for fairness. A notable study introduced a method to learn fair face representations by using a progressive cross-transformer model. This approach highlights the importance of balancing accuracy across demographic groups while avoiding performance drops in specific populations.
Additionally, targeted dataset collection has been shown to improve racial equity in facial recognition systems. By prioritizing diverse data inputs, researchers demonstrated measurable reductions in performance disparities between racial groups.
=== Ineffectiveness ===
Critics of the technology complain that the London Borough of Newham scheme had, as of 2004, never recognized a single criminal, despite several criminals in the system's database living in the borough and the system having run for several years. "Not once, as far as the police know, has Newham's automatic face recognition system spotted a live target." This information appears to conflict with claims that the system was credited with a 34% reduction in crime (which is why it was also rolled out to Birmingham).
An experiment in 2002 by the local police department in Tampa, Florida, had similarly disappointing results. A system at Boston's Logan Airport was shut down in 2003 after failing to make any matches during a two-year test period.
In 2014, Facebook stated that in a standardized two-option facial recognition test, its online system scored 97.25% accuracy, compared to the human benchmark of 97.5%.
Systems are often advertised as having accuracy near 100%; this is misleading, as the outcomes are not universal. The studies behind such claims often use samples that are smaller and less diverse than would be necessary for large-scale applications. Because facial recognition is not completely accurate, it creates a list of potential matches. A human operator must then look through these potential matches, and studies show that operators pick the correct match out of the list only about half the time. This can lead to the wrong suspect being targeted.
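The compounding of automated and human error described above can be made concrete with a back-of-the-envelope calculation. The 90% shortlist figure below is hypothetical, chosen only to illustrate the arithmetic; the roughly 50% operator rate echoes the studies cited above:

```python
# Back-of-the-envelope sketch of how errors compound in an investigative
# facial recognition workflow. p_shortlist is a hypothetical illustration;
# p_operator reflects the roughly-half success rate reported in studies
# of human review of candidate lists.
p_shortlist = 0.90   # hypothetical: true match appears in the candidate list
p_operator = 0.50    # operator picks the correct face from the list
p_end_to_end = p_shortlist * p_operator
print(f"{p_end_to_end:.0%}")  # → 45%
```

Even with an optimistic shortlist rate, the end-to-end identification rate falls well below the near-100% figures used in advertising, because the two failure modes multiply.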
== Controversies ==
=== Privacy violations ===
Civil rights organizations and privacy campaigners such as the Electronic Frontier Foundation, Big Brother Watch and the ACLU express concern that privacy is being compromised by the use of surveillance technologies. Face recognition can be used not just to identify an individual, but also to unearth other personal data associated with an individual – such as other photos featuring the individual, blog posts, social media profiles, Internet behavior, and travel patterns. Concerns have been raised over who would have access to the knowledge of one's whereabouts and people with them at any given time. Moreover, individuals have limited ability to avoid or thwart face recognition tracking unless they hide their faces. This fundamentally changes the dynamic of day-to-day privacy by enabling any marketer, government agency, or random stranger to secretly collect the identities and associated personal information of any individual captured by the face recognition system. Consumers may not understand or be aware of what their data is being used for, which denies them the ability to consent to how their personal information gets shared.
In July 2015, the United States Government Accountability Office produced a Report to the Ranking Member, Subcommittee on Privacy, Technology and the Law, Committee on the Judiciary, U.S. Senate. The report discussed facial recognition technology's commercial uses, privacy issues, and the applicable federal law. It noted that issues concerning facial recognition technology had been raised previously and demonstrated the need to update the privacy laws of the United States so that federal law continually matches the impact of advanced technologies. The report observed that some industry, government, and private organizations were in the process of developing, or had developed, "voluntary privacy guidelines". These guidelines varied between the stakeholders, but their overall aim was to gain consent and inform citizens of the intended use of facial recognition technology. According to the report, the voluntary privacy guidelines helped to counteract the privacy concerns that arise when citizens are unaware of how their personal data is put to use.
In 2016, Russian company NtechLab caused a privacy scandal in the international media when it launched the FindFace face recognition system with the promise that Russian users could take photos of strangers in the street and link them to a social media profile on the social media platform Vkontakte (VK). In December 2017, Facebook rolled out a new feature that notifies a user when someone uploads a photo that includes what Facebook thinks is their face, even if they are not tagged. Facebook has attempted to frame the new functionality in a positive light, amidst prior backlashes. Facebook's head of privacy, Rob Sherman, addressed this new feature as one that gives people more control over their photos online. "We've thought about this as a really empowering feature," he says. "There may be photos that exist that you don't know about." Facebook's DeepFace has become the subject of several class action lawsuits under the Biometric Information Privacy Act, with claims alleging that Facebook is collecting and storing face recognition data of its users without obtaining informed consent, in direct violation of the 2008 Biometric Information Privacy Act (BIPA). The most recent case was dismissed in January 2016 because the court lacked jurisdiction. In the US, surveillance companies such as Clearview AI are relying on the First Amendment to the United States Constitution to data scrape user accounts on social media platforms for data that can be used in the development of facial recognition systems.
In 2019, the Financial Times first reported that facial recognition software was in use in the King's Cross area of London. The development around London's King's Cross mainline station includes shops, offices, Google's UK HQ and part of St Martin's College. According to the UK Information Commissioner's Office: "Scanning people's faces as they lawfully go about their daily lives, in order to identify them, is a potential threat to privacy that should concern us all." The UK Information Commissioner Elizabeth Denham launched an investigation into the use of the King's Cross facial recognition system, operated by the company Argent. In September 2019 Argent announced that facial recognition software would no longer be used at King's Cross. Argent claimed that the software had been deployed between May 2016 and March 2018 on two cameras covering a pedestrian street running through the centre of the development. In October 2019, a report by the deputy London mayor Sophie Linden revealed that in a secret deal the Metropolitan Police had passed photos of seven people to Argent for use in its King's Cross facial recognition system.
Automated facial recognition was trialled by the South Wales Police on multiple occasions between 2017 and 2019. The use of the technology was challenged in court by a private individual, Edward Bridges, with support from the charity Liberty (in the case known as R (Bridges) v Chief Constable of South Wales Police). The case was heard in the Court of Appeal and a judgement was given in August 2020. The case argued that the use of facial recognition was a privacy violation, on the basis that there was insufficient legal framework or proportionality in its use, and that its use was in violation of the Data Protection Acts 1998 and 2018. The case was decided in favour of Bridges, though no damages were awarded; it was settled via a declaration of wrongdoing. In response to the case, the British Government has repeatedly attempted to pass a Bill regulating the use of facial recognition in public spaces. The proposed Bills have attempted to appoint a Commissioner with the ability to regulate facial recognition use by Government services, in a similar manner to the Commissioner for CCTV. Such a Bill had yet to come into force as of September 2021.
In January 2023, New York Attorney General Letitia James asked for more information on the use of facial recognition technology from Madison Square Garden Entertainment following reports that the firm used it to block lawyers involved in litigation against the company from entering Madison Square Garden. She noted such a move could go against federal, state, and local human rights laws.
=== Imperfect technology in law enforcement ===
As of 2018, it is still contested as to whether or not facial recognition technology works less accurately on people of color. One study by Joy Buolamwini (MIT Media Lab) and Timnit Gebru (Microsoft Research) found that the error rate for gender recognition for women of color within three commercial facial recognition systems ranged from 23.8% to 36%, whereas for lighter-skinned men it was between 0.0 and 1.6%. Overall accuracy rates for identifying men (91.9%) were higher than for women (79.4%), and none of the systems accommodated a non-binary understanding of gender. It also showed that the datasets used to train commercial facial recognition models were unrepresentative of the broader population and skewed toward lighter-skinned males. However, another study showed that several commercial facial recognition software sold to law enforcement offices around the country had a lower false non-match rate for black people than for white people.
Experts fear that face recognition systems may actually be hurting the citizens the police claim they are trying to protect. It is considered an imperfect biometric, and in a study conducted by Georgetown University researcher Clare Garvie, she concluded that "there's no consensus in the scientific community that it provides a positive identification of somebody." Given such large margins of error in this technology, both legal advocates and facial recognition software companies say that the technology should only supply a portion of a case, not evidence that can lead to the arrest of an individual. The lack of regulations holding facial recognition technology companies to requirements of testing for racial bias is a significant flaw in its adoption by law enforcement. CyberExtruder, a company that markets itself to law enforcement, said that it had not performed testing or research on bias in its software. CyberExtruder did note that some skin colors are more difficult for the software to recognize with current limitations of the technology. "Just as individuals with very dark skin are hard to identify with high significance via facial recognition, individuals with very pale skin are the same," said Blake Senftner, a senior software engineer at CyberExtruder.
The United States' National Institute of Standards and Technology (NIST) carried out extensive testing of FRT systems' 1:1 verification and 1:many identification. It also tested for differing accuracy of FRT across demographic groups. The independent study concluded that, at present, no FRT system has 100% accuracy.
=== Data protection ===
In 2010, Peru passed the Law for Personal Data Protection, which defines biometric information that can be used to identify an individual as sensitive data. In 2012, Colombia passed a comprehensive Data Protection Law which defines biometric data as sensitive information. According to Article 9(1) of the EU's 2016 General Data Protection Regulation (GDPR), the processing of biometric data for the purpose of "uniquely identifying a natural person" is sensitive, and facial recognition data processed in this way becomes sensitive personal data. In response to the GDPR passing into the law of EU member states, EU-based researchers voiced concern that if they were required under the GDPR to obtain individuals' consent for the processing of their facial recognition data, a face database on the scale of MegaFace could never be established again. In September 2019 the Swedish Data Protection Authority (DPA) issued its first ever financial penalty for a violation of the GDPR, against a school that was using the technology to replace time-consuming roll calls during class. The DPA found that the school illegally obtained the biometric data of its students without completing an impact assessment. In addition, the school did not make the DPA aware of the pilot scheme. A 200,000 SEK fine (€19,000/$21,000) was issued.
In the United States of America several U.S. states have passed laws to protect the privacy of biometric data. Examples include the Illinois Biometric Information Privacy Act (BIPA) and the California Consumer Privacy Act (CCPA). In March 2020 California residents filed a class action against Clearview AI, alleging that the company had illegally collected biometric data online and with the help of face recognition technology built up a database of biometric data which was sold to companies and police forces. At the time Clearview AI already faced two lawsuits under BIPA and an investigation by the Privacy Commissioner of Canada for compliance with the Personal Information Protection and Electronic Documents Act (PIPEDA).
== Bans on the use of facial recognition technology ==
=== United States of America ===
In May 2019, San Francisco, California became the first major United States city to ban the use of facial recognition software by police and other local government agencies. San Francisco Supervisor Aaron Peskin introduced regulations that require agencies to gain approval from the San Francisco Board of Supervisors to purchase surveillance technology. The regulations also require that agencies publicly disclose the intended use of new surveillance technology. In June 2019, Somerville, Massachusetts became the first city on the East Coast to ban face surveillance software for government use, specifically in police investigations and municipal surveillance. In July 2019, Oakland, California banned the use of facial recognition technology by city departments.
The American Civil Liberties Union ("ACLU") has campaigned across the United States for transparency in surveillance technology and has supported both San Francisco and Somerville's ban on facial recognition software. The ACLU works to challenge the secrecy and surveillance with this technology.
During the George Floyd protests, use of facial recognition by city government was banned in Boston, Massachusetts. As of June 10, 2020, municipal use has been banned in:
Berkeley, California
Oakland, California
Boston, Massachusetts – June 30, 2020
Brookline, Massachusetts
Cambridge, Massachusetts
Northampton, Massachusetts
Springfield, Massachusetts
Somerville, Massachusetts
Portland, Oregon – September 2020
The West Lafayette, Indiana City Council passed an ordinance banning facial recognition surveillance technology.
On October 27, 2020, 22 human rights groups called upon the University of Miami to ban facial recognition technology. This came after the students accused the school of using the software to identify student protesters. The allegations were, however, denied by the university.
A state police reform law in Massachusetts took effect in July 2021; a ban passed by the legislature had been rejected by governor Charlie Baker. Instead, the law requires a judicial warrant, limits the personnel who can perform searches, records data about how the technology is used, and creates a commission to make recommendations about future regulations.
Reports in 2024 revealed that some police departments, including San Francisco Police Department, had skirted bans on facial recognition technology that had been enacted in their respective cities.
=== European Union ===
In January 2020, the European Union suggested, but then quickly scrapped, a proposed moratorium on facial recognition in public spaces.
The European "Reclaim Your Face" coalition launched in October 2020. The coalition calls for a ban on facial recognition and launched a European Citizens' Initiative in February 2021. More than 60 organizations call on the European Commission to strictly regulate the use of biometric surveillance technologies.
== Emotion recognition ==
In the 18th and 19th centuries, the belief that facial expressions revealed the moral worth or true inner state of a human was widespread, and physiognomy was a respected science in the Western world. From the early 19th century onwards, photography was used in the physiognomic analysis of facial features and facial expression to detect insanity and dementia. In the 1960s and 1970s the study of human emotions and their expression was reinvented by psychologists, who tried to define a normal range of emotional responses to events. Research on automated emotion recognition has since the 1970s focused on facial expressions and speech, which are regarded as the two most important ways in which humans communicate emotions to other humans. In the 1970s the Facial Action Coding System (FACS), a categorization of the physical expression of emotions, was established. Its developer Paul Ekman maintains that there are six emotions that are universal to all human beings and that these can be coded in facial expressions. Research into automatic emotion-specific expression recognition has in the past decades focused on frontal-view images of human faces. Facial thermography is also considered a promising tool for emotion recognition.
In 2016, facial feature emotion recognition algorithms were among the new technologies, alongside high-definition CCTV, high-resolution 3D face recognition and iris recognition, that found their way out of university research labs. In 2016, Facebook acquired FacioMetrics, a facial feature emotion recognition spin-off of Carnegie Mellon University. In the same year Apple Inc. acquired the facial feature emotion recognition start-up Emotient. By the end of 2016, commercial vendors of facial recognition systems offered to integrate and deploy emotion recognition algorithms for facial features. By late 2019, the MIT Media Lab spin-off Affectiva offered a facial expression emotion detection product that can recognize the emotions of humans while driving.
== Anti-facial recognition systems ==
The development of anti-facial recognition technology is effectively an arms race between privacy researchers and big data companies. Big data companies increasingly use convolutional AI technology to create ever more advanced facial recognition models. Solutions to block facial recognition may not work on newer software, or on different types of facial recognition models. One popularly cited example of facial-recognition blocking is the CVDazzle makeup and haircut system, but its creators note on their website that it has been outdated for some time, as it was designed to combat a particular facial recognition algorithm and may no longer work. Another example is the emergence of facial recognition that can identify people wearing facemasks and sunglasses, especially after the COVID-19 pandemic.
Given that big data companies have much more funding than privacy researchers, it is very difficult for anti-facial recognition systems to keep up. There is also no guarantee that obfuscation techniques that were used for images taken in the past and stored, such as masks or software obfuscation, would protect users from facial-recognition analysis of those images by future technology.
In January 2013, Japanese researchers from the National Institute of Informatics created 'privacy visor' glasses that use near-infrared light to make the face underneath unrecognizable to face recognition software that uses infrared. The latest version uses a titanium frame, light-reflective material and a mask which uses angles and patterns to disrupt facial recognition technology by both absorbing and bouncing back light sources. However, these methods only defeat infrared facial recognition and would not work against AI facial recognition of plain images. Some projects use adversarial machine learning to come up with new printed patterns that confuse existing face recognition software.
One method that may work to protect against facial recognition systems is the use of specific haircuts and make-up patterns that prevent the algorithms used from detecting a face, known as computer vision dazzle. Incidentally, the makeup styles popular with Juggalos may also protect against facial recognition.
Facial masks that are worn to protect against contagious viruses can reduce the accuracy of facial recognition systems. A 2020 NIST study tested popular one-to-one matching systems and found a failure rate between five and fifty percent on masked individuals. The Verge speculated that the accuracy rate of mass surveillance systems, which were not included in the study, would be even lower than that of one-to-one matching systems. The facial recognition of Apple Pay can work through many barriers, including heavy makeup, thick beards and even sunglasses, but fails with masks. However, facial recognition of masked faces is becoming increasingly reliable.
Another solution is the application of obfuscation to images that may fool facial recognition systems while still appearing normal to a human user. This could be used when images are posted online or on social media. However, as it is hard to remove images once they are on the internet, the obfuscation of these images may be defeated and the face of the user identified by future advances in technology. Two examples of this technique, developed in 2020, are the ANU's 'Camera Adversaria' camera app and the University of Chicago's Fawkes image cloaking software, which applies obfuscation to already-taken photos. However, by 2021 the Fawkes obfuscation algorithm had already been specifically targeted by Microsoft Azure, which changed its algorithm to lower Fawkes' effectiveness.
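The general idea behind such obfuscation tools can be sketched with a toy example. This is not the Fawkes algorithm: the linear "model", weights, and numbers below are all hypothetical, and real cloaking tools attack deep feature extractors rather than a linear scorer. The sketch only shows how a perturbation that is tiny per pixel can still shift a model's output substantially:

```python
# Toy sketch of adversarial image obfuscation: add a perturbation that is
# imperceptibly small per pixel yet moves a model's score by a large,
# predictable amount. The "model" here is a hypothetical linear scorer.
import random

random.seed(0)
n = 64                                           # toy 8x8 "image", flattened
w = [random.uniform(-1, 1) for _ in range(n)]    # hypothetical model weights
x = [random.uniform(0, 1) for _ in range(n)]     # original pixel values

def score(img):
    """Linear stand-in for a recognition model's matching score."""
    return sum(wi * xi for wi, xi in zip(w, img))

eps = 0.03   # maximum change per pixel: visually imperceptible
# FGSM-style step: nudge each pixel by eps against the sign of its weight,
# the most score-reducing move under a per-pixel (L-infinity) budget,
# then clamp back into the valid [0, 1] pixel range.
cloaked = [min(1.0, max(0.0, xi - eps * (1 if wi > 0 else -1)))
           for wi, xi in zip(w, x)]

max_pixel_change = max(abs(a - b) for a, b in zip(x, cloaked))
score_drop = score(x) - score(cloaked)
print(max_pixel_change, score_drop)
# no pixel moves more than eps, yet the aggregate score drops noticeably
```

The arms-race dynamic described above follows directly: once the model changes (new weights, or a nonlinear extractor), a perturbation computed against the old model may no longer move the score at all.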
== See also ==
Lists
List of computer vision topics
List of emerging technologies
Outline of artificial intelligence
== References ==
== Further reading ==
Farokhi, Sajad; Shamsuddin, Siti Mariyam; Flusser, Jan; Sheikh, U.U; Khansari, Mohammad; Jafari-Khouzani, Kourosh (2014). "Near infrared face recognition by combining Zernike moments and undecimated discrete wavelet transform". Digital Signal Processing. 31 (1): 13–27. Bibcode:2014DSP....31...13F. doi:10.1016/j.dsp.2014.04.008.
"The Face Detection Algorithm Set to Revolutionize Image Search" (Feb. 2015), MIT Technology Review
Garvie, Clare; Bedoya, Alvaro; Frankle, Jonathan (October 18, 2016). Perpetual Line Up: Unregulated Police Face Recognition in America. Center on Privacy & Technology at Georgetown Law. Retrieved October 22, 2016.
"Facial Recognition Software 'Sounds Like Science Fiction,' but May Affect Half of Americans". As It Happens. Canadian Broadcasting Corporation. October 20, 2016. Retrieved October 22, 2016. Interview with Alvaro Bedoya, executive director of the Center on Privacy & Technology at Georgetown Law and co-author of Perpetual Line Up: Unregulated Police Face Recognition in America.
Press, Eyal, "In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?", The New Yorker, 20 November 2023, pp. 20–26.
== External links ==
Media related to Facial recognition system at Wikimedia Commons
A Photometric Stereo Approach to Face Recognition (master's thesis). The University of the West of England, Bristol. |
Ferry | A ferry is a boat that transports passengers, and occasionally vehicles and cargo, across a body of water. A small passenger ferry with multiple stops, like those in Venice, Italy, is sometimes referred to as a water taxi or water bus.
Ferries form a part of the public transport systems of many waterside cities and islands, allowing direct transit between points at a capital cost much lower than bridges or tunnels. Ship connections of much larger distances (such as over long distances in water bodies like the Baltic Sea) may also be called ferry services, and many carry vehicles.
== History ==
The profession of the ferryman is embodied in Greek mythology in Charon, the boatman who transported souls across the River Styx to the Underworld.
Speculation about a ship propelled by a pair of oxen turning a water wheel can be found in the 4th-century Roman treatise De rebus bellicis, by an anonymous author. Though impractical, there is no reason why it could not have worked, and such a ferry, modified to use horses, operated on Lake Champlain in 19th-century America. See Experiment (horse powered boat).
In 1850 the roll-on/roll-off (ro-ro) ferry Leviathan, designed to carry freight wagons efficiently across the Firth of Forth in Scotland, started to operate between Granton, near Edinburgh, and Burntisland in Fife. The vessel design was highly innovative, and the ability to move freight in great quantities and with minimal labour signalled the way ahead for sea-borne transport, converting the ro-ro ferry from an experimental and marginal ship type into one of central importance in the transport of goods and passengers.
In 1871, the world's first car ferry crossed the Bosphorus in Istanbul. The iron steamship, named Suhulet (meaning 'ease' or 'convenience') was designed by the general manager of Şirket-i Hayriye (Bosporus Steam Navigation Company), Giritli Hüseyin Haki Bey and built by the Greenwich shipyard of Maudslay, Sons and Field. It weighed 157 tons, was 155 feet (47 meters) long, 27 feet (8.2 meters) wide and had a draft of 9 feet (2.7 meters). It was capable of travelling up to 6 knots with the side wheel turned by its 450-horsepower, single-cylinder, two-cycle steam engine. Launched in 1872, Suhulet's unique features consisted of a symmetrical entry and exit for horse carriages, along with a dual system of hatchways. The ferry operated on the Üsküdar-Kabataş route, which is still serviced by modern ferries today.
== Notable services ==
=== Asia ===
In Hong Kong, Star Ferry carries passengers across Victoria Harbour. Other carriers ferry travelers between Hong Kong Island and outlying islands like Cheung Chau, Lantau Island and Lamma Island.
In the Philippines, the Philippine Nautical Highway System forms the backbone of the nationwide transport system by integrating ports with highway systems; the system has three main routes. Another well-known ferry service is the Pasig River Ferry Service, which cruises the Pasig River and is the only water-based transportation in Metro Manila.
==== Bangladesh ====
The country's extensive river network makes ferries a practical and affordable mode of transport. Passenger ferries, locally referred to as "launches," are widely used to travel to the southern and south-western regions of Bangladesh from the capital. The most popular destinations include Barisal, Bhola, Patuakhali, and Khulna. Additionally, there are water-transport routes connecting Dhaka with Kolkata in India.
Approximately 200 launches operate across 107 water routes throughout the country as of 2022. To support the launch services, the Bangladesh Inland Water Transport Authority (BIWTA) has developed 292 wharfs (ghats) for the docking of these vessels, and oversees 380 launch terminals.
There are 53 roll-on/roll-off ferries running on seven routes across the country: Paturia–Daulatdia, Aricha–Kazirhat, Shimulia–Banglabazar, Bhola–Lakshmipur, Lajarhat–Veduria, Char Kalipur–Kalipur Bazar and Harinaghat Chandpur–Shariatpur.
More than 800,000 small and medium wooden sailboats and rowboats, often retrofitted to be motorised, are an important means of transportation for people and goods across the country, especially during the rainy season. These boats transport over 1.2 million tonnes of freight annually. Among these is the dingi, the oldest form of Bengal boat. Larger cargo boats include vessels such as the balam, bajra and sampan. Under the category of bainkata (flat-bottomed) boats are the ghasi, gachari, dorakha, kathami, mallar, patam and panshi, among others. Ubiquitous throughout Bangladesh, especially in monsoon flood-prone regions, is the kosha, a small, highly manoeuvrable boat that is easy to operate. These various traditional wooden boats play a vital role in providing transportation during the rainy season, when other modes become impractical due to flooding.
The ferries are often overloaded and continue to operate in poor weather; many people die each year in ferry and launch accidents. From 2005 to 2015, nearly 1,800 casualties were reported due to river transport incidents, and the actual number may be higher due to the prevalence of unregistered vessels. In 2014, the launch Pinak 6 sank in the Padma River near Munshiganj's Louhajang Upazila with more than 200 passengers aboard.
==== India ====
India's ro-ro ferry service between Ghogha and Dahej was inaugurated by Prime Minister Narendra Modi on 22 October 2017. It connects South Gujarat and Saurashtra, replacing 360 kilometres (220 mi) of roadway with 31 kilometres (19 mi) of ferry crossing. It is a part of the larger Sagar Mala project.
Water transport in Mumbai consists of ferries, hovercraft, and catamarans, operated by various government agencies as well as private entities. The Kerala State Water Transport Department (SWTD), operating under the Ministry of Transport, Government of Kerala, India, regulates the inland navigation systems in the Indian state of Kerala and provides inland water transport facilities. It caters to the passenger and cargo traffic needs of the inhabitants of the waterlogged areas of the districts of Alappuzha, Kottayam, Kollam, Ernakulam, Kannur and Kasargode. The SWTD ferry service is also one of the most affordable ways to see the scenic Kerala backwaters.
Ferries operate between Port Blair, Havelock and Neil Islands in the Andaman Islands, while boats serve Ross Island, North Bay, Elephanta Beach, Red Skin and Jolly Bouy. Ferries and catamarans are operated by Green Ocean, Makruzz, ITT Majestic and Nautika.
==== Indonesia ====
As the largest archipelagic country, Indonesia has several ferry routes, managed mostly by PT. ASDP Indonesia Ferry (Persero) and several private companies. ASDP Indonesia Ferry (ASDP) is a state-owned company engaged in the business of integrated ferry and port services and waterfront tourist destinations. ASDP operates a ferry fleet of more than 160 units handling more than 300 routes in 36 ports throughout Indonesia.
==== Japan ====
Japan used to rely heavily on ferries for passenger and goods transportation among the four main islands of Hokkaido, Honshu, Shikoku and Kyushu. However, as highway and railway bridges and undersea tunnels (such as the Seikan Tunnel and Honshū–Shikoku Bridge Project) have been constructed, ferry transport is now used mainly by short-distance sightseeing passengers, with or without cars, and by long-distance truck drivers hauling goods.
==== Malaysia ====
The Malaysian state of Penang is home to the oldest ferry service in the country. The first regular ferry service operating across the Penang Strait between George Town and Province Wellesley (now Seberang Perai) was launched in 1894 by Quah Beng Kee and his brothers. The iconic yellow double-deck roll-on/roll-off (RORO) ferries were introduced in 1957. Between 1959 and 2002, a total of 15 vessels were commissioned for the service.
Currently operated by Penang Port Sdn Bhd, the ferry service has evolved over the decades. The RORO ferries were retired in 2021, with speedboats temporarily replacing them. In 2023, these speedboats were succeeded by four newly-built catamarans, which now serve only passengers and motorcyclists. These catamarans operate between the Raja Tun Uda Ferry Terminal in George Town and the Sultan Abdul Halim Ferry Terminal in Seberang Perai.
=== Russian Federation ===
Due to the geographical features of Russia, it has a large number of both sea and river ferry crossings. Car ferries operate from the continental part of Russia to Sakhalin, Kamchatka and Japan. The Ust-Luga–Kaliningrad ferry also runs; until February 2022, ferries also ran from St. Petersburg to various cities on the Baltic Sea. Before the construction of the Kerch Bridge, a ferry crossed the Kerch Strait; the service was resumed after the Kerch Bridge explosion.
There are also more than 100 ferry crossings on different rivers in Russia. These are usually symmetrical through ferries with two ramps for quick entry and exit of cars. For some categories of car owners, these ferries may be free if there is no alternative crossing of the river.
=== Europe ===
==== Great Britain ====
The busiest seaway in the world, the English Channel, connects Great Britain and mainland Europe, with ships sailing from the UK ports of Dover, Newhaven, Poole, Portsmouth and Plymouth to French ports, such as Calais, Dunkirk, Dieppe, Roscoff, Cherbourg-Octeville, Caen, St Malo and Le Havre. The busiest ferry route to France is the Dover to Calais crossing with approximately 9,168,000 passengers using the service in 2018. Ferries from Great Britain also sail to Belgium, the Netherlands, Norway, Spain and Ireland. Some ferries carry mainly tourist traffic, but most also carry freight, and some are exclusively for the use of freight lorries. In Britain, car-carrying ferries are sometimes referred to as RORO (roll-on, roll-off) for the ease by which vehicles can board and leave.
==== Denmark ====
The busiest single ferry route in terms of the number of departures is across the northern part of Øresund, between Helsingborg, Scania, Sweden and Elsinore, Denmark. Before the Øresund Bridge opened in July 2000, car and "car and train" ferries departed up to seven times every hour (every 8.5 minutes). This has since been reduced, but a car ferry still departs from each harbor every 15 minutes during daytime. The route is around 2.2 nautical miles (4.1 km; 2.5 mi) and the crossing takes 22 minutes. Today, all ferries on this route are built so that they do not need to turn around in the harbors: the vessels sail in both directions and have no fixed bow or stern, so which side is starboard and which is port depends on the direction of travel. Despite the short crossing, the ferries are equipped with restaurants (on three of the four ferries), cafeterias, and kiosks. Passengers without cars often stay aboard for a double or triple return journey while dining in the restaurants; a single-journey ticket suffices for this. Passenger and bicycle tickets are inexpensive compared with longer routes.
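The route length and crossing time quoted above determine the average speed directly; a quick illustrative calculation (the figures are from the text, the calculation itself is not):

```python
# Average speed of the Helsingborg-Elsinore crossing, from the
# figures quoted above: 2.2 nautical miles in 22 minutes.
distance_nmi = 2.2        # route length in nautical miles
crossing_min = 22         # crossing time in minutes

speed_knots = distance_nmi / (crossing_min / 60)  # nautical miles per hour
print(f"average speed: {speed_knots:.0f} knots")  # 2.2 / (22/60) = 6 knots
```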
==== Baltic Sea ====
Large cruiseferries sail in the Baltic Sea between Finland, Åland, Sweden, Estonia, Latvia and Saint Petersburg, Russia. In many ways, these ferries are like cruise ships, but they can also carry hundreds of cars on car decks. Besides providing passenger and car transport across the sea, Baltic Sea cruise-ferries are a popular tourist destination unto themselves, with multiple restaurants, nightclubs, bars, shops and entertainment on board. Helsinki was the busiest international passenger ferry port in the world in 2017 with over 11.8 million passengers, whilst the second busiest international ferry port, Dover, had 11.7 million passengers. The Helsinki–Tallinn route alone accounted for nine million passengers. In 2022 the port of Helsinki had almost 8 million passengers, of which 6.3 million travelled between Helsinki and Tallinn. Additionally, many smaller ferries operate on domestic routes in Finland, Sweden and Estonia.
The south-west and southern parts of the Baltic Sea have several routes mainly for heavy traffic and cars. The ferry routes of Rødby–Puttgarden, Trelleborg–Rostock, Trelleborg–Travemünde, Trelleborg–Świnoujście, Gedser–Rostock, Gdynia–Karlskrona, and Ystad–Świnoujście are all typical transport ferries. On the longer of these routes, simple cabins are available. Some of these routes previously also carried trains, but since 2020 these trains are instead routed around the Baltic via the Great Belt fixed link and Jutland.
==== Turkey ====
In Istanbul, ferries connect the European and Asian shores of the Bosphorus, as well as the Princes' Islands and nearby coastal towns. In 2014, İDO transported 47 million passengers, making it the largest ferry system in the world.
==== Italy ====
The largest ferry system in Italy is in Venice. The city's water taxis (Italian: taxi d'acqua) provide service all around the city's canals. They can carry up to 10 people. They operate on a series of lines that stop at different locations around Venice.
==== Sweden ====
The world's shortest ferry line is the Ferry Lina in Töreboda, Sweden. It takes around 20–25 seconds and is hand powered.
=== North America ===
==== Canada ====
Due to the numbers of large freshwater lakes and length of shoreline in Canada, various provinces and territories have ferry services.
BC Ferries operates the third largest ferry service in the world, carrying travellers between Vancouver Island and the British Columbia mainland on the country's west coast. This ferry service also serves other islands, including the Gulf Islands and Haida Gwaii. In 2015, BC Ferries carried more than 8 million vehicles and 20 million passengers. Vancouver itself is served by the SeaBus passenger ferry.
Canada's east coast has been home to numerous inter- and intra-provincial ferry and coastal services, including a large network operated by the federal government under CN Marine and later Marine Atlantic. Private and publicly owned ferry operations in eastern Canada include Marine Atlantic, serving the island of Newfoundland, as well as Bay Ferries, Northumberland Ferries, CTMA, Coastal Transport, and STQ. Canadian waters in the Great Lakes once hosted numerous ferry services, but these have been reduced to those offered by Owen Sound Transportation Company and several smaller operations. There are also several commuter passenger ferry services operated in major cities, such as Metro Transit in Halifax, and Toronto Island ferries in Toronto. There is also the Société des traversiers du Québec.
==== United States ====
Due to the North Carolina coast's geography, consisting of numerous sounds, inlets, tidal arms, and islands, ferry transportation is essential in the region. The state operates twelve routes, eight of which are under the oversight of the North Carolina Department of Transportation Ferry Division, three of which are under the direct oversight of the North Carolina Department of Transportation, and one of which is under the oversight of the North Carolina Division of Parks and Recreation. Three of the Ferry Division routes are tolled, and all ferry routes operated by the North Carolina Department of Transportation carry both vehicles and pedestrians, although certain vessels only carry pedestrians and cyclists. The National Park Service additionally works with private companies to offer ferry service to locations such as Cape Lookout and Portsmouth.
Washington State Ferries operates the most extensive ferry system in the continental United States and the second largest in the world by vehicles carried, with ten routes on Puget Sound and the Strait of Juan de Fuca serving terminals in Washington and Vancouver Island. In 2016, Washington State Ferries carried 10.5 million vehicles and 24.2 million riders in total.
The Alaska Marine Highway System provides service between Bellingham, Washington, and various towns and villages throughout Southeast and Southwest Alaska, including crossings of the Gulf of Alaska. AMHS provides affordable access to many small communities with no road connection or airport.
The Staten Island Ferry in New York City, sailing between the boroughs of Manhattan and Staten Island, is the nation's single busiest ferry route by passenger volume. Unlike riders on many other ferry services, Staten Island Ferry passengers do not pay any fare to ride it. New York City also has a network of smaller ferries, or water taxis, that shuttle commuters along the Hudson River from locations in New Jersey and Northern Manhattan down to the midtown, downtown and Wall Street business centers. Several ferry companies also offer service linking midtown and lower Manhattan with locations in the boroughs of Queens and Brooklyn, crossing the city's East River. New York City Mayor Bill de Blasio announced in February 2015 that the city would begin an expanded Citywide Ferry Service, which launched as NYC Ferry in 2017, linking heretofore relatively isolated communities such as Manhattan's Lower East Side, Soundview in The Bronx, Astoria and the Rockaways in Queens, and such Brooklyn neighborhoods as Bay Ridge, Sunset Park, and Red Hook with existing ferry landings in Lower Manhattan and Midtown Manhattan. A second expansion phase connected Staten Island to the West Side of Manhattan, and added a stop in Throgs Neck, in the Bronx. NYC Ferry is now the largest passenger fleet in the United States.
The New Orleans area also has many ferries that carry both vehicles and pedestrians. Most notable is the Algiers Ferry, which has been in continuous operation since 1827 and is one of the oldest operating ferries in North America.
In New England, vehicle-carrying ferry services between mainland Cape Cod and the islands of Martha's Vineyard and Nantucket are operated by The Woods Hole, Martha's Vineyard and Nantucket Steamship Authority, which sails year-round between Woods Hole and Vineyard Haven as well as Hyannis and Nantucket. Seasonal service is also operated from Woods Hole to Oak Bluffs during the summer and fall. As there are no bridges or tunnels connecting the islands to the mainland, the Steamship Authority ferries, in addition to being the only method for transporting private cars to or from the islands, also carry heavy freight and supplies, such as construction materials and fuel, competing with tug and barge companies. Additionally, Hy-Line Cruises operates high-speed catamaran service from Hyannis to both islands, and several smaller operations run seasonal passenger-only service primarily geared towards tourist day-trippers from other mainland ports, including New Bedford (New Bedford Fast Ferry), Falmouth (Island Queen ferry and Falmouth Ferry) and Harwich (Freedom Cruise Line). Ferries also bring riders and vehicles across Long Island Sound to such Connecticut cities as Bridgeport and New London, and to Block Island in Rhode Island from points on Long Island.
Transbay commuting in the San Francisco Bay Area was primarily ferry-based until the advent of automobiles in the 1940s, and most bridges in the area were built to supplant ferry services. By the 1970s, ferries were primarily used by tourists with Golden Gate Ferry, an organization under the ownership of the same governing body as the Golden Gate Bridge, left as the sole commute operator. The 1989 Loma Prieta earthquake prompted the restoration of service to the East Bay. The modern ferry network is primarily under the authority of San Francisco Bay Ferry, connecting with cities as far as Vallejo. Tourist excursions are also offered by Blue & Gold Fleet and Red & White Fleet. A ferry serves Angel Island (which also accepts private craft). Alcatraz is served exclusively by ferry service administered by the National Park Service.
Until the completion of the Mackinac Bridge in the 1950s, ferries were used for vehicle transportation between the Lower and the Upper Peninsulas of Michigan, across the Straits of Mackinac in the United States. Ferry service for bicycles and passengers continues across the straits for transport to Mackinac Island, where motorized vehicles are almost completely prohibited. This crossing is made possible by two ferry lines: Shepler's Ferry and Mackinac Island Ferry Company (formerly Star Line).
A ferry service runs between Milwaukee, Wisconsin and Muskegon, Michigan, operated by Lake Express. Another ferry, the SS Badger, operates between Manitowoc, Wisconsin and Ludington, Michigan. Both cross Lake Michigan.
Numerous additional inland ferry routes exist in the United States, such as the Cave-In-Rock Ferry across the Ohio River, and the Benton-Houston Ferry across the Tennessee River.
===== Modernization of ferry system =====
The FTA announced in September 2024 that it would award $300 million in grants to modernize ferry systems in the United States. The grants will support 18 projects across 14 states, with an emphasis on environmentally friendly propulsion systems; eight of the 18 projects will receive funding for that purpose.
One notable project is the San Francisco ferry system, which will receive $11.5 million to improve the connection between Treasure Island and Mission Bay. In Maine, the ferry system will be upgraded in Lincolnville and Islesboro. Additionally, Alaska will receive a significant $106.4 million grant to replace a 60-year-old vessel operating in the southwest. This vessel is a crucial connector for the region.
These grants are part of the FTA's efforts to improve ferry transportation in the United States and promote sustainable transportation options.
==== Mexico ====
Mexico has ferry services run by Baja Ferries that connect La Paz located on the Baja California Peninsula with Mazatlán and Topolobampo. Passenger ferries also run from Playa del Carmen to the island of Cozumel.
=== South America ===
There are several ferries in South America; the Chacao Channel, for example, has ferry lines.
=== Oceania ===
==== Australia ====
In Australia, two Spirit of Tasmania ferries carry passengers and vehicles 450 kilometres (280 mi) across Bass Strait, the body of water that separates Tasmania from the Australian mainland, often under turbulent sea conditions. These run overnight but also include day crossings at peak times. Both ferries are based in the northern Tasmanian port city of Devonport and sail to Geelong; before Geelong, the service sailed to Melbourne.
The double-ended Freshwater-class ferry cuts an iconic shape as it makes its way up and down Sydney Harbour New South Wales, Australia between Manly and Circular Quay.
==== New Zealand ====
In New Zealand, ferries connect Wellington in the North Island with Picton in the South Island, linking New Zealand's two main islands. The route is 92 kilometres (57 mi) and is run by two companies – government-owned Interislander and independent Bluebridge – who say the trip takes three and a half hours.
== Types ==
Ferry designs depend on the length of the route, the passenger or vehicle capacity required, speed requirements and the water conditions the craft must deal with.
=== Double-ended ===
Double-ended ferries have interchangeable bows and sterns, allowing them to shuttle back and forth between two terminals without having to turn around. Well-known double-ended ferry systems include the BC Ferries, the Staten Island Ferry, Washington State Ferries, Star Ferry, several ferries on the North Carolina Ferry System, and the Lake Champlain Transportation Company. Most Norwegian fjord and coastal ferries are double-ended vessels. All ferries from southern Prince Edward Island to the mainland of Canada were double-ended; this service was discontinued upon completion of the Confederation Bridge. Some ferries in Sydney, Australia and British Columbia are also double-ended. In 2008, BC Ferries launched the first of the Coastal-class ferries, which at the time were the world's largest double-enders. They were surpassed when P&O Ferries launched their first double-ender, the P&O Pioneer, which entered service in June 2023, replacing Pride of Kent.
=== Hydrofoil ===
Hydrofoils have the advantage of higher cruising speeds, succeeding hovercraft on some English Channel routes where the ferries now compete against the Eurotunnel and Eurostar trains that use the Channel Tunnel. Passenger-only hydrofoils also proved a practical, fast and relatively economical solution in the Canary Islands, but were recently replaced by faster catamaran "high speed" ferries that can carry cars. Their replacement by the larger craft is seen by critics as a retrograde step given that the new vessels use much more fuel and foster the inappropriate use of cars in islands already suffering from the impact of mass tourism.
=== Hovercraft ===
Hovercraft were developed in the 1960s and 1970s to carry cars. The largest was the massive SR.N4, which carried cars in its centre section with ramps at the bow and stern, operating between England and France. The hovercraft was superseded by catamarans, which are nearly as fast and are less affected by sea and weather conditions. Only one service now remains, a foot passenger service between Portsmouth and the Isle of Wight run by Hovertravel. From 1984 to 1994, Scandinavian Airlines operated a hovercraft service between Malmö and Copenhagen Airport as a connecting "flight" for passengers from southern Sweden. The service was replaced by a regular boat in 1994 and by the Öresund bridge in 2000.
=== Catamaran ===
Since 1990, high-speed catamarans have revolutionised ferry services, replacing hovercraft, hydrofoils and conventional monohull ferries. In the 1990s there were a variety of builders, but the industry has consolidated to two builders of large vehicular ferries between 60 and 120 metres: Incat of Hobart, Tasmania favours a wave-piercing hull to deliver a smooth ride, while Austal of Perth, Western Australia builds ships based on SWATH designs. Both these companies also compete in the smaller river ferry industry with a number of other shipbuilders.
Stena Line once operated the largest catamarans in the world, the Stena HSS class, between the United Kingdom and Ireland. These waterjet-powered vessels displaced 19,638 tonnes and accommodated 375 passenger cars and 1,500 passengers. Other examples of these super-size catamarans are found in the Condor Ferries fleet with the Condor Voyager and Rapide.
=== Roll-on/roll-off ===
Roll-on/roll-off ferries (RORO) are large conventional ferries named for the ease by which vehicles can board and leave.
=== Cruiseferry / RoPax ===
A cruiseferry is a ship that combines the features of a cruise ship with a roll-on/roll-off ferry. They are also known as RoPax for their combined Roll on/Roll Off and passenger design.
=== Fast RoPax ferry ===
Fast RoPax ferries are conventional ferries with a large garage intake and a relatively large passenger capacity, with conventional diesel propulsion and propellers that sail over 25 knots (46 km/h; 29 mph). Pioneering this class of ferries was Attica Group, when it introduced Superfast I between Greece and Italy in 1995 through its subsidiary company Superfast Ferries. Cabins, if existent, are much smaller than those on cruise ships.
=== Turntable ferry ===
This type of ferry allows vehicles to load from the side. When loading, the vehicle platform is turned sideways so vehicles can drive on; once loaded, the platform is turned back in line with the vessel for the crossing.
=== Pontoon ferry ===
Pontoon ferries and flat-bottomed boats such as punts carry passengers and vehicles across rivers and lakes and are widely used in less-developed countries with large rivers where the cost of bridge construction is prohibitive. One or more vehicles are carried on such ferries with ramps at either end for vehicles or animals to board. Cable ferries are usually pontoon ferries. In the Netherlands, Belgium and Germany many such small cable ferries exist and are called püntes.
=== Train ferry ===
A train ferry is a ship designed to carry railway vehicles. Typically, one level of the ship is fitted with railway tracks, and the vessel has a door at either or both of the front and rear to give access to the wharves.
=== Foot ferry ===
Foot ferries are small craft used to ferry foot passengers, and often also cyclists, over rivers. These are either self-propelled craft or cable ferries. Such ferries are for example to be found on the lower River Scheldt in Belgium and in particular the Netherlands. Regular foot ferry service also exists in the capital of the Czech Republic, Prague, and across the Yarra River in Melbourne, Australia at Newport. Restored, expanded ferry service in the Port of New York and New Jersey uses boats for pedestrians only.
The UK has a variety of historic foot ferries such as the Butley Foot Ferry across Butley Creek which dates back to 1383.
=== Cable ferry ===
Very short distances may be crossed by a cable or chain ferry, which is usually a pontoon ferry (see above), where the ferry is propelled along and steered by cables connected to each shore. Sometimes the cable ferry is human powered by someone on the boat. Reaction ferries are cable ferries that use the perpendicular force of the current as a source of power; examples of current-propelled ferries are the four Rhine ferries in Basel, Switzerland. Cable ferries may be used in fast-flowing rivers across short distances. With an ocean crossing of approximately 1,900 metres, the cable ferry between Vancouver Island and Denman Island in British Columbia is the longest in the world.
Free ferries operate in some parts of the world, such as at Woolwich in London, England (across the River Thames); in Amsterdam, Netherlands (across the IJ waterway); along the Murray River in South Australia; and across many lakes in British Columbia. Many cable ferries operate on lakes and rivers in Canada, among them a tolled cable ferry on the Rivière des Prairies between Laval-sur-le-Lac and Île Bizard in Quebec. In Finland there were 40 road ferries (cable ferries) in 2009, on lakes, rivers and on sea routes between islands.
== Air ferries ==
In the 1950s and 1960s, travel on an "air ferry" was possible—airplanes, often ex-military, specially equipped to take a small number of cars in addition to foot passengers. These operated various routes including between the United Kingdom and Continental Europe. Companies operating such services included Channel Air Bridge, Silver City Airways, and Corsair.
The term is also applied to any "ferrying" by air, and is commonly used when referring to airborne military operations.
== Docking ==
Ferries often dock at specialized facilities designed to position the boat for loading and unloading, called a ferry slip. If the ferry transports road vehicles or railway carriages, there will usually be an adjustable ramp called an apron that is part of the slip. In other cases, the apron ramp is part of the ferry itself, acting as a wave guard when elevated and lowered to meet a fixed ramp at the terminus – a road segment that extends partially underwater – or the ferry slip.
== Records ==
=== Gross tonnage ===
The world's largest ferries are typically those operated in Europe, with different vessels holding the record depending on whether length, gross tonnage or vehicle capacity is the metric.
=== Oldest ===
A strong contender as the oldest ferry in continuous operation is the Mersey Ferry from Liverpool to Birkenhead, England. In 1150, the Benedictine Priory at Birkenhead was established; the monks used to charge a small fare to row passengers across the estuary. In 1330, Edward III granted a charter to the Priory and its successors for ever: "the right of ferry there... for men, horses and goods, with leave to charge reasonable tolls". However, there may have been a short break following the Dissolution of the Monasteries after 1536.
On 11 October 1811, inventor John Stevens's ship, the Juliana, began operation as the first steam-powered ferry, with service between New York City and Hoboken, New Jersey.
The Elwell Ferry, a cable ferry in North Carolina, travels a distance of 110 yards (100 m), shore to shore, with a travel time of five minutes.
=== Largest networks ===
Waxholmsbolaget – 21 vessels serving around 300 ports of call in the Stockholm archipelago.
Istanbul Ferry Network – 87 vessels serving 86 ports of call in and around the Bosporus of Istanbul, Turkey.
BC Ferries – 36 vessels serving 47 ports of call along the west coast of British Columbia, Canada, carrying 22.3 million passengers annually.
Caledonian MacBrayne – 31 vessels serving 50 ports of call along the west coast of Scotland, carrying 1.43 million passengers annually.
Sydney Ferries – 31 vessels serving 36 ports of call in Port Jackson (Sydney Harbour), carrying 15.3 million passengers annually.
Washington State Ferries – 21 vessels serving 20 ports of call around Puget Sound of Washington, United States, carrying 24.2 million passengers annually.
Metrolink Queensland – 21 vessels serving 26 ports of call along the Brisbane River in Brisbane, Australia, carrying 2.7 million passengers annually.
Société des traversiers du Québec
=== Busiest networks ===
Istanbul Ferry Network – 40 million passengers annually.
Washington State Ferries – 24.2 million passengers annually.
Staten Island Ferry in New York City – 23.9 million passengers annually; busiest single-line ferry in the world.
Amsterdam GVB Ferries – 22.4 million passengers annually.
BC Ferries – 22.3 million passengers annually.
Star Ferry in Hong Kong – 19.7 million passengers annually.
=== Fastest ===
The gas turbine powered Luciano Federico L operated by Montevideo-based Buquebus, holds the Guinness World Record for the fastest car ferry in the world, in service between Montevideo, Uruguay and Buenos Aires, Argentina: its maximum speed, achieved in sea trials, was 60.2 knots (111.5 km/h; 69.3 mph). It can carry 450 passengers and 52 cars along the 110-nautical-mile (200 km; 130 mi) route.
== Sustainability ==
The contributions of ferry travel to climate change have received less scrutiny than land and air transport, and vary considerably according to factors like speed and the number of passengers carried. Average carbon dioxide emissions by ferries per passenger-kilometre seem to be 0.12 kg (4.2 oz). However, 18-knot (21 mph; 33 km/h) ferries between Finland and Sweden produce 0.221 kg (7.8 oz) of CO2, with total emissions equalling a CO2 equivalent of 0.223 kg (7.9 oz), while 24–27-knot (28–31 mph; 44–50 km/h) ferries between Finland and Estonia produce 0.396 kg (14.0 oz) of CO2 with total emissions equalling a CO2 equivalent of 0.4 kg (14 oz).
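The per-passenger-kilometre figures above can be turned into per-trip comparisons by simple multiplication. A small illustrative sketch (the emission factors are from the text; the 100 km trip length is a hypothetical example, not a figure from the source):

```python
# Rough per-passenger CO2 comparison, using the per-passenger-kilometre
# figures quoted above. The trip length is a hypothetical illustration.
EMISSION_FACTORS = {  # kg CO2 per passenger-kilometre
    "average ferry": 0.12,
    "18-knot Finland-Sweden ferry": 0.221,
    "24-27-knot Finland-Estonia ferry": 0.396,
}

def trip_emissions(distance_km: float) -> dict:
    """Return kg of CO2 per passenger for each ferry type over the trip."""
    return {name: round(f * distance_km, 1) for name, f in EMISSION_FACTORS.items()}

print(trip_emissions(100))
```

Over the same distance, the figures imply that a fast Finland–Estonia ferry emits over three times as much CO2 per passenger as the average ferry.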
=== Alternative fuels ===
With the price of oil at high levels, and with increasing pressure from consumers for measures to tackle global warming, a number of innovations for energy and the environment were put forward at the Interferry conference in Stockholm. According to the company Solar Sailor, hybrid marine power and solar wing technology are suitable for use with ferries, private yachts and even tankers.
Alternative fuels are becoming more widespread on ferries. The world's fastest passenger ferry, operated by Buquebus, runs on LNG, while Sweden's Stena converted one of its ferries to run on both diesel and methanol in 2015. Both LNG and methanol reduce CO2 emissions considerably and replace costly diesel fuel.
Megawatt-class battery electric ferries operate in Scandinavia, with several more scheduled for operation. As of 2017, the world's biggest purely electric ferry was the MF Tycho Brahe, which operates on the Helsingør–Helsingborg ferry route across the Øresund between Denmark and Sweden. The ferry weighs 8,414 tonnes and has an electric storage capacity of more than 4 MWh.
Since 2015, Norwegian ferry company Norled has operated e-ferry Ampere on the Lavik-Opedal connection on the E39 north of Bergen. Further north on the Norwegian west coast, the connection between Anda and Lote will be the world's first route served only by e-ferries. The first of two ships, MF Gloppefjord, was put into service in January 2018, followed by MF Eidsfjord. The owner, Fjord1, has commissioned a further seven battery-powered ferries to be in operation from 2020. A total of 60 battery powered car ferries are expected to be operational in Norway by 2021.
Since 15 August 2019, Ærø Municipality have operated E-ferry Ellen between the southern Danish ports of Fynshav and Søby, on the island of Ærø. The e-ferry is capable of carrying 30 vehicles and 200 passengers and is powered by a battery "with an unprecedented capacity" of 4.3 MWh (5,800 hp⋅h). The vessel can sail up to 22 nautical miles (25 mi; 41 km) between charges – seven times further than previously possible for an e-ferry. It will now need to prove it can provide up to seven return trips per day. The European Union, which supported the project, aims to roll out 100 or more of these ferries by 2030.
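The battery capacity and range quoted above give a rough sense of the vessel's energy consumption. A back-of-the-envelope sketch (it assumes, simplistically, that the full rated battery is usable over the stated range, which overstates real-world consumption margins):

```python
# Back-of-the-envelope energy figures for the e-ferry Ellen, from the
# 4.3 MWh battery capacity and 22-nautical-mile range quoted above.
# Simplifying assumption: the full battery is usable over that range.
BATTERY_KWH = 4300   # 4.3 MWh expressed in kWh
RANGE_NMI = 22       # nautical miles between charges
NMI_TO_KM = 1.852    # kilometres per nautical mile

kwh_per_nmi = BATTERY_KWH / RANGE_NMI
kwh_per_km = kwh_per_nmi / NMI_TO_KM
print(f"{kwh_per_nmi:.0f} kWh per nautical mile ({kwh_per_km:.0f} kWh/km)")
```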
A special feature is the Danish Udbyhøj cable ferry in Randers Fjord which has a land-based power supply by means of a retractable submarine cable.
== Accidents ==
The following notable maritime disasters involved ferries:
== See also ==
== References ==
=== Notes ===
=== Bibliography ===
== External links ==
"Off Ferries, New And Old", May 1931, Popular Science
"Ferry" . Encyclopædia Britannica (11th ed.). 1911. |
Fighter aircraft | Fighter aircraft (early on also pursuit aircraft) are military aircraft designed primarily for air-to-air combat. In military conflict, the role of fighter aircraft is to establish air superiority of the battlespace. Domination of the airspace above a battlefield permits bombers and attack aircraft to engage in tactical and strategic bombing of enemy targets, and helps prevent the enemy from doing the same.
The key performance features of a fighter include not only its firepower but also its high speed and maneuverability relative to the target aircraft. The success or failure of a combatant's efforts to gain air superiority hinges on several factors including the skill of its pilots, the tactical soundness of its doctrine for deploying its fighters, and the numbers and performance of those fighters.
Many modern fighter aircraft also have secondary capabilities such as ground attack and some types, such as fighter-bombers, are designed from the outset for dual roles. Other fighter designs are highly specialized while still filling the main air superiority role, and these include the interceptor and, historically, the heavy fighter and night fighter.
== History ==
Since World War I, achieving and maintaining air superiority has been considered essential for victory in conventional warfare.
Fighters continued to be developed throughout World War I, to deny enemy aircraft and dirigibles the ability to gather information by reconnaissance over the battlefield. Early fighters were very small and lightly armed by later standards, and most were biplanes built with a wooden frame covered with fabric, with a maximum airspeed of about 100 mph (160 km/h). A successful German biplane, the Albatros, however, was built with a plywood shell rather than fabric, which created a stronger, faster airplane. As control of the airspace over armies became increasingly important, all of the major powers developed fighters to support their military operations. Between the wars, wood was replaced in part or whole by metal tubing, and finally aluminum stressed-skin structures (monocoque) began to predominate.
By World War II, most fighters were all-metal monoplanes armed with batteries of machine guns or cannons and some were capable of speeds approaching 400 mph (640 km/h). Most fighters up to this point had one engine, but a number of twin-engine fighters were built; however they were found to be outmatched against single-engine fighters and were relegated to other tasks, such as night fighters equipped with radar sets.
By the end of the war, turbojet engines were replacing piston engines as the means of propulsion, further increasing aircraft speed. Since the weight of the turbojet engine was far less than a piston engine, having two engines was no longer a handicap and one or two were used, depending on requirements. This in turn required the development of ejection seats so the pilot could escape, and G-suits to counter the much greater forces being applied to the pilot during maneuvers.
In the 1950s, radar was fitted to day fighters, since due to ever increasing air-to-air weapon ranges, pilots could no longer see far enough ahead to prepare for the opposition. Subsequently, radar capabilities grew enormously and are now the primary method of target acquisition. Wings were made thinner and swept back to reduce transonic drag, which required new manufacturing methods to obtain sufficient strength. Skins were no longer sheet metal riveted to a structure, but milled from large slabs of alloy. The sound barrier was broken, and after a few false starts due to required changes in controls, speeds quickly reached Mach 2, past which aircraft cannot maneuver sufficiently to avoid attack.
Air-to-air missiles largely replaced guns and rockets in the early 1960s, since both were believed unusable at the speeds being attained; however, the Vietnam War showed that guns still had a role to play, and most fighters built since then are fitted with cannon (typically between 20 and 30 mm (0.79 and 1.18 in) in caliber) in addition to missiles. Most modern combat aircraft can carry at least a pair of air-to-air missiles.
In the 1970s, turbofans replaced turbojets, improving fuel economy enough that the last piston-engined support aircraft could be replaced with jets, making multi-role combat aircraft possible. Honeycomb structures began to replace milled structures, and the first composite parts began to appear on components subjected to little stress.
With the steady improvements in computers, defensive systems have become increasingly efficient. To counter this, stealth technologies have been pursued by the United States, Russia, India and China. The first step was to find ways to reduce the aircraft's reflectivity to radar waves by burying the engines, eliminating sharp corners and diverting any reflections away from the radar sets of opposing forces. Various materials were found to absorb the energy from radar waves, and were incorporated into special finishes that have since found widespread application. Composite structures have become widespread, including major structural components, and have helped to counterbalance the steady increases in aircraft weight—most modern fighters are larger and heavier than World War II medium bombers.
Because of the importance of air superiority, since the early days of aerial combat armed forces have constantly competed to develop technologically superior fighters and to deploy these fighters in greater numbers, and fielding a viable fighter fleet consumes a substantial proportion of the defense budgets of modern armed forces.
The global combat aircraft market was worth $45.75 billion in 2017 and is projected by Frost & Sullivan at $47.2 billion in 2026: 35% modernization programs and 65% aircraft purchases, dominated by the Lockheed Martin F-35 with 3,000 deliveries over 20 years.
== Classification ==
A fighter aircraft is primarily designed for air-to-air combat. A given type may be designed for specific combat conditions, and in some cases for additional roles such as air-to-ground fighting. Historically the British Royal Flying Corps and Royal Air Force referred to them as "scouts" until the early 1920s, while the U.S. Army called them "pursuit" aircraft until the late 1940s (using the designation P, as in Curtiss P-40 Warhawk, Republic P-47 Thunderbolt and Bell P-63 Kingcobra); both services then adopted the term "fighter". A short-range fighter designed to defend against incoming enemy aircraft is known as an interceptor.
Recognized classes of fighter include:
Air superiority fighter
Fighter-bomber
Heavy fighter
Interceptor
Light fighter
All-weather fighter (including the night fighter)
Reconnaissance fighter
Strategic fighter (including the escort fighter and strike fighter)
Of these, the fighter-bomber, reconnaissance fighter and strike fighter classes are dual-role, possessing qualities of the fighter alongside some other battlefield role. Some fighter designs may be developed in variants performing other roles entirely, such as ground attack or unarmed reconnaissance. This may be for political or national security reasons, for advertising purposes, or other reasons.
The Sopwith Camel and other "fighting scouts" of World War I performed a great deal of ground-attack work. In World War II, the USAAF and RAF often favored fighters over dedicated light bombers or dive bombers, and types such as the Republic P-47 Thunderbolt and Hawker Hurricane that were no longer competitive as aerial combat fighters were relegated to ground attack. Several aircraft, such as the F-111 and F-117, received fighter designations for political or other reasons despite having no real fighter capability; the F-111B variant was originally intended for a fighter role with the U.S. Navy, but was canceled. This blurring follows the use of fighters from their earliest days for "attack" or "strike" operations against ground targets by means of strafing or dropping small bombs and incendiaries. Versatile multi-role fighter-bombers such as the McDonnell Douglas F/A-18 Hornet are a less expensive option than having a range of specialized aircraft types.
Some of the most expensive fighters such as the US Grumman F-14 Tomcat, McDonnell Douglas F-15 Eagle, Lockheed Martin F-22 Raptor and Russian Sukhoi Su-27 were employed as all-weather interceptors as well as air superiority fighter aircraft, while commonly developing air-to-ground roles late in their careers. An interceptor is generally an aircraft intended to target (or intercept) bombers and so often trades maneuverability for climb rate.
As a part of military nomenclature, a letter is often assigned to various types of aircraft to indicate their use, along with a number to indicate the specific aircraft. The letters used to designate a fighter differ in various countries. In the English-speaking world, "F" is often now used to indicate a fighter (e.g. Lockheed Martin F-35 Lightning II or Supermarine Spitfire F.22), though "P" used to be used in the US for pursuit (e.g. Curtiss P-40 Warhawk), a translation of the French "C" (Dewoitine D.520 C.1) for Chasseur while in Russia "I" was used for Istrebitel, or exterminator (Polikarpov I-16).
=== Air superiority fighter ===
As fighter types have proliferated, the air superiority fighter emerged as a specific role at the pinnacle of speed, maneuverability, and air-to-air weapon systems – able to hold its own against all other fighters and establish its dominance in the skies above the battlefield.
=== Interceptor ===
The interceptor is a fighter designed specifically to intercept and engage approaching enemy aircraft. There are two general classes of interceptor: relatively lightweight aircraft in the point-defence role, built for fast reaction, high performance and short range; and heavier aircraft with more comprehensive avionics, designed to fly at night or in all weathers and to operate over longer ranges. The class originated during World War I, and by 1929 had become known as the interceptor.
=== Night and all-weather fighters ===
The equipment necessary for daytime flight is inadequate when flying at night or in poor visibility. The night fighter was developed during World War I with additional equipment to aid the pilot in flying straight, navigating and finding the target. From modified variants of the Royal Aircraft Factory B.E.2c in 1915, the night fighter has evolved into the highly capable all-weather fighter.
=== Strategic fighters ===
The strategic fighter is a fast, heavily armed and long-range type, able to act as an escort fighter protecting bombers, to carry out offensive sorties of its own as a penetration fighter, and to maintain standing patrols at significant distance from its home base.
Bombers are vulnerable due to their low speed, large size and poor maneuverability. The escort fighter was developed during World War II to come between the bombers and enemy attackers as a protective shield. The primary requirement was for long range, with several heavy fighters given the role. However, they too proved unwieldy and vulnerable, so as the war progressed techniques such as drop tanks were developed to extend the range of more nimble conventional fighters.
The penetration fighter is typically also fitted for the ground-attack role, and so is able to defend itself while conducting attack sorties.
== Piston engine fighters ==
=== 1914–1918: World War I ===
The word "fighter" was first used to describe a two-seat aircraft carrying a machine gun (mounted on a pedestal) and its operator as well as the pilot. Although the term was coined in the United Kingdom, the first examples were the French Voisin pushers beginning in 1910, and a Voisin III would be the first to shoot down another aircraft, on 5 October 1914.
However, at the outbreak of World War I, front-line aircraft were mostly unarmed and used almost exclusively for reconnaissance. On 15 August 1914, Miodrag Tomić encountered an enemy airplane while on a reconnaissance flight over Austria-Hungary, whose pilot fired at his aircraft with a revolver; Tomić fired back. It was believed to be the first exchange of fire between aircraft. Within weeks, all Serbian and Austro-Hungarian aircraft were armed.
Another type of military aircraft formed the basis for an effective "fighter" in the modern sense of the word. It was based on small fast aircraft developed before the war for air racing, such as the Gordon Bennett Cup and Schneider Trophy. The military scout airplane was not expected to carry serious armament, but rather to rely on speed to "scout" a location and return quickly to report, making it in effect a flying horse. British scout aircraft, in this sense, included the Sopwith Tabloid and Bristol Scout. The French and the Germans had no equivalent, as they used two-seaters for reconnaissance, such as the Morane-Saulnier L, but would later modify pre-war racing aircraft into armed single-seaters. Scouts quickly proved of little use in their intended role, since the pilot could not record what he saw while also flying, and military leaders usually ignored what the pilots reported anyway.
Attempts were made with handheld weapons such as pistols and rifles and even light machine guns, but these were ineffective and cumbersome. The next advance came with the fixed forward-firing machine gun: the pilot pointed the entire aircraft at the target and fired the gun, instead of relying on a second gunner. Roland Garros bolted metal deflector plates to the propeller so that the aircraft would not shoot off its own propeller, and a number of Morane-Saulnier Ns were modified accordingly. The technique proved effective; however, the deflected bullets were still highly dangerous.
Soon after the commencement of the war, pilots armed themselves with pistols, carbines, grenades, and an assortment of improvised weapons. Many of these proved ineffective as the pilot had to fly his airplane while attempting to aim a handheld weapon and make a difficult deflection shot. The first step in finding a real solution was to mount the weapon on the aircraft, but the propeller remained a problem since the best direction to shoot is straight ahead. Numerous solutions were tried. A second crew member behind the pilot could aim and fire a swivel-mounted machine gun at enemy airplanes; however, this limited the area of coverage chiefly to the rear hemisphere, and effective coordination of the pilot's maneuvering with the gunner's aiming was difficult. This option was chiefly employed as a defensive measure on two-seater reconnaissance aircraft from 1915 on. Both the SPAD S.A and the Royal Aircraft Factory B.E.9 added a second crewman ahead of the engine in a pod but this was both hazardous to the second crewman and limited performance. The Sopwith L.R.T.Tr. similarly added a pod on the top wing with no better luck.
An alternative was to build a "pusher" scout such as the Airco DH.2, with the propeller mounted behind the pilot. The main drawback was that the high drag of a pusher type's tail structure made it slower than a similar "tractor" aircraft.
A better solution for a single-seat scout was to mount the machine gun (rifles and pistols having been dispensed with) to fire forwards but outside the propeller arc. Wing guns were tried, but the unreliable weapons available required frequent clearing of jammed rounds and misfires and remained impractical until after the war. Mounting the machine gun over the top wing worked well and remained in use long after the ideal solution was found. The Nieuport 11 of 1916 used this system with considerable success; although the placement made aiming and reloading difficult, it continued in use throughout the war because such guns were lighter and had a higher rate of fire than synchronized weapons. The British Foster mounting and several French mountings were specifically designed for this kind of application, fitted with either the Hotchkiss or Lewis machine gun, which due to their design were unsuitable for synchronizing.

The need to arm a tractor scout with a forward-firing gun whose bullets passed through the propeller arc was evident even before the outbreak of war, and inventors in both France and Germany devised mechanisms that could time the firing of the individual rounds to avoid hitting the propeller blades. Franz Schneider, a Swiss engineer, had patented such a device in Germany in 1913, but his original work was not followed up. French aircraft designer Raymond Saulnier patented a practical device in April 1914, but trials were unsuccessful because of the propensity of the machine gun employed to hang fire due to unreliable ammunition. In December 1914, French aviator Roland Garros asked Saulnier to install his synchronization gear on Garros' Morane-Saulnier Type L parasol monoplane. Unfortunately, the gas-operated Hotchkiss machine gun he was provided had an erratic rate of fire and it was impossible to synchronize it with the propeller. As an interim measure, the propeller blades were fitted with metal wedges to protect them from ricochets.
Garros' modified monoplane first flew in March 1915 and he began combat operations soon after. Garros scored three victories in three weeks before he himself was downed on 18 April, and his airplane, along with its synchronization gear and propeller, was captured by the Germans.

Meanwhile, the synchronization gear (called the Stangensteuerung in German, for "pushrod control system") devised by the engineers of Anthony Fokker's firm was the first system to enter service. It would usher in what the British called the "Fokker scourge" and a period of air superiority for the German forces, making the Fokker Eindecker monoplane a feared name over the Western Front, despite its being an adaptation of an obsolete pre-war French Morane-Saulnier racing airplane with poor flight characteristics and, by now, mediocre performance. The first Eindecker victory came on 1 July 1915, when Leutnant Kurt Wintgens, of Feldflieger Abteilung 6 on the Western Front, downed a Morane-Saulnier Type L. His was one of five Fokker M.5K/MG prototypes for the Eindecker, and was armed with a synchronized aviation version of the Parabellum MG14 machine gun.

The success of the Eindecker kicked off a competitive cycle of improvement among the combatants, both sides striving to build ever more capable single-seat fighters. The Albatros D.I and Sopwith Pup of 1916 set the classic pattern followed by fighters for about twenty years. Most were biplanes, only rarely monoplanes or triplanes. The strong box structure of the biplane provided a rigid wing that allowed the accurate control essential for dogfighting. They had a single operator, who flew the aircraft and also controlled its armament. They were armed with one or two Maxim or Vickers machine guns, which were easier to synchronize than other types, firing through the propeller arc. Gun breeches were in front of the pilot, with obvious implications in case of accidents, but jams could be cleared in flight, while aiming was simplified.
The use of metal aircraft structures was pioneered before World War I by Breguet but would find its biggest proponent in Anthony Fokker, who used chrome-molybdenum steel tubing for the fuselage structure of all his fighter designs, while the innovative German engineer Hugo Junkers developed two all-metal, single-seat fighter monoplane designs with cantilever wings: the strictly experimental Junkers J 2 private-venture aircraft, made with steel, and some forty examples of the Junkers D.I, made with corrugated duralumin, all based on his experience in creating the pioneering Junkers J 1 all-metal airframe technology demonstration aircraft of late 1915. While Fokker would pursue steel tube fuselages with wooden wings until the late 1930s, and Junkers would focus on corrugated sheet metal, Dornier was the first to build a fighter (the Dornier-Zeppelin D.I) made with pre-stressed sheet aluminum and having cantilevered wings, a form that would replace all others in the 1930s. As collective combat experience grew, the more successful pilots such as Oswald Boelcke, Max Immelmann, and Edward Mannock developed innovative tactical formations and maneuvers to enhance their air units' combat effectiveness.
Allied and, before 1918, German pilots of World War I were not equipped with parachutes, so in-flight fires or structural failures were often fatal. Parachutes were well developed by 1918, having previously been used by balloonists, and were adopted by the German flying services during the course of that year. The well-known Manfred von Richthofen, the "Red Baron", was wearing one when he was killed, but the Allied command continued to oppose their use on various grounds.
In April 1917, during a brief period of German aerial supremacy, a British pilot's life expectancy was calculated at 93 flying hours, or about three weeks of active service. More than 50,000 airmen from both sides died during the war.
=== 1919–1938: Inter-war period ===
Fighter development stagnated between the wars, especially in the United States and the United Kingdom, where budgets were small. In France, Italy and Russia, where large budgets continued to allow major development, both monoplanes and all-metal structures were common. By the end of the 1920s, however, those countries had overspent themselves and were overtaken in the 1930s by those powers that had not been spending heavily, namely the British, the Americans, the Spanish (in the Spanish Civil War) and the Germans.
Given limited budgets, air forces were conservative in aircraft design, and biplanes remained popular with pilots for their agility; they stayed in service long after they had ceased to be competitive. Designs such as the Gloster Gladiator, Fiat CR.42 Falco, and Polikarpov I-15 were common even in the late 1930s, and many were still in service as late as 1942. Up until the mid-1930s, the majority of fighters in the US, the UK, Italy and Russia remained fabric-covered biplanes.
Fighter armament eventually began to be mounted inside the wings, outside the arc of the propeller, though most designs retained two synchronized machine guns directly ahead of the pilot, where they were more accurate (that being the strongest part of the structure, reducing the vibration to which the guns were subjected). Shooting with this traditional arrangement was also easier because the guns fired directly ahead in the direction of the aircraft's flight, up to the limit of their range; wing-mounted guns, by contrast, had to be harmonised, that is, preset by ground crews to fire at a slight inward angle so that their bullets would converge on a target area a set distance ahead of the fighter. Rifle-calibre .30 and .303 in (7.62 and 7.70 mm) guns remained the norm, with larger weapons either being too heavy and cumbersome or deemed unnecessary against such lightly built aircraft. It was not considered unreasonable to use World War I-style armament to counter enemy fighters, as there was insufficient air-to-air combat during most of the period to disprove this notion.
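The harmonisation described above is simple trigonometry: each wing gun is toed in by the angle whose tangent is the gun's lateral offset from the aircraft's centreline divided by the chosen convergence distance. The sketch below illustrates the geometry; the offset and convergence figures are illustrative only, not taken from any particular aircraft.

```python
import math

def harmonisation_angle_deg(gun_offset_m: float, convergence_m: float) -> float:
    """Toe-in angle (degrees) for a wing gun mounted gun_offset_m from the
    centreline so that its rounds cross the centreline convergence_m ahead."""
    return math.degrees(math.atan2(gun_offset_m, convergence_m))

# A gun 2.5 m out from the centreline, harmonised at 250 m,
# needs only a fraction of a degree of toe-in:
angle = harmonisation_angle_deg(2.5, 250.0)
print(f"{angle:.2f} degrees")  # prints "0.57 degrees"
```

Because the required angle is well under a degree at typical convergence distances of a few hundred metres, harmonisation had to be set precisely on the ground, and rounds were only concentrated near the chosen distance, which is one reason nose-mounted guns were considered easier to shoot with.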
The rotary engine, popular during World War I, quickly disappeared, its development having reached the point where rotational forces prevented more fuel and air from being delivered to the cylinders, which limited horsepower. They were replaced chiefly by the stationary radial engine though major advances led to inline engines gaining ground with several exceptional engines—including the 1,145 cu in (18,760 cm3) V-12 Curtiss D-12. Aircraft engines increased in power several-fold over the period, going from a typical 180 hp (130 kW) in the 900 kg (2,000 lb) Fokker D.VII of 1918 to 900 hp (670 kW) in the 2,500 kg (5,500 lb) Curtiss P-36 of 1936. The debate between the sleek in-line engines versus the more reliable radial models continued, with naval air forces preferring the radial engines, and land-based forces often choosing inlines. Radial designs did not require a separate (and vulnerable) radiator, but had increased drag. Inline engines often had a better power-to-weight ratio.
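The engine figures quoted above can be turned into a quick power-to-weight comparison. The numbers below are only the ones given in the text, so this is an illustration of the trend rather than a precise performance measure:

```python
# Power-to-weight from the figures quoted in the text:
# Fokker D.VII (1918): 180 hp at ~900 kg; Curtiss P-36 (1936): 900 hp at ~2,500 kg.
fighters = {
    "Fokker D.VII (1918)": (180.0, 900.0),    # (horsepower, weight in kg)
    "Curtiss P-36 (1936)": (900.0, 2500.0),
}
for name, (hp, kg) in fighters.items():
    print(f"{name}: {hp / kg:.3f} hp/kg")
# Engine power rose fivefold over the period, while power-to-weight
# roughly doubled; the rest of the gain went into heavier structures,
# fuel, and equipment.
```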
Some air forces experimented with "heavy fighters" (called "destroyers" by the Germans). These were larger, usually twin-engined aircraft, sometimes adaptations of light or medium bomber types. Such designs typically had greater internal fuel capacity (thus longer range) and heavier armament than their single-engine counterparts. In combat, they proved vulnerable to more agile single-engine fighters.
The primary driver of fighter innovation, right up to the period of rapid re-armament in the late 1930s, was not military budgets but civilian aircraft racing. Aircraft designed for these races introduced innovations like streamlining and more powerful engines that would find their way into the fighters of World War II. The most significant were the Schneider Trophy races, where competition grew so fierce that only national governments could afford to enter.
At the very end of the inter-war period in Europe came the Spanish Civil War. This was just the opportunity the German Luftwaffe, Italian Regia Aeronautica, and the Soviet Union's Voenno-Vozdushnye Sily needed to test their latest aircraft. Each party sent numerous aircraft types to support their sides in the conflict. In the dogfights over Spain, the latest Messerschmitt Bf 109 fighters did well, as did the Soviet Polikarpov I-16. The German design was newer in its design cycle and had more room for development, and the lessons learned led to greatly improved models in World War II. The Russians failed to keep up: despite newer models coming into service, the I-16 remained the most common Soviet front-line fighter into 1942, by which time it was thoroughly outclassed by the improved Bf 109s. For their part, the Italians developed several monoplanes such as the Fiat G.50 Freccia but, being short on funds, were forced to continue operating obsolete Fiat CR.42 Falco biplanes.
From the early 1930s the Japanese were at war against both the Chinese Nationalists and the Russians in China, and used the experience to improve both training and aircraft, replacing biplanes with modern cantilever monoplanes and creating a cadre of exceptional pilots. In the United Kingdom, at the behest of Neville Chamberlain (more famous for his 'peace in our time' speech), the entire British aviation industry was retooled, allowing it to change quickly from fabric-covered, metal-framed biplanes to cantilever stressed-skin monoplanes in time for the war with Germany, a process that France attempted to emulate, but too late to counter the German invasion. The period of improving the same biplane design over and over was now coming to an end, and the Hawker Hurricane and Supermarine Spitfire started to supplant the Gloster Gladiator and Hawker Fury biplanes, though many biplanes remained in front-line service well past the start of World War II. Though not combatants in Spain, the British too absorbed many of its lessons in time to use them.
The Spanish Civil War also provided an opportunity for updating fighter tactics. One of the innovations was the development of the "finger-four" formation by the German pilot Werner Mölders. Each fighter squadron (German: Staffel) was divided into several flights (Schwärme) of four aircraft. Each Schwarm was divided into two Rotten, which was a pair of aircraft. Each Rotte was composed of a leader and a wingman. This flexible formation allowed the pilots to maintain greater situational awareness, and the two Rotten could split up at any time and attack on their own. The finger-four would be widely adopted as the fundamental tactical formation during World War II, including by the British and later the Americans.
=== 1939–1945: World War II ===
World War II featured fighter combat on a larger scale than any other conflict to date. German Field Marshal Erwin Rommel noted the effect of airpower: "Anyone who has to fight, even with the most modern weapons, against an enemy in complete command of the air, fights like a savage..." Throughout the war, fighters performed their conventional role in establishing air superiority through combat with other fighters and through bomber interception, and also often performed roles such as tactical air support and reconnaissance.
Fighter design varied widely among combatants. The Japanese and Italians favored lightly armed and armored but highly maneuverable designs such as the Japanese Nakajima Ki-27, Nakajima Ki-43 and Mitsubishi A6M Zero and the Italian Fiat G.50 Freccia and Macchi MC.200. In contrast, designers in the United Kingdom, Germany, the Soviet Union, and the United States believed that the increased speed of fighter aircraft would create g-forces unbearable to pilots who attempted maneuvering dogfights typical of the First World War, and their fighters were instead optimized for speed and firepower. In practice, while light, highly maneuverable aircraft did possess some advantages in fighter-versus-fighter combat, those could usually be overcome by sound tactical doctrine, and the design approach of the Italians and Japanese made their fighters ill-suited as interceptors or attack aircraft.
==== European theater ====
During the invasion of Poland and the Battle of France, Luftwaffe fighters—primarily the Messerschmitt Bf 109—held air superiority, and the Luftwaffe played a major role in German victories in these campaigns. During the Battle of Britain, however, British Hurricanes and Spitfires proved roughly equal to Luftwaffe fighters. Additionally, Britain's radar-based Dowding system for directing fighters onto German attacks, and the advantages of fighting above Britain's home territory, allowed the RAF to deny Germany air superiority, saving the UK from possible German invasion and dealing the Axis a major defeat early in the Second World War. On the Eastern Front, Soviet fighter forces were overwhelmed during the opening phases of Operation Barbarossa. This was a result of the tactical surprise at the outset of the campaign, the leadership vacuum within the Soviet military left by the Great Purge, and the general inferiority of Soviet designs at the time, such as the obsolescent Polikarpov I-15 biplane and the I-16. More modern Soviet designs, including the Mikoyan-Gurevich MiG-3, LaGG-3 and Yakovlev Yak-1, had not yet arrived in numbers, and in any case were still inferior to the Messerschmitt Bf 109. As a result, during the early months of these campaigns, Axis air forces destroyed large numbers of Red Air Force aircraft on the ground and in one-sided dogfights. In the later stages on the Eastern Front, Soviet training and leadership improved, as did their equipment. By 1942 Soviet designs such as the Yakovlev Yak-9 and Lavochkin La-5 had performance comparable to the German Bf 109 and Focke-Wulf Fw 190. Also, significant numbers of British, and later U.S., fighter aircraft were supplied to aid the Soviet war effort as part of Lend-Lease, with the Bell P-39 Airacobra proving particularly effective in the lower-altitude combat typical of the Eastern Front.
The Soviets were also helped indirectly by the American and British bombing campaigns, which forced the Luftwaffe to shift many of its fighters away from the Eastern Front in defense against these raids. The Soviets increasingly were able to challenge the Luftwaffe, and while the Luftwaffe maintained a qualitative edge over the Red Air Force for much of the war, the increasing numbers and efficacy of the Soviet Air Force were critical to the Red Army's efforts at turning back and eventually annihilating the Wehrmacht.
Meanwhile, air combat on the Western Front had a much different character. Much of this combat focused on the strategic bombing campaigns of the RAF and the USAAF against German industry, intended to wear down the Luftwaffe. Axis fighter aircraft focused on defending against Allied bombers, while Allied fighters' main role was as bomber escorts. The RAF raided German cities at night, and both sides developed radar-equipped night fighters for these battles. The Americans, in contrast, flew daylight bombing raids into Germany as part of the Combined Bomber Offensive. Unescorted Consolidated B-24 Liberator and Boeing B-17 Flying Fortress bombers, however, proved unable to fend off German interceptors (primarily Bf 109s and Fw 190s). With the later arrival of long-range fighters, particularly the North American P-51 Mustang, American fighters were able to escort the bombers far into Germany on daylight raids and, by ranging ahead, wore down the Luftwaffe enough to establish control of the skies over Western Europe.
By the time of Operation Overlord in June 1944, the Allies had gained near complete air superiority over the Western Front. This cleared the way both for intensified strategic bombing of German cities and industries, and for the tactical bombing of battlefield targets. With the Luftwaffe largely cleared from the skies, Allied fighters increasingly served as ground attack aircraft.
Allied fighters, by gaining air superiority over the European battlefield, played a crucial role in the eventual defeat of the Axis, which Reichsmarschall Hermann Göring, commander of the German Luftwaffe, summed up when he said: "When I saw Mustangs over Berlin, I knew the jig was up."
==== Pacific theater ====
Major air combat during the war in the Pacific began with the entry of the Western Allies following Japan's attack against Pearl Harbor. The Imperial Japanese Navy Air Service primarily operated the Mitsubishi A6M Zero, and the Imperial Japanese Army Air Service flew the Nakajima Ki-27 and the Nakajima Ki-43, initially enjoying great success, as these fighters generally had better range, maneuverability, speed and climb rates than their Allied counterparts. Additionally, Japanese pilots were well trained and many were combat veterans of Japan's campaigns in China. They quickly gained air superiority over the Allies, who at this stage of the war were often disorganized, under-trained and poorly equipped, and Japanese air power contributed significantly to their successes in the Philippines, Malaya and Singapore, the Dutch East Indies and Burma.
By mid-1942, the Allies began to regroup and while some Allied aircraft such as the Brewster Buffalo and the P-39 Airacobra were hopelessly outclassed by fighters like Japan's Mitsubishi A6M Zero, others such as the Army's Curtiss P-40 Warhawk and the Navy's Grumman F4F Wildcat possessed attributes such as superior firepower, ruggedness and dive speed, and the Allies soon developed tactics (such as the Thach Weave) to take advantage of these strengths. These changes soon paid dividends, as the Allied ability to deny Japan air superiority was critical to their victories at Coral Sea, Midway, Guadalcanal and New Guinea. In China, the Flying Tigers also used the same tactics with some success, although they were unable to stem the tide of Japanese advances there.
By 1943, the Allies began to gain the upper hand in the air war of the Pacific Campaign. Several factors contributed to this shift. First, the Lockheed P-38 Lightning and second-generation Allied fighters such as the Grumman F6F Hellcat and later the Vought F4U Corsair, the Republic P-47 Thunderbolt and the North American P-51 Mustang, began arriving in numbers. These fighters outperformed Japanese fighters in all respects except maneuverability. Other problems with Japan's fighter aircraft also became apparent as the war progressed, such as their lack of armor and light armament, which had been typical of all pre-war fighters worldwide, but the problem was particularly difficult to rectify on the Japanese designs. This made them inadequate as either bomber-interceptors or ground-attack aircraft, roles Allied fighters were still able to fill. Most importantly, Japan's training program failed to provide enough well-trained pilots to replace losses. In contrast, the Allies improved both the quantity and quality of pilots graduating from their training programs. By mid-1944, Allied fighters had gained air superiority throughout the theater, which would not be contested again during the war. The extent of Allied quantitative and qualitative superiority by this point in the war was demonstrated during the Battle of the Philippine Sea, a lopsided Allied victory in which Japanese fliers were shot down in such numbers and with such ease that American fighter pilots likened it to a great 'turkey shoot'. Late in the war, Japan began to produce new fighters such as the Nakajima Ki-84 and the Kawanishi N1K to replace the Zero, but only in small numbers, and by then Japan lacked the trained pilots or sufficient fuel to mount an effective challenge to Allied attacks. During the closing stages of the war, Japan's fighter arm could not seriously challenge raids over Japan by American Boeing B-29 Superfortresses, and was largely reduced to Kamikaze attacks.
==== Technological innovations ====
Fighter technology advanced rapidly during the Second World War. Piston engines, which powered the vast majority of World War II fighters, grew more powerful: at the beginning of the war fighters typically had engines producing between 1,000 hp (750 kW) and 1,400 hp (1,000 kW), while by the end of the war many could produce over 2,000 hp (1,500 kW). For example, the Spitfire, one of the few fighters in continuous production throughout the war, was in 1939 powered by a 1,030 hp (770 kW) Merlin II, while variants produced in 1945 were equipped with the 2,035 hp (1,517 kW) Rolls-Royce Griffon 61. Nevertheless, these fighters could only achieve modest increases in top speed due to problems of compressibility created as aircraft and their propellers approached the sound barrier, and it was apparent that propeller-driven aircraft were approaching the limits of their performance. German jet and rocket-powered fighters entered combat in 1944, too late to impact the war's outcome. The same year the Allies' only operational jet fighter, the Gloster Meteor, also entered service. World War II fighters also increasingly featured monocoque construction, which improved their aerodynamic efficiency while adding structural strength. Laminar flow wings, which improved high speed performance, also came into use on fighters such as the P-51 Mustang, while the Messerschmitt Me 262 and the Messerschmitt Me 163 featured swept wings that dramatically reduced drag at high subsonic speeds. Armament also advanced during the war. The rifle-caliber machine guns that were common on prewar fighters could not easily down the more rugged warplanes of the era. Air forces began to replace or supplement them with cannons, which fired explosive shells that could blast a hole in an enemy aircraft – rather than relying on kinetic energy from a solid bullet striking a critical component of the aircraft, such as a fuel line or control cable, or the pilot.
Cannons could bring down even heavy bombers with just a few hits, but their slower rate of fire made it difficult to hit fast-moving fighters in a dogfight. Eventually, most fighters mounted cannons, sometimes in combination with machine guns. The British epitomized this shift. Their standard early war fighters mounted eight .303 in (7.7 mm) caliber machine guns, but by mid-war they often featured a combination of machine guns and 20 mm (0.79 in) cannons, and late in the war often only cannons. The Americans, in contrast, had problems producing a cannon design, so instead placed multiple .50 in (12.7 mm) heavy machine guns on their fighters. Fighters were also increasingly fitted with bomb racks and air-to-surface ordnance such as bombs or rockets beneath their wings, and pressed into close air support roles as fighter-bombers. Although they carried less ordnance than light and medium bombers, and generally had a shorter range, they were cheaper to produce and maintain and their maneuverability made it easier for them to hit moving targets such as motorized vehicles. Moreover, if they encountered enemy fighters, their ordnance (which reduced lift and increased drag and therefore decreased performance) could be jettisoned and they could engage enemy fighters, which eliminated the need for fighter escorts that bombers required.
Heavily armed fighters such as Germany's Focke-Wulf Fw 190, Britain's Hawker Typhoon and Hawker Tempest, and America's Curtiss P-40, F4U Corsair, P-47 Thunderbolt and P-38 Lightning all excelled as fighter-bombers, and since the Second World War ground attack has become an important secondary capability of many fighters.
World War II also saw the first use of airborne radar on fighters. The primary purpose of these radars was to help night fighters locate enemy bombers and fighters. Because of the bulkiness of these radar sets, they could not be carried on conventional single-engined fighters and instead were typically retrofitted to larger heavy fighters or light bombers such as Germany's Messerschmitt Bf 110 and Junkers Ju 88, Britain's de Havilland Mosquito and Bristol Beaufighter, and America's Douglas A-20, which then served as night fighters. The Northrop P-61 Black Widow, a purpose-built night fighter, was the only fighter of the war that incorporated radar into its original design. Britain and America cooperated closely in the development of airborne radar, and Germany's radar technology generally lagged slightly behind Anglo-American efforts, while other combatants developed few radar-equipped fighters.
The Schräge Musik concept originated with German engineer Bernhard J. Schrage in 1943 as a response to the increasing threat posed by Allied heavy bombers, particularly at night. The system involved mounting upward-firing cannons, typically twin 20 mm or 30 mm guns, in the belly of German night fighters such as the Messerschmitt Bf 110 and later versions of the Junkers Ju 88. These guns were angled upwards to target the vulnerable underside of enemy bombers.
=== 1946–present: Post–World War II period ===
Several prototype fighter programs begun early in 1945 continued on after the war and led to advanced piston-engine fighters that entered production and operational service in 1946. A typical example is the Lavochkin La-9 'Fritz', which was an evolution of the successful wartime Lavochkin La-7 'Fin'. Working through a series of prototypes, the La-120, La-126 and La-130, the Lavochkin design bureau sought to replace the La-7's wooden airframe with a metal one, as well as fit a laminar flow wing to improve maneuvering performance, and increase armament. The La-9 entered service in August 1946 and was produced until 1948; it also served as the basis for the development of a long-range escort fighter, the La-11 'Fang', of which nearly 1,200 were produced between 1947 and 1951. Over the course of the Korean War, however, it became obvious that the day of the piston-engined fighter was coming to a close and that the future would lie with the jet fighter.
This period also witnessed experimentation with jet-assisted piston engine aircraft. La-9 derivatives included examples fitted with two underwing auxiliary pulsejet engines (the La-9RD) and a similarly mounted pair of auxiliary ramjet engines (the La-138); however, neither of these entered service. One that did enter service – with the U.S. Navy in March 1945 – was the Ryan FR-1 Fireball; production was halted with the war's end on VJ-Day, with only 66 having been delivered, and the type was withdrawn from service in 1947. The USAAF had ordered its first 13 mixed turboprop-turbojet-powered pre-production prototypes of the Consolidated Vultee XP-81 fighter, but this program was also canceled by VJ Day, with 80% of the engineering work completed.
== Rocket-powered fighters ==
The first rocket-powered aircraft was the Lippisch Ente, which made a successful maiden flight in March 1928. The only pure rocket aircraft ever mass-produced was the Messerschmitt Me 163B Komet in 1944, one of several German World War II projects aimed at developing high speed, point-defense aircraft. Later variants of the Me 262 (C-1a and C-2b) were also fitted with "mixed-power" jet/rocket powerplants, while earlier models were fitted with rocket boosters, but were not mass-produced with these modifications.
The USSR experimented with a rocket-powered interceptor in the years immediately following World War II, the Mikoyan-Gurevich I-270. Only two were built.
In the 1950s, the British developed mixed-power jet designs employing both rocket and jet engines to cover the performance gap that existed in turbojet designs. The rocket was the main engine for delivering the speed and height required for high-speed interception of high-level bombers and the turbojet gave increased fuel economy in other parts of flight, most notably to ensure the aircraft was able to make a powered landing rather than risking an unpredictable gliding return.
The Saunders-Roe SR.53 was a successful design and was planned for production when economics forced the British to curtail most aircraft programs in the late 1950s. Furthermore, rapid advancements in jet engine technology rendered mixed-power aircraft designs like Saunders-Roe's SR.53 (and the following SR.177) obsolete. The American Republic XF-91 Thunderceptor – the first U.S. fighter to exceed Mach 1 in level flight – met a similar fate for the same reason, and no hybrid rocket-and-jet-engine fighter design has ever been placed into service.
The only operational implementation of mixed propulsion was rocket-assisted take-off (RATO), a system rarely used in fighters. One application was the zero-length launch scheme, in which aircraft took off from special launch platforms using RATO boosters; it was tested by both the United States and the Soviet Union, and was made obsolete by advancements in surface-to-air missile technology.
== Jet-powered fighters ==
It has become common in the aviation community to classify jet fighters by "generations" for historical purposes. No official definitions of these generations exist; rather, they represent the notion of stages in the development of fighter-design approaches, performance capabilities, and technological evolution. Different authors have grouped jet fighters into different generations. For example, Richard P. Hallion of the Secretary of the US Air Force's Action Group classified the F-16 as a sixth-generation jet fighter.
The timeframes associated with each generation remain inexact and are only indicative of the period during which their design philosophies and technology employment enjoyed a prevailing influence on fighter design and development. These timeframes also encompass the peak period of service entry for such aircraft.
=== 1940s–1950s: First-generation ===
The first generation of jet fighters comprised the initial, subsonic jet-fighter designs introduced late in World War II (1939–1945) and in the early post-war period. They differed little from their piston-engined counterparts in appearance, and many employed unswept wings. Guns and cannon remained the principal armament. The need to obtain a decisive advantage in maximum speed pushed the development of turbojet-powered aircraft forward. Top speeds for fighters rose steadily throughout World War II as more powerful piston engines were developed, and they approached transonic flight-speeds where the efficiency of propellers drops off, making further speed increases nearly impossible.
The first jets developed during World War II and saw combat in the last two years of the war. Messerschmitt developed the first operational jet fighter, the Me 262A, primarily serving with the Luftwaffe's JG 7, the world's first jet-fighter wing. It was considerably faster than contemporary piston-driven aircraft, and in the hands of a competent pilot, proved quite difficult for Allied pilots to defeat. The Luftwaffe never deployed the design in numbers sufficient to stop the Allied air campaign, and a combination of fuel shortages, pilot losses, and technical difficulties with the engines kept the number of sorties low. Nevertheless, the Me 262 indicated the obsolescence of piston-driven aircraft. Spurred by reports of the German jets, Britain's Gloster Meteor entered production soon after, and the two entered service around the same time in 1944. Meteors commonly served to intercept the V-1 flying bomb, as they were faster than available piston-engined fighters at the low altitudes used by the flying bombs. Nearer the end of World War II came the first military jet-powered light-fighter design, the Heinkel He 162A Spatz (sparrow), which the Luftwaffe intended to serve as a simple jet fighter for German home defense; a few examples saw squadron service with JG 1 by April 1945. By the end of the war almost all work on piston-powered fighters had ended. A few designs combining piston- and jet-engines for propulsion – such as the Ryan FR Fireball – saw brief use, but by the end of the 1940s virtually all new fighters were jet-powered.
Despite their advantages, the early jet fighters were far from perfect. The operational lifespan of early turbines was very short and the engines were temperamental, while power could be adjusted only slowly and acceleration was poor (even if top speed was higher) compared to the final generation of piston fighters. Many squadrons of piston-engined fighters remained in service until the early to mid-1950s, even in the air forces of the major powers (though the types retained were the best of the World War II designs). Innovations including ejection seats, air brakes and all-moving tailplanes became widespread in this period.
The Americans began using jet fighters operationally after World War II, the wartime Bell P-59 having proven a failure. The Lockheed P-80 Shooting Star (soon re-designated F-80) was more prone to wave drag than the swept-wing Me 262, but had a cruise speed (660 km/h (410 mph)) as high as the maximum speed attainable by many piston-engined fighters. The British designed several new jets, including the distinctive single-engined twin boom de Havilland Vampire which Britain sold to the air forces of many nations.
The British transferred the technology of the Rolls-Royce Nene jet-engine to the Soviets, who soon put it to use in their advanced Mikoyan-Gurevich MiG-15 fighter, which used fully swept wings that allowed flying closer to the speed of sound than straight-winged designs such as the F-80. The MiG-15s' top speed of 1,075 km/h (668 mph) proved quite a shock to the American F-80 pilots who encountered them in the Korean War, along with their armament of two 23 mm (0.91 in) cannons and a single 37 mm (1.5 in) cannon. Nevertheless, in the first jet-versus-jet dogfight, which occurred during the Korean War on 8 November 1950, an F-80 shot down two North Korean MiG-15s.
The Americans responded by rushing their own swept-wing fighter – the North American F-86 Sabre – into battle against the MiGs, which had similar transonic performance. The two aircraft had different strengths and weaknesses, but were similar enough that victory could go either way. While the Sabres focused primarily on downing MiGs and scored favorably against those flown by the poorly-trained North Koreans, the MiGs in turn decimated US bomber formations and forced the withdrawal of numerous American types from operational service.
The world's navies also transitioned to jets during this period, despite the need for catapult-launching of the new aircraft. The U.S. Navy adopted the Grumman F9F Panther as their primary jet fighter in the Korean War period, and it was one of the first jet fighters to employ an afterburner. The de Havilland Sea Vampire became the Royal Navy's first jet fighter. Radar was used on specialized night-fighters such as the Douglas F3D Skyknight, which also downed MiGs over Korea, and later fitted to the McDonnell F2H Banshee and swept-wing Vought F7U Cutlass and McDonnell F3H Demon as all-weather / night fighters. Early versions of infrared (IR) air-to-air missiles (AAMs) such as the AIM-9 Sidewinder and radar-guided missiles such as the AIM-7 Sparrow, whose descendants remain in use as of 2021, were first introduced on the swept-wing subsonic Demon and Cutlass naval fighters.
=== 1950s–1960s: Second-generation ===
Technological breakthroughs, lessons learned from the aerial battles of the Korean War, and a focus on conducting operations in a nuclear warfare environment shaped the development of second-generation fighters. Technological advances in aerodynamics, propulsion and aerospace building-materials (primarily aluminum alloys) permitted designers to experiment with aeronautical innovations such as swept wings, delta wings, and area-ruled fuselages. Widespread use of afterburning turbojet engines made these the first production aircraft to break the sound barrier, and the ability to sustain supersonic speeds in level flight became a common capability amongst fighters of this generation.
Fighter designs also took advantage of new electronics technologies that made effective radars small enough to carry aboard smaller aircraft. Onboard radars permitted detection of enemy aircraft beyond visual range, thereby improving the handoff of targets by longer-ranged ground-based warning- and tracking-radars. Similarly, advances in guided-missile development allowed air-to-air missiles to begin supplementing the gun as the primary offensive weapon for the first time in fighter history. During this period, passive-homing infrared-guided (IR) missiles became commonplace, but early IR missile sensors had poor sensitivity and a very narrow field of view (typically no more than 30°), which limited their effective use to only close-range, tail-chase engagements. Radar-guided (RF) missiles were introduced as well, but early examples proved unreliable. These semi-active radar homing (SARH) missiles could track and intercept an enemy aircraft "painted" by the launching aircraft's onboard radar. Medium- and long-range RF air-to-air missiles promised to open up a new dimension of "beyond-visual-range" (BVR) combat, and much effort concentrated on further development of this technology.
The prospect of a potential third world war featuring large mechanized armies and nuclear-weapon strikes led to a degree of specialization along two design approaches: interceptors, such as the English Electric Lightning and Mikoyan-Gurevich MiG-21F; and fighter-bombers, such as the Republic F-105 Thunderchief and the Sukhoi Su-7B. Dogfighting, per se, became de-emphasized in both cases. The interceptor was an outgrowth of the vision that guided missiles would completely replace guns and combat would take place at beyond-visual ranges. As a result, strategists designed interceptors with a large missile-payload and a powerful radar, sacrificing agility in favor of high speed, altitude ceiling and rate of climb. With a primary air-defense role, emphasis was placed on the ability to intercept strategic bombers flying at high altitudes. Specialized point-defense interceptors often had limited range and few, if any, ground-attack capabilities. Fighter-bombers could swing between air-superiority and ground-attack roles, and were often designed for a high-speed, low-altitude dash to deliver their ordnance. Television- and IR-guided air-to-surface missiles were introduced to augment traditional gravity bombs, and some were also equipped to deliver a nuclear bomb.
=== 1960s–1970s: Third-generation jet fighters ===
The third generation witnessed continued maturation of second-generation innovations, but it is most marked by renewed emphases on maneuverability and on traditional ground-attack capabilities. Over the course of the 1960s, increasing combat experience with guided missiles demonstrated that combat would devolve into close-in dogfights. Analog avionics began to appear, replacing older "steam-gauge" cockpit instrumentation. Enhancements to the aerodynamic performance of third-generation fighters included flight control surfaces such as canards, powered slats, and blown flaps. A number of technologies were tried for vertical/short takeoff and landing, but only thrust vectoring, as used on the Harrier, would prove successful.
Growth in air-combat capability focused on the introduction of improved air-to-air missiles, radar systems, and other avionics. While guns remained standard equipment (early models of the F-4 being a notable exception), air-to-air missiles became the primary weapons for air-superiority fighters, which employed more sophisticated radars and medium-range RF AAMs to achieve greater "stand-off" ranges. However, kill probabilities proved unexpectedly low for RF missiles due to poor reliability and improved electronic countermeasures (ECM) for spoofing radar seekers. Infrared-homing AAMs saw their fields of view expand to 45°, which strengthened their tactical usability. Nevertheless, the low dogfight loss-exchange ratios experienced by American fighters in the skies over Vietnam led the U.S. Navy to establish its famous "TOPGUN" fighter-weapons school, which provided a graduate-level curriculum to train fleet fighter-pilots in advanced Air Combat Maneuvering (ACM) and Dissimilar Air Combat Training (DACT) tactics and techniques.
This era also saw an expansion in ground-attack capabilities, principally in guided missiles, and witnessed the introduction of the first truly effective avionics for enhanced ground attack, including terrain-avoidance systems. Air-to-surface missiles (ASM) equipped with electro-optical (E-O) contrast seekers – such as the initial model of the widely used AGM-65 Maverick – became standard weapons, and laser-guided bombs (LGBs) became widespread in an effort to improve precision-attack capabilities. Guidance for such precision-guided munitions (PGM) was provided by externally-mounted targeting pods, which were introduced in the mid-1960s.
The third generation also led to the development of new automatic-fire weapons, primarily chain-guns that use an electric motor to drive the mechanism of a cannon. This allowed a plane to carry a single multi-barrel weapon (such as the 20 mm (0.79 in) Vulcan), and provided greater accuracy and rates of fire. Powerplant reliability increased, and jet engines became "smokeless" to make it harder to sight aircraft at long distances.
Dedicated ground-attack aircraft (like the Grumman A-6 Intruder, SEPECAT Jaguar and LTV A-7 Corsair II) offered longer range, more sophisticated night-attack systems or lower cost than supersonic fighters. With variable-geometry wings, the supersonic F-111 introduced the Pratt & Whitney TF30, the first turbofan equipped with afterburner. The ambitious project sought to create a versatile common fighter for many roles and services. It would serve well as an all-weather bomber, but lacked the performance to defeat other fighters. The McDonnell F-4 Phantom was designed to capitalize on radar and missile technology as an all-weather interceptor, but emerged as a versatile strike-bomber nimble enough to prevail in air combat, adopted by the U.S. Navy, Air Force and Marine Corps. Despite numerous shortcomings that would not be fully addressed until newer fighters, the Phantom claimed 280 aerial kills (more than any other U.S. fighter) over Vietnam. With range and payload capabilities that rivaled those of World War II bombers such as the B-24 Liberator, the Phantom would become a highly successful multirole aircraft.
=== 1970s–2000s: Fourth-generation ===
Fourth-generation fighters continued the trend towards multirole configurations, and were equipped with increasingly sophisticated avionics- and weapon-systems. Fighter designs were significantly influenced by the Energy-Maneuverability (E-M) theory developed by Colonel John Boyd and mathematician Thomas Christie, based upon Boyd's combat experience in the Korean War and as a fighter-tactics instructor during the 1960s. E-M theory emphasized the value of aircraft-specific energy maintenance as an advantage in fighter combat. Boyd perceived maneuverability as the primary means of getting "inside" an adversary's decision-making cycle, a process Boyd called the "OODA loop" (for "Observation-Orientation-Decision-Action"). This approach emphasized aircraft designs capable of performing "fast transients" – quick changes in speed, altitude, and direction – as opposed to relying chiefly on high speed alone.
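The "aircraft-specific energy" at the heart of E-M theory can be made concrete with the standard flight-mechanics formulation (a general textbook expression, not a quantity stated in this article): specific energy is total mechanical energy per unit weight, and specific excess power measures how quickly it can be changed.

```latex
% Specific energy: altitude plus kinetic energy per unit weight
E_s = h + \frac{V^2}{2g}

% Specific excess power: the rate at which E_s can change,
% governed by thrust T, drag D, speed V, and weight W
P_s = \frac{(T - D)\,V}{W} = \frac{dE_s}{dt}
```

A fighter with higher \(P_s\) at a given speed and altitude can gain or recover energy faster than its opponent, which is what "energy maintenance" means in practice.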
E-M characteristics were first applied to the McDonnell Douglas F-15 Eagle, but Boyd and his supporters believed these performance parameters called for a small, lightweight aircraft with a larger, higher-lift wing. The small size would minimize drag and increase the thrust-to-weight ratio, while the larger wing would minimize wing loading; while the reduced wing loading tends to lower top speed and can cut range, it increases payload capacity and the range reduction can be compensated for by increased fuel in the larger wing. The efforts of Boyd's "Fighter mafia" would result in the General Dynamics F-16 Fighting Falcon (now Lockheed Martin's).
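The trade-offs described above reduce to two simple ratios. As a minimal sketch, using hypothetical round numbers rather than actual aircraft specifications, wing loading is weight divided by wing area, and the thrust-to-weight ratio is thrust divided by weight:

```python
def wing_loading(weight_n: float, wing_area_m2: float) -> float:
    """Wing loading in N/m^2: aircraft weight divided by wing area.
    Lower values generally mean better instantaneous turn capability."""
    return weight_n / wing_area_m2


def thrust_to_weight(thrust_n: float, weight_n: float) -> float:
    """Dimensionless thrust-to-weight ratio; values above 1.0 allow
    acceleration in a vertical climb."""
    return thrust_n / weight_n


# Hypothetical lightweight fighter vs. a heavier design (illustrative only)
light = {"weight_n": 80_000.0, "area_m2": 28.0, "thrust_n": 105_000.0}
heavy = {"weight_n": 180_000.0, "area_m2": 56.0, "thrust_n": 210_000.0}

for name, ac in (("light", light), ("heavy", heavy)):
    wl = wing_loading(ac["weight_n"], ac["area_m2"])
    tw = thrust_to_weight(ac["thrust_n"], ac["weight_n"])
    print(f"{name}: wing loading = {wl:.0f} N/m^2, T/W = {tw:.2f}")
```

With these made-up figures, the smaller aircraft has both the lower wing loading and the higher thrust-to-weight ratio, which is exactly the combination the "Fighter mafia" argued for.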
The F-16's maneuverability was further enhanced by its slight aerodynamic instability. This technique, called "relaxed static stability" (RSS), was made possible by introduction of the "fly-by-wire" (FBW) flight-control system (FLCS), which in turn was enabled by advances in computers and in system-integration techniques. Analog avionics, needed to implement FBW operations, became a fundamental part of the design, but began to be replaced by digital flight-control systems in the latter half of the 1980s. Likewise, Full Authority Digital Engine Controls (FADEC) to electronically manage powerplant performance were introduced with the Pratt & Whitney F100 turbofan. The F-16's sole reliance on electronics and wires to relay flight commands, instead of the usual cables and mechanical linkage controls, earned it the sobriquet of "the electric jet". Electronic FLCS and FADEC quickly became essential components of all subsequent fighter designs.
Other innovative technologies introduced in fourth-generation fighters included pulse-Doppler fire-control radars (providing a "look-down/shoot-down" capability), head-up displays (HUD), "hands on throttle-and-stick" (HOTAS) controls, and multi-function displays (MFD), all essential equipment as of 2019. Aircraft designers began to incorporate composite materials in the form of bonded-aluminum honeycomb structural elements and graphite epoxy laminate skins to reduce weight. Infrared search-and-track (IRST) sensors became widespread for air-to-ground weapons delivery, and appeared for air-to-air combat as well. "All-aspect" IR AAM became standard air superiority weapons, which permitted engagement of enemy aircraft from any angle (although the field of view remained relatively limited). The first long-range active-radar-homing RF AAM entered service with the AIM-54 Phoenix, which solely equipped the Grumman F-14 Tomcat, one of the few variable-sweep-wing fighter designs to enter production. Even with the tremendous advancement of air-to-air missiles in this era, internal guns were standard equipment.
Another revolution came in the form of a stronger reliance on ease of maintenance, which led to standardization of parts, reductions in the numbers of access panels and lubrication points, and overall parts reduction in more complicated equipment like the engines. Some early jet fighters required 50 man-hours of work by a ground crew for every hour the aircraft was in the air; later models substantially reduced this to allow faster turn-around times and more sorties in a day. Some modern military aircraft require only 10 man-hours of work per hour of flight time, and others are even more efficient.
Aerodynamic innovations included variable-camber wings and exploitation of the vortex lift effect to achieve higher angles of attack through the addition of leading-edge extension devices such as strakes.
Unlike interceptors of the previous eras, most fourth-generation air-superiority fighters were designed to be agile dogfighters (although the Mikoyan MiG-31 and Panavia Tornado ADV are notable exceptions). The continually rising cost of fighters, however, continued to emphasize the value of multirole fighters. The need for both types of fighters led to the "high/low mix" concept, which envisioned a high-capability and high-cost core of dedicated air-superiority fighters (like the F-15 and Su-27) supplemented by a larger contingent of lower-cost multi-role fighters (such as the F-16 and MiG-29).
Most fourth-generation fighters, such as the McDonnell Douglas F/A-18 Hornet, HAL Tejas, JF-17 and Dassault Mirage 2000, are true multirole warplanes, designed as such from the start. This was facilitated by multimode avionics that could switch seamlessly between air and ground modes. The earlier approaches of adding on strike capabilities or designing separate models specialized for different roles generally became passé (with the Panavia Tornado being an exception in this regard). Attack roles were generally assigned to dedicated ground-attack aircraft such as the Sukhoi Su-25 and the A-10 Thunderbolt II.
A typical US Air Force fighter wing of the period might contain a mix of one air superiority squadron (F-15C), one strike fighter squadron (F-15E), and two multirole fighter squadrons (F-16C). Perhaps the most novel technology introduced for combat aircraft was stealth, which involves the use of special "low-observable" (L-O) materials and design techniques to reduce the susceptibility of an aircraft to detection by the enemy's sensor systems, particularly radars. The first stealth aircraft introduced were the Lockheed F-117 Nighthawk attack aircraft (introduced in 1983) and the Northrop Grumman B-2 Spirit bomber (first flew in 1989). Although no stealthy fighters per se appeared among the fourth generation, some radar-absorbent coatings and other L-O treatments developed for these programs are reported to have been subsequently applied to fourth-generation fighters.
==== 1990s–2000s: 4.5-generation ====
The end of the Cold War in 1991 led many governments to significantly decrease military spending as a "peace dividend". Air force inventories were cut. Research and development programs working on "fifth-generation" fighters took serious hits. Many programs were canceled during the first half of the 1990s, and those that survived were "stretched out". While the practice of slowing the pace of development reduces annual investment expenses, it comes at the penalty of increased overall program and unit costs over the long term. In this instance, however, it also permitted designers to make use of the tremendous achievements being made in the fields of computers, avionics and other flight electronics, which had become possible largely due to the advances made in microchip and semiconductor technologies in the 1980s and 1990s. This opportunity enabled designers to develop fourth-generation designs – or redesigns – with significantly enhanced capabilities. These improved designs have become known as "Generation 4.5" fighters, recognizing their intermediate nature between the 4th and 5th generations, and their contribution in furthering development of individual fifth-generation technologies.
The primary characteristics of this sub-generation are the application of advanced digital avionics and aerospace materials, modest signature reduction (primarily RF "stealth"), and highly integrated systems and weapons. These fighters have been designed to operate in a "network-centric" battlefield environment and are principally multirole aircraft. Key weapons technologies introduced include beyond-visual-range (BVR) AAMs; Global Positioning System (GPS)–guided weapons, solid-state phased-array radars; helmet-mounted sights; and improved secure, jamming-resistant datalinks. Thrust vectoring to further improve transient maneuvering capabilities has also been adopted by many 4.5th generation fighters, and uprated powerplants have enabled some designs to achieve a degree of "supercruise" ability. Stealth characteristics are focused primarily on frontal-aspect radar cross section (RCS) signature-reduction techniques including radar-absorbent materials (RAM), L-O coatings and limited shaping techniques.
"Half-generation" designs are either based on existing airframes or are based on new airframes following similar design theory to previous iterations; however, these modifications have introduced the structural use of composite materials to reduce weight, greater fuel fractions to increase range, and signature reduction treatments to achieve lower RCS compared to their predecessors. Prime examples of such aircraft, which are based on new airframe designs making extensive use of carbon-fiber composites, include the Eurofighter Typhoon, Dassault Rafale, Saab JAS 39 Gripen, JF-17 Thunder, and HAL Tejas Mark 1A.
Apart from these fighter jets, most of the 4.5 generation aircraft are actually modified variants of existing airframes from the earlier fourth generation fighter jets. Such fighter jets are generally heavier and examples include the Boeing F/A-18E/F Super Hornet, which is an evolution of the F/A-18 Hornet, the F-15E Strike Eagle, which is a ground-attack/multi-role variant of the F-15 Eagle, the Su-30SM and Su-35S modified variants of the Sukhoi Su-27, and the MiG-35 upgraded version of the Mikoyan MiG-29. The Su-30SM/Su-35S and MiG-35 feature thrust vectoring engine nozzles to enhance maneuvering. The upgraded version of F-16 is also considered a member of the 4.5 generation aircraft.
Generation 4.5 fighters first entered service in the early 1990s, and most of them are still being produced and evolved. It is quite possible that they may continue in production alongside fifth-generation fighters due to the expense of developing the advanced level of stealth technology needed to achieve aircraft designs featuring very low observables (VLO), which is one of the defining features of fifth-generation fighters. Of the 4.5th generation designs, the Strike Eagle, Super Hornet, Typhoon, Gripen, and Rafale have been used in combat.
The U.S. government has defined 4.5 generation fighter aircraft as those that "(1) have advanced capabilities, including— (A) AESA radar; (B) high capacity data-link; and (C) enhanced avionics; and (2) have the ability to deploy current and reasonably foreseeable advanced armaments."
=== 2000s–2020s: Fifth-generation ===
Currently the cutting edge of fighter design, fifth-generation fighters are characterized by being designed from the start to operate in a network-centric combat environment, and to feature extremely low, all-aspect, multi-spectral signatures employing advanced materials and shaping techniques. They have multifunction AESA radars with high-bandwidth, low-probability-of-intercept (LPI) data transmission capabilities. The infra-red search and track sensors incorporated for air-to-air combat as well as for air-to-ground weapons delivery in the 4.5th-generation fighters are now fused in with other sensors for Situational Awareness IRST, or SAIRST, which constantly tracks all targets of interest around the aircraft, sparing the pilot from having to guess where to look. These sensors, along with advanced avionics, glass cockpits, helmet-mounted sights (not currently on the F-22), and improved secure, jamming-resistant LPI datalinks are highly integrated to provide multi-platform, multi-sensor data fusion for vastly improved situational awareness while easing the pilot's workload. Avionics suites rely on extensive use of very high-speed integrated circuit (VHSIC) technology, common modules, and high-speed data buses. Overall, the integration of all these elements is claimed to provide fifth-generation fighters with a "first-look, first-shot, first-kill capability".
A key attribute of fifth-generation fighters is a small radar cross-section (RCS). Great care is taken in designing the aircraft's layout and internal structure to minimize RCS over a broad bandwidth of detection and tracking radar frequencies; furthermore, to maintain the VLO signature during combat operations, primary weapons are carried in internal weapon bays that are only briefly opened to permit weapon launch. Stealth technology has also advanced to the point where it can be employed without a tradeoff in aerodynamic performance, in contrast to previous stealth efforts. Some attention has also been paid to reducing IR signatures, especially on the F-22. Detailed information on these signature-reduction techniques is classified, but in general they include special shaping approaches, thermoset and thermoplastic materials, extensive structural use of advanced composites, conformal sensors, heat-resistant coatings, low-observable wire meshes to cover intake and cooling vents, heat-ablating tiles on the exhaust troughs (seen on the Northrop YF-23), and coating internal and external metal areas with radar-absorbent materials and paint (RAM/RAP).
The AESA radar offers unique capabilities for fighters (and it is also quickly becoming essential for Generation 4.5 aircraft designs, as well as being retrofitted onto some fourth-generation aircraft). In addition to its high resistance to ECM and its LPI features, it enables the fighter to function as a sort of "mini-AWACS", providing high-gain electronic support measures (ESM) and electronic warfare (EW) jamming functions. Other technologies common to this latest generation of fighters include integrated electronic warfare system (INEWS) technology; integrated communications, navigation, and identification (CNI) avionics technology; centralized "vehicle health monitoring" systems for ease of maintenance; fiber-optic data transmission; stealth technology; and even hovering capabilities. Maneuver performance remains important and is enhanced by thrust-vectoring, which also helps reduce takeoff and landing distances. Supercruise may or may not be featured; it permits flight at supersonic speeds without the use of the afterburner – a device that significantly increases IR signature when used in full military power.
Such aircraft are sophisticated and expensive. The fifth generation was ushered in by the Lockheed Martin/Boeing F-22 Raptor in late 2005. The U.S. Air Force originally planned to acquire 650 F-22s, but now only 187 will be built. As a result, its unit flyaway cost (FAC) is around US$150 million. To spread the development costs – and production base – more broadly, the Joint Strike Fighter (JSF) program enrolls eight other countries as cost- and risk-sharing partners. Altogether, the nine partner nations anticipate procuring over 3,000 Lockheed Martin F-35 Lightning II fighters at an anticipated average FAC of $80–85 million. The F-35, however, is designed to be a family of three aircraft, a conventional take-off and landing (CTOL) fighter, a short take-off and vertical landing (STOVL) fighter, and a Catapult Assisted Take Off But Arrested Recovery (CATOBAR) fighter, each of which has a different unit price and slightly varying specifications in terms of fuel capacity (and therefore range), size and payload.
Other countries have initiated fifth-generation fighter development projects. In December 2010, it emerged that China was developing the fifth-generation Chengdu J-20, which took its maiden flight in January 2011. The Shenyang FC-31, which first flew on 31 October 2012, has since been developed into the carrier-based J-35 for Chinese aircraft carriers. In Russia, United Aircraft Corporation has pursued the Mikoyan LMFS and Sukhoi Su-75 Checkmate projects, while the Sukhoi Su-57 became the first fifth-generation fighter to enter service with the Russian Aerospace Forces, in 2020, and launched missiles in the Russo-Ukrainian War in 2022. Japan is exploring the technical feasibility of producing fifth-generation fighters. India is developing the Advanced Medium Combat Aircraft (AMCA), a medium-weight stealth fighter expected to enter serial production by the late 2030s. India had also initiated a joint fifth-generation heavy fighter with Russia, called the FGFA; as of May 2018, the project was reported not to have yielded the desired progress or results for India and has been put on hold or dropped altogether. Other countries considering fielding an indigenous or semi-indigenous advanced fifth-generation aircraft include South Korea, Sweden, Turkey and Pakistan.
=== 2020s–present: Sixth-generation ===
As of November 2018, France, Germany, China, Japan, Russia, Italy, the United Kingdom and the United States have announced the development of a sixth-generation aircraft program.
France and Germany will develop a joint sixth-generation fighter to replace their current fleet of Dassault Rafales, Eurofighter Typhoons, and Panavia Tornados by 2035. The overall development will be led by a collaboration of Dassault and Airbus, while the engines will reportedly be jointly developed by Safran and MTU Aero Engines. Thales and MBDA are also seeking a stake in the project. Spain officially joined the Franco-German project to develop a Next-Generation Fighter (NGF) that will form part of a broader Future Combat Air System (FCAS) with the signing of a letter of intent (LOI) on February 14, 2019.
The first sixth-generation jet fighter, currently at the concept stage, is expected to enter service with the United States Navy in the 2025–30 period. The USAF seeks a new fighter for the 2030–50 period named the "Next Generation Tactical Aircraft" ("Next Gen TACAIR"). The US Navy looks to replace its F/A-18E/F Super Hornets beginning in 2025 with the Next Generation Air Dominance air superiority fighter.
The United Kingdom's proposed stealth fighter is being developed along with Japan and Italy in Team Tempest, consisting of BAE Systems, Rolls-Royce, Leonardo S.p.A. and MBDA. The aircraft is intended to enter service in 2035.
Saudi Arabia is also looking to get involved in Team Tempest.
== Weapons ==
Fighters were typically armed only with guns for air-to-air combat up through the late 1950s, though unguided rockets – mostly for air-to-ground use, with limited air-to-air use – were deployed in WWII. From the late 1950s onward, guided missiles came into use for air-to-air combat. Throughout this history, fighters that attained a good firing position by surprise or maneuver have achieved a kill roughly one third to one half of the time, regardless of the weapons carried. The only major historical exception has been the low effectiveness shown by guided missiles in the first one to two decades of their existence.
From WWI to the present, fighter aircraft have featured machine guns and automatic cannons as weapons, and these are still considered essential back-up weapons today. The power of air-to-air guns has increased greatly over time, which has kept them relevant in the guided-missile era. In WWI, two rifle-caliber (approximately 0.30 in) machine guns were the typical armament, producing a weight of fire of about 0.4 kg (0.88 lb) per second. In WWII, rifle-caliber machine guns remained common, though usually in larger numbers or supplemented with much heavier 0.50-caliber machine guns or cannons. The standard WWII American fighter armament of six 0.50-cal (12.7 mm) machine guns fired a bullet weight of approximately 3.7 kg/s (8.1 lb/s), at a muzzle velocity of 856 m/s (2,810 ft/s). British and German aircraft tended to use a mix of machine guns and autocannons, the latter firing explosive projectiles. Later British fighters were exclusively cannon-armed; the US was unable to produce a reliable cannon in large numbers, and most American fighters remained equipped only with heavy machine guns despite the US Navy pressing for a change to 20 mm.
Post war, 20–30 mm revolver cannons and rotary cannons were introduced. The modern M61 Vulcan 20 mm rotary cannon that is standard on current American fighters fires a projectile weight of about 10 kg/s (22 lb/s), nearly three times that of six 0.50-cal machine guns, with a higher muzzle velocity of 1,052 m/s (3,450 ft/s) supporting a flatter trajectory, and with exploding projectiles. Modern fighter gun systems also feature ranging radar and lead-computing electronic gun sights to ease the aiming problem, compensating for projectile drop and time of flight (target lead) in the complex three-dimensional maneuvering of air-to-air combat. However, getting into position to use the guns is still a challenge. The range of guns is longer than in the past but still quite limited compared to missiles, with modern gun systems having a maximum effective range of approximately 1,000 meters. A high probability of kill also usually requires firing from the rear hemisphere of the target. Despite these limits, when pilots are well trained in air-to-air gunnery and these conditions are satisfied, gun systems are tactically effective and highly cost-efficient. The cost of a gun firing pass is far less than that of firing a missile, and the projectiles are not subject to the thermal and electronic countermeasures that can sometimes defeat missiles. When the enemy can be approached to within gun range, the lethality of guns is approximately a 25% to 50% chance of kill per firing pass.
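The weight-of-fire figures above follow from simple arithmetic: guns, rate of fire, and projectile mass. The sketch below illustrates this; the rates of fire and projectile masses are approximate figures assumed for illustration, not authoritative specifications.

```python
# Weight of fire = number of guns × (rounds per minute / 60) × projectile mass.
# Rates and projectile masses are approximate, assumed for illustration only.

def weight_of_fire(guns: int, rpm_per_gun: float, projectile_kg: float) -> float:
    """Return total projectile mass delivered per second, in kg/s."""
    return guns * (rpm_per_gun / 60.0) * projectile_kg

# Six .50-cal (12.7 mm) machine guns: ~800 rounds/min each, ~46 g bullet
wwii_battery = weight_of_fire(guns=6, rpm_per_gun=800, projectile_kg=0.046)

# One M61 Vulcan 20 mm rotary cannon: ~6,000 rounds/min, ~100 g projectile
m61 = weight_of_fire(guns=1, rpm_per_gun=6000, projectile_kg=0.100)

print(f"Six .50-cal: {wwii_battery:.1f} kg/s")  # ≈ 3.7 kg/s, as quoted in the text
print(f"M61 Vulcan:  {m61:.1f} kg/s")           # ≈ 10 kg/s, nearly three times as much
```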
The range limitations of guns, and the desire to overcome large variations in fighter pilot skill and thus achieve higher force effectiveness, led to the development of the guided air-to-air missile. There are two main variations, heat-seeking (infrared homing), and radar guided. Radar missiles are typically several times heavier and more expensive than heat-seekers, but with longer range, greater destructive power, and ability to track through clouds.
The highly successful AIM-9 Sidewinder heat-seeking (infrared homing) short-range missile was developed by the United States Navy in the 1950s. These small missiles are easily carried by lighter fighters and provide effective ranges of approximately 10 to 35 kilometres (6 to 20 mi). Beginning with the AIM-9L in 1977, subsequent versions of the Sidewinder have added all-aspect capability: the ability to track the lower heat of air-to-skin friction on the target aircraft from the front and sides. The latest (2003 service entry) AIM-9X also features "off-boresight" and "lock-on after launch" capabilities, which allow the pilot to quickly launch a missile at a target anywhere within the pilot's vision. AIM-9X development cost U.S. $3 billion in mid-to-late-1990s dollars, and the 2015 per-unit procurement cost was $0.6 million. The missile weighs 85.3 kg (188 lb) and has a maximum range of 35 km (22 mi) at higher altitudes. Like most air-to-air missiles, its range at lower altitude can be limited to as little as about one third of the maximum, due to higher drag and less ability to coast downward.
The effectiveness of infrared homing missiles was only 7% early in the Vietnam War, but improved to approximately 15%–40% over the course of the war. The AIM-4 Falcon used by the USAF had kill rates of approximately 7% and was considered a failure. The AIM-9B Sidewinder introduced later achieved 15% kill rates, and the further improved AIM-9D and J models reached 19%. The AIM-9G used in the last year of the Vietnam air war achieved 40%. Israel used almost totally guns in the 1967 Six-Day War, achieving 60 kills and 10 losses. However, Israel made much more use of steadily improving heat-seeking missiles in the 1973 Yom Kippur War. In this extensive conflict Israel scored 171 of 261 total kills with heat-seeking missiles (65.5%), 5 kills with radar guided missiles (1.9%), and 85 kills with guns (32.6%). The AIM-9L Sidewinder scored 19 kills out of 26 fired missiles (73%) in the 1982 Falklands War. But, in a conflict against opponents using thermal countermeasures, the United States only scored 11 kills out of 48 fired (Pk = 23%) with the follow-on AIM-9M in the 1991 Gulf War.
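The 1973 Yom Kippur War kill-share percentages quoted above can be checked directly from the raw kill counts; the sketch below reproduces that arithmetic using only the figures stated in the text.

```python
# Kill counts from the 1973 Yom Kippur War, as cited in the text.
kills = {"heat-seeking missiles": 171, "radar-guided missiles": 5, "guns": 85}
total = sum(kills.values())  # 261 total kills

for weapon, n in kills.items():
    print(f"{weapon}: {n}/{total} = {100 * n / total:.1f}%")
# heat-seeking missiles: 171/261 = 65.5%
# radar-guided missiles: 5/261 = 1.9%
# guns: 85/261 = 32.6%
```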
Radar guided missiles fall into two main missile guidance types. In the historically more common semi-active radar homing case the missile homes in on radar signals transmitted from launching aircraft and reflected from the target. This has the disadvantage that the firing aircraft must maintain radar lock on the target and is thus less free to maneuver and more vulnerable to attack. A widely deployed missile of this type was the AIM-7 Sparrow, which entered service in 1954 and was produced in improving versions until 1997. In more advanced active radar homing the missile is guided to the vicinity of the target by internal data on its projected position, and then "goes active" with an internally carried small radar system to conduct terminal guidance to the target. This eliminates the requirement for the firing aircraft to maintain radar lock, and thus greatly reduces risk. A prominent example is the AIM-120 AMRAAM, which was first fielded in 1991 as the AIM-7 replacement, and which has no firm retirement date as of 2016. The current AIM-120D version has a maximum high altitude range of greater than 160 km (100 mi), and cost approximately $2.4 million each (2016). As is typical with most other missiles, range at lower altitude may be as little as one third that of high altitude.
In the Vietnam air war radar missile kill reliability was approximately 10% at shorter ranges, and even worse at longer ranges due to reduced radar return and greater time for the target aircraft to detect the incoming missile and take evasive action. At one point in the Vietnam war, the U.S. Navy fired 50 AIM-7 Sparrow radar guided missiles in a row without a hit. Between 1958 and 1982 in five wars there were 2,014 combined heat-seeking and radar guided missile firings by fighter pilots engaged in air-to-air combat, achieving 528 kills, of which 76 were radar missile kills, for a combined effectiveness of 26%. However, only 4 of the 76 radar missile kills were in the beyond-visual-range mode intended to be the strength of radar guided missiles. The United States invested over $10 billion in air-to-air radar missile technology from the 1950s to the early 1970s. Amortized over actual kills achieved by the U.S. and its allies, each radar guided missile kill thus cost over $130 million. The defeated enemy aircraft were for the most part older MiG-17s, −19s, and −21s, with new cost of $0.3 million to $3 million each. Thus, the radar missile investment over that period far exceeded the value of enemy aircraft destroyed, and furthermore had very little of the intended BVR effectiveness.
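The aggregate effectiveness and cost-amortization figures above follow from the stated totals; the sketch below checks them, taking the "over $10 billion" investment as a lower bound for illustration.

```python
# Combined 1958–1982 missile firings and kills, as cited in the text.
firings, kills = 2014, 528
pk = kills / firings
print(f"Combined Pk: {pk:.0%}")  # → 26%

# Amortizing radar-missile R&D investment over radar-missile kills.
radar_kills = 76
investment = 10e9  # "over $10 billion" in radar-missile technology, per the text
cost_per_radar_kill = investment / radar_kills
print(f"Cost per radar-missile kill: ${cost_per_radar_kill / 1e6:.0f} million")  # "over $130 million"
```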
However, continuing heavy development investment and rapidly advancing electronic technology led to significant improvement in radar missile reliabilities from the late 1970s onward. Radar guided missiles achieved 75% Pk (9 kills out of 12 shots) in operations in the Gulf War in 1991. The percentage of kills achieved by radar guided missiles also surpassed 50% of total kills for the first time by 1991. Since 1991, 20 of 61 kills worldwide have been beyond-visual-range using radar missiles. Discounting an accidental friendly fire kill, in operational use the AIM-120D (the current main American radar guided missile) has achieved 9 kills out of 16 shots for a 56% Pk. Six of these kills were BVR, out of 13 shots, for a 46% BVR Pk. Though all these kills were against less capable opponents who were not equipped with operating radar, electronic countermeasures, or a comparable weapon themselves, the BVR Pk was a significant improvement from earlier eras. However, a current concern is electronic countermeasures to radar missiles, which are thought to be reducing the effectiveness of the AIM-120D. Some experts believe that as of 2016 the European Meteor missile, the Russian R-37M, and the Chinese PL-15 are more resistant to countermeasures and more effective than the AIM-120D.
Now that higher reliabilities have been achieved, both types of missiles allow the fighter pilot to often avoid the risk of the short-range dogfight, where only the more experienced and skilled fighter pilots tend to prevail, and where even the finest fighter pilot can simply get unlucky. Taking maximum advantage of complicated missile parameters in both attack and defense against competent opponents does take considerable experience and skill, but against surprised opponents lacking comparable capability and countermeasures, air-to-air missile warfare is relatively simple. By partially automating air-to-air combat and reducing reliance on gun kills mostly achieved by only a small expert fraction of fighter pilots, air-to-air missiles now serve as highly effective force multipliers.
== See also ==
List of fighter aircraft
List of United States fighter aircraft
Warbird
== Notes ==
== References ==
=== Citations ===
=== Bibliography ===
Ahlgren, Jan; Linner, Anders; Wigert, Lars (2002), Gripen, the First Fourth Generation Fighter, Swedish Air Force and Saab Aerospace, ISBN 91-972803-8-0
Burton, James (1993), The Pentagon Wars: Reformers Challenge the Old Guard, Naval Institute Press, ISBN 978-1-61251-369-0
Blume, August G. (1968). Miller, Jr., Thomas G. (ed.). "Cross and Cockade Journal". History of the Serbian Air Force. Vol. 9, no. 3. Whittier, California: The Society of World War I Aero Historians.
Buttar, Prit (20 June 2014). Collision of Empires: The War on the Eastern Front in 1914. Bloomsbury USA. ISBN 978-1-78200648-0 – via Google Books.
Coox, Alvin (1985), Nomonhan: Japan Against Russia, 1939, Stanford University Press, ISBN 0-8047-1160-7
Coram, Robert (2002), Boyd: The Fighter Pilot Who Changed the Art of War, Little, Brown, and Company, ISBN 0-316-88146-5
Cross, Roy (1962), The Fighter Aircraft Pocket Book, Jarrold and Sons, Ltd.
Eden, Paul (2004). The Encyclopedia of Aircraft of WWII. Amber Books, Ltd, London. ISBN 1-904687-07-5.
Glenny, Misha (2012). The Balkans: 1804–2012. New York, New York: Penguin Books. ISBN 978-1-77089-273-6.
Grove, Eric; Ireland, Bernard (1997). Jane's War at Sea. Harper Collins. ISBN 0-00-472065-2.
Gunston, Bill; Spick, Mike (1983), Modern Air Combat, Crescent Books, ISBN 91-972803-8-0
Hammond, Grant T. (2001), The Mind of War: John Boyd and American Security, Smithsonian Institution Press, ISBN 1-56098-941-6
Higby, Patrick (March 2005), Promise and Reality: Beyond Visual Range (BVR) Air-To-Air Combat (PDF), Air War College, Maxwell Air Force Base, archived from the original (PDF) on 31 January 2017, retrieved 24 December 2016
Huenecke, Klaus (1987), Modern Combat Aircraft Design, Airlife Publishing Limited, ISBN 0-517-412659
Lee, John (1942). Fighter Facts and Fallacies. William Morrow and Company.
Munson, Kenneth (1976). Fighters Attack and Training Aircraft 1914-1919 (Revised ed.). Blandford.
Shaw, Robert (1985). Fighter Combat: Tactics and Maneuvering. Naval Institute Press. ISBN 0-87021-059-9.
Spick, Mike (1995), Designed for the Kill: The Jet Fighter—Development and Experience, United States Naval Institute, ISBN 0-87021-059-9
Spick, Mike (1987), An Illustrated Guide to Modern Fighter Combat, Salamander Books Limited, ISBN 0-86101-319-0
Spick, Mike (2000). Brassey's Modern Fighters. Pegasus Publishing Limited. ISBN 1-57488-247-3.
Sprey, Pierre (1982), "Comparing the Effectiveness of Air-to-Air Fighters: F-86 to F-18" (PDF), U.S. DoD Contract MDA 903-81-C-0312, archived from the original (PDF) on 27 August 2021, retrieved 2 July 2016
Stevenson, James (1993), The Pentagon Paradox: The Development of the F-18 Hornet, Naval Institute Press, ISBN 1-55750-775-9
Stimson, George (1983), Introduction to Airborne Radar, Hughes Aircraft Company
Stuart, William (1978), Northrop F-5 Case Study in Aircraft Design, Northrop Corp.
Wagner, Raymond (2000), Mustang Designer: Edgar Schmued and the P-51, Washington, DC: Smithsonian Institution Press
== External links ==
Fighter generations comparison chart on theaviationist.com
Fixed-wing aircraft

A fixed-wing aircraft is a heavier-than-air aircraft, such as an airplane, which is capable of flight using aerodynamic lift. Fixed-wing aircraft are distinct from rotary-wing aircraft (in which a rotor mounted on a spinning shaft generates lift), and ornithopters (in which the wings oscillate to generate lift). The wings of a fixed-wing aircraft are not necessarily rigid; kites, hang gliders, variable-sweep wing aircraft, and airplanes that use wing morphing are all classified as fixed wing.
Gliding fixed-wing aircraft, including free-flying gliders and tethered kites, can use moving air to gain altitude. Powered fixed-wing aircraft (airplanes) that gain forward thrust from an engine include powered paragliders, powered hang gliders and ground effect vehicles. Most fixed-wing aircraft are operated by a pilot, but some are unmanned or controlled remotely or are completely autonomous (no remote pilot).
== History ==
=== Kites ===
Kites were used approximately 2,800 years ago in China, where materials for building them were readily available. Leaf kites may have been flown even earlier in what is now Sulawesi, based on interpretations of cave paintings on nearby Muna Island. Paper kites were flying in China by at least 549 AD, when a paper kite was recorded being used to carry a message for a rescue mission. Ancient and medieval Chinese sources report kites used for measuring distances, testing the wind, lifting men, signaling, and communication for military operations.
Kite stories were brought to Europe by Marco Polo towards the end of the 13th century, and kites were brought back by sailors from Japan and Malaysia in the 16th and 17th centuries. Although initially regarded as curiosities, by the 18th and 19th centuries kites were used for scientific research.
=== Gliders and powered devices ===
Around 400 BC in Greece, Archytas was reputed to have designed and built the first self-propelled flying device, shaped like a bird and propelled by a jet of what was probably steam, said to have flown some 200 m (660 ft). This machine may have been suspended during its flight.
One of the earliest recorded glider attempts was made by the 11th-century monk Eilmer of Malmesbury; it failed. A 17th-century account states that the 9th-century poet Abbas Ibn Firnas made a similar attempt, though no earlier sources record this event.
In 1799, Sir George Cayley laid out the concept of the modern airplane as a fixed-wing machine with separate systems for lift, propulsion, and control. Cayley was building and flying models of fixed-wing aircraft as early as 1803, and he built a successful passenger-carrying glider in 1853. In 1856, Frenchman Jean-Marie Le Bris made the first flight higher than his point of departure, by having his glider L'Albatros artificiel towed by a horse along a beach. In 1884, American John J. Montgomery made controlled flights in a glider, part of a series of gliders he built between 1883 and 1886. Other aviators who made similar flights at that time were Otto Lilienthal, Percy Pilcher, and protégés of Octave Chanute.
In the 1890s, Lawrence Hargrave conducted research on wing structures and developed a box kite that lifted the weight of a man. His designs were widely adopted. He also developed a type of rotary aircraft engine, but did not create a powered fixed-wing aircraft.
=== Powered flight ===
Sir Hiram Maxim built a craft that weighed 3.5 tons, with a 110-foot (34-meter) wingspan powered by two 360-horsepower (270-kW) steam engines driving two propellers. In 1894, his machine was tested with overhead rails to prevent it from rising. The test showed that it had enough lift to take off. The craft was uncontrollable, and Maxim abandoned work on it.
The Wright brothers' flights in 1903 with their Flyer I are recognized by the Fédération Aéronautique Internationale (FAI), the standard setting and record-keeping body for aeronautics, as "the first sustained and controlled heavier-than-air powered flight". By 1905, the Wright Flyer III was capable of fully controllable, stable flight for substantial periods.
In 1906, Brazilian inventor Alberto Santos Dumont designed, built and piloted an aircraft that set the first world record recognized by the Aéro-Club de France by flying the 14 bis 220 metres (720 ft) in less than 22 seconds. The flight was certified by the FAI.
The Bleriot VIII design of 1908 was an early aircraft design that had the modern monoplane tractor configuration. It had movable tail surfaces controlling both yaw and pitch, a form of roll control supplied either by wing warping or by ailerons and controlled by its pilot with a joystick and rudder bar. It was an important predecessor of his later Bleriot XI Channel-crossing aircraft of the summer of 1909.
=== World War I ===
World War I saw the first widespread use of aircraft as weapons and observation platforms. The earliest known aerial victory with a synchronized machine-gun-armed fighter aircraft occurred in 1915, flown by German Luftstreitkräfte Lieutenant Kurt Wintgens. Fighter aces appeared; the greatest (by number of air victories) was Manfred von Richthofen.
Alcock and Brown crossed the Atlantic non-stop for the first time in 1919. The first commercial flights traveled between the United States and Canada in 1919.
=== Interwar aviation; the "Golden Age" ===
The so-called Golden Age of Aviation occurred between the two World Wars, a period during which earlier breakthroughs were refined and extended. Innovations included Hugo Junkers' all-metal airframes, pioneered in 1915, which led to multi-engine aircraft with wingspans of 60 m or more by the early 1930s; the adoption of the mostly air-cooled radial engine as a practical aircraft power plant, alongside V-12 liquid-cooled aviation engines; and longer and longer flights – as with a Vickers Vimy in 1919, followed months later by the U.S. Navy's NC-4 transatlantic flight, culminating in May 1927 with Charles Lindbergh's solo trans-Atlantic flight in the Spirit of St. Louis, which spurred ever-longer flight attempts.
=== World War II ===
Airplanes had a presence in the major battles of World War II. They were an essential component of military strategies, such as the German Blitzkrieg or the American and Japanese aircraft carrier campaigns of the Pacific.
Military gliders were developed and used in several campaigns, but were limited by the high casualty rate encountered. The Focke-Achgelis Fa 330 Bachstelze (Wagtail) rotor kite of 1942 was notable for its use by German U-boats.
Before and during the war, British and German designers worked on jet engines. The first jet aircraft to fly, in 1939, was the German Heinkel He 178. In 1943, the first operational jet fighter, the Messerschmitt Me 262, went into service with the German Luftwaffe. Later in the war the British Gloster Meteor entered service, but never saw action – top air speeds for that era went as high as 1,130 km/h (700 mph), with the early July 1944 unofficial record flight of the German Me 163B V18 rocket fighter prototype.
=== Postwar ===
In October 1947, the Bell X-1 was the first aircraft to exceed the speed of sound, flown by Chuck Yeager.
In 1948–49, aircraft transported supplies during the Berlin Blockade. New aircraft types, such as the B-52, were produced during the Cold War.
The first jet airliner, the de Havilland Comet, was introduced in 1952, followed by the Soviet Tupolev Tu-104 in 1956. The Boeing 707, the first widely successful commercial jet, was in commercial service for more than 50 years, from 1958 to 2010. The Boeing 747 was the world's largest passenger aircraft from 1970 until it was surpassed by the Airbus A380 in 2005. The most successful aircraft is the Douglas DC-3 and its military version, the C-47, a medium-sized twin-engine passenger or transport aircraft that has been in service since 1936 and is still used throughout the world. Some of the hundreds of versions found other purposes, like the AC-47, a Vietnam War-era gunship, which is still used by the Colombian Air Force.
== Types ==
=== Airplane/aeroplane ===
An airplane (spelled aeroplane in British English and shortened to plane) is a powered fixed-wing aircraft propelled by thrust from a jet engine or propeller. Planes come in many sizes, shapes, and wing configurations. Uses include recreation, transportation of goods and people, military, and research.
==== Seaplane ====
A seaplane (hydroplane) is capable of taking off and landing (alighting) on water. Seaplanes that can also operate from dry land are a subclass called amphibian aircraft. Seaplanes and amphibians divide into two categories: float planes and flying boats.
A float plane is similar to a land-based airplane. The fuselage is not specialized. The wheels are replaced or enveloped by floats, allowing the craft to remain afloat for water landings.
A flying boat is a seaplane with a watertight hull for the lower (ventral) areas of its fuselage. The fuselage lands and then rests directly on the water's surface, held afloat by the hull. It does not need additional floats for buoyancy, although small underwing floats or fuselage-mounted sponsons may be used to stabilize it. Large seaplanes are usually flying boats, embodying most classic amphibian aircraft designs.
==== Powered gliders ====
Many forms of glider may include a small power plant. These include:
Motor glider – a conventional glider or sailplane with an auxiliary power plant that may be used when in flight to increase performance.
Powered hang glider – a hang glider with a power plant added.
Powered parachute – a paraglider type of parachute with an integrated air frame, seat, undercarriage and power plant hung beneath.
Powered paraglider or paramotor – a paraglider with a power plant suspended behind the pilot.
==== Ground effect vehicle ====
A ground effect vehicle (GEV) flies close to the terrain, making use of the ground effect – the interaction between the wings and the surface. Some GEVs are able to fly higher out of ground effect (OGE) when required – these are classed as powered fixed-wing aircraft.
=== Glider ===
A glider is a heavier-than-air craft whose free flight does not require an engine. A sailplane is a fixed-wing glider designed for soaring – gaining height using updrafts of air and to fly for long periods.
Gliders are mainly used for recreation but have found use for purposes such as aerodynamics research, warfare and spacecraft recovery.
Motor gliders are equipped with a limited propulsion system for takeoff, or to extend flight duration.
As is the case with planes, gliders come in diverse forms with varied wings, aerodynamic efficiency, pilot location, and controls.
Large gliders are most commonly borne aloft by a tow-plane or by a winch. Military gliders have been used in combat to deliver troops and equipment, while specialized gliders have been used in atmospheric and aerodynamic research. Rocket-powered aircraft and spaceplanes have made unpowered landings similar to a glider.
Gliders and sailplanes that are used for the sport of gliding have high aerodynamic efficiency. The highest lift-to-drag ratio is 70:1, though 50:1 is common. After take-off, further altitude can be gained through the skillful exploitation of rising air. Flights of thousands of kilometers at average speeds over 200 km/h have been achieved.
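These glide-ratio figures translate directly into still-air range: the distance covered equals the altitude lost multiplied by the lift-to-drag ratio. A minimal sketch of that arithmetic, assuming still air and the ratios quoted above:

```python
def glide_range_km(altitude_m: float, lift_to_drag: float) -> float:
    """Still-air glide distance in km from a given altitude.

    Distance flown per unit of height lost equals the lift-to-drag ratio.
    Rising or sinking air (ignored here) changes the result in practice.
    """
    return altitude_m * lift_to_drag / 1000.0

# A sailplane with the common 50:1 glide ratio, released at 1,000 m,
# can reach lift sources up to 50 km away in still air.
print(glide_range_km(1000, 50))  # → 50.0
print(glide_range_km(1000, 70))  # → 70.0 for the best 70:1 designs
```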
One small-scale example of a glider is the paper airplane. An ordinary sheet of paper can be folded into an aerodynamic shape fairly easily; its low mass relative to its surface area reduces the required lift for flight, allowing it to glide some distance.
Gliders and sailplanes share many design elements and aerodynamic principles with powered aircraft. For example, the Horten H.IV was a tailless flying wing glider, and the delta-winged Space Shuttle orbiter glided during its descent phase. Many gliders adopt similar control surfaces and instruments as airplanes.
==== Types ====
The main application of modern glider aircraft is sport and recreation.
===== Sailplane =====
Gliders were developed in the 1920s for recreational purposes. As pilots began to understand how to use rising air, sailplane gliders were developed with a high lift-to-drag ratio. These allowed the craft to glide to the next source of "lift", increasing their range. This gave rise to the popular sport of gliding.
Early gliders were built mainly of wood and metal, later replaced by composite materials incorporating glass, carbon or aramid fibers. To minimize drag, these types have a streamlined fuselage and long narrow wings incorporating a high aspect ratio. Single-seat and two-seat gliders are available.
Initially, training was done by short "hops" in primary gliders, which have no cockpit and minimal instruments. Since shortly after World War II, training has been done in two-seat dual-control gliders, and high-performance two-seaters can make long flights. Originally skids were used for landing, later replaced by wheels, often retractable. Motor gliders are designed primarily for unpowered flight, but can deploy piston, rotary, jet or electric engines. Gliders are classified by the FAI for competitions into glider competition classes, mainly on the basis of wingspan and flaps.
A class of ultralight sailplanes, including some known as microlift gliders and some known as airchairs, has been defined by the FAI based on weight. They are light enough to be transported easily, and can be flown without licensing in some countries. Ultralight gliders have performance similar to hang gliders, but offer some crash safety as the pilot can strap into an upright seat within a deform-able structure. Landing is usually on one or two wheels which distinguishes these craft from hang gliders. Most are built by individual designers and hobbyists.
===== Military gliders =====
Military gliders were used during World War II for carrying troops (glider infantry) and heavy equipment to combat zones. The gliders were towed into the air and most of the way to their target by transport planes, e.g. the C-47 Dakota, or by one-time bombers that had been relegated to secondary activities, e.g. the Short Stirling. The advantages over paratroopers were that heavy equipment could be landed and that troops were quickly assembled rather than dispersed over a parachute drop zone. The gliders were treated as disposable, constructed from inexpensive materials such as wood, though a few were re-used. By the time of the Korean War, transport aircraft had become larger and more efficient, so that even light tanks could be dropped by parachute, rendering gliders obsolete.
===== Research gliders =====
Even after the development of powered aircraft, gliders continued to be used for aviation research. The NASA Paresev Rogallo flexible wing was developed to investigate alternative methods of recovering spacecraft. Although this application was abandoned, publicity inspired hobbyists to adapt the flexible-wing airfoil for hang gliders.
Initial research into many types of fixed-wing craft, including flying wings and lifting bodies was also carried out using unpowered prototypes.
===== Hang glider =====
A hang glider is a glider aircraft in which the pilot is suspended in a harness suspended from the air frame, and exercises control by shifting body weight in opposition to a control frame. Hang gliders are typically made of an aluminum alloy or composite-framed fabric wing. Pilots can soar for hours, gain thousands of meters of altitude in thermal updrafts, perform aerobatics, and glide cross-country for hundreds of kilometers.
===== Paraglider =====
A paraglider is a lightweight, free-flying, foot-launched glider with no rigid body. The pilot is suspended in a harness below a hollow fabric wing whose shape is formed by its suspension lines. Air entering vents in the front of the wing and the aerodynamic forces of the air flowing over the outside power the craft. Paragliding is most often a recreational activity.
==== Unmanned gliders ====
A paper plane is a toy aircraft (usually a glider) made out of paper or paperboard.
Model glider aircraft are models of aircraft using lightweight materials such as polystyrene and balsa wood. Designs range from simple glider aircraft to accurate scale models, some of which can be very large.
Glide bombs are bombs with aerodynamic surfaces to allow a gliding flight path rather than a ballistic one. This enables stand-off aircraft to attack a target from a distance.
=== Kite ===
A kite is a tethered aircraft held aloft by wind that blows over its wing(s). High pressure below the wing deflects the airflow downwards, generating lift; the flow also produces horizontal drag in the direction of the wind. The resultant force vector from the lift and drag components is opposed by the tension of the tether.
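The force balance described above can be sketched numerically: in steady flight the tether tension is equal and opposite to the vector sum of lift (vertical) and drag (horizontal, downwind). The force values below are illustrative assumptions, not data from the text, and the kite's weight is neglected for simplicity:

```python
import math

def tether_equilibrium(lift_n: float, drag_n: float):
    """Resultant aerodynamic force on a kite and the tether angle that balances it.

    Tension magnitude is the vector sum of lift and drag; the line angle
    above the horizontal is atan(lift / drag). Kite weight is neglected
    here -- a simplifying assumption for illustration.
    """
    tension = math.hypot(lift_n, drag_n)
    angle_deg = math.degrees(math.atan2(lift_n, drag_n))
    return tension, angle_deg

# Illustrative values: 40 N of lift, 10 N of drag.
tension, angle = tether_equilibrium(40.0, 10.0)
print(round(tension, 1), round(angle, 1))  # ≈ 41.2 N at ≈ 76.0° above horizontal
```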
Kites are mostly flown for recreational purposes, but have many other uses. Early pioneers such as the Wright Brothers and J.W. Dunne sometimes flew an aircraft as a kite in order to confirm its flight characteristics, before adding an engine and flight controls.
==== Applications ====
===== Military =====
Kites have been used for signaling, for delivery of munitions, and for observation, by lifting an observer above the field of battle, and by using kite aerial photography.
===== Science and meteorology =====
Kites have been used for scientific purposes, such as Benjamin Franklin's famous experiment proving that lightning is electricity. Kites were the precursors to the traditional aircraft, and were instrumental in the development of early flying craft. Alexander Graham Bell experimented with large man-lifting kites, as did the Wright brothers and Lawrence Hargrave. Kites had a historical role in lifting scientific instruments to measure atmospheric conditions for weather forecasting.
===== Radio aerials and light beacons =====
Kites can be used to carry radio antennas. This method was used for the reception station of the first transatlantic transmission by Marconi. Captive balloons may be more convenient for such experiments, because kite-carried antennas require strong wind, which may not always be available with heavy equipment and a ground conductor.
Kites can be used to carry light sources such as light sticks or battery-powered lights.
===== Kite traction =====
Kites can be used to pull people and vehicles downwind. Efficient foil-type kites such as power kites can also be used to sail upwind under the same principles as used by other sailing craft, provided that lateral forces on the ground or in the water are redirected as with the keels, center boards, wheels and ice blades of traditional sailing craft. In the last two decades, kite sailing sports have become popular, such as kite buggying, kite landboarding, kite boating and kite surfing. Snow kiting is also popular.
Kite sailing opens several possibilities not available in traditional sailing:
Wind speeds are greater at higher altitudes
Kites may be maneuvered dynamically, which dramatically increases the available force
Mechanical structures are not needed to withstand bending forces; vehicles/hulls can be light or eliminated.
===== Power generation =====
Research and development projects investigate kites for harnessing high altitude wind currents for electricity generation.
===== Cultural uses =====
Kite festivals are a popular form of entertainment throughout the world. They include local events, traditional festivals and major international festivals.
==== Designs ====
Bermuda kite
Bowed kite, e.g. Rokkaku
Cellular or box kite
Chapi-chapi
Delta kite
Foil, parafoil or bow kite
Malay kite see also wau bulan
Tetrahedral kite
==== Types ====
Expanded polystyrene kite
Fighter kite
Indoor kite
Inflatable single-line kite
Kytoon
Man-lifting kite
Rogallo parawing kite
Stunt (sport) kite
Water kite
== Characteristics ==
=== Air frame ===
The structural element of a fixed-wing aircraft is the air frame. It varies according to the aircraft's type, purpose, and technology. Early airframes were made of wood with fabric wing surfaces. When engines became available for powered flight, their mounts were made of metal. As speeds increased, metal became more common, until by the end of World War II all-metal (and glass) aircraft were common. In modern times, composite materials have become more common.
Typical structural elements include:
One or more mostly horizontal wings, often with an airfoil cross-section. The wing deflects air downward as the aircraft moves forward, generating lifting force to support it in flight. The wing also provides lateral stability to keep the aircraft level in steady flight. Other roles are to hold the fuel and mount the engines.
A fuselage, typically a long, thin body, usually with tapered or rounded ends to make its shape aerodynamically slippery. The fuselage joins the other parts of the air frame and contains the payload, and flight systems.
A vertical stabilizer or fin is a rigid surface mounted at the rear of the plane and typically protruding above it. The fin stabilizes the plane's yaw (turn left or right) and mounts the rudder, which controls its rotation about that axis.
A horizontal stabilizer, usually mounted at the tail near the vertical stabilizer. The horizontal stabilizer is used to stabilize the plane's pitch (tilt up or down) and mounts the elevators that provide pitch control.
Landing gear, a set of wheels, skids, or floats that support the plane while it is not in flight. On seaplanes, the bottom of the fuselage or floats (pontoons) support it while on the water. On some planes, the landing gear retracts during the flight to reduce drag.
=== Wings ===
The wings of a fixed-wing aircraft are static planes extending to either side of the aircraft. When the aircraft travels forwards, air flows over the wings that are shaped to create lift.
==== Structure ====
Kites and some lightweight gliders and airplanes have flexible wing surfaces that are stretched across a frame and made rigid by the lift forces exerted by the airflow over them. Larger aircraft have rigid wing surfaces.
Whether flexible or rigid, most wings have a strong frame to give them shape and to transfer lift from the wing surface to the rest of the aircraft. The main structural elements are one or more spars running from root to tip, and ribs running from the leading (front) to the trailing (rear) edge.
Early airplane engines had little power and light weight was critical. Also, early airfoil sections were thin, and could not support a strong frame. Until the 1930s, most wings were so fragile that external bracing struts and wires were added. As engine power increased, wings could be made heavy and strong enough that bracing was unnecessary. Such an unbraced wing is called a cantilever wing.
==== Configuration ====
The number and shape of wings vary widely. Some designs blend the wing with the fuselage, while left and right wings separated by the fuselage are more common.
Occasionally more wings have been used, such as the three-winged triplane from World War I. Four-winged quadruplanes and other multiplane designs have had little success.
Most planes are monoplanes, with one or two parallel wings. Biplanes and triplanes stack one wing above the other. Tandem wings place one wing behind the other, possibly joined at the tips. When the available engine power increased during the 1920s and 1930s and bracing was no longer needed, the unbraced or cantilever monoplane became the most common form.
The planform is the shape when seen from above/below. To be aerodynamically efficient, wings are straight with a long span, but a short chord (high aspect ratio). To be structurally efficient, and hence lightweight, wingspan must be as small as possible, but offer enough area to provide lift.
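Aspect ratio, the quantity behind this trade-off, is simply span squared divided by planform area. A minimal sketch with hypothetical dimensions (not taken from any particular type):

```python
def aspect_ratio(span_m: float, wing_area_m2: float) -> float:
    """Wing aspect ratio: span squared divided by planform area."""
    return span_m ** 2 / wing_area_m2

# A long, narrow sailplane-style wing versus a short, broad wing:
print(aspect_ratio(15.0, 10.0))  # → 22.5 (high AR: aerodynamically efficient)
print(aspect_ratio(8.0, 12.0))   # ≈ 5.3  (low AR: structurally efficient)
```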
To travel at transonic speeds, variable geometry wings change orientation, angling backward to reduce drag from supersonic shock waves. The variable-sweep wing transforms between an efficient straight configuration for takeoff and landing, to a low-drag swept configuration for high-speed flight. Other forms of variable planform have been flown, but none have gone beyond the research stage. The swept wing is a straight wing swept backward or forwards.
The delta wing is a triangular shape that serves various purposes. As a flexible Rogallo wing, it allows a stable shape under aerodynamic forces, and is often used for kites and other ultralight craft. It is supersonic capable, combining high strength with low drag.
Wings are typically hollow, often also serving as fuel tanks. They commonly carry flaps, which increase lift and drag for take-off and landing, and ailerons, which deflect in opposition to roll the aircraft and change direction.
=== Fuselage ===
The fuselage is typically long and thin, usually with tapered or rounded ends to make its shape aerodynamically smooth. Most fixed-wing aircraft have a single fuselage. Others may have multiple fuselages, or the fuselage may be fitted with booms on either side of the tail to allow the extreme rear of the fuselage to be utilized.
The fuselage typically carries the flight crew, passengers, cargo, and sometimes fuel and engine(s). Gliders typically omit fuel and engines, although some variations such as motor gliders and rocket gliders have them for temporary or optional use.
Pilots of manned commercial fixed-wing aircraft control them from inside a cockpit within the fuselage, typically located at the front/top, equipped with controls, windows, and instruments, separated from passengers by a secure door. In small aircraft, the passengers typically sit behind the pilot(s) in the cabin. Occasionally, a passenger may sit beside or in front of the pilot. Larger passenger aircraft have a separate passenger cabin, or occasionally cabins, physically separated from the cockpit.
Aircraft often have two or more pilots, with one in overall command (the "pilot") and one or more "co-pilots". On larger aircraft a navigator is typically also seated in the cockpit as well. Some military or specialized aircraft may have other flight crew members in the cockpit as well.
=== Wings vs. bodies ===
==== Flying wing ====
A flying wing is a tailless aircraft that has no distinct fuselage, housing the crew, payload, and equipment inside.
The flying wing configuration was studied extensively in the 1930s and 1940s, notably by Jack Northrop and Cheston L. Eshelman in the United States, and Alexander Lippisch and the Horten brothers in Germany. After the war, numerous experimental designs were based on the flying wing concept. General interest continued into the 1950s, but designs did not offer a great advantage in range and presented technical problems. The flying wing is most practical for designs in the slow-to-medium speed range, and drew continual interest as a tactical airlifter design.
Interest in flying wings reemerged in the 1980s due to their potentially low radar cross-sections. Stealth technology relies on shapes that reflect radar waves only in certain directions, making the aircraft harder to detect. This approach eventually led to the Northrop B-2 Spirit stealth bomber (pictured). In this case the flying wing's aerodynamic qualities were not the primary concern; computer-controlled fly-by-wire systems compensated for many of the configuration's aerodynamic drawbacks, enabling an efficient and stable long-range aircraft.
==== Blended wing body ====
Blended wing body aircraft have a flattened airfoil-shaped body, which produces most of the lift to keep itself aloft, and distinct and separate wing structures, though the wings are blended with the body.
Blended wing bodied aircraft incorporate design features from both fuselage and flying wing designs. The purported advantages of the blended wing body approach are efficient, high-lift wings and a wide, airfoil-shaped body. This enables the entire craft to contribute to lift generation with potentially increased fuel economy.
==== Lifting body ====
A lifting body is a configuration in which the body produces lift. In contrast to a flying wing, which is a wing with minimal or no conventional fuselage, a lifting body can be thought of as a fuselage with little or no conventional wing. Whereas a flying wing seeks to maximize cruise efficiency at subsonic speeds by eliminating non-lifting surfaces, lifting bodies generally minimize the drag and structure of a wing for subsonic, supersonic, and hypersonic flight, or, spacecraft re-entry. All of these flight regimes pose challenges for flight stability.
Lifting bodies were a major area of research in the 1960s and 1970s as a means to build small and lightweight manned spacecraft. The US built lifting body rocket planes to test the concept, as well as several rocket-launched re-entry vehicles. Interest waned as the US Air Force lost interest in the manned mission, and major development ended during the Space Shuttle design process when it became clear that highly shaped fuselages made it difficult to fit fuel tanks.
=== Empennage and foreplane ===
The classic airfoil section wing is unstable in flight. Flexible-wing planes often rely on an anchor line or the weight of a pilot hanging beneath to maintain the correct attitude. Some free-flying types use an adapted airfoil that is stable, or other mechanisms including electronic artificial stability.
In order to achieve trim, stability, and control, most fixed-wing types have an empennage comprising a vertical fin and rudder, which stabilize and control yaw, and a horizontal tailplane and elevator, which stabilize and control pitch. This arrangement is so common that it is known as the conventional layout. Sometimes two or more fins are spaced out along the tailplane.
Some types have a horizontal "canard" foreplane ahead of the main wing, instead of behind it. This foreplane may contribute to the trim, stability or control of the aircraft, or to several of these.
=== Aircraft controls ===
==== Kite control ====
Kites are controlled by one or more tethers.
==== Free-flying aircraft controls ====
Gliders and airplanes have sophisticated control systems, especially if they are piloted.
The controls allow the pilot to direct the aircraft in the air and on the ground. Typically these are:
The yoke or joystick controls rotation of the plane about the pitch and roll axes. A yoke resembles a steering wheel. The pilot can pitch the plane down by pushing on the yoke or joystick, and pitch the plane up by pulling on it. Rolling the plane is accomplished by turning the yoke in the direction of the desired roll, or by tilting the joystick in that direction.
Rudder pedals control rotation of the plane about the yaw axis. Two pedals pivot so that when one is pressed forward the other moves backward, and vice versa. The pilot presses on the right rudder pedal to make the plane yaw to the right, and pushes on the left pedal to make it yaw to the left. The rudder is used mainly to balance the plane in turns, or to compensate for winds or other effects that push the plane about the yaw axis.
On powered types, an engine stop control ("fuel cutoff", for example) and, usually, a throttle or thrust lever, along with other controls such as a fuel-mixture control (to compensate for air-density changes with altitude).
Other common controls include:
Flap levers, which are used to control the deflection position of flaps on the wings.
Spoiler levers, which are used to control the position of spoilers on the wings, and to arm their automatic deployment in planes designed to deploy them upon landing. The spoilers reduce lift for landing.
Trim controls, which usually take the form of knobs or wheels and are used to adjust pitch, roll, or yaw trim. These are often connected to small airfoils on the trailing edge of the control surfaces and are called "trim tabs". Trim is used to reduce the amount of pressure on the control forces needed to maintain a steady course.
On wheeled types, brakes are used to slow and stop the plane on the ground, and sometimes for turns on the ground.
A craft may have two pilot seats with dual controls, allowing two to take turns.
The control system may allow full or partial automation, such as an autopilot, a wing leveler, or a flight management system. An unmanned aircraft has no pilot and is controlled remotely or via gyroscopes, computers/sensors or other forms of autonomous control.
=== Cockpit instrumentation ===
On manned fixed-wing aircraft, instruments provide the pilots with information about the flight, the engines, navigation, communications, and other aircraft systems that may be installed.
The six basic instruments, sometimes referred to as the six pack, are:
The airspeed indicator (ASI) shows the speed at which the plane is moving through the air.
The attitude indicator (AI), sometimes called the artificial horizon, indicates the exact orientation of the aircraft about its pitch and roll axes.
The altimeter indicates the altitude or height of the plane above mean sea level (AMSL).
The vertical speed indicator (VSI), or variometer, shows the rate at which the plane is climbing or descending.
The heading indicator (HI), sometimes called the directional gyro (DG), shows the magnetic compass orientation of the fuselage. The direction is affected by wind conditions and magnetic declination.
The turn coordinator (TC), or turn and bank indicator, helps the pilot to control the plane in a coordinated attitude while turning.
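As an illustration of how an altimeter derives height from static pressure, here is a sketch using the International Standard Atmosphere troposphere relation, with standard sea-level values assumed; real altimeters also apply a pilot-set barometric correction, which is omitted here:

```python
def pressure_altitude_m(static_pressure_pa: float) -> float:
    """Altitude in the International Standard Atmosphere for a given static pressure.

    Uses the standard troposphere relation with assumed ISA sea-level
    values (101325 Pa, 15 °C, lapse rate 6.5 K/km); valid below ~11 km.
    """
    P0 = 101325.0  # ISA sea-level standard pressure, Pa
    return 44330.0 * (1.0 - (static_pressure_pa / P0) ** 0.19026)

print(round(pressure_altitude_m(101325)))  # → 0 at sea level
print(round(pressure_altitude_m(22632)))   # ≈ 11000 m (the tropopause)
```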
Other cockpit instruments include:
A two-way radio, to enable communications with other planes and with air traffic control.
A horizontal situation indicator (HSI) indicates the position and movement of the plane as seen from above with respect to the ground, including course/heading and other information.
Instruments showing the status of the plane's engines (operating speed, thrust, temperature, and other variables).
Combined display systems such as primary flight displays or navigation aids.
Information displays such as onboard weather radar displays.
A radio direction finder (RDF), to indicate the direction to one or more radio beacons, which can be used to determine the plane's position.
A satellite navigation (satnav) system, to provide an accurate position.
Some or all of these instruments may appear on a computer display and be operated with touches, as on a phone.
== See also ==
Aircraft flight mechanics
Airliner
Aviation
Aviation and the environment
Aviation history
Fuel efficiency
List of altitude records reached by different aircraft types
Maneuvering speed
Rotorcraft
== References ==
=== Notes ===
In 1903, when the Wright brothers used the word, "aeroplane" (a British English term that can also mean airplane in American English) meant wing, not the whole aircraft. See text of their patent. Patent 821,393 – Wright brothers' patent for "Flying Machine"
=== Citations ===
=== Bibliography ===
Blatner, David. The Flying Book: Everything You've Ever Wondered About Flying on Airplanes. ISBN 0-8027-7691-4
== External links ==
The airplane centre
Airliners.net
Aerospaceweb.org
How Airplanes Work – Howstuffworks.com
Smithsonian National Air and Space Museum's How Things Fly website
"Hops and Flights – a Roll Call of Early Powered Take-offs" a 1959 Flight article
Flag carrier

A flag carrier is a transport company, such as an airline or shipping company, that, being locally registered in a given sovereign state, enjoys preferential rights or privileges accorded by that government for international operations.
Historically, the term was used to refer to airlines owned by the government of their home country and associated with the national identity of that country. Such an airline may also be known as a national airline or a national carrier, although this can have different legal meanings in some countries. Today, it is any international airline with a strong connection to its home country or that represents its home country internationally, regardless of whether it is government-owned.
Flag carriers may also be known as such due to laws requiring aircraft or ships to display the state flag of the country of their registry. For example, under the law of the United States, a U.S. flag air carrier is any airline that holds a certificate under Section 401 of the Federal Aviation Act of 1958 (i.e., any U.S.-based airline operating internationally), and any ship registered in the United States is known as a U.S. flag vessel.
== Background ==
The term "flag carrier" is a legacy of the early days of commercial aviation when governments often took the lead by establishing state-owned airlines because of the high capital costs of running them. However, not all such airlines were government-owned; Pan Am, TWA, Cathay Pacific, Union de Transports Aériens, Canadian Pacific Air Lines and Olympic Airlines were all privately owned, but were considered to be flag carriers as they were the "main national airline" and often a sign of their country's presence abroad.
The heavily regulated aviation industry also meant that aviation rights were often negotiated between governments, denying airlines access to an open market. These bilateral air transport agreements, similar to the Bermuda I and Bermuda II agreements, specify rights awardable only to locally registered airlines, forcing some governments to jump-start airlines to avoid being disadvantaged in the face of foreign competition. Some countries also establish flag carriers, such as Israel's El Al or Lebanon's Middle East Airlines, for nationalist reasons or to aid the country's economy, particularly in the area of tourism.
In many cases, governments would directly assist in the growth of their flag carriers typically through subsidies and other fiscal incentives. The establishment of competitors in the form of other locally registered airlines may be prohibited or heavily regulated to avoid direct competition. Even where privately run airlines may be allowed to be established, the flag carriers may still be accorded priority, especially in the apportionment of aviation rights to local or international markets.
Near the end of the 20th century, many of these airlines were corporatized as public companies or state-owned enterprises, while others were completely privatized. The aviation industry has also been gradually deregulated and liberalized, permitting greater freedoms of the air, particularly in the United States and in the European Union with the signing of the Open Skies agreement. One of the features of such agreements is the right of a country to designate multiple airlines to serve international routes, with the result that there is no single "flag carrier".
== List of flag-carrying airlines ==
The chart below lists airlines considered to be a "flag carrier", based on current or former state ownership or other verifiable designation as a national airline.
== See also ==
List of charter airlines
List of low-cost airlines
== Notes ==
== References ==
== External links ==
International Air Transport Association
"U.S. Flag Services". US Maritime Administration. Archived from the original on 13 August 2012. |
Flight airspeed record | An air speed record is the highest airspeed attained by an aircraft of a particular class. The rules for all official aviation records are defined by Fédération Aéronautique Internationale (FAI), which also ratifies any claims. Speed records are divided into a number of classes with sub-divisions. There are three classes of aircraft: landplanes, seaplanes, and amphibians, and within these classes there are records for aircraft in a number of weight categories. There are still further subdivisions for piston-engined, turbojet, turboprop, and rocket-engined aircraft. Within each of these groups, records are defined for speed over a straight course and for closed circuits of various sizes carrying various payloads.
== Timeline ==
Gray text indicates unofficial records, including unconfirmed or unpublicized war secrets.
== Official records versus unofficial ==
The Lockheed SR-71 Blackbird holds the official Air Speed Record for a crewed airbreathing jet engine aircraft with a speed of 3,530 km/h (2,190 mph). The record was set on 28 July 1976 by Eldon W. Joersz and George T. Morgan Jr. near Beale Air Force Base, California, USA. It was able to take off and land unassisted on conventional runways. SR-71 pilot Brian Shul claimed in The Untouchables that he flew in excess of Mach 3.5 on 15 April 1986, over Libya, in order to avoid a missile.
Although the official record for fastest piston-engined aeroplane in level flight was held by a Grumman F8F Bearcat, the Rare Bear, with a speed of 850.23 km/h (528.31 mph), the unofficial record for fastest piston-engined aeroplane in level flight is held by a British Hawker Sea Fury at 880 km/h (547 mph). Both were demilitarised and modified fighters, while the fastest stock (original, factory-built) piston-engined aeroplane was unofficially the Supermarine Spiteful F Mk 16, which "achieved a speed of 494m.p.h. at 28,500ft during official tests at Boscombe Down" in level flight. The unofficial record for fastest piston-engined aeroplane (not in level flight) is held by a Supermarine Spitfire Mk.XIX flown by Flight Lieutenant Edward "Ted" Powles, which was calculated to have achieved a speed of 1,110 km/h (690 mph) in a dive on 5 February 1952.
The last new speed record ratified before the outbreak of World War II was set on 26 April 1939 with a Me 209 V1, at 755 km/h (469 mph). The chaos and secrecy of World War II meant that new speed breakthroughs were neither publicized nor ratified. In October 1941, an unofficial speed record of 1,004 km/h (624 mph) was secretly set by a Messerschmitt Me 163A "V4" rocket aircraft. Continued research during the war extended the secret, unofficial speed record to 1,130 km/h (700 mph) by July 1944, achieved by a Messerschmitt Me 163B "V18". The first new official record in the post-war period was achieved by a Gloster Meteor F Mk.4 in November 1945, at 975 km/h (606 mph). The first aircraft to exceed the unofficial October 1941 record of the Me 163A V4 was the Douglas D-558-1 Skystreak, which achieved 1,032 km/h (641 mph) in August 1947. The July 1944 unofficial record of the Me 163B V18 was officially surpassed in November 1947, when Chuck Yeager flew the Bell X-1 to 1,434 km/h (891 mph).
The official speed record for a piston-engined seaplane is 709.209 km/h (440.682 mph), attained on 24 October 1934 by Francesco Agello in the Macchi-Castoldi M.C.72 seaplane ("idrocorsa"); it remains the current record. It was equipped with the Fiat AS.6 engine (version 1934) developing a power of 2,300 kW (3,100 hp) at 3,300 rpm, with coaxial counter-rotating propellers. The original record-holding Macchi-Castoldi M.C.72 MM.181 seaplane is at the Air Force Museum at Vigna di Valle in Italy.
== Other air speed records ==
Flying between any two airports allows a large number of combinations, so setting a speed record ("speed over a recognised course") is fairly easy with an ordinary aircraft, although there are many administrative requirements for recognition.
== See also ==
Flight altitude record
Fastest propeller-driven aircraft
List of vehicle speed records
Lockheed X-7 - Mach 4.31 (2,881 mph) in the 1950s
Messerschmitt Me 163 Komet
World record
== References ==
Allward, Maurice. Modern Combat Aircraft 4: F-86 Sabre. London: Ian Allan, 1978. ISBN 0-7110-0860-4.
Andrews, C.F. and E.B. Morgan. Supermarine Aircraft since 1914. London:Putnam, 1987. ISBN 0-85177-800-3.
Belyakov, R.A. and J. Marmain. MiG: Fifty Years of Secret Aircraft Design. Shrewsbury, UK:Airlife, 1994. ISBN 1-85310-488-4.
Bowers, Peter M. Curtiss Aircraft 1907–1947. London:Putnam, 1979. ISBN 0-370-10029-8.
Cooper, H.J. "The World's Speed Record". Flight, 25 May 1951, pp. 617–619.
"Eighteen Years of World's Records". Flight, 7 February 1924, pp. 73–75.
Francillon, René J. McDonnell Douglas Aircraft since 1920. London:Putnam, 1979. ISBN 0-370-00050-1.
James, Derek N. Gloster Aircraft since 1917. London:Putnam, 1971. ISBN 0-370-00084-6.
Mason, Francis K. The British Fighter since 1912. Annapolis Maryland, US: Naval Institute Press, 1992. ISBN 1-55750-082-7.
Munson, Kenneth and John William Ransom Taylor Jane's Pocket Book of Record-breaking Aircraft. New York New York, US: Macmillan, 1978. ISBN 0-02-080630-2.
Taylor, H. A. Fairey Aircraft since 1915. London:Putnam, 1974. ISBN 0-370-00065-X.
Taylor, John W. R. Jane's All The World's Aircraft 1965–66. London:Sampson Low, Marston & Company, 1965.
Taylor, John W. R. Jane's All The World's Aircraft 1976–77. London:Jane's Yearbooks, 1976. ISBN 0-354-00538-3.
Taylor, John W. R. Jane's All The World's Aircraft 1988–89. Coulsdon, UK:Jane's Defence Data, 1988. ISBN 0-7106-0867-5.
Organ, Richard Avro Arrow: The Story of the Avro Arrow From Its Evolution To Its Extinction. Erin, ON, Canada: Boston Mills Press, 1980. ISBN 978-1550460476.
== External links ==
Web site of the Fédération Aéronautique Internationale (FAI)
Speed records time line
Speed Record Club - The Speed Record Club seeks to promote an informed and educated enthusiast identity, reporting accurately and impartially to the best of its ability on record-breaking engineering, events, attempts and history.
Ground Speed Records - Breakdown of speed records by aircraft type |
Flight altitude record | This list of flight altitude records contains the records set for the highest aeronautical flights conducted in the atmosphere and beyond, since the age of ballooning.
Some, but not all, of the records were certified by the non-profit international aviation organization, the Fédération Aéronautique Internationale (FAI). One reason for a lack of 'official' certification is that some flights occurred prior to the creation of the FAI.
For clarity, the "Fixed-wing aircraft" table is sorted by FAI-designated categories as determined by whether the record-creating aircraft left the ground by its own power (category "Altitude"), or whether it was first carried aloft by a carrier-aircraft prior to its record setting event (category "Altitude gain", or formally "Altitude Gain, Aeroplane Launched from a Carrier Aircraft"). Other sub-categories describe the airframe, and more importantly, the powerplant type (since rocket-powered aircraft can have greater altitude abilities than those with air-breathing engines).
An essential requirement for the creation of an "official" altitude record is the employment of FAI-certified observers present during the record-setting flight. Thus several records noted are unofficial due to the lack of such observers.
== Balloons ==
1783-08-15: 24 m (79 ft); Jean-François Pilâtre de Rozier of France, the first ascent in a hot-air balloon.
1783-10-19: 81 m (266 ft); Jean-François Pilâtre de Rozier, in Paris.
1783-10-19: 105 m (344 ft); Jean-François Pilâtre de Rozier with André Giroud de Villette, in Paris.
1783-11-21: 1,000 m (3,300 ft); Jean-François Pilâtre de Rozier with Marquis d'Arlandes, in Paris.
1783-12-01: 2.7 km (8,900 ft); Jacques Alexandre Charles and his assistant Marie-Noël Robert, both of France, made the first flight in a hydrogen balloon to about 610 m (2,000 ft). Charles then ascended alone to the record altitude.
1784-06-23: 4 km (13,000 ft); Pilâtre de Rozier and the chemist Joseph Proust in a Montgolfier.
1803-07-18: 7.28 km (23,900 ft); Étienne-Gaspard Robert and Auguste Lhoëst in a balloon.
1839: 7.9 km (26,000 ft); Charles Green and Spencer Rush in a free balloon.
1862-09-05: about 9 km (29,500 ft); Henry Coxwell and James Glaisher in a balloon filled with coal gas. Glaisher lost consciousness during the ascent due to the low air pressure and cold temperature of −11 °C (12 °F).
1901-07-31: 10.8 km (35,000 ft); Arthur Berson and Reinhard Süring in the hydrogen balloon Preußen, in an open basket and with oxygen in steel cylinders. This flight contributed to the discovery of the stratosphere.
1927-11-04: 13.222 km (43,380 ft); Captain Hawthorne C. Gray, of the U.S. Army Air Corps, in a helium balloon. Gray lost consciousness after his oxygen supply ran out and was killed in the crash.
1931-05-27: 15.781 km (51,770 ft); Auguste Piccard and Paul Kipfer in a hydrogen balloon.
1932: 16.201 km (53,150 ft); Auguste Piccard and Max Cosyns in a hydrogen balloon.
1933-09-30: 18.501 km (60,700 ft); USSR balloon USSR-1.
1933-11-20: 18.592 km (61,000 ft); Lt. Comdr. Thomas G. W. Settle (USN) and Maj. Chester L. Fordney (USMC) in the Century of Progress balloon.
1934-01-30: 21.946 km (72,000 ft); USSR balloon Osoaviakhim-1. The three crew were killed when the balloon broke up during the descent.
1935-11-10: 22.066 km (72,400 ft); Captain O. A. Anderson and Captain A. W. Stevens (U.S. Army Air Corps) ascended in the Explorer II gondola from the Stratobowl, near Rapid City, South Dakota, for a flight that lasted 8 hours 13 minutes and covered 362 kilometres (225 mi).
1956-11-08: 23.165 km (76,000 ft); Malcolm D. Ross and M. L. Lewis (U.S. Navy) in Office of Naval Research Strato-Lab I, using a pressurized gondola and plastic balloon launching near Rapid City, South Dakota, and landing 282 km (175 mi) away near Kennedy, Nebraska.
1957-06-02: 29.4997 km (96,784 ft); Captain Joseph W. Kittinger (U.S. Air Force) ascended in the Project Manhigh 1 gondola to a record-breaking altitude.
1957-08-19: 31.212 km (102,400 ft); above sea level, Major David Simons (U.S. Air Force) ascended from the Portsmouth Mine near Crosby, Minnesota, in the Manhigh 2 gondola for a 32-hour record-breaking flight. Simons landed at 5:32 p.m. on August 20 in northeastern South Dakota.
1960-08-16: 31.333 km (102,800 ft); Testing a high-altitude parachute system, Joseph Kittinger of the U.S. Air Force parachuted from the Excelsior III balloon over New Mexico at 102,800 ft (31,300 m). He set world records for: high-altitude jump; freefall diving by falling 26 km (16 mi) before opening his parachute; and fastest speed achieved by a human without motorized assistance, 988 km/h (614 mph).
1961-05-04: 34.668 km (113,740 ft); Commander Malcolm D. Ross and Lieutenant Commander Victor A. Prather, Jr., of the U.S. Navy ascended in the Strato-Lab V, in an unpressurized gondola. After descending, the gondola containing the two balloonists landed in the Gulf of Mexico. Prather slipped off the rescue helicopter's hook into the gulf and drowned.
1966-02-02: 37.6 km (123,000 ft); Amateur parachutist Nicholas Piantanida of the United States with his "Project Strato-Jump" II balloon. Because he was unable to disconnect his oxygen line from the gondola's main feed, the ground crew had to remotely detach the balloon from the gondola. His planned free fall and parachute jump was abandoned, and he returned to the ground in the gondola. Although Piantanida did not achieve his intended free-fall record, his flight set other records that stood for 46 years. Because of the design of his glove, he was unable to reattach his safety seat-belt harness; he endured very high g-forces but survived the descent. Piantanida's ascent is not recognized by the Fédération Aéronautique Internationale as a balloon altitude world record because he did not return with his balloon, although that was not the feat he was attempting. On this second "Project Strato-Jump" attempt, Piantanida carried 250 postmarked air-mail envelopes and letters. At the time, these were the first covers ever to have been delivered by the U.S. Post Office via space. The practice of taking covers to space continued with the Apollo program; in 1972 there was a scandal involving the Apollo 15 astronauts. It is unclear whether any of the "Project Strato-Jump" covers survived and were eventually mailed to their intended recipients.
2012-10-14: 38.969 km (127,850 ft); Felix Baumgartner in the Red Bull Stratos balloon. The flight started near Roswell, New Mexico, and returned to earth via a record-setting parachute jump.
2014-10-24: 41.424 kilometres (135,910 ft); Alan Eustace, a senior vice president at the Google corporation, in a helium balloon, returning to earth via parachute jump during the StratEx mission executed by Paragon Space Development Corporation.
=== Hot-air balloons ===
=== Uncrewed gas balloon ===
In 1893, the French scientist Jules Richard constructed sounding balloons. These uncrewed balloons, carrying light but very precise instruments, approached an altitude of 15.24 km (50,000 ft).
A Winzen balloon launched from Chico, California, in 1972 set the uncrewed altitude record of 51.8 km (170,000 ft). Its volume was 1,350,000 m3 (47,800,000 cu ft).
On September 20, 2013, JAXA launched an ultrathin film balloon called BS13-08 made of 2.8 μm thick polyethylene film with a volume of 80,000 m3 (2,800,000 cu ft), which was 60 m (200 ft) in diameter. The balloon rose at a speed of 250 metres per minute (820 ft/min) and reached an altitude of 53.7 km (176,000 ft), surpassing the previous world record set in 2002.
This is the greatest height a flying object has reached without using rockets or a cannon launch.
== Gliders ==
On February 17, 1986, Robert Harris set the altitude record for a soaring aircraft at 14.938 km (49,009 ft) using lee waves over California City, United States, flying a Grob 102 Standard Astir III.
This was surpassed on August 30, 2006, when Steve Fossett (pilot) and Einar Enevoldson (co-pilot) reached 15.46 km (50,720 ft) in their high-performance research glider Perlan 1, a modified Glaser-Dirks DG-500. The record was achieved over El Calafate (Patagonia, Argentina) as part of the Perlan Project.
The record was raised to 15.902 km (52,172 ft) on September 3, 2017, by Jim Payne (pilot) and Morgan Sandercock (co-pilot) in the Perlan 2, a purpose-built high-altitude research glider, again over El Calafate as part of the Perlan Project.
On September 2, 2018, within the Airbus Perlan Mission II, again from El Calafate, the Perlan II piloted by Jim Payne and Tim Gardner reached 23.203 km (76,124 ft), surpassing the 22.475 km (73,737 ft) attained by Jerry Hoyt on April 17, 1989, in a Lockheed U-2: the highest subsonic flight.
== Fixed-wing aircraft ==
=== Piston-driven propeller aeroplane ===
The highest altitude obtained by a piston-driven propeller UAV (without payload) is 20.430 kilometres (67,028 ft). It was obtained during 1988–1989 by the Boeing Condor UAV.
The highest altitude obtained in a piston-driven propeller biplane (without a payload) was 17.083 km (56,050 ft) on October 22, 1938, by Mario Pezzi at Montecelio, Italy in a Caproni Ca.161 driven by a Piaggio XI R.C. engine.
The highest altitude obtained in a piston-driven propeller monoplane (without a payload) was 18.552 km (60,870 ft) on August 4, 1995, by the Grob Strato 2C driven by two Teledyne Continental TSIO-550 engines.
=== Jet aircraft ===
The highest current world absolute general aviation altitude record for air breathing jet-propelled aircraft is 37.650 kilometres (123,520 ft) set by Aleksandr Vasilyevich Fedotov in a Mikoyan-Gurevich E-266M (MiG-25M) on August 31, 1977.
=== Rocket plane ===
The record for highest altitude obtained by a crewed rocket-powered aircraft is held by the US Space Shuttle (STS), which regularly reached altitudes of more than 500 kilometres (310 mi) on servicing missions to the Hubble Space Telescope.
The highest altitude obtained by a crewed aeroplane (launched from another aircraft) is 112.010 km (367,490 ft) by Brian Binnie in the Scaled Composites SpaceShipOne (powered by a Scaled Composite SD-010 engine with 80,000 newtons (18,000 lbf) of thrust) on October 4, 2004, at Mojave, California. The SpaceShipOne was launched at over 13.3 km (44,000 ft).
The previous (unofficial) record was 107.960 km (354,200 ft) set by Joseph A. Walker in a North American X-15 in mission X-15 Flight 91 on August 22, 1963. Walker had reached 106 km – crossing the Kármán line the first time – with X-15 Flight 90 the previous month.
During the X-15 program, 8 pilots flew a combined 13 flights which met the Air Force spaceflight criterion by exceeding the altitude of 80 kilometres (50 mi), qualifying these pilots as being astronauts; of those 13 flights, two (flown by the same civilian pilot) met the FAI definition of outer space: 100 kilometres (62 mi).
==== Mixed power ====
The official record for a mixed power aircraft was achieved on May 2, 1958, by Roger Carpentier when he reached 24.217 km (79,450 ft) over Istres, France in a Sud-Ouest Trident II mixed power (turbojet & rocket engine) aircraft.
The unofficial altitude record for mixed-power-aircraft with self-powered takeoff was 36.8 km (120,800 ft) on December 6, 1963, by Major Robert W. Smith in a Lockheed NF-104A mixed power (turbojet and rocket engine) aircraft.
=== Electrically powered aircraft ===
The highest altitude obtained by an electrically powered aircraft is 29.524 kilometres (96,863 ft) on August 14, 2001, by the NASA Helios, and is the highest altitude in horizontal flight by a winged aircraft. This is also the altitude record for propeller driven aircraft, FAI class U (Experimental / New Technologies), and FAI class U-1.d (Remotely controlled UAV, weight 500 to 2,500 kg (1,100 to 5,500 lb)).
== Rotorcraft ==
On June 21, 1972, Jean Boulet of France piloted an Aérospatiale SA 315B Lama helicopter to an absolute altitude record of 12.440 kilometres (40,814 ft). At that extreme altitude, the engine flamed out and Boulet had to land the helicopter by breaking another record: the longest successful autorotation in history. The helicopter was stripped of all unnecessary equipment prior to the flight to minimize weight, and the pilot breathed supplemental oxygen.
== Paper airplanes ==
The highest altitude obtained by a paper plane was previously held by the Paper Aircraft Released Into Space (PARIS) project, which was released at an altitude of 27.307 kilometres (89,590 ft), from a helium balloon that was launched approximately 160 kilometres (99 mi) west of Madrid, Spain on October 28, 2010, and recorded by The Register's "special projects bureau". The project achieved a Guinness world record recognition.
This record was broken on 24 June 2015 in Cambridgeshire, UK by the Space Club of Kesgrave High School, Suffolk, as part of their Stratos III project. The paper plane was launched from a balloon at 35.043 kilometres (114,970 ft).
== Cannon rounds ==
The current world-record for highest cannon projectile flight is held by Project HARP’s 410 mm (16 in) space gun prototype, which fired a 180 kg (400 lb) Martlet 2 projectile to a record height of 180 kilometres (590,000 ft; 110 mi) in Yuma, Arizona, on November 18, 1966. The projectile’s trajectory sent it beyond the Kármán line at 100 km (62 mi), making it the first cannon-fired projectile to do so.
The Paris Gun (German: Paris-Geschütz) was a German long-range siege gun used to bombard Paris during World War I. It was in service from March–August 1918. Its 106-kilogram (234 lb) shells had a range of about 130 km (80 mi) with a maximum altitude of about 42.3 km (26.3 mi).
== See also ==
Fédération Aéronautique Internationale
High-altitude balloon
High-altitude military parachuting
High-altitude platform station
== Notes ==
== References ==
== Bibliography ==
Andrews, C.F. and E.B. Morgan. Vickers Aircraft since 1908. London:Putnam, 1988. ISBN 0-85177-815-1.
Angelucci, Enzo and Peter M. Bowers. The American Fighter. Sparkford, UK:Haynes Publishing Group, 1987. ISBN 0-85429-635-2.
Bridgman, Leonard. Jane's All The World's Aircraft 1951–52. London: Sampson Low, Marston & Company, Ltd, 1951.
"Eighteen Years of World's Records". Flight, February 7, 1924, pp. 73–75.
Lewis, Peter. British Racing and Record-Breaking Aircraft. London:Putnam, 1971. ISBN 0-370-00067-6.
Owers, Colin. "Stop-Gap Fighter:The LUSAC Series". Air Enthusiast, Fifty, May to July 1993. Stamford, UK:Key Publishing. ISSN 0143-5450. pp. 49–51.
Taylor, John W. R. Jane's All The World's Aircraft 1965–66. London:Sampson Low, Marston & Company, 1965.
"The Royal Aero Club of the U.K.: Official Notices to Members". Flight December 16, 1920.
== External links ==
Fédération Aéronautique Internationale Official website –the international, non-profit, non-government organization that tracks aircraft world records
Balloon World Records Fédération Aéronautique Internationale
Excelsior III Details of Kittingers' Jump from a stratospheric balloon in 1960
Iowa State University – High Altitude Balloon Experiments in Technology
Eng, Cassandra (1997). "Altitude of the Highest Manned Balloon Flight". The Physics Factbook. |
Flight distance record | This list of flight distance records contains only those set without any mid-air refueling.
== Non-commercial powered aircraft ==
== Commercial aircraft ==
=== Shortest distance ===
The Loganair Westray to Papa Westray route and its return flight make up the shortest flight distance for any scheduled air carrier service. The route is 2.8 km (1.7 miles), and travel time, including taxi, is usually less than two minutes. The route is served by Loganair airlines' Britten-Norman Islander aircraft and links the island of Westray and the town of Kirkwall, on the Orkney Islands in Scotland. This record was established when service began in 1967, and it remains in effect as of December 2022.
== Other types of aircraft ==
== See also ==
Flight length
Flight endurance record
Cross-America flight air speed record
Aerial circumnavigation
Longest flights
== Notes and references ==
== References ==
Green, William, Gordon Swanborough and Pierre Layvastre. "The Saga of the Ubiquitous Breguet". Air Enthusiast, Seven, July–September 1978. pp. 161–181.
Mikesh, Robert C. and Abe, Shorzoe. Japanese Aircraft 1910-1941. London:Putnam, 1990. ISBN 0-85177-840-2.
Taylor, John W. R. Jane's All The World's Aircraft 1966-67. London:Sampson Low, Marston & Company, 1966. |
Flight dynamics (fixed-wing aircraft) | Flight dynamics is the science of air vehicle orientation and control in three dimensions. The three critical flight dynamics parameters are the angles of rotation in three dimensions about the vehicle's center of gravity (cg), known as pitch, roll and yaw. These are collectively known as aircraft attitude, often principally relative to the atmospheric frame in normal flight, but also relative to terrain during takeoff or landing, or when operating at low elevation. The concept of attitude is not specific to fixed-wing aircraft, but also extends to rotary aircraft such as helicopters, and dirigibles, where the flight dynamics involved in establishing and controlling attitude are entirely different.
Control systems adjust the orientation of a vehicle about its cg. A control system includes control surfaces which, when deflected, generate a moment (or couple from ailerons) about the cg which rotates the aircraft in pitch, roll, and yaw. For example, a pitching moment comes from a force applied at a distance forward or aft of the cg, causing the aircraft to pitch up or down.
A fixed-wing aircraft increases or decreases the lift generated by the wings when it pitches nose up or down by increasing or decreasing the angle of attack (AOA). The roll angle is also known as bank angle on a fixed-wing aircraft, which usually "banks" to change the horizontal direction of flight. An aircraft is streamlined from nose to tail to reduce drag, making it advantageous to keep the sideslip angle near zero, though an aircraft may be deliberately "sideslipped" to increase drag and descent rate during landing, to keep the aircraft heading aligned with the runway heading during cross-wind landings, and during flight with asymmetric power.
== Background ==
Roll, pitch and yaw refer to rotations about the respective axes starting from a defined steady flight equilibrium state. The equilibrium roll angle is known as wings level or zero bank angle.
The most common aeronautical convention defines roll as acting about the longitudinal axis, positive with the starboard (right) wing down. Yaw is about the vertical body axis, positive with the nose to starboard. Pitch is about an axis perpendicular to the longitudinal plane of symmetry, positive nose up.
=== Reference frames ===
Three right-handed, Cartesian coordinate systems see frequent use in flight dynamics. The first coordinate system has an origin fixed in the reference frame of the Earth:
Earth frame
Origin - arbitrary, fixed relative to the surface of the Earth
xE axis - positive in the direction of north
yE axis - positive in the direction of east
zE axis - positive towards the center of the Earth
In many flight dynamics applications, the Earth frame is assumed to be inertial with a flat xE,yE-plane, though the Earth frame can also be considered a spherical coordinate system with origin at the center of the Earth.
The other two reference frames are body-fixed, with origins moving along with the aircraft, typically at the center of gravity. For an aircraft that is symmetric from right-to-left, the frames can be defined as:
Body frame
Origin - airplane center of gravity
xb axis - positive out the nose of the aircraft in the plane of symmetry of the aircraft
zb axis - perpendicular to the xb axis, in the plane of symmetry of the aircraft, positive below the aircraft
yb axis - perpendicular to the xb,zb-plane, positive determined by the right-hand rule (generally, positive out the right wing)
Wind frame
Origin - airplane center of gravity
xw axis - positive in the direction of the velocity vector of the aircraft relative to the air
zw axis - perpendicular to the xw axis, in the plane of symmetry of the aircraft, positive below the aircraft
yw axis - perpendicular to the xw,zw-plane, positive determined by the right hand rule (generally, positive to the right)
Asymmetric aircraft have analogous body-fixed frames, but different conventions must be used to choose the precise directions of the x and z axes.
The Earth frame is a convenient frame to express aircraft translational and rotational kinematics. The Earth frame is also useful in that, under certain assumptions, it can be approximated as inertial. Additionally, one force acting on the aircraft, weight, is fixed in the +zE direction.
The body frame is often of interest because the origin and the axes remain fixed relative to the aircraft. This means that the relative orientation of the Earth and body frames describes the aircraft attitude. Also, the direction of the force of thrust is generally fixed in the body frame, though some aircraft can vary this direction, for example by thrust vectoring.
The wind frame is a convenient frame to express the aerodynamic forces and moments acting on an aircraft. In particular, the net aerodynamic force can be divided into components along the wind frame axes, with the drag force in the −xw direction and the lift force in the −zw direction.
In addition to defining the reference frames, the relative orientation of the reference frames can be determined. The relative orientation can be expressed in a variety of forms, including:
Rotation matrices
Direction cosines
Euler angles
Quaternions
The various Euler angles relating the three reference frames are important to flight dynamics. Many Euler angle conventions exist, but all of the rotation sequences presented below use the z-y'-x" convention. This convention corresponds to a type of Tait-Bryan angles, which are commonly referred to as Euler angles. This convention is described in detail below for the roll, pitch, and yaw Euler angles that describe the body frame orientation relative to the Earth frame. The other sets of Euler angles are described below by analogy.
=== Transformations (Euler angles) ===
==== From Earth frame to body frame ====
First, rotate the Earth frame axes xE and yE around the zE axis by the yaw angle ψ. This results in an intermediate reference frame with axes denoted x',y',z', where z'=zE.
Second, rotate the x' and z' axes around the y' axis by the pitch angle θ. This results in another intermediate reference frame with axes denoted x",y",z", where y"=y'.
Finally, rotate the y" and z" axes around the x" axis by the roll angle φ. The reference frame that results after the three rotations is the body frame.
Based on the rotations and axes conventions above:
Yaw angle ψ: angle between north and the projection of the aircraft longitudinal axis onto the horizontal plane;
Pitch angle θ: angle between the aircraft longitudinal axis and horizontal;
Roll angle φ: rotation around the aircraft longitudinal axis after rotating by yaw and pitch.
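The three-rotation sequence above can be sketched numerically. The following Python function is an illustrative sketch (the function and variable names are not from the text): it builds the Earth-to-body direction cosine matrix for the z-y'-x" convention by composing the three elementary frame rotations in order.

```python
import numpy as np

def earth_to_body(yaw, pitch, roll):
    """Earth-to-body direction cosine matrix for the z-y'-x'' sequence.

    Angles in radians. Maps vector components from Earth axes
    (north, east, down) to body axes. Illustrative sketch only.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, sy, 0.0], [-sy, cy, 0.0], [0.0, 0.0, 1.0]])  # yaw about zE
    Ry = np.array([[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]])  # pitch about y'
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, sr], [0.0, -sr, cr]])  # roll about x''
    return Rx @ Ry @ Rz

# With 90 degrees of yaw (nose east) and zero pitch and roll, the Earth
# east axis coincides with the body x axis:
v_body = earth_to_body(np.pi / 2, 0.0, 0.0) @ np.array([0.0, 1.0, 0.0])
```

Note the composition order: the yaw rotation is applied to the Earth-frame components first, mirroring the sequence of intermediate frames described above.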
==== From Earth frame to wind frame ====
Heading angle σ: angle between north and the horizontal component of the velocity vector, which describes which direction the aircraft is moving relative to cardinal directions.
Flight path angle γ: is the angle between horizontal and the velocity vector, which describes whether the aircraft is climbing or descending.
Bank angle μ: represents a rotation of the lift force around the velocity vector, which may indicate whether the airplane is turning.
When performing the rotations described above to obtain the body frame from the Earth frame, there is this analogy between angles:
σ, ψ (heading vs yaw)
γ, θ (Flight path vs pitch)
μ, φ (Bank vs Roll)
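The heading and flight-path angles can be computed directly from the Earth-frame velocity components. The sketch below assumes the axis conventions given earlier (north, east, down); the function and variable names are illustrative, not from the text.

```python
import math

def heading_flightpath(vn, ve, vd):
    """Heading and flight-path angles from Earth-frame velocity.

    vn, ve, vd: velocity components along xE (north), yE (east)
    and zE (down). Illustrative sketch only.
    """
    sigma = math.atan2(ve, vn)                    # heading, measured from north
    gamma = math.atan2(-vd, math.hypot(vn, ve))   # positive when climbing
    return sigma, gamma

# Flying due east while climbing slightly:
s, g = heading_flightpath(0.0, 50.0, -5.0)
```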
==== From wind frame to body frame ====
sideslip angle β: angle between the velocity vector and the projection of the aircraft longitudinal axis onto the xw,yw-plane, which describes whether there is a lateral component to the aircraft velocity
angle of attack α: angle between the xw,yw-plane and the aircraft longitudinal axis and, among other things, is an important variable in determining the magnitude of the force of lift
When performing the rotations described earlier to obtain the body frame from the Earth frame, there is this analogy between angles:
β, ψ (sideslip vs yaw)
α, θ (attack vs pitch)
(φ = 0) (nothing vs roll)
=== Analogies ===
Between the three reference frames there are hence these analogies:
Yaw / Heading / Sideslip (Z axis, vertical)
Pitch / Flight path / Attack angle (Y axis, wing)
Roll / Bank / nothing (X axis, nose)
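For the wind-to-body pair, the angle of attack and sideslip follow directly from the body-frame components of the airspeed. A minimal sketch using the standard definitions (function and variable names are illustrative):

```python
import math

def alpha_beta(u, v, w):
    """Angle of attack and sideslip from body-frame airspeed components.

    u, v, w: components along xb (out the nose), yb (out the right
    wing) and zb (below the aircraft). Illustrative sketch only.
    """
    alpha = math.atan2(w, u)                           # angle of attack
    beta = math.asin(v / math.sqrt(u*u + v*v + w*w))   # sideslip angle
    return alpha, beta

# Forward flight with a small downward flow component gives positive alpha:
a, b = alpha_beta(100.0, 0.0, 5.0)
```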
== Design cases ==
In analyzing the stability of an aircraft, it is usual to consider perturbations about a nominal steady flight state. So the analysis would be applied, for example, assuming:
Straight and level flight
Turn at constant speed
Approach and landing
Takeoff
The speed, height and trim angle of attack are different for each flight condition; in addition, the aircraft will be configured differently, e.g. at low speed flaps may be deployed and the undercarriage may be down.
Except for asymmetric designs (or symmetric designs at significant sideslip), the longitudinal equations of motion (involving pitch and lift forces) may be treated independently of the lateral motion (involving roll and yaw).
The following considers perturbations about a nominal straight and level flight path.
To keep the analysis (relatively) simple, the control surfaces are assumed fixed throughout the motion; this is stick-fixed stability. Stick-free analysis requires the further complication of taking the motion of the control surfaces into account.
Furthermore, the flight is assumed to take place in still air, and the aircraft is treated as a rigid body.
== Forces of flight ==
Three forces act on an aircraft in flight: weight, thrust, and the aerodynamic force.
=== Aerodynamic force ===
==== Components of the aerodynamic force ====
The expression to calculate the aerodynamic force is:
{\displaystyle \mathbf {F} _{A}=\int _{\Sigma }(-\Delta p\mathbf {n} +\mathbf {f} )\,d\sigma }
where:
{\displaystyle \Delta p\equiv }
difference between static pressure and free-stream pressure
{\displaystyle \mathbf {n} \equiv }
outer normal vector of the element of area
{\displaystyle \mathbf {f} \equiv }
tangential stress vector exerted by the air on the body
{\displaystyle \Sigma \equiv }
suitable reference surface
projected on wind axes we obtain:
{\displaystyle \mathbf {F} _{A}=-(\mathbf {i} _{w}D+\mathbf {j} _{w}Q+\mathbf {k} _{w}L)}
where:
{\displaystyle D\equiv }
Drag
{\displaystyle Q\equiv }
Lateral force
{\displaystyle L\equiv }
Lift
==== Aerodynamic coefficients ====
Dynamic pressure of the free stream {\displaystyle \equiv q={\tfrac {1}{2}}\,\rho \,V^{2}}
Reference surface (wing surface, in the case of planes) {\displaystyle \equiv S}
Pressure coefficient {\displaystyle \equiv C_{p}={\dfrac {p-p_{\infty }}{q}}}
Friction coefficient {\displaystyle \equiv C_{f}={\dfrac {f}{q}}}
Drag coefficient {\displaystyle \equiv C_{d}={\dfrac {D}{qS}}=-{\dfrac {1}{S}}\int _{\Sigma }[(-C_{p})\mathbf {n} \cdot \mathbf {i_{w}} +C_{f}\mathbf {t} \cdot \mathbf {i_{w}} ]\,d\sigma }
Lateral force coefficient {\displaystyle \equiv C_{Q}={\dfrac {Q}{qS}}=-{\dfrac {1}{S}}\int _{\Sigma }[(-C_{p})\mathbf {n} \cdot \mathbf {j_{w}} +C_{f}\mathbf {t} \cdot \mathbf {j_{w}} ]\,d\sigma }
Lift coefficient {\displaystyle \equiv C_{L}={\dfrac {L}{qS}}=-{\dfrac {1}{S}}\int _{\Sigma }[(-C_{p})\mathbf {n} \cdot \mathbf {k_{w}} +C_{f}\mathbf {t} \cdot \mathbf {k_{w}} ]\,d\sigma }
It is necessary to know Cp and Cf at every point on the surface considered.
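As a quick numerical illustration of the definitions above, the dynamic pressure and the force coefficients can be evaluated once the total forces are known. All numerical inputs below are made-up example values, not data from any particular aircraft:

```python
# Illustrative calculation of dynamic pressure and force coefficients.
# All numerical values are hypothetical example inputs.
rho = 1.225   # air density at sea level, kg/m^3
V = 50.0      # true airspeed, m/s
S = 16.0      # wing reference area, m^2
L = 9800.0    # total lift force, N
D = 700.0     # total drag force, N

q = 0.5 * rho * V**2          # dynamic pressure, Pa
C_L = L / (q * S)             # lift coefficient
C_D = D / (q * S)             # drag coefficient

print(round(q, 2), round(C_L, 3), round(C_D, 3))
```

The coefficients are dimensionless, so the same values describe geometrically similar flows at any scale with matching Mach and Reynolds numbers.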
==== Dimensionless parameters and aerodynamic regimes ====
In the absence of thermal effects, there are three notable dimensionless numbers:
Compressibility of the flow:
Mach number {\displaystyle \equiv M={\dfrac {V}{a}}}
Viscosity of the flow:
Reynolds number {\displaystyle \equiv Re={\dfrac {\rho Vl}{\mu }}}
Rarefaction of the flow:
Knudsen number {\displaystyle \equiv Kn={\dfrac {\lambda }{l}}}
where:
{\displaystyle a={\sqrt {kR\theta }}\equiv }
speed of sound
{\displaystyle k\equiv }
specific heat ratio
{\displaystyle R\equiv }
gas constant per unit mass
{\displaystyle \theta \equiv }
absolute temperature
{\displaystyle \lambda ={\dfrac {\mu }{\rho }}{\sqrt {\dfrac {\pi }{2R\theta }}}={\dfrac {M}{Re}}{\sqrt {\dfrac {k\pi }{2}}}\equiv }
mean free path
Depending on λ there are three possible grades of rarefaction, and the corresponding flows are called:
Continuum flow (negligible rarefaction): {\displaystyle {\dfrac {M}{Re}}\ll 1}
Transition flow (moderate rarefaction): {\displaystyle {\dfrac {M}{Re}}\approx 1}
Free molecular flow (high rarefaction): {\displaystyle {\dfrac {M}{Re}}\gg 1}
In flight dynamics, the motion of a body through the air is treated as continuum flow. In the outer region of the space that surrounds the body, viscosity is negligible; however, viscous effects must be considered when analysing the flow in the vicinity of the boundary layer.
Depending on the compressibility of the flow, different flow regimes can be considered:
Incompressible subsonic flow: {\displaystyle 0<M<0.3}
Compressible subsonic flow: {\displaystyle 0.3<M<0.8}
Transonic flow: {\displaystyle 0.8<M<1.2}
Supersonic flow: {\displaystyle 1.2<M<5}
Hypersonic flow: {\displaystyle 5<M}
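The Mach bands above are simple threshold tests, so they translate directly into a small classifier. This is only a sketch of the classification given in the text; the band edges (0.3, 0.8, 1.2, 5) are the conventional values quoted above:

```python
def mach_regime(M: float) -> str:
    """Classify a flow by Mach number using the conventional bands."""
    if M < 0.3:
        return "incompressible subsonic"
    elif M < 0.8:
        return "compressible subsonic"
    elif M < 1.2:
        return "transonic"
    elif M < 5.0:
        return "supersonic"
    return "hypersonic"

# A typical airliner cruises around M = 0.78:
print(mach_regime(0.78))  # compressible subsonic
```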
==== Drag coefficient equation and aerodynamic efficiency ====
If the geometry of the body is fixed and the flight is symmetric (β = 0 and Q = 0), the pressure and friction coefficients are functions of:
{\displaystyle C_{p}=C_{p}(\alpha ,M,Re,P)}
{\displaystyle C_{f}=C_{f}(\alpha ,M,Re,P)}
where:
{\displaystyle \alpha \equiv }
angle of attack
{\displaystyle P\equiv }
considered point of the surface
Under these conditions, the drag and lift coefficients depend exclusively on the angle of attack of the body and the Mach and Reynolds numbers. The aerodynamic efficiency, defined as the ratio of the lift coefficient to the drag coefficient, depends on those parameters as well.
{\displaystyle {\begin{cases}C_{D}=C_{D}(\alpha ,M,Re)\\C_{L}=C_{L}(\alpha ,M,Re)\\E=E(\alpha ,M,Re)={\dfrac {C_{L}}{C_{D}}}\\\end{cases}}}
It is also possible to obtain the dependency of the drag coefficient with respect to the lift coefficient. This relation is known as the drag coefficient equation:
{\displaystyle C_{D}=C_{D}(C_{L},M,Re)\equiv }
drag coefficient equation
The aerodynamic efficiency has a maximum value, Emax, with respect to CL, at the point where the tangent line from the coordinate origin touches the drag coefficient equation plot.
The drag coefficient, CD, can be decomposed in two ways. The first typical decomposition separates pressure and friction effects:
{\displaystyle C_{D}=C_{Df}+C_{Dp}{\begin{cases}C_{Df}={\dfrac {D}{qS}}=-{\dfrac {1}{S}}\int _{\Sigma }C_{f}\mathbf {t} \bullet \mathbf {i_{w}} \,d\sigma \\C_{Dp}={\dfrac {D}{qS}}=-{\dfrac {1}{S}}\int _{\Sigma }(-C_{p})\mathbf {n} \bullet \mathbf {i_{w}} \,d\sigma \end{cases}}}
There is a second typical decomposition taking into account the definition of the drag coefficient equation. This decomposition separates the effect of the lift coefficient in the equation, obtaining two terms CD0 and CDi. CD0 is known as the parasitic drag coefficient and it is the base drag coefficient at zero lift. CDi is known as the induced drag coefficient and it is produced by the body lift.
{\displaystyle C_{D}=C_{D0}+C_{Di}{\begin{cases}C_{D0}=(C_{D})_{C_{L}=0}\\C_{Di}\end{cases}}}
==== Parabolic and generic drag coefficient ====
A good approximation for the induced drag coefficient is to assume a parabolic dependence on the lift coefficient:
{\displaystyle C_{Di}=kC_{L}^{2}\Rightarrow C_{D}=C_{D0}+kC_{L}^{2}}
Aerodynamic efficiency is now calculated as:
{\displaystyle E={\dfrac {C_{L}}{C_{D0}+kC_{L}^{2}}}\Rightarrow {\begin{cases}E_{max}={\dfrac {1}{2{\sqrt {kC_{D0}}}}}\\(C_{L})_{Emax}={\sqrt {\dfrac {C_{D0}}{k}}}\\(C_{Di})_{Emax}=C_{D0}\end{cases}}}
If the configuration of the plane is symmetrical with respect to the XY plane, the minimum drag coefficient equals the parasitic drag of the plane.
{\displaystyle C_{Dmin}=(C_{D})_{CL=0}=C_{D0}}
If the configuration is asymmetrical with respect to the XY plane, however, the minimum drag differs from the parasitic drag. In these cases, a new approximate parabolic drag equation can be traced, with the minimum drag value occurring at a non-zero lift value.
{\displaystyle C_{Dmin}=C_{DM}\neq (C_{D})_{CL=0}}
{\displaystyle C_{D}=C_{DM}+k(C_{L}-C_{LM})^{2}}
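The parabolic drag polar and the closed-form optimum derived above can be checked numerically. The parameter values below (C_D0 and k) are hypothetical figures of the right order of magnitude for a light aircraft, chosen only to illustrate the relations:

```python
import math

# Hypothetical parabolic drag polar parameters (illustrative only).
C_D0 = 0.025   # parasitic (zero-lift) drag coefficient
k = 0.05       # induced drag factor

def drag_coefficient(C_L: float) -> float:
    """Parabolic drag polar: C_D = C_D0 + k * C_L^2."""
    return C_D0 + k * C_L**2

# Closed-form optimum from the equations above.
E_max = 1.0 / (2.0 * math.sqrt(k * C_D0))
C_L_at_Emax = math.sqrt(C_D0 / k)

# At (C_L)_Emax the induced drag equals the parasitic drag,
# so the total drag coefficient there is 2 * C_D0.
assert abs(drag_coefficient(C_L_at_Emax) - 2.0 * C_D0) < 1e-12

print(round(E_max, 2), round(C_L_at_Emax, 3))
```

The final assertion verifies the third case of the efficiency equation, (C_Di)_Emax = C_D0, for these parameters.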
==== Variation of parameters with the Mach number ====
The pressure coefficient varies with Mach number by the relation given below:
{\displaystyle C_{p}={\frac {C_{p0}}{\sqrt {|1-{M_{\infty }}^{2}|}}}}
where
Cp is the compressible pressure coefficient
Cp0 is the incompressible pressure coefficient
M∞ is the freestream Mach number.
This relation is reasonably accurate for 0.3 < M < 0.7; at M = 1 it becomes infinite, a physically impossible situation known as the Prandtl–Glauert singularity.
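The correction above is a one-line formula, sketched here as a function. The guard against M = 1 reflects the Prandtl–Glauert singularity noted in the text; the sample inputs are arbitrary illustrative values:

```python
import math

def prandtl_glauert(cp0: float, mach: float) -> float:
    """Correct an incompressible pressure coefficient for compressibility.

    Reasonably accurate for 0.3 < M < 0.7; singular (infinite) at M = 1.
    """
    if mach == 1.0:
        raise ValueError("Prandtl-Glauert singularity at M = 1")
    return cp0 / math.sqrt(abs(1.0 - mach**2))

# Example: an incompressible Cp of -0.5 at a freestream Mach of 0.6
print(prandtl_glauert(-0.5, 0.6))  # -0.625
```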
==== Aerodynamic force in a specified atmosphere ====
see Aerodynamic force
== Stability ==
Stability is the ability of the aircraft to counteract disturbances to its flight path.
According to David P. Davies, there are six types of aircraft stability: speed stability, stick free static longitudinal stability, static lateral stability, directional stability, oscillatory stability, and spiral stability.
=== Speed stability ===
An aircraft in cruise flight is typically speed stable. If speed increases, drag increases, which will reduce the speed back to equilibrium for its configuration and thrust setting. If speed decreases, drag decreases, and the aircraft will accelerate back to its equilibrium speed where thrust equals drag.
However, in slow flight, due to lift-induced drag, as speed decreases, drag increases (and vice versa). This is known as the "back of the drag curve". The aircraft will be speed unstable, because a decrease in speed will cause a further decrease in speed.
=== Static stability and control ===
==== Longitudinal static stability ====
Longitudinal stability refers to the stability of an aircraft in pitch. For a stable aircraft, if the aircraft pitches up, the wings and tail create a pitch-down moment which tends to restore the aircraft to its original attitude. For an unstable aircraft, a disturbance in pitch will lead to an increasing pitching moment. Longitudinal static stability is the ability of an aircraft to recover from an initial disturbance. Longitudinal dynamic stability refers to the damping of these stabilizing moments, which prevents persistent or increasing oscillations in pitch.
==== Directional stability ====
Directional or weathercock stability is concerned with the static stability of the airplane about the z axis. Just as in the case of longitudinal stability it is desirable that the aircraft should tend to return to an equilibrium condition when subjected to some form of yawing disturbance. For this the slope of the yawing moment curve must be positive.
An airplane possessing this mode of stability will always point towards the relative wind, hence the name weathercock stability.
=== Dynamic stability and control ===
==== Longitudinal modes ====
It is common practice to derive a fourth order characteristic equation to describe the longitudinal motion, and then factorize it approximately into a high frequency mode and a low frequency mode. The approach adopted here is using qualitative knowledge of aircraft behavior to simplify the equations from the outset, reaching the result by a more accessible route.
The two longitudinal motions (modes) are called the short period pitch oscillation (SPPO), and the phugoid.
===== Short-period pitch oscillation =====
A short input (in control systems terminology an impulse) in pitch (generally via the elevator in a standard configuration fixed-wing aircraft) will generally lead to overshoots about the trimmed condition. The transition is characterized by a damped simple harmonic motion about the new trim. There is very little change in the trajectory over the time it takes for the oscillation to damp out.
Generally this oscillation is high frequency (hence short period) and is damped over a period of a few seconds. A real-world example would involve a pilot selecting a new climb attitude, for example 5° nose up from the original attitude. A short, sharp pull back on the control column may be used, and will generally lead to oscillations about the new trim condition. If the oscillations are poorly damped the aircraft will take a long period of time to settle at the new condition, potentially leading to Pilot-induced oscillation. If the short period mode is unstable it will generally be impossible for the pilot to safely control the aircraft for any period of time.
This damped harmonic motion is called the short period pitch oscillation; it arises from the tendency of a stable aircraft to point in the general direction of flight. It is very similar in nature to the weathercock mode of missile or rocket configurations. The motion involves mainly the pitch attitude θ (theta) and incidence α (alpha). The direction of the velocity vector, relative to inertial axes, is θ − α. The velocity vector is:
{\displaystyle u_{f}=U\cos(\theta -\alpha )}
{\displaystyle w_{f}=U\sin(\theta -\alpha )}
where uf and wf are the inertial axes components of velocity. According to Newton's second law, the accelerations are proportional to the forces, so the forces in inertial axes are:
{\displaystyle X_{f}=m{\frac {du_{f}}{dt}}=m{\frac {dU}{dt}}\cos(\theta -\alpha )-mU{\frac {d(\theta -\alpha )}{dt}}\sin(\theta -\alpha )}
{\displaystyle Z_{f}=m{\frac {dw_{f}}{dt}}=m{\frac {dU}{dt}}\sin(\theta -\alpha )+mU{\frac {d(\theta -\alpha )}{dt}}\cos(\theta -\alpha )}
where m is the mass.
By the nature of the motion, the speed variation m dU/dt is negligible over the period of the oscillation, so:
{\displaystyle X_{f}=-mU{\frac {d(\theta -\alpha )}{dt}}\sin(\theta -\alpha )}
{\displaystyle Z_{f}=mU{\frac {d(\theta -\alpha )}{dt}}\cos(\theta -\alpha )}
But the forces are generated by the pressure distribution on the body, and are referred to the velocity vector. The velocity (wind) axes set is not an inertial frame, so we must resolve the fixed axes forces into wind axes. Also, we are only concerned with the force along the z-axis:
{\displaystyle Z=-Z_{f}\cos(\theta -\alpha )+X_{f}\sin(\theta -\alpha )}
Or:
{\displaystyle Z=-mU{\frac {d(\theta -\alpha )}{dt}}}
In words, the wind axes force is equal to the mass times the centripetal acceleration.
The moment equation is the time derivative of the angular momentum:
{\displaystyle M=B{\frac {d^{2}\theta }{dt^{2}}}}
where M is the pitching moment, and B is the moment of inertia about the pitch axis.
Let:
{\displaystyle {\frac {d\theta }{dt}}=q}, the pitch rate.
The equations of motion, with all forces and moments referred to wind axes are, therefore:
{\displaystyle {\frac {d\alpha }{dt}}=q+{\frac {Z}{mU}}}
{\displaystyle {\frac {dq}{dt}}={\frac {M}{B}}}
We are only concerned with perturbations in forces and moments, due to perturbations in the states α and q, and their time derivatives. These are characterized by stability derivatives determined from the flight condition. The possible stability derivatives are:
{\displaystyle Z_{\alpha }}
Lift due to incidence; this is negative because the z-axis is downwards whilst positive incidence causes an upwards force.
{\displaystyle Z_{q}}
Lift due to pitch rate; arises from the increase in tail incidence, hence is also negative, but small compared with Zα.
{\displaystyle M_{\alpha }}
Pitching moment due to incidence - the static stability term. Static stability requires this to be negative.
{\displaystyle M_{q}}
Pitching moment due to pitch rate - the pitch damping term; this is always negative.
Since the tail is operating in the flowfield of the wing, changes in the wing incidence cause changes in the downwash, but there is a delay before the change in the wing flowfield affects the tail lift; this is represented as a moment proportional to the rate of change of incidence:
{\displaystyle M_{\dot {\alpha }}}
The delayed downwash effect gives the tail more lift and produces a nose down moment, so {\displaystyle M_{\dot {\alpha }}} is expected to be negative.
The equations of motion, with small perturbation forces and moments become:
{\displaystyle {\frac {d\alpha }{dt}}=\left(1+{\frac {Z_{q}}{mU}}\right)q+{\frac {Z_{\alpha }}{mU}}\alpha }
{\displaystyle {\frac {dq}{dt}}={\frac {M_{q}}{B}}q+{\frac {M_{\alpha }}{B}}\alpha +{\frac {M_{\dot {\alpha }}}{B}}{\dot {\alpha }}}
These may be manipulated to yield a second order linear differential equation in α:
{\displaystyle {\frac {d^{2}\alpha }{dt^{2}}}-\left({\frac {Z_{\alpha }}{mU}}+{\frac {M_{q}}{B}}+(1+{\frac {Z_{q}}{mU}}){\frac {M_{\dot {\alpha }}}{B}}\right){\frac {d\alpha }{dt}}+\left({\frac {Z_{\alpha }}{mU}}{\frac {M_{q}}{B}}-{\frac {M_{\alpha }}{B}}(1+{\frac {Z_{q}}{mU}})\right)\alpha =0}
This represents a damped simple harmonic motion.
We should expect {\displaystyle {\frac {Z_{q}}{mU}}} to be small compared with unity, so the coefficient of α (the 'stiffness' term) will be positive, provided {\displaystyle M_{\alpha }<{\frac {Z_{\alpha }}{mU}}M_{q}}. This expression is dominated by Mα, which defines the longitudinal static stability of the aircraft; it must be negative for stability. The damping term is reduced by the downwash effect, and it is difficult to design an aircraft with both rapid natural response and heavy damping. Usually, the response is underdamped but stable.
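The equation above has the form of a standard damped oscillator, d²α/dt² + c₁ dα/dt + c₀ α = 0, so the natural frequency and damping ratio follow directly from the grouped derivative terms. The numbers below are made-up illustrative values with the signs stated in the text, not data for any real aircraft:

```python
import math

# Illustrative (made-up) values of the grouped terms in the SPPO equation.
Za_over_mU = -1.0        # Z_alpha / (m U), 1/s  (negative)
Mq_over_B = -2.0         # M_q / B, 1/s          (always negative)
Malpha_over_B = -8.0     # M_alpha / B, 1/s^2    (negative: statically stable)
Malphadot_over_B = -0.5  # M_alphadot / B, 1/s   (negative: downwash lag)
Zq_over_mU = 0.05        # small compared with unity

# Coefficients of d^2a/dt^2 + c1*da/dt + c0*a = 0, read off the equation above.
c1 = -(Za_over_mU + Mq_over_B + (1 + Zq_over_mU) * Malphadot_over_B)
c0 = Za_over_mU * Mq_over_B - Malpha_over_B * (1 + Zq_over_mU)

omega_n = math.sqrt(c0)    # undamped natural frequency, rad/s
zeta = c1 / (2 * omega_n)  # damping ratio; 0 < zeta < 1 means underdamped

print(round(omega_n, 2), round(zeta, 2))
```

With these values the mode is underdamped but stable, matching the qualitative conclusion in the text.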
===== Phugoid =====
If the stick is held fixed, the aircraft will not maintain straight and level flight (except in the unlikely case that it happens to be perfectly trimmed for level flight at its current altitude and thrust setting), but will start to dive, level out and climb again. It will repeat this cycle until the pilot intervenes. This long period oscillation in speed and height is called the phugoid mode. This is analyzed by assuming that the SPPO performs its proper function and maintains the angle of attack near its nominal value. The two states which are mainly affected are the flight path angle γ (gamma) and the speed. The small perturbation equations of motion are:
{\displaystyle mU{\frac {d\gamma }{dt}}=-Z}
which means the centripetal force is equal to the perturbation in lift force.
For the speed, resolving along the trajectory:
{\displaystyle m{\frac {du}{dt}}=X-mg\gamma }
where g is the acceleration due to gravity at the Earth's surface. The acceleration along the trajectory is equal to the net x-wise force minus the component of weight. We should not expect significant aerodynamic derivatives to depend on the flight path angle, so only Xu and Zu need be considered. Xu is the drag increment with increased speed; it is negative. Likewise, Zu is the lift increment due to the speed increment; it is also negative because lift acts in the opposite sense to the z-axis.
The equations of motion become:
{\displaystyle mU{\frac {d\gamma }{dt}}=-Z_{u}u}
{\displaystyle m{\frac {du}{dt}}=X_{u}u-mg\gamma }
These may be expressed as a second order equation in flight path angle or speed perturbation:
{\displaystyle {\frac {d^{2}u}{dt^{2}}}-{\frac {X_{u}}{m}}{\frac {du}{dt}}-{\frac {Z_{u}g}{mU}}u=0}
Now lift is very nearly equal to weight:
{\displaystyle Z={\frac {1}{2}}\rho U^{2}c_{L}S_{w}=W}
where ρ is the air density, Sw is the wing area, W the weight and cL is the lift coefficient (assumed constant because the incidence is constant); we have, approximately:
{\displaystyle Z_{u}={\frac {2W}{U}}={\frac {2mg}{U}}}
The period of the phugoid, T, is obtained from the coefficient of u:
{\displaystyle {\frac {2\pi }{T}}={\sqrt {\frac {2g^{2}}{U^{2}}}}}
Or:
{\displaystyle T={\frac {2\pi U}{{\sqrt {2}}g}}}
Since the lift is very much greater than the drag, the phugoid is at best lightly damped. A propeller with fixed speed would help. Heavy damping of the pitch rotation or a large rotational inertia increase the coupling between short period and phugoid modes, so that these will modify the phugoid.
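A striking feature of the period formula above is that it depends only on speed, not on the aircraft's size or mass. It can be sketched as a one-line function; the 100 m/s example speed is an arbitrary illustrative value:

```python
import math

def phugoid_period(U: float, g: float = 9.81) -> float:
    """Phugoid period T = 2*pi*U / (sqrt(2)*g), from the relation above.

    U is the flight speed in m/s; the result is in seconds and is
    independent of aircraft mass and geometry.
    """
    return 2.0 * math.pi * U / (math.sqrt(2.0) * g)

# An aircraft cruising at 100 m/s (about 195 knots):
print(round(phugoid_period(100.0), 1))  # roughly 45 seconds
```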
==== Lateral modes ====
With a symmetrical rocket or missile, the directional stability in yaw is the same as the pitch stability; it resembles the short period pitch oscillation, with yaw plane equivalents to the pitch plane stability derivatives. For this reason, pitch and yaw directional stability are collectively known as the "weathercock" stability of the missile.
Aircraft lack the symmetry between pitch and yaw, so that directional stability in yaw is derived from a different set of stability derivatives. The yaw plane equivalent to the short period pitch oscillation, which describes yaw plane directional stability is called Dutch roll. Unlike pitch plane motions, the lateral modes involve both roll and yaw motion.
===== Dutch roll =====
It is customary to derive the equations of motion by formal manipulation in what, to the engineer, amounts to a piece of mathematical sleight of hand. The current approach follows the pitch plane analysis in formulating the equations in terms of concepts which are reasonably familiar.
Applying an impulse via the rudder pedals should induce Dutch roll, which is the oscillation in roll and yaw, with the roll motion lagging yaw by a quarter cycle, so that the wing tips follow elliptical paths with respect to the aircraft.
The yaw plane translational equation, as in the pitch plane, equates the centripetal acceleration to the side force.
{\displaystyle {\frac {d\beta }{dt}}={\frac {Y}{mU}}-r}
where β (beta) is the sideslip angle, Y the side force and r the yaw rate.
The moment equations are a bit trickier. The trim condition is with the aircraft at an angle of attack with respect to the airflow. The body x-axis does not align with the velocity vector, which is the reference direction for wind axes. In other words, wind axes are not principal axes (the mass is not distributed symmetrically about the yaw and roll axes). Consider the motion of an element of mass in position -z, x in the direction of the y-axis, i.e. into the plane of the paper.
If the roll rate is p, the velocity of the particle is:
{\displaystyle v=-pz+xr}
The force on this particle is made up of two terms: the first is proportional to the rate of change of v; the second is due to the change in direction of this component of velocity as the body moves. The latter terms give rise to cross products of small quantities (pq, pr, qr), which are later discarded. In this analysis, they are discarded from the outset for the sake of clarity. In effect, we assume that the direction of the velocity of the particle due to the simultaneous roll and yaw rates does not change significantly throughout the motion. With this simplifying assumption, the acceleration of the particle becomes:
{\displaystyle {\frac {dv}{dt}}=-{\frac {dp}{dt}}z+{\frac {dr}{dt}}x}
The yawing moment is given by:
{\displaystyle \delta mx{\frac {dv}{dt}}=-{\frac {dp}{dt}}xz\delta m+{\frac {dr}{dt}}x^{2}\delta m}
There is an additional yawing moment due to the offset of the particle in the y direction:
{\displaystyle {\frac {dr}{dt}}y^{2}\delta m}
The yawing moment is found by summing over all particles of the body:
{\displaystyle N=-{\frac {dp}{dt}}\int xzdm+{\frac {dr}{dt}}\int x^{2}+y^{2}dm=-E{\frac {dp}{dt}}+C{\frac {dr}{dt}}}
where N is the yawing moment, E is a product of inertia, and C is the moment of inertia about the yaw axis.
A similar reasoning yields the roll equation:
{\displaystyle L=A{\frac {dp}{dt}}-E{\frac {dr}{dt}}}
where L is the rolling moment and A the roll moment of inertia.
===== Lateral and longitudinal stability derivatives =====
The states are β (sideslip), r (yaw rate) and p (roll rate), with moments N (yaw) and L (roll), and force Y (sideways). There are nine stability derivatives relevant to this motion; the following explains how they originate. However, a better intuitive understanding is gained by simply playing with a model airplane and considering how the forces on each component are affected by changes in sideslip and angular velocity:
{\displaystyle Y_{\beta }}
Side force due to sideslip (in absence of yaw).
Sideslip generates a sideforce from the fin and the fuselage. In addition, if the wing has dihedral, sideslip at a positive roll angle increases incidence on the starboard wing and reduces it on the port side, resulting in a net force component directly opposite to the sideslip direction. Sweep back of the wings has the same effect on incidence, but since the wings are not inclined in the vertical plane, backsweep alone does not affect Yβ. However, anhedral may be used with high backsweep angles in high performance aircraft to offset the wing incidence effects of sideslip. Oddly enough this does not reverse the sign of the wing configuration's contribution to Yβ (compared to the dihedral case).
{\displaystyle Y_{p}}
Side force due to roll rate.
Roll rate causes incidence at the fin, which generates a corresponding side force. Also, positive roll (starboard wing down) increases the lift on the starboard wing and reduces it on the port. If the wing has dihedral, this will result in a side force momentarily opposing the resultant sideslip tendency. Anhedral wing and/or stabilizer configurations can cause the sign of the side force to invert if the fin effect is swamped.
{\displaystyle Y_{r}}
Side force due to yaw rate.
Yawing generates side forces due to incidence at the rudder, fin and fuselage.
{\displaystyle N_{\beta }}
Yawing moment due to sideslip forces.
Sideslip in the absence of rudder input causes incidence on the fuselage and empennage, thus creating a yawing moment counteracted only by the directional stiffness which would tend to point the aircraft's nose back into the wind in horizontal flight conditions. Under sideslip conditions at a given roll angle, Nβ will tend to point the nose into the sideslip direction even without rudder input, causing a downward spiraling flight.
{\displaystyle N_{p}}
Yawing moment due to roll rate.
Roll rate generates fin lift causing a yawing moment and also differentially alters the lift on the wings, thus affecting the induced drag contribution of each wing, causing a (small) yawing moment contribution. Positive roll generally causes positive Np values unless the empennage is anhedral or the fin is below the roll axis. Lateral force components resulting from dihedral or anhedral wing lift differences have little effect on Np because the wing axis is normally closely aligned with the center of gravity.
{\displaystyle N_{r}}
Yawing moment due to yaw rate.
Yaw rate input at any roll angle generates rudder, fin and fuselage force vectors which dominate the resultant yawing moment. Yawing also increases the speed of the outboard wing whilst slowing down the inboard wing, with corresponding changes in drag causing a (small) opposing yaw moment.
Nr opposes the inherent directional stiffness, which tends to point the aircraft's nose back into the wind, and always matches the sign of the yaw rate input.
{\displaystyle L_{\beta }}
Rolling moment due to sideslip.
A positive sideslip angle generates empennage incidence which can cause a positive or negative roll moment depending on its configuration. For any non-zero sideslip angle, dihedral wings cause a rolling moment which tends to return the aircraft to the horizontal, as do back-swept wings. With highly swept wings the resultant rolling moment may be excessive for all stability requirements, and anhedral could be used to offset the effect of the wing sweep induced rolling moment.
{\displaystyle L_{r}}
Rolling moment due to yaw rate.
Yaw increases the speed of the outboard wing whilst reducing speed of the inboard one, causing a rolling moment to the inboard side. The contribution of the fin normally supports this inward rolling effect unless offset by anhedral stabilizer above the roll axis (or dihedral below the roll axis).
{\displaystyle L_{p}}
Rolling moment due to roll rate.
Roll creates counter rotational forces on both starboard and port wings whilst also generating such forces at the empennage. These opposing rolling moment effects have to be overcome by the aileron input in order to sustain the roll rate. If the roll is stopped at a non-zero roll angle, the upward Lβ rolling moment induced by the ensuing sideslip should return the aircraft to the horizontal, unless exceeded in turn by the downward Lr rolling moment resulting from sideslip induced yaw rate. Longitudinal stability could be ensured or improved by minimizing the latter effect.
===== Equations of motion =====
Since Dutch roll is a handling mode, analogous to the short period pitch oscillation, any effect it might have on the trajectory may be ignored. The body rate r is made up of the rate of change of sideslip angle and the rate of turn. Taking the latter as zero, assuming no effect on the trajectory, for the limited purpose of studying the Dutch roll:
{\displaystyle {\frac {d\beta }{dt}}=-r}
The yaw and roll equations, with the stability derivatives become:
{\displaystyle C{\frac {dr}{dt}}-E{\frac {dp}{dt}}=N_{\beta }\beta -N_{r}{\frac {d\beta }{dt}}+N_{p}p}
(yaw)
{\displaystyle A{\frac {dp}{dt}}-E{\frac {dr}{dt}}=L_{\beta }\beta -L_{r}{\frac {d\beta }{dt}}+L_{p}p}
(roll)
The inertial moment due to the roll acceleration is considered small compared with the aerodynamic terms, so the equations become:
{\displaystyle -C{\frac {d^{2}\beta }{dt^{2}}}=N_{\beta }\beta -N_{r}{\frac {d\beta }{dt}}+N_{p}p}
{\displaystyle E{\frac {d^{2}\beta }{dt^{2}}}=L_{\beta }\beta -L_{r}{\frac {d\beta }{dt}}+L_{p}p}
This becomes a second order equation governing either roll rate or sideslip:
{\displaystyle \left({\frac {N_{p}}{C}}{\frac {E}{A}}-{\frac {L_{p}}{A}}\right){\frac {d^{2}\beta }{dt^{2}}}+\left({\frac {L_{p}}{A}}{\frac {N_{r}}{C}}-{\frac {N_{p}}{C}}{\frac {L_{r}}{A}}\right){\frac {d\beta }{dt}}-\left({\frac {L_{p}}{A}}{\frac {N_{\beta }}{C}}-{\frac {L_{\beta }}{A}}{\frac {N_{p}}{C}}\right)\beta =0}
The equation for roll rate is identical. But the roll angle, {\displaystyle \phi } (phi), is given by:
{\displaystyle {\frac {d\phi }{dt}}=p}
If p is a damped simple harmonic motion, so is {\displaystyle \phi }, but the roll must be in quadrature with the roll rate, and hence also with the sideslip. The motion consists of oscillations in roll and yaw, with the roll motion lagging 90 degrees behind the yaw. The wing tips trace out elliptical paths.
Stability requires the "stiffness" and "damping" terms to be positive. These are:
{\displaystyle {\frac {{\frac {L_{p}}{A}}{\frac {N_{r}}{C}}-{\frac {N_{p}}{C}}{\frac {L_{r}}{A}}}{{\frac {N_{p}}{C}}{\frac {E}{A}}-{\frac {L_{p}}{A}}}}}
(damping)
{\displaystyle {\frac {{\frac {L_{\beta }}{A}}{\frac {N_{p}}{C}}-{\frac {L_{p}}{A}}{\frac {N_{\beta }}{C}}}{{\frac {N_{p}}{C}}{\frac {E}{A}}-{\frac {L_{p}}{A}}}}}
(stiffness)
The denominator is dominated by {\displaystyle L_{p}}, the roll damping derivative, which is always negative, so the denominators of these two expressions will be positive.
Considering the "stiffness" term:
{\displaystyle -L_{p}N_{\beta }} will be positive because {\displaystyle L_{p}} is always negative and {\displaystyle N_{\beta }} is positive by design.
{\displaystyle L_{\beta }} is usually negative, whilst {\displaystyle N_{p}} is positive. Excessive dihedral can destabilize the Dutch roll, so configurations with highly swept wings require anhedral to offset the wing sweep contribution to {\displaystyle L_{\beta }}.
The damping term is dominated by the product of the roll damping and the yaw damping derivatives; these are both negative, so their product is positive. The Dutch roll should therefore be damped.
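This sign argument can be checked numerically. The sketch below evaluates the damping and stiffness terms for a set of purely illustrative stability-derivative and inertia values, chosen only to have the typical signs discussed above; they do not describe any real aircraft.

```python
import math

# Illustrative values with the typical signs (assumptions, not real data):
A, C, E = 8000.0, 12000.0, 0.0                # roll and yaw moments of inertia, product of inertia
L_p, L_r, L_beta = -16000.0, 270.0, -1700.0   # rolling-moment derivatives
N_p, N_r, N_beta = 200.0, -1900.0, 5000.0     # yawing-moment derivatives

# Denominator shared by the damping and stiffness expressions, dominated by -L_p/A > 0
denom = (N_p / C) * (E / A) - (L_p / A)
damping = ((L_p / A) * (N_r / C) - (N_p / C) * (L_r / A)) / denom
stiffness = ((L_beta / A) * (N_p / C) - (L_p / A) * (N_beta / C)) / denom

# Both positive => damped oscillation of the form beta'' + damping*beta' + stiffness*beta = 0
omega_n = math.sqrt(stiffness)      # undamped natural frequency, rad/s
zeta = damping / (2.0 * omega_n)    # damping ratio

print(damping > 0, stiffness > 0)   # both True for these values
print(round(omega_n, 2), round(zeta, 3))
```

With these assumed values the mode oscillates at about 0.64 rad/s (a period of roughly ten seconds) with a light damping ratio near 0.12, which is the character of a typical lightly damped Dutch roll.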
The motion is accompanied by slight lateral motion of the center of gravity, and a more "exact" analysis will introduce terms in {\displaystyle Y_{\beta }} etc. In view of the accuracy with which stability derivatives can be calculated, this is an unnecessary pedantry, which serves to obscure the relationship between aircraft geometry and handling, which is the fundamental objective of this article.
===== Roll subsidence =====
Jerking the stick sideways and returning it to center causes a net change in roll orientation.
The roll motion is characterized by an absence of natural stability: there are no stability derivatives which generate moments in response to the inertial roll angle. A roll disturbance induces a roll rate which is only canceled by pilot or autopilot intervention. This takes place with insignificant changes in sideslip or yaw rate, so the equation of motion reduces to:
{\displaystyle A{\frac {dp}{dt}}=L_{p}p.}
{\displaystyle L_{p}} is negative, so the roll rate will decay with time. The roll rate reduces to zero, but there is no direct control over the roll angle.
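Because the reduced equation is first order, the response to a roll-rate disturbance is a simple exponential decay with time constant −A/L_p. A minimal sketch, using purely illustrative values for A and L_p (assumptions, not data for any real aircraft):

```python
import math

A = 8000.0      # roll moment of inertia (illustrative assumption)
L_p = -16000.0  # roll damping derivative, always negative (illustrative)

tau = -A / L_p                              # time constant of the decay, seconds
p0 = 0.5                                    # initial roll-rate disturbance, rad/s
p = lambda t: p0 * math.exp(L_p * t / A)    # solution of A*dp/dt = L_p*p

print(tau)                      # 0.5 for these values
print(round(p(tau) / p0, 3))    # decays to 1/e of the disturbance after one time constant
```

The roll rate halves roughly every 0.35 s here, while the accumulated roll angle simply stops changing; nothing drives it back to zero, which is the "no direct control over the roll angle" point above.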
===== Spiral mode =====
If the stick is simply held still when starting with the wings near level, an aircraft will usually tend to gradually veer off to one side of the straight flightpath. This is the (slightly unstable) spiral mode.
====== Spiral mode trajectory ======
In studying the trajectory, it is the direction of the velocity vector, rather than that of the body, which is of interest. The direction of the velocity vector when projected on to the horizontal will be called the track, denoted {\displaystyle \mu } (mu). The body orientation is called the heading, denoted {\displaystyle \psi } (psi). The force equation of motion includes a component of weight:
{\displaystyle {\frac {d\mu }{dt}}={\frac {Y}{mU}}+{\frac {g}{U}}\phi }
where g is the gravitational acceleration, and U is the speed.
Including the stability derivatives:
{\displaystyle {\frac {d\mu }{dt}}={\frac {Y_{\beta }}{mU}}\beta +{\frac {Y_{r}}{mU}}r+{\frac {Y_{p}}{mU}}p+{\frac {g}{U}}\phi }
Roll rates and yaw rates are expected to be small, so the contributions of {\displaystyle Y_{r}} and {\displaystyle Y_{p}} will be ignored.
The sideslip and roll rate vary gradually, so their time derivatives are ignored. The yaw and roll equations reduce to:
{\displaystyle N_{\beta }\beta +N_{r}{\frac {d\mu }{dt}}+N_{p}p=0}
(yaw)
{\displaystyle L_{\beta }\beta +L_{r}{\frac {d\mu }{dt}}+L_{p}p=0}
(roll)
Solving for {\displaystyle \beta } and p:
{\displaystyle \beta ={\frac {(L_{r}N_{p}-L_{p}N_{r})}{(L_{p}N_{\beta }-N_{p}L_{\beta })}}{\frac {d\mu }{dt}}}
{\displaystyle p={\frac {(L_{\beta }N_{r}-L_{r}N_{\beta })}{(L_{p}N_{\beta }-N_{p}L_{\beta })}}{\frac {d\mu }{dt}}}
Substituting for sideslip and roll rate in the force equation results in a first order equation in roll angle:
{\displaystyle {\frac {d\phi }{dt}}=mg{\frac {(L_{\beta }N_{r}-N_{\beta }L_{r})}{mU(L_{p}N_{\beta }-N_{p}L_{\beta })-Y_{\beta }(L_{r}N_{p}-L_{p}N_{r})}}\phi }
This is an exponential growth or decay, depending on whether the coefficient of {\displaystyle \phi } is positive or negative. The denominator is usually negative, which requires {\displaystyle L_{\beta }N_{r}>N_{\beta }L_{r}} (both products are positive). This is in direct conflict with the Dutch roll stability requirement, and it is difficult to design an aircraft for which both the Dutch roll and spiral mode are inherently stable.
Since the spiral mode has a long time constant, the pilot can intervene to effectively stabilize it, but an aircraft with an unstable Dutch roll would be difficult to fly. It is usual to design the aircraft with a stable Dutch roll mode, but slightly unstable spiral mode.
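The stability criterion can be illustrated numerically. The sketch below evaluates the coefficient of ϕ for one set of purely illustrative derivative values (all of them assumptions, not data for any real aircraft) in which L_β N_r > N_β L_r, giving exponential decay, i.e. a stable spiral mode:

```python
m, U, g = 1000.0, 50.0, 9.81          # mass (kg), speed (m/s), gravity (illustrative)
L_beta, L_r, L_p = -2.0, 0.8, -10.0   # rolling-moment derivatives (assumed)
N_beta, N_r, N_p = 1.5, -1.0, 0.4     # yawing-moment derivatives (assumed)
Y_beta = -50.0                        # side-force derivative (assumed)

# Coefficient of phi in d(phi)/dt = lam * phi, from the first-order equation above
num = m * g * (L_beta * N_r - N_beta * L_r)
den = m * U * (L_p * N_beta - N_p * L_beta) - Y_beta * (L_r * N_p - L_p * N_r)
lam = num / den

# Here L_beta*N_r = 2.0 > N_beta*L_r = 1.2 and the denominator is negative,
# so lam < 0: the roll angle decays slowly and the spiral mode is stable.
print(lam < 0, round(lam, 5))
```

Flipping the sign of the inequality (for instance by weakening the dihedral effect L_β) makes lam positive, giving the slow exponential divergence that pilots of most real aircraft routinely trim out.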
== See also ==
== References ==
=== Notes ===
=== Bibliography ===
NK Sinha and N Ananthkrishnan (2013), Elementary Flight Dynamics with an Introduction to Bifurcation and Continuation Methods, CRC Press, Taylor & Francis.
Babister, A. W. (1980). Aircraft dynamic stability and response (1st ed.). Oxford: Pergamon Press. ISBN 978-0080247687.
== External links ==
MIXR - mixed reality simulation platform
JSBsim, An open source, platform-independent, flight dynamics & control software library in C++ |
Flight endurance record | The flight endurance record is the longest amount of time an aircraft of a particular category spent in flight without landing. It can be a solo event, or multiple people can take turns piloting the aircraft, as long as all pilots remain in the aircraft. The limit initially was the amount of fuel that could be stored for the flight, but aerial refueling extended that parameter. Due to safety concerns, the Fédération Aéronautique Internationale (FAI) no longer recognizes new records for the duration of crewed airplane or glider flights and has never recognized any duration records for helicopters.
== Airplane ==
=== Non-refueled, crewed ===
=== Refueled, crewed ===
=== Airline, scheduled ===
Not an FAI category. See Longest Flights
=== Airplane, uncrewed ===
FAI does not differentiate between non-refueled and solar aircraft. Class U : Experimental
== Helicopter ==
=== Crewed, non-refueled ===
=== Uncrewed ===
== Free balloon, crewed ==
== Airship ==
== Glider ==
== Space station, crewed ==
Duration that a specific person continuously occupies the spacecraft while in orbit.
See also: Timeline of longest spaceflights, List of spaceflight records
== Aerospacecraft, orbital, crewed ==
== See also ==
Flight distance record
== Notes ==
== References == |
Flight length | In aviation, the flight length or flight distance refers to the distance of a flight. Aircraft do not necessarily follow the great-circle distance, but may opt for a longer route due to weather, traffic, to utilise a jet stream, or to refuel.
Commercial flights are often categorized into long-, medium- or short-haul by commercial airlines based on flight length, although there is no international standard definition.
The related term flight time is defined by ICAO (International Civil Aviation Organization) as "The total time from the moment an aeroplane first moves for the purpose of taking off until the moment it finally comes to rest at the end of the flight", and is referred to colloquially as "blocks to blocks" or "chocks to chocks" time. In commercial aviation, this means the time from pushing back at the departure gate to arriving at the destination gate. Flight time is measured in hours and minutes as it is independent of geographic distance travelled. Flight time can be affected by many things such as wind, traffic, taxiing time, and aircraft used.
== Short-haul and long-haul ==
A flight's length can also be described using the aviation term of "Flight Haul Type", such as "short-haul" or "long-haul". Flight haul types can be defined using either flight distance or flight time.
=== Time-based definitions ===
=== Distance-based definitions ===
David W. Wragg classifies air services as medium-haul when between 1,600 and 4,000 km (900–2,200 nmi), with short-haul being shorter and long-haul being longer.
David Crocker defines short-haul flights as shorter than 1,000 km (540 nmi), and long-haul as the opposite.
==== Asia & Australia ====
Hong Kong International Airport considers destinations in the Americas, Europe, the Middle East, Africa, Southwest Pacific and the Indian Subcontinent long-haul and all others are short-haul.
Japan Air Lines defines routes to Europe and North America as long-haul and all other flights as short-haul.
Qatar Airways defines all flights from Qatar to the Americas, Australia, and New Zealand as Ultra-long-haul, and all other flights as medium or long-haul.
Virgin Australia defines domestic flights as within Australia, short-haul as those to South East Asia/Pacific and long-haul as those to Abu Dhabi or Los Angeles.
==== Europe ====
The European Union defines any passenger flight between city pairs separated by a great circle distance between 1,500 and 3,500 km (800 and 1,900 nmi) to be medium-haul, below as short-haul, and above as long-haul routes.
Eurocontrol defines "very short-haul" flights as being less than 500 km (270 nmi), short-haul flights being between 500 and 1,500 km, medium-haul flights being between 1,500 and 4,000 km (800 and 2,200 nmi), and long-haul flights as longer than that.
The Association of European Airlines defined Long-haul as flights to Americas, sub-Saharan Africa, Asia, Australasia and medium-haul as flights to North Africa and Middle East.
The now defunct airline Air Berlin defined short- and medium-haul as flights to Europe/North Africa and long-haul as those to the rest of the world.
Air France defines short-haul as domestic, medium-haul as within Europe/North Africa and long haul as the rest of the world.
==== North America ====
American Airlines defines short-/medium-haul flights as being less than 3,000 mi (2,600 nmi; 4,800 km) and long-haul as either being more than 3000 miles or being the New York–Los Angeles and New York–San Francisco routes.
United Airlines defines short-haul flights as being less than 700 mi (600 nmi; 1,100 km) and long-haul flights as being greater than 3,000 mi (2,600 nmi; 4,800 km).
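The Eurocontrol distance bands above are simple enough to encode directly. The sketch below is illustrative only: the function name is invented, and, as the surrounding definitions show, other bodies draw the boundaries quite differently.

```python
def haul_type_eurocontrol(distance_km: float) -> str:
    """Classify a flight by great-circle distance using the Eurocontrol bands:
    <500 km very short-haul, 500-1,500 km short-haul,
    1,500-4,000 km medium-haul, >4,000 km long-haul."""
    if distance_km < 500:
        return "very short-haul"
    if distance_km < 1500:
        return "short-haul"
    if distance_km < 4000:
        return "medium-haul"
    return "long-haul"

print(haul_type_eurocontrol(280))    # very short-haul
print(haul_type_eurocontrol(2500))   # medium-haul
print(haul_type_eurocontrol(9000))   # long-haul
```

The same three-way (or four-way) banding structure could be reused with the EU or airline-specific thresholds by swapping the boundary constants.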
=== Aircraft-based definitions ===
Flight Haul Type terms are sometimes used when referring to commercial aircraft. Some commercial carriers choose to refer to their aircraft using flight haul type terms, for example:
Delta Air Lines referred to its Boeing 717, MD-88 and MD-90 as short-haul domestic aircraft; Boeing 757, Boeing 737, Airbus A319 and A321 as long-haul domestic; and its transoceanic Boeing 757, 767, 777 and Airbus A330 as long-haul.
Lufthansa classifies its fleet as: long-haul for wide-body aircraft such as the Airbus A330/Airbus A340, Airbus A350, Airbus A380, Boeing 747, and Boeing 787 Dreamliner; medium-haul for narrow-body aircraft like the Airbus A320 and 737 families; and short-haul for regional jets like the Embraer E-Jets and the Bombardier CRJ-900.
TUI Airways refers to their Boeing 737 as a short and mid-haul airliner and the Boeing 767 and 787 as long haul.
While they are capable of flying further, long-haul capable wide-bodies are often used on shorter trips. In 2017, 40% of A350 routes were shorter than 2,000 nmi (2,300 mi; 3,700 km), 50% of A380 flights fell within 2,000–4,000 nmi (2,300–4,600 mi; 3,700–7,400 km), 70% of 777-200ER routes were shorter than 4,000 nmi (4,600 mi; 7,400 km), 80% of 787-9 routes were shorter than 5,000 nmi (5,800 mi; 9,300 km), and 70% of 777-200LR flights were shorter than 6,000 nmi (6,900 mi; 11,000 km).
== Superlative flights ==
=== Shortest commercial flight ===
The Westray to Papa Westray flight in Orkney, operated by Loganair, is the shortest commercial flight in the world, covering 2.8 km (1.7 mi) in a scheduled flight time of two minutes, including taxiing.
=== Longest commercial flight ===
The world's longest ever commercial flight was Air Tahiti Nui Flight TN64 in early 2020. Due to the COVID-19 pandemic and the impossibility of transit in the United States through Los Angeles International Airport, Air Tahiti Nui scheduled and operated Flight TN64 in March and April 2020 as a non-stop flight between Papeete and Paris-Charles de Gaulle, using a Boeing 787-9 and covering 15,715 km (9,765 mi; 8,485 nmi) in a scheduled time of 16 hours and 20 minutes. As of 2023, it continues to hold the record for the longest ever scheduled commercial nonstop flight (by great circle distance) as well as the world's longest domestic flight.
As of November 9, 2020, Singapore Airlines Flights 23 and 24 are the world's longest active commercial flights, between Singapore and New York–JFK, covering 15,349 km (9,537 mi; 8,288 nmi) in around 18 hours and 40 minutes, operated by an Airbus A350-900ULR.
== Distinctions ==
=== Great-circle distance versus flight length ===
The shortest distance between two geographical points is the great-circle distance. In the example (right), the aircraft travelling westward from North America to Japan is following a great-circle route extending northward towards the Arctic region. The apparent curve of the route is a result of distortion when plotted onto a conventional map projection and makes the route appear to be longer than it really is. Stretching a string between North America and Japan on a globe will demonstrate why this really is the shortest route despite appearances.
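The great-circle distance itself follows from spherical trigonometry, commonly computed with the haversine formula. The sketch below assumes a spherical Earth model (real geodesic computations use an ellipsoid, so published figures differ slightly) and approximate airport coordinates:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance between two points, in kilometres,
    on a sphere of the given mean Earth radius."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# Approximate coordinates: Singapore Changi (SIN) and New York JFK
d = great_circle_km(1.364, 103.991, 40.641, -73.778)
print(round(d))   # roughly 15,300 km on the spherical model
```

This lands close to the 15,349 km quoted for the Singapore–New York route; the small discrepancy reflects the spherical approximation and rounded coordinates, and the actual track flown is longer still for the operational reasons described above.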
The actual flight length is the length of the track flown across the ground in practice, which is usually longer than the ideal great-circle and is influenced by a number of factors such as the need to avoid bad weather, wind direction and speed, fuel economy, navigational restrictions and other requirements. In the example, easterly flights from Japan to North America are shown taking a longer, more southerly, route than the shorter great-circle; this is to take advantage of the favourable jet stream, a fast high-altitude tail-wind that assists the aircraft along its ground track saving more time or fuel than the geographically shortest route.
=== Flight distance versus flight duration ===
Even for flights with the same origin and destination, a flight's duration can be affected by routing, wind, traffic, taxiing time, or aircraft used.
For example, on the Luxembourg to Bucharest route operated by Luxair, the scheduled flight length remains constant while the flight duration varies depending on aircraft used. On Thursday mornings, Luxair operates a DHC-8 turboprop with a scheduled duration of approximately 3 hours, while on Saturday mornings, Luxair's use of an Embraer 190 jet reduces the scheduled duration of the flight down to approximately 2 hours 20 minutes.
== See also ==
Endurance (aeronautics)
Flight distance record
Fuel economy in aircraft
International flight
List of regional airliners
Longest flights
Non-stop flight
Range (aeronautics)
Short-haul flight ban
Ultra long haul flight
== References ==
== External links ==
The Great Circle Mapper Displays Great Circle flight routes on a Map And calculates distance and duration
Flight-time and -distance calculator
Air Miles Calculator
Flight Duration Calculator
Flight Distance Calculator Flight routes duration and Great Circle Mapper |
Flight training | Flight training is a course of study used when learning to pilot an aircraft. The overall purpose of primary and intermediate flight training is the acquisition and honing of basic airmanship skills.
Flight training can be conducted under a structured accredited syllabus with a flight instructor at a flight school or as private lessons with no syllabus with a flight instructor as long as all experience requirements for the desired pilot certificate/license are met.
Typically, flight training consists of a combination of two parts:
Flight Lessons given in the aircraft or in a certified Flight Training Device.
Ground School primarily given as a classroom lecture or lesson by a flight instructor where aeronautical theory is learned in preparation for the student's written, oral, and flight pilot certification/licensing examinations.
Although there are various types of aircraft, many of the principles of piloting them have common techniques, especially those aircraft which are heavier-than-air types.
Flight schools commonly rent aircraft to students and licensed pilots at an hourly rate. Typically, the hourly rate is determined by the aircraft's Hobbs meter or Tach timer, therefore the student is only charged while the aircraft engine is running. Flight instructors can also be scheduled with or without an aircraft for pilot proficiency and recurring training.
The oldest flight training school still in existence is the Royal Air Force's (RAF's) Central Flying School formed in May 1912 at Upavon, United Kingdom. The oldest civil flight school still active in the world is based in Germany at the Wasserkuppe. It was founded as "Mertens Fliegerschule", and is currently named "Fliegerschule Wasserkuppe".
== Licences ==
The International Civil Aviation Organization sets global standards for Pilot licensing that are implemented and enforced by a country's Civil aviation authority. Pilots must first meet their country's requirements to obtain a Student pilot certificate which is used for training towards a Private Pilot Licence (PPL). They can then progress to a Commercial Pilot Licence (CPL), and finally an Airline Transport Pilot Licence (ATPL).
Some countries have a Light Aircraft Pilot Licence (LAPL), but this cannot be used internationally.
Separate licences are required for different aircraft categories, for example helicopters and aeroplanes.
== Ratings ==
A type rating, also known as an endorsement, is the process undertaken by a pilot to update their license to allow them to fly a different type of aircraft. A class rating covers multiple aircraft.
An instrument rating allows a pilot to fly under instrument flight rules (IFR). A night rating allows a pilot to fly at night (that is, outside of Civil twilight).
== See also ==
Bárány chair
Bachelor of Aviation
Ground Instructor
Integrated pilot training
Pilot licensing and certification
Pilot certification in the United States
Pilot licensing in Canada
Pilot licensing in the United Kingdom
== References ==
== External links ==
Learning to Fly: A Practical Manual for Beginners (1916) by Claude Grahame-White and Harry Harper
Student Pilot Guide from the FAA
Accelerated Flight Training from Flying Mag.
Pilot Training Compass: Back to the Future from European Cockpit Association. |
Fort Myer | Fort Myer is the previous name used for a U.S. Army post next to Arlington National Cemetery in Arlington County, Virginia, and across the Potomac River from Washington, D.C. Founded during the American Civil War as Fort Cass and Fort Whipple, the post merged in 2005 with the neighboring Marine Corps installation, Henderson Hall, and is today named Joint Base Myer–Henderson Hall.
== History ==
In 1861, the land that Fort Myer would eventually occupy was part of the Arlington estate, which Mary Anna Custis Lee, the wife of Robert E. Lee, owned and at which Lee resided when not stationed elsewhere (see Arlington House, The Robert E. Lee Memorial). When the Civil War began, the Commonwealth of Virginia seceded from the United States, Lee resigned his commission, and he and his wife left the estate. The United States Government then confiscated the estate and began to use it as a burial ground for Union Army dead (see Arlington National Cemetery), to house freed slaves (Freedmen's Village), and for military purposes, including the Civil War defenses of Washington (see Washington, D.C., in the American Civil War).
=== Fort Cass ===
Shortly after the Union Army's rout at the First Battle of Bull Run (Manassas) in late July 1861, the Army constructed in August 1861 a lunette (Fort Ramsay) on the future grounds of Fort Myer. One of the first fortifications built on the Arlington Line, the lunette was located at and near the present post's Forest Circle. Later renamed to Fort Cass, the lunette had a perimeter of 288 yards (263 m) and emplacements for 12 guns.
A May 17, 1864, report from the Union Army's Inspector of Artillery (see Union Army artillery organization) noted the following:
Fort Cass, Maj. N. Shatswell commanding.–Garrison, two companies First Massachusetts Heavy Artillery—8 commissioned officers, 1 ordnance-sergeant, 220 men. Armament, three 6-pounder field guns (smooth), five 20-pounder Parrotts (rifled), three 24-pounder siege guns (smooth), one 24-pounder F. D. howitzer (smooth), one 24-pounder Coehorn mortar. Magazines, two; dry and in good condition. Ammunition, full supply, well packed and in serviceable condition. Implements, complete and serviceable. Drill in artillery, fair. Drill in infantry, fair. Discipline, fair. Garrison sufficient for the work.
Although the Army abandoned the lunette in 1865 at the end of the Civil War, the United States War Department continued to control its property.
=== Fort Whipple ===
Following the Union Army's defeat at the Second Battle of Bull Run (Manassas) in August 1862, the Army constructed Fort Whipple on the grounds of the former Arlington estate during the spring of 1863. The fort was located a short distance southeast of Fort Cass. The Army named the fort after Brevet Major General Amiel Weeks Whipple, who died in May 1863 of wounds received during the Battle of Chancellorsville. The fort was considered to be one of the strongest fortifications erected for the defense of Washington during the Civil War. It had a perimeter of 658 yards and places for 43 guns.
The May 17, 1864, report from the Union Army's Inspector of Artillery noted the following:
Fort Whipple, Major Rolfe commanding.–Garrison, three companies First Massachusetts Heavy Artillery–1 major, 13 commissioned officers, 1 ordnance-sergeant, 414 men. Armament, six 12-pounder field guns (smooth), four 12-pounder field howitzers (smooth), eight 12-pounder James guns (rifled), eleven 4.5-inch ordnance. Magazines, four; two not in a serviceable condition. Ammunition, full supply; good condition. Implements, complete and serviceable. Drill in artillery, fair. Drill in infantry, fair. Discipline, fair. Garrison sufficient; interior work.
The Civil War ended in 1865. Fort Whipple, with its fortifications abandoned, then became the home of the Signal School of Instruction for Army and Navy Officers, established in 1869.
=== Fort Myer ===
On February 4, 1881, the Army post containing Fort Whipple was renamed Fort Myer as an honor to Brigadier General Albert J. Myer, who had commanded the newly established Signal School of Instruction for Army and Navy Officers from 1869 until he died in August 1880. Since then, the post has been a Signal Corps post, a showcase for the US Army's cavalry, and, since the 1940s, home to the Army's elite ceremonial units—The United States Army Band ("Pershing's Own") and the 3rd U.S. Infantry Regiment ("The Old Guard").
The National Weather Service was originated there by General Albert J. Myer in 1870.
Fort Myer was the site of the first flight of an aircraft at a military installation. Several exhibition flights by Orville Wright took place there in 1908. On 17 September 1908 it became the location of the first airplane fatality: during a demonstration flight with Orville at an altitude of about 100 feet (30 m), a propeller split, sending the aircraft out of control. Lt. Thomas Selfridge suffered a concussion in the crash and later died, the first person to die in a powered fixed-wing aircraft accident. Orville was badly injured, suffering broken ribs and a broken leg.
Quarters One on Fort Myer, which was originally built as the garrison commander's quarters, has been the home of the Chief of Staff of the United States Army since 1908 when Major General J. Franklin Bell took up residence. It has been the home of every succeeding Chief of Staff, except for General John J. Pershing.
The United States Navy established the nation's first radio telecommunications station, NAA, near Fort Myer in 1913. In 1915, the station's radio towers, "The Three Sisters", transmitted to Paris the first wireless communication that crossed the Atlantic Ocean.
During World War I, Fort Myer was a staging area for a large number of engineering, artillery, and chemical companies and regiments. The area of Fort Myer now occupied by Andrew Rader Health Clinic and the Commissary were made into a trench-system training grounds where French officers taught the Americans about trench warfare.
General George S. Patton Jr., who was posted at Fort Myer four different times, started the charitable "Society Circus" after World War I. He ultimately was post commander and commanded the 3rd Cavalry Regiment that was stationed at Fort Myer from the 1920s to 1942 when the regiment was sent to Georgia to get mechanized.
In late 2001, troops, deployed in response to the September 11th attacks, were bivouacked at Fort Myer. These troops were under Operation Noble Eagle. These included both active and National Guard Military Police units from around the nation. In 2005 the last remaining deployed responders were demobilized.
=== Joint Base Myer–Henderson Hall ===
As a result of the 2005 Base Realignment and Closure Commission initiative to create more efficiency of efforts, the Army's Fort Myer and the Marines' Henderson Hall became the first Joint Base in the Department of Defense. Joint Base Myer–Henderson Hall (JBMHH) consists of military installations at Fort Myer, Henderson Hall, The Pentagon, and Fort Lesley J. McNair. These installations and departments serve over 150,000 active duty, DoD civilian, and retired military personnel in the region.
== Commemorative ==
The fort was designated a National Historic Landmark in 1972, for its well-preserved concentration of cavalry facilities and officers' quarters, and for its importance in military aviation history. On September 1, 1970, the United States Postal Service issued its first day cover of a postcard celebrating the 100th anniversary of Weather Services at Fort Myer.
A pamphlet and one book have been published about Fort Myer. The book, Images of America: Fort Myer, contains a copy of a handwritten letter from Abraham Lincoln that appointed General Whipple's oldest son to the United States Military Academy at West Point.
== See also ==
List of National Historic Landmarks in Virginia
National Register of Historic Places listings in Arlington County, Virginia
== Notes ==
== References ==
Cooling III, Benjamin Franklin; Owen II, Walton H. (2010). Mr. Lincoln's Forts: A Guide to the Civil War Defenses of Washington (New ed.). Scarecrow Press. ISBN 9780810863071. LCCN 2009018392. OCLC 665840182. Retrieved March 5, 2018 – via Google Books.
Michael, John (2011). Fort Myer: Images of America. Charleston, South Carolina: Arcadia Publishing. ISBN 9780738587356. LCCN 2010936703. OCLC 701016030. Retrieved March 7, 2018 – via Google Books.
Staff of the Fort Myer Post (1963). The History of Fort Myer Virginia: 100th Anniversary Issue (Special Centennial Edition Of The Fort Myer Post). Arlington, Virginia: Fort Myer Post. LCCN 58061390. OCLC 7903755. Retrieved March 7, 2018 – via HathiTrust Digital Library.
== External links ==
"Joint Base Myer – Henderson Hall". Army.mil. Archived from the original on November 3, 2013.
"Fort Myer Historic District". Aviation: From Sand Dunes to Sonic Booms: A National Register of Historic Places Travel Itinerary. Washington, D.C.: National Park Service. November 18, 2003. Archived from the original on May 8, 2012.
Images of Fort Myer in the Historic American Buildings Survey (HABS) via Library of Congress:
Quartermaster Workshops at Arlington Boulevard & Second Street
Quartermaster Garage at Arlington Boulevard & Second Street
Commissary Sergeant's Quarters on Washington Avenue between Johnson Lane & Custer Road
First Sergeant's Quarters on Washington Avenue between Johnson Lane & Custer Road
Noncommissioned Officers Quarters on Washington Avenue between Johnson Lane & Custer Road
Historical Perspective |
France | France, officially the French Republic, is a country located primarily in Western Europe. Its overseas regions and territories include French Guiana in South America, Saint Pierre and Miquelon in the North Atlantic, the French West Indies, and many islands in Oceania and the Indian Ocean, giving it one of the largest discontiguous exclusive economic zones in the world. Metropolitan France shares borders with Belgium and Luxembourg to the north; Germany to the northeast; Switzerland to the east; Italy and Monaco to the southeast; Andorra and Spain to the south; and a maritime border with the United Kingdom to the northwest. Its metropolitan area extends from the Rhine to the Atlantic Ocean and from the Mediterranean Sea to the English Channel and the North Sea. Its eighteen integral regions—five of which are overseas—span a combined area of 632,702 km2 (244,288 sq mi) and have an estimated total population of over 68.6 million as of January 2025. France is a semi-presidential republic and its capital, largest city and main cultural and economic centre is Paris.
Metropolitan France was settled during the Iron Age by Celtic tribes known as Gauls before Rome annexed the area in 51 BC, leading to a distinct Gallo-Roman culture. In the Early Middle Ages, the Franks formed the kingdom of Francia, which became the heartland of the Carolingian Empire. The Treaty of Verdun of 843 partitioned the empire, with West Francia evolving into the Kingdom of France. In the High Middle Ages, France was a powerful but decentralized feudal kingdom, but from the mid-14th to the mid-15th centuries, France was plunged into a dynastic conflict with England known as the Hundred Years' War. In the 16th century, French culture flourished during the French Renaissance and a French colonial empire emerged. Internally, France was dominated by the conflict with the House of Habsburg and the French Wars of Religion between Catholics and Huguenots. France was successful in the Thirty Years' War and further increased its influence during the reign of Louis XIV.
The French Revolution of 1789 overthrew the Ancien Régime and produced the Declaration of the Rights of Man, which expresses the nation's ideals to this day. France reached its political and military zenith in the early 19th century under Napoleon Bonaparte, subjugating part of continental Europe and establishing the First French Empire. The collapse of the empire initiated a period of relative decline, in which France endured the Bourbon Restoration until the founding of the French Second Republic which was succeeded by the Second French Empire upon Napoleon III's takeover. His empire collapsed during the Franco-Prussian War in 1870. This led to the establishment of the Third French Republic, and subsequent decades saw a period of economic prosperity and cultural and scientific flourishing known as the Belle Époque. France was one of the major participants of World War I, from which it emerged victorious at great human and economic cost. It was among the Allies of World War II, but it surrendered and was occupied in 1940. Following its liberation in 1944, the short-lived Fourth Republic was established and later dissolved in the course of the defeat in the Algerian War. The current Fifth Republic was formed in 1958 by Charles de Gaulle. Algeria and most French colonies became independent in the 1960s, with the majority retaining close economic and military ties with France.
France retains its centuries-long status as a global centre of art, science, and philosophy. It hosts the fourth-largest number of UNESCO World Heritage Sites and is the world's leading tourist destination, having received 100 million foreign visitors in 2023. A developed country, France has a high nominal per capita income globally, and its advanced economy ranks among the largest in the world by both nominal GDP and PPP-adjusted GDP. It is a great power, being one of the five permanent members of the United Nations Security Council and an official nuclear-weapon state. The country is part of multiple international organizations and forums.
== Etymology ==
Originally applied to the whole Frankish Empire, the name France comes from the Latin Francia, or 'realm of the Franks'. The name of the Franks is related to the English word frank ('free'): the latter stems from the Old French franc ('free, noble, sincere'), and ultimately from the Medieval Latin word francus ('free, exempt from service; freeman, Frank', a generalisation of the tribal name that emerged as a Late Latin borrowing of the reconstructed Frankish endonym *Frank). It has been suggested that the meaning 'free' was adopted because, after the conquest of Gaul, only Franks were free of taxation, or more generally because they had the status of freemen in contrast to servants or slaves. The etymology of *Frank is uncertain. It is traditionally derived from the Proto-Germanic word *frankōn, which translates as 'javelin' or 'lance' (the throwing axe of the Franks was known as the francisca), although these weapons may have been named because of their use by the Franks, not the other way around.
In English, 'France' is pronounced FRANSS in American English and FRAHNSS or FRANSS in British English. The pronunciation FRAHNSS is mostly confined to accents with the trap-bath split, such as Received Pronunciation, though it can also be heard in some other dialects such as Cardiff English.
== History ==
=== Pre-6th century BC ===
The oldest traces of archaic humans in what is now France date from approximately 1.8 million years ago. Neanderthals occupied the region into the Upper Paleolithic era but were slowly replaced by Homo sapiens around 35,000 BC. This period witnessed the emergence of cave painting in the Dordogne and Pyrenees, including at Lascaux, dated to c. 18,000 BC. At the end of the Last Glacial Period (10,000 BC), the climate became milder; from approximately 7,000 BC, this part of Western Europe entered the Neolithic era, and its inhabitants became sedentary.
After demographic and agricultural development between the 4th and 3rd millennia BC, metallurgy appeared, initially working gold, copper and bronze, then later iron. France has numerous megalithic sites from the Neolithic, including the Carnac stones site (approximately 3,300 BC).
=== Antiquity (6th century BC – 5th century AD) ===
In 600 BC, Ionian Greeks from Phocaea founded the colony of Massalia (present-day Marseille). Celtic tribes penetrated parts of eastern and northern France, spreading through the rest of the country between the 5th and 3rd century BC. Around 390 BC, the Gallic chieftain Brennus and his troops made their way to Roman Italy, defeated the Romans in the Battle of the Allia, and besieged and ransomed Rome. This left Rome weakened, and the Gauls continued to harass the region until 345 BC when they entered into a peace treaty. But the Romans and the Gauls remained adversaries for centuries.
Around 125 BC, the south of Gaul was conquered by the Romans, who called this region Provincia Nostra ("Our Province"), which evolved into Provence in French. Julius Caesar conquered the remainder of Gaul and overcame a revolt by the Gallic chieftain Vercingetorix in 52 BC. Gaul was divided by Augustus into provinces, and many cities were founded during the Gallo-Roman period, including Lugdunum (present-day Lyon), the capital of the Gauls. In 250–290 AD, Roman Gaul suffered a crisis, with its fortified borders attacked by barbarians. The situation improved in the first half of the 4th century, a period of revival and prosperity. In 312, Emperor Constantine I converted to Christianity. Christianity, which had previously been persecuted, then grew rapidly. But from the 5th century, the Barbarian Invasions resumed. Germanic tribes invaded the region: the Visigoths settling in the southwest, the Burgundians along the Rhône River Valley, and the Franks in the north.
=== Early Middle Ages (5th–10th century) ===
In Late antiquity, ancient Gaul was divided into Germanic kingdoms and a remaining Gallo-Roman territory. Celtic Britons, fleeing the Anglo-Saxon settlement of Britain, settled in west Armorica; the Armorican peninsula was renamed Brittany and Celtic culture was revived.
The first leader to unite all Franks was Clovis I, who began his reign as king of the Salian Franks in 481, routing the last forces of the Roman governors in 486. Clovis vowed to be baptised a Christian in the event of victory against the Visigothic Kingdom, a pledge said to have guaranteed his success in the battle. Clovis regained the southwest from the Visigoths and was baptised in 508. Clovis I was the first Germanic conqueror after the Fall of the Western Roman Empire to convert to Catholic Christianity; thus France was given the title "Eldest daughter of the Church" by the papacy, and French kings were called "the Most Christian Kings of France".
The Franks embraced the Christian Gallo-Roman culture, and ancient Gaul was renamed Francia ("Land of the Franks"). The Germanic Franks adopted Romance languages. Clovis made Paris his capital and established the Merovingian dynasty, but his kingdom would not survive his death. The Franks treated land as a private possession and divided it among their heirs, so four kingdoms emerged from that of Clovis: Paris, Orléans, Soissons, and Rheims. The last Merovingian kings lost power to their mayors of the palace (head of household). One mayor of the palace, Charles Martel, defeated an Umayyad invasion of Gaul at the Battle of Tours in 732. His son, Pepin the Short, seized the crown of Francia from the weakened Merovingians and founded the Carolingian dynasty. Pepin's son, Charlemagne, reunited the Frankish kingdoms and built an empire across Western and Central Europe.
Proclaimed Holy Roman Emperor by Pope Leo III and thus establishing the French government's longtime historical association with the Catholic Church, Charlemagne tried to revive the Western Roman Empire and its cultural grandeur. Charlemagne's son, Louis I, kept the empire united; however, in 843 it was divided between Louis's three sons into East Francia, Middle Francia and West Francia. West Francia approximated the area occupied by modern France and was its precursor.
During the 9th and 10th centuries, threatened by Viking invasions, France became a decentralised state: the nobility's titles and lands became hereditary, and the authority of the king became more religious than secular, and so was less effective and more readily challenged by noblemen. Thus feudalism was established in France. Some of the king's vassals grew so powerful that they posed a threat to the king. After the Battle of Hastings in 1066, William the Conqueror added "King of England" to his titles, becoming both a vassal of and an equal to the king of France, creating recurring tensions.
=== High and Late Middle Ages (10th–15th century) ===
The Carolingian dynasty ruled France until 987, when Hugh Capet was crowned king of the Franks. His descendants unified the country through wars and inheritance. From 1190, the Capetian rulers began to be referred to as "kings of France" rather than "kings of the Franks". Later kings expanded their directly possessed domaine royal to cover over half of modern France by the 15th century. Royal authority became more assertive, centred on a hierarchically conceived society distinguishing nobility, clergy, and commoners.
The nobility played a prominent role in Crusades to restore Christian access to the Holy Land. French knights made up most reinforcements in the 200 years of the Crusades, in such a fashion that the Arabs referred to crusaders as Franj. French Crusaders imported French into the Levant, making Old French the base of the lingua franca ("Frankish language") of the Crusader states. The Albigensian Crusade was launched in 1209 to eliminate the heretical Cathars in the southwest of modern-day France.
From the 11th century, the House of Plantagenet, rulers of the County of Anjou, established its dominion over the surrounding provinces of Maine and Touraine, then built an "empire" from England to the Pyrenees, covering half of modern France. Tensions between France and the Plantagenet empire would last a hundred years, until Philip II of France conquered, between 1202 and 1214, most continental possessions of the empire, leaving England and Aquitaine to the Plantagenets.
Charles IV the Fair died without an heir in 1328. The crown passed to Philip of Valois, rather than Edward of Plantagenet, who became Edward III of England. During the reign of Philip, the monarchy reached the height of its medieval power. However, Philip's claim to the throne was contested by Edward in 1337, and England and France entered the off-and-on Hundred Years' War. Boundaries changed, but landholdings inside France by English kings remained extensive for decades. With charismatic leaders such as Joan of Arc, French counterattacks won back most English continental territories. France was also struck by the Black Death, from which half of its 17 million population died.
=== Early modern period (15th century–1789) ===
The French Renaissance saw cultural development and the standardisation of French, which became the official language of France and of Europe's aristocracy. France became a rival of the House of Habsburg during the Italian Wars, a rivalry that would dictate much of its later foreign policy until the mid-18th century. French explorers claimed lands in the Americas, paving the way for the expansion of the French colonial empire. The rise of Protestantism led France into a civil war known as the French Wars of Religion, which forced Huguenots to flee to Protestant regions such as the British Isles and Switzerland. The wars were ended by Henry IV's Edict of Nantes, which granted some freedom of religion to the Huguenots. Spanish troops assisted the Catholics from 1589 to 1594 and invaded France in 1597. Spain and France returned to all-out war between 1635 and 1659; the war cost France 300,000 casualties.
Under Louis XIII, Cardinal Richelieu promoted centralisation of the state and reinforced royal power. He destroyed castles of defiant lords and denounced the use of private armies. By the end of the 1620s, Richelieu established "the royal monopoly of force". France fought in the Thirty Years' War, supporting the Protestant side against the Habsburgs. From the 16th to the 19th century, France was responsible for about 10% of the transatlantic slave trade.
During Louis XIV's minority, a period of unrest known as the Fronde occurred. This rebellion was driven by feudal lords and sovereign courts in reaction to absolute royal power. The monarchy reached its peak during the 17th century under the reign of Louis XIV, during which France further increased its influence. By turning powerful lords into courtiers at the Palace of Versailles, Louis XIV ensured that his command of the military went unchallenged. The "Sun King" made France the leading European power. France became the most populous European country and had tremendous influence over European politics, economy, and culture. French became the most-used language in diplomacy, science, and literature until the 20th century. France took control of territories in the Americas, Africa and Asia. In 1685, Louis XIV revoked the Edict of Nantes, forcing thousands of Huguenots into exile, and published the Code Noir, which provided the legal framework for slavery and expelled Jews from French colonies.
Under Louis XV (r. 1715–1774), France lost New France and most of its Indian possessions after its defeat in the Seven Years' War (1756–1763). Its European territory kept growing, however, with acquisitions such as Lorraine and Corsica. Louis XV's weak rule, including the decadence of his court, discredited the monarchy, which in part paved the way for the French Revolution.
Louis XVI (r. 1774–1793) supported America with money, fleets and armies, helping them win independence from Great Britain. France gained revenge, but verged on bankruptcy—a factor that contributed to the Revolution. Much of the Enlightenment occurred in French intellectual circles, and scientific breakthroughs, such as the naming of oxygen (1778) and the first hot air balloon carrying passengers (1783), were achieved by French scientists. French explorers took part in voyages of scientific exploration through maritime expeditions. Enlightenment philosophy, in which reason is advocated as the primary source of legitimacy, undermined the power of and support for the monarchy and was a factor in the Revolution.
=== Revolutionary France (1789–1799) ===
The French Revolution was a period of political and societal change that began with the Estates General of 1789, and ended with the coup of 18 Brumaire in 1799 and the formation of the French Consulate. Many of its ideas are fundamental principles of liberal democracy, while its values and institutions remain central to modern political discourse.
Its causes were a combination of social, political and economic factors, which the Ancien Régime proved unable to manage. A financial crisis and social distress led in May 1789 to the convocation of the Estates General, which was converted into a National Assembly in June. The Storming of the Bastille on 14 July led to a series of radical measures by the Assembly, among them the abolition of feudalism, state control over the Catholic Church in France, and a declaration of rights.
The next three years were dominated by struggle for political control, exacerbated by economic depression. Military defeats following the outbreak of the French Revolutionary Wars in April 1792 resulted in the insurrection of 10 August 1792. The monarchy was abolished and replaced by the French First Republic in September, while Louis XVI was executed in January 1793.
After another revolt in June 1793, the constitution was suspended and power passed from the National Convention to the Committee of Public Safety. About 16,000 people were executed in a Reign of Terror, which ended in July 1794. Weakened by external threats and internal opposition, the Republic was replaced in 1795 by the Directory. Four years later in 1799, the Consulate seized power in a coup led by Napoleon.
=== Napoleon and 19th century (1799–1914) ===
Napoleon became First Consul in 1799 and later Emperor of the French Empire (1804–1814; 1815). Changing sets of European coalitions declared war on Napoleon's empire. His armies conquered most of continental Europe with swift victories such as the battles of Jena-Auerstadt and Austerlitz. Members of the Bonaparte family were appointed monarchs in some of the newly established kingdoms.
These victories led to the worldwide expansion of French revolutionary ideals and reforms, such as the metric system, Napoleonic Code and Declaration of the Rights of Man. In 1812 Napoleon attacked Russia, reaching Moscow. Thereafter his army disintegrated through supply problems, disease, Russian attacks, and finally winter. After this catastrophic campaign and the ensuing uprising of European monarchies against his rule, Napoleon was defeated. About a million Frenchmen died during the Napoleonic Wars. After his brief return from exile, Napoleon was finally defeated in 1815 at the Battle of Waterloo, and the Bourbon monarchy was restored with new constitutional limitations.
The discredited Bourbon dynasty was overthrown by the July Revolution of 1830, which established the constitutional July Monarchy; French troops began the conquest of Algeria. Unrest led to the French Revolution of 1848 and the end of the July Monarchy. The abolition of slavery and the introduction of universal male suffrage were re-enacted in 1848. In 1852, the president of the French Republic, Louis-Napoléon Bonaparte, Napoleon I's nephew, was proclaimed emperor of the Second Empire as Napoleon III. He multiplied French interventions abroad, especially in Crimea, Mexico and Italy. Napoleon III was unseated following defeat in the Franco-Prussian War of 1870, and his regime was replaced by the Third Republic. By 1875, the French conquest of Algeria was complete, with approximately 825,000 Algerians killed by famine, disease, and violence.
France had colonial possessions since the beginning of the 17th century, but in the 19th and 20th centuries its empire extended greatly and became the second-largest behind the British Empire. Including metropolitan France, the total area reached almost 13 million square kilometres in the 1920s and 1930s, 9% of the world's land. Known as the Belle Époque, the turn of the century was characterised by optimism, regional peace, economic prosperity and technological, scientific and cultural innovations. In 1905, state secularism was officially established.
=== Early to mid-20th century (1914–1946) ===
France was invaded by Germany and defended by Great Britain at the start of World War I in August 1914. A rich industrial area in the north was occupied. France and the Allies emerged victorious against the Central Powers at tremendous human cost: the war left 1.4 million French soldiers dead, 4% of the population. The interwar period was marked by intense international tensions and by social reforms introduced by the Popular Front government (e.g., annual leave, eight-hour workdays, women in government).
In 1940, France was invaded and quickly defeated by Nazi Germany. France was divided into a German occupation zone in the north, an Italian occupation zone and an unoccupied territory, the rest of France, which consisted of southern France and the French empire. The Vichy government, an authoritarian regime collaborating with Germany, ruled the unoccupied territory. Free France, the government-in-exile led by Charles de Gaulle, was set up in London.
From 1942 to 1944, about 160,000 French citizens, including around 75,000 Jews, were deported to death and concentration camps. On 6 June 1944, the Allies invaded Normandy, and in August they invaded Provence. The Allies and the French Resistance emerged victorious, and French sovereignty was restored with the Provisional Government of the French Republic (GPRF). This interim government, established by de Gaulle, continued to wage war against Germany and to purge collaborators from office. It made important reforms, e.g. the extension of suffrage to women and the creation of a social security system.
=== 1946–present ===
A new constitution resulted in the Fourth Republic (1946–1958), which saw strong economic growth (les Trente Glorieuses). France was a founding member of NATO and attempted to regain control of French Indochina, but was defeated by the Viet Minh in 1954. France faced another anti-colonialist conflict in Algeria, then part of France and home to over one million European settlers (Pieds-Noirs). The French systematically used torture and repression, including extrajudicial killings, to keep control. This conflict nearly led to a coup and civil war.
During the May 1958 crisis, the weak Fourth Republic gave way to the Fifth Republic, which included a strengthened presidency. The war concluded with the Évian Accords in 1962, which led to Algerian independence at a high price: between half a million and one million deaths and over 2 million internally displaced Algerians. Around one million Pieds-Noirs and Harkis fled from Algeria to France. A vestige of the empire is the French overseas departments and territories.
During the Cold War, de Gaulle pursued a policy of "national independence" towards the Western and Eastern blocs. He withdrew from NATO's integrated military command (while remaining within the alliance), launched a nuclear development programme and made France the fourth nuclear power. He restored cordial Franco-German relations to create a European counterweight between the American and Soviet spheres of influence. However, he opposed any development of a supranational Europe, favouring sovereign nations. The revolt of May 1968 had an enormous social impact; it was a watershed moment when a conservative moral ideal (religion, patriotism, respect for authority) shifted to a more liberal one (secularism, individualism, sexual revolution). Although the revolt was a political failure (the Gaullist party emerged stronger than before), it announced a split between the French and de Gaulle, who resigned.
In the post-Gaullist era, France remained one of the most developed economies in the world but faced crises that resulted in high unemployment rates and increasing public debt. In the late 20th and early 21st centuries, France has been at the forefront of the development of a supranational European Union, notably by signing the Maastricht Treaty in 1992, establishing the eurozone in 1999 and signing the Treaty of Lisbon in 2007. France has fully reintegrated into NATO and since participated in most NATO-sponsored wars. Since the 19th century, France has received many immigrants, often male foreign workers from European Catholic countries who generally returned home when not employed. During the 1970s France faced an economic crisis and allowed new immigrants (mostly from the Maghreb, in northwest Africa) to permanently settle in France with their families and acquire citizenship. It resulted in hundreds of thousands of Muslims living in subsidised public housing and suffering from high unemployment rates. The government had a policy of assimilation of immigrants, where they were expected to adhere to French values and norms.
Since the 1995 public transport bombings, France has been targeted by Islamist organisations, notably in the Charlie Hebdo attack of 2015, which provoked the largest public rallies in French history, gathering 4.4 million people, and the November 2015 Paris attacks, which resulted in 130 deaths, the deadliest attack on French soil since World War II and the deadliest in the European Union since the Madrid train bombings of 2004. Opération Chammal, France's military effort to contain ISIS, killed over 1,000 ISIS troops between 2014 and 2015.
== Geography ==
The vast majority of France's territory and population is situated in Western Europe and is called Metropolitan France. It is bordered by the North Sea in the north, the English Channel in the northwest, the Atlantic Ocean in the west and the Mediterranean Sea in the southeast. Its land borders consist of Belgium and Luxembourg in the northeast, Germany and Switzerland in the east, Italy and Monaco in the southeast, and Andorra and Spain in the south and southwest. Except for the northeast, most of France's land borders are roughly delineated by natural boundaries and geographic features: to the south and southeast, the Pyrenees and the Alps and the Jura, respectively, and to the east, the Rhine river. Metropolitan France includes various coastal islands, of which the largest is Corsica. Metropolitan France is situated mostly between latitudes 41° and 51° N, and longitudes 6° W and 10° E, on the western edge of Europe, and thus lies within the northern temperate zone. Its continental part covers about 1,000 km from north to south and from east to west.
Metropolitan France covers 551,500 square kilometres (212,935 sq mi), the largest among European Union members. France's total land area, with its overseas departments and territories (excluding Adélie Land), is 643,801 km2 (248,573 sq mi), 0.45% of the total land area on Earth. France possesses a wide variety of landscapes, from coastal plains in the north and west to the mountain ranges of the Alps in the southeast, the Massif Central in the south-central and Pyrenees in the southwest.
Due to its numerous overseas departments and territories scattered across the planet, France possesses the second-largest exclusive economic zone (EEZ) in the world, covering 11,035,000 km2 (4,261,000 sq mi). Its EEZ covers approximately 8% of the total surface of all the EEZs of the world.
=== Geology, topography and hydrography ===
Metropolitan France has a wide variety of topographical features and natural landscapes. During the Hercynian uplift in the Paleozoic Era, the Armorican Massif, the Massif Central, the Morvan, the Vosges and Ardennes ranges and the island of Corsica were formed. These massifs delineate several sedimentary basins, such as the Aquitaine Basin in the southwest and the Paris Basin in the north. Various routes of natural passage, such as the Rhône Valley, allow easy communication. The Alpine, Pyrenean and Jura mountains are much younger and have less eroded forms. At 4,810.45 metres (15,782 ft) above sea level, Mont Blanc, located in the Alps on the France–Italy border, is the highest point in Western Europe. About 60% of municipalities are classified as having seismic risks, though these are mostly moderate.
The coastlines offer contrasting landscapes: mountain ranges along the French Riviera, coastal cliffs such as the Côte d'Albâtre, and wide sandy plains in the Languedoc. Corsica lies off the Mediterranean coast. France has an extensive river system consisting of four major rivers, the Seine, the Loire, the Garonne and the Rhône, and their tributaries, whose combined catchment includes over 62% of the metropolitan territory. The Rhône divides the Massif Central from the Alps and flows into the Mediterranean Sea at the Camargue. The Garonne meets the Dordogne just after Bordeaux, forming the Gironde estuary, the largest estuary in Western Europe, which after approximately 100 kilometres (62 mi) empties into the Atlantic Ocean. Other water courses drain towards the Meuse and Rhine along the northeastern borders. France has 11,000,000 km2 (4,200,000 sq mi) of marine waters within three oceans under its jurisdiction, of which 97% are overseas.
=== Environment ===
France was one of the first countries to create an environment ministry, in 1971. France is ranked 19th by carbon dioxide emissions due to the country's heavy investment in nuclear power following the 1973 oil crisis, which now accounts for 75 per cent of its electricity production and results in less pollution. According to the 2020 Environmental Performance Index conducted by Yale and Columbia, France was the fifth most environmentally conscious country in the world.
Like all European Union member states, France agreed to cut carbon emissions by at least 20% of 1990 levels by 2020. As of 2009, French carbon dioxide emissions per capita were lower than those of China. The country was set to impose a carbon tax in 2009; however, the plan was abandoned due to fears of it burdening French businesses.
Forests account for 31 per cent of France's land area—the fourth-highest proportion in Europe—representing an increase of 7 per cent since 1990. French forests are some of the most diverse in Europe, comprising more than 140 species of trees. France had a 2018 Forest Landscape Integrity Index mean score of 4.52/10, ranking it 123rd globally. There are nine national parks and 46 natural parks in France. A regional nature park (French: parc naturel régional or PNR) is a public establishment agreed between local authorities and the national government, covering an inhabited rural area of outstanding beauty, intended to protect the scenery and heritage as well as to foster sustainable economic development in the area. As of 2019 there were 54 PNRs in France.
== Politics ==
France is a representative democracy organised as a unitary semi-presidential republic. Democratic traditions and values are deeply rooted in French culture, identity and politics. The Constitution of the Fifth Republic was approved by referendum in 1958, establishing a framework consisting of executive, legislative and judicial branches. It sought to address the instability of the Third and Fourth Republics by combining elements of both the parliamentary and presidential systems, while greatly strengthening the authority of the executive relative to the legislature.
=== Government ===
The executive branch has two leaders. The president, who has been Emmanuel Macron since 2017, is the head of state, elected directly by universal adult suffrage for a five-year term. The prime minister, who has been François Bayrou since 2024, is the head of government, appointed by the President to lead the government. The president has the power to dissolve Parliament or circumvent it by submitting referendums directly to the people; the president also appoints judges and civil servants, negotiates and ratifies international agreements, as well as serves as commander-in-chief of the Armed Forces. The prime minister determines public policy and oversees the civil service, with an emphasis on domestic matters. In the 2022 presidential election, Macron was re-elected. Two months later, in the legislative elections, Macron lost his parliamentary majority and had to form a minority government.
The legislature consists of the French Parliament, a bicameral body made up of a lower house, the National Assembly and an upper house, the Senate. Legislators in the National Assembly, known as députés, represent local constituencies and are directly elected for five-year terms. The Assembly has the power to dismiss the government by majority vote. Senators are chosen by an electoral college for six-year terms, with half the seats submitted to election every three years. The Senate's legislative powers are limited; in the event of disagreement between the two chambers, the National Assembly has the final say. The parliament is responsible for determining the rules and principles concerning most areas of law, political amnesty, and fiscal policy; however, the government may draft specific details concerning most laws.
From World War II until 2017, French politics was dominated by two politically opposed groupings: one left-wing, the French Section of the Workers' International, which was succeeded by the Socialist Party (in 1969); and the other right-wing, the Gaullist Party, whose name changed over time to the Rally of the French People (1947), the Union of Democrats for the Republic (1958), the Rally for the Republic (1976), the Union for a Popular Movement (2002) and The Republicans (since 2015). In the 2017 presidential and legislative elections, the radical centrist party La République En Marche! (LREM) became the dominant force, overtaking both the Socialists and the Republicans. LREM's opponent in the second round of the 2017 and 2022 presidential elections was the growing far-right party National Rally (RN). Since 2020, Europe Ecology – The Greens (EELV) have performed well in mayoral elections in major cities, while on a national level an alliance of left parties (the NUPES) was the second-largest voting bloc elected to the lower house in 2022. The right-wing populist RN became the largest opposition party in the National Assembly in 2022.
The electorate is constitutionally empowered to vote on amendments passed by the Parliament and bills submitted by the president. Referendums have played a key role in shaping French politics and even foreign policy; voters have decided on such matters as Algeria's independence, the election of the president by popular vote, the formation of the EU, and the reduction of presidential term limits.
=== Administrative divisions ===
France is divided into 18 regions (located in Europe and overseas), five overseas collectivities, one overseas territory, one special collectivity (New Caledonia) and one uninhabited island directly under the authority of the Minister of Overseas France (Clipperton Island).
==== Regions ====
Since 2016, France has been divided into 18 administrative regions: 13 regions in metropolitan France (including Corsica), and five overseas. The regions are further subdivided into 101 departments, which are numbered mainly alphabetically. The department number is used in postal codes and was formerly used on vehicle registration plates. Among the 101 French departments, five (French Guiana, Guadeloupe, Martinique, Mayotte, and Réunion) are in overseas regions (ROMs) that are simultaneously overseas departments (DOMs), enjoying the same status as metropolitan departments and thereby included in the European Union.
The 101 departments are subdivided into 335 arrondissements, which are, in turn, subdivided into 2,054 cantons. These cantons are then divided into 36,658 communes, which are municipalities with an elected municipal council. Three communes—Paris, Lyon and Marseille—are subdivided into 45 municipal arrondissements.
==== Overseas territories and collectivities ====
In addition to the 18 regions and 101 departments, the French Republic has five overseas collectivities (French Polynesia, Saint Barthélemy, Saint Martin, Saint Pierre and Miquelon, and Wallis and Futuna), one sui generis collectivity (New Caledonia), one overseas territory (French Southern and Antarctic Lands), and one island possession in the Pacific Ocean (Clipperton Island). Overseas collectivities and territories form part of the French Republic, but do not form part of the European Union or its fiscal area (except for Saint Barthélemy, which seceded from Guadeloupe in 2007). The Pacific Collectivities (COMs) of French Polynesia, Wallis and Futuna, and New Caledonia continue to use the CFP franc whose value is strictly linked to that of the euro. In contrast, the five overseas regions used the French franc and now use the euro.
=== Foreign relations ===
France is a founding member of the United Nations and serves as one of the permanent members of the UN Security Council with veto rights. In 2015, it was described as "the best networked state in the world" due to its membership in more international institutions than any other country; these include the G7, World Trade Organization (WTO), the Pacific Community (SPC) and the Indian Ocean Commission (COI). It is an associate member of the Association of Caribbean States (ACS) and a leading member of the Organisation internationale de la Francophonie (OIF) of 84 French-speaking countries.
As a significant hub for international relations, France has the third-largest network of diplomatic missions, behind only China and the United States. It also hosts the headquarters of several international organisations, including the OECD, UNESCO, Interpol, the International Bureau of Weights and Measures, and the OIF.
French foreign policy after World War II has been largely shaped by membership in the European Union, of which it was a founding member. Since the 1960s, France has developed close ties with Germany, and the two countries have become the most influential driving force of the EU. Since 1904, France has maintained an "Entente cordiale" with the United Kingdom, and links between the two countries have strengthened over time, especially militarily.
France is a member of the North Atlantic Treaty Organization (NATO), but under President de Gaulle excluded itself from the joint military command, in protest of the Special Relationship between the United States and Britain, and to preserve the independence of French foreign and security policies. Under Nicolas Sarkozy, France rejoined the NATO joint military command on 4 April 2009.
France retains strong political and economic influence in its former African colonies (Françafrique) and has supplied economic aid and troops for peacekeeping missions in Ivory Coast and Chad. From 2012 to 2021, France and other African states intervened in support of the Malian government in the Northern Mali conflict.
In 2017, France was the world's fourth-largest donor of development aid in absolute terms, behind the United States, Germany, and the United Kingdom. This represents 0.43% of its GNP, the 12th highest among the OECD. Aid is provided by the governmental French Development Agency, which finances primarily humanitarian projects in sub-Saharan Africa, with an emphasis on "developing infrastructure, access to health care and education, the implementation of appropriate economic policies and the consolidation of the rule of law and democracy".
=== Military ===
The French Armed Forces (Forces armées françaises) are the military and paramilitary forces of France, under the President of the Republic as supreme commander. They consist of the French Army (Armée de Terre), the French Navy (Marine Nationale, formerly called Armée de Mer), the French Air and Space Force (Armée de l'Air et de l'Espace), and the National Gendarmerie (Gendarmerie nationale), which serves as both military police and civil police in rural areas. Together they are among the largest armed forces in the world and the largest in the EU. According to a 2015 study by Crédit Suisse, the French Armed Forces ranked as the world's sixth-most powerful military, and the second most powerful in Europe. France's annual military expenditure in 2023 was US$61.3 billion, or 2.1% of its GDP, making it the eighth biggest military spender in the world. There has been no national conscription since 1997.
France has been a recognised nuclear state since 1960. It is a party to both the Comprehensive Nuclear-Test-Ban Treaty (CTBT) and the Nuclear Non-Proliferation Treaty. The French nuclear force (formerly known as "Force de Frappe") consists of four Triomphant class submarines equipped with submarine-launched ballistic missiles. In addition to the submarine fleet, it is estimated that France has about 60 ASMP medium-range air-to-ground missiles with nuclear warheads; 50 are deployed by the Air and Space Force using the Mirage 2000N long-range nuclear strike aircraft, while around 10 are deployed by the French Navy's Super Étendard Modernisé (SEM) attack aircraft, which operate from the nuclear-powered aircraft carrier Charles de Gaulle (R91).
France has major military industries and one of the largest aerospace sectors in the world. The country has produced such equipment as the Rafale fighter, the Charles de Gaulle aircraft carrier, the Exocet missile and the Leclerc tank among others. France is a major arms seller, with most of its arsenal's designs available for the export market, except for nuclear-powered devices.
One French intelligence unit, the Directorate-General for External Security, is considered to be a component of the Armed Forces under the authority of the Ministry of Defence. The other, the Directorate-General for Internal Security operates under the authority of the Ministry of the Interior. France's cybersecurity capabilities are regularly ranked as some of the most robust of any nation in the world.
French arms exports totalled €27 billion in 2022, up from €11.7 billion in 2021; the UAE alone accounted for more than €16 billion of that total. Among the largest French defence companies are Dassault, Thales and Safran.
=== Law ===
France uses a civil legal system, wherein law arises primarily from written statutes; judges are not to make law, but merely to interpret it (though the amount of judicial interpretation in certain areas makes it equivalent to case law in a common law system). Basic principles of the rule of law were laid down in the Napoleonic Code (which was largely based on royal law codified under King Louis XIV). In agreement with the principles of the Declaration of the Rights of Man and of the Citizen, the law should only prohibit actions detrimental to society.
French law is divided into two principal areas: private law and public law. Private law includes, in particular, civil law and criminal law. Public law includes, in particular, administrative law and constitutional law. However, in practical terms, French law comprises three principal areas of law: civil law, criminal law, and administrative law. Criminal laws can only address the future and not the past (criminal ex post facto laws are prohibited). While administrative law is often a subcategory of civil law in many countries, it is completely separated in France and each body of law is headed by a specific supreme court: ordinary courts (which handle criminal and civil litigation) are headed by the Court of Cassation and administrative courts are headed by the Council of State. To be applicable, every law must be officially published in the Journal officiel de la République française.
France does not recognise religious law as a motivation for the enactment of prohibitions; it has long abolished blasphemy laws and sodomy laws (the latter in 1791). However, "offences against public decency" (contraires aux bonnes mœurs) or disturbing public order (trouble à l'ordre public) have been used to repress public expressions of homosexuality or street prostitution.
France generally has a positive reputation regarding LGBTQ rights. Since 1999, civil unions for homosexual couples have been permitted, and since 2013, same-sex marriage and LGBT adoption are legal. Some consider hate speech laws in France to be too broad or severe, undermining freedom of speech.
France has laws against racism and antisemitism, while the 1990 Gayssot Act prohibits Holocaust denial. In 2024, France became the first nation in the European Union to explicitly protect abortion in its constitution.
Freedom of religion is constitutionally guaranteed by the 1789 Declaration of the Rights of Man and of the Citizen. The 1905 French law on the Separation of the Churches and the State is the basis for laïcité (state secularism): the state does not formally recognise any religion, except in Alsace-Moselle, which continues to subsidize education and clergy of Catholicism, Lutheranism, Calvinism, and Judaism. Nonetheless, France does recognise religious associations. The Parliament has listed many religious movements as dangerous cults since 1995 and has banned wearing conspicuous religious symbols in schools since 2004. In 2010, it banned the wearing of face-covering Islamic veils in public; human rights groups such as Amnesty International and Human Rights Watch described the law as discriminatory towards Muslims. However, it is supported by most of the population.
== Economy ==
France has a social market economy characterised by sizeable government involvement and diversified sectors. For roughly two centuries, the French economy has consistently ranked among the ten largest globally; it is currently the world's ninth largest by purchasing power parity, the seventh largest by nominal GDP, and the second largest in the European Union by both metrics. France is considered a great power with considerable economic strength, being a member of the Group of Seven leading industrialised countries, the Organisation for Economic Co-operation and Development (OECD), and the Group of Twenty largest economies. France ranked 12th in the 2024 Global Innovation Index, compared to 16th in 2019.
France's economy is highly diversified; services represent two-thirds of both the workforce and GDP, while the industrial sector accounts for a fifth of GDP and a similar proportion of employment. France is the third-biggest manufacturing country in Europe, behind Germany and Italy, and ranks eighth in the world by manufacturing output, accounting for 1.9 per cent of the global total. Less than 2 per cent of GDP is generated by the primary sector, namely agriculture; however, France's agricultural sector is among the largest in value and leads the EU in overall production.
In 2018, France was the fifth-largest trading nation in the world and the second-largest in Europe, with the value of exports representing over a fifth of GDP. Its membership in the eurozone and the broader European single market facilitates access to capital, goods, services, and skilled labour. Despite protectionist policies over certain industries, particularly in agriculture, France has generally played a leading role in fostering free trade and commercial integration in Europe to enhance its economy. In 2019, it ranked first in Europe and 13th in the world in foreign direct investment, with European countries and the United States being leading sources. According to the Bank of France (founded in 1800), the leading recipients of FDI were manufacturing, real estate, finance and insurance. The Paris Region has the highest concentration of multinational firms in mainland Europe.
Under the doctrine of Dirigisme, the government historically played a major role in the economy; policies such as indicative planning and nationalisation are credited for contributing to three decades of unprecedented postwar economic growth known as Trente Glorieuses. At its peak in 1982, the public sector accounted for one-fifth of industrial employment and over four-fifths of the credit market. Beginning in the late 20th century, France loosened regulations and state involvement in the economy, with most leading companies now being privately owned; state ownership now dominates only transportation, defence and broadcasting. Policies aimed at promoting economic dynamism and privatisation have improved France's economic standing globally: it is among the world's 10 most innovative countries in the 2020 Bloomberg Innovation Index, and the 15th most competitive, according to the 2019 Global Competitiveness Report (up two places from 2018).
The Paris stock exchange (French: La Bourse de Paris) is one of the oldest in the world, created in 1724. In 2000, it merged with counterparts in Amsterdam and Brussels to form Euronext, which in 2007 merged with the New York Stock Exchange to form NYSE Euronext, the world's largest stock exchange. Euronext Paris, the French branch of NYSE Euronext, is Europe's second-largest stock exchange market. Some examples of the most valuable French companies include LVMH, L'Oréal and Société Générale.
France has historically been one of the world's major agricultural centres and remains a "global agricultural powerhouse"; France is the world's sixth-biggest exporter of agricultural products, generating a trade surplus of over €7.4 billion. Nicknamed "the granary of the old continent", over half its total land area is farmland, of which 45 per cent is devoted to permanent field crops such as cereals. The country's diverse climate, extensive arable land, modern farming technology, and EU subsidies have made it Europe's leading agricultural producer and exporter.
=== Tourism ===
With 100 million international tourist arrivals in 2023, France is the world's top tourist destination, ahead of Spain (85.2 million) and the United States (66.5 million). However, it ranks third in tourism-derived income due to the shorter duration of visits. The most popular tourist sites include (annual visitors): Eiffel Tower (6.2 million), Château de Versailles (2.8 million), Muséum national d'Histoire naturelle (2 million), Pont du Gard (1.5 million), Arc de Triomphe (1.2 million), Mont Saint-Michel (1 million), Sainte-Chapelle (683,000), Château du Haut-Kœnigsbourg (549,000), Puy de Dôme (500,000), Musée Picasso (441,000), and Carcassonne (362,000).
France, especially Paris, has some of the world's largest museums, including the Louvre, which is the most visited art museum in the world (7.7 million visitors in 2022), the Musée d'Orsay (3.3 million), mostly devoted to Impressionism, the Musée de l'Orangerie (1.02 million), which is home to eight large Water Lily murals by Claude Monet, as well as the Centre Georges Pompidou (3 million), dedicated to contemporary art. Disneyland Paris is Europe's most popular theme park, with 15 million combined visitors to the resort's Disneyland Park and Walt Disney Studios Park in 2009. With more than 10 million tourists a year, the French Riviera (French: Côte d'Azur), in Southeast France, is the second leading tourist destination in the country, after the Paris Region. With 6 million tourists a year, the castles of the Loire Valley (French: châteaux) and the Loire Valley itself are the third leading tourist destination in France.
France has 52 sites inscribed in UNESCO's World Heritage List and features cities of high cultural interest, beaches and seaside resorts, ski resorts, as well as rural regions that many enjoy for their beauty and tranquillity (green tourism). Small and picturesque French villages are promoted through the association Les Plus Beaux Villages de France (literally "The Most Beautiful Villages of France"). The "Remarkable Gardens" label is a list of the over 200 gardens classified by the Ministry of Culture. This label is intended to protect and promote remarkable gardens and parks. France attracts many religious pilgrims on their way to St. James, or to Lourdes, a town in the Hautes-Pyrénées that hosts several million visitors a year.
=== Energy ===
France is the world's tenth-largest producer of electricity. Électricité de France (EDF), which is majority-owned by the French government, is the country's main producer and distributor of electricity, and one of the world's largest electric utility companies, ranking third in revenue globally. In 2018, EDF produced roughly one-fifth of the European Union's electricity, primarily from nuclear power. In 2021, France was the biggest energy exporter in Europe, mostly to the UK and Italy, and the largest net exporter of electricity in the world.
Since the 1973 oil crisis, France has pursued a strong policy of energy security, namely through heavy investment in nuclear energy. It is one of 32 countries with nuclear power plants, ranking second in the world by the number of operational nuclear reactors, at 56. Consequently, 70% of France's electricity is generated by nuclear power, the highest proportion in the world by a wide margin; only Slovakia and Ukraine also derive a majority of electricity from nuclear power, at roughly 53% and 51%, respectively. France is considered a world leader in nuclear technology, with reactors and fuel products being major exports.
France's significant reliance on nuclear power has resulted in comparatively slower adoption of renewable energy relative to other Western nations. Nevertheless, between 2008 and 2019, France's production capacity from renewable energies rose consistently and nearly doubled. Hydropower is by far the leading source, accounting for over half the country's renewable energy sources and contributing 13% of its electricity, the third-highest proportion in Europe, after Norway and Turkey. As with nuclear power, most hydroelectric plants, such as Eguzon, Étang de Soulcem, and Lac de Vouglans, are managed by EDF. France aims to further expand hydropower into 2040.
=== Transport ===
France's railway network, which stretches 29,473 kilometres (18,314 mi) as of 2008, is the second most extensive in Western Europe after Germany. It is operated by the SNCF, and high-speed trains include the Thalys, the Eurostar and TGV, which travels at 320 km/h (199 mph). The Eurostar, along with the Eurotunnel Shuttle, connects with the United Kingdom through the Channel Tunnel. Rail connections exist to all other neighbouring countries in Europe except Andorra. Intra-urban connections are also well developed, with most major cities having underground or tramway services complementing bus services.
There are approximately 1,027,183 kilometres (638,262 mi) of serviceable roadway in France, making it the most extensive road network on the European continent. The Paris Region is enveloped with the densest network of roads and highways, which connect it with virtually all parts of the country. French roads also handle substantial international traffic, connecting with cities in neighbouring Belgium, Luxembourg, Germany, Switzerland, Italy, Spain, Andorra and Monaco. There is no annual registration fee or road tax; however, use of the mostly privately owned motorways requires tolls, except in the vicinity of large communes. The new car market is dominated by domestic brands such as Renault, Peugeot and Citroën. France possesses the Millau Viaduct, the world's tallest bridge, and has built many important bridges such as the Pont de Normandie. Diesel and petrol-driven cars and lorries cause a large part of the country's air pollution and greenhouse gas emissions.
There are 464 airports in France. Charles de Gaulle Airport, located in the vicinity of Paris, is the largest and busiest airport in the country, handling the vast majority of popular and commercial traffic and connecting Paris with virtually all major cities across the world. Air France is the national carrier airline, although numerous private airline companies provide domestic and international travel services. There are ten major ports in France, the largest of which is in Marseille, which also is the largest bordering the Mediterranean Sea. 12,261 kilometres (7,619 mi) of waterways traverse France including the Canal du Midi, which connects the Mediterranean Sea to the Atlantic Ocean through the Garonne river.
== Demographics ==
With an estimated population of 68,605,616 people, France is the 20th most populous country in the world, the third-most populous in Europe (after Russia and Germany), and the second most populous in the European Union (after Germany).
For much of the 21st century, France has been an outlier among developed countries, particularly in Europe, for its relatively high rate of natural population growth; by birth rates alone, it was responsible for almost all natural population growth in the European Union in 2006. Between 2006 and 2016, France saw the second-highest overall increase in population in the EU and was one of only four EU countries where natural births accounted for the most population growth. This was the highest rate since the end of the baby boom in 1973 and coincides with the rise in the total fertility rate from a nadir of 1.7 in 1994 to 2.0 in 2010.
Since 2011, France's fertility rate has been steadily declining; it stood at 1.79 per woman in 2023, below the replacement rate of 2.1 and well below the high of 4.41 in 1800. France's fertility rate and crude birth rate nonetheless remain the highest in the EU and among the highest in Europe overall, where the average is 1.5. The mean age of French women at the birth of their first child was 29.1, slightly younger than the EU average of 29.7.
Like many developed nations, the French population is aging: the average age is 41.7 years, while roughly one-fifth of French people are 65 or over. It is projected that one in three French people will be over 60 by 2024. Life expectancy at birth is 82.7 years, the 12th highest in the world; French Polynesia and the French region of Réunion ranked fourth and 11th in life expectancy, at 84.07 years and 83.55, respectively.
From 2006 to 2011, population growth averaged 0.6 percent per year; since 2011, annual growth has been between 0.4 and 0.5 percent annually, and France is projected to continue growing until 2044. Immigrants are major contributors to this trend; in 2010, roughly one in four newborns (27 percent) in metropolitan France had at least one foreign-born parent and another 24 percent had at least one parent born outside Europe (excluding French overseas territories). In 2021, the share of children of foreign-born mothers was 23 percent.
=== Major cities ===
France is a highly urbanised country, with its largest cities (in terms of metropolitan area population in 2021) being Paris (13,171,056 inh.), Lyon (2,308,818), Marseille (1,888,788), Lille (1,521,660), Toulouse (1,490,640), Bordeaux (1,393,764), Nantes (1,031,953), Strasbourg (864,993), Montpellier (823,120), and Rennes (771,320). (Note: since its 2020 revision of metropolitan area borders, INSEE considers that Nice is a metropolitan area separate from the Cannes-Antibes metropolitan area; these two combined would have a population of 1,019,905, as of the 2021 census). Rural flight was a perennial political issue throughout most of the 20th century.
=== Ethnic groups ===
Historically, French people were mainly of Celtic-Gallic origin, with a significant admixture of Italic (Romans) and Germanic (Franks) groups reflecting centuries of respective migration and settlement. Through the course of the Middle Ages, France incorporated various neighbouring ethnic and linguistic groups, as evidenced by Breton elements in the west, Aquitanian in the southwest, Scandinavian in the northwest, Alemannic in the northeast, and Ligurian in the southeast.
Large-scale immigration over the last century and a half has led to a more multicultural society; under a principle that began with the French Revolution and was further codified in the French Constitution of 1958, the government is prohibited from collecting data on ethnicity and ancestry; most demographic information is drawn from private sector organisations or academic institutions. In 2004, the Institut Montaigne estimated that within Metropolitan France, 51 million people were White (85% of the population), 6 million were Northwest African (10%), 2 million were Black (3.3%), and 1 million were Asian (1.7%).
A 2008 poll conducted jointly by the Institut national d'études démographiques and the French National Institute of Statistics estimated that the largest minority ancestry groups were Italian (5 million), followed by Northwest African (3–6 million), Sub-Saharan African (2.5 million), Armenian (500,000), and Turkish (200,000). There are also sizeable minorities of other European ethnic groups, namely Spanish, Portuguese, Polish, and Greek. France has a significant Gitan (Romani) population, numbering between 20,000 and 400,000; many foreign Roma are frequently expelled back to Bulgaria and Romania.
=== Immigration ===
It is currently estimated that 40% of the French population is descended at least partially from the different waves of immigration since the early 20th century; between 1921 and 1935 alone, about 1.1 million net immigrants came to France. The next largest wave came in the 1960s when around 1.6 million pieds noirs returned to France following the independence of its Northwest African possessions, Algeria and Morocco. They were joined by numerous former colonial subjects from North and West Africa, as well as numerous European immigrants from Spain and Portugal.
France remains a major destination for immigrants, accepting about 200,000 legal immigrants annually. In 2005, it was Western Europe's leading recipient of asylum seekers, with an estimated 50,000 applications (albeit a 15% decrease from 2004). In 2010, France received about 48,100 asylum applications—placing it among the top five asylum recipients in the world. In subsequent years it saw the number of applications increase, ultimately doubling to 100,412 in 2017. The European Union allows free movement between the member states, although France established controls to curb Eastern European migration. Foreigners' rights are established in the Code of Entry and Residence of Foreigners and of the Right to Asylum. Immigration remains a contentious political issue.
In 2008, the INSEE (National Institute of Statistics and Economic Studies) estimated that the total number of foreign-born immigrants was around 5 million (8% of the population), while their French-born descendants numbered 6.5 million, or 11% of the population. Thus, nearly a fifth of the country's population were either first or second-generation immigrants, of which more than 5 million were of European origin and 4 million of Maghrebi ancestry. In 2008, France granted citizenship to 137,000 persons, mostly from Morocco, Algeria and Turkey. In 2022, more than 320,000 migrants came to France, with the majority coming from Africa.
In 2014, the INSEE reported a significant increase in the number of immigrants coming from Spain, Portugal and Italy between 2009 and 2012. According to the institute, this increase resulted from the 2008 financial crisis. Statistics on Spanish immigrants in France show a growth of 107 per cent between 2009 and 2012, with the population growing from 5,300 to 11,000. Of the total of 229,000 foreigners coming to France in 2012, nearly 8% were Portuguese, 5% British, 5% Spanish, 4% Italian, 4% German, 3% Romanian, and 3% Belgian.
=== Language ===
The official language of France is French, a Romance language derived from Latin. Since 1635, the Académie française has been France's official authority on the French language, although its recommendations carry no legal weight. There are also regional languages spoken in France, such as Occitan, Breton, Catalan, Flemish (Dutch dialect), Alsatian (German dialect), Basque, and Corsican (Italian dialect). Italian was the official language of Corsica until 9 May 1859. Although regional languages do not have the status of official languages, they are recognised by Article 75-1 of the French constitution as part of France's heritage.
The government of France does not regulate the choice of language in publications by individuals, but the use of French is required by law in commercial and workplace communications. In addition to mandating the use of French in the territory of the Republic, the French government tries to promote French in the European Union and globally through institutions such as the Organisation internationale de la Francophonie. Besides French, there exist 77 vernacular minority languages of France, eight spoken in French metropolitan territory and 69 in the French overseas territories.
According to the 2007 Adult Education survey, part of a project by the European Union and carried out in France by the INSEE and based on a sample of 15,350 persons, French was the native language of 87.2% of the total population, or roughly 55.81 million people, followed by Arabic (3.6%, 2.3 million), Portuguese (1.5%, 960,000), Spanish (1.2%, 770,000) and Italian (1.0%, 640,000). Native speakers of other languages made up the remaining 5.2% of the population.
=== Religion ===
France is a secular country in which freedom of religion is a constitutional right. The French policy on religion is based on the concept of laïcité, a strict separation of church and state under which the government and public life are kept completely secular, detached from any religion. The region of Alsace and Moselle, which was part of the German Empire when state secularism was established in France, is an exception to the general French norm since the local law stipulates official status and state funding for Lutheranism, Catholicism, and Judaism.
Catholicism has been the main religion in France for more than a millennium, and it was once the country's state religion. Its role nowadays has been greatly reduced; nevertheless, in 2012, among the 47,000 religious buildings in France, 94% were Catholic churches. After alternating between royal and secular republican governments during the 19th century, France passed the 1905 law on the Separation of the Churches and the State, which established the aforementioned principle of laïcité.
The government is prohibited from recognising specific rights to any religious community (with the exception of legacy statutes like those of military chaplains and the aforementioned local law in Alsace-Moselle). It recognises religious organisations according to formal legal criteria that do not address religious doctrine, and religious organisations are expected to refrain from intervening in policymaking. Some religious groups, such as Scientology, the Children of God, the Unification Church, and the Order of the Solar Temple, are considered cults (sectes in French, which is considered a pejorative term) and are not granted the same status as recognised religions.
=== Health ===
The French health care system is one of universal health care largely financed by government national health insurance. In its 2000 assessment of world health care systems, the World Health Organization ranked France first in overall health care provision. In 2011, France spent 11.6% of its GDP on health care, or US$4,086 per capita, a figure much higher than the average spent by countries in Europe. Approximately 77% of health expenditures are covered by government-funded agencies.
Care is generally free for people affected by chronic diseases such as cancer, AIDS or cystic fibrosis. The life expectancy at birth is 78 years for men and 85 years for women. There are 3.22 physicians for every 1000 inhabitants, and average health care spending per capita was US$4,719 in 2008. As of 2007, approximately 140,000 inhabitants (0.4%) of France are living with HIV/AIDS.
=== Education ===
In 1802, Napoleon created the lycée, the second and final stage of secondary education that prepares students for higher education studies or a profession. Jules Ferry is considered the father of the French modern school, leading reforms in the late 19th century that established free, secular and compulsory education (currently mandatory until the age of 16).
French education is centralised and divided into three stages: primary, secondary, and higher education. The Programme for International Student Assessment, coordinated by the OECD, ranked France's education as near the OECD average in 2018. Schoolchildren in France reported greater concern about the disciplinary climate and behaviour in classrooms compared to other OECD countries.
Higher education is divided between public universities and the prestigious and selective Grandes écoles, such as Sciences Po Paris for political studies, HEC Paris for economics, Polytechnique and the École nationale supérieure des mines de Paris, which produce high-profile engineers, the École des hautes études en sciences sociales for social studies, and the École nationale d'administration for careers in the Grands Corps of the state. The Grandes écoles have been criticised for alleged elitism, as they produce many if not most of France's high-ranking civil servants, CEOs and politicians.
== Culture ==
=== Art ===
The origins of French art were very much influenced by Flemish art and by Italian art at the time of the Renaissance. Jean Fouquet, the most famous medieval French painter, is said to have been the first to travel to Italy and experience the Early Renaissance firsthand. The Renaissance painting School of Fontainebleau was directly inspired by Italian painters such as Primaticcio and Rosso Fiorentino, who both worked in France. Two of the most famous French artists of the Baroque era, Nicolas Poussin and Claude Lorrain, lived in Italy.
French artists developed the Rococo style in the 18th century as a more intimate imitation of the old Baroque style; the works of the court-endorsed artists Antoine Watteau, François Boucher and Jean-Honoré Fragonard are the most representative in the country. The French Revolution brought great changes: Napoleon favoured artists of the Neoclassical style such as Jacques-Louis David, and the highly influential Académie des Beaux-Arts defined the style known as Academism.
In the second part of the 19th century, France's influence over painting grew, with the development of new styles of painting such as Impressionism and Symbolism. The most famous impressionist painters of the period were Camille Pissarro, Édouard Manet, Edgar Degas, Claude Monet and Auguste Renoir. The second generation of impressionist-style painters, Paul Cézanne, Paul Gauguin, Toulouse-Lautrec and Georges Seurat, were also at the avant-garde of artistic evolutions, as well as the fauvist artists Henri Matisse, André Derain and Maurice de Vlaminck.
At the beginning of the 20th century, Cubism was developed by Georges Braque and the Spanish painter Pablo Picasso, living in Paris. Other foreign artists also settled and worked in or near Paris, such as Vincent van Gogh, Marc Chagall, Amedeo Modigliani and Wassily Kandinsky.
There are many art museums in France, the most famous of which is the state-owned Musée du Louvre, which collects artwork from the 18th century and earlier. The Musée d'Orsay was inaugurated in 1986 in the old railway station Gare d'Orsay, in a major reorganisation of national art collections, to gather French paintings from the second half of the 19th century (mainly the Impressionist and Fauvist movements). It was voted the best museum in the world in 2018. Modern works are presented in the Musée National d'Art Moderne, which moved in 1976 to the Centre Georges Pompidou. These three state-owned museums are visited by close to 17 million people a year.
=== Architecture ===
During the Middle Ages, many fortified castles were built by feudal nobles to mark their power. Surviving French castles include Chinon, the Château d'Angers, the massive Château de Vincennes and the so-called Cathar castles. During this era France, like most of Western Europe, used Romanesque architecture.
Gothic architecture, originally named Opus Francigenum meaning "French work", was born in Île-de-France and was the first French style of architecture to be imitated throughout Europe. Northern France is the home of some of the most important Gothic cathedrals and basilicas, the first of these being the Saint Denis Basilica (used as the royal necropolis); other important French Gothic cathedrals are Notre-Dame de Chartres and Notre-Dame d'Amiens. The kings were crowned in another important Gothic church: Notre-Dame de Reims.
The final victory in the Hundred Years' War marked an important stage in the evolution of French architecture. It was the time of the French Renaissance, and several artists from Italy were invited to the French court; many residential palaces were built in the Loire Valley, beginning around 1450 with the Château de Montsoreau. Examples of such residential castles include the Château de Chambord, the Château de Chenonceau and the Château d'Amboise.
Following the Renaissance and the end of the Middle Ages, Baroque architecture replaced the traditional Gothic style. However, in France, Baroque architecture found greater success in the secular domain than in the religious one. In the secular domain, the Palace of Versailles has many Baroque features. Jules Hardouin Mansart, who designed the extensions to Versailles, was one of the most influential French architects of the Baroque era; he is famous for his dome at Les Invalides. Some of the most impressive provincial Baroque architecture is found in places that were not yet French such as Place Stanislas in Nancy. On the military architectural side, Vauban designed some of the most efficient fortresses in Europe and became an influential military architect; as a result, imitations of his works can be found all over Europe, the Americas, Russia and Turkey.
After the Revolution, the Republicans favoured Neoclassicism, although it had been introduced in France before the Revolution with such buildings as the Parisian Panthéon and the Capitole de Toulouse. Built during the First French Empire, the Arc de Triomphe and Sainte-Marie-Madeleine represent the best examples of Empire-style architecture. Under Napoleon III, a new wave of urbanism and architecture emerged; extravagant buildings such as the neo-Baroque Palais Garnier were built. The urban planning of the time was very organised and rigorous, most notably in Haussmann's renovation of Paris. The architecture associated with this era is named Second Empire in English, the term being taken from the Second French Empire. At this time there was a strong Gothic resurgence across Europe and in France; the associated architect was Eugène Viollet-le-Duc. In the late 19th century, Gustave Eiffel designed many bridges, such as the Garabit viaduct, and remains one of the most influential bridge designers of his time, although he is best remembered for the Eiffel Tower.
In the 20th century, the French-Swiss architect Le Corbusier designed several buildings in France. More recently, French architects have combined both modern and old architectural styles. The Louvre Pyramid is an example of modern architecture added to an older building. The most difficult buildings to integrate within French cities are skyscrapers, as they are visible from afar. For instance, in Paris from 1977, new buildings were required to be under 37 metres (121 ft). France's largest financial district is La Défense, where a significant number of skyscrapers are located. Other massive buildings that are a challenge to integrate into their environment are large bridges; an example of the way this has been done is the Millau Viaduct. Famous modern French architects include Jean Nouvel, Dominique Perrault, Christian de Portzamparc and Paul Andreu.
=== Literature and philosophy ===
The earliest French literature dates from the Middle Ages, when what is now known as modern France did not have a single, uniform language. There were several languages and dialects, and writers used their own spelling and grammar. Some authors of French medieval texts, such as Tristan and Iseult and Lancelot-Grail, are unknown. Three famous medieval authors are Chrétien de Troyes, Christine de Pizan (langue d'oïl), and Duke William IX of Aquitaine (langue d'oc). Much medieval French poetry and literature was inspired by the legends of the Carolingian cycle, such as the Song of Roland and the chansons de geste. The Roman de Renart, written in 1175 by Perrout de Saint Cloude, tells the story of the medieval character Reynard ('the Fox') and is another example of early French writing. An important 16th-century writer was François Rabelais, who wrote five popular early picaresque novels. Rabelais was also in regular communication with Marguerite de Navarre, author of the Heptameron. Another 16th-century author was Michel de Montaigne, whose most famous work, Essais, started a literary genre.
French literature and poetry flourished during the 18th and 19th centuries. Denis Diderot is best known as the main editor of the Encyclopédie, whose aim was to sum up all the knowledge of his century and to fight ignorance and obscurantism. In the late 17th century, Charles Perrault was a prolific writer of children's fairy tales, including Puss in Boots, Cinderella, Sleeping Beauty and Bluebeard. In the second half of the 19th century, symbolist poetry was an important movement in French literature, with poets such as Charles Baudelaire, Paul Verlaine and Stéphane Mallarmé.
The 19th century produced many of France's most celebrated writers. Victor Hugo is sometimes seen as "the greatest French writer of all time" for excelling in all literary genres. Hugo's verse has been compared to that of Shakespeare, Dante and Homer. His novel Les Misérables is widely seen as one of the greatest novels ever written, and The Hunchback of Notre Dame has remained immensely popular. Other major authors of that century include Alexandre Dumas (The Three Musketeers and The Count of Monte-Cristo), Jules Verne (Twenty Thousand Leagues Under the Seas), Émile Zola (Les Rougon-Macquart), Honoré de Balzac (La Comédie humaine), Guy de Maupassant, Théophile Gautier and Stendhal (The Red and the Black, The Charterhouse of Parma), whose works are among the best known in France and the world.
In the early 20th century France was a haven for literary freedom. Works banned for obscenity in the US, the UK and other Anglophone nations were published in France decades before they were available in the respective authors' home countries. The French were disinclined to punish literary figures for their writing, and prosecutions were rare. Important writers of the 20th century include Marcel Proust, Louis-Ferdinand Céline, Jean Cocteau, Albert Camus, and Jean-Paul Sartre. Antoine de Saint-Exupéry wrote The Little Prince, which is one of the best selling books in history.
==== Philosophy ====
Medieval philosophy was dominated by Scholasticism until the emergence of Humanism in the Renaissance. Modern philosophy began in France in the 17th century with the philosophy of René Descartes, Blaise Pascal and Nicolas Malebranche. Descartes was the first Western philosopher since ancient times to attempt to build a philosophical system from the ground up rather than building on the work of predecessors. France in the 18th century saw major philosophical contributions from Voltaire, who came to embody the Enlightenment, and Jean-Jacques Rousseau, whose work highly influenced the French Revolution. French philosophers made major contributions to the field in the 20th century, including the existentialist works of Simone de Beauvoir, Camus, and Sartre. Other influential contributions during this time include the moral and political works of Simone Weil, contributions to structuralism, including those of Claude Lévi-Strauss, and the post-structuralist works of Michel Foucault.
=== Music ===
France experienced a musical golden age in the 17th century thanks to Louis XIV, who employed talented musicians and composers in the royal court, among them Marc-Antoine Charpentier, François Couperin, Michel-Richard Delalande, Jean-Baptiste Lully and Marin Marais. After the death of the "Roi Soleil", French musical creation lost dynamism, but in the next century the music of Jean-Philippe Rameau gained prestige; Rameau became the dominant composer of French opera and the leading French composer for the harpsichord.
In the field of classical music, France has produced a number of notable composers such as Gabriel Fauré, Claude Debussy, Maurice Ravel, and Hector Berlioz. Claude Debussy and Maurice Ravel are the most prominent figures associated with Impressionist music. The two composers invented new musical forms and new sounds. Debussy was among the most influential composers of the late 19th and early 20th centuries, and his use of non-traditional scales and chromaticism influenced many composers who followed. His music is noted for its sensory content and frequent usage of atonality. Erik Satie was a key member of the early-20th-century Parisian avant-garde. Francis Poulenc's best-known works are his piano suite Trois mouvements perpétuels (1919), the ballet Les biches (1923), the Concert champêtre (1928) for harpsichord and orchestra, the opera Dialogues des Carmélites (1957) and the Gloria (1959) for soprano, choir and orchestra. In the middle of the 20th century, Maurice Ohana, Pierre Schaeffer and Pierre Boulez contributed to the evolution of contemporary classical music.
French music then followed the rapid emergence of pop and rock music in the middle of the 20th century. Although English-speaking creations achieved popularity in the country, French pop music, known as chanson française, has also remained very popular. Among the most important French artists of the century are Édith Piaf, Georges Brassens, Léo Ferré, Charles Aznavour and Serge Gainsbourg. Modern pop music has seen the rise of popular French hip hop, French rock, techno/funk, and turntablists/DJs. Although there are very few rock bands in France compared to English-speaking countries, bands such as Noir Désir, Mano Negra, Niagara, Les Rita Mitsouko and more recently Superbus, Phoenix and Gojira, or Shaka Ponk, have reached worldwide popularity.
=== Cinema ===
France has strong historical links with cinema: two Frenchmen, Auguste and Louis Lumière (known as the Lumière Brothers), are credited with creating cinema in 1895. The world's first female filmmaker, Alice Guy-Blaché, was also from France. Several important cinematic movements, including the Nouvelle Vague of the late 1950s and 1960s, began in the country. France is noted for having a strong film industry, due in part to protections afforded by the government, and remains a leader in filmmaking, as of 2015 producing more films than any other European country. The nation also hosts the Cannes Festival, one of the most important and famous film festivals in the world.
Apart from its strong and innovative film tradition, France has also been a gathering spot for artists from across Europe and the world. For this reason, French cinema is sometimes intertwined with the cinema of foreign nations. Directors from nations such as Poland (Roman Polanski, Krzysztof Kieślowski, Andrzej Żuławski), Argentina (Gaspar Noé, Edgardo Cozarinsky), Russia (Alexandre Alexeieff, Anatole Litvak), Austria (Michael Haneke) and Georgia (Géla Babluani, Otar Iosseliani) are prominent in the ranks of French cinema. Conversely, French directors such as Luc Besson, Jacques Tourneur and Francis Veber have had prolific and influential careers in other countries, notably the United States. Although the French film market is dominated by Hollywood, France is the nation where American films account for the smallest share of total film revenues, at 50%, compared with 77% in Germany and 69% in Japan. French films account for 35% of total film revenues in France, the highest percentage of national film revenues in the developed world outside the United States, compared with 14% in Spain and 8% in the UK. In 2013, France was the second-largest exporter of films in the world, after the United States.
As part of its advocacy of cultural exception, a political concept of treating culture differently from other commercial products, France succeeded in convincing all EU members to refuse to include culture and audiovisuals in the list of liberalised sectors of the WTO in 1993. This decision was confirmed in a vote by UNESCO in 2005.
=== Fashion ===
Fashion has been an important industry and cultural export of France since the 17th century, and modern "haute couture" originated in Paris in the 1860s. Today, Paris, along with London, Milan, and New York City, is considered one of the world's fashion capitals, and the city is home or headquarters to many of the premier fashion houses. The expression Haute couture is, in France, a legally protected name, guaranteeing certain quality standards.
The association of France with fashion and style (French: la mode) dates largely to the reign of Louis XIV. France renewed its dominance of the high fashion (French: couture or haute couture) industry in the years 1860–1960 through the establishment of the great couturier houses such as Chanel, Dior, and Givenchy. The French perfume industry is the world leader in its sector and is centred on the town of Grasse.
According to 2017 data compiled by Deloitte, the French company LVMH (Moët Hennessy Louis Vuitton) is the largest luxury company in the world by sales, selling more than twice as much as its nearest competitor. Moreover, France possesses three of the top ten luxury goods companies by sales (LVMH, Kering SA and L'Oréal), more than any other country in the world.
=== Media ===
In 2021, regional daily newspapers, such as Ouest-France, Sud Ouest, La Voix du Nord, Dauphiné Libéré, Le Télégramme, and Le Progrès, more than doubled the sales of national newspapers, such as Le Monde, Le Figaro, L'Équipe (sports), Le Parisien, and Les Echos (finance). Free dailies, distributed in metropolitan centres, continue to increase their market share. The sector of weekly magazines includes more than 400 specialised weekly magazines published in the country.
The most influential news magazines are the left-wing Le Nouvel Observateur, centrist L'Express and right-wing Le Point (in 2009 more than 400,000 copies), but the highest circulation numbers for weeklies are attained by TV magazines and by women's magazines, among them Marie Claire and ELLE, which have foreign versions. Influential weeklies also include investigative and satirical papers Le Canard Enchaîné and Charlie Hebdo, as well as Paris Match. As in most industrialised nations, the print media have been affected by a severe crisis with the rise of the internet. In 2008, the government launched a major initiative to help the sector reform and become financially independent, but in 2009 it had to give €600,000 to help the print media cope with the 2008 financial crisis, in addition to existing subsidies.
In 1974, after years of centralised monopoly on radio and television, the governmental agency ORTF was split into several national institutions, but the three already-existing TV channels and four national radio stations remained under state control. It was only in 1981 that the government allowed free broadcasting in the territory.
=== Cuisine ===
French cuisine varies by region. In the north, butter and cream are common ingredients, whereas olive oil is more commonly used in the south. Each region of France has traditional specialities: cassoulet in the southwest, choucroute in Alsace, quiche in Lorraine, beef bourguignon in Burgundy, Provençal tapenade, and so on. France is most famous for its wines and cheeses, which are often named for the territory where they are produced (AOC). A meal typically consists of three courses, entrée ('starter'), plat principal ('main course'), and fromage ('cheese') or dessert, sometimes with a salad served before the cheese or dessert.
French cuisine is also regarded as a key element of the quality of life and the attractiveness of France. A French publication, the Michelin Guide, awards Michelin stars for excellence to a select few establishments. The acquisition or loss of a star can have dramatic effects on the success of a restaurant. By 2006, the Michelin Guide had awarded 620 stars to French restaurants.
In addition to its wine tradition, France is also a major producer of beer and rum. The three main French brewing regions are Alsace (60% of national production), Nord-Pas-de-Calais, and Lorraine. French rum is made in distilleries on islands in the Atlantic and Indian oceans.
=== Sports ===
France hosts "the world's biggest annual sporting event", the Tour de France cycling race. Other popular sports played in France include football, judo, tennis, rugby union and pétanque. France has hosted events such as the 1938 and 1998 FIFA World Cups and the 2007 and 2023 Rugby World Cups. The country also hosted the 1960 European Nations' Cup, UEFA Euro 1984, UEFA Euro 2016 and the 2019 FIFA Women's World Cup. The Stade de France in Saint-Denis is France's largest stadium and was the venue for the 1998 FIFA World Cup and 2007 Rugby World Cup finals. Since 1923, France has been famous for its 24 Hours of Le Mans sports car endurance race. Several major tennis tournaments take place in France, including the Paris Masters and the French Open, one of the four Grand Slam tournaments. French martial arts include savate and fencing.
France has a close association with the modern Olympic Games; it was a French aristocrat, Baron Pierre de Coubertin, who suggested the Games' revival at the end of the 19th century. Paris hosted the second Games in 1900, and France has hosted the Olympics on five further occasions: the 1924 and 2024 Summer Olympics, both in Paris, and three Winter Games (1924 in Chamonix, 1968 in Grenoble and 1992 in Albertville). France introduced the Olympics for deaf people (Deaflympics) in 1924.
Both the national football team and the national rugby union team are nicknamed "Les Bleus". Football is the most popular sport in France, with over 1,800,000 registered players and over 18,000 registered clubs.
The French Open, also called Roland-Garros, is a major tennis tournament held over two weeks between late May and early June at the Stade Roland-Garros in Paris. It is the premier clay court tennis championship event in the world and the second of four annual Grand Slam tournaments.
Rugby union is popular, particularly in Paris and the southwest of France. The national rugby union team has competed at every Rugby World Cup; it takes part in the annual Six Nations Championship.
== See also ==
Outline of France
== External links ==
France at Organisation for Economic Co-operation and Development
France at UCB Libraries GovPubs
France at the EU
Wikimedia Atlas of France
Geographic data related to France at OpenStreetMap
Key Development Forecasts for France from International Futures
=== Economy ===
INSEE
OECD France statistics
=== Government ===
France.fr – official French tourism site (in English)
Gouvernement.fr – official site of the government (in French)
Official site of the French public service – links to various administrations and institutions
Official site of the National Assembly
=== Culture ===
Contemporary French Civilization. Archived 27 August 2007 at the Wayback Machine. Journal, University of Illinois.
FranceGuide. Archived 17 June 2015 at the Wayback Machine. Official site of the French Government Tourist Office.
Free content

Free content, libre content, libre information, or free information is any kind of creative work, such as a work of art, a book, a software program, or any other creative content, for which there are very minimal copyright and other legal limitations on usage, modification and distribution. These are works or expressions which can be freely studied, applied, copied and modified by anyone for any purpose, including, in some cases, commercial purposes. Free content encompasses all works in the public domain and also those copyrighted works whose licenses honor and uphold the definition of free cultural work.
In most countries, the Berne Convention grants copyright holders control over their creations by default. Therefore, copyrighted content must be explicitly declared free by the authors, which is usually accomplished by referencing or including licensing statements from within the work. The right to reuse such a work is granted by the authors in a license known as a free license, a free distribution license, or an open license, depending on the rights assigned. These freedoms given to users in the reuse of works (that is, the right to freely use, study, modify or distribute these works, possibly also for commercial purposes) are often associated with obligations (to cite the original author, to maintain the original license of the reused content) or restrictions (excluding commercial use, banning certain media) chosen by the author. There are a number of standardized licenses offering varied options that allow authors to choose the type of reuse of their work that they wish to authorize or forbid.
== Definition ==
There are a number of different definitions of free content in regular use. Legally, however, free content is very similar to open content; the distinction parallels that between the rival terms free software and open source, which describe ideological rather than legal differences, with the term open source seeking to encompass them all in one movement. For instance, the Open Knowledge Foundation's Open Definition describes "open" as synonymous with the definition of free in the "Definition of Free Cultural Works" (as also in the Open Source Definition and Free Software Definition). For such free/open content, both movements recommend the same three Creative Commons licenses: CC BY, CC BY-SA, and CC0.
== Legal background ==
=== Copyright ===
Copyright is a legal concept that gives the author or creator of a work legal control over the duplication and public performance of their work. In many jurisdictions, this control is limited to a time period, after which the works enter the public domain. Copyright laws are a balance between the rights of creators of intellectual and artistic works and the rights of others to build upon those works. During the copyright term, the author's work may only be copied, modified, or publicly performed with the consent of the author, unless the use is a fair use. First, traditional copyright control limits the use of the author's work to those who either pay royalties to the author for its usage or limit their use to fair use. Second, it limits the use of content whose author cannot be found. Finally, it creates a perceived barrier between authors by limiting derivative works, such as mashups and collaborative content. Although open content has been described as a counterbalance to copyright, open content licenses rely on a copyright holder's power to license their work, as in copyleft, which also utilizes copyright for this purpose.
=== Public domain ===
The public domain is a range of creative works whose copyright has expired or was never established, as well as ideas and facts which are ineligible for copyright. A public domain work is a work whose author has either relinquished it to the public or can no longer claim control over its distribution and usage. As such, any person may manipulate, distribute, or otherwise use the work without legal ramifications. A work in the public domain or released under a permissive license may be referred to as "copycenter".
=== Copyleft ===
Copyleft is a play on the word copyright and describes the practice of using copyright law to remove restrictions on distributing copies and modified versions of a work. The aim of copyleft is to use the legal framework of copyright to enable non-author parties to reuse and, in many licensing schemes, modify content created by an author. Unlike works in the public domain, the author still maintains copyright over the material; however, the author grants a non-exclusive license to any person to distribute, and often modify, the work. Copyleft licenses require that any derivative works be distributed under the same terms and that the original copyright notices be maintained. A symbol commonly associated with copyleft is a reversed copyright symbol, facing the other way: the opening of the C points left rather than right. Unlike the copyright symbol, the copyleft symbol does not have a codified meaning.
== Usage ==
Projects that provide free content exist in several areas of interest, such as software, academic literature, general literature, music, images, video, and engineering. Technology has reduced the cost of publication and reduced the entry barrier sufficiently to allow for the production of widely disseminated materials by individuals or small groups. Projects to provide free literature and multimedia content have become increasingly prominent owing to the ease of dissemination of materials that are associated with the development of computer technology. Such dissemination may have been too costly prior to these technological developments.
=== Media ===
In media, which includes textual, audio, and visual content, free licensing schemes such as some of the licenses made by Creative Commons have allowed for the dissemination of works under a clear set of legal permissions. Not all Creative Commons licenses are entirely free; their permissions may range from very liberal general redistribution and modification of the work to a more restrictive redistribution-only licensing. Since February 2008, Creative Commons licenses which are entirely free carry a badge indicating that they are "approved for free cultural works". Repositories exist which exclusively feature free material and provide content such as photographs, clip art, music, and literature. While extensive reuse of free content from one website in another website is legal, it is usually not sensible because of the duplicate content problem. Wikipedia is amongst the most well-known databases of user-uploaded free content on the web. While the vast majority of content on Wikipedia is free content, some copyrighted material is hosted under fair-use criteria.
=== Software ===
Free and open-source software, often referred to as open source software and free software, is a maturing technology, with companies using it to provide services and technology to both end-users and technical consumers. The ease of dissemination increases modularity, which allows smaller groups to contribute to projects and simplifies collaboration. Some claim that open-source development models offer peer-recognition and collaborative-benefit incentives similar to those in more classical fields such as scientific research, with the resulting social structures leading to decreased production costs.
Given sufficient interest in a software component, peer-to-peer distribution methods may reduce distribution costs, easing the burden of infrastructure maintenance on developers. As distribution is simultaneously provided by consumers, these software distribution models are scalable; that is, the method is feasible regardless of the number of consumers. In some cases, free software vendors may use peer-to-peer technology as a method of dissemination. Project hosting and code distribution are not a problem for most free projects, as a number of providers offer these services free of charge.
=== Engineering and technology ===
Free content principles have been translated into fields such as engineering, where designs and engineering knowledge can be readily shared and duplicated, in order to reduce overheads associated with project development. Open design principles can be applied in engineering and technological applications, with projects in mobile telephony, small-scale manufacture, the automotive industry, and even agricultural areas. Technologies such as distributed manufacturing can allow computer-aided manufacturing and computer-aided design techniques to be able to develop small-scale production of components for the development of new, or repair of existing, devices. Rapid fabrication technologies underpin these developments, which allow end-users of technology to be able to construct devices from pre-existing blueprints, using software and manufacturing hardware to convert information into physical objects.
=== Academia ===
In academic work, the majority of works are not free, although the percentage of works that are open access is growing. Open access refers to online research outputs that are free of all restrictions on access and free of many restrictions on use (e.g. certain copyright and license restrictions). Authors may see open access publishing as a way of expanding the audience able to access their work to allow for greater impact, or support it for ideological reasons. Open access publishers such as PLOS and BioMed Central provide capacity for review and publishing of free works; such publications are currently more common in science than in the humanities. Various funding institutions and governing research bodies have mandated that academics make their works open access in order to qualify for funding, such as the US National Institutes of Health, Research Councils UK (effective 2016) and the European Union (effective 2020).
At an institutional level, some universities, such as the Massachusetts Institute of Technology, have adopted open access publishing by default by introducing their own mandates. Some mandates may permit delayed publication and may charge researchers for open access publishing. For teaching purposes, some universities, including MIT, provide freely available course content, such as lecture notes, video resources and tutorials. This content is distributed via Internet to the general public. Publication of such resources may be either by a formal institution-wide program, or informally, by individual academics or departments.
Open content publication has been seen as a method of reducing costs associated with information retrieval in research, as universities typically pay to subscribe for access to content that is published through traditional means. Subscriptions for non-free content journals may be expensive for universities to purchase, though the articles are written and peer-reviewed by academics themselves at no cost to the publisher. This has led to disputes between publishers and some universities over subscription costs, such as the one that occurred between the University of California and the Nature Publishing Group.
=== Education ===
Free and open content has been used to develop alternative routes towards higher education. Open content is a free way of obtaining higher education that is "focused on collective knowledge and the sharing and reuse of learning and scholarly content." There are multiple projects and organizations that promote learning through open content, including OpenCourseWare and Khan Academy. Some universities, like MIT, Yale, and Tufts are making their courses freely available on the internet.
There are also a number of organizations promoting the creation of openly licensed textbooks such as the University of Minnesota's Open Textbook Library, Connexions, OpenStax College, the Saylor Academy, Open Textbook Challenge, and Wikibooks.
=== Legislation ===
Every country has its own legal system, sustained by its legislation, which consists of documents. In a democratic country, laws are published as open content, in principle free content; but in general there are no explicit licenses attached to the text of each law, so the license must be assumed to be an implied license. Only a few countries attach explicit licenses to their law-documents, such as the UK's Open Government Licence (a CC BY compatible license). In other countries, the implied license comes from the country's own rules (general laws and rules about copyright in government works). The automatic protection provided by the Berne Convention does not apply to the texts of laws: Article 2.4 excludes official texts from that automatic protection. It is also possible to "inherit" the license from context. A country's set of law-documents is made available through national repositories; examples of open law-document repositories include LexML Brazil, Legislation.gov.uk, and N-Lex. In general, a law-document is offered in more than one (open) official version, but the main one is the version published by a government gazette. Law-documents can thus inherit the license expressed by the repository or by the gazette that contains them.
== History ==
=== Origins and Open Content Project ===
The concept of applying free software licenses to content was introduced by Michael Stutz, who in 1997 wrote the paper "Applying Copyleft to Non-Software Information" for the GNU Project. The term "open content" was coined by David A. Wiley in 1998 and evangelized via the Open Content Project, describing works licensed under the Open Content License (a non-free share-alike license, see 'Free content' below) and other works licensed under similar terms.
The website of the Open Content Project once defined open content as 'freely available for modification, use and redistribution under a license similar to those used by the open-source / free software community'. However, such a definition would exclude the Open Content License itself, because that license forbids charging for content, a right required by free and open-source software licenses.
=== 5Rs definition ===
It has since come to describe a broader class of content without conventional copyright restrictions. The openness of content can be assessed under the '5Rs Framework' based on the extent to which it can be retained, reused, revised, remixed and redistributed by members of the public without violating copyright law. Unlike free content and content under open-source licenses, there is no clear threshold that a work must reach to qualify as 'open content'.
The 5Rs are put forward on the Open Content Project website as a framework for assessing the extent to which content is open:
Retain – the right to make, own, and control copies of the content (e.g., download, duplicate, store, and manage)
Reuse – the right to use the content in a wide range of ways (e.g., in a class, in a study group, on a website, in a video)
Revise – the right to adapt, adjust, modify, or alter the content itself (e.g., translate the content into another language)
Remix – the right to combine the original or revised content with other open content to create something new (e.g., incorporate the content into a mashup)
Redistribute – the right to share copies of the original content, your revisions, or your remixes with others (e.g., give a copy of the content to a friend)
This broader definition distinguishes open content from open-source software, since the latter must be available for commercial use by the public. However, it is similar to several definitions for open educational resources, which include resources under noncommercial and verbatim licenses.
=== Successor projects ===
In 2003, David Wiley announced that the Open Content Project had been succeeded by Creative Commons and their licenses; Wiley joined as "Director of Educational Licenses".
In 2005, the Open Icecat project was launched, in which product information for e-commerce applications was created and published under the Open Content License. It was embraced by the tech sector, which was already quite open-source-minded.
In 2006, a Creative Commons' successor project, the Definition of Free Cultural Works, was introduced for free content. It was put forth by Erik Möller, Richard Stallman, Lawrence Lessig, Benjamin Mako Hill, Angela Beesley, and others. The Definition of Free Cultural Works is used by the Wikimedia Foundation. In 2009, the Attribution and Attribution-ShareAlike Creative Commons licenses were marked as "Approved for Free Cultural Works".
==== Open Knowledge Foundation ====
Another successor project is the Open Knowledge Foundation, founded by Rufus Pollock in Cambridge, in 2004 as a global non-profit network to promote and share open content and data.
In 2007 the OKF gave an Open Knowledge Definition for "content such as music, films, books; data be it scientific, historical, geographic or otherwise; government and other administrative information". In October 2014 with version 2.0 Open Works and Open Licenses were defined and "open" is described as synonymous to the definitions of open/free in the Open Source Definition, the Free Software Definition, and the Definition of Free Cultural Works.
A distinct difference is the focus given to the public domain, open access, and readable open formats. The OKF recommends six conformant licenses: three of its own (the Open Data Commons Public Domain Dedication and Licence, the Open Data Commons Attribution License, and the Open Data Commons Open Database License) and the CC BY, CC BY-SA, and CC0 Creative Commons licenses.
== See also ==
Digital rights
Open source
Free education
Free software movement
Freedom of information
Information wants to be free
Open publishing
Open-source hardware
Project Gutenberg
== Explanatory notes ==
== References ==
== Further reading ==
D. Atkins; J. S. Brown; A. L. Hammond (February 2007). A Review of the Open Educational Resources (OER) Movement: Achievements, Challenges, and New Opportunities (PDF). Report to The William and Flora Hewlett Foundation.
Organisation for Economic Co-operation and Development (OECD): Giving Knowledge for Free – The Emergence of Open Educational Resources. 2007, ISBN 92-64-03174-X. (Archived 7 July 2017 at the Wayback Machine)
== External links ==
Media related to Open content at Wikimedia Commons
Free public transport

Free public transport, often called fare-free public transit or zero-fare public transport, is public transport which is fully funded by means other than collecting fares from passengers. It may be funded by national, regional or local government through taxation, and/or by commercial sponsorship by businesses. Alternatively, the concept of "free-ness" may take other forms, such as no-fare access via a card which may or may not be paid for in its entirety by the user.
On 29 February 2020, Luxembourg became the first country in the world to make all public transport in the country (buses, trams, and trains) free to use. On 1 October 2022, Malta made its public transport free on most routes, though unlike in Luxembourg, this applies only to residents.
When a transit line intended to operate with fares first starts service, the operator may elect not to collect fares for an introductory period, to generate interest or to test operations.
== Types ==
=== City-wide systems ===
Several mid-size European cities and many smaller towns around the world have converted their public transportation networks to zero-fare. The city of Hasselt in Belgium is a notable example: fares were abolished in 1997 and ridership was as much as "13 times higher" by 2006. Tallinn, the capital city of Estonia with more than 420,000 inhabitants, switched to free public transport in 2013 after a public vote.
In the U.S. state of Washington, 14 rural transit systems have adopted zero-fare policies, either permanently or through pilots in the 2020s. Fares for passengers aged 18 and younger have been free on most local and inter-city transit systems in the state since September 2022. The program was part of a larger statewide transportation package and also includes inter-city Amtrak trains operated by the state, as well as the Washington State Ferries system.
Kharkiv in Ukraine, with 1,420,000 residents, is the largest city in the world with free public transport; free transport for everyone was introduced in 2022.
Since 2025, local transport in Belgrade, a city with 1,380,000 inhabitants, has been free.
=== Local services ===
Local zero-fare shuttles or inner-city loops are far more common than citywide systems. They often use buses or trams. These may be set up by a city government to ease bottlenecks or fill short gaps in the transport network.
Zero-fare transport is often operated as part of the services offered within a public facility, such as a hospital or university campus shuttle or an airport inter-terminal shuttle.
Some zero-fare services may be built to avoid the need for large transport construction. Port cities where shipping would require very high bridges might provide zero-fare ferries instead. These are free at the point of use, just as the use of a bridge might have been.
Machinery installed within a building or shopping centre can be seen as 'zero-fare transport': elevators, escalators and moving sidewalks are often provided by property owners and funded through the sales of goods and services. Community bicycle programs, providing free bicycles for short-term public use, could also be thought of as zero-fare transport. In Australia, Melbourne and Adelaide have a free tram zone in their CBDs to encourage commuters to leave their cars outside the city centre.
A common example of zero-fare transport is student transport, where students travelling to or from school do not need to pay. The University of Wisconsin–Stevens Point partly funds the Stevens Point Transit system: all students at the university can use the four citywide campus routes and the four other bus routes throughout the city free of charge. The university also funds two late-night bus routes serving the downtown free of charge, with the goal of cutting down drunk driving. The University of Nottingham offers a free Hopper Bus service between its University Park, Jubilee, Sutton Bonington and Royal Derby Hospital campuses, between which no other bus company operates direct routes. However, this service requires passengers to tap their university ID to board, meaning that members of the public cannot ride these buses.
In some regions, transport is free because fare revenue would be lower than the cost of collecting it, as operations are already largely paid for by a government, company or service (for example the BMO railway around Moscow, most of which is used for service transport but officially picks up passengers).
Many large amusement parks have trams servicing large parking lots or distant areas. Disneyland in Anaheim, California, runs a tram from its entrance, across the parking lot, and across the street to its hotel as well as the bus stop for Orange County and Los Angeles local transit buses. Six Flags Magic Mountain in Valencia, California, provides tram service throughout its parking lot.
In July 2017, Dubai announced it would offer free bus services for a short period of time on selected days.
In the northwestern United States, some tribal governments offer free bus service on their respective reservations, including on the Muckleshoot, Spokane, Umatilla and Yakama Indian Reservations.
=== Emergency relief ===
During natural disasters, pandemics, and other area-wide emergencies, some transit agencies offer zero-fare transport.
==== United States ====
Sonoma–Marin Area Rail Transit commuter rail temporarily offered free service for those needing transportation alternatives during the 2017 Tubbs Fire and 2019 Kincade Fire.
Some agencies, including the Central Ohio Transit Authority and King County Metro, offer free public transport during snow emergencies to reduce the number of vehicles on the street.
==== COVID-19 pandemic ====
During the COVID-19 pandemic, several agencies paused the collection of fares to alleviate concerns that the virus could be transmitted on surfaces, to keep travelers from coming into close contact with employees, or to allow rear-door boarding on their vehicles. These agencies are mostly located in smaller cities where the farebox recovery ratio is low, as they could afford to implement this policy without a major hit to revenue. A study was conducted to detail the ways that fare collection during the pandemic varied geographically and demographically. During this time, 63.5% of the 263 public transit agencies studied had suspended fare collection. Geographically, the suspension of fares was common around urban centers such as San Francisco, Los Angeles, Seattle, and New York City, and less common in northwestern states.
== Benefits ==
=== Operational benefits ===
Transport operators can benefit from faster boarding and shorter dwell times, allowing faster timetabling of services. Although some of these benefits can be achieved in other ways, such as off-vehicle ticket sales and modern types of electronic fare collection, zero-fare transport avoids equipment and personnel costs.
Passenger aggression may be reduced. In 2008, bus drivers of the Société des Transports Automobiles (STA) in Essonne held strikes demanding zero-fare transport for this reason, claiming that 90% of aggression incidents were related to refusal to pay the fare.
A randomized controlled trial conducted in Santiago, Chile, found that access to fare-free public transport increased overall travel by 12%, particularly boosting off-peak travel by 23% due to a rise in both public transport and non-motorized trips.
=== Commercial benefits ===
Some zero-fare transport services are funded by private businesses, such as the merchants in a shopping mall, in the hope that doing so will increase sales or other revenue from increased foot traffic or ease of travel. Employers often operate free shuttles as a benefit to their employees, or as part of a congestion mitigation agreement with a local government.
=== Community benefits ===
Zero-fare transport can make the system more accessible and fair for low-income residents. Other benefits are the same as those attributed to public transport generally:
Road traffic can benefit from decreased congestion and faster average road speeds, fewer traffic accidents, easier parking, savings from reduced wear and tear on roads
Increased public access, especially for the poor and low waged, which can in turn benefit social integration, businesses and those looking for work
Environmental and public health benefits including decreased air pollution and noise pollution from road traffic
A review of fare-free public transport (FFPT) studies by Stroud and Bekhit found that only 25% of the studies significantly consider non-dominant groups of the population, leaving extensive knowledge gaps about FFPT impacts on marginalized communities
=== Global benefits ===
Global benefits of zero-fare transport are also the same as those attributed to public transport generally. If use of personal cars is discouraged, zero-fare public transport could mitigate the problems of global warming and oil depletion. On average, cars emit one pound of CO2 per mile driven. Public transport helps to reduce the number of vehicles being driven which results in decreasing carbon emissions. Cars are also responsible for emitting other pollutants such as antifreeze.
== Drawbacks ==
Several large U.S. municipalities have attempted zero-fare systems, but many of these implementations have been judged unsuccessful by policy makers. A 2002 National Center for Transportation Research report suggests that, while transit ridership does tend to increase, there are also some disadvantages:
An increase in vandalism, resulting in increased costs for security and vehicle maintenance
In large transit systems, significant revenue shortfalls unless additional funding was provided
An increase in driver complaints and staff turnover, although farebox-related arguments were eliminated
Slower service overall (not collecting fares has the effect of speeding boarding, but increased crowding tends to swamp out this effect unless additional vehicles are added)
Declines in schedule adherence
This U.S. report suggests that, while ridership does increase overall, the goal of enticing drivers to take transit instead of driving is not necessarily met: because fare-free systems tend to attract a certain number of "problem riders", zero-fare systems may have the unintended effect of convincing some 'premium' riders to go back to driving their cars. The study only examined U.S. cities, however, so its conclusions may be less applicable in other countries with stronger social safety nets and less crime than the large U.S. cities studied.
== Countries with countrywide zero-fare transport ==
Luxembourg was the first country to offer free public transport (trams, trains, and buses) for everyone across the entire country. Since 29 February 2020, all public transport has been free in the country, with the exception of the first class on trains.
Estonia wants to become entirely zero-fare. Counties in Estonia are allowed to make public transport free. Between 2018 and 2024, buses were free of charge in 11 of Estonia's 15 counties. Public transport in Estonia's capital, Tallinn, has been free to local residents since 2013. As of January 2024, free local transport in the counties was largely abolished, but remains available for people up to 19 years of age and those aged 63 and over.
Malta became fare free for all residents on 1 October 2022.
There are UK-wide provisions for free bus travel for senior citizens (aged 60 and over in Scotland, Wales, Northern Ireland and Greater London; state pension age in England). The Scottish government has also offered free bus travel across the country for people under 22 since 31 January 2022, while the Scottish National Blind Person Scheme allows free rail and ferry travel for blind persons. The senior citizens' bus pass also applies to rail and rapid transit (the Tube) in Greater London, Wales, and Northern Ireland.
Romania has made public transportation, including buses, subways and domestic trains, free for all pre-university students. University students, however, only have the option of a 50% discount on individual domestic train tickets or inter-city subscriptions.
In the Netherlands, students with Dutch citizenship get free public transport nationwide on trains, trams, buses and metros. Students at universities and universities of applied sciences must finish their degree within ten years of starting it, or they must pay back the money.
Throughout Spain, from 1 September to 31 December 2022, all multi-trip ticket train journeys on commuter services and medium-distance routes (less than 300 kilometres (190 mi)) were made free of charge.
Since March 2024, the Hungarian national railway company MÁV has not charged passengers aged 65 and over or 14 and under. Buses of the company Volánbusz can also be used free of charge by people in the same age ranges.
== List of towns and cities with area-wide zero-fare transport ==
=== Europe ===
=== Asia ===
=== Americas ===
==== Brazil ====
==== Canada ====
==== United States ====
== Perception and analysis ==
Fare-free transit has been repeatedly demonstrated to increase ridership, especially during non-peak travel periods, and customer satisfaction. Several analyses have shown ridership increased by as much as 15% overall and about 45% during off-peak periods. The effects on public transport operators included schedule-adherence problems because of the increased ridership and more complaints about rowdiness from younger passengers, though direct conflicts with passengers over fare collection were eliminated. When the University of California, Los Angeles covered fares for the university community, ridership increased by 56% in the first year and solo driving fell by 20% (though one older study showed no measurable impact on automobile use).
In the United States, mass transit systems that collect fares are only expected to generate about 10% of the annual revenue themselves, with the remainder covered by either public or private investment and advertisements. Therefore, politicians and social-justice advocacy groups, such as the Swedish network Planka.nu, see zero-fare public transport as a low-cost, high-impact approach to reducing economic inequality. It has also been argued that transportation to and from work is essential to the employer in the managing of work hours, so financing of public transportation should fall to employers rather than private individuals or public funds.
== See also ==
Car-free movement
Effects of the car on societies
Movimento Passe Livre, Brazilian movement campaigning for free public transport
Planka.nu, a Swedish membership network that pays the penalty fare if a member is caught travelling without a ticket
Reduced fare programs
Transport divide
Universal basic services
Universal transit pass
Urban vitality
9-Euro-Ticket (in Germany in June, July and August 2022)
== References ==
== External links ==
freepublictransports.com Network of groups promoting free public transport
freepublictransit.org Advocacy website
Argument against free public transport
Luxembourg to trial free public transport to tackle congestion. Sky News report on YouTube. Published/uploaded on 23 December 2019. |
Fuel economy in aircraft

Fuel economy in aircraft is a measure of the transport energy efficiency of aircraft.
Fuel efficiency is increased with better aerodynamics and by reducing weight, and with improved engine brake-specific fuel consumption and propulsive efficiency or thrust-specific fuel consumption.
Endurance and range can be maximized with the optimum airspeed, and economy is better at optimum altitudes, usually higher. An airline's efficiency depends on its fleet fuel burn, seating density, air cargo and passenger load factor, while operational procedures like maintenance and routing can save fuel.
Average fuel burn of new aircraft fell 45% from 1968 to 2014, a compounded annual reduction of 1.3%, though the rate of reduction varied over time.
In 2018, CO₂ emissions totalled 747 million tonnes for passenger transport, for 8.5 trillion revenue passenger kilometres (RPK), giving an average of 88 grams of CO₂ per RPK; this represents 28 g of fuel per kilometre, or an average fuel consumption of 3.5 L/100 km (67 mpg‑US) per passenger. The worst-performing flights are short trips of 500 to 1,500 kilometres, because the fuel used for takeoff is relatively large compared to the amount expended in the cruise segment, and because less fuel-efficient regional jets are typically used on shorter flights.
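The per-passenger figures above can be reproduced with a short back-of-the-envelope calculation. The CO₂-to-fuel factor (about 3.16 kg of CO₂ per kg of jet fuel burned) and the fuel density (about 0.8 kg/L) are standard approximations assumed here, not values stated in the text:

```python
# Back-of-the-envelope check of the 2018 per-passenger figures.
# Assumed conversion factors (not from the text above):
#   ~3.16 kg of CO2 produced per kg of jet fuel burned
#   jet fuel density of ~0.8 kg/L
co2_total_kg = 747e9   # 747 million tonnes of CO2
rpk = 8.5e12           # revenue passenger kilometres

co2_per_rpk_g = co2_total_kg / rpk * 1000             # grams of CO2 per RPK
fuel_per_rpk_g = co2_per_rpk_g / 3.16                 # grams of fuel per RPK
litres_per_100km = fuel_per_rpk_g / 1000 / 0.8 * 100  # L per 100 passenger-km

print(round(co2_per_rpk_g))        # ≈ 88 g CO2 per RPK
print(round(fuel_per_rpk_g))       # ≈ 28 g fuel per km per passenger
print(round(litres_per_100km, 1))  # ≈ 3.5 L/100 km
```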
New technology can reduce engine fuel consumption, like higher pressure and bypass ratios, geared turbofans, open rotors, hybrid electric or fully electric propulsion; and airframe efficiency with retrofits, better materials and systems and advanced aerodynamics.
== Flight efficiency theory ==
A powered aircraft counters its weight through aerodynamic lift and counters its aerodynamic drag with thrust. The aircraft's maximum range is determined by the level of efficiency with which thrust can be applied to overcome the aerodynamic drag.
=== Aerodynamics ===
A subfield of fluid dynamics, aerodynamics studies the physics of a body moving through the air. As lift and drag are functions of air speed, their relationships are major determinants of an aircraft's design efficiency.
Aircraft efficiency is augmented by maximizing lift-to-drag ratio, which is attained by minimizing parasitic drag, and lift-generated induced drag, the two components of aerodynamic drag. As parasitic drag increases and induced drag decreases with speed, there is an optimum speed where the sum of both is minimal; this is the best glide ratio. For powered aircraft, the optimum glide ratio has to be balanced with thrust efficiency.
Parasitic drag is constituted by form drag and skin-friction drag, and grows with the square of the speed in the drag equation. The form drag is minimized by having the smallest frontal area and by streamlining the aircraft for a low drag coefficient, while skin friction is proportional to the body's surface area, and can be reduced by maximizing laminar flow.
Induced drag can be reduced by decreasing the size of the airframe, fuel and payload weight, and by increasing the wing aspect ratio or by using wingtip devices at the cost of increased structure weight.
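The trade-off described above, with parasitic drag growing as the square of speed and induced drag falling with it, can be sketched numerically. The coefficients below are purely illustrative placeholders, not values for any real aircraft:

```python
# Total drag D(v) = k_p * v^2 (parasitic) + k_i / v^2 (induced).
# Setting dD/dv = 0 gives the minimum-drag speed v* = (k_i / k_p) ** 0.25,
# at which the two drag components are exactly equal.
K_PARASITIC = 0.05   # illustrative parasitic-drag coefficient
K_INDUCED = 8.0e5    # illustrative induced-drag coefficient

def total_drag(v):
    return K_PARASITIC * v**2 + K_INDUCED / v**2

v_best = (K_INDUCED / K_PARASITIC) ** 0.25  # minimum-drag (best-glide) speed
# At v_best the parasitic and induced components are equal, and flying
# either faster or slower than v_best increases total drag.
```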
==== Design speed ====
By increasing efficiency, a lower cruise speed augments the range and reduces the environmental impact of aviation. According to a research project completed in 2024 and focusing on short- to medium-range passenger aircraft, design for subsonic instead of transonic speed (about 15% less speed) with turboprop instead of turbofan propulsion would save 21% of fuel compared to an aircraft of conventional design speed and similar characteristics in terms of size, range and expected general technology improvements. Another analysis, from 2014, compared the 2009 Airbus A320 with a hypothetical turboprop successor flying at a 33% lower Mach number, concluding that the slower aircraft would consume 36% less fuel. Both studies state that the fuel-cost savings enabled by a lower design speed would outweigh the increase in time-related costs and the decrease in revenue passenger miles flown per day, respectively. In other words, subsonic turboprop aircraft would be more profitable than transonic turbofan aircraft even at current energy prices, without additional climate-related costs such as emission fees, aviation fuel taxation or higher prices for sustainable aviation fuels compared to fossil kerosene.
For supersonic flight, drag increases at Mach 1.0 but decreases again after the transition. With a specifically designed aircraft, such as the (discontinued) Aerion AS2, the Mach 1.1 range at 3,700 nmi is 70% of the maximum range of 5,300 nmi at Mach 0.95, but increases to 4,750 nmi at Mach 1.4 for 90% before falling again.
==== Wingtip devices ====
Wingtip devices increase the effective wing aspect ratio, lowering the lift-induced drag caused by wingtip vortices and improving the lift-to-drag ratio without increasing the wingspan. (Wingspan is limited by the available width in the ICAO Aerodrome Reference Code.) Airbus has installed wingtip fences on its aircraft since the A310-300 in 1985, and Sharklet blended winglets for the A320 were launched during the November 2009 Dubai Airshow. They add 200 kilograms (440 lb) but offer a 3.5% fuel burn reduction on flights over 2,800 km (1,500 nmi).
On average, among large commercial jets, Boeing 737-800s benefit the most from winglets. They average a 6.69% increase in efficiency, but depending on the route their fuel savings range from 4.6% to 10.5%. Airbus A319s see the most consistent fuel and emissions savings from winglets. Airbus A321s average a 4.8% improvement in fuel consumption, but have the widest swing based on routes and individual aircraft, seeing anywhere from a 0.2% improvement to 10.75%.
=== Weight ===
As the weight indirectly generates lift-induced drag, its minimization leads to better aircraft efficiency. For a given payload, a lighter airframe generates a lower drag. Minimizing weight can be achieved through the airframe's configuration, materials science and construction methods. To obtain a longer range, a larger fuel fraction of the maximum takeoff weight is needed, adversely affecting efficiency.
The deadweight of the airframe and fuel is non-payload that must be lifted to altitude and kept aloft, contributing to fuel consumption. A reduction in airframe weight enables the use of smaller, lighter engines. The weight savings in both allow for a lighter fuel load for a given range and payload. A rule-of-thumb is that a reduction in fuel consumption of about 0.75% results from each 1% reduction in weight.
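The rule of thumb above can be illustrated with a trivial calculation; the 2% weight reduction used here is a hypothetical example value:

```python
# Applying the ~0.75%-fuel-per-1%-weight rule of thumb quoted above.
FUEL_PER_WEIGHT_FACTOR = 0.75

def fuel_saving_pct(weight_reduction_pct):
    # Estimated % fuel saving for a given % reduction in aircraft weight.
    return FUEL_PER_WEIGHT_FACTOR * weight_reduction_pct

# A hypothetical 2% lighter aircraft burns roughly 1.5% less fuel:
print(fuel_saving_pct(2))  # 1.5
```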
The payload fraction of modern twin-aisle aircraft is 18.4% to 20.8% of their maximum take-off weight, while single-aisle airliners are between 24.9% and 27.7%. An aircraft weight can be reduced with light-weight materials such as titanium, carbon fiber and other composite plastics if the expense can be recouped over the aircraft's lifetime. Fuel efficiency gains reduce the fuel carried, reducing the take-off weight for a positive feedback. For example, the Airbus A350 design includes a majority of light-weight composite materials. The Boeing 787 Dreamliner was the first airliner with a mostly composite airframe.
==== Flight distance ====
For long-haul flights, the airplane needs to carry additional fuel, leading to higher fuel consumption. Above a certain distance it becomes more fuel-efficient to make a halfway stop to refuel, despite the energy losses in descent and climb. For example, a Boeing 777-300 reaches that point at 3,000 nautical miles (5,600 km). It is more fuel-efficient to make a non-stop flight at less than this distance and to make a stop when covering a greater total distance.
Very long non-stop passenger flights suffer from the weight penalty of the extra fuel required, which means limiting the number of available seats to compensate. For such flights, the critical fiscal factor is the quantity of fuel burnt per seat-nautical mile. For these reasons, the world's longest commercial flights were cancelled c. 2013. An example is Singapore Airlines' former New York to Singapore flight, which could carry only 100 passengers (all business class) on the 10,300-mile (16,600 km) flight. According to an industry analyst, "It [was] pretty much a fuel tanker in the air." Singapore Airlines Flights 21 and 22 were re-launched in 2018 with more seats in an A350-900ULR.
In the late 2000s and early 2010s, rising fuel prices coupled with the 2008 financial crisis and the Great Recession caused the cancellation of many ultra-long-haul, non-stop flights. This included the services provided by Singapore Airlines from Singapore to both Newark and Los Angeles, which were ended in late 2013. As fuel prices decreased and more fuel-efficient aircraft came into service, many ultra-long-haul routes were reinstated or newly scheduled (see Longest flights).
=== Propulsive efficiency ===
The efficiency can be defined as the amount of energy imparted to the plane per unit of energy in the fuel. The rate at which energy is imparted equals thrust multiplied by airspeed.
To get thrust, an aircraft engine is either a shaft engine – piston engine or turboprop, with its efficiency inversely proportional to its brake-specific fuel consumption – coupled with a propeller having its own propulsive efficiency; or a jet engine with its efficiency given by its airspeed divided by the thrust-specific fuel consumption and the specific energy of the fuel.
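As a rough numerical sketch of the jet-engine relation above (the speed, TSFC and fuel energy figures below are illustrative assumptions, not values from this article):

```python
# Overall jet efficiency = airspeed / (TSFC * fuel specific energy),
# per the relation described above. All numbers are illustrative.

def jet_overall_efficiency(airspeed_m_s, tsfc_kg_per_n_s, specific_energy_j_per_kg):
    """Fraction of fuel energy converted into work moving the aircraft."""
    return airspeed_m_s / (tsfc_kg_per_n_s * specific_energy_j_per_kg)

# An assumed cruise at 250 m/s with a TSFC of 15 g/(kN*s), i.e. 1.5e-5 kg/(N*s),
# and jet fuel at ~43 MJ/kg yields an overall efficiency of roughly 0.39.
eta = jet_overall_efficiency(250.0, 1.5e-5, 43e6)
```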
Turboprops have an optimum speed below 460 miles per hour (740 km/h). This is slower than the jets used by major airlines today; however, propeller aircraft are much more fuel-efficient. The Bombardier Dash 8 Q400 turboprop is used as a regional airliner for this reason.
Jet fuel cost and emissions reduction have renewed interest in the propfan concept for jetliners, with an emphasis on engine/airframe efficiency, that might come into service beyond the Boeing 787 and Airbus A350 XWB. For instance, Airbus has patented aircraft designs with twin rear-mounted counter-rotating propfans. Propfans bridge the gap between turboprops, which lose efficiency beyond Mach 0.5–0.6, and high-bypass turbofans, which are more efficient beyond Mach 0.8. NASA conducted an Advanced Turboprop Project (ATP), which researched a variable-pitch propfan that produced less noise and achieved high speeds.
== Operations ==
In Europe in 2017, the average airline fuel consumption per passenger was 3.4 L/100 km (69 mpg‑US), 24% less than in 2005, but as the traffic grew by 60% to 1,643 billion passenger kilometres, CO₂ emissions were up by 16% to 163 million tonnes for 99.8 g/km CO₂ per passenger.
In 2018, US airlines had a fuel consumption of 58 mpg‑US (4.06 L/100 km) per revenue passenger for domestic flights, or 32.5 g of fuel per km, generating 102 g CO₂/RPK of emissions.
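The conversion between these figures can be sketched as follows, assuming approximate constants of 0.8 kg/L for jet fuel density and about 3.15 kg of CO₂ emitted per kg of jet fuel burned (these two constants are assumptions, not stated in the text):

```python
# Sketch: convert per-passenger fuel burn (L/100 km) into fuel mass
# and CO2 intensity per revenue passenger kilometre (RPK).
# Assumed constants: jet fuel density ~0.8 kg/L and ~3.15 kg CO2 per kg fuel.
FUEL_DENSITY_KG_PER_L = 0.8
CO2_KG_PER_KG_FUEL = 3.15

def co2_per_rpk(litres_per_100km_per_pax):
    """Return (g fuel/km, g CO2/km) per revenue passenger kilometre."""
    fuel_g_per_km = litres_per_100km_per_pax * FUEL_DENSITY_KG_PER_L * 1000 / 100
    return fuel_g_per_km, fuel_g_per_km * CO2_KG_PER_KG_FUEL

fuel, co2 = co2_per_rpk(4.06)  # the 2018 US domestic figure
# fuel ~32.5 g/km and co2 ~102 g CO2/RPK, consistent with the reported values
```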
=== Seating classes ===
In 2013, the World Bank evaluated the business class carbon footprint as 3.04 times higher than economy class in wide-body aircraft, and first class 9.28 times higher, due to premium seating taking more space, lower weight factors, and larger baggage allowances (assuming Load Factors of 80% for Economy Class, 60% for Business Class, and 40% for First Class).
=== Speed ===
At constant propulsive efficiency, the maximum range speed is when the ratio between velocity and drag is minimal, while maximum endurance is attained at the best lift-to-drag ratio.
=== Altitude ===
Air density decreases with altitude, thus lowering drag, assuming the aircraft maintains a constant equivalent airspeed. However, air pressure and temperature both decrease with altitude, causing the maximum power or thrust of aircraft engines to reduce. To minimize fuel consumption, an aircraft should cruise close to the maximum altitude at which it can generate sufficient lift to maintain its altitude. As the aircraft's weight decreases throughout the flight, due to fuel burn, its optimum cruising altitude increases.
In a piston engine, the decrease in pressure at higher altitudes can be mitigated by the installation of a turbocharger.
Decreasing temperature at higher altitudes increases thermal efficiency.
=== Airlines ===
From early 2006 until 2008, Scandinavian Airlines flew more slowly, at 780 km/h instead of 860 km/h, to save on fuel costs and curb emissions of carbon dioxide.
From 2010 to 2012, the most fuel-efficient US domestic airline was Alaska Airlines, due partly to its regional affiliate Horizon Air flying turboprops.
In 2014, MSCI ranked Ryanair as the lowest-emissions-intensity airline in its ACWI index with 75 g CO2-e/revenue passenger kilometre – below Easyjet at 82 g, the average at 123 g and Lufthansa at 132 g – by using high-density 189-seat Boeing 737-800s. In 2015, Ryanair emitted 8.64 million tonnes of CO2 over 545,034 sectors flown: 15.85 t per 776-mile (674 nmi; 1,249 km) average sector (or 5.04 t of fuel: 4.04 kg/km), representing 95 kg per passenger across its 90.6 million passengers (30.4 kg of fuel: 3.04 L/100 km or 76 g CO2/km).
In 2016, over the transpacific routes, the average fuel consumption was 31 pax-km per L (3.23 L/100 km [73 mpg‑US] per passenger). The most fuel-efficient were Hainan Airlines and ANA with 36 pax-km/L (2.78 L/100 km [85 mpg‑US] per passenger) while Qantas was the least efficient at 22 pax-km/L (4.55 L/100 km [51.7 mpg‑US] per passenger).
The key drivers of efficiency were the air freight share (explaining 48% of the variation), seating density (24%), aircraft fuel burn (16%) and passenger load factor (12%).
That same year, Cathay Pacific and Cathay Dragon consumed 4,571,000 tonnes of fuel to transport 123,478 million revenue passenger kilometers, or 37 g/RPK, 25% better than in 1998: 4.63 L/100 km (50.8 mpg‑US).
Again in 2016, the Aeroflot Group's fuel consumption was 22.9 g/ASK, or 2.86 L/100 km (82 mpg‑US) per seat and 3.51 L/100 km (67.0 mpg‑US) per passenger at its 81.5% load factor.
Fuel economy in air transport comes from the fuel efficiency of the aircraft + engine model, combined with airline efficiency: seating configuration, passenger load factor and air cargo. Over the transatlantic route, the most-active intercontinental market, the average fuel consumption in 2017 was 34 pax-km per L (2.94 L/100 km [80 mpg‑US] per passenger). The most fuel-efficient airline was Norwegian Air Shuttle with 44 pax-km/L (2.27 L/100 km [104 mpg‑US] per passenger), thanks to its fuel-efficient Boeing 787-8, a high 85% passenger load factor and a high density of 1.36 seat/m2 due to a low 9% premium seating. On the other side, the least efficient was British Airways at 27 pax-km/L (3.7 L/100 km [64 mpg‑US] per passenger), using fuel-inefficient Boeing 747-400s with a low density of 0.75 seat/m2 due to a high 25% premium seating, in spite of a high 82% load factor.
In 2018, CO₂ emissions totalled 918 Mt with passenger transport accounting for 81% or 744 Mt, for 8.2 trillion revenue passenger kilometres: an average fuel economy of 90.7 g/RPK CO₂ - 29 g/km of fuel (3.61 L/100 km [65.2 mpg‑US] per passenger)
In 2019, Wizz Air stated a CO₂ emission intensity of 57 g/RPK (equivalent to 18.1 g/km of fuel, or 2.27 L/100 km [104 mpg‑US] per passenger), 40% lower than IAG or Lufthansa (95 g CO₂/RPK – 30 g/km of fuel, 3.8 L/100 km [62 mpg‑US] per passenger), owing to the latter's business classes, lower-density seating, and flight connections.
In 2021, the highest seating density in its A330neo, with 459 single-class seats, enabled Cebu Pacific to claim the lowest carbon footprint with 1.4 kg (3 lb) of fuel per seat per 100 km, equivalent to 1.75 L/100 km [134 mpg‑US] per seat.
=== Procedures ===
Continuous Descent Approaches can reduce emissions.
Beyond single-engine taxi, electric taxiing could allow taxiing on APU power alone, with the main engines shut down, to lower the fuel burn.
Airbus presented the following measures to save fuel, in its example of an Airbus A330 flying 2,500 nautical miles (4,600 km) on a route like Bangkok–Tokyo: direct routing saves 190 kg (420 lb) fuel by flying 40 km (25 mi) less; 600 kg (1,300 lb) more fuel is consumed if flying 600 m (2,000 ft) below optimum altitude without vertical flight profile optimization; cruising Mach 0.01 above the optimum speed consumes 800 kg (1,800 lb) more fuel; 1,000 kg (2,200 lb) more fuel on board consumes 150 kg (330 lb) more fuel while 100 litres (22 imp gal; 26 US gal) of unused potable water consumes 15 kg (33 lb) more fuel.
Operational procedures can save 35 kg (77 lb) fuel for every 10-minute reduction in use of the Auxiliary power unit (APU), 15 kg (33 lb) with a reduced flap approach and 30 kg (66 lb) with reduced thrust reversal on landing. Maintenance can also save fuel: 100 kg (220 lb) more fuel is consumed without an engine wash schedule; 50 kg (110 lb) with a 5 mm (0.20 in) slat rigging gap, 40 kg (88 lb) with a 10 mm (0.39 in) spoiler rigging gap, and 15 kg (33 lb) with a damaged door seal.
Yield management allows the optimization of the load factor, benefiting fuel efficiency, as does air traffic management optimization.
By taking advantage of wake updraft like migrating birds (biomimicry), Airbus believes an aircraft can save 5-10% of fuel by flying in formation, 1.5–2 nmi (2.8–3.7 km) behind the preceding one.
After Airbus A380 tests showed 12% savings, test flights were scheduled for 2020 with two Airbus A350s, before transatlantic flight trials with airlines in 2021.
Certification for shorter separation is enabled by ADS-B in oceanic airspace, and the only modification required would be flight control systems software.
Comfort would not be affected and trials are limited to two aircraft to reduce complexity but the concept could be expanded to include more.
Commercial operations could begin in 2025 with airline schedule adjustments, and other manufacturers' aircraft could be included.
While routes are up to 10% longer than necessary, modernized air traffic control systems using ADS-B technology like the FAA NextGen or European SESAR could allow more direct routing, but there is resistance from air traffic controllers.
== History ==
=== Past ===
Modern jet aircraft have twice the fuel efficiency of the earliest jet airliners. Late-1950s piston airliners like the Lockheed L-1049 Super Constellation and DC-7 were 1% to 28% more energy-intensive than 1990s jet airliners, which cruise 40 to 80% faster. The early jet airliners were designed at a time when air crew labor costs were high relative to fuel costs. Despite their high fuel consumption, because fuel was inexpensive in that era the higher speed produced favorable economic returns, since crew costs and amortization of capital investment in the aircraft could be spread over more seat-miles flown per day.
Productivity, including speed, went from around 150 ASK/MJ·km/h for the 1930s DC-3 to 550 for the L-1049 in the 1950s, and from 200 for the DH-106 Comet 3 to 900 for the 1990s B737-800.
Today's turboprop airliners have better fuel-efficiency than current jet airliners, in part because of their propellers. In 2012, turboprop airliner usage was correlated with US regional carriers' fuel efficiency.
Jet airliners became 70% more fuel efficient between 1967 and 2007, 40% due to improvements in engine efficiency and 30% from airframes.
Efficiency gains were larger early in the jet age than later, with a 55-67% gain from 1960 to 1980 and a 20-26% gain from 1980 to 2000.
Average fuel burn of new aircraft fell 45% from 1968 to 2014, a compounded annual reduction of 1.3%, though the rate of reduction varied over time.
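The compounding behind that annual figure can be checked with a short calculation (a simple arithmetic check, not from the source):

```python
# Check: a 45% total fall over the 46 years from 1968 to 2014
# compounds to roughly a 1.3% average annual reduction.

def compounded_annual_reduction(total_reduction, years):
    """Equivalent constant annual reduction producing a given total fall."""
    return 1.0 - (1.0 - total_reduction) ** (1.0 / years)

rate = compounded_annual_reduction(0.45, 46)  # ~0.013, i.e. about 1.3% per year
```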
Concorde, a supersonic transport, managed about 17 passenger-miles to the Imperial gallon, which is 16.7 L/100 km per passenger; similar to a business jet, but much worse than a subsonic turbofan aircraft. Airbus states a fuel rate consumption of their A380 at less than 3 L/100 km per passenger (78 passenger-miles per US gallon).
Newer aircraft like the Boeing 787 Dreamliner, Airbus A350 and Bombardier CSeries, are 20% more fuel efficient per passenger kilometre than previous generation aircraft. For the 787, this is achieved through more fuel-efficient engines and lighter composite material airframes, and also through more aerodynamic shapes, winglets, more advanced computer systems for optimising routes and aircraft loading.
A life-cycle assessment based on the Boeing 787 shows a 20% emission saving compared to conventional aluminium airliners, or 14–15% fleet-wide when accounting for a fleet penetration below 100%, while air travel demand would increase due to the lower operating costs.
Lufthansa, when it ordered both, stated the Airbus A350-900 and the Boeing 777X-9 will consume an average of 2.9 L/100 km (81 mpg‑US) per passenger.
The Airbus A321 featuring Sharklet wingtip devices consumes 2.2 L/100 km (110 mpg‑US) per person with a 200-seat layout for WOW Air.
Airbus airliners delivered in 2019 had a carbon intensity of 66.6 g of CO2e per passenger-kilometre, improving to 63.5g in 2020.
== Example values ==
The aviation fuel density used is 6.7 lb/USgal or 0.8 kg/L.
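This density conversion can be verified directly; only standard unit-conversion constants are used beyond the stated 6.7 lb/USgal figure:

```python
# Verify the stated fuel density: 6.7 lb per US gallon expressed in kg per litre.
LB_TO_KG = 0.45359237       # exact definition of the pound in kg
US_GAL_TO_L = 3.785411784   # exact definition of the US gallon in litres

def lb_per_usgal_to_kg_per_l(density_lb_per_usgal):
    return density_lb_per_usgal * LB_TO_KG / US_GAL_TO_L

density = lb_per_usgal_to_kg_per_l(6.7)  # ~0.80 kg/L, matching the text
```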
=== Commuter flights ===
For flights of 300 nmi (560 km):
=== Regional flights ===
For flights of 500–700 nmi (930–1,300 km):
=== Short-haul flights ===
For flights of 1,000 nmi (1,900 km):
=== Medium-haul flights ===
For flights around 2,000–3,000 nmi (3,700–5,600 km), transcontinental (e.g. Washington Dulles – Seattle-Tacoma is 2,000 nmi) to short transatlantic flights (e.g. New York JFK – London-Heathrow is 3,000 nmi).
=== Long-haul flights ===
For flights around 5,000 to 7,000 nmi (9,300 to 13,000 km), including transpacific flights (e.g. Hong Kong – San Francisco International is 6,000 nmi).
For a comparison with ground transportation - much slower and with shorter range than air travel - a Volvo bus 9700 averages 0.41 L/100 km (570 mpg‑US) per seat for 63 seats. In highway travel an average auto has the potential for 1.61 L/100 km (146 mpg‑US) per seat (assuming 4 seats) and for a 5-seat 2014 Toyota Prius, 0.98 L/100 km (240 mpg‑US). While this shows the capabilities of the vehicles, the load factors (percentage of seats occupied) may differ between personal use (commonly just the driver in the car) and societal averages for long-distance auto use, and among those of particular airlines.
=== General aviation ===
For private aircraft in general aviation, the current FAI Aeroplane Efficiency records are:
33.92 km/kg of fuel or 3.9 L/100 km in an Aeroprakt-40 two-seater for 300–500 kg MTOW airplanes (C-1a class): 1.95 L/100 km per seat.
37.22 km/kg of fuel or 3.56 L/100 km in a Monnett Sonerai single-seat racer for 500–1,000 kg MTOW airplanes (C-1b class).
9.19 km/kg or 13.6 L/100 km in a four-seat diesel-powered Cessna 182 for 1,000–1,750 kg MTOW airplanes (C-1c class): 3.4 L/100 km per seat.
3.08 km/kg or 40.6 L/100 km in a Cirrus SF50 seven-seat jet for 1.75–3 t MTOW airplanes (C-1d class): 5.8 L/100 km per seat.
A four-seat Dyn'Aéro MCR4S powered by a Rotax 914 consumes 8.3 L/100 km at 264 km/h (2.1 L/100 km per seat).
=== Business aircraft ===
== Future ==
NASA and Boeing flight-tested a 500 lb (230 kg) blended wing body (BWB) X-48B demonstrator from August 2012 to April 2013. This design provides greater fuel efficiency, since the whole craft produces lift, not just the wings. The BWB concept offers advantages in structural, aerodynamic and operating efficiencies over today's more-conventional fuselage-and-wing designs. These features translate into greater range, fuel economy, reliability and life-cycle savings, as well as lower manufacturing costs. NASA has created a cruise efficient STOL (CESTOL) concept.
Fraunhofer Institute for Manufacturing Engineering and Applied Materials Research (IFAM) have researched a sharkskin-imitating paint that would reduce drag through a riblet effect. Aviation is a major potential application for new technologies such as aluminium metal foam and nanotechnology.
The International Air Transport Association (IATA) technology roadmap envisions improvements in aircraft configuration and aerodynamics. It projects the following reductions in engine fuel consumption, compared to baseline aircraft in service in 2015:
10-15% from higher pressure and bypass ratios, lighter materials, implemented in 2010–2019
20-25% from high pressure core + ultra-high by-pass ratio geared turbofan, from ~2020-25
30% from open rotors (propfans), from ~2030
40-80% from hybrid electric propulsion (depending on battery use), from ~2030-40
up to 100% from fully electric propulsion (primary energy from a renewable source), from ~2035-40.
Moreover, it projects the following gains for aircraft design technologies:
6 to 12% from airframe retrofits (winglets, riblets, lightweight cabin furnishing) currently available
4 to 10% from materials and structures (composite structure, adjustable landing gear, fly-by-wire), also currently available
1 to 4% from electric taxiing from 2020+
5 to 15% from advanced aerodynamics (hybrid/natural laminar flow, variable camber, spiroid wingtip) from 2020–25
30% from strut-braced wings (with advanced turbofan engines, ~2030-35)
35% from a double bubble fuselage like the Aurora D8 (with advanced turbofan engines, ~2035)
30-35% from a box/joined closed wing (with advanced turbofan engines, ~2035-40)
27 to 50% from a blended wing body design (with hybrid propulsion, ~2040)
Up to 100% with fully electric aircraft (short range, ~2035-45)
Today's tube-and-wing configuration could remain in use until the 2030s due to drag reductions from active flutter suppression for slender, flexible wings, and from natural and hybrid laminar flow.
Large, ultra high bypass engines will need upswept gull wings or overwing nacelles as Pratt & Whitney continue to develop their geared turbofan to save a projected 10–15% of fuel costs by the mid-2020s.
NASA indicates this configuration could gain up to 45% with advanced aerodynamics, structures and geared turbofans, but longer term suggests savings of up to 50% by 2025 and 60% by 2030 with new ultra-efficient configurations and propulsion architectures: hybrid wing body, truss-braced wing, lifting body designs, embedded engines, and boundary-layer ingestion.
By 2030 hybrid-electric architectures may be ready for 100 seaters and distributed propulsion with tighter integration of airframe may enable further efficiency and emissions improvements.
Research projects such as Boeing's ecoDemonstrator program have sought to identify ways of improving the fuel economy of commercial aircraft operations. The U.S. government has encouraged such research through grant programs, including the FAA's Continuous Lower Energy, Emissions and Noise (CLEEN) program, and NASA's Environmentally Responsible Aviation (ERA) Project.
Multiple concepts are projected to reduce fuel consumption:
the Airbus/Rolls-Royce E-Thrust is a hybrid electric with a gas turbine engine and electric ducted fans with energy storage allowing peak power for takeoff and climb while for the descent the engine is shut down and the fans recover energy to recharge the batteries;
Empirical Systems Aerospace (ESAero) is developing the 150-seat ECO-150 concept for turboelectric distributed propulsion with two turboshaft engines mounted on the wing and driving generators powering ducted fans embedded in the inboard wing sections, effectively increasing the bypass ratio and propulsive efficiency for 20–30% fuel savings over the Boeing 737 NG, while providing some powered lift;
NASA's single-aisle turbo-electric aircraft with an aft boundary layer propulsor (STARC-ABL) is a conventional tube-and-wing 737-sized airliner with an aft-mounted electric fan ingesting the fuselage boundary layer and a hybrid-electric propulsion system with 5.4 MW of power distributed to three electric motors; the design will be evaluated by Aurora Flight Sciences;
The Boeing blended wing body (BWB) with a wide fuselage mated to high-aspect-ratio wings is more aerodynamically efficient because the entire aircraft contributes to the lift and it has less surface area, producing less drag and offering weight savings due to lower wing loading, while noise is shielded by locating the engines on the aft upper surface;
Developed with the U.S. Air Force Research Laboratory and refined with NASA, the Lockheed Martin Hybrid Wing Body (HWB) combines a blended forward fuselage and wing with a conventional aft fuselage and T-tail for compatibility with existing infrastructure and airdrop; the engines in overwing nacelles on struts over the trailing edge enable higher-bypass-ratio engines with 5% less drag, provide acoustic shielding and increase lift without a thrust or drag penalty at low speed;
Airbus-backed German Bauhaus-Luftfahrt designed the Propulsive Fuselage concept, which reduces drag with a fan in the tail that ingests air flowing over the fuselage via an annular (ring-shaped) inlet and re-energizes the wake, driven through a gearbox or as a turbo-electric configuration;
Conceived by the Massachusetts Institute of Technology for NASA, Aurora Flight Sciences developed the "double-bubble" D8, a 180-seat aircraft with a wide lifting fuselage, twin-aisle cabin to replace A320 and B737 narrowbodies, and boundary-layer ingestion with engines in the tail driving distortion-tolerant fans for a 49% fuel-burn reduction over the B737NG;
The Boeing truss-braced wing (TBW) concept was developed for the NASA-funded Subsonic Ultra Green Aircraft Research program, with an aspect ratio of 19.5 compared to 11 for the Boeing 787: the strut relieves some of the bending moment, so a braced wing can be lighter than a cantilevered wing, or longer for the same weight, giving a better lift-to-drag ratio through lower induced drag; it can also be thinner, facilitating natural laminar flow and reducing wave drag at transonic speeds;
Dzyne Technologies reduces the thickness of the blended wing body for a 110–130-seat super-regional, a configuration usually too thick for a narrowbody replacement and better suited for large aircraft, by placing the landing gear outward and storing baggage in the wing roots, enabling 20% fuel savings;
the French research agency ONERA designed two concepts for a 180-seat airliner, the Versatile Aircraft (NOVA), including turbofans with higher bypass ratios and fan diameters: one a gull wing with increased inboard dihedral to accommodate larger geared turbofans under the wing without lengthening the landing gear, and the other with engines embedded in the tail to ingest the low-energy fuselage boundary-layer flow and re-energize the wake, reducing drag;
with Cranfield University, Rolls-Royce developed the Distributed Open Rotor (DORA) with high-aspect-ratio wing and V-tail to minimize drag, and turbogenerators on the wing driving electric propellers along the inboard leading edge with open rotor high-propulsive efficiency and increasing the effective bypass ratio.
== Climate change ==
The growth of air travel outpaces its fuel-economy improvements and corresponding CO2 emissions, compromising climate sustainability. Although low-cost carriers' higher seat-density increases fuel economy and lowers greenhouse gas emissions per-passenger-kilometer, the lower airfares cause a rebound effect of more flights and larger overall emissions. The tourism industry could shift emphasis to emissions eco-efficiency in CO2 per unit of revenue or profit instead of fuel economy, favoring shorter trips and ground transportation over flying long journeys to reduce greenhouse gas emissions.
== See also ==
Energy efficiency in transport
Range (aeronautics)
== References ==
== External links ==
Air Transport Department, Cranfield University (2008). "Fuel and air transport" (PDF). European Commission.
"Aircraft Technology Roadmap to 2050" (PDF). IATA. 2019.
Scott W. Ashcraft; Andres S. Padron; Kyle A. Pascioni; Gary W. Stout Jr.; Dennis L. Huff (October 2011). "Review of Propulsion Technologies for N+3 Subsonic Vehicle Concepts" (PDF). Glenn Research Center, Cleveland, Ohio. NASA.
"Air Transport and Energy Efficiency" (PDF). World Bank. February 2012.
Elyse Moody (1 March 2012). "Focus on Fuel Savings". Overhaul & Maintenance. Aviation Week.
Yongha Park; Morton E. O'Kelly (December 2014). "Fuel burn rates of commercial passenger aircraft: Variations by seat configuration and stage distance Article". The Ohio State University. Journal of Transport Geography. 41: 137–147. doi:10.1016/j.jtrangeo.2014.08.017.
Irene Kwan and Daniel Rutherford (November 2015). "Transatlantic airline fuel efficiency ranking, 2014" (PDF). International Council on Clean Transportation.
James Albright (27 February 2016). "Getting the Most Miles from Your Jet-A". Business & Commercial Aviation. Aviation Week. |
Gas turbine | A gas turbine or gas turbine engine is a type of continuous flow internal combustion engine. The main parts common to all gas turbine engines form the power-producing part (known as the gas generator or core) and are, in the direction of flow:
a rotating gas compressor
a combustor
a compressor-driving turbine.
Additional components have to be added to the gas generator to suit its application. Common to all is an air inlet but with different configurations to suit the requirements of marine use, land use or flight at speeds varying from stationary to supersonic. A propelling nozzle is added to produce thrust for flight. An extra turbine is added to drive a propeller (turboprop) or ducted fan (turbofan) to reduce fuel consumption (by increasing propulsive efficiency) at subsonic flight speeds. An extra turbine is also required to drive a helicopter rotor or land-vehicle transmission (turboshaft), marine propeller or electrical generator (power turbine). Greater thrust-to-weight ratio for flight is achieved with the addition of an afterburner.
The basic operation of the gas turbine is a Brayton cycle with air as the working fluid: atmospheric air flows through the compressor that brings it to higher pressure; energy is then added by spraying fuel into the air and igniting it so that the combustion generates a high-temperature flow; this high-temperature pressurized gas enters a turbine, producing a shaft work output in the process, used to drive the compressor; the unused energy comes out in the exhaust gases that can be repurposed for external work, such as directly producing thrust in a turbojet engine, or rotating a second, independent turbine (known as a power turbine) that can be connected to a fan, propeller, or electrical generator. The purpose of the gas turbine determines the design so that the most desirable split of energy between the thrust and the shaft work is achieved. The fourth step of the Brayton cycle (cooling of the working fluid) is omitted, as gas turbines are open systems that do not reuse the same air.
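The ideal-cycle efficiency implied by this description depends only on the compressor pressure ratio; a minimal sketch, assuming air as an ideal gas with a heat-capacity ratio of 1.4 (the example pressure ratio of 30 is an illustrative assumption):

```python
# Ideal Brayton-cycle thermal efficiency: eta = 1 - r ** (-(gamma - 1) / gamma),
# where r is the compressor pressure ratio and gamma = cp/cv (~1.4 for air).
GAMMA = 1.4

def brayton_efficiency(pressure_ratio):
    return 1.0 - pressure_ratio ** (-(GAMMA - 1.0) / GAMMA)

# An assumed pressure ratio of 30 gives an ideal efficiency near 62%;
# real engines achieve less because of component losses.
eta = brayton_efficiency(30.0)
```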
Gas turbines are used to power aircraft, trains, ships, electric generators, pumps, gas compressors, and tanks.
== Timeline of development ==
50: Earliest records of Hero's engine (aeolipile). It most likely served no practical purpose, and was rather more of a curiosity; nonetheless, it demonstrated an important principle of physics that all modern turbine engines rely on.
1000: The "Trotting Horse Lamp" (Chinese: 走马灯, zŏumădēng) was used by the Chinese at lantern fairs as early as the Northern Song dynasty. When the lamp is lit, the heated airflow rises and drives an impeller with horse-riding figures attached on it, whose shadows are then projected onto the outer screen of the lantern.
1500: The Smoke jack was drawn by Leonardo da Vinci: Hot air from a fire rises through a single-stage axial turbine rotor mounted in the exhaust duct of the fireplace and turns the roasting spit by gear-chain connection.
1791: A patent was given to John Barber, an Englishman, for the first true gas turbine. His invention had most of the elements present in the modern day gas turbines. The turbine was designed to power a horseless carriage.
1894: Sir Charles Parsons patented the idea of propelling a ship with a steam turbine, and built a demonstration vessel, the Turbinia, easily the fastest vessel afloat at the time.
1899: Charles Gordon Curtis patented the first gas turbine engine in the US.
1900: Sanford Alexander Moss submitted a thesis on gas turbines. In 1903, Moss became an engineer for General Electric's Steam Turbine Department in Lynn, Massachusetts. While there, he applied some of his concepts in the development of the turbocharger.
1903: A Norwegian, Ægidius Elling, built the first gas turbine that was able to produce more power than needed to run its own components, which was considered an achievement in a time when knowledge about aerodynamics was limited. Using rotary compressors and turbines it produced 8 kW (11 hp).
1904: A gas turbine engine designed by Franz Stolze, based on his earlier 1873 patent application, is built and tested in Berlin. The Stolze gas turbine was too inefficient to sustain its own operation.
1906: The Armengaud-Lemale gas turbine tested in France. This was a relatively large machine which included a 25-stage centrifugal compressor designed by Auguste Rateau and built by the Brown Boveri Company. The gas turbine could sustain its own air compression but was too inefficient to produce useful work.
1910: The first operational Holzwarth gas turbine (pulse combustion) achieved an output of 150 kW (200 hp). The planned output of the machine was 750 kW (1,000 hp), and its efficiency was below that of contemporary reciprocating engines.
1920s: The practical theory of gas flow through passages was developed into the more formal (and applicable to turbines) theory of gas flow past airfoils by A. A. Griffith, culminating in the 1926 publication of An Aerodynamic Theory of Turbine Design. Working testbed designs of axial turbines suitable for driving a propeller were developed by the Royal Aircraft Establishment.
1930: Having found no interest from the RAF for his idea, Frank Whittle patented the design for a centrifugal gas turbine for jet propulsion. The first successful test run of his engine occurred in England in April 1937.
1932: The Brown Boveri Company of Switzerland starts selling axial compressor and turbine turbosets as part of the turbocharged steam generating Velox boiler. Following the gas turbine principle, the steam evaporation tubes are arranged within the gas turbine combustion chamber; the first Velox plant is erected at a French Steel mill in Mondeville, Calvados.
1936: The first constant flow industrial gas turbine is commissioned by the Brown Boveri Company and goes into service at Sun Oil's Marcus Hook refinery in Pennsylvania, US.
1937: Working proof-of-concept prototype turbojet engine runs in UK (Frank Whittle's) and Germany (Hans von Ohain's Heinkel HeS 1). Henry Tizard secures UK government funding for further development of Power Jets engine.
1939: The first 4 MW utility power generation gas turbine is built by the Brown Boveri Company for an emergency power station in Neuchâtel, Switzerland. The turbojet-powered Heinkel He 178, the world's first jet aircraft, makes its first flight.
1940: Jendrassik Cs-1, a turboprop engine, made its first bench run. The Cs-1 was designed by Hungarian engineer György Jendrassik, and was intended to power a Hungarian twin-engine heavy fighter, the RMI-1. Work on the Cs-1 stopped in 1941 without the type having powered any aircraft.
1944: The Junkers Jumo 004 engine enters full production, powering the first German military jets such as the Messerschmitt Me 262. This marks the beginning of the reign of gas turbines in the sky.
1946: National Gas Turbine Establishment formed from Power Jets and the RAE turbine division to bring together Whittle and Hayne Constant's work. In Beznau, Switzerland the first commercial reheated/recuperated unit generating 27 MW was commissioned.
1947: A Metropolitan Vickers G1 (Gatric) becomes the first marine gas turbine when it completes sea trials on the Royal Navy's M.G.B 2009 vessel. The Gatric was an aeroderivative gas turbine based on the Metropolitan Vickers F2 jet engine.
1995: Siemens becomes the first manufacturer of large electricity producing gas turbines to incorporate single crystal turbine blade technology into their production models, allowing higher operating temperatures and greater efficiency.
2011: Mitsubishi Heavy Industries tests the first >60% efficiency combined cycle gas turbine (the M501J) at its Takasago, Hyōgo, works.
2019: Doosan Enerbility began developing a large gas turbine for power generation in 2013 and completed development in 2019. A model was installed at a Gimpo Combined Heat and Power Plant in 2023 and began commercial operation.
== Theory of operation ==
In an ideal gas turbine, gases undergo four thermodynamic processes: an isentropic compression, an isobaric (constant-pressure) combustion, an isentropic expansion, and an isobaric heat rejection. Together, these make up the Brayton cycle, also known as the "constant pressure cycle". It is distinguished from the Otto cycle in that all the processes (compression, combustion, expansion, exhaust) occur simultaneously and continuously rather than in successive strokes of a piston.
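As a worked illustration, the ideal Brayton cycle's thermal efficiency depends only on the overall pressure ratio and the gas's heat-capacity ratio γ. The sketch below assumes air with a constant γ of 1.4, an idealization real engines do not meet:

```python
# Ideal Brayton-cycle thermal efficiency as a function of pressure ratio.
# Illustrative only: assumes an ideal gas with constant gamma and no losses.

def brayton_efficiency(pressure_ratio: float, gamma: float = 1.4) -> float:
    """eta = 1 - r_p ** (-(gamma - 1) / gamma) for the ideal cycle."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

for r_p in (5, 15, 30):
    print(f"pressure ratio {r_p:2d}: ideal efficiency {brayton_efficiency(r_p):.1%}")
```

Real engines fall well short of these ideal figures because of compressor and turbine inefficiencies, combustor pressure loss, and cooling-air requirements.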
In a real gas turbine, mechanical energy is changed irreversibly (due to internal friction and turbulence) into pressure and thermal energy when the gas is compressed (in either a centrifugal or axial compressor). Heat is added in the combustion chamber and the specific volume of the gas increases, accompanied by a slight loss in pressure. During expansion through the stator and rotor passages in the turbine, irreversible energy transformation once again occurs. Instead of a heat-rejection process, fresh air is taken in to replace the exhausted gas.
Air is taken in by a compressor, with either an axial or centrifugal design, or a combination of the two; the compressor, combustor, and the turbine that drives the compressor are together called the gas generator. The compressed air is ducted into the combustor section, which can be of an annular, can, or can-annular design. In the combustor section, roughly 70% of the air from the compressor is ducted around the combustor liner for cooling purposes. The remaining roughly 30% of the air is mixed with fuel and ignited by the already-burning air-fuel mixture. The hot expanding gases leave the combustor section and are accelerated through the turbine section, striking the turbine blades and spinning the discs they are attached to, thus creating useful power. Of the power produced, 60–70% is used just to drive the compressor of the gas generator. The remaining power drives the load, typically in an aviation application: thrust in a turbojet, the fan of a turbofan, the rotor or accessories of a turboshaft, and the gear reduction and propeller of a turboprop.
If the engine has a power turbine added to drive an industrial generator or a helicopter rotor, the exit pressure will be as close to the entry pressure as possible with only enough energy left to overcome the pressure losses in the exhaust ducting and expel the exhaust. For a turboprop engine there will be a particular balance between propeller power and jet thrust which gives the most economical operation. In a turbojet engine only enough pressure and energy is extracted from the flow to drive the compressor and other components. The remaining high-pressure gases are accelerated through a nozzle to provide a jet to propel an aircraft.
The smaller the engine, the higher the rotation rate of the shaft must be to attain the required blade tip speed. Blade-tip speed determines the maximum pressure ratios that can be obtained by the turbine and the compressor. This, in turn, limits the maximum power and efficiency that can be obtained by the engine. In order for tip speed to remain constant, if the diameter of a rotor is reduced by half, the rotational speed must double. For example, large jet engines operate around 10,000–25,000 rpm, while micro turbines spin as fast as 500,000 rpm.
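The inverse relationship between rotor diameter and shaft speed can be shown with a short calculation; the 340 m/s tip speed used below is an assumed, illustrative value, not a figure for any particular engine:

```python
import math

# Shaft speed required to hold a fixed blade-tip speed as rotor diameter shrinks.
# Tip speed v = pi * D * (rpm / 60), so rpm = 60 * v / (pi * D): halving the
# diameter doubles the required rotational speed.

def rpm_for_tip_speed(tip_speed_m_s: float, diameter_m: float) -> float:
    return 60.0 * tip_speed_m_s / (math.pi * diameter_m)

for d in (1.0, 0.5, 0.05):
    print(f"rotor diameter {d:5.2f} m -> {rpm_for_tip_speed(340.0, d):>10,.0f} rpm")
```

The 5 cm rotor in this sketch needs roughly twenty times the shaft speed of the 1 m rotor, which is why microturbines run at such extreme rpm.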
Mechanically, gas turbines can be considerably less complex than reciprocating engines. Simple turbines might have one main moving part, the compressor/shaft/turbine rotor assembly, with other moving parts in the fuel system. This, in turn, can translate into price. For instance, costing 10,000 ℛℳ for materials, the Jumo 004 proved cheaper than the Junkers Jumo 213 piston engine, which was 35,000 ℛℳ, and needed only 375 hours of lower-skill labor to complete (including manufacture, assembly, and shipping), compared to 1,400 for the BMW 801. This, however, also translated into poor efficiency and reliability. More advanced gas turbines (such as those found in modern jet engines or combined cycle power plants) may have 2 or 3 shafts (spools), hundreds of compressor and turbine blades, movable stator blades, and extensive external tubing for fuel, oil and air systems; they use temperature-resistant alloys, and are made with tight specifications requiring precision manufacture. All this often makes the construction of a simple gas turbine more complicated than a piston engine.
Moreover, to reach optimum performance in modern gas turbine power plants the gas needs to be prepared to exact fuel specifications. Fuel gas conditioning systems treat the natural gas to reach the exact fuel specification prior to entering the turbine in terms of pressure, temperature, gas composition, and the related Wobbe index.
The primary advantage of a gas turbine engine is its power-to-weight ratio. Since significant useful work can be generated by a relatively lightweight engine, gas turbines are well suited for aircraft propulsion.
Thrust bearings and journal bearings are a critical part of a gas turbine design. They are hydrodynamic oil bearings or oil-cooled rolling-element bearings. Foil bearings are used in some small machines such as microturbines and also have strong potential for use in small gas turbines/auxiliary power units.
=== Creep ===
A major challenge facing turbine design, especially turbine blades, is reducing the creep that is induced by the high temperatures and stresses that are experienced during operation. Higher operating temperatures are continuously sought in order to increase efficiency, but come at the cost of higher creep rates. Several methods have therefore been employed in an attempt to achieve optimal performance while limiting creep, with the most successful ones being high performance coatings and single crystal superalloys. These technologies work by limiting deformation that occurs by mechanisms that can be broadly classified as dislocation glide, dislocation climb and diffusional flow.
Protective coatings provide thermal insulation of the blade and offer oxidation and corrosion resistance. Thermal barrier coatings (TBCs) are often stabilized zirconium dioxide-based ceramics, and oxidation/corrosion-resistant coatings (bond coats) typically consist of aluminides or MCrAlY (where M is typically Ni and/or Co) alloys. Using TBCs limits the temperature exposure of the superalloy substrate, thereby decreasing the diffusivity of the active species (typically vacancies) within the alloy and reducing dislocation and vacancy creep. It has been found that a coating of 1–200 μm can decrease blade temperatures by up to 200 °C (360 °F).
Bond coats are directly applied onto the surface of the substrate using pack carburization and serve the dual purpose of providing improved adherence for the TBC and oxidation resistance for the substrate. The Al from the bond coats forms Al2O3 on the TBC-bond coat interface which provides the oxidation resistance, but also results in the formation of an undesirable interdiffusion (ID) zone between itself and the substrate. The oxidation resistance outweighs the drawbacks associated with the ID zone as it increases the lifetime of the blade and limits the efficiency losses caused by a buildup on the outside of the blades.
Nickel-based superalloys boast improved strength and creep resistance due to their composition and resultant microstructure. The gamma (γ) FCC nickel is alloyed with aluminum and titanium in order to precipitate a uniform dispersion of the coherent Ni3(Al,Ti) gamma-prime (γ') phases. The finely dispersed γ' precipitates impede dislocation motion and introduce a threshold stress, increasing the stress required for the onset of creep. Furthermore, γ' is an ordered L12 phase that makes it harder for dislocations to shear past it. Refractory elements such as rhenium and ruthenium can be added in solid solution to further improve creep strength. The addition of these elements slows diffusion in the gamma-prime phase, thus preserving the fatigue resistance, strength, and creep resistance. The development of single crystal superalloys has led to significant improvements in creep resistance as well. Due to the lack of grain boundaries, single crystals eliminate Coble creep and consequently deform by fewer modes, decreasing the creep rate. Although single crystals have lower creep at high temperatures, they have significantly lower yield stresses at room temperature, where strength is determined by the Hall–Petch relationship. Care needs to be taken in order to optimize the design parameters to limit high-temperature creep while not decreasing low-temperature yield strength.
== Types ==
=== Jet engines ===
Airbreathing jet engines are gas turbines optimized to produce thrust from the exhaust gases, or from ducted fans connected to the gas turbines. Jet engines that produce thrust from the direct impulse of exhaust gases are often called turbojets. While still in service with many militaries and civilian operators, turbojets have mostly been phased out in favor of the turbofan due to the turbojet's low fuel efficiency and high noise. Engines that generate thrust with the addition of a ducted fan are called turbofans or (rarely) fan-jets. In high-bypass designs these engines produce nearly 80% of their thrust with the ducted fan, which can be seen from the front of the engine. They come in two types, low-bypass and high-bypass, the difference being the proportion of air moved by the fan, called "bypass air". Turbofans offer the benefit of more thrust without extra fuel consumption.
Gas turbines are also used in many liquid-fuel rockets, where gas turbines are used to power a turbopump to permit the use of lightweight, low-pressure tanks, reducing the empty weight of the rocket.
=== Turboprop engines ===
A turboprop engine is a turbine engine that drives an aircraft propeller through a reduction gear, which translates the high turbine-section operating speed (often in the tens of thousands of rpm) into the low thousands of rpm necessary for efficient propeller operation. The benefit of the turboprop is that it exploits the turbine engine's high power-to-weight ratio to drive a propeller, allowing a more powerful but also smaller engine to be used. Turboprop engines are used on a wide range of business aircraft such as the Pilatus PC-12, commuter aircraft such as the Beechcraft 1900, small cargo aircraft such as the Cessna 208 Caravan, regional airliners such as the De Havilland Canada Dash 8, and large aircraft (typically military) such as the Airbus A400M transport, Lockheed AC-130, and the 60-year-old Tupolev Tu-95 strategic bomber. While military turboprop engines vary, in the civilian market there are two primary engines to be found: the Pratt & Whitney Canada PT6, a free-turbine turboshaft engine, and the Honeywell TPE331, a fixed-turbine engine (formerly designated the Garrett AiResearch 331).
=== Aeroderivative gas turbines ===
Aeroderivative gas turbines are generally based on existing aircraft gas turbine engines and are smaller and lighter than industrial gas turbines.
Aeroderivatives are used in electrical power generation due to their ability to be shut down and handle load changes more quickly than industrial machines. They are also used in the marine industry to reduce weight. Common types include the General Electric LM2500, General Electric LM6000, and aeroderivative versions of the Pratt & Whitney PW4000, Pratt & Whitney FT4 and Rolls-Royce RB211.
=== Amateur gas turbines ===
Increasing numbers of gas turbines are being used or even constructed by amateurs.
In their most straightforward form, these are commercial turbines acquired through military surplus or scrapyard sales, then operated for display as part of the hobby of engine collecting. At the extreme, amateurs have even rebuilt engines beyond professional repair and then used them to compete for the land speed record.
The simplest form of self-constructed gas turbine employs an automotive turbocharger as the core component. A combustion chamber is fabricated and plumbed between the compressor and turbine sections.
More sophisticated turbojets are also built, where their thrust and light weight are sufficient to power large model aircraft. The Schreckling design constructs the entire engine from raw materials, including the fabrication of a centrifugal compressor wheel from plywood, epoxy and wrapped carbon fibre strands.
Several small companies now manufacture small turbines and parts for the amateur. Most turbojet-powered model aircraft are now using these commercial and semi-commercial microturbines, rather than a Schreckling-like home-build.
=== Auxiliary power units ===
Small gas turbines are used as auxiliary power units (APUs) to supply auxiliary power to larger, mobile machines such as aircraft; they are usually of a turboshaft design. They supply:
compressed air for air cycle machine style air conditioning and ventilation,
compressed air start-up power for larger jet engines,
mechanical (shaft) power to a gearbox to drive shafted accessories, and
electrical, hydraulic and other power-transmission sources to consuming devices remote from the APU.
=== Industrial gas turbines for power generation ===
Industrial gas turbines differ from aeronautical designs in that the frames, bearings, and blading are of heavier construction. They are also much more closely integrated with the devices they power—often an electric generator—and the secondary-energy equipment that is used to recover residual energy (largely heat).
They range in size from portable mobile plants to large, complex systems weighing more than a hundred tonnes housed in purpose-built buildings. When the gas turbine is used solely for shaft power, its thermal efficiency is about 30%. However, it may be cheaper to buy electricity than to generate it. Therefore, many engines are used in CHP (Combined Heat and Power) configurations that can be small enough to be integrated into portable container configurations.
Gas turbines can be particularly efficient when waste heat from the turbine is recovered by a heat recovery steam generator (HRSG) to power a conventional steam turbine in a combined cycle configuration. The 605 MW General Electric 9HA achieved a 62.22% efficiency rate with temperatures as high as 1,540 °C (2,800 °F).
For 2018, GE offers its 826 MW HA at over 64% efficiency in combined cycle due to advances in additive manufacturing and combustion breakthroughs, up from 63.7% in 2017 orders and on track to achieve 65% by the early 2020s.
In March 2018, GE Power achieved a 63.08% gross efficiency for its 7HA turbine.
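The arithmetic behind such combined-cycle figures can be sketched simply: the steam bottoming cycle recovers part of the fuel energy the gas turbine does not convert. The 40% and 38% fractions below are assumed illustrative values, not data for any specific machine:

```python
# Rough combined-cycle efficiency estimate (illustrative sketch only).

def combined_cycle_efficiency(eta_gt: float, eta_bottoming: float) -> float:
    # Fuel energy not converted by the gas turbine (1 - eta_gt) reaches the
    # HRSG/steam turbine, which converts a further fraction eta_bottoming of it.
    return eta_gt + (1.0 - eta_gt) * eta_bottoming

# A 40% simple-cycle gas turbine with a 38% bottoming cycle:
print(f"{combined_cycle_efficiency(0.40, 0.38):.1%}")
```

With these assumed inputs the estimate comes out near 63%, in line with the combined-cycle figures quoted above, although real plant efficiency also depends on exhaust temperature, HRSG effectiveness, and auxiliary loads.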
Aeroderivative gas turbines can also be used in combined cycles, leading to a higher efficiency, though not as high as a purpose-designed industrial gas turbine. They can also be run in a cogeneration configuration: the exhaust is used for space or water heating, or drives an absorption chiller that cools the inlet air and increases the power output, a technology known as turbine inlet air cooling.
Another significant advantage is their ability to be turned on and off within minutes, supplying power during peak, or unscheduled, demand. Since single cycle (gas turbine only) power plants are less efficient than combined cycle plants, they are usually used as peaking power plants, which operate anywhere from several hours per day to a few dozen hours per year—depending on the electricity demand and the generating capacity of the region. In areas with a shortage of base-load and load following power plant capacity or with low fuel costs, a gas turbine powerplant may regularly operate most hours of the day. A large single-cycle gas turbine typically produces 100 to 400 megawatts of electric power and has 35–40% thermodynamic efficiency.
=== Industrial gas turbines for mechanical drive ===
Industrial gas turbines that are used solely for mechanical drive or used in collaboration with a recovery steam generator differ from power generating sets in that they are often smaller and feature a dual shaft design as opposed to a single shaft. The power range varies from 1 megawatt up to 50 megawatts. These engines are connected directly or via a gearbox to either a pump or compressor assembly. The majority of installations are used within the oil and gas industries. Mechanical drive applications increase efficiency by around 2%.
Oil and gas platforms require these engines to drive compressors to inject gas into the wells to force oil up via another bore, or to compress the gas for transportation. They are also often used to provide power for the platform. These platforms do not need to use the engine in collaboration with a CHP system because they obtain the gas at an extremely reduced cost (often free, from gas that would otherwise be burned off). The same companies use pump sets to drive the fluids to land and across pipelines in various intervals.
==== Compressed air energy storage ====
One modern development seeks to improve efficiency in another way, by separating the compressor and the turbine with a compressed air store. In a conventional turbine, up to half the generated power is used driving the compressor. In a compressed air energy storage configuration, power is used to drive the compressor, and the compressed air is released to operate the turbine when required.
=== Turboshaft engines ===
Turboshaft engines are used to drive compressors in gas pumping stations and natural gas liquefaction plants. They are also used in aviation to power all but the smallest modern helicopters, and function as auxiliary power units in large commercial aircraft. A primary shaft carries the compressor and its turbine which, together with a combustor, is called the gas generator. A separately spinning power turbine is usually used to drive the rotor on helicopters. Allowing the gas generator and power turbine/rotor to spin at their own speeds allows more flexibility in their design.
=== Radial gas turbines ===
=== Scale jet engines ===
Scale jet engines, also known as miniature gas turbines or micro-jets, are small-scale working turbojets, typically used to power large model aircraft. The pioneer of the modern micro-jet, Kurt Schreckling, produced one of the world's first micro-turbines, the FD3/67. This engine can produce up to 22 newtons of thrust and can be built by most mechanically minded people with basic engineering tools, such as a metal lathe.
=== Microturbines ===
Evolved from piston engine turbochargers, aircraft APUs or small jet engines, microturbines are 25 to 500 kilowatt turbines the size of a refrigerator.
Microturbines have efficiencies of around 15% without a recuperator, 20 to 30% with one, and they can reach 85% combined thermal-electrical efficiency in cogeneration.
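The combined thermal-electrical figure is simple bookkeeping: electrical output plus usefully recovered heat, each as a fraction of fuel energy input. The 28%/57% split below is an assumed illustration, not data for any particular unit:

```python
# Cogeneration (CHP) bookkeeping for a microturbine: the combined figure is
# electricity produced plus heat usefully recovered, per unit of fuel energy.

def chp_total_efficiency(eta_electric: float, eta_heat: float) -> float:
    # Both fractions are measured against fuel energy input; their sum is the
    # combined thermal-electrical efficiency.
    return eta_electric + eta_heat

# Assumed split: 28% electricity (recuperated unit), 57% heat recovery.
print(f"combined efficiency: {chp_total_efficiency(0.28, 0.57):.0%}")
```

This is why a unit with modest electrical efficiency can still reach the high combined figures quoted for cogeneration service.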
== External combustion ==
Most gas turbines are internal combustion engines but it is also possible to manufacture an external combustion gas turbine which is, effectively, a turbine version of a hot air engine.
Such systems are usually designated EFGT (Externally Fired Gas Turbine) or IFGT (Indirectly Fired Gas Turbine).
External combustion has been used for the purpose of using pulverized coal or finely ground biomass (such as sawdust) as a fuel. In the indirect system, a heat exchanger is used and only clean air with no combustion products travels through the power turbine. The thermal efficiency is lower in the indirect type of external combustion; however, the turbine blades are not subjected to combustion products and much lower quality (and therefore cheaper) fuels are able to be used.
When external combustion is used, it is possible to use exhaust air from the turbine as the primary combustion air. This effectively reduces global heat losses, although heat losses associated with the combustion exhaust remain inevitable.
Closed-cycle gas turbines based on helium or supercritical carbon dioxide also hold promise for use with future high temperature solar and nuclear power generation.
== In surface vehicles ==
Gas turbines are often used on ships, locomotives, helicopters, tanks, and to a lesser extent, on cars, buses, and motorcycles.
A key advantage of jets and turboprops for airplane propulsion – their superior performance at high altitude compared to piston engines, particularly naturally aspirated ones – is irrelevant in most automobile applications. Their power-to-weight advantage, though less critical than for aircraft, is still important.
Gas turbines offer a high-powered engine in a very small and light package. However, they are not as responsive and efficient as small piston engines over the wide range of RPMs and powers needed in vehicle applications. In series hybrid vehicles, where the driving electric motors are mechanically detached from the electricity-generating engine, the problems of poor responsiveness, poor performance at low speed, and low efficiency at low output are much less important. The turbine can be run at optimum speed for its power output, and batteries and ultracapacitors can supply power as needed, with the engine cycled on and off to run it only at high efficiency. The emergence of the continuously variable transmission may also alleviate the responsiveness problem.
Turbines have historically been more expensive to produce than piston engines, though this is partly because piston engines have been mass-produced in huge quantities for decades, while small gas turbine engines are rarities; however, turbines are mass-produced in the closely related form of the turbocharger.
The turbocharger is basically a compact and simple free shaft radial gas turbine which is driven by the piston engine's exhaust gas. The centripetal turbine wheel drives a centrifugal compressor wheel through a common rotating shaft. This wheel supercharges the engine air intake to a degree that can be controlled by means of a wastegate or by dynamically modifying the turbine housing's geometry (as in a variable geometry turbocharger).
It mainly serves as a power recovery device which converts a great deal of otherwise wasted thermal and kinetic energy into engine boost.
Turbo-compound engines (actually employed on some semi-trailer trucks) are fitted with blowdown turbines which are similar in design and appearance to a turbocharger, except that the turbine shaft is mechanically or hydraulically connected to the engine's crankshaft instead of to a centrifugal compressor, thus providing additional power instead of boost. While the turbocharger is a pressure turbine, a power recovery turbine is a velocity one.
=== Passenger road vehicles (cars, bikes, and buses) ===
A number of experiments have been conducted with gas turbine powered automobiles, the largest by Chrysler. More recently, there has been some interest in the use of turbine engines for hybrid electric cars. For instance, a consortium led by micro gas turbine company Bladon Jets has secured investment from the Technology Strategy Board to develop an Ultra Lightweight Range Extender (ULRE) for next-generation electric vehicles. The objective of the consortium, which includes luxury car maker Jaguar Land Rover and leading electrical machine company SR Drives, is to produce the world's first commercially viable – and environmentally friendly – gas turbine generator designed specifically for automotive applications.
The common turbocharger for gasoline or diesel engines is also a turbine derivative.
==== Concept cars ====
The first serious investigation of using a gas turbine in cars was in 1946, when two engineers, Robert Kafka and Robert Engerstein of Carney Associates, a New York engineering firm, came up with a concept in which a uniquely compact turbine engine design would provide power for a rear-wheel-drive car. After an article appeared in Popular Science, no further work was done beyond the paper stage.
===== Early concepts (1950s/60s) =====
In 1950, designer F.R. Bell and Chief Engineer Maurice Wilks from British car manufacturers Rover unveiled the first car powered with a gas turbine engine. The two-seater JET1 had the engine positioned behind the seats, air intake grilles on either side of the car, and exhaust outlets on the top of the tail. During tests, the car reached top speeds of 140 km/h (87 mph), at a turbine speed of 50,000 rpm. After being shown in the United Kingdom and the United States in 1950, JET1 was further developed, and was subjected to speed trials on the Jabbeke highway in Belgium in June 1952, where it exceeded 240 km/h (150 mph). The car ran on petrol, paraffin (kerosene) or diesel oil, but fuel consumption problems proved insurmountable for a production car. JET1 is on display at the London Science Museum.
A French turbine-powered car, the SOCEMA-Grégoire, was displayed at the October 1952 Paris Auto Show. It was designed by the French engineer Jean-Albert Grégoire.
The first turbine-powered car built in the US was the GM Firebird I, which began evaluations in 1953. While photos of the Firebird I may suggest that the jet turbine's thrust propelled the car like an aircraft, the turbine actually drove the rear wheels. The Firebird I was never meant as a commercial passenger car and was built solely for testing and evaluation as well as public relations purposes. Additional Firebird concept cars, each powered by gas turbines, were developed for the 1953, 1956 and 1959 Motorama auto shows. The GM Research gas turbine engine was also fitted to a series of transit buses, starting with the Turbo-Cruiser I of 1953.
Starting in 1954 with a modified Plymouth, the American car manufacturer Chrysler demonstrated several prototype gas turbine-powered cars from the early 1950s through the early 1980s. Chrysler built fifty Chrysler Turbine Cars in 1963 and conducted the only consumer trial of gas turbine-powered cars. Each of their turbines employed a unique rotating recuperator, referred to as a regenerator that increased efficiency.
In 1954, Fiat unveiled a concept car with a turbine engine, called Fiat Turbina. This vehicle, looking like an aircraft with wheels, used a unique combination of both jet thrust and the engine driving the wheels. Speeds of 282 km/h (175 mph) were claimed.
In the 1960s, Ford and GM also were developing gas turbine semi-trucks. Ford displayed the Big Red at the 1964 World's Fair. With the trailer, it was 29 m (96 ft) long, 4.0 m (13 ft) high, and painted crimson red. It contained the Ford-developed gas turbine engine, with output power and torque of 450 kW (600 hp) and 1,160 N⋅m (855 lb⋅ft). The cab boasted a highway map of the continental U.S., a mini-kitchen, bathroom, and a TV for the co-driver. The fate of the truck was unknown for several decades, but it was rediscovered in early 2021 in private hands, having been restored to running order. The Chevrolet division of GM built the Turbo Titan series of concept trucks with turbine motors as analogs of the Firebird concepts, including Turbo Titan I (c. 1959, shares GT-304 engine with Firebird II), Turbo Titan II (c. 1962, shares GT-305 engine with Firebird III), and Turbo Titan III (1965, GT-309 engine); in addition, the GM Bison gas turbine truck was shown at the 1964 World's Fair.
===== Emissions and fuel economy (1970s/80s) =====
As a result of the U.S. Clean Air Act Amendments of 1970, research into automotive gas turbine technology was funded. Design concepts and vehicles were developed by Chrysler, General Motors, Ford (in collaboration with AiResearch), and American Motors (in conjunction with Williams Research). Long-term tests were conducted to evaluate comparable cost efficiency. Several AMC Hornets were powered by a small Williams regenerative gas turbine weighing 250 lb (113 kg) and producing 80 hp (60 kW; 81 PS) at 4450 rpm.
In 1982, General Motors demonstrated an Oldsmobile Delta 88 powered by a gas turbine burning pulverised coal dust. This was investigated as a way for the United States and the Western world to reduce dependence on Middle Eastern oil at the time.
Toyota demonstrated several gas turbine powered concept cars, such as the Century gas turbine hybrid in 1975, the Sports 800 Gas Turbine Hybrid in 1979 and the GTV in 1985. No production vehicles were made. The GT24 engine was exhibited in 1977 without a vehicle.
===== Later development =====
In the early 1990s, Volvo introduced the Volvo ECC which was a gas turbine powered hybrid electric vehicle.
In 1993, General Motors developed a gas turbine powered EV1 series hybrid—as a prototype of the General Motors EV1. A Williams International 40 kW turbine drove an alternator which powered the battery–electric powertrain. The turbine design included a recuperator. In 2006, GM went into the EcoJet concept car project with Jay Leno.
At the 2010 Paris Motor Show Jaguar demonstrated its Jaguar C-X75 concept car. This electrically powered supercar has a top speed of 204 mph (328 km/h) and can go from 0 to 62 mph (0 to 100 km/h) in 3.4 seconds. It uses lithium-ion batteries to power four electric motors which combine to produce 780 bhp. It will travel 68 miles (109 km) on a single charge of the batteries, and uses a pair of Bladon Micro Gas Turbines to re-charge the batteries extending the range to 560 miles (900 km).
==== Racing cars ====
The first race car fitted with a turbine (in concept only) was built in 1955 by a US Air Force group as a hobby project, with a turbine loaned to them by Boeing and a race car owned by the Firestone Tire & Rubber Company. The first race car fitted with a turbine for actual racing came when Rover and the BRM Formula One team joined forces to produce the Rover-BRM, a gas turbine-powered coupé, which entered the 1963 24 Hours of Le Mans, driven by Graham Hill and Richie Ginther. It averaged 107.8 mph (173.5 km/h) and had a top speed of 142 mph (229 km/h). American Ray Heppenstall brought Howmet Corporation and McKee Engineering together to develop their own gas turbine sports car in 1968, the Howmet TX, which ran several American and European events, taking two wins, and also participated in the 1968 24 Hours of Le Mans. The cars used Continental gas turbines, which eventually set six FIA land speed records for turbine-powered cars.
For open wheel racing, 1967's revolutionary STP-Paxton Turbocar fielded by racing and entrepreneurial legend Andy Granatelli and driven by Parnelli Jones nearly won the Indianapolis 500; the Pratt & Whitney ST6B-62 powered turbine car was almost a lap ahead of the second place car when a gearbox bearing failed just three laps from the finish line. The next year the STP Lotus 56 turbine car won the Indianapolis 500 pole position even though new rules restricted the air intake dramatically. In 1971 Team Lotus principal Colin Chapman introduced the Lotus 56B F1 car, powered by a Pratt & Whitney STN 6/76 gas turbine. Chapman had a reputation for building radical championship-winning cars, but had to abandon the project because there were too many problems with turbine throttle lag.
==== Buses ====
General Motors fitted the GT-30x series of gas turbines (branded "Whirlfire") to several prototype buses in the 1950s and 1960s, including Turbo-Cruiser I (1953, GT-300); Turbo-Cruiser II (1964, GT-309); Turbo-Cruiser III (1968, GT-309); RTX (1968, GT-309); and RTS 3T (1972).
The arrival of the Capstone Turbine has led to several hybrid bus designs, starting with HEV-1 by AVS of Chattanooga, Tennessee in 1999, and closely followed by Ebus and ISE Research in California, and DesignLine Corporation in New Zealand (and later the United States). AVS turbine hybrids were plagued with reliability and quality control problems, resulting in the liquidation of AVS in 2003. The most successful design, by DesignLine, is now operated in 5 cities in 6 countries, with over 30 buses in operation worldwide and orders for several hundred being delivered to Baltimore and New York City.
Brescia, Italy, uses serial hybrid buses powered by microturbines on routes through the historical sections of the city.
==== Motorcycles ====
The MTT Turbine Superbike appeared in 2000 (hence the designation of Y2K Superbike by MTT) and is the first production motorcycle powered by a turbine engine – specifically, a Rolls-Royce Allison model 250 turboshaft engine, producing about 283 kW (380 bhp). Speed-tested to 365 km/h or 227 mph (according to some stories, the testing team ran out of road during the test), it holds the Guinness World Record for most powerful production motorcycle and most expensive production motorcycle, with a price tag of US$185,000.
=== Trains ===
Several locomotive classes have been powered by gas turbines, the most recent incarnation being Bombardier's JetTrain.
=== Tanks ===
The Third Reich Wehrmacht Heer's development division, the Heereswaffenamt (Army Ordnance Board), studied a number of gas turbine engine designs for use in tanks starting in mid-1944. The first gas turbine engine design intended for use in armored fighting vehicle propulsion, the BMW 003-based GT 101, was meant for installation in the Panther tank. Towards the end of the war, a Jagdtiger was fitted with one of the aforementioned gas turbines.
The second use of a gas turbine in an armored fighting vehicle was in 1954, when a unit, PU2979, specifically developed for tanks by C. A. Parsons and Company, was installed and trialed in a British Conqueror tank. The Stridsvagn 103, developed in the 1950s, was the first mass-produced main battle tank to use a turbine engine, the Boeing T50. Since then, gas turbine engines have been used as auxiliary power units in some tanks and as main powerplants in Soviet/Russian T-80s and U.S. M1 Abrams tanks, among others. They are lighter and smaller than diesel engines at the same sustained power output, but the models installed to date are less fuel efficient than the equivalent diesel, especially at idle, requiring more fuel to achieve the same combat range. Successive models of the M1 have addressed this problem with battery packs or secondary generators to power the tank's systems while stationary, saving fuel by reducing the need to idle the main turbine. T-80s can mount three large external fuel drums to extend their range. Russia has stopped production of the T-80 in favor of the diesel-powered T-90 (based on the T-72), while Ukraine has developed the diesel-powered T-80UD and T-84 with nearly the same power as the gas-turbine versions. The French Leclerc tank's diesel powerplant features the "Hyperbar" hybrid supercharging system, in which the engine's turbocharger is replaced with a small gas turbine that also works as an assisted diesel exhaust turbocharger, enabling boost level control independent of engine RPM and a higher peak boost pressure than ordinary turbochargers can reach. This system allows a smaller-displacement, lighter engine to be used as the tank's power plant and effectively removes turbo lag. The gas turbine/turbocharger can also work independently of the main engine as an ordinary APU.
A turbine is theoretically more reliable and easier to maintain than a piston engine since it has a simpler construction with fewer moving parts, but in practice, turbine parts experience a higher wear rate due to their higher working speeds. The turbine blades are highly sensitive to dust and fine sand so that in desert operations air filters have to be fitted and changed several times daily. An improperly fitted filter, or a bullet or shell fragment that punctures the filter, can damage the engine. Piston engines (especially if turbocharged) also need well-maintained filters, but they are more resilient if the filter does fail.
Like most modern diesel engines used in tanks, gas turbines are usually multi-fuel engines.
== Marine applications ==
=== Naval ===
Gas turbines are used in many naval vessels, where they are valued for their high power-to-weight ratio and their ships' resulting acceleration and ability to get underway quickly.
The first gas-turbine-powered naval vessel was the Royal Navy's motor gunboat MGB 2009 (formerly MGB 509) converted in 1947. Metropolitan-Vickers fitted their F2/3 jet engine with a power turbine. The Steam Gun Boat Grey Goose was converted to Rolls-Royce gas turbines in 1952 and operated as such from 1953. The Bold class Fast Patrol Boats Bold Pioneer and Bold Pathfinder built in 1953 were the first ships created specifically for gas turbine propulsion.
The first large-scale, partially gas-turbine powered ships were the Royal Navy's Type 81 (Tribal-class) frigates, with combined steam and gas powerplants. The first, HMS Ashanti, was commissioned in 1961.
The German Navy launched the first Köln-class frigate in 1961 with 2 Brown, Boveri & Cie gas turbines in the world's first combined diesel and gas propulsion system.
In 1962 the Soviet Navy commissioned the first of 25 Kashin-class destroyers, each with 4 gas turbines in a combined gas and gas propulsion system. These vessels used 4 M8E gas turbines, which generated 54,000–72,000 kW (72,000–96,000 hp), and were the first large ships in the world to be powered solely by gas turbines.
The Danish Navy had 6 Søløven-class torpedo boats (the export version of the British Brave-class fast patrol boat) in service from 1965 to 1990. These had 3 Bristol Proteus (later Rolls-Royce Proteus) marine gas turbines rated at 9,510 kW (12,750 shp) combined, plus two General Motors diesel engines, rated at 340 kW (460 shp), for better fuel economy at slower speeds. Denmark also produced 10 Willemoes-class torpedo/guided-missile boats (in service from 1974 to 2000) with 3 Rolls-Royce Marine Proteus gas turbines, likewise rated at 9,510 kW (12,750 shp) combined, and 2 General Motors diesel engines, rated at 600 kW (800 shp), again for improved fuel economy at slow speeds.
The Swedish Navy produced 6 Spica-class torpedo boats between 1966 and 1967 powered by 3 Bristol Siddeley Proteus 1282 turbines, each delivering 3,210 kW (4,300 shp). They were later joined by 12 upgraded Norrköping class ships, still with the same engines. With their aft torpedo tubes replaced by antishipping missiles they served as missile boats until the last was retired in 2005.
The Finnish Navy commissioned two Turunmaa-class corvettes, Turunmaa and Karjala, in 1968. They were equipped with one 16,410 kW (22,000 shp) Rolls-Royce Olympus TM1 gas turbine and three Wärtsilä marine diesels for slower speeds. They were the fastest vessels in the Finnish Navy; they regularly achieved speeds of 35 knots, and 37.3 knots during sea trials. The Turunmaas were decommissioned in 2002. Karjala is today a museum ship in Turku, and Turunmaa serves as a floating machine shop and training ship for Satakunta Polytechnical College.
The next series of major naval vessels were the four Canadian Iroquois-class helicopter-carrying destroyers, first commissioned in 1972. They used 2 Pratt & Whitney FT4 main propulsion engines, 2 FT12 cruise engines and 3 Solar Saturn 750 kW generators.
The first U.S. gas-turbine powered ship was the U.S. Coast Guard's Point Thatcher, a cutter commissioned in 1961 that was powered by two 750 kW (1,000 shp) turbines utilizing controllable-pitch propellers. The larger Hamilton-class high endurance cutters were the first class of larger cutters to utilize gas turbines, the first of which (USCGC Hamilton) was commissioned in 1967. Since then, gas turbines have powered the U.S. Navy's Oliver Hazard Perry-class frigates, Spruance- and Arleigh Burke-class destroyers, and Ticonderoga-class guided missile cruisers. USS Makin Island, a modified Wasp-class amphibious assault ship, became the Navy's first amphibious assault ship powered by gas turbines.
The marine gas turbine operates in a more corrosive atmosphere due to the presence of sea salt in air and fuel and use of cheaper fuels.
=== Civilian maritime ===
Up to the late 1940s, much of the progress on marine gas turbines worldwide took place in design offices and engine builders' workshops, and development work was led by the British Royal Navy and other navies. While interest in the gas turbine for marine purposes, both naval and mercantile, continued to increase, the lack of operating experience from early gas turbine projects limited the number of new ventures embarked upon for seagoing commercial vessels.
In 1951, the diesel–electric oil tanker Auris, 12,290 deadweight tonnage (DWT), was used to obtain operating experience with a main propulsion gas turbine under service conditions at sea and so became the first ocean-going merchant ship to be powered by a gas turbine. Built by Hawthorn Leslie at Hebburn-on-Tyne, UK, in accordance with plans and specifications drawn up by the Anglo-Saxon Petroleum Company, and launched on the UK's Princess Elizabeth's 21st birthday in 1947, the ship was designed with an engine room layout that would allow for the experimental use of heavy fuel in one of its high-speed engines, as well as the future substitution of one of its diesel engines by a gas turbine. The Auris operated commercially as a tanker for three-and-a-half years with a diesel–electric propulsion unit as originally commissioned, but in 1951 one of its four 824 kW (1,105 bhp) diesel engines – which were known as "Faith", "Hope", "Charity" and "Prudence" – was replaced by the world's first marine gas turbine engine, an 890 kW (1,200 bhp) open-cycle gas turbo-alternator built by the British Thomson-Houston Company in Rugby. Following successful sea trials off the Northumbrian coast, the Auris set sail from Hebburn-on-Tyne in October 1951 bound for Port Arthur in the US and then Curaçao in the southern Caribbean, returning to Avonmouth after 44 days at sea, successfully completing her historic trans-Atlantic crossing. During this time at sea the gas turbine burnt diesel fuel and operated without an involuntary stop or mechanical difficulty of any kind. She subsequently visited Swansea, Hull, Rotterdam, Oslo and Southampton, covering a total of 13,211 nautical miles. The Auris then had all of its power plants replaced with a 3,910 kW (5,250 shp) directly coupled gas turbine to become the first civilian ship to operate solely on gas turbine power.
Despite the success of this early experimental voyage the gas turbine did not replace the diesel engine as the propulsion plant for large merchant ships. At constant cruising speeds the diesel engine simply had no peer in the vital area of fuel economy. The gas turbine did have more success in Royal Navy ships and the other naval fleets of the world where sudden and rapid changes of speed are required by warships in action.
The United States Maritime Commission was looking for options to update WWII Liberty ships, and heavy-duty gas turbines were among those selected. In 1956 the John Sergeant was lengthened and equipped with a General Electric 4,900 kW (6,600 shp) HD gas turbine with exhaust-gas regeneration, reduction gearing and a variable-pitch propeller. It operated for 9,700 hours, 7,000 of them on residual fuel (Bunker C). Fuel efficiency was on a par with steam propulsion at 0.318 kg/kW (0.523 lb/hp) per hour, and power output was higher than expected at 5,603 kW (7,514 shp) because the ambient temperature of the North Sea route was lower than the design temperature of the gas turbine. This gave the ship a speed capability of 18 knots, up from 11 knots with the original power plant and well in excess of the 15-knot target. The ship made its first transatlantic crossing at an average speed of 16.8 knots, in spite of some rough weather along the way. Suitable Bunker C fuel was available only at a limited number of ports because the quality of the fuel was critical. The fuel oil also had to be treated on board to reduce contaminants, a labor-intensive process that was not suitable for automation at the time. Ultimately, the variable-pitch propeller, which was of a new and untested design, ended the trial, as three consecutive annual inspections revealed stress-cracking. This did not reflect poorly on the marine-propulsion gas-turbine concept, though, and the trial was a success overall. Its success opened the way for more development by GE on the use of HD gas turbines for marine use with heavy fuels. The John Sergeant was scrapped in 1972 at Portsmouth PA.
Boeing launched its first passenger-carrying waterjet-propelled hydrofoil Boeing 929, in April 1974. Those ships were powered by two Allison 501-KF gas turbines.
Between 1971 and 1981, Seatrain Lines operated a scheduled container service between ports on the eastern seaboard of the United States and ports in northwest Europe across the North Atlantic with four container ships of 26,000 tonnes DWT. Those ships were powered by twin Pratt & Whitney gas turbines of the FT 4 series. The four ships in the class were named Euroliner, Eurofreighter, Asialiner and Asiafreighter. Following the dramatic Organization of the Petroleum Exporting Countries (OPEC) price increases of the mid-1970s, operations were constrained by rising fuel costs. The engine systems on those ships were modified to permit the burning of a lower grade of fuel (i.e., marine diesel). The switch to this previously untested fuel in a marine gas turbine successfully reduced fuel costs, but maintenance costs increased with the fuel change. After 1981 the ships were sold and refitted with what were, at the time, more economical diesel engines, but the increased engine size reduced cargo space.
The first passenger ferry to use a gas turbine was the GTS Finnjet, built in 1977 and powered by two Pratt & Whitney FT 4C-1 DLF turbines, generating 55,000 kW (74,000 shp) and propelling the ship to a speed of 31 knots. However, the Finnjet also illustrated the shortcomings of gas turbine propulsion in commercial craft, as high fuel prices made operating her unprofitable. After four years of service, additional diesel engines were installed on the ship to reduce running costs during the off-season. The Finnjet was also the first ship with a combined diesel–electric and gas propulsion system. Another example of commercial use of gas turbines in a passenger ship is Stena Line's HSS class fastcraft ferries. HSS 1500-class Stena Explorer, Stena Voyager and Stena Discovery vessels use combined gas and gas setups of twin GE LM2500 plus GE LM1600 turbines for a total of 68,000 kW (91,000 shp). The slightly smaller HSS 900-class Stena Carisma uses twin ABB–STAL GT35 turbines rated at 34,000 kW (46,000 shp) gross. The Stena Discovery was withdrawn from service in 2007, another victim of high fuel costs.
In July 2000, the Millennium became the first cruise ship to be powered by both gas and steam turbines. The ship featured two General Electric LM2500 gas turbine generators whose exhaust heat was used to operate a steam turbine generator in a COGES (combined gas electric and steam) configuration. Propulsion was provided by two electrically driven Rolls-Royce Mermaid azimuth pods. The liner RMS Queen Mary 2 uses a combined diesel and gas configuration.
In marine racing applications the 2010 C5000 Mystic catamaran Miss GEICO uses two Lycoming T-55 turbines for its power system.
== Advances in technology ==
Gas turbine technology has steadily advanced since its inception and continues to evolve. Development is actively producing both smaller gas turbines and more powerful and efficient engines. Aiding in these advances are computer-based design (specifically computational fluid dynamics and finite element analysis) and the development of advanced materials: Base materials with superior high-temperature strength (e.g., single-crystal superalloys that exhibit yield strength anomaly) or thermal barrier coatings that protect the structural material from ever-higher temperatures. These advances allow higher compression ratios and turbine inlet temperatures, more efficient combustion and better cooling of engine parts.
Computational fluid dynamics (CFD) has contributed to substantial improvements in the performance and efficiency of gas turbine engine components through enhanced understanding of the complex viscous flow and heat transfer phenomena involved. For this reason, CFD is one of the key computational tools used in design and development of gas turbine engines.
The simple-cycle efficiencies of early gas turbines were practically doubled by incorporating inter-cooling, regeneration (or recuperation), and reheating. These improvements, of course, come at the expense of increased initial and operating costs, and they cannot be justified unless the decrease in fuel costs offsets the increase in other costs. Relatively low fuel prices, the general desire in the industry to minimize installation costs, and the tremendous increase in simple-cycle efficiency to about 40 percent left little incentive to adopt these modifications.
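The thermodynamic ceiling on simple-cycle efficiency can be illustrated with the ideal air-standard Brayton cycle, whose efficiency depends only on the pressure ratio and the ratio of specific heats. The sketch below is a textbook idealization, not a model of any particular engine; real machines fall well short of it (hence the roughly 40 percent figure above) because of component losses and cooling-air penalties.

```python
def brayton_ideal_efficiency(pressure_ratio, gamma=1.4):
    """Ideal (air-standard) simple-cycle Brayton efficiency.

    eta = 1 - r^(-(gamma - 1)/gamma), where r is the compressor
    pressure ratio and gamma the ratio of specific heats (~1.4 for air).
    Real-engine efficiency is substantially lower due to compressor/
    turbine inefficiencies and turbine cooling losses.
    """
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

# Illustrative pressure ratios only (not tied to a specific engine):
for r in (5, 15, 30):
    print(f"r = {r:2d}: ideal efficiency = {brayton_ideal_efficiency(r):.1%}")
```

The monotonic rise with pressure ratio is one reason manufacturers keep pushing compression higher, as with the 61:1 ratio mentioned below for the GE9X.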
On the emissions side, the challenge is to increase turbine inlet temperatures while at the same time reducing peak flame temperature in order to achieve lower NOx emissions and meet the latest emission regulations. In May 2011, Mitsubishi Heavy Industries achieved a turbine inlet temperature of 1,600 °C (2,900 °F) on a 320 megawatt gas turbine, and 460 MW in gas turbine combined-cycle power generation applications in which gross thermal efficiency exceeds 60%.
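A combined-cycle figure above 60% gross thermal efficiency follows from stacking a steam bottoming cycle on the gas turbine's exhaust. As a rough idealization (assuming all heat rejected by the gas turbine is available to the steam cycle, which overstates a real heat-recovery steam generator), the combined efficiency can be sketched as:

```python
def combined_cycle_efficiency(eta_gt, eta_st):
    """Idealized combined-cycle efficiency.

    The gas turbine converts a fraction eta_gt of the fuel heat to work;
    the steam cycle then converts a fraction eta_st of the *remaining*
    heat (1 - eta_gt). Assumes lossless heat recovery, so this is an
    upper-bound sketch, not a plant model.
    """
    return eta_gt + (1.0 - eta_gt) * eta_st

# Illustrative values: a ~40%-efficient gas turbine topping a
# ~35%-efficient steam cycle already lands above 60%.
print(f"{combined_cycle_efficiency(0.40, 0.35):.0%}")
```

This simple composition explains why raising turbine inlet temperature (which raises the gas turbine's own efficiency and exhaust quality) is the main lever for pushing combined-cycle plants past the 60% mark.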
Compliant foil bearings were commercially introduced to gas turbines in the 1990s. These can withstand over a hundred thousand start/stop cycles and have eliminated the need for an oil system. The application of microelectronics and power switching technology has enabled the development of commercially viable electricity generation by microturbines for distribution and vehicle propulsion.
In 2013, General Electric started the development of the GE9X with a compression ratio of 61:1.
== Advantages and disadvantages ==
The following are advantages and disadvantages of gas-turbine engines:
Advantages include:
Very high power-to-weight ratio compared to reciprocating engines.
Smaller than most reciprocating engines of the same power rating.
Smooth rotation of the main shaft produces far less vibration than a reciprocating engine.
Fewer moving parts than reciprocating engines result in lower maintenance cost and higher reliability/availability over its service life.
Greater reliability, particularly in applications where sustained high power output is required.
Waste heat is dissipated almost entirely in the exhaust. This results in a high-temperature exhaust stream that is very usable for boiling water in a combined cycle, or for cogeneration.
Lower peak combustion pressures than reciprocating engines in general.
High shaft speeds in smaller "free turbine units", although larger gas turbines employed in power generation operate at synchronous speeds.
Low lubricating oil cost and consumption.
Can run on a wide variety of fuels.
Very low toxic emissions of CO and HC due to excess air, complete combustion and no "quench" of the flame on cold surfaces.
Disadvantages include:
Core engine costs can be high due to the use of exotic materials, especially in applications where high reliability is required (e.g. aircraft propulsion).
Less efficient than reciprocating engines at idle speed.
Longer startup than reciprocating engines.
Less responsive to changes in power demand compared with reciprocating engines.
Characteristic whine can be hard to suppress. The exhaust (particularly on turbojets) can also produce a distinctive roaring sound.
== Major manufacturers ==
Siemens Energy
Ansaldo
Mitsubishi Heavy Industries
Rolls-Royce
GE Aviation
GE Vernova
Silmash
ODK
Pratt & Whitney
P&W Canada
Solar Turbines
Alstom
Zorya-Mashproekt
MTU Aero Engines
MAN Turbo
IHI Corporation
Kawasaki Heavy Industries
HAL
BHEL
MAPNA
Techwin
Doosan Heavy
Doosan Enerbility
Shanghai Electric
Harbin Electric
AECC
== Testing ==
British, German, other national and international test codes are used to standardize the procedures and definitions used to test gas turbines. Selection of the test code to be used is an agreement between the purchaser and the manufacturer, and has some significance to the design of the turbine and associated systems. In the United States, ASME has produced several performance test codes on gas turbines. This includes ASME PTC 22–2014. These ASME performance test codes have gained international recognition and acceptance for testing gas turbines. The single most important and differentiating characteristic of ASME performance test codes, including PTC 22, is that the test uncertainty of the measurement indicates the quality of the test and is not to be used as a commercial tolerance.
== See also ==
List of aircraft engines
Centrifugal compressor
Gas turbine modular helium reactor
Rotating detonation engine
Pneumatic motor
Pulsejet
Steam turbine
Turbine engine failure
Wind turbine
== References ==
== Further reading ==
Stationary Combustion Gas Turbines including Oil & Over-Speed Control System description
"Aircraft Gas Turbine Technology" by Irwin E. Treager, McGraw-Hill, Glencoe Division, 1979, ISBN 0-07-065158-2.
"Gas Turbine Theory" by H.I.H. Saravanamuttoo, G.F.C. Rogers and H. Cohen, Pearson Education, 2001, 5th ed., ISBN 0-13-015847-X.
Leyes II, Richard A.; Fleming, William A. (1999). The History of North American Small Gas Turbine Aircraft Engines. Washington, DC: Smithsonian Institution. ISBN 978-1-56347-332-6.
R. M. "Fred" Klaass and Christopher DellaCorte, "The Quest for Oil-Free Gas Turbine Engines," SAE Technical Papers, No. 2006-01-3055, available at sae.org
"Model Jet Engines" by Thomas Kamps ISBN 0-9510589-9-1 Traplet Publications
Aircraft Engines and Gas Turbines, Second Edition by Jack L. Kerrebrock, The MIT Press, 1992, ISBN 0-262-11162-4.
"Forensic Investigation of a Gas Turbine Event" by John Molloy, M&M Engineering
"Gas Turbine Performance, 2nd Edition" by Philip Walsh and Paul Fletcher, Wiley-Blackwell, 2004 ISBN 978-0-632-06434-2
Advanced Technologies for Gas Turbines (Report). Washington, DC: The National Academies Press. 2020. doi:10.17226/25630. ISBN 978-0-309-66422-6.
== External links ==
Armagnac, Alden P. (December 1939). "New Era In Power To Turn Wheels". Popular Science. p. 81.
Technology Speed of Civil Jet Engines
MIT Gas Turbine Laboratory Archived 21 July 2010 at the Wayback Machine
MIT Microturbine research
California Distributed Energy Resource guide – Microturbine generators
Introduction to how a gas turbine works from "how stuff works.com" Archived 16 June 2008 at the Wayback Machine
Aircraft gas turbine simulator for interactive learning
An online handbook on stationary gas turbine technologies compiled by the US DOE. Archived 1 July 2017 at the Wayback Machine
Gasoline

Gasoline (North American English) or petrol (Commonwealth English) is a petrochemical product characterized as a transparent, yellowish, and flammable liquid normally used as a fuel for spark-ignited internal combustion engines. When formulated as a fuel for engines, gasoline is chemically composed of organic compounds derived from the fractional distillation of petroleum and later chemically enhanced with gasoline additives. It is a high-volume profitable product produced in crude oil refineries.
The ability of a particular gasoline blend to resist premature ignition (which causes knocking and reduces efficiency in reciprocating engines) is measured by its octane rating. Tetraethyl lead was once widely used to increase the octane rating but is not used in modern automotive gasoline due to the health hazard. Aviation, off-road motor vehicle, and racing car engines still use leaded gasoline. Other substances are frequently added to gasoline to improve chemical stability and performance characteristics, control corrosion, and provide fuel system cleaning. Gasoline may contain oxygen-containing chemicals such as ethanol, MTBE, or ETBE to improve combustion.
== History and etymology ==
English dictionaries show that the term gasoline originates from gas plus the chemical suffixes -ole and -ine. Petrol derives from the Medieval Latin word petroleum (L. petra, rock + oleum, oil).
Interest in gasoline-like fuels started with the invention of internal combustion engines suitable for use in transportation applications. The so-called Otto engines were developed in Germany during the last quarter of the 19th century. The fuel for these early engines was a relatively volatile hydrocarbon obtained from coal gas. With a boiling point near 85 °C (185 °F) (n-octane boils at 125.62 °C (258.12 °F)), it was well suited for early carburetors (evaporators). The development of a "spray nozzle" carburetor enabled the use of less volatile fuels. Further improvements in engine efficiency were attempted at higher compression ratios, but early attempts were blocked by the premature explosion of fuel, known as knocking. In 1891, the Shukhov cracking process became the world's first commercial method to break down heavier hydrocarbons in crude oil to increase the percentage of lighter products compared to simple distillation.
== Chemical analysis and production ==
Commercial gasoline as well as other liquid transportation fuels are complex mixtures of hydrocarbons. The performance specification also varies with season, requiring less volatile blends during summer, in order to minimize evaporative losses.
Gasoline is produced in oil refineries. Roughly 72 liters (19 U.S. gal) of gasoline is derived from a 160-liter (42 U.S. gal) barrel of crude oil. Material separated from crude oil via distillation, called virgin or straight-run gasoline, does not meet specifications for modern engines (particularly the octane rating; see below), but can be pooled to the gasoline blend.
The bulk of a typical gasoline consists of a homogeneous mixture of hydrocarbons with between four and twelve carbon atoms per molecule (commonly referred to as C4–C12). It is a mixture of paraffins (alkanes), olefins (alkenes), naphthenes (cycloalkanes), and aromatics. The use of the term paraffin in place of the standard chemical nomenclature alkane is particular to the oil industry (which relies extensively on jargon). The composition of a gasoline depends upon:
the oil refinery that makes the gasoline, as not all refineries have the same set of processing units;
the crude oil feed used by the refinery;
the grade of gasoline sought (in particular, the octane rating).
The various refinery streams blended to make gasoline have different characteristics. Some important streams include the following:
Straight-run gasoline, sometimes referred to as naphtha (and also light straight run naphtha "LSR" and light virgin naphtha "LVN"), is distilled directly from crude oil. Once the leading source of fuel, naphtha's low octane rating required organometallic fuel additives (primarily tetraethyllead) prior to their phaseout from the gasoline pool which started in 1975 in the United States. Straight run naphtha is typically low in aromatics (depending on the grade of the crude oil stream) and contains some cycloalkanes (naphthenes) and no olefins (alkenes). Between 0 and 20 percent of this stream is pooled into the finished gasoline because the quantity of this fraction in the crude is less than fuel demand and the fraction's Research Octane Number (RON) is too low. The chemical properties (namely RON and Reid vapor pressure (RVP)) of the straight-run gasoline can be improved through reforming and isomerization. However, before feeding those units, the naphtha needs to be split into light and heavy naphtha. Straight-run gasoline can also be used as a feedstock for steam-crackers to produce olefins.
Reformate, produced from straight run gasoline in a catalytic reformer, has a high octane rating with high aromatic content and relatively low olefin content. Most of the benzene, toluene, and xylene (the so-called BTX hydrocarbons) are more valuable as chemical feedstocks and are thus removed to some extent. Also the BTX content is regulated.
Catalytic cracked gasoline, or catalytic cracked naphtha, produced with a catalytic cracker, has a moderate octane rating, high olefin content, and moderate aromatic content.
Hydrocrackate (heavy, mid, and light), produced with a hydrocracker, has a medium to low octane rating and moderate aromatic levels.
Alkylate is produced in an alkylation unit, using isobutane and C3-/C4-olefins as feedstocks. Finished alkylate contains no aromatics or olefins and has a high MON (Motor Octane Number). Alkylate was used during World War II in aviation fuel. Since the late 1980s it has been sold as a specialty fuel for (handheld) gardening and forestry tools with combustion engines.
Isomerate is obtained by isomerizing low-octane straight-run gasoline into iso-paraffins (non-chain alkanes, such as isooctane). Isomerate has a medium RON and MON, but no aromatics or olefins.
Butane is usually blended in the gasoline pool, although the quantity of this stream is limited by the RVP specification.
Oxygenates (more specifically alcohols and ethers) are mostly blended into the pool in the US as ethanol. In Europe and other countries, the blends can contain ethanol in addition to methyl tert-butyl ether (MTBE) and ethyl tert-butyl ether (ETBE). In the United States, MTBE was banned by most states in the early to mid 2000s. A few countries, notably China, also allow methanol to be blended directly into gasoline. More about oxygenates and blending is covered further in this article.
The terms above are the jargon used in the oil industry, and the terminology varies.
Currently, many countries set limits on gasoline aromatics in general, benzene in particular, and olefin (alkene) content. Such regulations have led to an increasing preference for alkane isomers, such as isomerate or alkylate, as their octane rating is higher than that of n-alkanes. In the European Union, the benzene limit is set at one percent by volume for all grades of automotive gasoline. This is usually achieved by avoiding feeding C6, in particular cyclohexane, to the reformer unit, where it would be converted to benzene. Therefore, only (desulfurized) heavy virgin naphtha (HVN) is fed to the reformer unit.
Gasoline can also contain other organic compounds, such as organic ethers (deliberately added), plus small levels of contaminants, in particular organosulfur compounds (which are usually removed at the refinery).
On average, U.S. petroleum refineries produce about 19 to 20 gallons of gasoline, 11 to 13 gallons of distillate fuel (mostly diesel fuel) and 3 to 4 gallons of jet fuel from each 42 U.S. gallon (160 liter) barrel of crude oil. The product ratio depends upon the processing in an oil refinery and the crude oil assay.
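The yield figures above translate into simple volume fractions of the 42-gallon barrel. The sketch below uses illustrative midpoint values from the ranges quoted; the remaining barrel volume goes to other products (heavy fuel oil, LPG, asphalt, etc.), and total product volume can actually exceed 42 gallons because of refinery processing gain.

```python
BARREL_GAL = 42  # one barrel of crude oil, in U.S. gallons

# Midpoints of the quoted per-barrel yield ranges (illustrative only)
yields_gal = {
    "gasoline": 19.5,
    "distillate (diesel)": 12.0,
    "jet fuel": 3.5,
}

# Express each yield as a fraction of the input barrel
fractions = {product: gal / BARREL_GAL for product, gal in yields_gal.items()}

for product, frac in fractions.items():
    print(f"{product}: {frac:.0%} of barrel volume")
```

So gasoline accounts for a bit under half of each barrel by volume, which is why gasoline demand largely dictates how much crude U.S. refineries run.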
== Physical properties ==
=== Density ===
The specific gravity of gasoline ranges from 0.71 to 0.77, with higher densities having a greater volume fraction of aromatics. Finished marketable gasoline is traded (in Europe) against a standard reference of 0.755 kilograms per liter (6.30 lb/US gal; 7.57 lb/imp gal), and its price is escalated or de-escalated according to its actual density. Because of its low density, gasoline floats on water, and therefore water cannot generally be used to extinguish a gasoline fire unless applied in a fine mist.
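The quoted per-gallon figures follow directly from the 0.755 kg/L reference density and the standard gallon definitions. A small conversion sketch (the gallon sizes and pound conversion are standard constants, not taken from this article):

```python
LB_PER_KG = 2.20462262      # pounds per kilogram
L_PER_US_GAL = 3.785411784  # liters per U.S. gallon
L_PER_IMP_GAL = 4.54609     # liters per imperial gallon

def kg_per_l_to_lb_per_gal(density_kg_per_l, liters_per_gallon):
    """Convert a density in kg/L to pounds per (US or imperial) gallon."""
    return density_kg_per_l * liters_per_gallon * LB_PER_KG

rho = 0.755  # European trading reference density, kg/L
us_gal = kg_per_l_to_lb_per_gal(rho, L_PER_US_GAL)    # ~6.30 lb/US gal
imp_gal = kg_per_l_to_lb_per_gal(rho, L_PER_IMP_GAL)  # ~7.57 lb/imp gal
print(f"{us_gal:.2f} lb/US gal, {imp_gal:.2f} lb/imp gal")
```

The same conversion reproduces the avgas figure of about 6.01 lb/US gal from its 0.72 kg/L density.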
=== Stability ===
Quality gasoline should be stable for six months if stored properly, but can degrade over time. Gasoline stored for a year can usually still be burned in an internal combustion engine without much trouble. Gasoline should ideally be stored in an airtight container (to prevent oxidation or water vapor mixing in with the gas) that can withstand the vapor pressure of the gasoline without venting (to prevent the loss of the more volatile fractions) at a stable cool temperature (to reduce the excess pressure from liquid expansion and to reduce the rate of any decomposition reactions). When gasoline is not stored correctly, gums and solids may result, which can corrode system components and accumulate on wet surfaces, resulting in a condition called "stale fuel". Gasoline containing ethanol is especially subject to absorbing atmospheric moisture, then forming gums, solids, or two phases (a hydrocarbon phase floating on top of a water-alcohol phase).
The presence of these degradation products in the fuel tank, fuel lines, and carburetor or fuel injection components makes it harder to start the engine or causes reduced engine performance. On resumption of regular engine use, the buildup may or may not be eventually cleaned out by the flow of fresh gasoline. The addition of a fuel stabilizer to gasoline can extend the life of fuel that is not or cannot be stored properly, though removal of all fuel from a fuel system is the only real solution to the problem of long-term storage of an engine or a machine or vehicle. Typical fuel stabilizers are proprietary mixtures containing mineral spirits, isopropyl alcohol, 1,2,4-trimethylbenzene, or other additives. Fuel stabilizers are commonly used for small engines, such as lawnmower and tractor engines, especially when their use is sporadic or seasonal (little to no use for one or more seasons of the year). Users have been advised to keep gasoline containers more than half full and properly capped to reduce air exposure, to avoid storage at high temperatures, to run an engine for ten minutes to circulate the stabilizer through all components prior to storage, and to run the engine at intervals to purge stale fuel from the carburetor.
Gasoline stability requirements are set by the standard ASTM D4814. This standard describes the various characteristics and requirements of automotive fuels for use over a wide range of operating conditions in ground vehicles equipped with spark-ignition engines.
=== Combustion energy content ===
A gasoline-fueled internal combustion engine obtains energy from the combustion of gasoline's various hydrocarbons with oxygen from the ambient air, yielding carbon dioxide and water as exhaust. The combustion of octane, a representative species, performs the chemical reaction:
2 C8H18 + 25 O2 → 16 CO2 + 18 H2O
By weight, combustion of gasoline releases about 46.7 megajoules per kilogram (13.0 kWh/kg; 21.2 MJ/lb) or by volume 33.6 megajoules per liter (9.3 kWh/L; 127 MJ/U.S. gal; 121,000 BTU/U.S. gal), quoting the lower heating value. Gasoline blends differ, and therefore actual energy content varies according to the season and producer by up to 1.75 percent more or less than the average. On average, about 74 liters (20 U.S. gal) of gasoline are available from a barrel of crude oil (about 46 percent by volume), varying with the quality of the crude and the grade of the gasoline. The remainder is products ranging from tar to naphtha.
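The quoted energy-content figures are unit conversions of the same lower heating value; a short check, using standard conversion factors, reproduces them:

```python
# Cross-check of the quoted gasoline energy-content figures (LHV).
MJ_PER_KWH = 3.6
LITERS_PER_US_GAL = 3.785411784
BTU_PER_MJ = 947.817  # 1 BTU = 1055.056 J

lhv_mass = 46.7     # MJ/kg, from the text
lhv_volume = 33.6   # MJ/L, from the text

kwh_per_kg = lhv_mass / MJ_PER_KWH           # ≈ 13.0 kWh/kg
kwh_per_l = lhv_volume / MJ_PER_KWH          # ≈ 9.3 kWh/L
mj_per_gal = lhv_volume * LITERS_PER_US_GAL  # ≈ 127 MJ/U.S. gal
btu_per_gal = mj_per_gal * BTU_PER_MJ        # ≈ 121,000 BTU/U.S. gal
```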
A high-octane-rated fuel, such as liquefied petroleum gas (LPG), has an overall lower power output at the typical 10:1 compression ratio of an engine design optimized for gasoline fuel. An engine tuned for LPG fuel via higher compression ratios (typically 12:1) improves the power output. This is because higher-octane fuels allow for a higher compression ratio without knocking, resulting in a higher cylinder temperature, which improves efficiency. Also, increased mechanical efficiency is created by a higher compression ratio through the concomitant higher expansion ratio on the power stroke, which is by far the greater effect. The higher expansion ratio extracts more work from the high-pressure gas created by the combustion process. An Atkinson cycle engine uses the timing of the valve events to produce the benefits of a high expansion ratio without the disadvantages, chiefly detonation, of a high compression ratio. A high expansion ratio is also one of the two key reasons for the efficiency of diesel engines, along with the elimination of pumping losses due to throttling of the intake airflow.
The lower energy content of LPG by liquid volume in comparison to gasoline is due mainly to its lower density. This lower density is a property of the lower molecular weight of propane (LPG's chief component) compared to gasoline's blend of various hydrocarbon compounds with heavier molecular weights than propane. Conversely, LPG's energy content by weight is higher than gasoline's due to a higher hydrogen-to-carbon ratio.
Molecular weights of the species in the representative octane combustion are 114, 32, 44, and 18 for C8H18, O2, CO2, and H2O, respectively; therefore one kilogram (2.2 lb) of fuel reacts with 3.51 kilograms (7.7 lb) of oxygen to produce 3.09 kilograms (6.8 lb) of carbon dioxide and 1.42 kilograms (3.1 lb) of water.
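The mass ratios above follow directly from the balanced equation and can be reproduced in a few lines:

```python
# Reproduce the octane-combustion mass balance from the text:
# 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O
M = {"C8H18": 114.0, "O2": 32.0, "CO2": 44.0, "H2O": 18.0}  # g/mol

fuel_mass = 2 * M["C8H18"]              # 228 g of fuel per reaction
o2_per_kg = 25 * M["O2"] / fuel_mass    # ≈ 3.51 kg O2 per kg fuel
co2_per_kg = 16 * M["CO2"] / fuel_mass  # ≈ 3.09 kg CO2 per kg fuel
h2o_per_kg = 18 * M["H2O"] / fuel_mass  # ≈ 1.42 kg H2O per kg fuel

# Mass is conserved: reactant masses equal product masses.
assert abs((1 + o2_per_kg) - (co2_per_kg + h2o_per_kg)) < 1e-9
```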
== Octane rating ==
Spark-ignition engines are designed to burn gasoline in a controlled process called deflagration. However, the unburned mixture may autoignite from pressure and heat alone, rather than igniting from the spark plug at exactly the right time, causing a rapid pressure rise that can damage the engine. This is often referred to as engine knocking or end-gas knock. Knocking can be reduced by increasing the gasoline's resistance to autoignition, which is expressed by its octane rating.
Octane rating is measured relative to a mixture of 2,2,4-trimethylpentane (an isomer of octane) and n-heptane. There are different conventions for expressing octane ratings, so the same physical fuel may have several different octane ratings based on the measure used. One of the best known is the research octane number (RON).
The octane rating of typical commercially available gasoline varies by country. In Finland, Sweden, and Norway, 95 RON is the standard for regular unleaded gasoline and 98 RON is also available as a more expensive option.
In the United Kingdom, over 95 percent of gasoline sold has 95 RON and is marketed as Unleaded or Premium Unleaded. Super Unleaded, with 97/98 RON and branded high-performance fuels (e.g., Shell V-Power, BP Ultimate) with 99 RON make up the balance. Gasoline with 102 RON may rarely be available for racing purposes.
In the U.S., octane ratings in unleaded fuels vary between 85 and 87 AKI (91–92 RON) for regular, 89–90 AKI (94–95 RON) for mid-grade (equivalent to European regular), up to 90–94 AKI (95–99 RON) for premium (European premium).
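The U.S. AKI figures relate to RON via averaging: AKI, the "(R+M)/2" posted on U.S. pumps, is the mean of the research octane number (RON) and the motor octane number (MON). The sample MON values below are illustrative assumptions, since MON varies by blend:

```python
# AKI (anti-knock index) as posted on U.S. pumps: the (R+M)/2 average.
def aki(ron, mon):
    """Average of research and motor octane numbers."""
    return (ron + mon) / 2

# A typical U.S. regular: RON 91 with an assumed MON of 83 posts as 87 AKI,
# which is why U.S. pump numbers read lower than European RON figures.
regular = aki(91, 83)   # → 87.0
```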
As South Africa's largest city, Johannesburg, is located on the Highveld at 1,753 meters (5,751 ft) above sea level, the Automobile Association of South Africa recommends 95-octane gasoline at low altitude and 93-octane for use in Johannesburg because "The higher the altitude the lower the air pressure, and the lower the need for a high octane fuel as there is no real performance gain".
Octane rating became important as the military sought higher output for aircraft engines in the late 1920s and the 1940s. A higher octane rating allows a higher compression ratio or supercharger boost, and thus higher temperatures and pressures, which translate to higher power output. Some scientists even predicted that a nation with a good supply of high-octane gasoline would have the advantage in air power. In 1943, the Rolls-Royce Merlin aero engine produced 980 kilowatts (1,320 hp) using 100 RON fuel from a modest 27 liters (1,600 cu in) displacement. By the time of Operation Overlord, both the RAF and USAAF were conducting some operations in Europe using 150 RON fuel (100/150 avgas), obtained by adding 2.5 percent aniline to 100-octane avgas. By this time, the Rolls-Royce Merlin 66 was developing 1,500 kilowatts (2,000 hp) using this fuel.
== Additives ==
=== Antiknock additives ===
==== Tetraethyl lead ====
Gasoline, when used in high-compression internal combustion engines, tends to auto-ignite or "detonate" causing damaging engine knocking (also called "pinging" or "pinking"). To address this problem, tetraethyl lead (TEL) was widely adopted as an additive for gasoline in the 1920s. With a growing awareness of the seriousness of the extent of environmental and health damage caused by lead compounds, however, and the incompatibility of lead with catalytic converters, governments began to mandate reductions in gasoline lead.
In the U.S., the Environmental Protection Agency issued regulations to reduce the lead content of leaded gasoline over a series of annual phases, scheduled to begin in 1973 but delayed by court appeals until 1976. By 1995, leaded fuel accounted for only 0.6 percent of total gasoline sales and under 1,800 metric tons (2,000 short tons; 1,800 long tons) of lead per year. From 1 January 1996, the U.S. Clean Air Act banned the sale of leaded fuel for use in on-road vehicles in the U.S. The use of TEL also necessitated other additives, such as dibromoethane.
European countries began replacing lead-containing additives by the end of the 1980s, and by the end of the 1990s, leaded gasoline was banned within the entire European Union with an exception for Avgas 100LL for general aviation. The UAE started to switch to unleaded in the early 2000s.
Reduction in the average lead content of human blood may be a major cause for falling violent crime rates around the world including South Africa. A study found a correlation between leaded gasoline usage and violent crime (see Lead–crime hypothesis). Other studies found no correlation.
In August 2021, the UN Environment Programme announced that leaded gasoline had been eradicated worldwide, with Algeria being the last country to deplete its reserves. UN Secretary-General António Guterres called the eradication of leaded petrol an "international success story". He also added: "Ending the use of leaded petrol will prevent more than one million premature deaths each year from heart disease, strokes and cancer, and it will protect children whose IQs are damaged by exposure to lead". Greenpeace called the announcement "the end of one toxic era". However, leaded gasoline continues to be used in aeronautic, auto racing, and off-road applications. The use of leaded additives is still permitted worldwide for the formulation of some grades of aviation gasoline such as 100LL, because the required octane rating is difficult to reach without the use of leaded additives.
Different additives have replaced lead compounds. The most popular additives include aromatic hydrocarbons, ethers (MTBE and ETBE), and alcohols, most commonly ethanol.
==== Lead replacement petrol ====
Lead replacement petrol (LRP) was developed for vehicles designed to run on leaded fuels and incompatible with unleaded fuels. Rather than tetraethyllead, it contains other metals such as potassium compounds or methylcyclopentadienyl manganese tricarbonyl (MMT); these are purported to buffer soft exhaust valves and seats so that they do not suffer recession due to the use of unleaded fuel.
LRP was marketed during and after the phaseout of leaded motor fuels in the United Kingdom, Australia, South Africa, and some other countries. Consumer confusion led to a widespread mistaken preference for LRP rather than unleaded, and LRP was phased out 8 to 10 years after the introduction of unleaded.
Leaded gasoline was withdrawn from sale in Britain after 31 December 1999, seven years after EEC regulations signaled the end of production for cars using leaded gasoline in member states. At this stage, a large percentage of cars from the 1980s and early 1990s which ran on leaded gasoline were still in use, along with cars that could run on unleaded fuel. However, the declining number of such cars on British roads saw many gasoline stations withdrawing LRP from sale by 2003.
==== MMT ====
Methylcyclopentadienyl manganese tricarbonyl (MMT) is used in Canada and the U.S. to boost octane rating. Its use in the U.S. has been restricted by regulations, although it is currently allowed. Its use in the European Union is restricted by Article 8a of the Fuel Quality Directive following its testing under the Protocol for the evaluation of effects of metallic fuel-additives on the emissions performance of vehicles.
=== Fuel stabilizers (antioxidants and metal deactivators) ===
Gummy, sticky resin deposits result from oxidative degradation of gasoline during long-term storage. These harmful deposits arise from the oxidation of alkenes and other minor components in gasoline (see drying oils). Improvements in refinery techniques have generally reduced the susceptibility of gasolines to these problems. Previously, catalytically or thermally cracked gasolines were most susceptible to oxidation. The formation of gums is accelerated by copper salts, which can be neutralized by additives called metal deactivators.
This degradation can be prevented through the addition of 5–100 ppm of antioxidants, such as phenylenediamines and other amines. Hydrocarbons with a bromine number of 10 or above can be protected with a combination of unhindered or partially hindered phenols and oil-soluble strong amine bases. "Stale" gasoline can be detected by a colorimetric enzymatic test for organic peroxides produced by oxidation of the gasoline.
Gasolines are also treated with metal deactivators, which are compounds that sequester (deactivate) metal salts that otherwise accelerate the formation of gummy residues. The metal impurities might arise from the engine itself or as contaminants in the fuel.
=== Detergents ===
Gasoline, as delivered at the pump, also contains additives to reduce internal engine carbon buildups, improve combustion and allow easier starting in cold climates. High levels of detergent can be found in Top Tier Detergent Gasolines. The specification for Top Tier Detergent Gasolines was developed by four automakers: GM, Honda, Toyota, and BMW. According to the bulletin, the minimal U.S. EPA requirement is not sufficient to keep engines clean. Typical detergents include alkylamines and alkyl phosphates at a level of 50–100 ppm.
=== Ethanol ===
==== European Union ====
In the EU, 5 percent ethanol can be added within the common gasoline spec (EN 228). Discussions are ongoing to allow 10 percent blending of ethanol (available in Finnish, French and German gasoline stations). In Finland, most gasoline stations sell 95E10, which is 10 percent ethanol, and 98E5, which is 5 percent ethanol. Most gasoline sold in Sweden has 5–15 percent ethanol added. Three different ethanol blends are sold in the Netherlands—E5, E10 and hE15. The last of these differs from standard ethanol–gasoline blends in that it consists of 15 percent hydrous ethanol (i.e., the ethanol–water azeotrope) instead of the anhydrous ethanol traditionally used for blending with gasoline.
From 2009 to 2022, the renewable percentage in gasoline slowly increased from 5% to 10%, even though EU-produced ethanol can achieve climate-neutral production and most EU cars can use E10. E10 availability is low even in larger countries such as Germany (26%) and France (58%). Eight EU countries had not adopted E10 as of 2024.
==== Brazil ====
The Brazilian National Agency of Petroleum, Natural Gas and Biofuels (ANP) requires gasoline for automobile use to contain 27.5 percent ethanol. Pure hydrated ethanol is also available as a fuel.
==== Australia ====
Australia uses both E10 (up to 10% ethanol) and E85 (up to 85% ethanol) in its gasoline. New South Wales mandated biofuel in its Biofuels Act 2007, and Queensland has had a biofuel mandate since 2017. Fuel pumps must be clearly labeled with their ethanol/biodiesel content.
==== U.S. ====
The federal Renewable Fuel Standard (RFS) effectively requires refiners and blenders to blend renewable biofuels (mostly ethanol) with gasoline, sufficient to meet a growing annual target of total gallons blended. Although the mandate does not require a specific percentage of ethanol, annual increases in the target combined with declining gasoline consumption have caused the typical ethanol content in gasoline to approach 10 percent. Most fuel pumps display a sticker stating that the fuel may contain up to 10 percent ethanol; the deliberately loose wording reflects the fact that the actual percentage varies. In parts of the U.S., ethanol is sometimes added to gasoline without an indication that it is a component.
==== India ====
In October 2007, the Government of India decided to make five percent ethanol blending (with gasoline) mandatory. Currently, 10 percent ethanol blended product (E10) is being sold in various parts of the country. Ethanol has been found in at least one study to damage catalytic converters.
=== Dyes ===
Though gasoline is a naturally colorless liquid, many gasolines are dyed in various colors to indicate their composition and acceptable uses. In Australia, the lowest grade of gasoline (RON 91) was dyed a light shade of red/orange, but is now the same color as the medium grade (RON 95) and high octane (RON 98), which are dyed yellow. In the U.S., aviation gasoline (avgas) is dyed to identify its octane rating and to distinguish it from kerosene-based jet fuel, which is left colorless. In Canada, the gasoline for marine and farm use is dyed red and is not subject to fuel excise tax in most provinces.
=== Oxygenate blending ===
Oxygenate blending adds oxygen-bearing compounds such as methanol, MTBE, ETBE, TAME, TAEE, ethanol, and biobutanol. The presence of these oxygenates reduces the amount of carbon monoxide and unburned fuel in the exhaust. In many areas throughout the U.S., oxygenate blending is mandated by EPA regulations to reduce smog and other airborne pollutants. For example, in Southern California fuel must contain two percent oxygen by weight, resulting in a mixture of 5.6 percent ethanol in gasoline. The resulting fuel is often known as reformulated gasoline (RFG) or oxygenated gasoline, or, in the case of California, California reformulated gasoline (CARBOB). The federal requirement that RFG contain oxygen was dropped on 6 May 2006 because the industry had developed VOC-controlled RFG that did not need additional oxygen.
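The relationship between an oxygen mandate and an ethanol blend level can be sketched from ethanol's composition. Atomic masses are standard; the match to the roughly 5.6 percent figure quoted above is only approximate, since the exact number also depends on whether the blend is specified by weight or volume and on the densities involved:

```python
# Sketch: translating a 2 wt% oxygen mandate into an ethanol blend level.
M_C, M_H, M_O = 12.011, 1.008, 15.999   # standard atomic masses, g/mol
m_ethanol = 2 * M_C + 6 * M_H + M_O     # C2H5OH ≈ 46.07 g/mol
o_fraction = M_O / m_ethanol            # ≈ 0.347 oxygen by weight

required_oxygen = 0.02                  # 2 percent oxygen by weight
ethanol_wt_pct = 100 * required_oxygen / o_fraction  # ≈ 5.8 wt% ethanol
```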
MTBE was phased out in the U.S. due to groundwater contamination and the resulting regulations and lawsuits. Ethanol and, to a lesser extent, ethanol-derived ETBE are common substitutes. A common ethanol-gasoline mix of 10 percent ethanol mixed with gasoline is called gasohol or E10, and an ethanol-gasoline mix of 85 percent ethanol mixed with gasoline is called E85. The most extensive use of ethanol takes place in Brazil, where the ethanol is derived from sugarcane. In 2004, over 13 billion liters (3.4×10^9 U.S. gal) of ethanol was produced in the U.S. for fuel use, mostly from corn and sold as E10. E85 is slowly becoming available in much of the U.S., though many of the relatively few stations vending E85 are not open to the general public.
The use of bioethanol and bio-methanol, either directly or indirectly by conversion of ethanol to bio-ETBE, or methanol to bio-MTBE is encouraged by the European Union Directive on the Promotion of the use of biofuels and other renewable fuels for transport. Since producing bioethanol from fermented sugars and starches involves distillation, though, ordinary people in much of Europe cannot legally ferment and distill their own bioethanol at present (unlike in the U.S., where getting a BATF distillation permit has been easy since the 1973 oil crisis).
== Safety ==
=== Toxicity ===
The safety data sheet for a 2003 Texan unleaded gasoline shows at least 15 hazardous chemicals occurring in various amounts, including benzene (up to five percent by volume), toluene (up to 35 percent by volume), naphthalene (up to one percent by volume), trimethylbenzene (up to seven percent by volume), methyl tert-butyl ether (MTBE) (up to 18 percent by volume, in some states), and about 10 others. Hydrocarbons in gasoline generally exhibit low acute toxicities, with LD50 of 700–2700 mg/kg for simple aromatic compounds. Benzene and many antiknocking additives are carcinogenic.
People can be exposed to gasoline in the workplace by swallowing it, breathing in vapors, skin contact, and eye contact. Gasoline is toxic. The National Institute for Occupational Safety and Health (NIOSH) has also designated gasoline as a carcinogen. Physical contact, ingestion, or inhalation can cause health problems. Since ingesting large amounts of gasoline can cause permanent damage to major organs, a call to a local poison control center or emergency room visit is indicated.
Contrary to common misconception, swallowing gasoline does not generally require special emergency treatment, and inducing vomiting does not help, and can make it worse. According to poison specialist Brad Dahl, "even two mouthfuls wouldn't be that dangerous as long as it goes down to your stomach and stays there or keeps going". The U.S. CDC's Agency for Toxic Substances and Disease Registry says not to induce vomiting, lavage, or administer activated charcoal.
=== Inhalation for intoxication ===
Inhaled (huffed) gasoline vapor is a common intoxicant. Users concentrate and inhale gasoline vapor in a manner not intended by the manufacturer to produce euphoria and intoxication. Gasoline inhalation has become epidemic in some poorer communities and indigenous groups in Australia, Canada, New Zealand, and some Pacific Islands. The practice is thought to cause severe organ damage, along with other effects such as intellectual disability and various cancers.
In Canada, Native children in the isolated Northern Labrador community of Davis Inlet were the focus of national concern in 1993, when many were found to be sniffing gasoline. The Canadian and provincial Newfoundland and Labrador governments intervened on several occasions, sending many children away for treatment. Despite being moved to the new community of Natuashish in 2002, serious inhalant abuse problems have continued. Similar problems were reported in Sheshatshiu in 2000 and also in Pikangikum First Nation. In 2012, the issue once again made the news media in Canada.
Australia has long faced a petrol (gasoline) sniffing problem in isolated and impoverished aboriginal communities. Although some sources argue that sniffing was introduced by U.S. servicemen stationed in the nation's Top End during World War II or through experimentation by 1940s-era Cobourg Peninsula sawmill workers, other sources claim that inhalant abuse (such as glue inhalation) emerged in Australia in the late 1960s. Chronic, heavy petrol sniffing appears to occur among remote, impoverished indigenous communities, where the ready accessibility of petrol has helped to make it a common substance for abuse.
In Australia, petrol sniffing now occurs widely throughout remote Aboriginal communities in the Northern Territory, Western Australia, northern parts of South Australia, and Queensland. The number of people sniffing petrol goes up and down over time as young people experiment or sniff occasionally. "Boss", or chronic, sniffers may move in and out of communities; they are often responsible for encouraging young people to take it up. In 2005, the Government of Australia and BP Australia began the usage of Opal fuel in remote areas prone to petrol sniffing. Opal is a non-sniffable fuel (which is much less likely to cause a high) and has made a difference in some indigenous communities.
=== Flammability ===
Gasoline is flammable, with a low flash point of −23 °C (−9 °F). Gasoline has a lower explosive limit of 1.4 percent by volume and an upper explosive limit of 7.6 percent. If the concentration is below 1.4 percent, the air-gasoline mixture is too lean and does not ignite. If the concentration is above 7.6 percent, the mixture is too rich and also does not ignite. However, gasoline vapor rapidly mixes and spreads with air, making unconstrained gasoline quickly flammable.
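The flammable window described above is a simple range check between the two explosive limits, which a short sketch makes explicit:

```python
# The flammable window from the text: gasoline vapor ignites in air only
# between the lower (1.4 vol%) and upper (7.6 vol%) explosive limits.
LEL, UEL = 1.4, 7.6  # percent vapor by volume

def is_ignitable(vapor_pct):
    """Too lean below the LEL, too rich above the UEL."""
    return LEL <= vapor_pct <= UEL

assert not is_ignitable(0.5)   # too lean to ignite
assert is_ignitable(3.0)       # within the flammable range
assert not is_ignitable(10.0)  # too rich to ignite
```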
=== Gasoline exhaust ===
The exhaust gas generated by burning gasoline is harmful to both the environment and human health. After CO is inhaled, it readily combines with hemoglobin in the blood, with an affinity about 300 times that of oxygen; the hemoglobin in the lungs therefore binds CO instead of oxygen, leaving the body hypoxic and causing headaches, dizziness, vomiting, and other poisoning symptoms, and in severe cases death. Hydrocarbons affect the human body only at fairly high concentrations, and their toxicity depends on their chemical composition. The hydrocarbons produced by incomplete combustion include alkanes, aromatics, and aldehydes. A concentration of methane and ethane over 35 g/m3 (0.035 oz/cu ft) will cause loss of consciousness or suffocation, and a concentration of pentane and hexane over 45 g/m3 (0.045 oz/cu ft) has an anesthetic effect; aromatic hydrocarbons have more serious effects on health, including blood toxicity, neurotoxicity, and cancer. Benzene concentrations above 40 ppm can cause leukemia, and xylene can cause headache, dizziness, nausea, and vomiting. Exposure to large amounts of aldehydes can cause eye irritation, nausea, and dizziness; beyond their carcinogenic effects, long-term exposure can damage the skin, liver, and kidneys and cause cataracts. After NOx enters the alveoli, it severely irritates lung tissue; it can also irritate the conjunctiva of the eyes, causing tearing and pink eye (conjunctivitis), and irritates the nose, pharynx, throat, and other organs, potentially causing acute wheezing, breathing difficulties, red eyes, sore throat, and dizziness. Fine particulates are also dangerous to health.
== Environmental effect ==
The air pollution in many large cities has changed from coal-burning pollution to "motor vehicle pollution". In the U.S., transportation is the largest source of carbon emissions, accounting for 30 percent of the total carbon footprint of the U.S. Combustion of gasoline produces 2.35 kilograms per liter (19.6 lb/U.S. gal) of carbon dioxide, a greenhouse gas.
Unburnt gasoline and evaporation from the tank, once in the atmosphere, react in sunlight to produce photochemical smog. Vapor pressure initially rises as ethanol is added to gasoline, peaking at about 10 percent by volume; above that concentration, the blend's vapor pressure starts to decrease. At 10 percent ethanol by volume, the rise in vapor pressure may potentially worsen the problem of photochemical smog; this could be mitigated by raising or lowering the percentage of ethanol in the gasoline mixture. The chief risks of gasoline leaks come not from vehicles, but from gasoline delivery truck accidents and leaks from storage tanks. Because of this risk, most (underground) storage tanks now have extensive measures in place to detect and prevent any such leaks, such as monitoring systems (Veeder-Root, Franklin Fueling).
Production of gasoline consumes about 1.5 liters of water for every kilometer driven (0.63 U.S. gal/mi).
Gasoline use causes a variety of deleterious effects to the human population and to the climate generally. The harms imposed include a higher rate of premature death and ailments, such as asthma, caused by air pollution, higher healthcare costs for the public generally, decreased crop yields, missed work and school days due to illness, increased flooding and other extreme weather events linked to global climate change, and other social costs. The costs imposed on society and the planet are estimated to be $3.80 per gallon of gasoline, in addition to the price paid at the pump by the user. The damage to the health and climate caused by a gasoline-powered vehicle greatly exceeds that caused by electric vehicles.
Gasoline can be released into the environment as an uncombusted liquid fuel, as a flammable liquid, or as a vapor by way of leakages occurring during its production, handling, transport and delivery. Gasoline contains known carcinogens, and gasoline exhaust is a health risk. Gasoline is often used as a recreational inhalant and can be harmful or fatal when used in such a manner. When burned, one liter (0.26 U.S. gal) of gasoline emits about 2.3 kilograms (5.1 lb) of CO2, a greenhouse gas, contributing to human-caused climate change. Oil products, including gasoline, were responsible for about 32% of CO2 emissions worldwide in 2021.
=== Carbon dioxide ===
About 2.353 kilograms per liter (19.64 lb/U.S. gal) of carbon dioxide (CO2) are produced from burning gasoline that does not contain ethanol. Most of the retail gasoline now sold in the U.S. contains about 10 percent fuel ethanol (or E10) by volume. Burning E10 produces about 2.119 kilograms per liter (17.68 lb/U.S. gal) of CO2 that is emitted from the fossil fuel content. If the CO2 emissions from ethanol combustion are considered, then about 2.271 kilograms per liter (18.95 lb/U.S. gal) of CO2 are produced when E10 is combusted.
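The E10 figures follow from the gasoline figure plus ethanol's combustion stoichiometry. The sketch below uses the 2.353 kg/L value from the text; the ethanol density and the ethanol combustion equation are standard values:

```python
# Cross-check of the E10 CO2 figures quoted in the text.
CO2_GASOLINE = 2.353      # kg CO2 per liter of ethanol-free gasoline (from text)
ETHANOL_DENSITY = 0.789   # kg/L, standard value for ethanol
# C2H5OH + 3 O2 -> 2 CO2 + 3 H2O: 46 g ethanol yields 88 g CO2
CO2_ETHANOL = ETHANOL_DENSITY * 88 / 46  # ≈ 1.51 kg CO2 per liter of ethanol

fossil_only = 0.90 * CO2_GASOLINE                # ≈ 2.12 kg/L from the fossil share
with_ethanol = fossil_only + 0.10 * CO2_ETHANOL  # ≈ 2.27 kg/L counting ethanol CO2
```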
Worldwide, about 7 liters of gasoline are burnt for every 100 km driven by cars and vans.
In 2021, the International Energy Agency stated, "To ensure fuel economy and CO2 emissions standards are effective, governments must continue regulatory efforts to monitor and reduce the gap between real-world fuel economy and rated performance."
=== Contamination of soil and water ===
Gasoline enters the environment through the soil, groundwater, surface water, and air, so humans may be exposed to it by inhalation, ingestion, and skin or eye contact. Common routes of exposure include using gasoline-filled equipment such as lawnmowers, drinking water contaminated by gasoline spills or leaks into the soil, working at a gasoline station, and inhaling gasoline vapor while refueling.
== Use and pricing ==
The International Energy Agency said in 2021 that "road fuels should be taxed at a rate that reflects their impact on people's health and the climate".
=== Europe ===
Countries in Europe impose substantially higher taxes on fuels such as gasoline when compared to the U.S. The price of gasoline in Europe is typically higher than that in the U.S. due to this difference.
=== U.S. ===
From 1998 to 2004, the price of gasoline fluctuated between $0.26 and $0.53 per liter ($1 and $2/U.S. gal). After 2004, the price increased until the average gasoline price reached a high of $1.09 per liter ($4.11/U.S. gal) in mid-2008 but receded to approximately $0.69 per liter ($2.60/U.S. gal) by September 2009. The U.S. experienced an upswing in gasoline prices through 2011, and, by 1 March 2012, the national average was $0.99 per liter ($3.74/U.S. gal). California prices are higher because the California government mandates unique California gasoline formulas and taxes.
In the U.S., most consumer goods bear pre-tax prices, but gasoline prices are posted with taxes included. Taxes are added by federal, state, and local governments. As of 2009, the federal tax was $0.049 per liter ($0.184/U.S. gal) for gasoline and $0.064 per liter ($0.244/U.S. gal) for diesel (excluding red diesel).
About nine percent of all gasoline sold in the U.S. in May 2009 was premium grade, according to the Energy Information Administration. Consumer Reports magazine says, "If [your owner's manual] says to use regular fuel, do so—there's no advantage to a higher grade." The Associated Press said premium gas—which has a higher octane rating and costs more per gallon than regular unleaded—should be used only if the manufacturer says it is "required". Cars with turbocharged engines and high compression ratios often specify premium gasoline because higher octane fuels reduce the incidence of "knock", or fuel pre-detonation. The price of gasoline varies considerably between the summer and winter months.
Summer and winter gasoline blends differ considerably in vapor pressure (Reid vapor pressure, RVP), a measure of how easily the fuel evaporates at a given temperature: the higher the RVP, the more volatile the fuel. Refiners switch between the two blends twice a year, to the winter blend in autumn and to the summer blend in spring. The winter blend has a higher RVP because the fuel must evaporate readily at low temperatures for the engine to run normally; if the RVP is too low on a cold day, the vehicle is difficult to start. The summer blend has a lower RVP, which prevents excessive evaporation as outdoor temperatures rise, reduces evaporative emissions that contribute to ozone and smog, and makes vapor lock less likely in hot weather.
== Gasoline production by country ==
== Comparison with other fuels ==
Below is a table of the energy density (per volume) and specific energy (per mass) of various transportation fuels as compared with gasoline. The gross and net figures are taken from the Oak Ridge National Laboratory's Transportation Energy Data Book.
== See also ==
Chevron published a free technical guide, Motor Gasolines Technical Review, which explains in plain language gasoline production, blending, and combustion in an engine. The report covers the U.S. and other locations globally.
== Explanatory notes ==
== References ==
=== Bibliography ===
== External links ==
CNN/Money: Global gas prices
EEP: European gas prices
Transportation Energy Data Book
Energy Supply Logistics Searchable Directory of US Terminals
High octane fuel, leaded and LRP gasoline—article from robotpig.net
CDC – NIOSH Pocket Guide to Chemical Hazards
Comparison of Regular, Midgrade, and Premium Fuel
Images
Down the Gasoline Trail, Jam Handy Organization, 1935 (cartoon) |
General Electric | General Electric Company (GE) was an American multinational conglomerate founded in 1892, incorporated in the state of New York and headquartered in Boston.
Over the years, the company had multiple divisions, including aerospace, energy, healthcare, lighting, locomotives, appliances, and finance. In 2020, GE ranked among the Fortune 500 as the 33rd largest firm in the United States by gross revenue. In 2023, the company was ranked 64th in the Forbes Global 2000. In 2011, GE ranked among the Fortune 20 as the 14th most profitable company, but it later severely underperformed the market (by about 75%) as its profitability collapsed. Two employees of GE—Irving Langmuir (1932) and Ivar Giaever (1973)—have been awarded the Nobel Prize. From 1986 until 2013, GE owned the NBC television network, which it gained through its purchase of former subsidiary RCA; NBC's parent company NBCUniversal was subsequently acquired by Comcast in a transaction begun in 2011 and completed in 2013.
Following the Great Recession of the late 2000s, General Electric began selling off various divisions and assets, including its appliances and financial capital divisions, under Jeff Immelt's leadership as CEO. John Flannery, Immelt's replacement in 2017, further divested General Electric's assets in locomotives and lighting in order to focus the company more on aviation. Restrictions on air travel during the COVID-19 pandemic caused General Electric's revenue to fall significantly in 2020. Ultimately, GE's final CEO Larry Culp announced in November 2021 that General Electric was to be broken up into three separate public companies by 2024. GE Aerospace, the aerospace company, is GE's legal successor. GE HealthCare, the health technology company, was spun off from GE in 2023. GE Vernova, the energy company, was founded when GE finalized the split. Following these transactions, GE Aerospace took the General Electric name and ticker symbols, while the old General Electric ceased to exist as a conglomerate.
== History ==
=== Formation ===
During 1889, Thomas Edison (1847–1931) had business interests in many electricity-related companies, including Edison Lamp Company, a lamp manufacturer in East Newark, New Jersey; Edison Machine Works, a manufacturer of dynamos and large electric motors in Schenectady, New York; Bergmann & Company, a manufacturer of electric lighting fixtures, sockets, and other electric lighting devices; and Edison Electric Light Company, the patent-holding company and financial arm for Edison's lighting experiments, backed by J. P. Morgan (1837–1913) and the Vanderbilt family.
Henry Villard, a long-time Edison supporter and investor, proposed to consolidate all of these business interests. The proposal was supported by Samuel Insull, who served as Edison's secretary and, later, financier, as well as by other investors. In 1889, Drexel, Morgan & Co.—a company founded by J. P. Morgan and Anthony J. Drexel—financed Edison's research and helped merge several of Edison's separate companies under one corporation, forming Edison General Electric Company, which was incorporated in New York on April 24, 1889. The new company acquired Sprague Electric Railway & Motor Company in the same year. The consolidation did not involve all of the companies established by Edison; notably, the Edison Illuminating Company, which would later become Consolidated Edison, was not part of the merger.
In 1880, Gerald Waldo Hart formed the American Electric Company of New Britain, Connecticut, which merged a few years later with Thomson-Houston Electric Company, led by Charles Coffin. In 1887, Hart left to become superintendent of the Edison Electric Company. General Electric was formed through the 1892 merger of Edison General Electric Company and Thomson-Houston Electric Company with the support of Drexel, Morgan & Co. The original plants of both companies continue to operate under the GE banner to this day.
The General Electric business was incorporated in New York, with the Schenectady plant used as headquarters for many years thereafter. Around the same time, General Electric's Canadian counterpart, Canadian General Electric, was formed.
In 1893, General Electric bought the business of Rudolf Eickemeyer in Yonkers, New York, along with all of its patents and designs. Eickemeyer's firm had developed transformers for use in the transmission of electrical power.
=== Public company ===
In 1896, General Electric was one of the original 12 companies listed on the newly formed Dow Jones Industrial Average, where it remained a part of the index for 122 years, though not continuously.
In 1911, General Electric absorbed the National Electric Lamp Association (NELA) into its lighting business. GE established its lighting division headquarters at Nela Park in East Cleveland, Ohio. The lighting division has since remained in the same location.
=== RCA and NBC ===
In 1919, Owen D. Young, then GE's general counsel and vice president, founded the Radio Corporation of America (RCA) through GE. This came after Young, working with senior naval officers, purchased the Marconi Wireless Telegraph Company of America, a subsidiary of the British Marconi Wireless and Signal Company; he aimed to expand international radio communications. GE used RCA as its retail arm for radio sales. In 1926, RCA co-founded the National Broadcasting Company (NBC), which built two radio broadcasting networks. In 1930, General Electric was charged with antitrust violations and was ordered to divest itself of RCA.
=== Television ===
In 1927, Ernst Alexanderson of GE made the first demonstration of television broadcast reception at his General Electric Realty Plot home at 1132 Adams Road in Schenectady, New York. On January 13, 1928, he made what was said to be the first broadcast to the public in the United States on GE's W2XAD: the pictures were picked up on 1.5 square inches (9.7 square centimeters) screens in the homes of four GE executives. The sound was broadcast on GE's WGY (AM).
Experimental television station W2XAD evolved into the station WRGB, which, along with WGY and WGFM (now WRVE), was owned and operated by General Electric until 1983. In 1965, the company expanded into cable television when it was awarded a non-exclusive franchise in Schenectady through its subsidiary General Electric Cablevision Corporation. On February 15, 1965, General Electric also moved to acquire additional television stations, up to the FCC's ownership limit, as well as further cable holdings, through its subsidiaries General Electric Broadcasting Company and General Electric Cablevision Corporation.
The company also owned television stations such as KOA-TV (now KCNC-TV) in Denver and WSIX-TV (later WNGE-TV, now WKRN) in Nashville. As with WRGB, General Electric sold off most of its broadcasting holdings, but it kept the Denver station until 1986, when GE's purchase of RCA turned it into an NBC owned-and-operated station. It remained so until 1995, when it was transferred to a joint venture between CBS and Group W in a swap deal, along with KUTV in Salt Lake City, in exchange for longtime CBS O&O WCAU-TV in Philadelphia.
==== Former General Electric-owned stations ====
Stations are arranged in alphabetical order by state and city of license.
(**) Indicates a station that was built and signed on by General Electric.
=== Radio stations ===
=== Power generation ===
Led by Sanford Alexander Moss, GE moved into the new field of aircraft turbosuperchargers. This technology also led to the development of industrial gas turbine engines used for power production. GE introduced its first turbosuperchargers during World War I and continued to develop them during the interwar period; they became indispensable in the years immediately before World War II, and GE supplied 300,000 turbosuperchargers for use in fighter and bomber engines. This turbine expertise led the U.S. Army Air Corps to select GE to develop the nation's first jet engine during the war, based on the British Whittle W.1 engine demonstrated in the United States in 1941. GE was ranked ninth among United States corporations in the value of wartime production contracts. However, its early work with Whittle's designs was later handed to the Allison Engine Company. GE Aviation then emerged as one of the world's largest engine manufacturers, surpassing the British company Rolls-Royce plc.
Some consumers boycotted GE light bulbs, refrigerators, and other products during the 1980s and 1990s. The purpose of the boycott was to protest against GE's role in nuclear weapons production.
In 2002, GE acquired the wind power assets of Enron during its bankruptcy proceedings. Enron Wind was the only surviving U.S. manufacturer of large wind turbines at the time; GE expanded the division's engineering and supply capacity and doubled its annual sales to $1.2 billion in 2003. It acquired ScanWind in 2009.
In 2018, GE Power garnered press attention when a model 7HA gas turbine in Texas was shut down for two months after a turbine blade broke. This model uses similar blade technology to GE's newest and most efficient model, the 9HA. After the failure, GE developed new protective coatings and heat treatment methods. Gas turbines represent a significant portion of GE Power's revenue, and also represent a significant portion of the power generation fleet of several utility companies in the United States. Chubu Electric of Japan and Électricité de France also had units that were affected. Initially, GE did not realize that the turbine blade issue in the 9FB unit would affect the new HA units.
=== Computing ===
GE was one of the eight major computer companies of the 1960s along with IBM, Burroughs, NCR, Control Data Corporation, Honeywell, RCA, and UNIVAC. GE had a line of general purpose and special purpose computers, including the GE 200, GE 400, and GE 600 series general-purpose computers, the GE/PAC 4000 series real-time process control computers, and the DATANET-30 and Datanet 355 message switching computers (DATANET-30 and 355 were also used as front end processors for GE mainframe computers). A Datanet 500 computer was designed but never sold.
In 1956 Homer Oldfield was promoted to General Manager of GE's Computer Department. He facilitated the invention and construction of the Bank of America ERMA system, the first computerized system designed to read magnetized numbers on checks. But he was fired from GE in 1958 by Ralph J. Cordiner for overstepping his bounds and successfully gaining the ERMA contract. Cordiner was strongly against GE entering the computer business because he did not see the potential in it.
In 1962, GE started developing its GECOS (later renamed GCOS) operating system, originally for batch processing, but later extended to time-sharing and transaction processing. Versions of GCOS are still in use today. From 1964 to 1969, GE and Bell Laboratories (which soon dropped out) joined with MIT to develop the Multics operating system on the GE 645 mainframe computer. The project took longer than expected and was not a major commercial success, but it demonstrated concepts such as single-level storage, dynamic linking, hierarchical file system, and ring-oriented security. Active development of Multics continued until 1985.
GE entered computer manufacturing because, in the 1950s, it was the largest user of computers outside the United States federal government, and it had been the first business in the world to own a computer: its major appliance manufacturing plant "Appliance Park" was the first non-governmental site to host one. However, in 1970, GE sold its computer division to Honeywell, exiting the computer manufacturing industry, though it retained its timesharing operations for some years afterward. GE was a major provider of computer time-sharing services through General Electric Information Services (GEIS, now GXS), offering online computing services that included GEnie.
In 2000, when United Technologies Corp. planned to buy Honeywell, GE made a counter-offer that was approved by Honeywell. On July 3, 2001, the European Union announced its decision to "prohibit the proposed acquisition by General Electric Co. of Honeywell Inc.", on the grounds that it "would create or strengthen dominant positions on several markets and that the remedies proposed by GE were insufficient to resolve the competition concerns resulting from the proposed acquisition of Honeywell".
On June 27, 2014, GE partnered with collaborative design company Quirky to announce its connected LED bulb called Link. The Link bulb is designed to communicate with smartphones and tablets using a mobile app called Wink.
=== Acquisitions and divestments ===
In December 1985, GE reacquired the RCA Corporation, primarily to gain ownership of the NBC television network for $6.28 billion; this merger surpassed the Capital Cities/ABC merger from earlier that year as the largest non-oil company merger in world business history. The remainder of RCA's divisions and assets were sold to various companies, including Bertelsmann Music Group which acquired RCA Records. Thomson SA, which licensed the manufacture of RCA and GE branded electronics, traced its roots to Thomson-Houston, one of the original components of GE. Also in 1986, Kidder, Peabody & Co., a U.S.-based securities firm, was sold to GE and following heavy losses was sold to PaineWebber in 1994.
In 1993, GE sold its Aerospace business to Martin Marietta.
In 1997, Genpact was founded as a unit of General Electric in Gurgaon, under the name GE Capital International Services (GECIS). In the beginning, GECIS created processes for outsourcing back-office activities for GE Capital, such as processing car loans and credit card transactions. It was an experimental concept at the time and marked the beginning of the business process outsourcing (BPO) industry. GE sold a 60% stake in Genpact to General Atlantic and Oak Hill Capital Partners in 2005 and spun Genpact off as an independent business. GE remains a major client of Genpact for services in customer service, finance, information technology, and analytics.
In 2001, GE acquired Spanish-language broadcaster Telemundo and incorporated it into its National Broadcasting Company, Inc. subsidiary.
In 2002, Francisco Partners and Norwest Venture Partners acquired a division of GE called GE Information Systems (GEIS). The new company, named GXS, is based in Gaithersburg, Maryland. GXS is a provider of business-to-business e-commerce solutions. GE maintains a minority stake in GXS. Also in 2002, GE Wind Energy was formed when GE bought the wind turbine manufacturing assets of Enron Wind after the Enron scandals.
In 2004, GE bought 80% of Vivendi Universal Entertainment, the parent of Universal Pictures from Vivendi. Vivendi Universal was merged with NBC to form NBCUniversal. GE then owned 80% of NBCUniversal and Vivendi owned 20%. In 2004, GE completed the spin-off of most of its mortgage and life insurance assets into an independent company, Genworth Financial, based in Richmond, Virginia.
In May 2007, GE acquired Smiths Aerospace for $4.8 billion. Also in 2007, GE Oil & Gas acquired Vetco Gray for $1.9 billion, followed by the acquisition of Hydril Pressure & Control in 2008 for $1.1 billion.
GE Plastics was sold in 2008 to SABIC (Saudi Arabia Basic Industries Corporation). In May 2008, GE announced it was exploring options for divesting the bulk of its consumer and industrial business.
On December 3, 2009, it was announced that NBCUniversal would become a joint venture between GE and cable television operator Comcast. Comcast would hold a controlling interest in the company, while GE would retain a 49% stake and would buy out shares owned by Vivendi.
Vivendi would sell its 20% stake in NBCUniversal to GE for US$5.8 billion. Vivendi would sell 7.66% of NBCUniversal to GE for US$2 billion if the GE/Comcast deal was not completed by September 2010 and then sell the remaining 12.34% stake of NBCUniversal to GE for US$3.8 billion when the deal was completed or to the public via an IPO if the deal was not completed.
On March 1, 2010, GE announced plans to sell its 20.85% stake in Turkey-based Garanti Bank. In August 2010, GE Healthcare signed a strategic partnership to bring cardiovascular Computed Tomography (CT) technology from start-up Arineta Ltd. of Israel to the hospital market. In October 2010, GE acquired gas engines manufacturer Dresser Industries in a $3 billion deal and also bought a $1.6 billion portfolio of retail credit cards from Citigroup Inc. On October 14, 2010, GE announced the acquisition of data migration & SCADA simulation specialists Opal Software. In December 2010, for the second time that year (after the Dresser acquisition), GE bought the oil sector company Wellstream, an oil pipe maker, for 800 million pounds ($1.3 billion).
In March 2011, GE announced that it had completed the acquisition of privately held Lineage Power Holdings from The Gores Group. In April 2011, GE announced it had completed its purchase of John Wood plc's Well Support Division for $2.8 billion.
In 2011, GE Capital sold its $2 billion Mexican assets to Santander for $162 million and exited the business in Mexico. Santander additionally assumed the portfolio debts of GE Capital in the country. Following this, GE Capital focused on its core business and shed its non-core assets.
In June 2012, CEO and President of GE Jeff Immelt said that the company would invest ₹3 billion to accelerate its businesses in Karnataka. In October 2012, GE acquired $7 billion worth of bank deposits from MetLife Inc.
On March 19, 2013, Comcast bought GE's shares in NBCU for $16.7 billion, ending the company's longtime stake in television and film media.
In April 2013, GE acquired oilfield pump maker Lufkin Industries for $2.98 billion.
In April 2014, it was announced that GE was in talks to acquire the global power division of French engineering group Alstom for a figure of around $13 billion. A rival joint bid was submitted in June 2014 by Siemens and Mitsubishi Heavy Industries (MHI) with Siemens seeking to acquire Alstom's gas turbine business for €3.9 billion, and MHI proposing a joint venture in steam turbines, plus a €3.1 billion cash investment. In June 2014, a formal offer from GE worth $17 billion was agreed by the Alstom board. Part of the transaction involved the French government taking a 20% stake in Alstom to help secure France's energy and transport interests and French jobs. A rival offer from Siemens Mitsubishi Heavy Industries was rejected. The acquisition was expected to be completed in 2015. In October 2014, GE announced it was considering the sale of its Polish banking business Bank BPH.
Later in 2014, General Electric announced plans to open its global operations center in Cincinnati, Ohio. The Global Operations Center opened in October 2016 as home to GE's multifunctional shared services organization. It supports the company's finance/accounting, human resources, information technology, supply chain, legal and commercial operations, and is one of GE's four multifunctional shared services centers worldwide, alongside those in Pudong, China; Budapest, Hungary; and Monterrey, Mexico.
In April 2015, GE announced its intention to sell off its property portfolio, worth $26.5 billion, to Wells Fargo and The Blackstone Group. It was announced in April 2015 that GE would sell most of its finance unit and return around $90 billion to shareholders as the firm looked to trim down its holdings and rid itself of its image of a "hybrid" company, working in both banking and manufacturing. In August 2015, GE Capital agreed to sell its Healthcare Financial Services business to Capital One for US$9 billion. The transaction involved US$8.5 billion of loans made to a wide array of sectors, including senior housing, hospitals, medical offices, outpatient services, pharmaceuticals, and medical devices. Also in August 2015, GE Capital agreed to sell GE Capital Bank's on-line deposit platform to Goldman Sachs. Terms of the transaction were not disclosed, but the sale included US$8 billion of on-line deposits and another US$8 billion of brokered certificates of deposit. The sale was part of GE's strategic plan to exit the U.S. banking sector and to free itself from tightening banking regulations. GE also aimed to shed its status as a "systemically important financial institution".
In September 2015, GE Capital agreed to sell its transportation finance unit to Canada's Bank of Montreal. The unit sold had US$8.7 billion (CA$11.5 billion) of assets, 600 employees, and 15 offices in the U.S. and Canada. The exact terms of the sale were not disclosed, but the final price would be based on the value of the assets at closing, plus a premium according to the parties. In October 2015, activist investor Nelson Peltz's fund Trian bought a $2.5 billion stake in the company.
In January 2016, Haier acquired GE's appliance division for $5.4 billion. In October 2016, GE Renewable Energy agreed to pay €1.5 billion to Doughty Hanson & Co for LM Wind Power during 2017.
At the end of October 2016, it was announced that GE was under negotiations for a deal valued at about $30 billion to combine GE Oil & Gas with Baker Hughes. The transaction would create a publicly traded entity controlled by GE. It was announced that GE Oil & Gas would sell off its water treatment business, GE Water & Process Technologies, as part of its divestment agreement with Baker Hughes. The deal was cleared by the EU in May 2017, and by the United States Department of Justice in June 2017. The merger agreement was approved by shareholders at the end of June 2017. On July 3, 2017, the transaction was completed, and Baker Hughes became a GE company and was renamed Baker Hughes, a GE Company (BHGE). In November 2018, GE reduced its stake in Baker Hughes to 50.4%. On October 18, 2019, GE reduced its stake to 36.8% and the company was renamed back to Baker Hughes.
In May 2017, GE had signed $15 billion of business deals with Saudi Arabia. Saudi Arabia is one of GE's largest customers. In September 2017, GE announced the sale of its Industrial Solutions Business to ABB. The deal closed on June 30, 2018.
=== Fraud allegations and notice of possible SEC civil action ===
On August 15, 2019, Harry Markopolos, a financial fraud investigator known for his discovery of a Ponzi scheme run by Bernard Madoff, accused General Electric of being a "bigger fraud than Enron," alleging $38 billion in accounting fraud. GE denied wrongdoing.
On October 6, 2020, General Electric reported it received a Wells notice from the Securities and Exchange Commission stating the SEC may take civil action for possible violations of securities laws.
==== Insufficient reserves for long-term care policies ====
It is alleged that GE is "hiding" (i.e., has under-reserved) $29 billion in losses related to its long-term care business.
According to an August 2019 Fitch Ratings report, there are concerns that GE has not set aside enough money to cover its long-term care liabilities.
In 2018, a lawsuit (the Bezio case) was filed in New York state court on behalf of participants in GE's 401(k) plan and shareowners alleging violations of Section 11 of the Securities Act of 1933 based on alleged misstatements and omissions related to insurance reserves and performance of GE's business segments.
The Kansas Insurance Department (KID) is requiring General Electric to make $14.5 billion of capital contributions for its insurance contracts during the 7-year period ending in 2024.
GE reported the total liability related to its insurance contracts increased significantly from 2016 to 2019:
December 31, 2016 $26.1 billion
December 31, 2017 $38.6 billion
December 31, 2018 $35.6 billion
December 31, 2019 $39.6 billion
In 2018, GE announced that the issuance of the new standard by the Financial Accounting Standards Board (FASB) regarding Financial Services – Insurance (Topic 944) would materially affect its financial statements. Mr. Markopolos estimated there would be a US$10.5 billion charge when the new accounting standard was adopted in the first quarter of 2021.
==== Anticipated $8 billion loss upon disposition of Baker Hughes ====
In 2017, GE acquired a 62.5% interest in Baker Hughes (BHGE) when it combined its oil & gas business with Baker Hughes Incorporated.
In 2018, GE reduced its interest to 50.4%, resulting in the realization of a $2.1 billion loss. GE is planning to divest its remaining interest and has warned that the divestment will result in an additional loss of $8.4 billion (assuming a BHGE share price of $23.57 per share). In response to the fraud allegations, GE noted the amount of the loss would be $7.4 billion if the divestment occurred on July 26, 2019. Mr. Markopolos noted that BHGE is an asset available for sale and therefore mark-to-market accounting is required.
Markopolos noted GE's current ratio was only 0.67. He expressed concerns that GE may file for bankruptcy if there is a recession.
=== Final years and three-way split (2018–2024) ===
In 2018, the GE Pension Plan reported losses of US$3.3 billion on plan assets.
In 2018, General Electric changed the discount rate used to calculate the actuarial liabilities of its pension plans. The rate was increased from 3.64% to 4.34%. Consequently, the reported liability for the underfunded pension plans decreased by $7 billion year-over-year, from $34.2 billion in 2017 to $27.2 billion in 2018.
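The direction of this effect follows from discounting: the same future benefit payments are divided by a larger factor at a higher rate, so the present value of the obligation falls. A toy sketch with a hypothetical 30-year stream of equal payments (not GE's actual actuarial cash-flow profile; the size of the reduction depends on the plan's duration):

```python
# Toy illustration of discount-rate sensitivity: the same stream of
# future benefit payments has a smaller present value at a higher rate.
# The 30-year flat payment profile is hypothetical, not GE's model.

def present_value(cashflows, rate):
    """Discount annual cashflows (paid at the end of years 1, 2, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

payments = [1.0] * 30  # 30 equal annual payments, arbitrary units

pv_2017 = present_value(payments, 0.0364)  # rate used for 2017
pv_2018 = present_value(payments, 0.0434)  # rate used for 2018

print(f"PV at 3.64%: {pv_2017:.2f}")  # ~18.07
print(f"PV at 4.34%: {pv_2018:.2f}")  # ~16.60
print(f"Reduction:   {1 - pv_2018 / pv_2017:.1%}")
```

A longer-duration payment stream is more rate-sensitive, so the reduction varies with the plan's cash-flow profile.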
In October 2018, General Electric announced it would "freeze pensions" for about 20,000 salaried U.S. employees, moving them to a defined contribution retirement plan in 2021.
On March 30, 2020, General Electric factory workers protested, calling for jet engine factories to be converted to make ventilators during the COVID-19 crisis.
In June 2020, GE made an agreement to sell its Lighting business to Savant Systems, Inc. Financial details of the transaction were not disclosed.
In November 2020, General Electric warned it would be cutting jobs while waiting for a recovery from the COVID-19 pandemic.
On November 9, 2021, the company announced it would divide itself into three public companies. On July 18, 2022, GE unveiled the brand names of the companies it had devised through its planned separation: GE Aerospace, GE HealthCare, and GE Vernova. The new companies are respectively focused on aerospace, healthcare, and energy (renewable energy, power, and digital). The first spin-off, GE HealthCare, was finalized on January 4, 2023; GE retained 10.24% of its shares with the intention of selling the remaining stake over time. This was followed by the spin-off of GE's portfolio of energy businesses, which became GE Vernova on April 2, 2024. Following these transactions, GE became an aviation-focused company; GE Aerospace is the legal successor of the original GE. The company's legal name is still General Electric Company.
== Financial performance ==
=== Dividends ===
General Electric was a longtime "dividend aristocrat" (a company with a long history of maintaining dividend payments to shareholders). The company had not cut its dividend in 119 years until 2017, when it halved the quarterly payout from 24 cents per share to 12 cents per share. In 2018, GE further reduced the quarterly dividend from 12 cents to 1 cent per share.
== Stock ==
As a publicly traded company on the New York Stock Exchange, GE stock was one of the 30 components of the Dow Jones Industrial Average from 1907 to 2018, the longest continuous presence of any company on the index, and during this time the only company that was part of the original Dow Jones Industrial Index created in 1896. In August 2000, the company had a market capitalization of $601 billion, and was the most valuable company in the world. On June 26, 2018, the stock was removed from the index and replaced with Walgreens Boots Alliance. In the years leading to its removal, GE was the worst performing stock in the Dow, falling more than 55 percent year on year and more than 25 percent year to date. The company continued to lose value after being removed from the index.
On July 30, 2021, General Electric announced the completion of a reverse split of GE common stock at a ratio of 1-for-8; the shares began trading on a split-adjusted basis, under a new ISIN (US3696043013), on August 2, 2021.
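Mechanically, a 1-for-8 reverse split consolidates every eight shares into one and scales the per-share price up by the same factor, leaving a holder's total position value unchanged apart from any cash paid out for fractional shares. A minimal sketch, where the share count and $13 pre-split price are hypothetical:

```python
# Toy illustration of a 1-for-8 reverse stock split. The pre-split
# share count and $13 price are hypothetical, not GE's actual figures.

def reverse_split(shares, price, ratio=8):
    """Consolidate shares 1-for-`ratio`; ignores fractional-share cash-out."""
    return shares // ratio, price * ratio

shares, price = reverse_split(800, 13.00)
print(shares, price)  # 100 104.0 -- total value (shares * price) unchanged
```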
== Corporate affairs ==
In 1959, General Electric was accused of promoting the largest illegal cartel in the United States since the adoption of the Sherman Antitrust Act of 1890 in order to maintain artificially high prices. In total, 29 companies and 45 executives were convicted. Subsequent congressional inquiries revealed that "white-collar crime" was by far the most costly form of crime for the United States' finances.
GE was a multinational conglomerate headquartered in Boston, Massachusetts. However, its main offices were located at 30 Rockefeller Plaza at Rockefeller Center in New York City, now known as the Comcast Building. It was formerly known as the GE Building for the prominent GE logo on the roof; NBC's headquarters and main studios are also located in the building. Through its RCA subsidiary, GE had been associated with the center since its construction in the 1930s. GE moved its corporate headquarters from the GE Building on Lexington Avenue to Fairfield, Connecticut, in 1974. In 2016, GE announced a move to the South Boston Waterfront neighborhood of Boston, Massachusetts, partly as a result of an incentive package provided by state and city governments. The first group of workers arrived in the summer of 2016, and the move was completed by 2018. Due to poor financial performance and corporate downsizing, GE sold the land on which it had planned to build its new headquarters, instead choosing to occupy neighboring leased buildings.
GE's tax return is the largest return filed in the United States; the 2005 return was approximately 24,000 pages when printed out, and 237 megabytes when submitted electronically. As of 2011, the company spent more on U.S. lobbying than any other company.
In 2005, GE launched its "Ecomagination" initiative in an attempt to position itself as a "green" company.
GE is one of the biggest players in the wind power industry and is developing environment-friendly products such as hybrid locomotives, desalination and water reuse solutions, and photovoltaic cells. The company "plans to build the largest solar-panel-making factory in the U.S." and has set goals for its subsidiaries to lower their greenhouse gas emissions.
On May 21, 2007, GE announced it would sell its GE Plastics division to petrochemicals manufacturer SABIC for net proceeds of $11.6 billion. The transaction took place on August 31, 2007, and the company name changed to SABIC Innovative Plastics, with Brian Gladden as CEO.
In July 2010, GE agreed to pay $23.4 million to settle an SEC complaint without admitting or denying the allegations that two of its subsidiaries bribed Iraqi government officials to win contracts under the U.N. oil-for-food program between 2002 and 2003.
In February 2017, GE announced that the company intends to close the gender gap by promising to hire and place 20,000 women in technical roles by 2020. The company is also seeking to have a 50:50 male-to-female gender representation in all entry-level technical programs.
In October 2017, GE announced it would be closing research and development centers in Shanghai, Munich and Rio de Janeiro. The company had spent $5 billion on R&D in the previous year.
On February 25, 2019, GE sold its diesel locomotive business to Wabtec.
=== CEO ===
In October 2018, John L. Flannery was replaced as chairman and CEO by H. Lawrence "Larry" Culp Jr., in a unanimous vote of the GE Board of Directors.
Charles A. Coffin (1913–1922)
Owen D. Young (1922–1939, 1942–1945)
Philip D. Reed (1940–1942, 1945–1958)
Ralph J. Cordiner (1958–1963)
Gerald L. Phillippe (1963–1972)
Fred J. Borch (1967–1972)
Reginald H. Jones (1972–1981)
Jack Welch (1981–2001)
Jeff Immelt (2001–2017)
John L. Flannery (2017–2018)
H. Lawrence Culp Jr. (2018–2024)
=== Corporate recognition and rankings ===
In 2011, Fortune ranked GE the sixth-largest firm in the U.S., and the 14th-most profitable. Other rankings for 2011–2012 include the following:
#18 company for leaders (Fortune)
#82 green company (Newsweek)
#91 most admired company (Fortune)
#19 most innovative company (Fast Company).
In 2012, GE's brand was valued at $28.8 billion. In 2004, after taking the reins as chairman, CEO Jeff Immelt commissioned a set of changes to the presentation of the brand to unify GE's diversified businesses.
Tom Geismar later observed that the company's logos of the 1910s, 1920s, and 1930s now look plainly old-fashioned, though they were good for their time. Chermayeff & Geismar, with colleagues Bill Brown and Ivan Chermayeff, created the modern 1980 logo. The 2004 changes included a new corporate color palette, small modifications to the GE logo, a new customized font (GE Inspira) and a new slogan, "Imagination at work", composed by David Lucas, to replace the slogan "We Bring Good Things to Life" used since 1979. The standard requires many headlines to be lowercased and adds visual "white space" to documents and advertising. The changes were designed by Wolff Olins and are used on GE's marketing, literature, and website. In 2014, a second typeface family was introduced: GE Sans and Serif by Bold Monday, created under art direction by Wolff Olins.
As of 2016, GE had appeared on the Fortune 500 list for 22 years and held the 11th rank. GE was removed from the Dow Jones Industrial Average on June 26, 2018, after its value had dropped below 1% of the index's weight.
== Businesses ==
GE's primary business divisions are:
GE Additive
GE Aerospace
GE Capital
GE Digital
GE Healthcare
GE Power
GE Renewable Energy
GE Research
Through these businesses, GE participates in markets that include the generation, transmission and distribution of electricity (e.g. nuclear, gas and solar), industrial automation, medical imaging equipment, motors, aircraft jet engines, and aviation services. Through GE Commercial Finance, GE Consumer Finance, GE Equipment Services, and GE Insurance, it offers a range of financial services. It has a presence in over 100 countries.
General Imaging manufactures GE digital cameras.
Even though the first wave of conglomerates (such as ITT Corporation, Ling-Temco-Vought, Tenneco, etc.) fell by the wayside by the mid-1980s, in the late 1990s, another wave (consisting of Westinghouse, Tyco, and others) tried and failed to emulate GE's success.
As of August 2015, GE was planning to set up a silicon carbide chip packaging R&D center in coalition with SUNY Polytechnic Institute in Utica, New York. The project will create 470 jobs with the potential to grow to 820 jobs within 10 years.
On September 14, 2015, GE announced the creation of a new unit: GE Digital, which will bring together its software and IT capabilities. The new business unit will be headed by Bill Ruh, who joined GE in 2011 from Cisco Systems and has since worked on GE's software efforts.
Morgan Stanley sold a stake in GE HealthCare Technologies for $1.1 billion as part of a deal to swap General Electric Co. debt for GE HealthCare stock.
=== Former divisions ===
GE Industrial was a division providing appliances, lighting, and industrial products; factory automation systems; plastics, silicones, and quartz products; security and sensors technology; and equipment financing, management, and operating services. As of 2007, it had 70,000 employees, generating $17.7 billion in revenue. After some major realignments in late 2007, GE Industrial was organized in two main sub businesses:
GE Consumer & Industrial
Appliances
Electrical Distribution
Lighting
GE Enterprise Solutions
Digital Energy
GE Fanuc Intelligent Platforms
Security
Sensing & Inspection Technologies
The former GE Plastics division was sold in August 2007 and is now SABIC Innovative Plastics.
On May 4, 2008, it was announced that GE would auction off its appliances business for an expected sale of $5–8 billion. However, this plan fell through as a result of the recession.
The former GE Appliances and Lighting segment was dissolved in 2014, when GE agreed to sell its appliance division to Electrolux for $5.4 billion; after an antitrust suit was filed against Electrolux, GE instead sold the division to Haier in June 2016. GE Lighting (consumer lighting) and the newly created Current, powered by GE, which deals in commercial LED, solar, EV, and energy storage, became stand-alone businesses within the company until the latter was sold to American Industrial Partners in April 2019.
The former GE Transportation division merged with Wabtec on February 25, 2019, leaving GE with a 24.9% holding in Wabtec.
On July 1, 2020, GE Lighting was acquired by Savant Systems and remains headquartered at Nela Park in East Cleveland, Ohio.
== Environmental record ==
=== Carbon footprint ===
General Electric Company reported total CO2e emissions (direct plus indirect) of 2,080 kilotonnes for the twelve months ending 31 December 2020, a decrease of 310 kilotonnes (13%) year over year. Reported emissions have declined consistently since 2016.
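The reported year-over-year change can be checked with quick arithmetic (a sketch using only the figures in the sentence above):

```python
emissions_2020_kt = 2080   # total CO2e for 2020, kilotonnes
change_kt = -310           # reported year-over-year change, kilotonnes

# The implied prior-year figure and the percentage decline:
emissions_2019_kt = emissions_2020_kt - change_kt     # 2390 Kt
pct_change = 100 * change_kt / emissions_2019_kt      # about -13%

assert round(pct_change) == -13  # consistent with the reported -13% y-o-y
```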
=== Pollution ===
Some of GE's activities have given rise to large-scale air and water pollution. Based on data from 2000, researchers at the Political Economy Research Institute listed the corporation as the fourth-largest corporate producer of air pollution in the United States (behind only E. I. Du Pont de Nemours & Co., United States Steel Corp., and ConocoPhillips), with more than 4.4 million pounds per year (2,000 tons) of toxic chemicals released into the air. GE has also been implicated in the creation of toxic waste. According to United States Environmental Protection Agency (EPA) documents, only the United States Government, Honeywell, and Chevron Corporation are responsible for producing more Superfund toxic waste sites.
In 1983, New York State Attorney General Robert Abrams filed suit in the United States District Court for the Northern District of New York to compel GE to pay for the clean-up of what was claimed to be more than 100,000 tons of chemicals dumped from their plant in Waterford, New York, which polluted nearby groundwater and the Hudson River. In 1999, the company agreed to pay a $250 million settlement in connection with claims it polluted the Housatonic River (at Pittsfield, Massachusetts) and other sites with polychlorinated biphenyls (PCBs) and other hazardous substances.
In 2003, acting on concerns that the plan proposed by GE did not "provide for adequate protection of public health and the environment," EPA issued an administrative order for the company to "address cleanup at the GE site" in Rome, Georgia, also contaminated with PCBs.
The nuclear reactors involved in the 2011 crisis at Fukushima I in Japan were GE designs, and the architectural designs were done by Ebasco, formerly owned by GE. Concerns over the design and safety of these reactors were raised as early as 1972, but tsunami danger was not discussed at that time. As of 2014, nuclear reactors of the same GE design are operating in the US; however, as of May 31, 2019, the controversial Pilgrim Nuclear Generating Station, in Plymouth, Massachusetts, has been shut down and is in the process of decommissioning.
==== Pollution of the Hudson River ====
GE heavily contaminated the Hudson River with PCBs between 1947 and 1977. This pollution caused a range of harmful effects to wildlife and people who eat fish from the river. In 1983 EPA declared a 200-mile (320 km) stretch of the river, from Hudson Falls to New York City, to be a Superfund site requiring cleanup. This Superfund site is considered to be one of the largest in the nation. In addition to receiving extensive fines, GE is continuing its sediment removal operations, pursuant to the Superfund orders, in the 21st century.
==== Pollution of the Housatonic River ====
From c. 1932 until 1977, GE polluted the Housatonic River with PCB discharges from its plant at Pittsfield, Massachusetts. EPA designated the Pittsfield plant and several miles of the Housatonic to be a Superfund site in 1997, and ordered GE to remediate the site. Aroclor 1254 and Aroclor 1260, products manufactured by Monsanto, were the principal contaminants discharged into the river. The highest concentrations of PCBs in the Housatonic River are found in Woods Pond in Lenox, Massachusetts, just south of Pittsfield, where they have been measured at up to 110 mg/kg in the sediment. About 50% of all the PCBs currently in the river are estimated to be retained in the sediment behind Woods Pond dam, amounting to about 11,000 pounds (5,000 kg) of PCBs. Formerly filled oxbows are also polluted. Waterfowl and fish that live in and around the river contain significant levels of PCBs and can present health risks if consumed. In 2020, GE completed remediation and restoration of its 10 manufacturing plant areas within the city of Pittsfield. As of 2023, plans for cleanup of the river south of the city are not finalized.
== Social responsibility ==
=== Environmental initiatives ===
GE's environmental work and research can be traced as early as 1968 to the experimental Delta electric car built by the GE Research and Development Center under Bruce Laumeister. The electric car led shortly afterward to the first commercially produced all-electric garden tractor, the Elec-Trak, which was manufactured from around 1969 until 1975.
On June 6, 2011, GE announced that it had licensed solar thermal technology from California-based eSolar for use in power plants that use both solar and natural gas.
On May 26, 2011, GE unveiled its EV Solar Carport, a carport that incorporates solar panels on its roof, with electric vehicle charging stations under its cover.
In May 2005, GE announced the launch of a program called "Ecomagination", intended, in the words of CEO Jeff Immelt, "to develop tomorrow's solutions such as solar energy, hybrid locomotives, fuel cells, lower-emission aircraft engines, lighter and stronger durable materials, efficient lighting, and water purification technology". The announcement prompted an op-ed piece in The New York Times to observe that, "while General Electric's increased emphasis on clean technology will probably result in improved products and benefit its bottom line, Mr. Immelt's credibility as a spokesman on national environmental policy is fatally flawed because of his company's intransigence in cleaning up its own toxic legacy."
GE has said that it will invest $1.4 billion in clean technology research and development in 2008 as part of its Ecomagination initiative. As of October 2008, the scheme had resulted in 70 green products being brought to market, ranging from halogen lamps to biogas engines. In 2007, GE raised the annual revenue target for its Ecomagination initiative from $20 billion in 2010 to $25 billion following positive market response to its new product lines. In 2010, GE continued to raise its investment by adding $10 billion into Ecomagination over the next five years.
GE Energy's renewable energy business has expanded greatly to keep up with growing U.S. and global demand for clean energy. Since entering the renewable energy industry in 2002, GE has invested more than $850 million in renewable energy commercialization. In August 2008, it acquired Kelman Ltd, a Northern Ireland-based company specializing in advanced monitoring and diagnostics technologies for transformers used in renewable energy generation, and announced an expansion of its business in Northern Ireland in May 2010. In 2009, GE's renewable energy initiatives, which include solar power, wind power and GE Jenbacher gas engines using renewable and non-renewable methane-based gases, employed more than 4,900 people globally and had created more than 10,000 supporting jobs.
GE Energy and Orion New Zealand (Orion) have announced the implementation of the first phase of a GE network management system to help improve power reliability for customers. GE's ENMAC Distribution Management System is the foundation of Orion's initiative. The system of smart grid technologies will significantly improve the network company's ability to manage big network emergencies and help it restore power faster when outages occur.
In June 2018, GE Volunteers, an internal group of GE employees, along with the Malaysian Nature Society, transplanted more than 270 plants from the Taman Tugu forest reserve so that they may be replanted in a forest trail that is under construction.
=== Educational initiatives ===
GE Healthcare is collaborating with the Wayne State University School of Medicine and the Medical University of South Carolina to offer an integrated radiology curriculum during their respective MD Programs led by investigators of the Advanced Diagnostic Ultrasound in Microgravity study. GE has donated over one million dollars of Logiq E Ultrasound equipment to these two institutions.
=== Marketing initiatives ===
Between September 2011 and April 2013, GE ran a content marketing campaign dedicated to telling the stories of "innovators—people who are reshaping the world through act or invention." The initiative included 30 3-minute films from leading documentary film directors (Albert Maysles, Jessica Yu, Leslie Iwerks, Steve James, Alex Gibney, Lixin Fan, Gary Hustwit and others), and a user-generated competition that received over 600 submissions, out of which 20 finalists were chosen.
Short Films, Big Ideas was launched at the 2011 Toronto International Film Festival in partnership with cinelan. Stories included breakthroughs in Slingshot (water vapor distillation system), cancer research, energy production, pain management, and food access. Each of the 30 films received world premiere screenings at a major international film festival, including the Sundance Film Festival and the Tribeca Film Festival. The winning amateur director film, The Cyborg Foundation, was awarded a US$100,000 prize at the 2013 Sundance Film Festival. According to GE, the campaign garnered more than 1.5 billion total media impressions, 14 million online views, and was seen in 156 countries.
In January 2017, GE signed an estimated $7 million deal with the Boston Celtics to have its corporate logo put on the NBA team's jersey.
=== Charity ===
On March 3, 2022, GE published an international memo pledging to donate $4.5 million to Ukraine amid the Russian invasion. According to the memo, $4 million will be used for medical equipment, $400,000 for emergency cash for refugees, and $100,000 will go to Airlink, an NGO that helps communities in crisis.
== Political affiliation ==
In the 1950s, GE sponsored Ronald Reagan's TV career and launched him on the lecture circuit. GE has also designed social programs, supported civil rights organizations, and funded minority education programs.
== Notable appearances in media ==
In the early 1950s, Kurt Vonnegut was a writer for GE. A number of his novels and stories (notably Cat's Cradle and Player Piano) refer to the fictional city of Ilium, which appears to be loosely based on Schenectady, New York. The Ilium Works is the setting for the short story "Deer in the Works".
In 1981, GE won a Clio award for its 30 Soft White Light Bulbs commercial, We Bring Good Things to Life. The slogan "We Bring Good Things to Life" was created by Phil Dusenberry at the ad agency BBDO.
GE was the primary focus of a 1991 short subject Academy Award-winning documentary, Deadly Deception: General Electric, Nuclear Weapons, and Our Environment, that juxtaposed GE's "We Bring Good Things To Life" commercials with the true stories of workers and neighbors whose lives have been affected by the company's activities involving nuclear weapons.
GE was frequently mentioned and parodied in the NBC comedy sitcom 30 Rock from 2006 to 2013. Former General Electric CEO Jack Welch even cameoed as himself, appearing in the season four episode "Future Husband". The episode is a satirical reference to the real-world acquisition of NBC Universal from General Electric by Comcast in November 2009.
In 2013, GE received a National Jefferson Award for Outstanding Service by a Major Corporation.
== Branding ==
The General Electric logo has a blue circle with a white outline. It has four white lines which "suggest the blades of a midcentury tabletop fan." In the center of the circle are the letters "GE." Its design has changed little throughout the company's history. The logo is officially known as the Monogram but is also known by some as "the meatball."
== See also ==
GE Technology Infrastructure
Knolls Atomic Power Laboratory
List of assets owned by General Electric
Phoebus cartel
Top 100 US Federal Contractors
== Notes ==
== References ==
== Further reading ==
Carlson, W. Bernard (1991). Innovation as a Social Process: Elihu Thomson and the Rise of General Electric. Cambridge University Press. ISBN 0-521-39317-5.
Woodbury, David O. Elihu Thomson, Beloved Scientist (Boston: Museum of Science, 1944)
Haney, John L. The Elihu Thomson Collection American Philosophical Society Yearbook 1944.
Hammond, John W. Men and Volts: The Story of General Electric, published 1941, 436 pages.
Mill, John M. Men and Volts at War: The Story of General Electric in World War II, published 1947.
Irmer, Thomas. Gerard Swope. In Immigrant Entrepreneurship: German-American Business Biographies, 1720 to the Present, vol. 4, edited by Jeffrey Fear. German Historical Institute.
== External links ==
Official website
Business data for General Electric: |
George Cayley | Sir George Cayley, 6th Baronet (27 December 1773 – 15 December 1857) was an English engineer, inventor, and aviator. One of the most important figures in the history of aeronautics, he is widely considered the first true scientific aerial investigator, the first person to understand the underlying principles and forces of flight, and the creator of the wire wheel.
In 1799, he set forth the concept of the modern aeroplane as a fixed-wing flying machine with separate systems for lift, propulsion, and control.
He was a pioneer of aeronautical engineering and is sometimes referred to as "the father of aviation." He identified the four forces which act on a heavier-than-air flying vehicle: weight, lift, drag and thrust. Modern aeroplane design is based on those discoveries and on the importance of cambered wings, also proposed by Cayley. He constructed the first flying model aeroplane and also diagrammed the elements of vertical flight.
He also designed the first glider reliably reported to carry a human aloft. He correctly predicted that sustained flight would not occur until a lightweight engine was developed to provide adequate thrust and lift. The Wright brothers acknowledged his importance to the development of aviation.
Cayley represented the Whig party as Member of Parliament for Scarborough from 1832 to 1835, and in 1838, helped found the UK's first Polytechnic Institute, the Royal Polytechnic Institution (now University of Westminster) and served as its chairman for many years. He was elected as a Vice-President of the Yorkshire Philosophical Society in 1824. He was a founding member of the British Association for the Advancement of Science and was a distant cousin of the mathematician Arthur Cayley.
== General engineering projects ==
Cayley, from Brompton-by-Sawdon, near Scarborough in Yorkshire, inherited Brompton Hall and Wydale Hall and other estates on the death of his father, the 5th baronet. Captured by the optimism of the times, he engaged in a wide variety of engineering projects. Among the many things that he developed are self-righting lifeboats, tension-spoke wheels,
the "Universal Railway" (his term for caterpillar tractors), automatic signals for railway crossings, seat belts, small scale helicopters, and a kind of prototypical internal combustion engine fuelled by gunpowder (Gunpowder engine). He suggested that a more practical engine might be made using gaseous vapours rather than gunpowder, thus foreseeing the modern internal combustion engine. He also contributed in the fields of prosthetics, air engines, electricity, theatre architecture, ballistics, optics and land reclamation, and held the belief that these advancements should be freely available.
According to the Institution of Mechanical Engineers, George Cayley was the inventor of the hot air engine in 1807: "The first successfully working hot air engine was Cayley's, in which much ingenuity was displayed in overcoming practical difficulties arising from the high working temperature." His second hot air engine of 1837 was a forerunner of the internal combustion engine: "In 1837, Sir George Cayley, Bart., Assoc. Inst. C.E., applied the products of combustion from closed furnaces, so that they should act directly upon a piston in a cylinder. Plate No. 9 represents a pair of engines upon this principle, together equal to 8 HP, when the piston travels at the rate of 220 feet per minute."
== Flying machines ==
Cayley is mainly remembered for his pioneering studies and experiments with flying machines, including the working, piloted glider that he designed and built. He wrote a landmark three-part treatise titled "On Aerial Navigation" (1809–1810), which was published in Nicholson's Journal of Natural Philosophy, Chemistry and the Arts. The 2007 discovery of sketches in Cayley's school notebooks (held in the archive of the Royal Aeronautical Society Library) revealed that even at school Cayley was developing his ideas on the theories of flight. It has been claimed that these images indicate that Cayley identified the principle of a lift-generating inclined plane as early as 1792. To measure the drag on objects at different speeds and angles of attack, he later built a "whirling-arm apparatus", a development of earlier work in ballistics and air resistance. He also experimented with rotating wing sections of various forms in the stairwells at Brompton Hall.
These scientific experiments led him to develop an efficient cambered airfoil and to identify the four vector forces that influence an aircraft: thrust, lift, drag, and weight. He discovered the importance of the dihedral angle for lateral stability in flight, and deliberately set the centre of gravity of many of his models well below the wings for this reason; these principles influenced the development of hang gliders. As a result of his investigations into many other theoretical aspects of flight, many now acknowledge him as the first aeronautical engineer. His emphasis on lightness led him to invent a new method of constructing lightweight wheels which is in common use today. For his landing wheels, he shifted the spoke's forces from compression to tension by making them from tightly-stretched string, in effect "reinventing the wheel". Wire soon replaced the string in practical applications and over time the wire wheel came into common use on bicycles, cars, aeroplanes and many other vehicles.
The model glider successfully flown by Cayley in 1804 had the layout of a modern aircraft, with a kite-shaped wing towards the front and an adjustable tailplane at the back consisting of horizontal stabilisers and a vertical fin. A movable weight allowed adjustment of the model's centre of gravity. In 1843 he was the first to suggest the idea of a convertiplane. At some time before 1849 he designed and built a biplane in which an unknown ten-year-old boy flew. Later, with the continued assistance of his grandson George John Cayley and his resident engineer Thomas Vick, he developed a larger scale glider (also probably fitted with "flappers") which flew across Brompton Dale in front of Wydale Hall in 1853. The first adult aviator has been claimed to be either Cayley's coachman, footman or butler. One source (Gibbs-Smith) has suggested that it was John Appleby, a Cayley employee; however, there is no definitive evidence to fully identify the pilot. An entry in volume IX of the 8th Encyclopædia Britannica of 1855 is the most contemporaneous authoritative account regarding the event. A 2007 biography of Cayley (Richard Dee's The Man Who Discovered Flight: George Cayley and the First Airplane) claims the first pilot was Cayley's grandson George John Cayley (1826–1878).
A replica of the 1853 machine was flown at the original site in Brompton Dale by Derek Piggott in 1973 for TV and in the mid-1980s for the IMAX film On the Wing. The glider is currently on display at the Yorkshire Air Museum.
A second replica of the Cayley Glider was built in 2003 by a team from BAE Systems to commemorate the 150th anniversary of the original flight. Built using modern materials and techniques, the craft was test flown by Alan McWhirter at RAF Pocklington, before being flown by Sir Richard Branson on 5 July 2003 at Brompton Dale, the site of the original glider's flight. Virgin Atlantic sponsored construction of the replica glider. In 2005, the replica glider was transported and rebuilt in Salina, Kansas, as part of the ground show for the return of the 'round-the-world' Virgin Atlantic GlobalFlyer flight, with the glider being towed by a vehicle along the runway in front of the gathered crowds. Returning to the UK, the replica glider was flown once more for a segment on The One Show. Again towed by a vehicle, the glider undertook its longest and highest flights during the filming and was flown by Dave Holborn. Placed into storage at BAE Systems' Farnborough site, it was donated to the South Yorkshire Aircraft Museum in 2021 and is now on display.
== Memorial ==
Cayley died in 1857 and was buried in the graveyard of All Saints' Church in Brompton-by-Sawdon.
He is commemorated in Scarborough at the University of Hull, Scarborough Campus, where a hall of residence and a teaching building are named after him. He is one of many scientists and engineers commemorated by having a hall of residence and a bar at Loughborough University named after him. The University of Westminster also honours Cayley's contribution to the formation of the institution with a gold plaque at the entrance of the Regent Street building.
There are display boards and a video film at the Royal Air Force Museum London in Hendon honouring Cayley's achievements and a modern exhibition and film "Pioneers of Aviation" at the Yorkshire Air Museum, Elvington, York.
The Sir George Cayley Sailwing Club is a North Yorkshire-based free flight club, affiliated to the British Hang Gliding and Paragliding Association, which has borne his name since its founding in 1975.
In 1974, Cayley was inducted into the International Air & Space Hall of Fame.
== Family ==
On 3 July 1795 Cayley married Sarah Walker, daughter of his first tutor George Walker. (J W Clay's expanded edition of Dugdale's Visitation of Yorkshire incorrectly gives the date as 9 July 1795, as does George Cayley's entry in the Oxford Dictionary of National Biography.) They had ten children, of whom three died young. Sarah died on 8 December 1854.
== See also ==
Early flying machines
Matthew Piers Watt Boulton
William Samuel Henson
Timeline of aviation – 18th century
Timeline of aviation – 19th century
Kite types
== Notes ==
== References ==
Gibbs-Smith, Charles H. Notes and Records of the Royal Society of London, Vol. 17, No. 1 (May 1962), pp. 36–56
Gibbs-Smith, C.H. Aviation. London, NMSO, 2002
Gerard Fairlie and Elizabeth Cayley, The Life of a Genius, Hodder and Stoughton, 1965.
== External links ==
Hansard 1803–2005: contributions in Parliament by Sir George Cayley
Cayley's principles of flight, models and gliders
Cayley's gliders
Some pioneers of air engine design
Sir George Cayley – Making Aviation Practical
Sir George Cayley
"Sir George Cayley – The Man: His Work" a 1954 Flight article
"Aerodynamics in 1804" a 1954 Flight art
"Cayley's 1853 Aeroplane" a 1973 Flight article
Ackroyd, J.A.D. "Sir George Cayley, the father of Aeronautics". Notes and Records of the Royal Society of London 56 (2002) Part 1 (2), pp167–181, Part 2 (3), pp333–348
Cayley's Flying Machines |
Glider (aircraft) | A glider is a fixed-wing aircraft that is supported in flight by the dynamic reaction of the air against its lifting surfaces, and whose free flight does not depend on an engine. Most gliders do not have an engine, although motor gliders have small engines for extending their flight when necessary by sustaining altitude (a sailplane normally relies on rising air to maintain altitude); some are powerful enough to take off by self-launching.
There are a wide variety of types differing in the construction of their wings, aerodynamic efficiency, location of the pilot, controls and intended purpose. Most exploit meteorological phenomena to maintain or gain height. Gliders are principally used for the air sports of gliding, hang gliding and paragliding. However, some spacecraft have been designed to descend as gliders, and in the past military gliders have been used in warfare. Some simple and familiar types of glider are toys such as paper planes and balsa wood gliders.
== Etymology ==
Glider is the agent noun form of the verb to glide. It derives from Middle English gliden, which in turn derived from Old English glīdan. The oldest meaning of glide may have denoted a precipitous running or jumping, as opposed to a smooth motion. Scholars are uncertain as to its original derivation, with possible connections to "slide", and "light" having been advanced.
== History ==
Early pre-modern accounts of flight are in most cases difficult to verify and it is unclear whether each craft was a glider, kite or parachute and to what degree they were truly controllable. Often the event is only recorded a long time after it allegedly took place. A 17th-century account reports an attempt at flight by the 9th-century poet Abbas Ibn Firnas near Córdoba, Spain which ended in heavy back injuries. The monk Eilmer of Malmesbury is reported by William of Malmesbury (c. 1080 – c. 1143), a fellow monk and historian, to have flown off the roof of his Abbey in Malmesbury, England, sometime between 1000 and 1010 AD, gliding about 200 metres (220 yd) before crashing and breaking his legs. According to these reports, both used a set of (feathery) wings, and both blamed their crash on the lack of a tail. Hezârfen Ahmed Çelebi is alleged to have flown a glider with eagle-like wings over the Bosphorus strait from the Galata Tower to Üsküdar district in Istanbul around 1630–1632.
=== 19th century ===
The first heavier-than-air (i.e. non-balloon) man-carrying aircraft based on published scientific principles were Sir George Cayley's series of gliders, which achieved brief wing-borne hops from around 1849. Thereafter gliders were built by pioneers such as Jean Marie Le Bris, John J. Montgomery, Otto Lilienthal, Percy Pilcher, Octave Chanute and Augustus Moore Herring to develop aviation. Lilienthal was the first to make repeated successful flights (eventually totaling over 2,000) and the first to use rising air to prolong his flight. In 1905, flying a Montgomery tandem-wing glider released from a balloon at 4,000 feet, Daniel Maloney made the first demonstration of high-altitude controlled flight.
The Wright Brothers developed a series of three manned gliders after preliminary tests with a kite as they worked towards achieving powered flight. They returned to glider testing in 1911 by removing the motor from one of their later designs.
=== Development ===
In the inter-war years, recreational gliding flourished in Germany under the auspices of the Rhön-Rossitten Gesellschaft. In the United States, the Schweizer brothers of Elmira, New York, manufactured sport sailplanes to meet the new demand. Sailplanes continued to evolve in the 1930s, and sport gliding became the main application of gliders. As their performance improved, gliders began to be used to fly cross-country and now regularly cover hundreds or even more than a thousand kilometers in a day, if the weather is suitable.
Military gliders were developed during World War II by a number of countries for landing troops. A glider – the Colditz Cock – was even built secretly by POWs as a potential escape method at Oflag IV-C near the end of the war in 1944.
=== Development of flexible-wing hang gliders ===
Foot-launched aircraft had been flown by Lilienthal and at the meetings at Wasserkuppe in the 1920s. However, the innovation that led to modern hang gliders came in 1951, when Francis Rogallo and Gertrude Rogallo applied for a patent for a fully flexible wing with a stiffening structure. The American space agency NASA began testing various flexible and semi-rigid configurations of this Rogallo wing in 1957 in order to use it as a recovery system for the Gemini space capsules. Charles Richards and Paul Bikle developed the concept, producing a wing that was simple to build and capable of slow flight and a gentle landing. Between 1960 and 1962 Barry Hill Palmer used this concept to make foot-launched hang gliders, followed in 1963 by Mike Burns, who built a kite-hang glider called Skiplane. In 1963, John W. Dickenson began commercial production.
=== Development of paragliders ===
On January 10, 1963, the American Domina Jalbert filed US Patent 3,131,894 for the Parafoil, which had sectioned cells in an aerofoil shape: an open leading edge and a closed trailing edge, inflated by passage through the air – the ram-air design. David Barish developed his 'Sail Wing' further for the recovery of NASA space capsules, testing it using ridge lift. After tests on Hunter Mountain, New York in September 1965, he went on to promote 'slope soaring' as a summer activity for ski resorts (apparently without great success). NASA originated the term 'paraglider' in the early 1960s, and 'paragliding' was first used in the early 1970s to describe the foot-launching of gliding parachutes. Although their use is mainly recreational, unmanned paragliders have also been built for military applications, e.g. the Atair Insect.
== Recreational types ==
The main application today of glider aircraft is sport and recreation.
=== Sailplane ===
Gliders were developed from the 1920s for recreational purposes. As pilots began to understand how to use rising air, gliders were developed with a high lift-to-drag ratio. These allowed longer glides to the next source of 'lift', increasing the chances of flying long distances. This gave rise to the popular sport known as gliding, although the term can also refer to merely descending flight. Such gliders designed for soaring are sometimes called sailplanes.
Gliders were once mainly built of wood and metal, but the majority are now made of composite materials using glass, carbon and aramid fibres. To minimise drag, these types have a slender fuselage and long, narrow wings, i.e. a high aspect ratio. Early sailplanes varied widely in appearance, but as technology and materials developed, the pursuit of the best balance between lift-to-drag ratio, climb rate and cruising speed led engineers at producers around the world to converge on similar designs. Both single-seat and two-seat gliders are available.
Initially, training was done by short 'hops' in primary gliders, which are very basic aircraft with no cockpit and minimal instruments. Since shortly after World War II, training has been done in two-seat dual-control gliders, though high-performance two-seaters are also used to share the workload and the enjoyment of long flights. Originally skids were used for landing, but the majority now land on wheels, often retractable. Some gliders, known as motor gliders, are designed for unpowered flight but can deploy piston, rotary, jet or electric engines. Gliders are classified by the FAI for competitions into glider competition classes, mainly on the basis of span and flaps.
A class of ultralight sailplanes, including some known as microlift gliders and some as 'airchairs', has been defined by the FAI based on a maximum weight. They are light enough to be transported easily, and can be flown without licensing in some countries. Ultralight gliders have performance similar to hang gliders, but offer some additional crash safety as the pilot can be strapped in an upright seat within a deformable structure. Landing is usually on one or two wheels which distinguishes these craft from hang gliders. Several commercial ultralight gliders have come and gone, but most current development is done by individual designers and home builders.
=== Hang gliders ===
Unlike a sailplane, a hang glider is capable of being carried, foot launched and landed solely by the use of the pilot's legs.
In the original and still most common designs, Class 1, the pilot is suspended from the center of the flexible wing and controls the aircraft by shifting their weight.
Class 2 hang gliders (designated by the FAI as Sub-Class O-2) have a rigid primary structure with movable aerodynamic surfaces, such as spoilers, as the primary method of control. The pilot is often enclosed in a fairing. These offer the best performance and are the most expensive.
Class 4 hang gliders are unable to demonstrate consistent ability to safely take-off and/or land in nil-wind conditions, but otherwise are capable of being launched and landed by the use of the pilot's legs.
Class 5 hang gliders have a rigid primary structure with movable aerodynamic surfaces as the primary method of control and can safely take-off and land in nil-wind conditions. No pilot fairings are permitted.
In a hang glider the shape of the wing is determined by a structure, and it is this that distinguishes them from the other main type of foot-launched aircraft, paragliders, technically Class 3. Some hang gliders have engines, and are known as powered hang gliders. Due to their commonality of parts, construction and design, they are usually considered by aviation authorities to be hang gliders, even though they may use the engine for the entire flight. Some flexible wing powered aircraft, Ultralight trikes, have a wheeled undercarriage, and so are not hang gliders.
=== Paragliders ===
A paraglider is a free-flying, foot-launched aircraft. The pilot sits in a harness suspended below a fabric wing. Unlike a hang glider, whose wing has a frame, a paraglider wing takes its form from the pressure of air entering vents, or cells, in the front of the wing. This is known as a ram-air wing (similar to the smaller parachute design). Their light and simple construction allows paragliders to be packed and carried in large backpacks, making them one of the simplest and most economical modes of flight. Competition-level wings can achieve glide ratios of about 10:1 and fly at speeds of around 45 km/h (28 mph).
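These performance figures imply a sink rate directly: dividing forward speed by glide ratio gives the height lost per unit time. A minimal sketch (Python used purely for illustration; the 45 km/h speed and roughly 10:1 glide ratio are the competition-wing values quoted above, assumed for still, level air):

```python
# Illustrative arithmetic only, not a performance model: the sink rate
# implied by a forward speed and a glide ratio in still air.

def sink_rate_ms(speed_kmh: float, glide_ratio: float) -> float:
    """Sink rate in m/s at a given forward speed (km/h) and glide ratio."""
    speed_ms = speed_kmh / 3.6       # convert km/h to m/s
    return speed_ms / glide_ratio    # 1 m of height lost per `glide_ratio` m forward

# A competition paraglider flying 45 km/h at a 10:1 glide ratio:
print(round(sink_rate_ms(45, 10), 2))   # 1.25 (m/s)
```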
Like sailplanes and hang gliders, paragliders use rising air (thermals or ridge lift) to gain height. This process is the basis for most recreational flights and competitions, though aerobatics and 'spot landing competitions' also occur. Launching is often done by jogging down a slope, but winch launches behind a towing vehicle are also used. A Paramotor is a paraglider wing powered by a motor attached to the back of the pilot, and is also known as a powered paraglider. A variation of this is the paraplane, which has a motor mounted on a wheeled frame rather than the pilot's back.
=== Comparison of gliders, hang gliders and paragliders ===
There can be confusion between gliders, hang gliders, and paragliders. Paragliders and hang gliders are both foot-launched glider aircraft and in both cases the pilot is suspended ("hangs") below the lift surface. "Hang glider" is the term for those where the airframe contains rigid structures, whereas the primary structure of paragliders is supple, consisting mainly of woven material.
== Military gliders ==
Military gliders were used mainly during the Second World War for carrying troops and heavy equipment (see Glider infantry) to a combat zone, including the British Airspeed Horsa, Russian Polikarpov BDP S-1, American Waco CG-3, Japanese Kokusai Ku-8, and German Junkers Ju 322. These aircraft were towed into the air and most of the way to their target by military transport planes, e.g. C-47 Dakota, or by bombers that had been relegated to secondary activities, e.g. Short Stirling. Once released from the tow near the target, they landed as close to the target as possible. Advantages over paratroopers were that heavy equipment could be landed and that the troops were quickly assembled rather than being dispersed over a drop zone. The gliders were treated as disposable leading to construction from common and inexpensive materials such as wood, though a few were retrieved and re-used. By the time of the Korean War, transport aircraft had also become larger and more efficient so that even light tanks could be dropped by parachute, causing gliders to fall out of favor.
== Research aircraft ==
Even after the development of powered aircraft, gliders have been built for research, where the lack of powerplant reduces complexity and construction costs and speeds development, particularly where new and poorly understood aerodynamic ideas are being tested that might require significant airframe changes. Examples have included delta wings, flying wings, lifting bodies and other unconventional lifting surfaces where existing theories were not sufficiently developed to estimate full scale characteristics.
Unpowered flying wings built for aerodynamic research include the Horten flying wings and the scaled glider version of the Armstrong Whitworth A.W.52 jet-powered flying wing.
Lifting bodies were also developed using unpowered prototypes. Although the idea can be dated to Vincent Justus Burnelli in 1921, interest was nearly non-existent until it appeared to be a solution for returning spacecraft. Traditional space capsules have little directional control, while conventionally winged craft cannot handle the stresses of re-entry; a lifting body combines the benefits of both. Lifting bodies generate lift with the fuselage itself, dispensing with the usual thin, flat wing so as to minimize the drag and structural weight of a wing at the very high supersonic or hypersonic speeds experienced during the re-entry of a spacecraft. Examples of the type are the Northrop HL-10 and Martin Marietta X-24.
The NASA Paresev Rogallo flexible wing glider was built to investigate alternative methods of recovering spacecraft. Although this application was abandoned, publicity inspired hobbyists to adapt the flexible wing airfoil for modern hang gliders.
== Rocket gliders ==
Rocket-powered aircraft consume their fuel quickly and so most must land unpowered unless there is another power source. The first rocket plane was the Lippisch Ente, and later examples include the Messerschmitt Me 163 rocket-powered interceptor. The American series of research aircraft, starting with the Bell X-1 in 1946 up to the North American X-15, spent more time flying unpowered than under power. In the 1960s research was also done on unpowered lifting bodies and on the X-20 Dyna-Soar project; although the X-20 was cancelled, this research eventually led to the Space Shuttle.
NASA's Space Shuttle first flew on April 12, 1981. The Shuttle re-entered at Mach 25 at the end of each spaceflight, landing entirely as a glider. The Space Shuttle and its Soviet equivalent, the Buran shuttle, were by far the fastest aircraft ever flown. More recent examples of rocket gliders include the privately funded SpaceShipOne, intended for sub-orbital flight, and the XCOR EZ-Rocket, used to test engines.
== Rotary wing ==
Most unpowered rotary-wing aircraft are kites rather than gliders, i.e. they are usually towed behind a car or boat rather than being capable of free flight. These are known as rotor kites. However, rotary-winged gliders known as 'gyrogliders' were also investigated; these could descend like an autogyro, using the lift from their rotors to reduce the vertical speed. They were evaluated as a method of dropping people or equipment from other aircraft.
== Unmanned gliders ==
=== Paper airplane ===
A paper plane, paper aeroplane (UK), paper airplane (US), paper glider, paper dart or dart is a toy aircraft (usually a glider) made out of paper or paperboard; the practice of constructing paper planes is sometimes referred to as aerogami (Japanese: kamihikōki), after origami, the Japanese art of paper folding.
=== Model gliders ===
Model glider aircraft are flying or non-flying models of existing or imaginary gliders, often scaled-down versions of full size planes, using lightweight materials such as polystyrene, balsa wood, foam and fibreglass. Designs range from simple glider aircraft, to accurate scale models, some of which can be very large.
Larger outdoor models are usually radio-controlled gliders that are piloted remotely from the ground with a transmitter. These can remain airborne for extended periods by using the lift produced by slopes and thermals. These can be winched into wind by a line attached to a hook under the fuselage with a ring, so that the line will drop when the model is overhead. Other methods of launching include towing aloft using a model powered aircraft, catapult-launching using an elastic bungee cord and hand-launching. When hand-launching the newer "discus" style of wing-tip hand-launching has largely supplanted the earlier "javelin" type of launch.
=== Glide bombs ===
A glide bomb is a bomb with aerodynamic surfaces to allow a gliding flightpath rather than a ballistic one. This allows the bomber aircraft to stand off from the target and launch the bomb from a safe distance. Most types have a remote control system which enables the aircraft to direct the bomb accurately to the target. Glide bombs were developed in Germany from as early as 1915. In World War II they were most successful as anti-shipping weapons. Some air forces today are equipped with gliding devices that can remotely attack airbases with a cluster bomb warhead.
== See also ==
Flight
Gliding flight
Radio-controlled glider
Unpowered aircraft
Underwater glider
Boomerang
== References == |
Glider (sailplane) | A glider or sailplane is a type of glider aircraft used in the leisure activity and sport of gliding (also called soaring). This unpowered aircraft can use naturally occurring currents of rising air in the atmosphere to gain altitude. Sailplanes are aerodynamically streamlined and so can fly a significant distance forward for a small decrease in altitude.
In North America the term 'sailplane' is also used to describe this type of aircraft. In other parts of the English-speaking world, the word 'glider' is more common.
== Types ==
Gliders benefit from producing very low drag for any given amount of lift, and this is best achieved with long, thin wings, a slender fuselage and smooth surfaces with an absence of protuberances. Aircraft with these features are able to soar – climb efficiently in rising air produced by thermals or hills. In still air, sailplanes can glide long distances at high speed with a minimum loss of height in between.
Sailplanes have rigid wings and either skids or undercarriage. In contrast hang gliders and paragliders use the pilot's feet for the start of the launch and for the landing. These latter types are described in separate articles, though their differences from sailplanes are covered below. Sailplanes are usually launched by winch or aerotow, though other methods, auto tow and bungee, are occasionally used.
These days almost all gliders are sailplanes, but in the past many gliders were not. These types did not soar: they were simply engine-less aircraft towed by another aircraft to a desired destination and then cast off for landing. The prime examples of non-soaring gliders were military gliders, such as those used in the Second World War. They were often used just once and then abandoned after landing, having served their purpose.
Motor gliders are gliders with engines which can be used for extending a flight and even, in some cases, for take-off. Some high-performance motor gliders (known as "self-sustaining" gliders) may have an engine-driven retractable propeller which can be used to sustain flight. Other motor gliders have enough thrust to launch themselves before the propeller is retracted and are known as "self-launching" gliders. Another type is the self-launching "touring motor glider", where the pilot can switch the engine on and off in flight without retracting the propeller.
== History ==
Sir George Cayley's gliders achieved brief wing-borne hops from around 1849. In the 1890s, Otto Lilienthal built gliders using weight shift for control. In the early 1900s, the Wright Brothers built gliders using movable surfaces for control. In 1903, they successfully added an engine.
After World War I gliders were first built for sporting purposes in Germany. Germany's strong links to gliding were to a large degree due to post-World War I regulations forbidding the construction and flight of motorised planes in Germany, so the country's aircraft enthusiasts often turned to gliders and were actively encouraged by the German government, particularly at flying sites suited to gliding flight like the Wasserkuppe.
The sporting use of gliders rapidly evolved in the 1930s and is now their main application. As their performance improved, gliders began to be used for cross-country flying and now regularly fly hundreds or even thousands of kilometres in a day if the weather is suitable.
== Design ==
Early gliders had no cockpit and the pilot sat on a small seat located just ahead of the wing. These were known as "primary gliders" and they were usually launched from the tops of hills, though they were also capable of short hops across the ground while being towed behind a vehicle. To enable gliders to soar more effectively than primary gliders, the designs minimized drag. Gliders now have very smooth, narrow fuselages and very long, narrow wings with a high aspect ratio and winglets.
The early gliders were made mainly of wood with metal fastenings, stays and control cables. Later, fuselages made of fabric-covered steel tube were married to wood-and-fabric wings for lightness and strength. New materials such as carbon fiber, glass fiber and Kevlar have since been used with computer-aided design to increase performance. The first glider to use glass fiber extensively was the Akaflieg Stuttgart FS-24 Phönix, which first flew in 1957. This material is still used because of its high strength-to-weight ratio and its ability to give a smooth exterior finish that reduces drag. Drag has also been minimized by more aerodynamic shapes and retractable undercarriages. Flaps are fitted to the trailing edges of the wings on some gliders to optimise lift and drag over a wide range of speeds.
With each generation of materials and with the improvements in aerodynamics, the performance of gliders has increased. One measure of performance is the glide ratio. A ratio of 30:1 means that in smooth air a glider can travel forward 30 meters while losing only 1 meter of altitude. Comparing some typical gliders that might be found in the fleet of a gliding club – the Grunau Baby from the 1930s had a glide ratio of just 17:1, the glass-fiber Libelle of the 1960s increased that to 36:1, and modern flapped 18 meter gliders such as the ASG29 have a glide ratio of over 50:1. The largest open-class glider, the Eta, has a span of 30.9 meters and has a glide ratio over 70:1. Compare this to the Gimli Glider, a Boeing 767 which ran out of fuel mid-flight and was found to have a glide ratio of 12:1, or to the Space Shuttle with a glide ratio of 4.5:1.
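The glide ratios above translate directly into still-air range: multiply the ratio by the height available. A quick illustrative calculation using the figures from this paragraph (the 1,000 m release height is an arbitrary assumption, and real flights use rising air to extend these distances):

```python
# Illustrative arithmetic only: the still-air range implied by a glide ratio,
# from an assumed 1,000 m release height. Ratios are those quoted in the text.

def still_air_range_km(glide_ratio: float, height_m: float) -> float:
    """Distance in km covered in smooth air while descending `height_m` metres."""
    return glide_ratio * height_m / 1000.0

for name, ratio in [("Grunau Baby", 17), ("Libelle", 36),
                    ("ASG29", 50), ("Eta", 70)]:
    print(f"{name}: {still_air_range_km(ratio, 1000):.0f} km")
# Grunau Baby: 17 km ... Eta: 70 km
```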
High aerodynamic efficiency is essential to achieve a good gliding performance, and so gliders often have aerodynamic features seldom found in other aircraft. The wings of a modern racing glider are designed by computers to create a low-drag laminar flow airfoil. After the wings' surfaces have been shaped by a mould to great accuracy, they are then highly polished. Vertical winglets at the ends of the wings decrease drag and so improve wing efficiency. Special aerodynamic seals are used at the ailerons, rudder and elevator to prevent the flow of air through control surface gaps. Turbulator devices in the form of a zig-zag tape or multiple blow holes positioned in a span-wise line along the wing are used to trip laminar flow air into turbulent flow at a desired location on the wing. This flow control prevents the formation of laminar flow bubbles and ensures the absolute minimum drag. Bug-wipers may be installed to wipe the wings while in flight and remove insects that are disturbing the smooth flow of air over the wing.
Modern competition gliders carry jettisonable water ballast (in the wings and sometimes in the vertical stabilizer). The extra weight provided by the water ballast is advantageous if the lift is likely to be strong, and may also be used to adjust the glider's center of mass. Moving the center of mass toward the rear by carrying water in the vertical stabilizer reduces the required down-force from the horizontal stabilizer and the resultant drag from that down-force. Although heavier gliders have a slight disadvantage when climbing in rising air, they achieve a higher speed at any given glide angle. This is an advantage in strong conditions when the gliders spend only a small amount of time climbing in thermals. The pilot can jettison the water ballast before it becomes a disadvantage in weaker thermal conditions. Another use of water ballast is to dampen air turbulence such as might be encountered during ridge soaring. To avoid undue stress on the airframe, gliders must jettison any water ballast before landing.
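The speed benefit of ballast can be sketched with the standard polar-scaling rule from sailplane theory (not stated in the text above): at the same glide angle, airspeed scales with the square root of the weight ratio. The masses and speed below are hypothetical, chosen only to show the size of the effect:

```python
import math

# Sketch of the textbook polar-scaling rule: flying at the same glide angle,
# a ballasted glider's airspeed (and sink rate) scale with sqrt(weight ratio).
# All numbers here are hypothetical, for illustration only.

def ballasted_speed(speed_empty_kmh: float, mass_empty_kg: float,
                    ballast_kg: float) -> float:
    """Airspeed at the same glide angle after adding water ballast."""
    return speed_empty_kmh * math.sqrt((mass_empty_kg + ballast_kg) / mass_empty_kg)

# A hypothetical 350 kg glider carrying 150 kg of water: best-glide speed
# rises from 105 km/h to about 125 km/h at the same L/D.
print(round(ballasted_speed(105, 350, 150), 1))   # 125.5
```

This is why ballast pays off in strong conditions: the same glide angle is flown noticeably faster, at the cost of a slower climb in thermals.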
Most gliders are built in Europe and are designed to EASA Certification Specification CS-22 (previously Joint Aviation Requirements-22). These define minimum standards for safety in a wide range of characteristics such as controllability and strength. For example, gliders must have design features to minimize the possibility of incorrect assembly (gliders are often stowed in disassembled configuration, with at least the wings being detached). Automatic connection of the controls during rigging is the common method of achieving this.
== Launch and flight ==
The two most common methods of launching sailplanes are by aerotow and by winch. When aerotowed, the sailplane is towed behind a powered aircraft using a rope about 60 metres (200 ft) long. The sailplane pilot releases the rope after reaching the desired altitude; the towplane can also release the rope in an emergency. Winch launching uses a powerful stationary engine located on the ground at the far end of the launch area. The sailplane is attached to one end of 800 to 1,200 metres (2,600 to 3,900 ft) of cable and the winch rapidly winds it in. The sailplane can gain about 270 to 910 metres (900 to 3,000 ft) of height with a winch launch, depending on the headwind. Less often, automobiles are used to pull sailplanes into the air, either by pulling them directly or through the use of a reverse pulley in a similar manner to a winch launch. Elastic ropes (known as bungees) are occasionally used at some sites to launch gliders from slopes, if there is sufficient wind blowing up the hill; bungee launching was the predominant method of launching early gliders. Some modern gliders can self-launch using retractable engines or retractable propellers (see motor glider). These engines can use internal combustion or battery power.
Once launched, gliders try to gain height using thermals, ridge lift, lee waves or convergence zones and can remain airborne for hours. This is known as "soaring". By finding lift sufficiently often, experienced pilots fly cross-country, often on pre-declared tasks of hundreds of kilometers, usually back to the original launch site. Cross-country flying and aerobatics are the two forms of competitive gliding. For information about the forces in gliding flight, see lift-to-drag ratio.
== Glide slope control ==
Pilots need some form of control over the glide slope to land the glider. In powered aircraft, this is done by reducing engine thrust. In gliders, other methods are used to reduce the lift generated by the wing, increase the drag of the entire glider, or both. Glide slope is the distance traveled for each unit of height lost. In a steady wings-level glide with no wind, the glide slope equals the lift-to-drag ratio (L/D) of the glider, called "L-over-D". Reducing lift from the wings and/or increasing drag lowers the effective L/D, allowing the glider to descend at a steeper angle with no increase in airspeed. Simply pointing the nose downwards only converts altitude into a higher airspeed, with a minimal initial reduction in total energy. Because of their long, low wings, gliders experience a strong ground effect, which can significantly flatten the glide near the ground and make it difficult to bring the glider to Earth in a short distance.
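The link between L/D and descent angle can be shown numerically; the L/D values below are hypothetical, chosen only to illustrate how strongly drag devices steepen an approach:

```python
import math

# Illustrative sketch: the still-air descent angle implied by an L/D, and how
# cutting the effective L/D (e.g. with air brakes) steepens it. The values 40
# and 10 are hypothetical, not taken from any particular glider.

def glide_angle_deg(lift_to_drag: float) -> float:
    """Descent angle below the horizon for a steady wings-level glide."""
    return math.degrees(math.atan(1.0 / lift_to_drag))

clean = glide_angle_deg(40)    # ~1.4 degrees: far too flat for a short landing
braked = glide_angle_deg(10)   # ~5.7 degrees: a usable approach slope
print(f"clean {clean:.1f} deg, with brakes {braked:.1f} deg")
```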
Sideslipping
A slip is performed by crossing the controls (rudder to right with ailerons to left, for example) so that the glider is no longer flying aligned with the air flow. This will present one side of the fuselage to the air-flow significantly increasing drag. Early gliders primarily used slipping for glide slope control.
Spoilers
Spoilers are movable control surfaces in the top of the wing, usually located mid-chord or near the spar which are raised into the air-flow to eliminate (spoil) the lift from the wing area behind the spoiler, disrupting the spanwise distribution of lift and increasing lift-induced drag. Spoilers significantly increase drag.
Air brakes
Air brakes, also known as dive brakes, are devices whose primary purpose is to increase drag. On gliders, the spoilers act as air brakes. They are positioned on both the upper and lower surfaces of the wing. When slightly opened, the upper brakes spoil the lift, but when fully opened they present a large surface and so provide significant drag. Some gliders have terminal-velocity dive brakes, which provide enough drag to keep the glider's speed below the maximum permitted speed even if it were pointing straight down. This capability is considered a safer way to descend through cloud without instruments than the only alternative, an intentional spin.
Flaps
Flaps are movable surfaces on the trailing edge of the wing, inboard of the ailerons. The primary purpose of flaps is to increase the camber of the wing and so increase the maximum lift coefficient and reduce the stall speed. Some flapped gliders also have negative flap settings, which deflect the trailing edge upward a small amount. This feature is included on some competition gliders to reduce the pitching moment acting on the wing and hence the downward force that must be provided by the horizontal stabiliser; this reduces the induced drag acting on the stabiliser. On some types the flaps and ailerons are linked, and are known as 'flaperons'. Simultaneous movement of these allows a greater rate of roll.
Parachute
Some high-performance gliders from the 1960s and 1970s were designed to carry a small drogue parachute because their air brakes were not particularly effective. This was stored in the tail-cone of the glider during flight. When deployed, a parachute causes a large increase in drag, but it has a significant disadvantage compared with the other methods of glide slope control: it does not allow the pilot to finely adjust the glide slope. Consequently, a pilot may have to jettison the parachute entirely if the glider is not going to reach the desired landing area.
== Landing ==
Early glider designs used skids for landing, but modern types generally land on wheels. Some of the earliest gliders used a dolly with wheels for taking off; the dolly was jettisoned as the glider left the ground, leaving just the skid for landing. A glider may be designed so the center of gravity (CG) is behind the main wheel, so that the glider sits nose-high on the ground. Other designs may have the CG forward of the main wheel, so the nose rests on a nose-wheel or skid when stopped. Skids are now mainly used only on training gliders such as the Schweizer SGS 2-33. Skids are around 100 millimetres (4 in) wide by 900 mm (3 ft) long and run from the nose to the main wheel. Skids help with braking after landing by allowing the pilot to put forward pressure on the control stick, thus creating friction between the skid and the ground. The wing tips also have small skids or wheels to protect the wing tips from ground contact.
In most high performance gliders the undercarriage can be raised to reduce drag in flight and lowered for landing. Wheel brakes are provided to allow stopping once on the ground. These may be engaged by fully extending the spoilers/air-brakes or by using a separate control. Although there is only a single main wheel, the glider's wing can be kept level by using the flight controls until it is almost stationary.
Pilots usually land back at the airfield from which they took off, but a landing is possible in any flat field about 250 metres long. Ideally, should circumstances permit, a glider would fly a standard pattern, or circuit, in preparation for landing, typically starting at a height of 300 metres (1,000 ft). Glide slope control devices are then used to adjust the height to assure landing at the desired point. The ideal landing pattern positions the glider on final approach so that a deployment of 30–60% of the spoilers/dive brakes/flaps brings it to the desired touchdown point. In this way the pilot has the option of opening or closing the spoilers/air-brakes to extend or steepen the descent to reach the touchdown point. This gives the pilot wide safety margins should unexpected events occur. If such control devices are not sufficient, the pilot may use maneuvers such as a forward slip to further steepen the glide slope.
== Auxiliary engines ==
Most gliders require assistance to launch, though some have an engine powerful enough to launch unaided. In addition, a high proportion of new gliders have an engine which will sustain the glider in the air, but is insufficiently powerful to launch the glider. Compared with self-launchers these lower powered engines have advantages in weight, lower costs and pilot licensing. The engines can be electric, jet, or two-stroke gasoline.
== Instrumentation and other technical aids ==
Gliders in continental Europe use metric units, like km/h for airspeed and m/s for lift and sink rate. In the United States, United Kingdom, Australia and some other countries gliders use knots and ft/min in common with commercial aviation worldwide.
In addition to an altimeter, compass, and an airspeed indicator, gliders are often equipped with a variometer and an airband radio (transceiver), each of which may be required in some countries. A transponder may be installed to assist controllers when the glider is crossing busy or controlled airspace, and may be supplemented by ADS-B. Without these devices, access to some airspace may become increasingly restricted in some countries. In countries where cloud-flying is allowed, an artificial horizon or a turn and slip indicator is used when there is zero visibility. Increasingly, anti-collision warning systems such as FLARM are also used and are even mandatory in some European countries. An emergency locator transmitter (ELT) may also be fitted into the glider to reduce search and rescue time in case of an accident.
Much more than in other types of aviation, glider pilots depend on the variometer, which is a very sensitive vertical speed indicator, to measure the climb or sink rate of the plane. This enables the pilot to detect minute changes caused when the glider enters rising or sinking air masses. Most often electronic 'varios' are fitted to a glider, though mechanical varios are often installed as back-up. The electronic variometers produce a modulated sound of varying amplitude and frequency depending on the strength of the lift or sink, so that the pilot can concentrate on centering a thermal, watching for other traffic, navigating, and monitoring the weather. Rising air is announced to the pilot as a rising tone, with increasing pitch as the lift increases. Conversely, descending air is announced with a lowering tone, which advises the pilot to escape the sink area as soon as possible. (Refer to the variometer article for more information).
Variometers are sometimes fitted with mechanical or electronic devices to indicate the optimal speed to fly for given conditions. The MacCready setting can be input electronically or adjusted using a ring surrounding the dial. These devices are based on the mathematical theory attributed to Paul MacCready though it was first described by Wolfgang Späte in 1938. MacCready theory solves the problem of how fast a pilot should cruise between thermals, given both the average lift the pilot expects in the next thermal climb, as well as the amount of lift or sink encountered in cruise mode. Electronic variometers make the same calculations automatically, after allowing for factors such as the glider's theoretical performance, water ballast, headwinds/tailwinds and insects on the leading edges of the wings.
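For a quadratic approximation of the glide polar, MacCready's optimum has a simple closed form. The sketch below illustrates the calculation; the polar coefficients are purely illustrative and do not describe any particular glider.

```python
import math

def speed_to_fly(mc, a=0.002, c=3.0):
    """Optimum cruise speed (m/s) for MacCready setting mc (expected climb, m/s).

    Assumes a quadratic glide polar sink(v) = a*v**2 + b*v + c (illustrative
    coefficients, not a real glider). Maximising the cross-country speed
    v * mc / (mc + sink(v)) gives the condition mc + sink(v) = v * sink'(v);
    for a quadratic polar the linear term b cancels, leaving
    v = sqrt((mc + c) / a).
    """
    return math.sqrt((mc + c) / a)

# Stronger expected climbs justify flying faster between thermals
print(round(speed_to_fly(0.0), 1))  # 38.7 (still air, no expected climb)
print(round(speed_to_fly(2.0), 1))  # 50.0 (expecting 2 m/s climbs)
```

The optimum grows with the MacCready setting, which matches the pilot's rule of thumb: the stronger the expected lift, the faster one should cruise between thermals.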
Soaring flight computers running specialized soaring software have been designed for use in gliders. Using GPS technology in conjunction with a barometric device these tools can:
Provide the glider's position in 3 dimensions by a moving map display
Alert the pilot to nearby airspace restrictions
Indicate position along track and remaining distance and course direction
Show airports within theoretical gliding distance
Determine wind direction and speed at current altitude
Show historical lift information
Create a GPS log of the flight to provide proof for contests and gliding badges
Provide "final" glide information (i.e., showing if the glider can reach the finish without additional lift).
Indicate the best speed to fly under current conditions
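The final-glide check in the list above can be sketched as a simple calculation. The glide ratio, wind model, and safety reserve below are illustrative assumptions, not the algorithm of any particular flight computer.

```python
def final_glide(dist_m, alt_agl_m, glide_ratio, wind_ms=0.0,
                airspeed_ms=30.0, reserve_m=200.0):
    """Return (reachable, altitude_needed_m) for a straight final glide.

    wind_ms > 0 is a tailwind. The still-air glide ratio is scaled by
    groundspeed/airspeed to get the effective glide angle over the ground,
    and a fixed safety reserve is added on top.
    """
    groundspeed = airspeed_ms + wind_ms
    effective_ratio = glide_ratio * groundspeed / airspeed_ms
    needed = dist_m / effective_ratio + reserve_m
    return alt_agl_m >= needed, needed

# A 40:1 glider 20 km out in still air needs 500 m plus the 200 m reserve
print(final_glide(20_000, 800, 40))  # (True, 700.0)
```

A headwind reduces the effective glide ratio, so the same glide may become unreachable from the same height.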
After the flight the GPS data may be replayed on computer software for analysis and to follow the trace of one or more gliders against a backdrop of a map, an aerial photograph or the airspace.
== Markings ==
So that ground-based observers may identify gliders in flight or in gliding competition, registration marks ("insignias", "competition numbers" or "contest IDs") are displayed in large characters on the underside of one wing, and also on the fin and rudder. Registration marks are assigned by gliding associations such as the US Soaring Society of America, and are unrelated to national registrations issued by entities such as the US Federal Aviation Administration. The need for visual identification has been partly supplanted by GPS position recording. Insignias are useful in two ways: first, pilots use their competition numbers as call signs in radio communications between gliders; second, they allow pilots flying in close proximity to identify one another quickly and warn of potential dangers. For example, during gatherings of multiple gliders within thermals (known as "gaggles"), one pilot might report "Six-Seven-Romeo, I am right below you".
Fibreglass gliders are invariably painted white to minimise their skin temperature in sunlight. Fibreglass resin loses strength as its temperature rises into the range achievable in direct sun on a hot day. Color is not used except for a few small bright patches on wing tips; these patches (typically orange or red) improve a glider's visibility to other airborne aircraft. Such patches are obligatory for mountain flying in France. Non-fibreglass gliders made of aluminum or wood are not so subject to deterioration at higher temperatures and are often quite brightly painted.
== Comparison of types ==
There is sometimes confusion about gliders/sailplanes, hang gliders and paragliders. In particular, paragliders and hang gliders are both foot-launched. The main differences between the types are:
== Competition classes ==
Eight competition classes of glider have been defined by the FAI. They are:
Standard Class (No flaps, 15 m wing-span, water ballast allowed)
15 metre Class (Flaps allowed, 15 m wing-span, water ballast allowed)
18 metre Class (Flaps allowed, 18 m wing-span, water ballast allowed)
Open Class (No restrictions except a limit of 850 kg for the maximum all-up weight)
Two Seater Class (maximum wing-span of 20 m), also known by the German name "Doppelsitzer"
Club Class (This class allows a wide range of older small gliders with different performance, so the scores have to be adjusted by handicapping. Water ballast is not allowed).
World Class (The FAI Gliding Commission which is part of the FAI and an associated body called Organisation Scientifique et Technique du Vol à Voile (OSTIV) announced a competition in 1989 for a low-cost glider, which had moderate performance, was easy to assemble and to handle, and was safe for low-hours pilots to fly. The winning design was announced in 1993 as the Warsaw Polytechnic PW-5. This allows competitions to be run with only one type of glider.)
Ultralight Class, for gliders with a maximum mass less than 220 kg.
== Major manufacturers ==
A large proportion of gliders have been and are still made in Germany, the birthplace of the sport. In Germany there are several manufacturers but the three principal companies are:
DG Flugzeugbau GmbH
Schempp-Hirth GmbH
Alexander Schleicher GmbH & Co
Germany also has Stemme and Lange Aviation. Elsewhere in the world, there are other manufacturers such as Jonker Sailplanes in South Africa, Sportinė Aviacija in Lithuania, Allstar PZL in Poland, Let Kunovice and HpH in the Czech Republic and AMS Flight in Slovenia.
== See also ==
Glider types
List of gliders
Military glider
History
Rhön-Rossitten Gesellschaft
Schweizer brothers
Gliding as a sport
Gliding
Gliding competition
Other unpowered aircraft
Rotor kite
Unpowered aircraft
Unpowered flying toys and models
Paper plane
Radio-controlled glider
== References ==
== External links ==
Information about all types of glider
Sailplane Directory at the Wayback Machine (archived 21 April 2016) – An enthusiast's web-site that lists manufacturers and models of gliders, past and present.
FAI webpages
FAI records Archived 17 October 2021 at the Wayback Machine – sporting aviation page with international world soaring records in distances, speeds, routes, and altitude
National Gliding Federations at the Wayback Machine (archived 23 November 2010) |
Global Positioning System | The Global Positioning System (GPS) is a satellite-based hyperbolic navigation system owned by the United States Space Force and operated by Mission Delta 31. It is one of the global navigation satellite systems (GNSS) that provide geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. It does not require the user to transmit any data, and operates independently of any telephone or Internet reception, though these technologies can enhance the usefulness of the GPS positioning information. It provides critical positioning capabilities to military, civil, and commercial users around the world. Although the United States government created, controls, and maintains the GPS system, it is freely accessible to anyone with a GPS receiver.
== Overview ==
The GPS project was started by the U.S. Department of Defense in 1973. The first prototype spacecraft was launched in 1978 and the full constellation of 24 satellites became operational in 1993. After Korean Air Lines Flight 007 was shot down when it mistakenly entered Soviet airspace, President Ronald Reagan determined that the GPS system would be made available for civilian use as of 1988; however, initially this civilian use was limited to an average accuracy of 100 meters (330 ft) by use of Selective Availability (SA), a deliberate error introduced into the GPS data that military receivers could correct for.
As civilian GPS usage grew, there was increasing pressure to remove this error. The SA system was temporarily disabled during the Gulf War, as a shortage of military GPS units meant that many US soldiers were using civilian GPS units sent from home. In the 1990s, Differential GPS systems from the US Coast Guard, Federal Aviation Administration, and similar agencies in other countries began to broadcast local GPS corrections, reducing the effect of both SA degradation and atmospheric effects (that military receivers also corrected for). The U.S. military had also developed methods to perform local GPS jamming, meaning that the ability to globally degrade the system was no longer necessary. As a result, United States President Bill Clinton issued a directive ordering that Selective Availability be disabled on May 1, 2000; and, in 2007, the US government announced that the next generation of GPS satellites would not include the feature at all.
Advances in technology and new demands on the existing system have now led to efforts to modernize the GPS and implement the next generation of GPS Block III satellites and Next Generation Operational Control System (OCX) which was authorized by the U.S. Congress in 2000. When Selective Availability was discontinued, GPS was accurate to about 5 meters (16 ft). GPS receivers that use the L5 band have much higher accuracy of 30 centimeters (12 in), while those for high-end applications such as engineering and land surveying are accurate to within 2 cm (3⁄4 in) and can even provide sub-millimeter accuracy with long-term measurements. Consumer devices such as smartphones can be accurate to 4.9 m (16 ft) or better when used with assistive services like Wi-Fi positioning.
As of July 2023, 18 GPS satellites broadcast L5 signals, which are considered pre-operational prior to being broadcast by a full complement of 24 satellites in 2027.
== History ==
The GPS project was launched in the United States in 1973 to overcome the limitations of previous navigation systems, combining ideas from several predecessors, including classified engineering design studies from the 1960s. The U.S. Department of Defense developed the system, which originally used 24 satellites, for use by the United States military, and became fully operational in 1993. Civilian use was allowed from the 1980s. Roger L. Easton of the Naval Research Laboratory, Ivan A. Getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics Laboratory are credited with inventing it. The work of Gladys West on the creation of the mathematical geodetic Earth model is credited as instrumental in the development of computational techniques for detecting satellite positions with the precision needed for GPS.
The design of GPS is based partly on similar ground-based radio-navigation systems, such as LORAN and the Decca Navigator System, developed in the early 1940s. In 1955, Friedwardt Winterberg proposed a test of general relativity—detecting time slowing in a strong gravitational field using accurate atomic clocks placed in orbit inside artificial satellites. Special and general relativity predict that the clocks on GPS satellites, as observed by those on Earth, run 38 microseconds faster per day than those on the Earth. The design of GPS corrects for this difference; without the correction, GPS-calculated positions would accumulate errors of up to 10 kilometers per day (6 mi/d).
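The size of that error follows directly from the clock drift: an uncorrected offset accumulating at 38 microseconds per day, multiplied by the speed of light, corresponds to roughly 11 km of ranging error per day, consistent with the order-of-magnitude figure above.

```python
C = 299_792_458.0          # speed of light, m/s
drift_s_per_day = 38e-6    # net relativistic clock offset accumulated per day

# An uncorrected timing error translates directly into a ranging error
range_error_m = drift_s_per_day * C
print(f"{range_error_m / 1000:.1f} km of ranging error per day")  # 11.4 km
```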
=== Predecessors ===
When the Soviet Union launched its first artificial satellite (Sputnik 1) in 1957, two American physicists, William Guier and George Weiffenbach, at Johns Hopkins University's Applied Physics Laboratory (APL) monitored its radio transmissions. Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to their UNIVAC I computer to perform the heavy calculations required.
Early the next year, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem: pinpointing the user's location, given the satellite's. (At the time, the Navy was developing the submarine-launched Polaris missile, which required them to know the submarine's location.) This led them and APL to develop the TRANSIT system. In 1959, ARPA (renamed DARPA in 1972) also played a role in TRANSIT.
TRANSIT was first successfully tested in 1960. It used a constellation of five satellites and could provide a navigational fix approximately once per hour. In 1967, the U.S. Navy developed the Timation satellite, which proved the feasibility of placing accurate clocks in space, a technology required for GPS.
In the 1970s, the ground-based OMEGA navigation system, based on phase comparison of signal transmission from pairs of stations, became the first worldwide radio navigation system. Limitations of these systems drove the need for a more universal navigation solution with greater accuracy.
Although there were wide needs for accurate navigation in military and civilian sectors, almost none of those was seen as justification for the billions of dollars it would cost in research, development, deployment, and operation of a constellation of navigation satellites. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify this cost in the view of the United States Congress. This deterrent effect is why GPS was funded. It is also the reason for the ultra-secrecy at that time. The nuclear triad consisted of the United States Navy's submarine-launched ballistic missiles (SLBMs) along with United States Air Force (USAF) strategic bombers and intercontinental ballistic missiles (ICBMs). Considered vital to the nuclear deterrence posture, accurate determination of the SLBM launch position was a force multiplier.
Precise navigation would enable United States ballistic missile submarines to get an accurate fix of their positions before they launched their SLBMs. The USAF, with two-thirds of the nuclear triad, also had requirements for a more accurate and reliable navigation system. The U.S. Navy and U.S. Air Force were developing their own technologies in parallel to solve what was essentially the same problem. To increase the survivability of ICBMs, there was a proposal to use mobile launch platforms (comparable to the Soviet SS-24 and SS-25) and so the need to fix the launch position had similarity to the SLBM situation.
In 1960, the Air Force proposed a radio-navigation system called MOSAIC (MObile System for Accurate ICBM Control) that was essentially a 3-D LORAN System. A follow-on study, Project 57, was performed in 1963 and it was "in this study that the GPS concept was born". That same year, the concept was pursued as Project 621B, which had "many of the attributes that you now see in GPS" and promised increased accuracy for U.S. Air Force bombers as well as ICBMs.
Updates from the Navy TRANSIT system were too slow for the high speeds of Air Force operation. The Naval Research Laboratory (NRL) continued making advances with their Timation (Time Navigation) satellites, first launched in 1967, second launched in 1969, with the third in 1974 carrying the first atomic clock into orbit and the fourth launched in 1977.
Another important predecessor to GPS came from a different branch of the United States military. In 1964, the United States Army orbited its first Sequential Collation of Range (SECOR) satellite used for geodetic surveying. The SECOR system included three ground-based transmitters at known locations that would send signals to the satellite transponder in orbit. A fourth ground-based station, at an undetermined position, could then use those signals to fix its location precisely. The last SECOR satellite was launched in 1969.
=== Development ===
With these parallel developments in the 1960s, it was realized that a superior system could be developed by synthesizing the best technologies from 621B, Transit, Timation, and SECOR in a multi-service program. Satellite orbital position errors, induced by variations in the gravity field and radar refraction among others, had to be resolved. A team led by Harold L. Jury of Pan Am Aerospace Division in Florida from 1970 to 1973, used real-time data assimilation and recursive estimation to do so, reducing systematic and residual errors to a manageable level to permit accurate navigation.
During Labor Day weekend in 1973, a meeting of about twelve military officers at the Pentagon discussed the creation of a Defense Navigation Satellite System (DNSS). It was at this meeting that the real synthesis that became GPS was created. Later that year, the DNSS program was named Navstar. Navstar is often erroneously considered an acronym for "NAVigation System using Timing And Ranging" but was never considered as such by the GPS Joint Program Office (TRW may have once advocated for a different navigational system that used that acronym). With the individual satellites being associated with the name Navstar (as with the predecessors Transit and Timation), a more fully encompassing name was used to identify the constellation of Navstar satellites, Navstar-GPS. Ten "Block I" prototype satellites were launched between 1978 and 1985 (an additional unit was destroyed in a launch failure).
The effect of the ionosphere on radio transmission was investigated in a geophysics laboratory of Air Force Cambridge Research Laboratory, renamed to Air Force Geophysical Research Lab (AFGRL) in 1974. AFGRL developed the Klobuchar model for computing ionospheric corrections to GPS location. Of note is work done by Australian space scientist Elizabeth Essex-Cohen at AFGRL in 1974. She was concerned with the curving of the paths of radio waves (atmospheric refraction) traversing the ionosphere from NavSTAR satellites.
After Korean Air Lines Flight 007, a Boeing 747 carrying 269 people, was shot down by a Soviet interceptor aircraft after straying into prohibited airspace because of navigational errors, in the vicinity of Sakhalin and Moneron Islands, President Ronald Reagan issued a directive making GPS freely available for civilian use, once it was sufficiently developed, as a common good. The first Block II satellite was launched on February 14, 1989, and the 24th satellite was launched in 1994. The GPS program cost at this point, not including the cost of the user equipment but including the costs of the satellite launches, has been estimated at US$5 billion (equivalent to $11 billion in 2024).
Initially, the highest-quality signal was reserved for military use, and the signal available for civilian use was intentionally degraded, in a policy known as Selective Availability. This changed on May 1, 2000, with U.S. President Bill Clinton signing a policy directive to turn off Selective Availability to provide the same accuracy to civilians that was afforded to the military. The directive was proposed by the U.S. Secretary of Defense, William Perry, in view of the widespread growth of differential GPS services by private industry to improve civilian accuracy. Moreover, the U.S. military was developing technologies to deny GPS service to potential adversaries on a regional basis. Selective Availability was removed from the GPS architecture beginning with GPS-III.
Since its deployment, the U.S. has implemented several improvements to the GPS service, including new signals for civil use and increased accuracy and integrity for all users, all the while maintaining compatibility with existing GPS equipment. Modernization of the satellite system has been an ongoing initiative by the U.S. Department of Defense through a series of satellite acquisitions to meet the growing needs of the military, civilians, and the commercial market. As of early 2015, high-quality Standard Positioning Service (SPS) GPS receivers provided horizontal accuracy of better than 3.5 meters (11 ft), although many factors such as receiver and antenna quality and atmospheric issues can affect this accuracy.
GPS is owned and operated by the United States government as a national resource. The Department of Defense is the steward of GPS. The Interagency GPS Executive Board (IGEB) oversaw GPS policy matters from 1996 to 2004. After that, the National Space-Based Positioning, Navigation and Timing Executive Committee was established by presidential directive in 2004 to advise and coordinate federal departments and agencies on matters concerning the GPS and related systems. The executive committee is chaired jointly by the Deputy Secretaries of Defense and Transportation. Its membership includes equivalent-level officials from the Departments of State, Commerce, and Homeland Security, the Joint Chiefs of Staff and NASA. Components of the executive office of the president participate as observers to the executive committee, and the FCC chairman participates as a liaison.
The U.S. Department of Defense is required by law to "maintain a Standard Positioning Service (as defined in the federal radio navigation plan and the standard positioning service signal specification) that will be available on a continuous, worldwide basis" and "develop measures to prevent hostile use of GPS and its augmentations without unduly disrupting or degrading civilian uses".
=== Timeline and modernization ===
In 1972, the U.S. Air Force Central Inertial Guidance Test Facility (Holloman Air Force Base) conducted developmental flight tests of four prototype GPS receivers in a Y configuration over White Sands Missile Range, using ground-based pseudo-satellites.
In 1978, the first experimental Block-I GPS satellite was launched.
In 1983, after a Soviet Union interceptor aircraft shot down the civilian airliner KAL 007, which had strayed into prohibited airspace because of navigational errors, killing all 269 people on board, U.S. President Ronald Reagan announced that GPS would be made available for civilian uses once it was completed, although it had been publicly known as early as 1979 that the C/A (Coarse/Acquisition) code would be available to civilian users.
By 1985, ten more experimental Block-I satellites had been launched to validate the concept.
Beginning in 1988, command and control of these satellites was moved from Onizuka AFS, California to the 2nd Satellite Control Squadron (2SCS) located at Schriever Space Force Base in Colorado Springs, Colorado.
On February 14, 1989, the first modern Block-II satellite was launched.
The Gulf War from 1990 to 1991 was the first conflict in which the military widely used GPS.
In 1991, DARPA's project to create a miniature GPS receiver successfully ended, replacing the previous 16 kg (35 lb) military receivers with a 1.25 kg (2.8 lb) all-digital handheld GPS receiver.
In 1991, TomTom, a Dutch sat-nav manufacturer, was founded.
In 1992, the 2nd Space Wing, which originally managed the system, was inactivated and replaced by the 50th Space Wing.
By December 1993, GPS achieved initial operational capability (IOC), with a full constellation (24 satellites) available and providing the Standard Positioning Service (SPS).
Full Operational Capability (FOC) was declared by Air Force Space Command (AFSPC) in April 1995, signifying full availability of the military's secure Precise Positioning Service (PPS).
In 1996, recognizing the importance of GPS to civilian users as well as military users, U.S. President Bill Clinton issued a policy directive declaring GPS a dual-use system and establishing an Interagency GPS Executive Board to manage it as a national asset.
In 1998, United States Vice President Al Gore announced plans to upgrade GPS with two new civilian signals for enhanced user accuracy and reliability, particularly with respect to aviation safety, and in 2000 the United States Congress authorized the effort, referring to it as GPS III.
On May 2, 2000 "Selective Availability" was discontinued as a result of the 1996 executive order, allowing civilian users to receive a non-degraded signal globally.
In 2004, the United States government signed an agreement with the European Community establishing cooperation related to GPS and Europe's Galileo system.
In 2004, United States President George W. Bush updated the national policy and replaced the executive board with the National Executive Committee for Space-Based Positioning, Navigation, and Timing.
In November 2004, Qualcomm announced successful tests of assisted GPS for mobile phones.
In 2005, the first modernized GPS satellite was launched and began transmitting a second civilian signal (L2C) for enhanced user performance.
On September 14, 2007, the aging mainframe-based Ground segment Control System was transferred to the new Architecture Evolution Plan.
On May 19, 2009, the United States Government Accountability Office issued a report warning that some GPS satellites could fail as soon as 2010.
On May 21, 2009, the Air Force Space Command allayed fears of GPS failure, saying: "There's only a small risk we will not continue to exceed our performance standard."
On January 11, 2010, an update of ground control systems caused a software incompatibility with 8,000 to 10,000 military receivers manufactured by a division of Trimble Navigation Limited of Sunnyvale, California.
On February 25, 2010, the U.S. Air Force awarded the contract to Raytheon Company to develop the GPS Next Generation Operational Control System (OCX) to improve accuracy and availability of GPS navigation signals, and serve as a critical part of GPS modernization.
On July 24, 2020, operation of the GPS constellation was transferred to the newly established U.S. Space Force.
On October 13, 2023, the Space Force activated PNT Delta (Provisional) to manage US navigation warfare assets. 2SOPS and GPS operations were realigned under this new Delta.
=== Awards ===
On February 10, 1993, the National Aeronautic Association selected the GPS Team as winners of the 1992 Robert J. Collier Trophy, the US's most prestigious aviation award. This team combines researchers from the Naval Research Laboratory, the U.S. Air Force, the Aerospace Corporation, Rockwell International Corporation, and IBM Federal Systems Company. The citation honors them "for the most significant development for safe and efficient navigation and surveillance of air and spacecraft since the introduction of radio navigation 50 years ago".
Two GPS developers received the National Academy of Engineering Charles Stark Draper Prize for 2003:
Ivan Getting, emeritus president of The Aerospace Corporation and an engineer at Massachusetts Institute of Technology, established the basis for GPS, improving on the World War II land-based radio system called LORAN (Long-range Radio Aid to Navigation).
Bradford Parkinson, professor of aeronautics and astronautics at Stanford University, conceived the present satellite-based system in the early 1960s and developed it in conjunction with the U.S. Air Force. Parkinson served twenty-one years in the Air Force, from 1957 to 1978, and retired with the rank of colonel.
GPS developer Roger L. Easton received the National Medal of Technology on February 13, 2006. Francis X. Kane (Col. USAF, ret.) was inducted into the U.S. Air Force Space and Missile Pioneers Hall of Fame at Lackland A.F.B., San Antonio, Texas, March 2, 2010, for his role in space technology development and the engineering design concept of GPS conducted as part of Project 621B. In 1998, GPS technology was inducted into the Space Foundation Space Technology Hall of Fame.
On October 4, 2011, the International Astronautical Federation (IAF) awarded the Global Positioning System (GPS) its 60th Anniversary Award, nominated by IAF member, the American Institute for Aeronautics and Astronautics (AIAA). The IAF Honors and Awards Committee recognized the uniqueness of the GPS program and the exemplary role it has played in building international collaboration for the benefit of humanity. On December 6, 2018, Gladys West was inducted into the Air Force Space and Missile Pioneers Hall of Fame in recognition of her work on an extremely accurate geodetic Earth model, which was ultimately used to determine the orbit of the GPS constellation. On February 12, 2019, four founding members of the project were awarded the Queen Elizabeth Prize for Engineering with the chair of the awarding board stating: "Engineering is the foundation of civilisation; ...They've re-written, in a major way, the infrastructure of our world."
== Principles ==
The GPS satellites carry very stable atomic clocks that are synchronized with one another and with the reference atomic clocks at the ground control stations; any drift of the clocks aboard the satellites from the reference time maintained on the ground stations is corrected regularly. Since the speed of radio waves (speed of light) is constant and independent of the satellite speed, the time delay between when the satellite transmits a signal and the ground station receives it is proportional to the distance from the satellite to the ground station. With the distance information collected from multiple ground stations, the location coordinates of any satellite at any time can be calculated with great precision.
Each GPS satellite carries an accurate record of its own position and time, and broadcasts that data continuously. Based on data received from multiple GPS satellites, an end user's GPS receiver can calculate its own four-dimensional position in spacetime; however, at a minimum, four satellites must be in view of the receiver for it to compute four unknown quantities (three position coordinates and the deviation of its own clock from satellite time).
=== More detailed description ===
Each GPS satellite continually broadcasts a signal (carrier wave with modulation) that includes:
A pseudorandom code (sequence of ones and zeros) that is known to the receiver. By time-aligning a receiver-generated version and the receiver-measured version of the code, the time of arrival (TOA) of a defined point in the code sequence, called an epoch, can be found in the receiver clock time scale
A message that includes the time of transmission (TOT) of the code epoch (in GPS time scale) and the satellite position at that time
Conceptually, the receiver measures the TOAs (according to its own clock) of four satellite signals. From the TOAs and the TOTs, the receiver forms four time of flight (TOF) values. Multiplied by the speed of light, each TOF equals the receiver-satellite range plus an offset caused by the difference between the receiver clock and GPS time; these measured quantities are called pseudoranges. The receiver then computes its three-dimensional position and clock deviation from the four TOFs.
In practice the receiver position (in three dimensional Cartesian coordinates with origin at the Earth's center) and the offset of the receiver clock relative to the GPS time are computed simultaneously, using the navigation equations to process the TOFs.
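The navigation equations are typically solved by iterative least squares. The sketch below is a minimal illustration in pure Python; the satellite positions, receiver location, and clock-bias value are synthetic assumptions, not real ephemeris data.

```python
import math

def solve_gauss(A, b):
    """Solve the square system A x = b by Gaussian elimination with pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def solve_position(sats, pseudoranges, iters=10):
    """Gauss-Newton solution of the navigation equations: returns
    (x, y, z, b), where b is the receiver clock bias expressed in meters."""
    x = [0.0, 0.0, 0.0, 0.0]                 # start at Earth's center, zero bias
    for _ in range(iters):
        H, resid = [], []
        for (sx, sy, sz), pr in zip(sats, pseudoranges):
            dx, dy, dz = x[0] - sx, x[1] - sy, x[2] - sz
            rho = math.sqrt(dx * dx + dy * dy + dz * dz)   # geometric range
            H.append([dx / rho, dy / rho, dz / rho, 1.0])  # linearized row
            resid.append(pr - (rho + x[3]))                # pseudorange residual
        # Normal equations H^T H dx = H^T r (exactly determined with 4 satellites)
        HtH = [[sum(h[i] * h[j] for h in H) for j in range(4)] for i in range(4)]
        Htr = [sum(h[i] * r for h, r in zip(H, resid)) for i in range(4)]
        x = [xi + di for xi, di in zip(x, solve_gauss(HtH, Htr))]
    return x

# Synthetic example: four satellites at GPS-like radii, a receiver on the
# surface, and a clock bias equivalent to 85 m of range.
sats = [(26.6e6, 0.0, 0.0), (0.0, 26.6e6, 0.0),
        (0.0, 0.0, 26.6e6), (15.4e6, 15.4e6, 15.4e6)]
truth = (6.371e6, 0.0, 0.0)
prs = [math.dist(s, truth) + 85.0 for s in sats]
est = solve_position(sats, prs)   # recovers position and the 85 m bias
```

With exactly four pseudoranges the least-squares step reduces to solving a 4×4 linear system; with more satellites the same code yields an overdetermined fit.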
The receiver's Earth-centered solution location is usually converted to latitude, longitude and height relative to an ellipsoidal Earth model. The height may then be further converted to height relative to the geoid, which is essentially mean sea level. These coordinates may be displayed, such as on a moving map display, or recorded or used by some other system, such as a vehicle guidance system.
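The conversion from the Earth-centered solution to latitude, longitude, and ellipsoidal height has no closed form and is usually done iteratively. A minimal sketch, using the standard WGS84 ellipsoid constants:

```python
import math

A = 6378137.0            # WGS84 semi-major axis, m
F = 1 / 298.257223563    # WGS84 flattening
E2 = F * (2 - F)         # first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """Forward conversion: degrees and meters -> ECEF coordinates in meters."""
    lat, lon = math.radians(lat), math.radians(lon)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius
    return ((n + h) * math.cos(lat) * math.cos(lon),
            (n + h) * math.cos(lat) * math.sin(lon),
            (n * (1 - E2) + h) * math.sin(lat))

def ecef_to_geodetic(x, y, z):
    """Inverse conversion by fixed-point iteration on the latitude."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1 - E2))        # initial guess
    for _ in range(10):                      # converges in a few iterations
        n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1 - E2 * n / (n + h)))
    return math.degrees(lat), math.degrees(lon), h

# Round trip: a point at 45°N, 7°E, 1000 m should be recovered.
lat, lon, h = ecef_to_geodetic(*geodetic_to_ecef(45.0, 7.0, 1000.0))
```

Height relative to the geoid would then require an additional geoid-undulation model (e.g., EGM96), which is outside this sketch.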
=== User-satellite geometry ===
Although usually not formed explicitly in the receiver processing, the conceptual time differences of arrival (TDOAs) define the measurement geometry. Each TDOA corresponds to a hyperboloid of revolution (see Multilateration). The line connecting the two satellites involved (and its extensions) forms the axis of the hyperboloid. The receiver is located at the point where three hyperboloids intersect.
It is sometimes incorrectly said that the user location is at the intersection of three spheres. While simpler to visualize, this is the case only if the receiver has a clock synchronized with the satellite clocks (i.e., the receiver measures true ranges to the satellites rather than range differences). There are marked performance benefits to the user carrying a clock synchronized with the satellites. Foremost is that only three satellites are needed to compute a position solution. If it were an essential part of the GPS concept that all users needed to carry a synchronized clock, a smaller number of satellites could be deployed, but the cost and complexity of the user equipment would increase.
=== Receiver in continuous operation ===
The description above is representative of a receiver start-up situation. Most receivers have a track algorithm, sometimes called a tracker, that combines sets of satellite measurements collected at different times—in effect, taking advantage of the fact that successive receiver positions are usually close to each other. After a set of measurements are processed, the tracker predicts the receiver location corresponding to the next set of satellite measurements. When the new measurements are collected, the receiver uses a weighting scheme to combine the new measurements with the tracker prediction. In general, a tracker can (a) improve receiver position and time accuracy, (b) reject bad measurements, and (c) estimate receiver speed and direction.
The disadvantage of a tracker is that changes in speed or direction can be computed only with a delay, and that derived direction becomes inaccurate when the distance traveled between two position measurements drops below or near the random error of position measurement. GPS units can use measurements of the Doppler shift of the signals received to compute velocity accurately. More advanced navigation systems use additional sensors like a compass or an inertial navigation system to complement GPS.
=== Non-navigation applications ===
GPS requires four or more satellites to be visible for accurate navigation. The solution of the navigation equations gives the position of the receiver along with the difference between the time kept by the receiver's on-board clock and the true time-of-day, thereby eliminating the need for a more precise and possibly impractical receiver-based clock. Applications such as time transfer, traffic signal timing, and synchronization of cell phone base stations make use of this cheap and highly accurate timing. Some GPS applications use this time for display; others do not use it at all beyond the basic position calculation.
Although four satellites are required for normal operation, fewer suffice in special cases. If one variable is already known, a receiver can determine its position using only three satellites. For example, a ship on the open ocean usually has a known elevation close to 0 m, and the elevation of an aircraft may be known. Some GPS receivers may use additional clues or assumptions such as reusing the last known altitude, dead reckoning, inertial navigation, or including information from the vehicle computer, to give a (possibly degraded) position when fewer than four satellites are visible.
== Structure ==
The current GPS consists of three major segments: a space segment, a control segment, and a user segment. The U.S. Space Force develops, maintains, and operates the space and control segments. GPS satellites broadcast signals from space, and each GPS receiver uses these signals to calculate its three-dimensional location (latitude, longitude, and altitude) and the current time.
=== Space segment ===
The space segment (SS) is composed of 24 to 32 satellites, or Space Vehicles (SV), in medium Earth orbit, and also includes the payload adapters to the boosters required to launch them into orbit. The GPS design originally called for 24 SVs, eight each in three approximately circular orbits, but this was modified to six orbital planes with four satellites each. The six orbit planes have approximately 55° inclination (tilt relative to the Earth's equator) and are separated by 60° right ascension of the ascending node (angle along the equator from a reference point to the orbit's intersection). The orbital period is one-half of a sidereal day, i.e., 11 hours and 58 minutes, so that the satellites pass over the same locations or almost the same locations every day. The orbits are arranged so that at least six satellites are always within line of sight from everywhere on the Earth's surface. To meet this objective, the four satellites within each orbit are not evenly spaced (90° apart); in general terms, the angular separations between satellites in each orbit are 30°, 105°, 120°, and 105°, which sum to 360°.
Orbiting at an altitude of approximately 20,200 km (12,600 mi); orbital radius of approximately 26,600 km (16,500 mi), each SV makes two complete orbits each sidereal day, repeating the same ground track each day. This was very helpful during development because even with only four satellites, correct alignment means all four are visible from one spot for a few hours each day. For military operations, the ground track repeat can be used to ensure good coverage in combat zones.
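The quoted orbital period follows directly from Kepler's third law. A quick check, using a nominal semi-major axis of 26,560 km (an assumed round figure consistent with the approximate values above):

```python
import math

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
A_SEMI = 26.56e6           # nominal GPS semi-major axis, m (~20,200 km altitude)
SIDEREAL_DAY = 86164.1     # seconds

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)
period = 2 * math.pi * math.sqrt(A_SEMI ** 3 / MU)

print(period)              # ~43,080 s, i.e. about 11 h 58 min
print(SIDEREAL_DAY / 2)    # half a sidereal day, ~43,082 s
```

The two values agree to within a couple of seconds, which is why each satellite repeats its ground track every sidereal day.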
As of February 2019, there are 31 satellites in the GPS constellation, 27 of which are in use at a given time with the rest allocated as stand-bys. A 32nd was launched in 2018, but as of July 2019 is still in evaluation. More decommissioned satellites are in orbit and available as spares. The additional satellites improve the precision of GPS receiver calculations by providing redundant measurements. With the increased number of satellites, the constellation was changed to a nonuniform arrangement. Such an arrangement was shown to improve accuracy but also improves reliability and availability of the system, relative to a uniform system, when multiple satellites fail. With the expanded constellation, nine satellites are usually visible at any time from any point on the Earth with a clear horizon, ensuring considerable redundancy over the minimum four satellites needed for a position.
=== Control segment ===
The control segment (CS) is composed of:
a master control station (MCS),
an alternative master control station,
four dedicated ground antennas, and
six dedicated monitor stations.
The MCS can also access Satellite Control Network (SCN) ground antennas (for additional command and control capability) and NGA (National Geospatial-Intelligence Agency) monitor stations. The flight paths of the satellites are tracked by dedicated U.S. Space Force monitoring stations in Hawaii, Kwajalein Atoll, Ascension Island, Diego Garcia, Colorado Springs, Colorado and Cape Canaveral, Florida, along with shared NGA monitor stations operated in England, Argentina, Ecuador, Bahrain, Australia and Washington, DC. The tracking information is sent to the MCS at Schriever Space Force Base 25 km (16 mi) ESE of Colorado Springs, which is operated by the 2nd Space Operations Squadron (2 SOPS) of the U.S. Space Force. Then 2 SOPS contacts each GPS satellite regularly with a navigational update using dedicated or shared (AFSCN) ground antennas (GPS dedicated ground antennas are located at Kwajalein, Ascension Island, Diego Garcia, and Cape Canaveral). These updates synchronize the atomic clocks on board the satellites to within a few nanoseconds of each other, and adjust the ephemeris of each satellite's internal orbital model. The updates are created by a Kalman filter that uses inputs from the ground monitoring stations, space weather information, and various other inputs.
When a satellite's orbit is being adjusted, the satellite is marked unhealthy, so receivers do not use it. After the maneuver, engineers track the new orbit from the ground, upload the new ephemeris, and mark the satellite healthy again. The operation control segment (OCS) currently serves as the control segment of record. It provides the operational capability that supports GPS users and keeps the GPS operational and performing within specification.
OCS replaced the 1970s-era mainframe computer at Schriever Air Force Base in September 2007. After installation, the system helped enable upgrades and provide a foundation for a new security architecture that supported U.S. armed forces.
OCS will continue to be the ground control system of record until the new segment, Next Generation GPS Operation Control System (OCX), is fully developed and functional. The U.S. Department of Defense has claimed that the new capabilities provided by OCX will be the cornerstone for enhancing GPS's mission capabilities, enabling U.S. Space Force to enhance GPS operational services to U.S. combat forces, civil partners and domestic and international users. The GPS OCX program also will reduce cost, schedule and technical risk. It is designed to provide 50% sustainment cost savings through efficient software architecture and Performance-Based Logistics. In addition, GPS OCX is expected to cost millions of dollars less than the cost to upgrade OCS while providing four times the capability.
The GPS OCX program represents a critical part of GPS modernization and provides information assurance improvements over the current GPS OCS program.
OCX will have the ability to control and manage GPS legacy satellites as well as the next generation of GPS III satellites, while enabling the full array of military signals.
Built on a flexible architecture that can rapidly adapt to the changing needs of GPS users, allowing immediate access to GPS data and constellation status through secure, accurate and reliable information.
Provides the warfighter with more secure, actionable and predictive information to enhance situational awareness.
Enables new modernized signals (L1C, L2C, and L5) and has M-code capability, which the legacy system is unable to do.
Provides significant information assurance improvements over the current program including detecting and preventing cyber attacks, while isolating, containing and operating during such attacks.
Supports higher-volume, near-real-time command and control capabilities.
On September 14, 2011, the U.S. Air Force announced the completion of the GPS OCX Preliminary Design Review and confirmed that the OCX program was ready for the next phase of development. The GPS OCX program subsequently missed major milestones and pushed its launch into 2021, five years past the original deadline. According to the Government Accountability Office in 2019, even the 2021 deadline looked shaky.
The project remained delayed in 2023, and was (as of June 2023) 73% over its original estimated budget. In late 2023, Frank Calvelli, the assistant secretary of the Air Force for space acquisitions and integration, stated that the project was estimated to go live some time during the summer of 2024.
=== User segment ===
The user segment (US) is composed of hundreds of thousands of U.S. and allied military users of the secure GPS Precise Positioning Service, and tens of millions of civil, commercial and scientific users of the Standard Positioning Service. In general, GPS receivers are composed of an antenna, tuned to the frequencies transmitted by the satellites, receiver-processors, and a highly stable clock (often a crystal oscillator). They may also include a display for providing location and speed information to the user.
GPS receivers may include an input for differential corrections, using the RTCM SC-104 format. This is typically in the form of an RS-232 port at 4,800 bit/s speed. Data is actually sent at a much lower rate, which limits the accuracy of the signal sent using RTCM. Receivers with internal DGPS receivers can outperform those using external RTCM data. As of 2006, even low-cost units commonly include Wide Area Augmentation System (WAAS) receivers.
Many GPS receivers can relay position data to a PC or other device using the NMEA 0183 protocol. Although this protocol is officially defined by the National Marine Electronics Association (NMEA), references to this protocol have been compiled from public records, allowing open source tools like gpsd to read the protocol without violating intellectual property laws. Other proprietary protocols exist as well, such as the SiRF and MTK protocols. Receivers can interface with other devices using methods including a serial connection, USB, or Bluetooth.
== Applications ==
While originally a military project, GPS is considered a dual-use technology, meaning it has significant civilian applications as well.
GPS has become a widely deployed and useful tool for commerce, scientific uses, tracking, and surveillance. GPS's accurate time facilitates everyday activities such as banking, mobile phone operations, and even the control of power grids by allowing well synchronized hand-off switching.
=== Civilian ===
Many civilian applications use one or more of GPS's three basic components: absolute location, relative movement, and time transfer.
Amateur radio: clock synchronization required for several digital modes such as FT8, FT4 and JS8; also used with APRS for position reporting; is often critical during emergency and disaster communications support.
Atmosphere: studying the troposphere delays (recovery of the water vapor content) and ionosphere delays (recovery of the number of free electrons). Recovery of Earth surface displacements due to the atmospheric pressure loading.
Astronomy: both positional and clock synchronization data is used in astrometry and celestial mechanics and precise orbit determination. GPS is also used in both amateur astronomy with small telescopes as well as by professional observatories for finding extrasolar planets.
Automated vehicle: applying precise vehicle location, coupled with highly detailed maps, provides the context needed for cars and trucks to function without a human driver.
Cartography: both civilian and military cartographers use GPS extensively.
Cellular telephony: clock synchronization enables time transfer, which is critical for synchronizing its spreading codes with other base stations to facilitate inter-cell handoff and support hybrid GPS/cellular position detection for mobile emergency calls and other applications. The first handsets with integrated GPS launched in the late 1990s. The U.S. Federal Communications Commission (FCC) mandated the feature in either the handset or in the towers (for use in triangulation) in 2002 so emergency services could locate 911 callers. Third-party software developers later gained access to GPS APIs from Nextel upon launch, followed by Sprint in 2006, and Verizon soon thereafter.
Clock synchronization: the accuracy of GPS time signals (±10 ns) is second only to the atomic clocks they are based on, and is used in applications such as GPS disciplined oscillators.
Disaster relief/emergency services: many emergency services depend upon GPS for location and timing capabilities.
GPS-equipped radiosondes and dropsondes: measure and calculate the atmospheric pressure, wind speed and direction up to 27 km (89,000 ft) from the Earth's surface.
Radio occultation for weather and atmospheric science applications.
Fleet tracking: used to identify, locate and maintain contact reports with one or more fleet vehicles in real-time.
Geodesy: determination of Earth orientation parameters including the daily and sub-daily polar motion, and length-of-day variabilities, Earth's center-of-mass – geocenter motion, and low-degree gravity field parameters.
Geofencing: vehicle tracking systems, person tracking systems, and pet tracking systems use GPS to locate devices that are attached to or carried by a person, vehicle, or pet. The application can provide continuous tracking and send notifications if the target leaves a designated (or "fenced-in") area.
Geotagging: applies location coordinates to digital objects such as photographs (in Exif data) and other documents for purposes such as creating map overlays with devices like Nikon GP-1.
GPS aircraft tracking
GPS for mining: the use of RTK GPS has significantly improved several mining operations such as drilling, shoveling, vehicle tracking, and surveying. RTK GPS provides centimeter-level positioning accuracy.
GPS data mining: It is possible to aggregate GPS data from multiple users to understand movement patterns, common trajectories and interesting locations. GPS data is today used in transportation and disaster engineering to forecast mobility in normal and evacuation situations (e.g., hurricanes, wildfires, earthquakes).
GPS tours: location determines what content to display; for instance, information about an approaching point of interest.
Mental health: tracking mental health functioning and sociability.
Navigation: navigators value digitally precise velocity and orientation measurements, as well as precise positions in real-time with a support of orbit and clock corrections.
Orbit determination of low-orbiting satellites with GPS receiver installed on board, such as GOCE, GRACE, Jason-1, Jason-2, TerraSAR-X, TanDEM-X, CHAMP, Sentinel-3, and some cubesats, e.g., CubETH.
Phasor measurements: GPS enables highly accurate timestamping of power system measurements, making it possible to compute phasors.
Recreation: for example, Geocaching, Geodashing, GPS drawing, waymarking, and other kinds of location based mobile games such as Pokémon Go.
Reference frames: realization and densification of the terrestrial reference frames in the framework of Global Geodetic Observing System. Co-location in space between Satellite laser ranging and microwave observations for deriving global geodetic parameters.
Robotics: self-navigating, autonomous robots using GPS sensors, which calculate latitude, longitude, time, speed, and heading.
Sport: used in football and rugby for the control and analysis of the training load.
Surveying: surveyors use absolute locations to make maps and determine property boundaries.
Tectonics: GPS enables direct fault motion measurement of earthquakes. Between earthquakes GPS can be used to measure crustal motion and deformation to estimate seismic strain buildup for creating seismic hazard maps.
Telematics: GPS technology integrated with computers and mobile communications technology in automotive navigation systems.
==== Restrictions on civilian use ====
The U.S. government controls the export of some civilian receivers. All GPS receivers capable of functioning above 60,000 ft (18 km) above sea level and 1,000 kn (500 m/s; 2,000 km/h; 1,000 mph), or designed or modified for use with unmanned missiles and aircraft, are classified as munitions (weapons)—which means they require State Department export licenses. This rule applies even to otherwise purely civilian units that only receive the L1 frequency and the C/A (Coarse/Acquisition) code.
Disabling operation above these limits exempts the receiver from classification as a munition. Vendor interpretations differ. The rule refers to operation at both the target altitude and speed, but some receivers stop operating even when stationary. This has caused problems with some amateur radio balloon launches that regularly reach 30 km (100,000 feet). These limits only apply to units or components exported from the United States. A growing trade in various components exists, including GPS units from other countries. These are expressly sold as ITAR-free.
=== Military ===
As of 2009, military GPS applications include:
Navigation: Soldiers use GPS to find objectives, even in the dark or in unfamiliar territory, and to coordinate troop and supply movement. In the United States armed forces, commanders use the Commander's Digital Assistant and lower ranks use the Soldier Digital Assistant.
Frequency-Hopping Radio Clock Coordination: Military radio systems using frequency hopping modes, such as SINCGARS and HAVEQUICK, require all radios within a network to have the same time input to their internal clocks (±4 seconds in the case of SINCGARS) to be on the correct frequency at a given time. Military GPS receivers, such as the Precision Lightweight GPS Receiver (PLGR) and Defense Advanced GPS Receiver (DAGR), are used by radio operators within a radio network to provide an accurate time to those radios' internal clocks. More modern military radios have internal GPS receivers that synchronize the internal clock automatically.
Target tracking: Various military weapons systems use GPS to track potential ground and air targets before flagging them as hostile. These weapon systems pass target coordinates to precision-guided munitions to allow them to engage targets accurately. Military aircraft, particularly in air-to-ground roles, use GPS to find targets.
Missile and projectile guidance: GPS allows accurate targeting of various military weapons including ICBMs, cruise missiles, precision-guided munitions and artillery shells. Embedded GPS receivers able to withstand accelerations of 12,000 g or about 118 km/s2 (260,000 mph/s) have been developed for use in 155-millimeter (6.1 in) howitzer shells.
Search and rescue.
Reconnaissance: Patrol movement can be managed more closely.
GPS satellites carry a set of nuclear detonation detectors consisting of an optical sensor called a bhangmeter, an X-ray sensor, a dosimeter, and an electromagnetic pulse (EMP) sensor (W-sensor), that form a major portion of the United States Nuclear Detonation Detection System. General William Shelton has stated that future satellites may drop this feature to save money.
GPS type navigation was first used in war in the 1991 Persian Gulf War, before GPS was fully developed in 1995, to assist Coalition Forces to navigate and perform maneuvers in the war. The war also demonstrated the vulnerability of GPS to being jammed, when Iraqi forces installed jamming devices on likely targets that emitted radio noise, disrupting reception of the weak GPS signal.
GPS's vulnerability to jamming is a threat that continues to grow as jamming equipment and experience grow. GPS signals have been reported to have been jammed many times over the years for military purposes. Russia seems to have several objectives for this approach, such as intimidating neighbors while undermining confidence in their reliance on American systems, promoting its GLONASS alternative, disrupting Western military exercises, and protecting assets from drones. China uses jamming to discourage US surveillance aircraft near the contested Spratly Islands. North Korea has mounted several major jamming operations near its border with South Korea and offshore, disrupting flights, shipping and fishing operations. The Iranian Armed Forces disrupted the GPS of civilian airliner Flight PS752 when they shot down the aircraft.
In the Russo-Ukrainian War, GPS-guided munitions provided to Ukraine by NATO countries experienced significant failure rates as a result of Russian electronic warfare. The rate at which Excalibur artillery shells hit their targets dropped from 70% to 6% as Russia adapted its electronic warfare activities.
=== Timekeeping ===
==== Leap seconds ====
While most clocks derive their time from Coordinated Universal Time (UTC), the atomic clocks on the satellites are set to GPS time. The difference is that GPS time is not corrected to match the rotation of the Earth, so it does not contain new leap seconds or other corrections that are periodically added to UTC. GPS time was set to match UTC in 1980, but has since diverged. The lack of corrections means that GPS time remains at a constant offset with International Atomic Time (TAI) (TAI – GPS = 19 seconds). Periodic corrections are performed to the on-board clocks to keep them synchronized with ground clocks.
The GPS navigation message includes the difference between GPS time and UTC. As of January 2017, GPS time is 18 seconds ahead of UTC because of the leap second added to UTC on December 31, 2016. Receivers subtract this offset from GPS time to calculate UTC and specific time zone values. New GPS units may not show the correct UTC time until after receiving the UTC offset message. The GPS-UTC offset field can accommodate 255 leap seconds (eight bits).
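The receiver-side conversion amounts to applying the broadcast leap-second offset to the GPS week/seconds time scale. A minimal sketch (the 18 s default is the offset in force since the end of 2016; a real receiver reads it from the navigation message):

```python
from datetime import datetime, timedelta, timezone

# GPS time began at 00:00:00 UTC on January 6, 1980.
GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)

def gps_to_utc(week, tow, leap_seconds=18):
    """Convert a GPS week number and time-of-week (seconds) to UTC,
    subtracting the broadcast GPS-UTC offset (18 s since Dec 31, 2016)."""
    return GPS_EPOCH + timedelta(weeks=week, seconds=tow) - timedelta(seconds=leap_seconds)

# Week 2086, second 0 of the week (a Sunday midnight in GPS time):
print(gps_to_utc(2086, 0))   # -> 2019-12-28 23:59:42+00:00
```

Note that the UTC result lands 18 seconds before the GPS-time week boundary, which is exactly the behavior described above.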
==== Accuracy ====
GPS time is theoretically accurate to about 14 nanoseconds, due to the clock drift relative to International Atomic Time that the atomic clocks in GPS transmitters experience. Most receivers lose some accuracy in their interpretation of the signals and are only accurate to about 100 nanoseconds.
==== Relativistic corrections ====
The GPS implements two major corrections to its time signals for relativistic effects: one for relative velocity of satellite and receiver, using the special theory of relativity, and one for the difference in gravitational potential between satellite and receiver, using general relativity. The acceleration of the satellite could also be computed independently as a correction, depending on purpose, but normally the effect is already dealt with in the first two corrections.
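The sizes of the two effects can be estimated from first principles. In this back-of-the-envelope sketch (which assumes a circular orbit and a ground clock at mean Earth radius, ignoring the geoid and Earth-rotation terms), the familiar figures of roughly −7 μs/day from velocity and +46 μs/day from gravity emerge, for a net satellite clock gain of about 38 μs per day:

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
C = 299_792_458.0      # speed of light, m/s
R_GROUND = 6.371e6     # mean Earth radius, m (simplifying assumption)
A_ORBIT = 26.56e6      # nominal GPS semi-major axis, m
DAY = 86400.0

v = math.sqrt(MU / A_ORBIT)                  # circular orbital speed, ~3.87 km/s
special = -v ** 2 / (2 * C ** 2)             # time dilation: satellite clock slower
general = MU * (1 / R_GROUND - 1 / A_ORBIT) / C ** 2   # blueshift: clock faster

print(special * DAY * 1e6)               # ~ -7.2 microseconds per day
print(general * DAY * 1e6)               # ~ +45.7 microseconds per day
print((special + general) * DAY * 1e6)   # net ~ +38.5 microseconds per day
```

In practice the correction is applied by slightly lowering the satellites' clock frequency before launch, as reflected in the 10.22999999543 MHz internal reference mentioned later in this article.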
==== Format ====
As opposed to the year, month, and day format of the Gregorian calendar, the GPS date is expressed as a week number and a seconds-into-week number. The week number is transmitted as a ten-bit field in the C/A and P(Y) navigation messages, and so it becomes zero again every 1,024 weeks (19.6 years). GPS week zero started at 00:00:00 UTC (00:00:19 TAI) on January 6, 1980, and the week number became zero again for the first time at 23:59:47 UTC on August 21, 1999 (00:00:19 TAI on August 22, 1999). It happened the second time at 23:59:42 UTC on April 6, 2019. To determine the current Gregorian date, a GPS receiver must be provided with the approximate date (to within 3,584 days) to correctly translate the GPS date signal. To address this concern in the future the modernized GPS civil navigation (CNAV) message will use a 13-bit field that only repeats every 8,192 weeks (157 years), thus lasting until 2137 (157 years after GPS week zero).
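Resolving the 10-bit broadcast week number against an approximate date is straightforward; a sketch in pure Python:

```python
from datetime import date

GPS_EPOCH = date(1980, 1, 6)   # start of GPS week zero

def resolve_week(broadcast_week, approx_date):
    """Map a 10-bit broadcast week number (0-1023) to a full GPS week count,
    given a date known to within +/-512 weeks (about 3,584 days)."""
    approx_week = (approx_date - GPS_EPOCH).days // 7
    # Round to the nearest 1,024-week era containing the approximate date.
    rollovers = (approx_week - broadcast_week + 512) // 1024
    return broadcast_week + 1024 * rollovers

# Broadcast week 38, with a device that knows it is roughly January 2020:
print(resolve_week(38, date(2020, 1, 1)))   # -> 2086 (second rollover era)
```

The same logic extends unchanged to the 13-bit CNAV week field, with 8192 in place of 1024.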
== Communication ==
The navigational signals transmitted by GPS satellites encode a variety of information including satellite positions, the state of the internal clocks, and the health of the network. These signals are transmitted on two separate carrier frequencies that are common to all satellites in the network. Two different encodings are used: a public encoding that enables lower resolution navigation, and an encrypted encoding used by the U.S. military.
=== Message format ===
Each GPS satellite continuously broadcasts a navigation message on L1 (C/A and P/Y) and L2 (P/Y) frequencies at a rate of 50 bits per second (see bitrate). Each complete message takes 750 seconds (12½ minutes) to transmit. The message structure has a basic format of a 1500-bit-long frame made up of five subframes, each subframe being 300 bits (6 seconds) long. Subframes 4 and 5 are subcommutated 25 times each, so that a complete data message requires the transmission of 25 full frames. Each subframe consists of ten words, each 30 bits long. Thus, with 300 bits in a subframe times 5 subframes in a frame times 25 frames in a message, each message is 37,500 bits long. At a transmission rate of 50 bit/s, this gives 750 seconds to transmit an entire almanac message. Each 30-second frame begins precisely on the minute or half-minute as indicated by the atomic clock on each satellite.
The first subframe of each frame encodes the week number and the time within the week, as well as the data about the health of the satellite. The second and the third subframes contain the ephemeris – the precise orbit for the satellite. The fourth and fifth subframes contain the almanac, which contains coarse orbit and status information for up to 32 satellites in the constellation as well as data related to error correction. Thus, to obtain an accurate satellite location from this transmitted message, the receiver must demodulate the message from each satellite it includes in its solution for 18 to 30 seconds. To collect all transmitted almanacs, the receiver must demodulate the message for 732 to 750 seconds (about 12½ minutes).
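The timing figures above follow directly from the frame layout:

```python
BIT_RATE = 50               # navigation message rate, bits per second
BITS_PER_WORD = 30
WORDS_PER_SUBFRAME = 10
SUBFRAMES_PER_FRAME = 5
FRAMES_PER_MESSAGE = 25     # subframes 4 and 5 cycle through 25 pages

frame_bits = BITS_PER_WORD * WORDS_PER_SUBFRAME * SUBFRAMES_PER_FRAME
message_bits = frame_bits * FRAMES_PER_MESSAGE

print(frame_bits)                     # 1500 bits per frame
print(frame_bits // BIT_RATE)         # 30 seconds per frame
print(message_bits)                   # 37500 bits per full message
print(message_bits / BIT_RATE / 60)   # 12.5 minutes per full message
```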
All satellites broadcast at the same frequencies, encoding signals using unique code-division multiple access (CDMA) so receivers can distinguish individual satellites from each other. The system uses two distinct CDMA encoding types: the coarse/acquisition (C/A) code, which is accessible by the general public, and the precise (P(Y)) code, which is encrypted so that only the U.S. military and other NATO nations who have been given access to the encryption code can access it.
The ephemeris is updated every 2 hours and is sufficiently stable for 4 hours, with provisions for updates every 6 hours or longer in non-nominal conditions. The almanac is updated typically every 24 hours. Additionally, data for a few weeks following is uploaded in case of transmission updates that delay data upload.
=== Satellite frequencies ===
All satellites broadcast at the same two frequencies, 1.57542 GHz (L1 signal) and 1.2276 GHz (L2 signal). The satellite network uses a CDMA spread-spectrum technique where the low-bitrate message data is encoded with a high-rate pseudorandom noise (PRN) sequence that is different for each satellite. The receiver must be aware of the PRN codes for each satellite to reconstruct the actual message data. The C/A code, for civilian use, transmits data at 1.023 million chips per second, whereas the P code, for U.S. military use, transmits at 10.23 million chips per second. The actual internal reference of the satellites is 10.22999999543 MHz to compensate for relativistic effects that make observers on the Earth perceive a different time reference with respect to the transmitters in orbit. The L1 carrier is modulated by both the C/A and P codes, while the L2 carrier is only modulated by the P code. The P code can be encrypted as a so-called P(Y) code that is only available to military equipment with a proper decryption key. Both the C/A and P(Y) codes impart the precise time-of-day to the user.
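All of these rates are integer multiples of a common 10.23 MHz fundamental frequency, which is why the figures relate so neatly:

```python
F0 = 10.23e6   # fundamental clock frequency, Hz

signals = {
    "L1 carrier": 154 * F0,         # 1575.42 MHz
    "L2 carrier": 120 * F0,         # 1227.60 MHz
    "L5 carrier": 115 * F0,         # 1176.45 MHz
    "P code chip rate": F0,         # 10.23 Mchip/s
    "C/A code chip rate": F0 / 10,  # 1.023 Mchip/s
}
for name, f in signals.items():
    print(f"{name}: {f / 1e6:.3f} MHz")
```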
The L3 signal at a frequency of 1.38105 GHz is used to transmit data from the satellites to ground stations. This data is used by the United States Nuclear Detonation (NUDET) Detection System (USNDS) to detect, locate, and report nuclear detonations (NUDETs) in the Earth's atmosphere and near space. One usage is the enforcement of nuclear test ban treaties.
The L4 band at 1.379913 GHz is being studied for additional ionospheric correction.
The L5 frequency band at 1.17645 GHz was added in the process of GPS modernization. This frequency falls into an internationally protected range for aeronautical navigation, promising little or no interference under all circumstances. The first Block IIF satellite that provides this signal was launched in May 2010. On February 5, 2016, the 12th and final Block IIF satellite was launched. The L5 consists of two carrier components that are in phase quadrature with each other. Each carrier component is bi-phase shift key (BPSK) modulated by a separate bit train. "L5, the third civil GPS signal, will eventually support safety-of-life applications for aviation and provide improved availability and accuracy."
In 2011, a conditional waiver was granted to LightSquared to operate a terrestrial broadband service near the L1 band. Although LightSquared had applied for a license to operate in the 1525 to 1559 MHz band as early as 2003 and it was put out for public comment, the FCC asked LightSquared to form a study group with the GPS community to test GPS receivers and identify issues that might arise due to the larger signal power from the LightSquared terrestrial network. The GPS community had not objected to the LightSquared (formerly MSV and SkyTerra) applications until November 2010, when LightSquared applied for a modification to its Ancillary Terrestrial Component (ATC) authorization. This filing (SAT-MOD-20101118-00239) amounted to a request to run several orders of magnitude more power in the same frequency band for terrestrial base stations, essentially repurposing what was supposed to be a "quiet neighborhood" for signals from space as the equivalent of a cellular network. Testing in the first half of 2011 demonstrated that the effects from the lower 10 MHz of spectrum are minimal to GPS devices (less than 1% of the total GPS devices are affected). The upper 10 MHz intended for use by LightSquared may have some effect on GPS devices. There is some concern that this may seriously degrade the GPS signal for many consumer uses. Aviation Week magazine reports that the latest testing (June 2011) confirms "significant jamming" of GPS by LightSquared's system.
=== Demodulation and decoding ===
Because all of the satellite signals are modulated onto the same L1 carrier frequency, the signals must be separated after demodulation. This is done by assigning each satellite a unique binary sequence known as a Gold code. The signals are decoded after demodulation using addition of the Gold codes corresponding to the satellites monitored by the receiver.
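The Gold-code construction can be sketched with two 10-stage linear-feedback shift registers, following the tap positions given in the GPS interface specification (IS-GPS-200); the PRN-to-tap table below covers only PRNs 1–4 for brevity:

```python
# Sketch of GPS C/A (Gold) code generation: two 10-stage LFSRs, G1 and G2.
# G1 feedback taps: 3, 10; G2 feedback taps: 2, 3, 6, 8, 9, 10 (per IS-GPS-200).
# Each PRN selects a pair of G2 output taps; only PRNs 1-4 are listed here.
G2_DELAY_TAPS = {1: (2, 6), 2: (3, 7), 3: (4, 8), 4: (5, 9)}

def ca_code(prn):
    """Return the 1023-chip C/A code for the given PRN as a list of 0/1 chips."""
    g1 = [1] * 10          # both registers start as all ones
    g2 = [1] * 10
    t1, t2 = G2_DELAY_TAPS[prn]
    chips = []
    for _ in range(1023):
        # Output chip: G1 output XOR the PRN-specific pair of G2 taps.
        chips.append(g1[9] ^ g2[t1 - 1] ^ g2[t2 - 1])
        g1_fb = g1[2] ^ g1[9]                                    # taps 3, 10
        g2_fb = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]    # taps 2,3,6,8,9,10
        g1 = [g1_fb] + g1[:9]                                    # shift right
        g2 = [g2_fb] + g2[:9]
    return chips

code1 = ca_code(1)
# The interface specification lists the first 10 chips of PRN 1 as
# octal 1440, i.e. binary 1100100000.
```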
If the almanac information has previously been acquired, the receiver picks the satellites to listen for by their PRNs, unique numbers in the range 1 through 32. If the almanac information is not in memory, the receiver enters a search mode until a lock is obtained on one of the satellites. To obtain a lock, it is necessary that there be an unobstructed line of sight from the receiver to the satellite. The receiver can then acquire the almanac and determine the satellites it should listen for. As it detects each satellite's signal, it identifies it by its distinct C/A code pattern. There can be a delay of up to 30 seconds before the first estimate of position because of the need to read the ephemeris data.
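The search mode can be illustrated with a toy acquisition loop. This sketch uses random ±1 sequences as hypothetical stand-ins for the per-satellite codes (they share the sharp self-correlation peak that acquisition relies on, though real C/A codes are deterministic Gold codes); the received signal is a circularly delayed copy of one code, and the receiver identifies both the satellite and the code phase by locating the correlation peak:

```python
import random

random.seed(7)
N = 1023  # chips per code period

# Hypothetical stand-ins for per-satellite PRN codes: random +/-1 sequences.
codes = {prn: [random.choice((-1, 1)) for _ in range(N)] for prn in range(1, 5)}

def circular_correlation(rx, code, shift):
    """Correlate the received chips against a code replica at a trial phase."""
    return sum(rx[(n + shift) % N] * code[n] for n in range(N))

# Received signal: satellite PRN 3's code, circularly delayed by 200 chips.
true_prn, true_delay = 3, 200
rx = [codes[true_prn][(n - true_delay) % N] for n in range(N)]

# Search every (satellite, code phase) pair for the correlation peak.
best = max(((prn, s, circular_correlation(rx, codes[prn], s))
            for prn in codes for s in range(N)), key=lambda t: t[2])
found_prn, found_delay, peak = best   # peak equals N only at the true match
```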
Processing of the navigation message enables the determination of the time of transmission and the satellite position at this time. For more information see Demodulation and Decoding, Advanced.
== Navigation equations ==
=== Problem statement ===
The receiver uses messages received from satellites to determine the satellite positions and time sent. The x, y, and z components of satellite position and the time sent (s) are designated as [xi, yi, zi, si] where the subscript i denotes the satellite and has the value 1, 2, ..., n, where n ≥ 4. When the time of message reception indicated by the on-board receiver clock is t̃i, the true reception time is ti = t̃i − b, where b is the receiver's clock bias from the much more accurate GPS clocks employed by the satellites. The receiver clock bias is the same for all received satellite signals (assuming the satellite clocks are all perfectly synchronized). The message's transit time is t̃i − b − si, where si is the satellite time. Assuming the message traveled at the speed of light, c, the distance traveled is (t̃i − b − si)c.
For n satellites, the equations to satisfy are:
di = (t̃i − b − si)c,  i = 1, 2, ..., n
where di is the geometric distance or range between receiver and satellite i (the values without subscripts are the x, y, and z components of receiver position):
di = √((x − xi)² + (y − yi)² + (z − zi)²)
Defining pseudoranges as pi = (t̃i − si)c, we see they are biased versions of the true range:
pi = di + bc,  i = 1, 2, ..., n
Since the equations have four unknowns [x, y, z, b]—the three components of GPS receiver position and the clock bias—signals from at least four satellites are necessary to attempt solving these equations. They can be solved by algebraic or numerical methods. Existence and uniqueness of GPS solutions are discussed by Abel and Chaffee. When n is greater than four, this system is overdetermined and a fitting method must be used.
The amount of error in the results varies with the received satellites' locations in the sky, since certain configurations (when the received satellites are close together in the sky) cause larger errors. Receivers usually calculate a running estimate of the error in the calculated position. This is done by multiplying the basic resolution of the receiver by quantities called the geometric dilution of precision (GDOP) factors, calculated from the relative sky directions of the satellites used. The receiver location is expressed in a specific coordinate system, such as latitude and longitude using the WGS 84 geodetic datum or a country-specific system.
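One common DOP figure, GDOP, can be computed from the satellite–receiver geometry alone: with H the matrix whose rows are unit line-of-sight vectors augmented with a clock column, GDOP = √trace((HᵀH)⁻¹). A sketch with hypothetical satellite positions (not real ephemerides):

```python
import math

def geometry_matrix(receiver, sats):
    """Rows of unit receiver-to-satellite line-of-sight vectors plus a clock column."""
    H = []
    for s in sats:
        d = math.dist(receiver, s)
        H.append([(receiver[k] - s[k]) / d for k in range(3)] + [1.0])
    return H

def gdop(H):
    """GDOP = sqrt(trace((H^T H)^-1)), inverting via Gauss-Jordan elimination."""
    n = 4
    A = [[sum(r[i] * r[j] for r in H) for j in range(n)] for i in range(n)]
    M = [A[i][:] + [float(i == j) for j in range(n)] for i in range(n)]  # [A | I]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return math.sqrt(sum(M[i][n + i] for i in range(n)))  # trace of A^-1

receiver = (0.0, 0.0, 0.0)
spread = [(2.0e7, 0.0, 0.0), (0.0, 2.0e7, 0.0), (0.0, 0.0, 2.0e7),
          (-1.2e7, -1.2e7, 1.2e7)]
clustered = [(2.0e7, 0.0, 0.0), (2.0e7, 1.0e6, 0.0),
             (2.0e7, 0.0, 1.0e6), (2.0e7, 1.0e6, 1.0e6)]
gdop_spread = gdop(geometry_matrix(receiver, spread))
gdop_clustered = gdop(geometry_matrix(receiver, clustered))
```

Comparing the well-spread constellation against the clustered one shows the clustered geometry inflating the DOP, matching the statement that close-together satellites cause larger errors.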
=== Geometric interpretation ===
The GPS equations can be solved by numerical and analytical methods. Geometrical interpretations can enhance the understanding of these solution methods.
==== Spheres ====
The measured ranges, called pseudoranges, contain clock errors. In a simplified idealization in which the ranges are synchronized, these true ranges represent the radii of spheres, each centered on one of the transmitting satellites. The solution for the position of the receiver is then at the intersection of the surfaces of these spheres; see trilateration (more generally, true-range multilateration). Signals from at least three satellites are required, and their three spheres would typically intersect at two points. One of the points is the location of the receiver, and the other moves rapidly in successive measurements and would not usually be on Earth's surface.
In practice, there are many sources of inaccuracy besides clock bias, including random errors as well as the potential for precision loss from subtracting numbers close to each other if the centers of the spheres are relatively close together. This means that the position calculated from three satellites alone is unlikely to be accurate enough. Data from more satellites can help because of the tendency for random errors to cancel out and also by giving a larger spread between the sphere centers. But at the same time, more spheres will not generally intersect at one point. Therefore, a near intersection gets computed, typically via least squares. The more signals available, the better the approximation is likely to be.
==== Hyperboloids ====
If the pseudorange between the receiver and satellite i and the pseudorange between the receiver and satellite j are subtracted, pi − pj, the common receiver clock bias (b) cancels out, resulting in a difference of distances di − dj. The locus of points having a constant difference in distance to two points (here, two satellites) is a hyperbola on a plane and a hyperboloid of revolution (more specifically, a two-sheeted hyperboloid) in 3D space (see Multilateration). Thus, from four pseudorange measurements, the receiver can be placed at the intersection of the surfaces of three hyperboloids each with foci at a pair of satellites. With additional satellites, the multiple intersections are not necessarily unique, and a best-fitting solution is sought instead.
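The bias cancellation can be checked numerically with hypothetical positions (the specific coordinates below are illustrative only):

```python
import math

# Hypothetical satellite positions (meters) and a receiver with a clock bias.
sats = [(2.0e7, 0.0, 0.0), (0.0, 2.0e7, 0.0), (0.0, 0.0, 2.0e7)]
receiver = (1.0e6, 2.0e6, 3.0e6)
bc = 4.5e5  # clock bias times the speed of light, in meters

d = [math.dist(receiver, s) for s in sats]   # true geometric ranges
p = [di + bc for di in d]                    # pseudoranges share the same bias

# Differencing removes the bias term: p_i - p_j equals d_i - d_j exactly.
for i in range(3):
    for j in range(3):
        assert abs((p[i] - p[j]) - (d[i] - d[j])) < 1e-6
```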
==== Inscribed sphere ====
The receiver position can be interpreted as the center of an inscribed sphere (insphere) of radius bc, given by the receiver clock bias b (scaled by the speed of light c). The insphere location is such that it touches other spheres. The circumscribing spheres are centered at the GPS satellites, whose radii equal the measured pseudoranges pi. This configuration is distinct from the one described above, in which the spheres' radii were the unbiased or geometric ranges di.
==== Hypercones ====
The clock in the receiver is usually not of the same quality as the ones in the satellites and will not be accurately synchronized to them. This produces pseudoranges with large differences compared to the true distances to the satellites. Therefore, in practice, the time difference between the receiver clock and the satellite time is defined as an unknown clock bias b. The equations are then solved simultaneously for the receiver position and the clock bias. The solution space [x, y, z, b] can be seen as a four-dimensional spacetime, and signals from at least four satellites are needed. In that case each of the equations describes a hypercone (or spherical cone), with the cusp located at the satellite, and the base a sphere around the satellite. The receiver is at the intersection of four or more of such hypercones.
=== Solution methods ===
==== Least squares ====
When more than four satellites are available, the calculation can use the four best, or more than four simultaneously (up to all visible satellites), depending on the number of receiver channels, processing capability, and geometric dilution of precision (GDOP).
Using more than four involves an over-determined system of equations with no unique solution; such a system can be solved by a least-squares or weighted least squares method.
(x̂, ŷ, ẑ, b̂) = arg min over (x, y, z, b) of Σi (√((x − xi)² + (y − yi)² + (z − zi)²) + bc − pi)²
==== Iterative ====
The equations for four satellites and the least-squares equations for more than four are non-linear and need special solution methods. A common approach is iteration on a linearized form of the equations, such as the Gauss–Newton algorithm.
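A minimal sketch of such a Gauss–Newton iteration, using hypothetical satellite coordinates and solving for the clock bias in meters (bc, i.e. b times the speed of light) to keep the linearized system well conditioned:

```python
import math

def solve_linear(A, y):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gps_fix(sats, pseudoranges, iters=10):
    """Gauss-Newton iteration on the linearized pseudorange equations.

    Unknowns are (x, y, z, bc), with bc the clock bias expressed in meters.
    """
    x, y, z, bc = 0.0, 0.0, 0.0, 0.0     # initial guess: Earth's center
    for _ in range(iters):
        H, resid = [], []
        for (sx, sy, sz), p in zip(sats, pseudoranges):
            d = math.dist((x, y, z), (sx, sy, sz))
            # Jacobian row: partial derivatives of (d + bc) w.r.t. x, y, z, bc.
            H.append([(x - sx) / d, (y - sy) / d, (z - sz) / d, 1.0])
            resid.append(p - (d + bc))
        # Normal equations H^T H * delta = H^T r accommodate n >= 4 satellites.
        HtH = [[sum(row[i] * row[j] for row in H) for j in range(4)] for i in range(4)]
        Htr = [sum(H[k][i] * resid[k] for k in range(len(H))) for i in range(4)]
        dx, dy, dz, dbc = solve_linear(HtH, Htr)
        x, y, z, bc = x + dx, y + dy, z + dz, bc + dbc
    return x, y, z, bc

# Demo with hypothetical satellite positions (meters) and a known truth point.
sats = [(2.0e7, 0.0, 0.0), (0.0, 2.0e7, 0.0), (0.0, 0.0, 2.0e7),
        (1.5e7, 1.5e7, 1.5e7), (-2.0e7, 5.0e6, 3.0e6)]
truth = (1000.0, 2000.0, 3000.0, 100.0)   # x, y, z, clock bias * c
pr = [math.dist(truth[:3], s) + truth[3] for s in sats]
fix = gps_fix(sats, pr)
```

With noise-free pseudoranges the iteration recovers the truth point to well below a millimeter; with real measurements the residuals would instead reflect the error sources discussed below.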
The GPS was initially developed assuming use of a numerical least-squares solution method—i.e., before closed-form solutions were found.
==== Closed-form ====
One closed-form solution to the above set of equations was developed by S. Bancroft. Its properties are well known; in particular, proponents claim it is superior in low-GDOP situations, compared to iterative least squares methods.
Bancroft's method is algebraic, as opposed to numerical, and can be used for four or more satellites. When four satellites are used, the key steps are inversion of a 4x4 matrix and solution of a single-variable quadratic equation. Bancroft's method provides one or two solutions for the unknown quantities. When there are two (usually the case), only one is a near-Earth sensible solution.
When a receiver uses more than four satellites for a solution, Bancroft uses the generalized inverse (i.e., the pseudoinverse) to find a solution. A case has been made that iterative methods, such as the Gauss–Newton algorithm approach for solving over-determined non-linear least squares problems, generally provide more accurate solutions.
Leick et al. (2015) states that "Bancroft's (1985) solution is a very early, if not the first, closed-form solution."
Other closed-form solutions were published afterwards, although their adoption in practice is unclear.
=== Error sources and analysis ===
GPS error analysis examines error sources in GPS results and the expected size of those errors. GPS makes corrections for receiver clock errors and other effects, but some residual errors remain uncorrected. Error sources include signal arrival time measurements, numerical calculations, atmospheric effects (ionospheric/tropospheric delays), ephemeris and clock data, multipath signals, and natural and artificial interference. The magnitude of residual errors from these sources depends on geometric dilution of precision. Artificial errors may result from jamming devices, which threaten ships and aircraft, or from intentional signal degradation through selective availability, which limited accuracy to ≈ 6–12 m (20–40 ft) but has been switched off since May 1, 2000.
== Accuracy enhancement and surveying ==
== Regulatory spectrum issues concerning GPS receivers ==
In the United States, GPS receivers are regulated under the Federal Communications Commission's (FCC) Part 15 rules. As indicated in the manuals of GPS-enabled devices sold in the United States, as a Part 15 device, it "must accept any interference received, including interference that may cause undesired operation". With respect to GPS devices in particular, the FCC states that GPS receiver manufacturers "must use receivers that reasonably discriminate against reception of signals outside their allocated spectrum". For the last 30 years, GPS receivers have operated next to the Mobile Satellite Service band, and have discriminated against reception of mobile satellite services, such as Inmarsat, without any issue.
The spectrum allocated for GPS L1 use by the FCC is 1559 to 1610 MHz, while the spectrum allocated for satellite-to-ground use owned by LightSquared is the Mobile Satellite Service band. Since 1996, the FCC has authorized licensed use of the spectrum neighboring the GPS band of 1525 to 1559 MHz to the Virginia company LightSquared. On March 1, 2001, the FCC received an application from LightSquared's predecessor, Motient Services, to use their allocated frequencies for an integrated satellite-terrestrial service. In 2002, the U.S. GPS Industry Council came to an out-of-band-emissions (OOBE) agreement with LightSquared to prevent transmissions from LightSquared's ground-based stations from emitting transmissions into the neighboring GPS band of 1559 to 1610 MHz. In 2004, the FCC adopted the OOBE agreement in its authorization for LightSquared to deploy a ground-based network ancillary to their satellite system – known as the Ancillary Tower Components (ATCs) – "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service." This authorization was reviewed and approved by the U.S. Interdepartment Radio Advisory Committee, which includes the U.S. Department of Agriculture, U.S. Space Force, U.S. Army, U.S. Coast Guard, Federal Aviation Administration, National Aeronautics and Space Administration (NASA), U.S. Department of the Interior, and U.S. Department of Transportation.
In January 2011, the FCC conditionally authorized LightSquared's wholesale customers—such as Best Buy, Sharp, and C Spire—to only purchase an integrated satellite-ground-based service from LightSquared and re-sell that integrated service on devices that are equipped to only use the ground-based signal using LightSquared's allocated frequencies of 1525 to 1559 MHz. In December 2010, GPS receiver manufacturers expressed concerns to the FCC that LightSquared's signal would interfere with GPS receiver devices although the FCC's policy considerations leading up to the January 2011 order did not pertain to any proposed changes to the maximum number of ground-based LightSquared stations or the maximum power at which these stations could operate. The January 2011 order makes final authorization contingent upon studies of GPS interference issues carried out by a LightSquared led working group along with GPS industry and Federal agency participation. On February 14, 2012, the FCC initiated proceedings to vacate LightSquared's Conditional Waiver Order based on the NTIA's conclusion that there was currently no practical way to mitigate potential GPS interference.
GPS receiver manufacturers design GPS receivers to use spectrum beyond the GPS-allocated band. In some cases, GPS receivers are designed to use up to 400 MHz of spectrum in either direction of the L1 frequency of 1575.42 MHz, because mobile satellite services in those regions are broadcasting from space to ground, and at power levels commensurate with mobile satellite services. As regulated under the FCC's Part 15 rules, GPS receivers are not warranted protection from signals outside GPS-allocated spectrum. This is why GPS operates next to the Mobile Satellite Service band, and also why the Mobile Satellite Service band operates next to GPS. The symbiotic relationship of spectrum allocation ensures that users of both bands are able to operate cooperatively and freely.
The FCC adopted rules in February 2003 that allowed Mobile Satellite Service (MSS) licensees such as LightSquared to construct a small number of ancillary ground-based towers in their licensed spectrum to "promote more efficient use of terrestrial wireless spectrum". In those 2003 rules, the FCC stated: "As a preliminary matter, terrestrial [Commercial Mobile Radio Service ('CMRS')] and MSS ATC are expected to have different prices, coverage, product acceptance and distribution; therefore, the two services appear, at best, to be imperfect substitutes for one another that would be operating in predominantly different market segments ... MSS ATC is unlikely to compete directly with terrestrial CMRS for the same customer base...". In 2004, the FCC clarified that the ground-based towers would be ancillary, noting: "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service." In July 2010, the FCC stated that it expected LightSquared to use its authority to offer an integrated satellite-terrestrial service to "provide mobile broadband services similar to those provided by terrestrial mobile providers and enhance competition in the mobile broadband sector". GPS receiver manufacturers have argued that LightSquared's licensed spectrum of 1525 to 1559 MHz was never envisioned as being used for high-speed wireless broadband based on the 2003 and 2004 FCC ATC rulings making clear that the Ancillary Tower Component (ATC) would be, in fact, ancillary to the primary satellite component. To build public support of efforts to continue the 2004 FCC authorization of LightSquared's ancillary terrestrial component vs. a simple ground-based LTE service in the Mobile Satellite Service band, GPS receiver manufacturer Trimble Navigation Ltd. formed the "Coalition To Save Our GPS".
The FCC and LightSquared have each made public commitments to solve the GPS interference issue before the network is allowed to operate. According to Chris Dancy of the Aircraft Owners and Pilots Association, airline pilots with the type of systems that would be affected "may go off course and not even realize it". The problems could also affect the Federal Aviation Administration upgrade to the air traffic control system, United States Defense Department guidance, and local emergency services including 911.
On February 14, 2012, the FCC moved to bar LightSquared's planned national broadband network after being informed by the National Telecommunications and Information Administration (NTIA), the federal agency that coordinates spectrum uses for the military and other federal government entities, that "there is no practical way to mitigate potential interference at this time". LightSquared is challenging the FCC's action.
== Similar systems ==
Following the United States's deployment of GPS, other countries have also developed their own satellite navigation systems. These systems include:
The Russian Global Navigation Satellite System (GLONASS) was developed at the same time as GPS, but suffered from incomplete coverage of the globe until the mid-2000s. GLONASS reception can be combined with GPS in a receiver, making additional satellites available and enabling faster position fixes and improved accuracy, to within two meters (6.6 ft). In October 2011, the full orbital constellation of 24 satellites enabled full global coverage. The GLONASS satellites' designs have undergone several upgrades, with the latest version, GLONASS-K2, launched in 2023.
China's BeiDou Navigation Satellite System began global services in 2018 and finished its full deployment in 2020. It consists of satellites in three different orbits: 24 satellites in medium Earth orbit (covering the world), 3 satellites in inclined geosynchronous orbits (covering the Asia-Pacific region), and 3 satellites in geostationary orbits (covering China).
The Galileo navigation satellite system, a global system being developed by the European Union and other partner countries, began operation in 2016 and was fully deployed by 2020. In November 2018, the FCC approved use of Galileo in the US. As of September 2024, there are 25 launched satellites operating in the constellation. The next generation of satellites is expected to begin to become operational after 2026 to replace the first generation, which can then be used for backup capabilities.
Japan's Quasi-Zenith Satellite System (QZSS) is a GPS satellite-based augmentation system to enhance GPS's accuracy in Asia-Oceania, with satellite navigation independent of GPS scheduled for 2023.
The Indian Regional Navigation Satellite System (Operational name 'NavIC', Navigation with Indian Constellation), deployed by India.
== Backup system ==
In the event of adverse space weather or the deployment of an anti-satellite weapon against GPS, the United States has no terrestrial backup system. The potential cost of such an event to the U.S. economy is estimated at $1 billion per day. The LORAN-C system was turned off in North America in 2010 and Europe in 2015. eLoran is proposed as an American terrestrial backup system, but as of 2024 has not received approval or funding.
China continues to operate LORAN-C transmitters, and Russia has a similar system called CHAYKA ("Seagull").
== See also ==
== Notes ==
== References ==
== Further reading ==
"NAVSTAR GPS User Equipment Introduction" (PDF). United States Coast Guard. September 1996. Archived from the original (PDF) on October 21, 2013. Retrieved August 22, 2008.
Parkinson; Spilker (1996). The global positioning system. American Institute of Aeronautics and Astronautics. ISBN 978-1-56347-106-3.
Mendizabal, Jaizki; Berenguer, Roc; Melendez, Juan (2009). GPS and Galileo. McGraw Hill. ISBN 978-0-07-159869-9.
Bowditch, Nathaniel (2002). The American Practical Navigator – Chapter 11 Satellite Navigation . United States government.
Global Positioning System. Open Courseware from Massachusetts Institute of Technology, 2012.
Milner, Greg (2016). Pinpoint: How GPS is Changing Technology, Culture, and Our Minds. W. W. Norton. ISBN 978-0-393-08912-7.
== External links ==
FAA GPS FAQ
GPS.gov – General public education website created by the U.S. Government |
Green transport hierarchy | The green transport hierarchy (Canada), also called reverse traffic pyramid, street user hierarchy (US), sustainable transport hierarchy (Wales), urban transport hierarchy or road user hierarchy (Australia, UK) is a hierarchy of modes of passenger transport prioritising green transport. It is a concept used in transport reform groups worldwide and in policy design. In 2020, the UK government consulted about adding to the Highway Code a road user hierarchy prioritising pedestrians. It is a key characteristic of Australian transport planning.
== History ==
The Green Transportation Hierarchy: A Guide for Personal & Public Decision-Making by Chris Bradshaw was first published September 1994 and revised June 2004. As part of a pedestrian advocacy group in Ottawa, Canada, he proposed the hierarchy ranking passenger transport based on environmental emissions. The revised ranking listed, in order: walking, cycling, public transport, car sharing, and finally private car.
It was first prepared for Ottawalk and the Transportation Working Committee of the Ottawa-Carleton Round-table on the Environment in January 1992, only stating 'Walk, Cycle, Bus, Truck, Car'.
== Factors ==
Mode
Energy source
Trip length
Trip speed
Vehicle size
Passenger load factor
Trip segment
Trip purpose
Traveller
== Adoption ==
Chris Bradshaw directed the hierarchy at both individual lifestyle choices and public authorities who should officially direct their resources – funds, moral suasion, and formal sanctions – based on the factors.
Bradshaw described the hierarchy to be logical, but the effect of applying it to seem radical.
The model rejects the concept of the balanced transportation system, where users are assumed to be free to choose from amongst many different yet ‘equally valid’ modes. This is because choices incorporating factors that are ranked low (walking, cycling, public transport) are seen as generally having a high impact on other choices.
== See also ==
== References ==
== External links ==
Original 1992 paper |
Greenhouse gas | Greenhouse gases (GHGs) are the gases in the atmosphere that raise the surface temperature of planets such as the Earth. Unlike other gases, greenhouse gases absorb the radiation that a planet emits, resulting in the greenhouse effect. The Earth is warmed by sunlight, causing its surface to radiate heat, which is then mostly absorbed by greenhouse gases. Without greenhouse gases in the atmosphere, the average temperature of Earth's surface would be about −18 °C (0 °F), rather than the present average of 15 °C (59 °F).
The five most abundant greenhouse gases in Earth's atmosphere, listed in decreasing order of average global mole fraction, are: water vapor, carbon dioxide, methane, nitrous oxide, ozone. Other greenhouse gases of concern include chlorofluorocarbons (CFCs and HCFCs), hydrofluorocarbons (HFCs), perfluorocarbons, SF6, and NF3. Water vapor causes about half of the greenhouse effect, acting in response to other gases as a climate change feedback.
Human activities since the beginning of the Industrial Revolution (around 1750) have increased carbon dioxide by over 50%, and methane levels by 150%. Carbon dioxide emissions are causing about three-quarters of global warming, while methane emissions cause most of the rest. The vast majority of carbon dioxide emissions by humans come from the burning of fossil fuels, with remaining contributions from agriculture and industry. Methane emissions originate from agriculture, fossil fuel production, waste, and other sources. The carbon cycle takes thousands of years to fully absorb CO2 from the atmosphere, while methane lasts in the atmosphere for an average of only 12 years.
Natural flows of carbon happen between the atmosphere, terrestrial ecosystems, the ocean, and sediments. These flows have been fairly balanced over the past one million years, although greenhouse gas levels have varied widely in the more distant past. Carbon dioxide levels are now higher than they have been for three million years. If current emission rates continue then global warming will surpass 2.0 °C (3.6 °F) sometime between 2040 and 2070. This is a level which the Intergovernmental Panel on Climate Change (IPCC) says is "dangerous".
== Properties and mechanisms ==
Greenhouse gases are infrared active, meaning that they absorb and emit infrared radiation in the same long wavelength range as what is emitted by the Earth's surface, clouds and atmosphere.
99% of the Earth's dry atmosphere (excluding water vapor) is made up of nitrogen (N2) (78%) and oxygen (O2) (21%). Because their molecules contain two atoms of the same element, they have no asymmetry in the distribution of their electrical charges, and so are almost totally unaffected by infrared thermal radiation, with only an extremely minor effect from collision-induced absorption. A further 0.9% of the atmosphere is made up of argon (Ar), which is monatomic, and so completely transparent to thermal radiation. On the other hand, carbon dioxide (0.04%), methane, nitrous oxide and even less abundant trace gases account for less than 0.1% of Earth's atmosphere, but because their molecules contain atoms of different elements, there is an asymmetry in electric charge distribution which allows molecular vibrations to interact with electromagnetic radiation. This makes them infrared active, and so their presence causes the greenhouse effect.
=== Radiative forcing ===
Earth absorbs some of the radiant energy received from the sun, reflects some of it as light and reflects or radiates the rest back to space as heat. A planet's surface temperature depends on this balance between incoming and outgoing energy. When Earth's energy balance is shifted, its surface becomes warmer or cooler, leading to a variety of changes in global climate. Radiative forcing is a metric calculated in watts per square meter, which characterizes the impact of an external change in a factor that influences climate. It is calculated as the difference in top-of-atmosphere (TOA) energy balance immediately caused by such an external change. A positive forcing, such as from increased concentrations of greenhouse gases, means more energy arriving than leaving at the top-of-atmosphere, which causes additional warming, while negative forcing, like from sulfates forming in the atmosphere from sulfur dioxide, leads to cooling.
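For CO2 in particular, radiative forcing is often approximated with the simplified logarithmic expression ΔF ≈ 5.35 ln(C/C0) W/m² (Myhre et al. 1998); both the expression and the pre-industrial baseline of 278 ppm used below are assumptions, not figures from this article:

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Simplified CO2 radiative forcing, 5.35 * ln(C/C0) W/m^2 (Myhre et al. 1998).

    c0_ppm defaults to an assumed pre-industrial concentration of 278 ppm.
    """
    return 5.35 * math.log(c_ppm / c0_ppm)

doubling = co2_forcing(2 * 278.0)   # ~3.7 W/m^2 for a doubling of CO2
present = co2_forcing(420.0)        # positive forcing at today's ~420 ppm
```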
Within the lower atmosphere, greenhouse gases exchange thermal radiation with the surface and limit radiative heat flow away from it, which reduces the overall rate of upward radiative heat transfer. The increased concentration of greenhouse gases is also cooling the upper atmosphere, as it is much thinner than the lower layers, and any heat re-emitted from greenhouse gases is more likely to travel further to space than to interact with the fewer gas molecules in the upper layers. The upper atmosphere is also shrinking as a result.
== Contributions of specific gases to the greenhouse effect ==
Anthropogenic changes to the natural greenhouse effect are sometimes referred to as the enhanced greenhouse effect.
This table shows the most important contributions to the overall greenhouse effect, without which the average temperature of Earth's surface would be about −18 °C (0 °F), instead of around 15 °C (59 °F). This table also specifies tropospheric ozone, because this gas has a cooling effect in the stratosphere, but a warming influence comparable to nitrous oxide and CFCs in the troposphere.
=== Special role of water vapor ===
Water vapor is the most important greenhouse gas overall, being responsible for 41–67% of the greenhouse effect, but its global concentrations are not directly affected by human activity. While local water vapor concentrations can be affected by developments such as irrigation, such developments have little impact on the global scale due to water vapor's short residence time of about nine days. Indirectly, an increase in global temperatures will also increase water vapor concentrations and thus their warming effect, in a process known as water vapor feedback. It occurs because the Clausius–Clapeyron relation holds that more water vapor will be present per unit volume at elevated temperatures. Thus, local atmospheric concentration of water vapor varies from less than 0.01% in extremely cold regions up to 3% by mass in saturated air at about 32 °C.
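The Clausius–Clapeyron consequence can be checked with the Magnus approximation for saturation vapor pressure (an assumed standard approximation; the coefficients are not from this article), which reproduces the ~3% by mass figure at 32 °C:

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus approximation for saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def saturation_mixing_ratio(t_celsius, pressure_hpa=1013.25):
    """Mass of water vapor per mass of dry air in saturated air."""
    e = saturation_vapor_pressure_hpa(t_celsius)
    return 0.622 * e / (pressure_hpa - e)   # 0.622 = ratio of molar masses

w = saturation_mixing_ratio(32.0)   # roughly 0.03, i.e. ~3% by mass
```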
=== Global warming potential (GWP) and CO2 equivalents ===
== List of all greenhouse gases ==
The contribution of each gas to the enhanced greenhouse effect is determined by the characteristics of that gas, its abundance, and any indirect effects it may cause. For example, the direct radiative effect of a mass of methane is about 84 times stronger than that of the same mass of carbon dioxide over a 20-year time frame. Since the 1980s, greenhouse gas forcing contributions (relative to year 1750) have also been estimated with high accuracy using IPCC-recommended expressions derived from radiative transfer models.
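The CO2-equivalent accounting implied by this paragraph is simple scaling: the mass of an emitted gas times its warming potential over the chosen horizon. Using the 20-year methane figure of 84 quoted above:

```python
# CO2-equivalent of an emission: mass of gas times its global warming
# potential (GWP) over the chosen time horizon. The methane GWP-20 of 84
# is the figure quoted above; GWP-100 values would differ.
def co2_equivalent(mass_tonnes, gwp):
    return mass_tonnes * gwp

ch4_20yr = co2_equivalent(1.0, 84)   # 1 t of CH4 ~ 84 t CO2e over 20 years
```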
The concentration of a greenhouse gas is typically measured in parts per million (ppm) or parts per billion (ppb) by volume. A CO2 concentration of 420 ppm means that 420 out of every million air molecules is a CO2 molecule. The first 30 ppm increase in CO2 concentrations took place in about 200 years, from the start of the Industrial Revolution to 1958; however, the next 90 ppm increase took place within 56 years, from 1958 to 2014. Similarly, the average annual increase in the 1960s was only 37% of what it was in 2000 through 2007.
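The growth figures above imply sharply different average annual increases, which can be checked with simple arithmetic (all numbers are taken from the text; this is an illustrative calculation only):

```python
# Average annual CO2 growth implied by the figures in the text.
rise_pre_1958 = 30 / 200   # ppm per year, Industrial Revolution to 1958
rise_1958_2014 = 90 / 56   # ppm per year, 1958 to 2014
print(round(rise_pre_1958, 3))   # 0.15
print(round(rise_1958_2014, 2))  # 1.61
```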
Many observations are available online in a variety of Atmospheric Chemistry Observational Databases. The table below shows the most influential long-lived, well-mixed greenhouse gases, along with their tropospheric concentrations and direct radiative forcings, as identified by the Intergovernmental Panel on Climate Change (IPCC). Abundances of these trace gases are regularly measured by atmospheric scientists from samples collected throughout the world. It excludes water vapor because changes in its concentrations are calculated as a climate change feedback indirectly caused by changes in other greenhouse gases, as well as ozone, whose concentrations are only modified indirectly by various refrigerants that cause ozone depletion. Some short-lived gases (e.g. carbon monoxide, NOx) and aerosols (e.g. mineral dust or black carbon) are also excluded because of their limited role and strong variation, along with minor refrigerants and other halogenated gases, which have been mass-produced in smaller quantities than those in the table (2021 IPCC WG1 Report, pp. 731–738, and Annex III, pp. 4–9).
== Factors affecting concentrations ==
Atmospheric concentrations are determined by the balance between sources (emissions of the gas from human activities and natural systems) and sinks (the removal of the gas from the atmosphere by conversion to a different chemical compound or absorption by bodies of water).
=== Airborne fraction ===
The proportion of an emission remaining in the atmosphere after a specified time is the "airborne fraction" (AF). The annual airborne fraction is the ratio of the atmospheric increase in a given year to that year's total emissions. The annual airborne fraction for CO2 has been stable at about 0.45 over the past six decades, even as emissions have been increasing. This means that the other 0.55 of emitted CO2 is absorbed by the land and ocean carbon sinks within the first year of an emission. In high-emission scenarios, the effectiveness of carbon sinks will be lower, increasing the atmospheric fraction of CO2 even though the raw amount of emissions absorbed will be higher than at present.
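The definition above is a simple ratio; a minimal sketch, with hypothetical round numbers chosen only to reproduce the long-term value of 0.45:

```python
def airborne_fraction(atmospheric_increase: float, emissions: float) -> float:
    """Annual airborne fraction: share of a year's emissions remaining in the air."""
    return atmospheric_increase / emissions

# Hypothetical round numbers: 40 GtCO2 emitted, 18 GtCO2 atmospheric rise.
af = airborne_fraction(18.0, 40.0)
print(af)       # 0.45
print(1 - af)   # 0.55, absorbed by land and ocean sinks
```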
=== Atmospheric lifetime ===
Major greenhouse gases are well mixed and take many years to leave the atmosphere.
The atmospheric lifetime of a greenhouse gas refers to the time required to restore equilibrium following a sudden increase or decrease in its concentration in the atmosphere. Individual atoms or molecules may be lost or deposited to sinks such as the soil, the oceans and other waters, or vegetation and other biological systems, reducing the excess to background concentrations. The average time taken to achieve this is the mean lifetime. In a one-box model, the lifetime τ of an atmospheric species X is the average time that a molecule of X remains in the box. It can also be defined as the ratio of the mass m (in kg) of X in the box to its removal rate, which is the sum of the flow of X out of the box (F_out), chemical loss of X (L), and deposition of X (D), all in kg/s:

{\displaystyle \tau ={\frac {m}{F_{\text{out}}+L+D}}}

If input of this gas into the box ceased, then after time τ its concentration would decrease by about 63%.
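The one-box lifetime and the resulting exponential decay can be sketched as follows; the mass and removal rates are hypothetical round numbers, and the 63% figure is simply 1 − e⁻¹:

```python
import math

def lifetime(m: float, f_out: float, loss: float, deposition: float) -> float:
    """One-box mean lifetime tau = m / (F_out + L + D); mass in kg, rates in kg/s."""
    return m / (f_out + loss + deposition)

# Hypothetical values: 1e12 kg of gas, removed at a combined 1e4 kg/s.
tau = lifetime(1e12, 6e3, 3e3, 1e3)
print(tau)  # 100000000.0 seconds (about 3.2 years)

# If input ceased, the excess remaining after one lifetime is e^-1, about 37%,
# i.e. the concentration has fallen by about 63%.
print(round(1 - math.exp(-1), 2))  # 0.63
```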
Changes to any of these variables can alter the atmospheric lifetime of a greenhouse gas. For instance, methane's atmospheric lifetime is estimated to have been lower in the 19th century than now, but higher in the second half of the 20th century than after 2000. Carbon dioxide has an even more variable lifetime, which cannot be specified down to a single number. Scientists instead say that while the first 10% of carbon dioxide's airborne fraction (not counting the ~50% absorbed by land and ocean sinks within the emission's first year) is removed "quickly", the vast majority of the airborne fraction – 80% – lasts for "centuries to millennia". The remaining 10% stays for tens of thousands of years. In some models, this longest-lasting fraction is as large as 30%.
=== During geologic time scales ===
== Monitoring ==
Greenhouse gas monitoring involves the direct measurement of atmospheric concentrations and direct and indirect measurement of greenhouse gas emissions. Indirect methods calculate emissions of greenhouse gases based on related metrics such as fossil fuel extraction.
There are several different methods of measuring carbon dioxide concentrations in the atmosphere, including infrared analyzing and manometry. Methane and nitrous oxide are measured by other instruments, such as the range-resolved infrared differential absorption lidar (DIAL). Greenhouse gases are measured from space such as by the Orbiting Carbon Observatory and through networks of ground stations such as the Integrated Carbon Observation System.
The Annual Greenhouse Gas Index (AGGI) is defined by atmospheric scientists at NOAA as the ratio of total direct radiative forcing due to long-lived and well-mixed greenhouse gases for any year for which adequate global measurements exist, to that present in year 1990. These radiative forcing levels are relative to those present in year 1750 (i.e. prior to the start of the industrial era). 1990 is chosen because it is the baseline year for the Kyoto Protocol, and is the publication year of the first IPCC Scientific Assessment of Climate Change. As such, NOAA states that the AGGI "measures the commitment that (global) society has already made to living in a changing climate. It is based on the highest quality atmospheric observations from sites around the world. Its uncertainty is very low."
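As a sketch, the AGGI is just the ratio of two forcing totals; the forcing values below are hypothetical placeholders, not NOAA data:

```python
# AGGI sketch: total direct radiative forcing (relative to 1750) in a given
# year, divided by the corresponding 1990 value. Both inputs are in W/m^2.
def aggi(forcing_year: float, forcing_1990: float) -> float:
    return forcing_year / forcing_1990

# Hypothetical forcings: 3.2 W/m^2 in a recent year vs 2.2 W/m^2 in 1990.
print(round(aggi(3.2, 2.2), 2))  # 1.45
```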
=== Data networks ===
== Types of sources ==
=== Natural sources ===
The natural flows of carbon between the atmosphere, ocean, terrestrial ecosystems, and sediments are fairly balanced; so carbon levels would be roughly stable without human influence. Carbon dioxide is removed from the atmosphere primarily through photosynthesis and enters the terrestrial and oceanic biospheres. Carbon dioxide also dissolves directly from the atmosphere into bodies of water (ocean, lakes, etc.), as well as dissolving in precipitation as raindrops fall through the atmosphere. When dissolved in water, carbon dioxide reacts with water molecules and forms carbonic acid, which contributes to ocean acidity. It can then be absorbed by rocks through weathering. It also can acidify other surfaces it touches or be washed into the ocean.
=== Human-made sources ===
The vast majority of carbon dioxide emissions by humans come from the burning of fossil fuels. Additional contributions come from cement manufacturing, fertilizer production, and changes in land use like deforestation. Methane emissions originate from agriculture, fossil fuel production, waste, and other sources. Rice paddies are a significant agricultural source of greenhouse gas emissions, contributing 22% of total agricultural methane and 11% of nitrous oxide emissions.
If current emission rates continue then temperature rises will surpass 2.0 °C (3.6 °F) sometime between 2040 and 2070, which is the level the United Nations' Intergovernmental Panel on Climate Change (IPCC) says is "dangerous".
Most greenhouse gases have both natural and human-caused sources. An exception is purely human-produced synthetic halocarbons, which have no natural sources. During the pre-industrial Holocene, concentrations of existing gases were roughly constant, because the large natural sources and sinks roughly balanced. In the industrial era, human activities have added greenhouse gases to the atmosphere, mainly through the burning of fossil fuels and clearing of forests.
== Reducing human-caused greenhouse gases ==
=== Needed emissions cuts ===
=== Removal from the atmosphere through negative emissions ===
Several technologies remove greenhouse gases from the atmosphere. Most widely analyzed are those that remove carbon dioxide, storing it either in geologic formations, as with bio-energy with carbon capture and storage and carbon dioxide air capture, or in the soil, as in the case of biochar. Many long-term climate scenario models require large-scale human-made negative emissions to avoid serious climate change.
Negative emissions approaches are also being studied for atmospheric methane, called atmospheric methane removal.
== History of discovery ==
In the late 19th century, scientists experimentally discovered that N2 and O2 do not absorb infrared radiation (called, at that time, "dark radiation"), while water (both as true vapor and condensed in the form of microscopic droplets suspended in clouds) and CO2 and other poly-atomic gaseous molecules do absorb infrared radiation. In the early 20th century, researchers realized that greenhouse gases in the atmosphere made Earth's overall temperature higher than it would be without them. The term greenhouse was first applied to this phenomenon by Nils Gustaf Ekholm in 1901.
During the late 20th century, a scientific consensus evolved that increasing concentrations of greenhouse gases in the atmosphere cause a substantial rise in global temperatures and changes to other parts of the climate system, with consequences for the environment and for human health.
== Other planets ==
Greenhouse gases exist in many atmospheres, creating greenhouse effects on Mars, Titan, and particularly in the thick atmosphere of Venus. While Venus has been described as the end state of a runaway greenhouse effect, such a process has virtually no chance of being triggered by any human-caused increase in greenhouse gas concentrations: the Sun's brightness is too low, and it would likely need to increase by some tens of percent, which will take a few billion years.
== See also ==
Carbon accounting – Processes used to measure emissions of carbon dioxide equivalents
Carbon budget – Limit on carbon dioxide emission for a given climate impact
Carbon sequestration – Storing carbon in a carbon pool
Climate change feedbacks – Feedback related to climate change
== References ==
== External links ==
Media related to Greenhouse gases at Wikimedia Commons
Carbon Dioxide Information Analysis Center (CDIAC), U.S. Department of Energy, retrieved 26 July 2020
Annual Greenhouse Gas Index (AGGI) from NOAA
Atmospheric spectra of GHGs and other trace gases. Archived 25 March 2013 at the Wayback Machine. |
Ground attack | Close air support (CAS) is defined as aerial warfare actions—often air-to-ground actions such as strafes or airstrikes—by military aircraft against hostile targets in close proximity to friendly forces. A form of fire support, CAS requires detailed integration of each air mission with fire and movement of all forces involved. CAS may be conducted using aerial bombs, glide bombs, missiles, rockets, autocannons, machine guns, and even directed-energy weapons such as lasers.
The requirement for detailed integration because of proximity, fires or movement is the determining factor. CAS may need to be conducted during shaping operations with special forces if the mission requires detailed integration with the fire and movement of those forces. A closely related subset of air interdiction, battlefield air interdiction, denotes interdiction against units with near-term effects on friendly units, but which does not require integration with friendly troop movements. CAS requires excellent coordination with ground forces, typically handled by specialists such as artillery observers, joint terminal attack controllers, and forward air controllers.
World War I was the first conflict to make extensive use of CAS, albeit using relatively primitive methods in contrast to later military tactics, though it was made evident that proper coordination between aerial and ground forces via radio made attacks more effective. Several conflicts during the interwar period—including the Polish–Soviet War, the Spanish Civil War, the Iraqi Revolt, and the Chaco War—made notable use of CAS. World War II marked the universal acceptance of the integration of air power into combined arms warfare, with all of the war's major combatants having developed effective air-ground coordination techniques by the conflict's end. New techniques, such as the use of forward air control to guide CAS aircraft and identifying invasion stripes, also emerged at this time, being heavily shaped by the Italian Campaign and the invasion of Normandy. CAS continued to advance during the conflicts of the Cold War, especially the Korean War and the Vietnam War; major milestones included the introduction of attack helicopters, gunships, and dedicated CAS attack jets.
== History ==
=== World War I ===
The use of aircraft in the close air support of ground forces dates back to World War I, the first conflict to make significant military use of aerial forces. Air warfare, and indeed aviation itself, was still in its infancy; the direct effect of the rifle-calibre machine guns and light bombs of World War I aircraft was very limited compared with the power of, for instance, an average fighter-bomber of World War II, but CAS aircraft were still able to achieve a powerful psychological impact. Unlike artillery, the aircraft was a visible and personal enemy presenting a direct threat to enemy troops, while at the same time providing friendly forces proof of support from their superiors.
The most successful attacks of 1917–1918 had included planning for co-ordination between aerial and ground units, although it was relatively difficult at this early date to co-ordinate these attacks due to the primitive nature of air-to-ground radio communication. Though most air-power proponents sought independence from ground commanders and hence pushed the importance of interdiction and strategic bombing, they nonetheless recognized the need for close air support.
From the commencement of hostilities in 1914, aviators engaged in sporadic and spontaneous attacks on ground forces, but it was not until 1916 that an air support doctrine was elaborated and dedicated fighters for the job were put into service. By that point, the startling and demoralizing effect that attack from the air could have on the troops in the trenches had been made clear.
At the Battle of the Somme, 18 British armed reconnaissance planes strafed the enemy trenches after conducting surveillance operations. The success of this improvised assault spurred innovation on both sides. In 1917, following the Second Battle of the Aisne, the British debuted the first ground-attack aircraft, a modified F.E.2b fighter carrying 20 lb (9.1 kg) bombs and mounted machine-guns. After exhausting their ammunition, the planes returned to base for refueling and rearming before returning to the battle zone. Other modified planes used in this role were the Airco DH.5 and the Sopwith Camel – the latter proving particularly successful.
Aircraft support was first integrated into a battle plan on a large scale at the 1917 Battle of Cambrai, where a significantly larger number of tanks was deployed than previously. By that time, effective anti-aircraft tactics were being used by the enemy infantry and pilot casualties were high, although air support was later judged to have been of critical importance in places where the infantry was pinned down.
At this time, British doctrine came to recognize two forms of air support; trench strafing (the modern-day doctrine of CAS), and ground strafing (the modern-day doctrine of air interdiction) – attacking tactical ground targets away from the land battle. As well as strafing with machine-guns, planes engaged in such operations were commonly modified with bomb racks; the plane would fly in very low to the ground and release the bombs just above the trenches.
The Germans were also quick to adopt this new form of warfare and were able to deploy aircraft in a similar capacity at Cambrai. While the British used single-seater planes, the Germans preferred the use of heavier two-seaters with an additional machine gunner in the aft cockpit. The Germans adopted the powerful Hannover CL.II and built the first purpose-built ground attack aircraft, the Junkers J.I. During the 1918 German spring offensive, the Germans employed 30 squadrons, or Schlasta, of ground attack fighters and were able to achieve some initial tactical success. The British later deployed the Sopwith Salamander as a specialized ground attack aircraft, although it was too late to see much action.
During the Sinai and Palestine Campaign of 1918, CAS aircraft were an important factor in the ultimate victory. After the British achieved air superiority over the German aircraft sent to aid the Ottoman Turks, squadrons of S.E.5a's and D.H.4s were sent on wide-ranging attacks against German and Turkish positions near the Jordan river. Under these attacks, combined with a ground assault led by General Edmund Allenby, three Turkish armies soon collapsed into a full rout. In the words of the attacking squadron's official report:
No 1 Squadron made six heavy raids during the day, dropped three tons of bombs and fired nearly 24,000 machine gun rounds.
=== Inter-war period ===
The close air support doctrine was further developed in the interwar period. Most theorists advocated the adaptation of fighters or light bombers into the role. During this period, airpower advocates crystallized their views on the role of air-power in warfare. Aviators and ground officers developed largely opposing views on the importance of CAS, views that would frame institutional battles for CAS in the 20th century.
The inter-war period saw the use of CAS in a number of conflicts, including the Polish–Soviet War, the Spanish Civil War, the Iraqi revolt of 1920 and the Gran Chaco War.
The British used air power to great effect in various colonial hotspots in the Middle East and North Africa during the immediate postwar period. The newly formed Royal Air Force (RAF) contributed to the defeat of the Afghan military during the Third Anglo-Afghan War by harassing Afghan troops and breaking up their formations. Z Force, an RAF air squadron, was also used to support ground operations during the Somaliland campaign, in which the Darawiish king Diiriye Guure's insurgency was defeated. Following from these successes, the decision was made to create a unified RAF Iraq Command to use air power as a more cost-effective way of controlling large areas than the use of conventional land forces. It was effectively used to suppress the Great Iraqi Revolution of 1920 and various other tribal revolts.
During the Spanish Civil War German volunteer aviators of the Condor Legion on the Nationalist side, despite little official support from their government, developed close air support tactics that proved highly influential for subsequent Luftwaffe doctrine.
U.S. Marine Corps Aviation was used as an intervention force in support of U.S. Marine Corps ground forces during the Banana Wars, in places such as Haiti, the Dominican Republic and Nicaragua. Marine Aviators experimented with air-ground tactics and in Haiti and Nicaragua they adopted the tactic of dive bombing.
The observers and participants of these wars would base their CAS strategies on their experience of the conflict. Aviators, who wanted institutional independence from the Army, pushed for a view of air-power centered around interdiction, which would relieve them of the necessity of integrating with ground forces and allow them to operate as an independent military arm. They saw close air support as both the most difficult and most inefficient use of aerial assets.
Close air support was the most difficult mission, requiring pilots to identify and distinguish between friendly and hostile units. At the same time, targets engaged in combat are dispersed and concealed, reducing the effectiveness of air attacks. They also argued that the CAS mission merely duplicated the abilities of artillery, whereas interdiction provided a unique capability. Ground officers contended there was rarely sufficient artillery available, and the flexibility of aircraft would be ideal for massing firepower at critical points, while producing a greater psychological effect on friendly and hostile forces alike. Moreover, unlike massive, indiscriminate artillery strikes, small aerial bombs would not render ground untrafficable, slowing attacking friendly forces.
Although the prevailing view in official circles was largely indifferent to CAS during the interwar period, its importance was expounded upon by military theorists, such as J. F. C. Fuller and Basil Liddell Hart. Hart, who was an advocate of what later came to be known as 'Blitzkrieg' tactics, thought that the speed of armoured tanks would render conventional artillery incapable of providing support fire. Instead he argued that low-flying aircraft could serve as “more mobile artillery” in its place.
=== World War II ===
==== Luftwaffe ====
As a continental power intent on offensive operations, Germany could not ignore the need for aerial support of ground operations. Though the Luftwaffe, like its counterparts, tended to focus on strategic bombing, it was unique in its willingness to commit forces to CAS. Unlike the Allies, the Germans were not able to develop powerful strategic bombing capabilities, which would have required industrial development they were forbidden to undertake under the Treaty of Versailles. In joint exercises with Sweden in 1934, the Germans were first exposed to dive-bombing, which permitted greater accuracy while making attack aircraft more difficult for antiaircraft gunners to track. As a result, Ernst Udet, chief of the Luftwaffe's development, initiated procurement of close support dive bombers on the model of the U.S. Navy's Curtiss Helldiver, resulting in the Henschel Hs 123, which was later replaced by the famous Junkers Ju 87 Stuka. Experience in the Spanish Civil War led to the creation of five ground-attack groups in 1938, four of which would be equipped with Stukas. The Luftwaffe matched its material acquisitions with advances in air-ground coordination. General Wolfram von Richthofen organized a limited number of air liaison detachments that were attached to ground units of the main effort. These detachments existed to pass requests from the ground to the air, and receive reconnaissance reports, but they were not trained to guide aircraft onto targets.
These preparations did not prove fruitful in the invasion of Poland, where the Luftwaffe focused on interdiction and dedicated few assets to close air support. But the value of CAS was demonstrated at the crossing of the Meuse River during the Invasion of France in 1940. General Heinz Guderian, one of the creators of the combined-arms tactical doctrine commonly known as "blitzkrieg", believed the best way to provide cover for the crossing would be a continuous stream of ground attack aircraft on French defenders. Though few guns were hit, the attacks kept the French under cover and prevented them from manning their guns. Aided by the sirens attached to Stukas, the psychological impact was disproportional to the destructive power of close air support (although as often as not, the Stukas were used as tactical bombers instead of close air support, leaving much of the actual work to the older Hs 123 units for the first years of the war). In addition, the reliance on air support over artillery reduced the demand for logistical support through the Ardennes. Though there were difficulties in coordinating air support with the rapid advance, the Germans demonstrated consistently superior CAS tactics to those of the British and French defenders. Later, on the Eastern front, the Germans would devise visual ground signals to mark friendly units and to indicate direction and distance to enemy emplacements.
Despite these accomplishments, German CAS was not perfect and suffered from the same misunderstanding and interservice rivalry that plagued other nations' air arms, and friendly fire was not uncommon. For example, on the eve of the Meuse offensive, Guderian's superior cancelled his CAS plans and called for high-altitude strikes from medium bombers, which would have required halting the offensive until the air strikes were complete. Fortunately for the Germans, his order was issued too late to be implemented, and the Luftwaffe commander followed the schedule he had previously worked out with Guderian. As late as November 1941, the Luftwaffe refused to provide Erwin Rommel with an air liaison officer for the Afrika Korps, because it "would be against the best use of the air force as a whole."
German CAS was also extensively used on the Eastern Front during the period 1941–1943. Its decline was caused by the growing strength of the Red Air Force and the redeployment of assets to defend against American and British strategic bombardment. The Luftwaffe's loss of air superiority, combined with a declining supply of aircraft and fuel, crippled its ability to provide effective CAS on the western front after 1943.
==== RAF and USAAF ====
The Royal Air Force (RAF) entered the war woefully unprepared to provide CAS. In 1940 during the Battle of France, the Royal Air Force and Army headquarters in France were located at separate positions, resulting in unreliable communications. After the RAF was withdrawn in May, Army officers had to telephone the War Office in London to arrange for air support. The stunning effectiveness of German air-ground coordination spurred change. On the basis of tests in Northern Ireland in August 1940, Group Captain A. H. Wann RAF and Colonel J.D. Woodall (British Army) issued the Wann-Woodall Report, recommending the creation of a distinct tactical air force liaison officer (known colloquially as "tentacles") to accompany Army divisions and brigades. Their report spurred the RAF to create an RAF Army Cooperation Command and to develop tentacle equipment and procedures placing an Air Liaison Officer with each brigade.
Although the RAF was working on its CAS doctrine in London, officers in North Africa improvised their own coordination techniques. In October 1941, Sir Arthur Tedder and Arthur Coningham, senior RAF commanders in North Africa, created joint RAF-Army Air Support Control staffs at each corps and armored division headquarters, and placed a Forward Air Support Link at each brigade to forward air support requests. When trained tentacle teams arrived in 1942, they cut response time on support requests to thirty minutes. It was also in the North Africa desert that the cab rank strategy was developed. It used a series of three aircraft, each in turn directed by the pertinent ground control by radio. One aircraft would be attacking, another in flight to the battle area, while a third was being refuelled and rearmed at its base. If the first attack failed to destroy the tactical target, the aircraft in flight would be directed to continue the attack. The first aircraft would land for its own refuelling and rearming once the third had taken off. The CAS tactics developed and refined by the British during the campaign in North Africa served as the basis for the Allied system used to subsequently gain victory in the air over Germany in 1944 and devastate its cities and industries.
The use of forward air control to guide close air support (CAS) aircraft, so as to ensure that their attack hits the intended target and not friendly troops, was first used by the British Desert Air Force in North Africa, but not by the USAAF until operations in Salerno. During the North African Campaign in 1941 the British Army and the Royal Air Force established Forward Air Support Links (FASL), a mobile air support system using ground vehicles. Light reconnaissance aircraft would observe enemy activity and report it by radio to the FASL which was attached at brigade level. The FASL was in communication (a two-way radio link known as a "tentacle") with the Air Support Control (ASC) Headquarters attached to the corps or armoured division which could summon support through a Rear Air Support Link with the airfields. They also introduced the system of ground direction of air strikes by what was originally termed a "Mobile Fighter Controller" traveling with the forward troops. The controller rode in the "leading tank or armoured car" and directed a "cab rank" of aircraft above the battlefield. This system of close co-operation first used by the Desert Air Force, was steadily refined and perfected, during the campaigns in Italy, Normandy and Germany.
By the time the Italian Campaign had reached Rome, the Allies had established air superiority. They were then able to pre-schedule strikes by fighter-bomber squadrons; however, by the time the aircraft arrived in the strike area, oftentimes the targets, which were usually trucks, had fled. The initial solution to fleeing targets was the British "Rover" system. These were pairings of air controllers and army liaison officers at the front but able to switch communications seamlessly from one brigade to another – hence Rover. Incoming strike aircraft arrived with pre-briefed targets, which they would strike 20 minutes after arriving on station only if the Rovers had not directed them to another more pressing target. Rovers might call on artillery to mark targets with smoke shells, or they might direct the fighters to map grid coordinates, or they might resort to a description of prominent terrain features as guidance. However, one drawback for the Rovers was the constant rotation of pilots, who were there for fortnightly stints, leading to a lack of institutional memory. US commanders, impressed by the British tactics at the Salerno landings, adapted their own doctrine to include many features of the British system.
At the start of the war, the United States Army Air Forces (USAAF) had, as its principal mission, the doctrine of strategic bombing. This incorporated a firm belief that unescorted bombers could win the war without the involvement of ground troops. This doctrine proved to be fundamentally flawed. However, during the entire course of the war the USAAF top brass clung to this doctrine, and hence operated independently of the rest of the Army. Thus it was initially unprepared to provide CAS, and in fact had to be dragged "kicking and screaming" into the CAS function with the ground troops. USAAF doctrinal priorities for tactical aviation were, in order: air superiority, isolation of the battlefield via supply interdiction, and thirdly close air support. Hence, during the North African Campaign, CAS was poorly executed, if at all. So few aerial assets were assigned to U.S. troops that they fired on anything in the air. And in 1943, the USAAF changed their radios to a frequency incompatible with ground radios.
The situation improved during the Italian Campaign, where American and British forces, working in close cooperation, exchanged CAS techniques and ideas. There, the AAF's XII Air Support Command and the Fifth U.S. Army shared headquarters, meeting every evening to plan strikes and devising a network of liaisons and radios for communications. However, friendly fire continued to be a concern – pilots did not know recognition signals and regularly bombed friendly units, until an A-36 was shot down in self-defense by Allied tanks. The expectation of losses to friendly fire from the ground during the planned invasion of France prompted the black and white invasion stripes painted on all Allied aircraft from 1944.
In 1944, USAAF commander Lt. Gen. Henry ("Hap") Arnold acquired two groups of A-24 dive bombers, the Army version of the Navy's SBD-2, in response to the success of the Stuka and German CAS. Later, the USAAF developed a modification of the North American P-51 Mustang with dive brakes – the North American A-36 Apache. However, there was no training to match the purchases. Though Gen. Lesley McNair, commander of Army Ground Forces, pushed to change USAAF priorities, the latter failed to provide aircraft for even major training exercises. Six months before the invasion of Normandy, 33 divisions had received no joint air-ground training.
The USAAF saw the greatest innovations in 1944 under General Elwood Quesada, commander of IX Tactical Air Command, supporting the First U.S. Army. He developed the "armored column cover", in which on-call fighter-bombers maintained a high level of availability for important tank advances, allowing armor units to maintain a high tempo of exploitation even when they outran their artillery assets. He also used a modified antiaircraft radar to track friendly attack aircraft so they could be redirected as necessary, and experimented with assigning fighter pilots to tours as forward air controllers to familiarize them with the ground perspective. In July 1944, Quesada provided VHF aircraft radios to tank crews in Normandy. When the armored units broke out of the Normandy beachhead, tank commanders were able to communicate directly with overhead fighter-bombers. Despite these innovations, however, Quesada focused his aircraft on CAS only for major offensives. Typically, both British and American attack aircraft were tasked primarily to interdiction, even though later analysis showed interdiction to be twice as dangerous to the aircraft as CAS.
XIX TAC, under the command of General Otto P. Weyland, used similar tactics to support the rapid armored advance of General Patton's Third Army in its drive across France. Armed reconnaissance was a major feature of XIX TAC close air support, as the rapid advance left Patton's southern flank open. Such was the closeness of cooperation between the Third Army and XIX TAC that Patton counted on XIX TAC to guard his flanks. Patton credited this close air support from XIX TAC as a key factor in the rapid advance and success of his Third Army.
The American Navy and Marine Corps used CAS in conjunction with, or as a substitute for, unavailable artillery or naval gunfire in the Pacific theater. Navy and Marine F6F Hellcats and F4U Corsairs used a variety of ordnance, such as conventional bombs, rockets, and napalm, to dislodge or attack Japanese troops fighting from cave complexes in the latter part of the Second World War.
==== Red Air Force ====
The Soviet Union's Red Air Force quickly recognized the value of ground-support aircraft. As early as the Battles of Khalkhyn Gol in 1939, Soviet aircraft had the task of disrupting enemy ground operations. This use increased markedly after the June 1941 Axis invasion of the Soviet Union. Purpose-built aircraft such as the Ilyushin Il-2 Sturmovik proved highly effective in blunting the activity of the Panzers. Joseph Stalin paid the Il-2 a great tribute in his own inimitable manner: when a particular production factory fell behind on its deliveries, he cabled the factory manager, "They are as essential to the Red Army as air and bread".
=== Korean War ===
From Navy experiments with the KGW-1 Loon, the Navy designation for the German V-1 flying bomb, Marine Captain Marian Cranford Dalby developed the AN/MPQ-14, a system that enabled radar-guided bomb release at night or in poor weather.
Though the Marine Corps continued its tradition of intimate air–ground cooperation in the Korean War, the newly created United States Air Force (USAF) again moved away from CAS, now to strategic bombers and jet interceptors. Though eventually the Air Force supplied sufficient pilots and forward air controllers to provide battlefield support, coordination was still lacking. Since pilots operated under centralized control, ground controllers were never able to familiarize themselves with pilots, and requests were not processed quickly. Harold K. Johnson, then commander of the 8th Cavalry Regiment, 1st Cavalry Division (later Army Chief of Staff) commented regarding CAS: "If you want it, you can't get it. If you can get it, it can't find you. If it can find you, it can't identify the target. If it can identify the target, it can't hit it. But if it does hit the target, it doesn't do a great deal of damage anyway."
It is unsurprising, then, that MacArthur excluded USAF aircraft from the airspace over the Inchon Landing in September 1950, instead relying on Marine Aircraft Group 33 for CAS. In December 1951, Lt. Gen. James Van Fleet, commander of the Eighth U.S. Army, formally requested the United Nations Commander, Gen. Mark Clark, to permanently attach an attack squadron to each of the four army corps in Korea. Though the request was denied, Clark allocated many more Navy and Air Force aircraft to CAS. Despite the rocky start, the USAF would also work to improve its coordination efforts. It eventually required pilots to serve 80 days as forward air controllers (FACs), which gave them an understanding of the difficulties from the ground perspective and helped cooperation when they returned to the cockpit. The USAF also provided airborne FACs in critical locations. The Army also learned to assist, by suppressing anti-aircraft fire prior to air strikes.
The U.S. Army wanted a dedicated USAF presence on the battlefield to reduce fratricide, the harming of friendly forces. This preference led to the creation of the air liaison officer (ALO) position. The ALO is an aeronautically rated officer who has spent a tour away from the cockpit, serving as the primary adviser to the ground commander on the capabilities and limitations of airpower. The Korean War revealed important flaws in the application of CAS. First, the USAF preferred interdiction over fire support, while the Army regarded support missions as the main concern for air forces. Second, the Army advocated a degree of decentralization for good reactivity, in contrast with the USAF-favored centralization of CAS. Third, there was a lack of training and joint culture, which are necessary for adequate air-ground integration. Finally, USAF aircraft were not designed for CAS: "the advent of jet fighters, too fast to adjust their targets, and strategic bombers, too big to be used on theatre, rendered CAS much harder to implement".
=== Vietnam and the CAS role debate ===
During the late 1950s and early 1960s, the US Army began to identify a dedicated CAS need for itself. The Howze Board, which studied the question, published a landmark report describing the need for a helicopter-based CAS requirement. However, the Army did not follow the Howze Board recommendation initially. Nevertheless, it did eventually adopt the use of helicopter gunships and attack helicopters in the CAS role.
Though the Army gained more control over its own CAS through the development of the helicopter gunship and attack helicopter, the Air Force continued to provide fixed-wing CAS for Army units. Over the course of the war, the adaptation of the Tactical Air Control System proved crucial to the improvement of Air Force CAS. Jets replaced propeller-driven aircraft with minimal issues. The Air Force's assumption of responsibility for the air request net improved communication equipment and procedures, which had long been a problem. Additionally, a major step in satisfying the Army's demands for more control over its CAS was the successful implementation of close air support control agencies at the corps level under Air Force control. Other notable adaptations were the use of airborne Forward Air Controllers (FACs), a role previously dominated by FACs on the ground, and the use of B-52s for CAS.
U.S. Marine Corps Aviation was much more prepared for the application of CAS in the Vietnam War, due to CAS being its central mission. In fact, as late as 1998, Marines were still claiming in their training manuals that "Close air support (CAS) is a Marine Corps innovation." One of the main debates taking place within the Marine Corps during the war was whether to adopt the helicopter gunship as a part of CAS doctrine and what its adoption would mean for fixed-wing CAS in the Marine Corps. The issue would eventually be put to rest, however, as the helicopter gunship proved crucial in the combat environment of Vietnam.
Though helicopters were initially armed merely as defensive measures to support the landing and extraction of troops, their value in this role led to the modification of early helicopters as dedicated gunship platforms. Though not as fast as fixed-wing aircraft and consequently more vulnerable to anti-aircraft weaponry, helicopters could use terrain for cover, and more importantly, had much greater battlefield persistence owing to their low speeds. The latter made them a natural complement to ground forces in the CAS role. In addition, newly developed anti-tank guided missiles, demonstrated to great effectiveness in the 1973 Yom Kippur War, provided aircraft with an effective ranged anti-tank weapon. These considerations motivated armies to promote the helicopter from a support role to a combat arm. Though the U.S. Army controlled rotary-wing assets, coordination continued to pose a problem. During wargames, field commanders tended to hold back attack helicopters out of fear of air defenses, committing them too late to effectively support ground units. The earlier debate over control over CAS assets was reiterated between ground commanders and aviators. Nevertheless, the US Army incrementally gained increased control over its CAS role.
In the mid-1970s, after Vietnam, the USAF decided to train an enlisted force to handle many of the tasks with which the ALO was saturated, including terminal attack control. Presently, the ALO serves mainly in the liaison role, with the intricate details of mission planning and attack guidance left to the enlisted members of the Tactical Air Control Party.
=== NATO and AirLand Battle ===
Following the introduction of attack helicopters into modern military practice for close air support purposes, General Crosbie E. Saint provided the doctrinal cover for using the AH-64 Apache in AirLand Battle operations, such as in the NATO European theatre.
== Aircraft ==
Various aircraft can fill close air support roles. Military helicopters are often used for close air support and are so closely integrated with ground operations that in most countries they are operated by the army rather than the air force. Fighters and ground attack aircraft like the A-10 Thunderbolt II provide close air support using rockets, missiles, bombs, and strafing runs.
During the Second World War, a mixture of dive bombers and fighters was used for CAS missions. Dive bombing permitted greater accuracy than level bombing runs, while the rapid altitude change made the aircraft more difficult for anti-aircraft gunners to track. The Junkers Ju 87 Stuka is a well-known example of a dive bomber built for precision bombing that was also used successfully for CAS. It was fitted with wind-blown sirens on its landing gear to enhance its psychological effect. Some variants of the Stuka were equipped with a pair of 37 mm (1.5 in) Bordkanone BK 3,7 cannons mounted in under-wing gun pods, each loaded with two six-round magazines of armour-piercing tungsten carbide-cored ammunition, for anti-tank operations.
Other than the North American A-36 Apache, a P-51 Mustang modified with dive brakes, the Americans and British used no dedicated CAS aircraft in the Second World War, preferring fighters or fighter-bombers that could be pressed into CAS service. While some aircraft, such as the Hawker Typhoon and the P-47 Thunderbolt, performed admirably in that role, a number of compromises prevented most fighters from making effective CAS platforms. Fighters were usually optimized for high-altitude operations without bombs or other external ordnance – flying at low level with bombs quickly expended fuel. Cannons also had to be harmonized differently for strafing, which required a more distant and lower convergence point than aerial combat did.
Of the Allied powers that fought in the Second World War, the Soviet Union used specifically designed ground attack aircraft more than the UK and US. Such aircraft included the Ilyushin Il-2, the single most produced military aircraft at any point in world history. The Soviet military also frequently deployed the Polikarpov Po-2 biplane as a ground attack aircraft.
The Royal Navy Hawker Sea Fury fighters and the U.S. Vought F4U Corsair and Douglas A-1 Skyraider were operated in a ground attack capacity during the Korean War. Outside of the conflict, there were numerous other occasions that the Sea Fury was used as a ground attack platform. Cuban Sea Furies, operated by the Fuerza Aérea Revolucionaria ("Revolutionary Air Force"; FAR), were used to oppose the US-orchestrated Bay of Pigs Invasion to attack incoming transport ships and disembarking ground forces alike. The A-1 Skyraider also saw later use, especially throughout the Vietnam War.
In the Vietnam War, the United States introduced a number of fixed and rotary wing gunships, including several cargo aircraft that were refitted as gun platforms to serve as CAS and air interdiction aircraft. The first of these to emerge was the Douglas AC-47 Spooky, which was converted from the Douglas C-47 Skytrain/Douglas DC-3. Some commentators have remarked on the high effectiveness of the AC-47 in the CAS role. The USAF developed several other platforms following on from the AC-47, including the Fairchild AC-119 and the Lockheed AC-130. The AC-130 has had a particularly lengthy service, being used extensively during the War in Afghanistan, the Iraq War and the US military intervention in Libya during the early twenty-first century. Multiple variants of the AC-130 have been developed and it has continued to be modernised, including the adoption of various new armaments.
Close support is usually thought of as being carried out only by fighter-bombers or dedicated ground-attack aircraft, such as the A-10 Thunderbolt II (Warthog) or Su-25 (Frogfoot), or attack helicopters such as the AH-64 Apache, but even large high-altitude bombers have successfully filled close support roles using precision-guided munitions. During Operation Enduring Freedom, the lack of fighter aircraft forced military planners to rely heavily on US bombers, particularly the B-1B Lancer, to fill the CAS role. Bomber CAS, relying mainly on GPS-guided JDAMs and laser-guided weapons, has evolved into a devastating tactical employment methodology and has changed US doctrinal thinking regarding CAS in general. With significantly longer loiter times, range, and weapon capacity, bombers can be deployed to bases outside the immediate battlefield area, with 12-hour missions being commonplace since 2001. After the initial collapse of the Taliban regime in Afghanistan, airfields in Afghanistan became available for continuing operations against the Taliban and Al-Qaeda. This resulted in a great number of CAS operations being undertaken by aircraft from Belgium (F-16 Fighting Falcon), Denmark (F-16), France (Mirage 2000D), the Netherlands (F-16), Norway (F-16), the United Kingdom (Harrier GR7s, GR9s and Tornado GR4s) and the United States (A-10, F-16, AV-8B Harrier II, F-15E Strike Eagle, F/A-18 Hornet, F/A-18E/F Super Hornet, UH-1Y Venom).
The use of information technology to direct and coordinate precision air support has increased the importance of intelligence, surveillance, and reconnaissance in using CAS. Laser, GPS, and battlefield data transfer are routinely used to coordinate with a wide variety of air platforms able to provide CAS. The 2003 joint CAS doctrine reflects the increased use of electronic and optical technology to direct targeted fires for CAS. Air platforms communicating with ground forces can also provide additional aerial-to-ground visual search, ground-convoy escort, and enhancement of command and control (C2), assets which can be particularly important for low intensity conflict.
== Doctrine ==
MCWP 3-23.1: CLOSE AIR SUPPORT (PDF). U.S. Marine Corps. 30 July 1998.
JP 3-09.3: Close Air Support (PDF). Joint Chiefs of Staff. 25 November 2014.
== See also ==
Artillery observer
Attack aircraft
Counter-insurgency aircraft, a specific type of CAS aircraft
Dive bomber
Flying Leathernecks
Forward air control
Light Attack/Armed Reconnaissance
Pace-Finletter MOU 1952
Schnellbomber
Tactical bombing, a general term for the type of bombing that includes CAS and air interdiction
== References ==
=== Citations ===
=== Bibliography ===
== External links ==
Can Our Jets Support the Guys on the Ground? – Popular Science
The Forward Air Controller Association
The ROMAD Locator The home of the current ground FAC
Operation Anaconda: An Airpower Perspective Archived 2019-07-14 at the Wayback Machine – Close air support during Operation Anaconda, United States Air Force, 2005.
Field Manuals 44-18