Chomsky hierarchy
https://en.wikipedia.org/wiki?curid=6011
The Chomsky hierarchy, in the fields of formal language theory, computer science, and linguistics, is a containment hierarchy of classes of formal grammars. A formal grammar describes how to form strings from a formal language's alphabet that are valid according to the language's syntax. The linguist Noam Chomsky theorized that four different classes of formal grammars existed that could generate increasingly complex languages. Each class can also generate all the languages that can be generated by the classes below it (the classes are nested by set inclusion).
History.
The general idea of a hierarchy of grammars was first described by Noam Chomsky in "Three models for the description of language" during the formalization of transformational-generative grammar (TGG). Marcel-Paul Schützenberger also played a role in the development of the theory of formal languages; the paper "The algebraic theory of context free languages" describes the modern hierarchy, including context-free grammars.
Independently, alongside linguists, mathematicians were developing models of computation (via automata). Parsing a sentence in a language is similar to computation, and the grammars described by Chomsky proved to both resemble and be equivalent in computational power to various machine models.
The hierarchy.
The following table summarizes each of Chomsky's four types of grammars, the class of language it generates, the type of automaton that recognizes it, and the form its rules must have. The classes are defined by the constraints on the production rules.

Grammar   Languages                Automaton                              Production rules
Type-0    Recursively enumerable   Turing machine                         γ → α (no constraints)
Type-1    Context-sensitive        Linear bounded automaton               αAβ → αγβ
Type-2    Context-free             Non-deterministic pushdown automaton   A → α
Type-3    Regular                  Finite-state automaton                 A → a, A → aB
Note that the set of grammars corresponding to recursive languages is not a member of this hierarchy; these would be properly between Type-0 and Type-1.
Every regular language is context-free, every context-free language is context-sensitive, every context-sensitive language is recursive and every recursive language is recursively enumerable. These are all proper inclusions, meaning that there exist recursively enumerable languages that are not context-sensitive, context-sensitive languages that are not context-free and context-free languages that are not regular.
Regular (Type-3) grammars.
Type-3 grammars generate the regular languages. Such a grammar restricts its rules to a single nonterminal on the left-hand side and a right-hand side consisting of a single terminal, possibly followed by a single nonterminal, in which case the grammar is "right regular". Alternatively, all the rules can have their right-hand sides consist of a single terminal, possibly "preceded" by a single nonterminal ("left regular"). These generate the same languages. However, if left-regular rules and right-regular rules are combined, the language need no longer be regular. The rule S → ε is also allowed here if the start symbol S does not appear on the right side of any rule. These languages are exactly all languages that can be decided by a finite-state automaton. Additionally, this family of formal languages can be obtained by regular expressions. Regular languages are commonly used to define search patterns and the lexical structure of programming languages.
For example, the regular language L = {a^n | n ≥ 1} is generated by the Type-3 grammar G = ({S}, {a}, P, S), with the productions P being the following: S → aS and S → a.
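As an illustration (not part of the original article), the sketch below derives strings from this grammar and checks them against an equivalent regular expression; the names PRODUCTIONS, derive, and PATTERN are purely illustrative.

```python
import random
import re

# Hypothetical sketch: the right-regular grammar S -> aS | a generates {a^n | n >= 1}.
# An equivalent regular expression (i.e. a finite-state automaton) decides the same language.
PRODUCTIONS = {"S": ["aS", "a"]}
PATTERN = re.compile(r"a+")

def derive(start="S", rng=random.Random(0)):
    """Repeatedly rewrite the leftmost nonterminal until only terminals remain."""
    sentential = start
    while any(ch.isupper() for ch in sentential):
        i = next(j for j, ch in enumerate(sentential) if ch.isupper())
        sentential = sentential[:i] + rng.choice(PRODUCTIONS[sentential[i]]) + sentential[i + 1:]
    return sentential

for _ in range(5):
    word = derive()
    assert PATTERN.fullmatch(word)
    print(word)
```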
Context-free (Type-2) grammars.
Type-2 grammars generate the context-free languages. These are defined by rules of the form A → α, with A being a nonterminal and α being a string of terminals and/or nonterminals. These languages are exactly all languages that can be recognized by a non-deterministic pushdown automaton. Context-free languages (or rather their subset of deterministic context-free languages) are the theoretical basis for the phrase structure of most programming languages, though their syntax also includes context-sensitive name resolution due to declarations and scope. Often a subset of grammars is used to make parsing easier, such as by an LL parser.
For example, the context-free language L = {a^n b^n | n ≥ 1} is generated by the Type-2 grammar G = ({S}, {a, b}, P, S), with the productions P being the following: S → aSb and S → ab.
The language is context-free but not regular (by the pumping lemma for regular languages).
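To make the pushdown-automaton connection concrete, here is a minimal, hypothetical recognizer for {a^n b^n | n ≥ 1}; the single stack symbol is exactly what a finite-state machine lacks.

```python
# Hypothetical sketch: a pushdown-automaton-style recognizer for {a^n b^n | n >= 1}.
# The stack (a simple counter would do, since only one stack symbol is needed) is what
# a finite-state automaton lacks, which is why this language is not regular.
def is_anbn(word: str) -> bool:
    stack = []
    i = 0
    while i < len(word) and word[i] == "a":   # push one symbol per leading 'a'
        stack.append("A")
        i += 1
    while i < len(word) and word[i] == "b" and stack:   # pop one symbol per 'b'
        stack.pop()
        i += 1
    # accept only if the input is exhausted, the stack is empty, and n >= 1
    return i == len(word) and not stack and len(word) > 0

assert is_anbn("ab") and is_anbn("aaabbb")
assert not any(is_anbn(w) for w in ("", "aab", "abb", "ba", "abab"))
```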
Context-sensitive (Type-1) grammars.
Type-1 grammars generate context-sensitive languages. These grammars have rules of the form αAβ → αγβ, with A a nonterminal and α, β and γ strings of terminals and/or nonterminals. The strings α and β may be empty, but γ must be nonempty. The rule S → ε is allowed if the start symbol S does not appear on the right side of any rule. The languages described by these grammars are exactly all languages that can be recognized by a linear bounded automaton (a nondeterministic Turing machine whose tape is bounded by a constant times the length of the input).
For example, the context-sensitive language L = {a^n b^n c^n | n ≥ 1} is generated by a Type-1 grammar G.
The language is context-sensitive but not context-free (by the pumping lemma for context-free languages).
A proof that this grammar generates L = {a^n b^n c^n | n ≥ 1} is sketched in the article on Context-sensitive grammars.
Recursively enumerable (Type-0) grammars.
Type-0 grammars include all formal grammars. There are no constraints on the production rules. They generate exactly all languages that can be recognized by a Turing machine, thus any language that can be generated at all can be generated by a Type-0 grammar. These languages are also known as the "recursively enumerable" or "Turing-recognizable" languages. Note that this is different from the recursive languages, which can be "decided" by an always-halting Turing machine.
Cathode-ray tube
https://en.wikipedia.org/wiki?curid=6014
A cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns, which emit electron beams that are manipulated to display images on a phosphorescent screen. The images may represent electrical waveforms on an oscilloscope, a frame of video on an analog television set (TV), digital raster graphics on a computer monitor, or other phenomena like radar targets. A CRT in a TV is commonly called a picture tube. CRTs have also been used as memory devices, in which case the screen is not intended to be visible to an observer. The term "cathode ray" was used to describe electron beams when they were first discovered, before it was understood that what was emitted from the cathode was a beam of electrons.
In CRT TVs and computer monitors, the entire front area of the tube is scanned repeatedly and systematically in a fixed pattern called a raster. In color devices, an image is produced by controlling the intensity of each of three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference. In modern CRT monitors and TVs the beams are bent by magnetic deflection, using a deflection yoke. Electrostatic deflection is commonly used in oscilloscopes.
The tube is a glass envelope which is heavy, fragile, and long from front screen face to rear end. Its interior must be close to a vacuum to prevent the emitted electrons from colliding with air molecules and scattering before they hit the tube's face. Thus, the interior is evacuated to less than a millionth of atmospheric pressure. As such, handling a CRT carries the risk of violent implosion that can hurl glass at great velocity. The face is typically made of thick lead glass or special barium-strontium glass to be shatter-resistant and to block most X-ray emissions. This tube makes up most of the weight of CRT TVs and computer monitors.
Since the early 2010s, CRTs have been superseded by flat-panel display technologies such as LCD, plasma display, and OLED displays, which are cheaper to manufacture and run, as well as significantly lighter and thinner. Flat-panel displays can also be made in very large sizes, whereas the largest CRT ever made was 45 inches.
A CRT works by electrically heating a tungsten coil which in turn heats a cathode in the rear of the CRT, causing it to emit electrons which are modulated and focused by electrodes. The electrons are steered by deflection coils or plates, and an anode accelerates them towards the phosphor-coated screen, which generates light when hit by the electrons.
History.
Discoveries.
Cathode rays were discovered by Julius Plücker and Johann Wilhelm Hittorf. Hittorf observed that some unknown rays were emitted from the cathode (negative electrode) which could cast shadows on the glowing wall of the tube, indicating the rays were traveling in straight lines. In 1890, Arthur Schuster demonstrated cathode rays could be deflected by electric fields, and William Crookes showed they could be deflected by magnetic fields. In 1897, J. J. Thomson succeeded in measuring the mass-to-charge ratio of cathode rays, showing that they consisted of negatively charged particles smaller than atoms, the first "subatomic particles", which had already been named "electrons" by Irish physicist George Johnstone Stoney in 1891.
The earliest version of the CRT was known as the "Braun tube", invented by the German physicist Ferdinand Braun in 1897. It was a cold-cathode diode, a modification of the Crookes tube with a phosphor-coated screen. Braun was the first to conceive the use of a CRT as a display device. The "Braun tube" became the foundation of 20th century TV.
In 1908, Alan Archibald Campbell-Swinton, fellow of the Royal Society (UK), published a letter in the scientific journal "Nature", in which he described how "distant electric vision" could be achieved by using a cathode-ray tube (or "Braun" tube) as both a transmitting and receiving device. He expanded on his vision in a speech given in London in 1911 and reported in "The Times" and the "Journal of the Röntgen Society".
The first cathode-ray tube to use a hot cathode was developed by John Bertrand Johnson (who gave his name to the term Johnson noise) and Harry Weiner Weinhart of Western Electric, and became a commercial product in 1922. The introduction of hot cathodes allowed for lower acceleration anode voltages and higher electron beam currents, since the anode now only accelerated the electrons emitted by the hot cathode, and no longer had to have a very high voltage to induce electron emission from the cold cathode.
Development.
The technology of the cathode-ray tube derives from an 1897 paper by Karl Ferdinand Braun describing his development of the cathode-ray oscilloscope. Braun's paper came out just a few months before J. J. Thomson's work that led to the discovery that cathode rays are streams of corpuscles, now called electrons.
In 1926, Kenjiro Takayanagi demonstrated a CRT TV receiver with a mechanical video camera that received images with a 40-line resolution. By 1927, he improved the resolution to 100 lines, which was unrivaled until 1931. By 1928, he was the first to transmit human faces in half-tones on a CRT display.
In 1927, Philo Farnsworth created a TV prototype.
The CRT was named in 1929 by inventor Vladimir K. Zworykin. He was subsequently hired by RCA, which was granted a trademark for the term "Kinescope", RCA's term for a CRT, in 1932; it voluntarily released the term to the public domain in 1950.
In the 1930s, Allen B. DuMont made the first CRTs to last 1,000 hours of use, which was one of the factors that led to the widespread adoption of TV.
The first commercially made electronic TV sets with cathode-ray tubes were manufactured by Telefunken in Germany in 1934.
In 1947, the cathode-ray tube amusement device, the earliest known interactive electronic game as well as the first to incorporate a cathode-ray tube screen, was created.
From 1949 to the early 1960s, there was a shift from circular CRTs to rectangular CRTs, although the first rectangular CRTs were made in 1938 by Telefunken. While circular CRTs were the norm, European TV sets often blocked portions of the screen to make it appear somewhat rectangular while American sets often left the entire front of the CRT exposed or only blocked the upper and lower portions of the CRT.
In 1954, RCA produced some of the first color CRTs, the 15GP22 CRTs used in the CT-100, the first color TV set to be mass produced. The first rectangular color CRTs were also made in 1954. However, the first rectangular color CRTs to be offered to the public were made in 1963. One of the challenges that had to be solved to produce the rectangular color CRT was convergence at the corners of the CRT. In 1965, brighter rare earth phosphors began replacing dimmer and cadmium-containing red and green phosphors. Eventually blue phosphors were replaced as well.
The size of CRTs increased over time, from 20 inches in 1938, to 21 inches in 1955, 25 inches by 1974, 30 inches by 1980, 35 inches by 1985, and 43 inches by 1989. The world's largest was the Sony KX-45ED1 at 45 inches but only one known working model exists.
In 1960, the Aiken tube was invented. It was a CRT in a flat-panel display format with a single electron gun. Deflection was electrostatic and magnetic, but due to patent problems, it was never put into production. It was also envisioned as a head-up display in aircraft. By the time patent issues were solved, RCA had already invested heavily in conventional CRTs.
1968 marked the release of the Sony Trinitron brand with the model KV-1310, which was based on aperture grille technology. It was acclaimed for its improved output brightness. The Trinitron screen was identifiable by its upright cylindrical shape, a consequence of its unique triple-cathode, single-gun construction.
In 1987, flat-screen CRTs were developed by Zenith for computer monitors, reducing reflections and helping increase image contrast and brightness. Such CRTs were expensive, which limited their use to computer monitors. Attempts were made to produce flat-screen CRTs using inexpensive and widely available float glass.
In 1990, the first CRT with HD resolution, the Sony KW-3600HD, was released to the market. It is considered to be "historical material" by Japan's national museum.
The Sony KWP-5500HD, an HD CRT projection TV, was released in 1992.
In the mid-1990s, some 160 million CRTs were made per year.
In the mid-2000s, Canon and Sony presented the surface-conduction electron-emitter display and field-emission displays, respectively. They both were flat-panel displays that had one (SED) or several (FED) electron emitters per subpixel in place of electron guns. The electron emitters were placed on a sheet of glass and the electrons were accelerated to a nearby sheet of glass with phosphors using an anode voltage. The electrons were not focused, making each subpixel essentially a flood beam CRT. They were never put into mass production as LCD technology was significantly cheaper, eliminating the market for such displays.
The last large-scale manufacturer of (in this case, recycled) CRTs, Videocon, ceased production in 2015. CRT TVs stopped being made around the same time.
In 2012, Samsung SDI and several other major companies were fined by the European Commission for price fixing of TV cathode-ray tubes.
The same occurred in 2015 in the US and in Canada in 2018.
Worldwide sales of CRT computer monitors peaked in 2000, at 90 million units, while those of CRT TVs peaked in 2005 at 130 million units.
Decline.
Beginning in the late 1990s to the early 2000s, CRTs began to be replaced with LCDs, starting first with computer monitors smaller than 15 inches in size, largely because of their lower bulk. Among the first manufacturers to stop CRT production was Hitachi in 2001, followed by Sony in Japan in 2004. Flat-panel displays dropped in price and started significantly displacing cathode-ray tubes in the 2000s. LCD monitor sales began exceeding those of CRTs in 2003–2004 and LCD TV sales started exceeding those of CRTs in some markets in 2005. Samsung SDI stopped CRT production in 2012.
Despite being a mainstay of display technology for decades, CRT-based computer monitors and TVs are now obsolete. Demand for CRT screens dropped in the late 2000s. Despite efforts from Samsung and LG to make CRTs competitive with their LCD and plasma counterparts, offering slimmer and cheaper models to compete with similarly sized and more expensive LCDs, CRTs eventually became obsolete and were relegated to developing markets and vintage enthusiasts once LCDs fell in price, with their lower bulk, weight and ability to be wall mounted coming as advantages.
Some industries still use CRTs because it is too much effort, downtime, or cost to replace them, or there is no substitute available; a notable example is the airline industry. Planes such as the Boeing 747-400 and the Airbus A320 used CRT instruments in their glass cockpits instead of mechanical instruments. Airlines such as Lufthansa still use this CRT technology, which also relies on floppy disks for navigation updates. CRTs are also used in some military equipment for similar reasons. At least one company still manufactures new CRTs for these markets.
A popular consumer usage of CRTs is for retro gaming. Some games are impossible to play without CRT display hardware; light guns, for example, only work on CRTs because they depend on the progressive timing properties of CRTs. Another reason people use CRTs is the natural blending of the image on these displays. Some games designed for CRT displays exploit this, using the blending of detail and color to turn raw pixels into softer images for aesthetic appeal and variety. In addition, compared to LCD displays, CRTs have a reduced input latency between when one touches the controller and when the action is reflected on screen, allowing for more precise control.
Constructions.
Body.
The body of a CRT is usually made up of three parts: A screen/faceplate/panel, a cone/funnel, and a neck. The joined screen, funnel and neck are known as the bulb or envelope.
The neck is made from a glass tube while the funnel and screen are made by pouring and then pressing glass into a mold. The glass, known as CRT glass or TV glass, needs special properties to shield against x-rays while providing adequate light transmission in the screen or being very electrically insulating in the funnel and neck. The formulation that gives the glass its properties is also known as the melt. The glass is of very high quality, being almost contaminant and defect free. Most of the costs associated with glass production come from the energy used to melt the raw materials into glass. Glass furnaces for CRT glass production have several taps to allow molds to be replaced without stopping the furnace, to allow production of CRTs of several sizes. Only the glass used on the screen needs to have precise optical properties.
The optical properties of the glass used on the screen affect color reproduction and purity in color CRTs. Transmittance, or how transparent the glass is, may be adjusted to be more transparent to certain colors (wavelengths) of light. Transmittance is measured at the center of the screen with 546 nm wavelength light through a 10.16 mm thick screen. Transmittance goes down with increasing thickness. Standard transmittances for color CRT screens are 86%, 73%, 57%, 46%, 42% and 30%. Lower transmittances are used to improve image contrast, but they put more stress on the electron gun, which must supply a higher electron beam power to light the phosphors more brightly and compensate for the reduced transmittance. The transmittance must be uniform across the screen to ensure color purity. The radius (curvature) of screens has increased (grown less curved) over time, from 30 to 68 inches, ultimately evolving into completely flat screens, reducing reflections. The thickness of both curved and flat screens gradually increases from the center outwards, and with it, transmittance is gradually reduced. This means that flat-screen CRTs may not be completely flat on the inside.
The glass used in CRTs arrives from the glass factory to the CRT factory as either separate screens and funnels with flame-fused necks, for Color CRTs, or as bulbs made up of a flame-fused screen, funnel and neck. There were several glass formulations for different types of CRTs, that were classified using codes specific to each glass manufacturer. The compositions of the melts were also specific to each manufacturer. Those optimized for high color purity and contrast were doped with Neodymium, while those for monochrome CRTs were tinted to differing levels, depending on the formulation used and had transmittances of 42% or 30%. Purity is ensuring that the correct colors are activated (for example, ensuring that red is displayed uniformly across the screen) while convergence ensures that images are not distorted. Convergence may be modified using a cross hatch pattern.
CRT glass used to be made by dedicated companies such as AGC Inc., O-I Glass, Samsung Corning Precision Materials, Corning Inc., and Nippon Electric Glass; others such as Videocon, Sony for the US market and Thomson made their own glass.
The funnel and the neck are made of a leaded potash-soda glass or lead silicate glass formulation to shield against x-rays generated when high voltage electrons decelerate after striking a target, such as the phosphor screen or shadow mask of a color CRT. The velocity of the electrons depends on the anode voltage of the CRT; the higher the voltage, the higher the speed. The amount of x-rays emitted by a CRT can also be lowered by reducing the brightness of the image. Leaded glass is used because it is inexpensive, while also shielding heavily against x-rays, although some funnels may also contain barium. The screen is usually instead made of a special lead-free silicate glass formulation with barium and strontium to shield against x-rays, as it does not brown with use, unlike glass containing lead. Another glass formulation uses 2–3% of lead on the screen. Alternatively, zirconium can be used on the screen in combination with barium, instead of lead.
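As a rough, back-of-the-envelope illustration of how electron velocity scales with anode voltage (the voltages below are the typical values quoted in this section; the calculation itself is not from the article):

```python
# Hypothetical calculation: how fast electrons strike the screen for typical anode voltages.
# A relativistic formula is used because at ~30 kV the classical result is already a few
# percent off.
import math

E_CHARGE = 1.602e-19   # electron charge, C
E_MASS = 9.109e-31     # electron rest mass, kg
C = 2.998e8            # speed of light, m/s

def electron_speed(anode_kv: float) -> float:
    """Speed (m/s) of an electron accelerated through anode_kv kilovolts."""
    gamma = 1.0 + (anode_kv * 1e3 * E_CHARGE) / (E_MASS * C**2)
    return C * math.sqrt(1.0 - 1.0 / gamma**2)

for kv in (21, 25, 32):   # typical monochrome and color anode voltages from the text
    v = electron_speed(kv)
    print(f"{kv} kV -> {v:.2e} m/s ({v / C:.0%} of the speed of light)")
```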
Monochrome CRTs may have a tinted barium-lead glass formulation in both the screen and funnel, with a potash-soda lead glass in the neck; the potash-soda and barium-lead formulations have different thermal expansion coefficients. The glass used in the neck must be an excellent electrical insulator to contain the voltages used in the electron optics of the electron gun, such as focusing lenses. The lead in the glass causes it to brown (darken) with use due to x-rays, although usually the CRT cathode wears out due to cathode poisoning before browning becomes apparent. The glass formulation determines the highest possible anode voltage and hence the maximum possible CRT screen size. For color, maximum voltages are often 24–32 kV, while for monochrome it is usually 21 or 24.5 kV, limiting the size of monochrome CRTs to 21 inches, or ~1 kV per inch. The voltage needed depends on the size and type of CRT. Since the formulations are different, they must be compatible with one another, having similar thermal expansion coefficients. The screen may also have an anti-glare or anti-reflective coating, or be ground to prevent reflections. CRTs may also have an anti-static coating.
The leaded glass in the funnels of CRTs may contain 21–25% lead oxide (PbO), the neck may contain 30–40% lead oxide, and the screen may contain 12% barium oxide and 12% strontium oxide. A typical CRT contains several kilograms of lead as lead oxide in the glass, depending on its size; 12-inch CRTs contain 0.5 kg of lead in total while 32-inch CRTs contain up to 3 kg. Strontium oxide began being used in CRTs, its major application, in the 1970s. Before this, CRTs used lead in the faceplate.
Some early CRTs used a metal funnel insulated with polyethylene instead of glass with conductive material. Others had ceramic or blown Pyrex instead of pressed glass funnels. Early CRTs did not have a dedicated anode cap connection; the funnel was the anode connection, so it was live during operation.
The funnel is coated on the inside and outside with a conductive coating, making the funnel a capacitor, helping stabilize and filter the anode voltage of the CRT, and significantly reducing the amount of time needed to turn on a CRT. The stability provided by the coating solved problems inherent to early power supply designs, which used vacuum tubes. Because the funnel is used as a capacitor, the glass used in the funnel must be an excellent electrical insulator (dielectric). The inner coating has a positive voltage (the anode voltage, which can be several kV) while the outer coating is connected to ground. CRTs powered by more modern power supplies do not need to be connected to ground, due to the more robust design of modern power supplies. The capacitor formed by the funnel has a value of only 5–10 nF, but it is charged to the full anode voltage. The capacitor formed by the funnel can also suffer from dielectric absorption, similarly to other types of capacitors. Because of this, CRTs have to be discharged before handling to prevent injury.
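A hedged worked example of why discharging matters, using the capacitance range quoted above and an assumed 25 kV anode voltage (not a value from the article):

```python
# Hypothetical worked example: energy stored in the capacitor formed by the funnel
# coatings, illustrating why a CRT must be discharged before handling.
def stored_energy_joules(capacitance_nf: float, anode_kv: float) -> float:
    """E = 1/2 * C * V^2 for the funnel 'capacitor'."""
    c = capacitance_nf * 1e-9
    v = anode_kv * 1e3
    return 0.5 * c * v * v

# 10 nF charged to an assumed 25 kV stores about 3 J, delivered almost instantly if shorted.
print(f"{stored_energy_joules(10, 25):.2f} J")
```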
The depth of a CRT is related to its screen size. Usual deflection angles were 90° for computer monitor CRTs and small CRTs and 110° which was the standard in larger TV CRTs, with 120 or 125° being used in slim CRTs made since 2001–2005 in an attempt to compete with LCD TVs. Over time, deflection angles increased as they became practical, from 50° in 1938 to 110° in 1959, and 125° in the 2000s. 140° deflection CRTs were researched but never commercialized, as convergence problems were never resolved.
Size and weight.
The size of a CRT can be measured by the screen's "entire" area (or face diagonal) or alternatively by only its "viewable" area (or diagonal) that is coated by phosphor and surrounded by black edges.
While the viewable area may be rectangular, the edges of the CRT may have a curvature (e.g. black stripe CRTs, first made by Toshiba in 1972) or the edges may be black and truly flat (e.g. Flatron CRTs), or the viewable area may follow the curvature of the edges of the CRT (with or without black edges or curved edges).
Small CRTs below 3 inches were made for handheld TVs such as the MTV-1 and for viewfinders in camcorders. In these, there may be no black edges, though the screens are truly flat.
Most of the weight of a CRT comes from the thick glass screen, which comprises 65% of the total weight of a CRT and limits its practical size (see Size below). The funnel and neck glass comprise the remaining 30% and 5% respectively. The glass in the funnel varies in thickness to join the thin neck with the thick screen. Chemically or thermally tempered glass may be used to reduce the weight of the CRT glass.
Anode.
The outer conductive coating is connected to ground while the inner conductive coating is connected using the anode button/cap through a series of capacitors and diodes (a Cockcroft–Walton generator) to the high voltage flyback transformer; the inner coating is the anode of the CRT, which, together with an electrode in the electron gun, is also known as the final anode. The inner coating is connected to the electrode using springs. The electrode forms part of a bipotential lens. The capacitors and diodes serve as a voltage multiplier for the current delivered by the flyback.
For the inner funnel coating, monochrome CRTs use aluminum while color CRTs use aquadag; Some CRTs may use iron oxide on the inside. On the outside, most CRTs (but not all) use aquadag. Aquadag is an electrically conductive graphite-based paint. In color CRTs, the aquadag is sprayed onto the interior of the funnel whereas historically aquadag was painted into the interior of monochrome CRTs.
The anode is used to accelerate the electrons towards the screen and also collects the secondary electrons that are emitted by the phosphor particles in the vacuum of the CRT.
The anode cap connection in modern CRTs must be able to handle up to 55–60kV depending on the size and brightness of the CRT. Higher voltages allow for larger CRTs, higher image brightness, or a tradeoff between the two. It consists of a metal clip that expands on the inside of an anode button that is embedded on the funnel glass of the CRT. The connection is insulated by a silicone suction cup, possibly also using silicone grease to prevent corona discharge.
The anode button must be specially shaped to establish a hermetic seal between the button and funnel. X-rays may leak through the anode button, although that may not be the case in newer CRTs starting from the late 1970s to early 1980s, thanks to a new button and clip design. The button may consist of a set of three nested cups, with the outermost cup being made of a nickel–chromium–iron alloy containing 40–49% nickel and 3–6% chromium to make the button easy to fuse to the funnel glass, a first inner cup made of thick inexpensive iron to shield against x-rays, and a second innermost cup, also made of iron or another electrically conductive metal, to connect to the clip. The cups must be heat resistant enough and have thermal expansion coefficients similar to that of the funnel glass to withstand being fused to it. The inner side of the button is connected to the inner conductive coating of the CRT. The anode button may be attached to the funnel while it is being pressed into shape in a mold. Alternatively, the x-ray shielding may instead be built into the clip.
The flyback transformer is also known as an IHVT (Integrated High Voltage Transformer) if it includes a voltage multiplier. The flyback uses a ceramic or powdered iron core to enable efficient operation at high frequencies. The flyback contains one primary and many secondary windings that provide several different voltages. The main secondary winding supplies the voltage multiplier with voltage pulses to ultimately supply the CRT with the high anode voltage it uses, while the remaining windings supply the CRT's filament voltage, keying pulses, focus voltage and voltages derived from the scan raster. When the transformer is turned off, the flyback's magnetic field quickly collapses which induces high voltage in its windings. The speed at which the magnetic field collapses determines the voltage that is induced, so the voltage increases alongside its speed. A capacitor (Retrace Timing Capacitor) or series of capacitors (to provide redundancy) is used to slow the collapse of the magnetic field.
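The sketch below is a rough model, with assumed component values not taken from the article, of how the retrace timing capacitor sets the flyback (retrace) interval: during retrace the deflection coil and the capacitor form a resonant circuit whose half-period approximates the retrace time.

```python
# Rough sketch with assumed component values (not given in the article): the horizontal
# deflection coil and the retrace timing capacitor resonate during retrace, and the
# retrace (flyback) interval is roughly half of one resonant period, pi * sqrt(L * C).
# A larger capacitor slows the field collapse and lowers the induced peak voltage.
import math

def retrace_time_us(coil_mh: float, retrace_cap_nf: float) -> float:
    return math.pi * math.sqrt(coil_mh * 1e-3 * retrace_cap_nf * 1e-9) * 1e6

# e.g. an assumed 1 mH yoke winding with a 12 nF retrace capacitor gives ~10.9 us,
# comparable to the ~11 us horizontal blanking interval of standard-definition TV.
print(f"{retrace_time_us(1.0, 12):.1f} us")
```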
The design of the high voltage power supply in a product using a CRT has an influence on the amount of x-rays emitted by the CRT. The amount of emitted x-rays increases with both higher voltages and currents. If the product, such as a TV set, uses an unregulated high voltage power supply, meaning that the anode and focus voltages go down with increasing electron beam current when displaying a bright image, the amount of emitted x-rays is at its highest when the CRT is displaying a moderately bright image, since when displaying dark or bright images the higher anode voltage counteracts the lower electron beam current, and vice versa, respectively. The high voltage regulator and rectifier vacuum tubes in some old CRT TV sets may also emit x-rays.
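A toy model, not from the article, that illustrates why an unregulated supply emits the most x-rays at moderate brightness: x-ray output is taken as roughly proportional to beam current times the square of the (sagging) anode voltage, and all numbers below are assumptions.

```python
# Toy model (an illustration, not from the article): with an unregulated supply the anode
# voltage sags as beam current rises. Taking x-ray (bremsstrahlung) output as roughly
# proportional to I * V^2, emission peaks at a moderate beam current rather than at a
# dark (low current) or fully bright (sagged voltage) picture.
def xray_relative(beam_ma: float, no_load_kv: float = 27.0, sag_kv_per_ma: float = 6.0) -> float:
    anode_kv = max(no_load_kv - sag_kv_per_ma * beam_ma, 0.0)
    return beam_ma * anode_kv ** 2

for ma in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0):
    print(f"{ma:.1f} mA -> relative x-ray output {xray_relative(ma):.0f}")
```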
Electron gun.
The electron gun emits the electrons that ultimately hit the phosphors on the screen of the CRT. The electron gun contains a heater, which heats a cathode, which generates electrons that, using grids, are focused and ultimately accelerated into the screen of the CRT. The acceleration occurs in conjunction with the inner aluminum or aquadag coating of the CRT. The electron gun is positioned so that it aims at the center of the screen. It is inside the neck of the CRT, and it is held together and mounted to the neck using glass beads or glass support rods, which are the glass strips on the electron gun. The electron gun is made separately and then placed inside the neck through a process called "winding", or sealing. The electron gun has a glass wafer that is fused to the neck of the CRT. The connections to the electron gun penetrate the glass wafer. Once the electron gun is inside the neck, its metal parts (grids) are arced between each other using high voltage to smooth any rough edges in a process called spot knocking, to prevent the rough edges in the grids from generating secondary electrons.
Construction and method of operation.
The electron gun has an indirectly heated hot cathode that is heated by a tungsten filament heating element; the heater may draw 0.5–2 A of current depending on the CRT. The voltage applied to the heater can affect the life of the CRT. Heating the cathode energizes the electrons in it, aiding electron emission, while at the same time current is supplied to the heater, typically anywhere from 140 mA at 1.5 V to 600 mA at 6.3 V. The cathode creates an electron cloud (emits electrons) whose electrons are extracted, accelerated and focused into an electron beam. Color CRTs have three cathodes: one each for red, green, and blue. The heater sits inside the cathode but does not touch it; the cathode has its own separate electrical connection. The cathode is a material coated onto a piece of nickel, which provides the electrical connection and structural support; the heater sits inside this piece without touching it.
There are several short circuits that can occur in a CRT electron gun. One is a heater-to-cathode short, which causes the cathode to permanently emit electrons, which may produce an image with a bright red, green or blue tint and retrace lines, depending on the cathode(s) affected. Alternatively, the cathode may short to the control grid, possibly causing similar effects, or the control grid and screen grid (G2) can short, causing a very dark image or no image at all. The cathode may be surrounded by a shield to prevent sputtering.
The cathode is a layer of barium oxide coated on a piece of nickel for electrical and mechanical support. The barium oxide must be activated by heating to enable it to release electrons. Activation is necessary because barium oxide is not stable in air, so it is applied to the cathode as barium carbonate, which cannot emit electrons. Activation heats the barium carbonate to decompose it into barium oxide and carbon dioxide while forming a thin layer of metallic barium on the cathode. Activation is done while forming the vacuum (described in the Evacuation section below). After activation, the oxide can become damaged by several common gases such as water vapor, carbon dioxide, and oxygen. Alternatively, barium strontium calcium carbonate may be used instead of barium carbonate, yielding barium, strontium and calcium oxides after activation. During operation, the barium oxide is heated to 800–1,000 °C, at which point it starts shedding electrons.
Since it is a hot cathode, it is prone to cathode poisoning, the formation of a positive ion layer that prevents the cathode from emitting electrons, reducing image brightness significantly or completely and causing focus and intensity to be affected by the frequency of the video signal, preventing detailed images from being displayed by the CRT. The positive ions come from leftover air molecules inside the CRT or from the cathode itself, which react over time with the surface of the hot cathode. Reducing metals such as manganese, zirconium, magnesium, aluminum or titanium may be added to the piece of nickel to lengthen the life of the cathode; during activation, the reducing metals diffuse into the barium oxide, improving its lifespan, especially at high electron beam currents. CRTs can wear or burn out due to cathode poisoning, which is accelerated by increased cathode current (overdriving). In color CRTs, since there are three cathodes, one each for red, green and blue, one or more poisoned cathodes may cause the partial or complete loss of one or more colors, tinting the image. The ion layer may also act as a capacitor in series with the cathode, inducing thermal lag. The cathode may instead be made of scandium oxide or incorporate it as a dopant, to delay cathode poisoning, extending the life of the cathode by up to 15%.
The rate of emission of electrons from the cathodes is related to their surface area. A cathode with more surface area creates more electrons, in a larger electron cloud, which makes focusing the electron cloud into an electron beam more difficult. Normally, only a part of the cathode emits electrons unless the CRT displays images with parts that are at full image brightness; only the parts at full brightness cause all of the cathode to emit electrons. The area of the cathode that emits electrons grows from the center outwards as brightness increases, so cathode wear may be uneven. When only the center of the cathode is worn, the CRT may brightly light those parts of images that have full image brightness but not show darker parts of images at all; in such a case the CRT displays a poor gamma characteristic.
A voltage negative with respect to the cathode is applied to the first (control) grid (G1) to control the emission of electrons into the rest of the electron gun. G1 in practice is a Wehnelt cylinder. The brightness of the image on the screen depends on both the anode voltage and the electron beam current; in practice the former is held constant, while the latter is controlled by varying the difference in voltage between the cathode and the G1 control grid. The second (screen) grid of the gun (G2) then accelerates the electrons towards the screen using several hundred DC volts. A third grid (G3) then electrostatically focuses the electron beam before it is deflected and later accelerated by the anode voltage onto the screen. Electrostatic focusing of the electron beam may be accomplished using an einzel lens energized at up to 600 volts. Before electrostatic focusing, focusing the electron beam required a large, heavy and complex mechanical focusing system placed outside the electron gun.
However, electrostatic focusing cannot be accomplished near the final anode of the CRT due to its high voltage of tens of kilovolts, so a high voltage (≈600–8,000 V) electrode, together with an electrode at the final anode voltage of the CRT, may be used for focusing instead. Such an arrangement is called a bipotential lens, which also offers higher performance than an einzel lens. Alternatively, focusing may be accomplished using a magnetic focusing coil together with a high anode voltage of tens of kilovolts. However, magnetic focusing is expensive to implement, so it is rarely used in practice. Some CRTs may use two grids and lenses to focus the electron beam. The focus voltage is generated in the flyback using a subset of the flyback's high voltage winding in conjunction with a resistive voltage divider. The focus electrode is connected alongside the other connections that are in the neck of the CRT.
There is a voltage called the cutoff voltage, applied to G1, which produces black on the screen because it causes the image created by the electron beam to disappear. In a color CRT with three guns, the guns have different cutoff voltages. Many CRTs share grids G1 and G2 across all three guns, increasing image brightness and simplifying adjustment, since on such CRTs there is a single cutoff voltage for all three guns (as G1 is shared across all guns), but placing additional stress on the video amplifier used to feed video into the electron gun's cathodes, since the cutoff voltage becomes higher. Monochrome CRTs do not suffer from this problem. In monochrome CRTs video is fed to the gun by varying the voltage on the first control grid.
During retracing of the electron beam, the preamplifier that feeds the video amplifier is disabled and the video amplifier is biased to a voltage higher than the cutoff voltage to prevent retrace lines from showing, or G1 can have a large negative voltage applied to it to prevent electrons from getting out of the cathode. This is known as blanking. (see Vertical blanking interval and Horizontal blanking interval.) Incorrect biasing can lead to visible retrace lines on one or more colors, creating retrace lines that are tinted or white (for example, tinted red if the red color is affected, tinted magenta if the red and blue colors are affected, and white if all colors are affected). Alternatively, the amplifier may be driven by a video processor that also introduces an OSD (On Screen Display) into the video stream that is fed into the amplifier, using a fast blanking signal. TV sets and computer monitors that incorporate CRTs need a DC restoration circuit to provide a video signal to the CRT with a DC component, restoring the original brightness of different parts of the image.
The electron beam may be affected by the Earth's magnetic field, causing it to normally enter the focusing lens off-center; this can be corrected using astigmation controls. Astigmation controls are both magnetic and electronic (dynamic); magnetic does most of the work while electronic is used for fine adjustments. One of the ends of the electron gun has a glass disk, the edges of which are fused with the edge of the neck of the CRT, possibly using frit; the metal leads that connect the electron gun to the outside pass through the disk.
Some electron guns have a quadrupole lens with dynamic focus to alter the shape and adjust the focus of the electron beam, varying the focus voltage depending on the position of the electron beam to maintain image sharpness across the entire screen, especially at the corners. They may also have a bleeder resistor to derive voltages for the grids from the final anode voltage.
After the CRTs were manufactured, they were aged to allow cathode emission to stabilize.
The electron guns in color CRTs are driven by a video amplifier which takes a signal per color channel and amplifies it to 40–170 V per channel, to be fed into the electron gun's cathodes; each electron gun has its own channel (one per color) and all channels may be driven by the same amplifier, which internally has three separate channels. The amplifier's capabilities limit the resolution, refresh rate and contrast ratio of the CRT, as the amplifier needs to provide high bandwidth and large voltage variations at the same time; higher resolutions and refresh rates need higher bandwidths (the speed at which the voltage can be varied, and thus how fast the beam can switch between black and white) and higher contrast ratios need larger voltage variations or amplitude for lower black and higher white levels. 30 MHz of bandwidth can usually provide 720p or 1080i resolution, while 20 MHz usually provides around 600 horizontal lines of resolution (counted from top to bottom), for example. The difference in voltage between the cathode and the control grid is what modulates the electron beam, modulating its current and thus creating the shades that build up the image line by line; this can also affect the brightness of the image. The phosphors used in color CRTs produce different amounts of light for a given amount of energy, so to produce white on a color CRT, all three guns must output differing amounts of energy. The gun that outputs the most energy is the red gun, since the red phosphor emits the least amount of light.
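As a rough, hedged estimate of how the quoted bandwidth figures relate to resolution and refresh rate (the 1.3 blanking-overhead factor and the 1280×720 at 60 Hz mode are assumptions, not values from the article):

```python
# Rough estimate (an illustration, not from the article) of the video-amplifier bandwidth
# needed for a given CRT mode: one full cycle of the video signal can at best render a
# pair of alternating black/white pixels, so bandwidth ~ pixel clock / 2.
def needed_bandwidth_mhz(h_pixels: int, v_lines: int, refresh_hz: float,
                         blanking_overhead: float = 1.3) -> float:
    pixel_clock = h_pixels * v_lines * refresh_hz * blanking_overhead
    return pixel_clock / 2 / 1e6

# An assumed 1280x720 mode at 60 Hz needs on the order of 35 MHz, broadly consistent
# with the ~30 MHz figure quoted above for 720p-class signals.
print(f"{needed_bandwidth_mhz(1280, 720, 60):.0f} MHz")
```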
Gamma.
CRTs have a pronounced triode characteristic, which results in significant gamma (a nonlinear relationship in an electron gun between applied video voltage and beam intensity).
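A minimal sketch of this relationship, assuming a conventional exponent of about 2.2 (the article itself does not give a value):

```python
# Illustrative sketch of the CRT transfer ("gamma") characteristic: light output rises
# roughly as a power of the drive voltage, so video signals are gamma-corrected at the
# source. The exponent of 2.2 is a conventional assumption, not taken from the article.
GAMMA = 2.2

def crt_light_output(normalized_drive: float) -> float:
    """Relative luminance produced for a drive voltage in [0, 1]."""
    return normalized_drive ** GAMMA

def gamma_correct(normalized_signal: float) -> float:
    """Pre-distortion applied at the source so the displayed image looks linear."""
    return normalized_signal ** (1.0 / GAMMA)

drive = 0.5
print(crt_light_output(drive))                 # ~0.22: half drive gives only ~22% luminance
print(crt_light_output(gamma_correct(drive)))  # ~0.5 once the signal is pre-corrected
```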
Deflection.
There are two types of deflection: magnetic and electrostatic. Magnetic is usually used in TVs and monitors as it allows for higher deflection angles (and hence shallower CRTs) and deflection power (which allows for higher electron beam current and hence brighter images) while avoiding the need for high voltages for deflection of up to 2 kV, while oscilloscopes often use electrostatic deflection since the raw waveforms captured by the oscilloscope can be applied directly (after amplification) to the vertical electrostatic deflection plates inside the CRT.
Magnetic deflection.
Those that use magnetic deflection may use a yoke that has two pairs of deflection coils; one pair for vertical and another for horizontal deflection. The yoke can be bonded (be integral) or removable. Bonded yokes use glue or a plastic to bond the yoke to the area between the neck and the funnel of the CRT, while removable yokes are clamped. The yoke generates heat whose removal is essential, since the conductivity of glass goes up with increasing temperature and the glass needs to be insulating for the CRT to remain usable as a capacitor. The temperature of the glass below the yoke is thus checked during the design of a new yoke. The yoke contains the deflection and convergence coils with a ferrite core to reduce loss of magnetic force, as well as the magnetized rings used to align or adjust the electron beams in color CRTs (the color purity and convergence rings, for example) and monochrome CRTs. The yoke may be connected using a connector; the order in which the deflection coils of the yoke are connected determines the orientation of the image displayed by the CRT. The deflection coils may be held in place using polyurethane glue.
The deflection coils are driven by sawtooth signals that may be delivered through VGA as horizontal and vertical sync signals. A CRT needs two deflection circuits: a horizontal and a vertical circuit, which are similar except that the horizontal circuit runs at a much higher frequency (the horizontal scan rate) of 15–240 kHz, depending on the refresh rate of the CRT and the number of horizontal lines to be drawn (the vertical resolution of the CRT). The higher frequency makes it more susceptible to interference, so an automatic frequency control (AFC) circuit may be used to lock the phase of the horizontal deflection signal to that of a sync signal, to prevent the image from becoming distorted diagonally. The vertical frequency varies according to the refresh rate of the CRT, so a CRT with a 60 Hz refresh rate has a vertical deflection circuit running at 60 Hz. The horizontal and vertical deflection signals may be generated using two circuits that work differently; the horizontal deflection signal may be generated using a voltage controlled oscillator (VCO) while the vertical signal may be generated using a triggered relaxation oscillator. In many TVs, the frequencies at which the deflection coils run are in part determined by the inductance value of the coils. CRTs had differing deflection angles; the higher the deflection angle, the shallower the CRT for a given screen size, but at the cost of more deflection power and lower optical performance.
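A quick arithmetic check, not from the article, of where the quoted 15–240 kHz range comes from: the horizontal scan rate is simply the total number of lines per frame times the frame rate.

```python
# Back-of-the-envelope check (not from the article): the horizontal frequency equals total
# lines per frame times the frame rate (interlacing halves the lines drawn per vertical
# sweep but doubles the sweep rate, leaving this product unchanged).
def horizontal_khz(total_lines: int, frames_per_second: float) -> float:
    return total_lines * frames_per_second / 1e3

print(f"{horizontal_khz(525, 29.97):.2f} kHz")   # ~15.73 kHz, standard-definition NTSC
print(f"{horizontal_khz(1066, 85):.1f} kHz")     # ~91 kHz, an assumed 1280x1024 @ 85 Hz monitor mode
```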
Higher deflection power means more current is sent to the deflection coils to bend the electron beam at a higher angle, which in turn may generate more heat or require electronics that can handle the increased power. Heat is generated due to resistive and core losses. The deflection power is measured in mA per inch. The vertical deflection coils may require ~24 volts while the horizontal deflection coils require ~120 volts to operate.
The deflection coils are driven by deflection amplifiers. The horizontal deflection coils may also be driven in part by the horizontal output stage of a TV set. The stage contains a capacitor that is in series with the horizontal deflection coils that performs several functions, among them are: shaping the sawtooth deflection signal to match the curvature of the CRT and centering the image by preventing a DC bias from developing on the coil. At the beginning of retrace, the magnetic field of the coil collapses, causing the electron beam to return to the center of the screen, while at the same time the coil returns energy into capacitors, the energy of which is then used to force the electron beam to go to the left of the screen.
Due to the high frequency at which the horizontal deflection coils operate, the energy in the deflection coils must be recycled to reduce heat dissipation. Recycling is done by transferring the energy in the deflection coils' magnetic field to a set of capacitors. The voltage on the horizontal deflection coils is negative when the electron beam is on the left side of the screen and positive when the electron beam is on the right side of the screen. The energy required for deflection is dependent on the energy of the electrons. Higher energy (voltage and/or current) electron beams need more energy to be deflected, and are used to achieve higher image brightness.
Electrostatic deflection.
Electrostatic deflection is mostly used in oscilloscopes. Deflection is carried out by applying a voltage across two pairs of plates, one for horizontal and the other for vertical deflection. The electron beam is steered by varying the voltage difference across the plates in a pair. For example, applying a voltage to the upper plate of the vertical deflection pair, while keeping the voltage on the bottom plate at 0 volts, will cause the electron beam to be deflected towards the upper part of the screen; increasing the voltage on the upper plate while keeping the bottom plate at 0 will deflect the beam to a higher point on the screen (a larger deflection angle). The same applies to the horizontal deflection plates. Increasing the length of the plates in a pair, or reducing the spacing between them, also increases the deflection angle, as sketched below.
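A hedged illustration of the parallel-plate deflection relationship mentioned above, with assumed oscilloscope-like dimensions (none of these numbers come from the article):

```python
# Illustrative calculation with assumed dimensions (not from the article): for parallel
# deflection plates, the spot displacement on the screen is approximately
#   y = (plate_length * plate_to_screen_distance * V_deflect) / (2 * plate_gap * V_accel),
# so deflection grows with longer plates, smaller plate spacing, and lower accelerating voltage.
def spot_deflection_mm(plate_len_mm: float, screen_dist_mm: float, gap_mm: float,
                       v_deflect: float, v_accel: float) -> float:
    return plate_len_mm * screen_dist_mm * v_deflect / (2 * gap_mm * v_accel)

# Assumed oscilloscope-like numbers: 25 mm plates, 2 mm gap, 150 mm to the screen,
# 2 kV acceleration, 100 V across the plates.
print(f"{spot_deflection_mm(25, 150, 2, 100, 2000):.1f} mm")
```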
Burn-in.
Burn-in is when images are physically "burned" into the screen of the CRT; this occurs due to degradation of the phosphors due to prolonged electron bombardment of the phosphors, and happens when a fixed image or logo is left for too long on the screen, causing it to appear as a "ghost" image or, in severe cases, also when the CRT is off. To counter this, screensavers were used in computers to minimize burn-in. Burn-in is not exclusive to CRTs, as it also happens to plasma displays and OLED displays.
Evacuation.
The CRT is evacuated or exhausted to a high partial vacuum in a ~375–475 °C oven in a process called "baking" or "bake-out". The evacuation process also outgasses any materials inside the CRT, while decomposing others such as the polyvinyl alcohol used to apply the phosphors. The heating and cooling are done gradually to avoid inducing stress that could stiffen and possibly crack the glass; the oven heats the gases inside the CRT, increasing the speed of the gas molecules, which increases the chances of them being drawn out by the vacuum pump. The temperature of the CRT is kept below that of the oven, and the oven starts to cool just after the CRT reaches 400 °C; alternatively, the CRT may be kept at a temperature higher than 400 °C for up to 15–55 minutes. The CRT is heated during or after evacuation, and the heat may be used simultaneously to melt the frit in the CRT, joining the screen and funnel. The pump used is a turbomolecular pump or a diffusion pump; formerly, mercury vacuum pumps were also used. After baking, the CRT is disconnected ("sealed" or "tipped off") from the vacuum pump. The getter is then fired using an RF (induction) coil. The getter is usually in the funnel or in the neck of the CRT. The getter material, which is often barium-based, catches any remaining gas particles as it evaporates due to heating induced by the RF coil (which may be combined with exothermic heating within the material); the vapor fills the CRT, trapping any gas molecules that it encounters, and condenses on the inside of the CRT, forming a layer that contains trapped gas molecules. Hydrogen may be present in the material to help distribute the barium vapor. The material is heated to temperatures above 1,000 °C, causing it to evaporate. Partial loss of vacuum in a CRT can result in a hazy image, blue glowing in the neck of the CRT, flashovers, loss of cathode emission or focusing problems.
Rebuilding.
CRTs used to be rebuilt, that is, repaired or refurbished. The rebuilding process included the disassembly of the CRT, the disassembly and repair or replacement of the electron gun(s), the removal and redeposition of phosphors and aquadag, and so on. Rebuilding was popular until the 1960s because CRTs were expensive and wore out quickly, making repair worthwhile. The last CRT rebuilder in the US closed in 2010, and the last in Europe, RACS, which was located in France, closed in 2013.
Reactivation.
Also known as rejuvenation, the goal is to temporarily restore the brightness of a worn CRT. This is often done by carefully increasing the voltage on the cathode heater and the current and voltage on the control grids of the electron gun manually. Some rejuvenators can also fix heater-to-cathode shorts by running a capacitive discharge through the short.
Phosphors.
Because they sit in the vacuum of the CRT and are struck by the electron beam, the phosphors emit secondary electrons. The secondary electrons are collected by the anode of the CRT. Secondary electrons generated by phosphors need to be collected to prevent charge from developing on the screen, which would lead to reduced image brightness, since the charge would repel the electron beam.
The phosphors used in CRTs often contain rare earth metals, replacing earlier dimmer phosphors. Early red and green phosphors contained cadmium, and some black and white CRT phosphors contained beryllium in the form of zinc beryllium silicate, although white phosphors containing cadmium, zinc and magnesium with silver, copper or manganese as dopants were also used. The rare earth phosphors used in CRTs are more efficient (produce more light) than earlier phosphors. The phosphors adhere to the screen because of Van der Waals and electrostatic forces. Phosphors composed of smaller particles adhere more strongly to the screen. The phosphors, together with the carbon used to prevent light bleeding (in color CRTs), can be easily removed by scratching.
Several dozen types of phosphors were available for CRTs. Phosphors were classified according to color, persistence, luminance rise and fall curves, color depending on anode voltage (for phosphors used in penetration CRTs), intended use, chemical composition, safety, sensitivity to burn-in, and secondary emission properties. Examples of rare earth phosphors are yttrium oxide for red and yttrium silicide for blue in beam index tubes, while an example of an earlier phosphor is copper cadmium sulfide for red.
SMPTE-C phosphors have properties defined by the SMPTE-C standard, which defines a color space of the same name. The standard prioritizes accurate color reproduction, which was made difficult by the different phosphors and color spaces used in the NTSC and PAL color systems. PAL TV sets have subjectively better color reproduction due to the use of saturated green phosphors, which have relatively long decay times that are tolerated in PAL since there is more time in PAL for phosphors to decay, due to its lower framerate. SMPTE-C phosphors were used in professional video monitors.
The phosphor coating on monochrome and color CRTs may have an aluminum coating on its rear side used to reflect light forward, provide protection against ions to prevent ion burn by negative ions on the phosphor, manage heat generated by electrons colliding against the phosphor, prevent static build up that could repel electrons from the screen, form part of the anode and collect the secondary electrons generated by the phosphors in the screen after being hit by the electron beam, providing the electrons with a return path. The electron beam passes through the aluminum coating before hitting the phosphors on the screen; the aluminum attenuates the electron beam voltage by about 1 kV. A film or lacquer may be applied to the phosphors to reduce the surface roughness of the surface formed by the phosphors to allow the aluminum coating to have a uniform surface and prevent it from touching the glass of the screen. This is known as filming. The lacquer contains solvents that are later evaporated; the lacquer may be chemically roughened to cause an aluminum coating with holes to be created to allow the solvents to escape.
Phosphor persistence.
Various phosphors are available depending upon the needs of the measurement or display application. The brightness, color, and persistence of the illumination depends upon the type of phosphor used on the CRT screen. Phosphors are available with persistences ranging from less than one microsecond to several seconds. For visual observation of brief transient events, a long persistence phosphor may be desirable. For events which are fast and repetitive, or high frequency, a short-persistence phosphor is generally preferable. The phosphor persistence must be low enough to avoid smearing or ghosting artifacts at high refresh rates.
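A small illustration, not from the article, of how the refresh rate bounds the tolerable persistence:

```python
# Small illustration (not from the article): the frame period at a given refresh rate sets
# an upper bound on how much phosphor persistence is tolerable before successive frames
# visibly smear into one another.
def frame_period_ms(refresh_hz: float) -> float:
    return 1000.0 / refresh_hz

for hz in (60, 85, 120):
    print(f"{hz} Hz refresh -> {frame_period_ms(hz):.1f} ms per frame")
```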
Limitations and workarounds.
Blooming.
Variations in anode voltage can lead to variations in brightness in parts or all of the image, in addition to blooming, shrinkage or the image getting zoomed in or out. Lower voltages lead to blooming and zooming in, while higher voltages do the opposite. Some blooming is unavoidable, which can be seen as bright areas of an image that expand, distorting or pushing aside surrounding darker areas of the same image. Blooming occurs because bright areas have a higher electron beam current from the electron gun, making the beam wider and harder to focus. Poor voltage regulation causes focus and anode voltage to go down with increasing electron beam current.
Doming.
Doming is a phenomenon found on some CRT TVs in which parts of the shadow mask become heated. In TVs that exhibit this behavior, it tends to occur in high-contrast scenes in which there is a largely dark scene with one or more localized bright spots. As the electron beam hits the shadow mask in these areas it heats unevenly. The shadow mask warps due to the heat differences, which causes the electron beams to strike the wrong colored phosphors and incorrect colors to be displayed in the affected area. Thermal expansion can cause the shadow mask to expand by around 100 microns.
During normal operation, the shadow mask is heated to around 80–90 °C. Bright areas of images heat the shadow mask more than dark areas, leading to uneven heating and warping (blooming) due to thermal expansion caused by the increased electron beam current. The shadow mask is usually made of steel, but it can be made of Invar (a low-thermal-expansion nickel–iron alloy), which withstands two to three times more current than conventional masks without noticeable warping while making higher-resolution CRTs easier to achieve. Coatings that dissipate heat may be applied to the shadow mask to limit blooming, in a process called blackening.
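A rough back-of-the-envelope check of the ~100 micron figure, using an assumed linear expansion coefficient for steel and an assumed mask width (the values are illustrative, not from the source):

```python
# Illustrative estimate of shadow-mask thermal expansion (doming).
# Assumed values: coefficient of linear expansion for steel and a
# hypothetical mask width; only the order of magnitude matters here.

alpha_steel = 12e-6       # 1/K, typical linear expansion coefficient of steel
mask_width_m = 0.40       # m, assumed width of the shadow mask
delta_t_k = 20            # K, assumed local temperature rise above the rest of the mask

expansion_m = alpha_steel * mask_width_m * delta_t_k
print(f"Estimated expansion: {expansion_m * 1e6:.0f} micrometres")
# ~96 micrometres, consistent with the ~100 micron figure quoted above.
```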
Bimetal springs may be used in CRTs used in TVs to compensate for warping that occurs as the electron beam heats the shadow mask, causing thermal expansion. The shadow mask is attached to the screen using metal pieces, or a rail or frame that is fused to the funnel or the screen glass respectively, holding the shadow mask in tension to minimize warping (if the mask is flat, as used in flat-screen CRT computer monitors) and allowing for higher image brightness and contrast.
Aperture grille screens are brighter since they allow more electrons through, but they require support wires. They are also more resistant to warping. Color CRTs need higher anode voltages than monochrome CRTs to achieve the same brightness since the shadow mask blocks most of the electron beam. Slot masks and especially aperture grilles do not block as many electrons, resulting in a brighter image for a given anode voltage, but aperture grille CRTs are heavier. Shadow masks block 80–85% of the electron beam while aperture grilles allow more electrons to pass through.
High voltage.
Image brightness is related to the anode voltage and to the CRT's size, so higher voltages are needed for both larger screens and higher image brightness. Image brightness is also controlled by the current of the electron beam. Higher anode voltages and electron beam currents also mean higher amounts of x-rays and heat generation, since the electrons have a higher speed and energy. Leaded glass and special barium–strontium glass are used to block most x-ray emissions.
Size.
A practical limit on the size of a CRT is the weight of the thick glass needed to safely sustain its vacuum, since a CRT's exterior is exposed to the full atmospheric pressure, which for instance totals on a 27-inch (400 in2) screen. For example, the large 43-inch Sony PVM-4300 weighs , much heavier than 32-inch CRTs (up to ) and 19-inch CRTs (up to ). Much lighter flat panel TVs are only ~ for 32-inch and for 19-inch.
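The total atmospheric load can be estimated from the quoted screen area using standard sea-level pressure; the following is an illustrative calculation, not a figure from the source.

```python
# Illustrative calculation of the atmospheric load on a CRT faceplate.
# The 400 square-inch figure comes from the text; the pressure is
# standard sea-level atmospheric pressure.

screen_area_in2 = 400          # in^2, quoted for a 27-inch screen
atm_pressure_psi = 14.7        # lbf/in^2, standard atmosphere

force_lbf = screen_area_in2 * atm_pressure_psi
force_kn = force_lbf * 4.448 / 1000
print(f"Total load: ~{force_lbf:.0f} lbf (~{force_kn:.0f} kN)")
# ~5,900 lbf (~26 kN), which is why thick glass is needed.
```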
Size is also limited by anode voltage, as it would require a higher dielectric strength to prevent arcing and the electrical losses and ozone generation it causes, without sacrificing image brightness.
Shadow masks also become more difficult to make with increasing resolution and size.
Limits imposed by deflection.
At high deflection angles, resolutions and refresh rates (since higher resolutions and refresh rates require significantly higher frequencies to be applied to the horizontal deflection coils), the deflection yoke starts to produce large amounts of heat, because moving the electron beam through a larger angle requires exponentially more power. As an example, increasing the deflection angle from 90° to 120° requires the yoke's power consumption to rise from 40 watts to 80 watts, and increasing it further from 120° to 150° requires it to rise again from 80 to 160 watts. This normally makes CRTs that go beyond certain deflection angles, resolutions and refresh rates impractical, since the coils would generate too much heat due to resistance caused by the skin effect, surface and eddy current losses, and possibly cause the glass underneath the coil to become conductive (as the electrical conductivity of glass increases with increasing temperature). Some deflection yokes are designed to dissipate the heat that comes from their operation. Higher deflection angles in color CRTs directly affect convergence at the corners of the screen, requiring additional compensation circuitry to handle electron beam power and shape, which leads to higher costs and power consumption. Higher deflection angles allow a CRT of a given size to be slimmer, but they also impose more stress on the CRT envelope, especially on the panel, the seal between the panel and funnel, and the funnel itself. The funnel needs to be long enough to minimize stress, as a longer funnel can be better shaped to have lower stress.
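The quoted figures (40 W at 90°, 80 W at 120°, 160 W at 150°) amount to a doubling of yoke power for every additional 30° of deflection; the sketch below simply extrapolates those data points and is not a physical model from the source.

```python
# Sketch of the deflection-power trend quoted in the text: yoke power
# roughly doubles for every extra 30 degrees of deflection angle.
# This is a simple extrapolation of the quoted data points, not a
# physical model.

def yoke_power_w(angle_deg: float, base_angle: float = 90, base_power_w: float = 40) -> float:
    return base_power_w * 2 ** ((angle_deg - base_angle) / 30)

for angle in (90, 120, 150):
    print(f"{angle:>3} deg deflection -> ~{yoke_power_w(angle):.0f} W")
```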
Comparison with other technologies.
One of the defining points of comparison between CRTs and later display technologies is motion performance, which on a CRT depends on refresh rate and phosphor decay.
On CRTs, refresh rates depend on resolution, both of which are ultimately limited by the maximum horizontal scanning frequency of the CRT. Motion blur also depends on the decay time of the phosphors: phosphors that decay too slowly for a given refresh rate may cause smearing or motion blur in the image. In practice, CRTs are limited to a refresh rate of 160 Hz. LCDs that can compete with OLED (dual-layer and mini-LED LCDs) are not available at high refresh rates, although quantum dot LCDs (QLEDs) are available at high refresh rates (up to 144 Hz) and are competitive in color reproduction with OLEDs.
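The link between resolution and refresh rate follows from the horizontal scanning frequency: the refresh rate is roughly the line rate divided by the total number of scan lines per frame (including blanking). The numbers below are hypothetical, chosen only to illustrate the relationship.

```python
# Illustrative relationship between horizontal scan frequency, vertical
# resolution and achievable refresh rate. The numbers are hypothetical
# examples, not specifications of any particular monitor.

def max_refresh_hz(h_scan_khz: float, total_lines: int) -> float:
    """Refresh rate = horizontal line rate / total lines per frame."""
    return (h_scan_khz * 1000) / total_lines

# e.g. a monitor with an assumed 96 kHz maximum horizontal scan rate:
for lines in (600, 800, 1200):   # total lines including vertical blanking
    print(f"{lines} total lines -> up to ~{max_refresh_hz(96, lines):.0f} Hz")
```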
CRT monitors can still outperform LCD and OLED monitors in input lag, as there is no signal processing between the CRT and the display connector of the monitor, since CRT monitors often use VGA, which provides an analog signal that can be fed to a CRT directly. Video cards designed for use with CRTs may have a RAMDAC to generate the analog signals needed by the CRT. CRT monitors are also often capable of displaying sharp images at several resolutions, an ability known as multisyncing. For these reasons, CRTs are often preferred for playing video games made in the early 2000s and earlier, in spite of their bulk, weight and heat generation; some pieces of technology require a CRT to function because they were not built with the capabilities of modern displays in mind.
CRTs tend to be more durable than their flat panel counterparts, though specialised LCDs that have similar durability also exist.
Types.
CRTs were produced in two major categories, picture tubes and display tubes. Picture tubes were used in TVs while display tubes were used in computer monitors. Display tubes had higher resolution and, when used in computer monitors, sometimes had adjustable overscan or underscan.
Picture tube CRTs have overscan, meaning the actual edges of the image are not shown; this is deliberate to allow for adjustment variations between CRT TVs, preventing the ragged edges (due to blooming) of the image from being shown on screen. The shadow mask may have grooves that reflect away the electrons that do not hit the screen due to overscan. Color picture tubes used in TVs were also known as CPTs. CRTs are also sometimes called Braun tubes.
Monochrome CRTs.
If the CRT is a black-and-white (B&W or monochrome) CRT, there is a single electron gun in the neck and the funnel is coated on the inside with aluminum that has been applied by evaporation; the aluminum is evaporated in a vacuum and allowed to condense on the inside of the CRT. This was often done by placing the CRT in a special machine that drew a vacuum within the CRT, evaporated the aluminum using a heater surrounding a piece of aluminum, and then released the vacuum. Aluminum eliminates the need for ion traps, which are otherwise necessary to prevent ion burn on the phosphor, while also reflecting light generated by the phosphor towards the screen, managing heat, and absorbing electrons, providing a return path for them. Previously, funnels were coated on the inside with aquadag, used because it can be applied like paint, and the phosphors were left uncoated. Aluminum started being applied to CRTs in the 1950s, coating the inside of the CRT including the phosphors, which also increased image brightness since the aluminum reflected light (that would otherwise be lost inside the CRT) towards the outside of the CRT. In aluminized monochrome CRTs, aquadag is used on the outside. There is a single aluminum coating covering the funnel and the screen.
The screen, funnel and neck are fused together into a single envelope, possibly using lead enamel seals; a hole is made in the funnel onto which the anode cap is installed, and the phosphor, aquadag and aluminum are applied afterwards. Previously, monochrome CRTs used ion traps that required magnets; the magnet deflected the electrons away from the harder-to-deflect ions, letting the electrons through while the ions collided with a sheet of metal inside the electron gun. Ion burn results in premature wear of the phosphor. Since ions are harder to deflect than electrons, ion burn leaves a black dot in the center of the screen.
The interior aquadag or aluminum coating was the anode and served to accelerate the electrons towards the screen, collect them after hitting the screen while serving as a capacitor together with the outer aquadag coating. The screen has a single uniform phosphor coating and no shadow mask, technically having no resolution limit.
Monochrome CRTs may use ring magnets to adjust the centering of the electron beam and magnets around the deflection yoke to adjust the geometry of the image.
When a monochrome CRT is shut off, the image collapses to a small white dot in the center of the screen as the deflection fields collapse while the electron gun is still emitting; the dot sometimes takes a while to fade because of phosphor persistence.
Color CRTs.
Color CRTs use three different phosphors which emit red, green, and blue light respectively. They are packed together in stripes (as in aperture grille designs) or clusters called "triads" (as in shadow mask CRTs).
Color CRTs have three electron guns, one for each primary color (red, green and blue), arranged either in a straight line (in-line) or in an equilateral triangular configuration (the guns are usually constructed as a single unit). The triangular configuration is often called "delta-gun", based on its relation to the shape of the Greek letter delta (Δ). The arrangement of the phosphors is the same as that of the electron guns. A grille or mask absorbs the electrons that would otherwise hit the wrong phosphor.
A shadow mask tube uses a metal plate with tiny holes, typically in a delta configuration, placed so that the electron beam only illuminates the correct phosphors on the face of the tube, blocking all other electrons. Shadow masks that use slots instead of holes are known as slot masks. The holes or slots are tapered so that electrons that strike the inside of any hole will be reflected back, if they are not absorbed (e.g. due to local charge accumulation), instead of bouncing through the hole to strike a random (wrong) spot on the screen. Another type of color CRT (Trinitron) uses an aperture grille of tensioned vertical wires to achieve the same result. The shadow mask has a single hole for each triad. The shadow mask is usually inch behind the screen.
Trinitron CRTs were different from other color CRTs in that they had a single electron gun with three cathodes, an aperture grille which lets more electrons through, increasing image brightness (since the aperture grille does not block as many electrons), and a vertically cylindrical screen, rather than a curved screen.
The three electron guns are in the neck (except for Trinitrons) and the red, green and blue phosphors on the screen may be separated by a black grid or matrix (called black stripe by Toshiba).
The funnel is coated with aquadag on both sides, while the screen has a separate aluminum coating, deposited in a vacuum after the phosphor coating is applied and facing the electron gun. The aluminum coating protects the phosphor from ions; absorbs secondary electrons and provides them with a return path, preventing them from electrostatically charging the screen, which would repel electrons and reduce image brightness; reflects the light from the phosphors forwards; and helps manage heat. It also serves as the anode of the CRT together with the inner aquadag coating. The inner coating is electrically connected to an electrode of the electron gun using springs, forming the final anode. The outer aquadag coating is connected to ground, possibly using a series of springs or a harness that makes contact with the aquadag.
Shadow mask.
The shadow mask absorbs or reflects electrons that would otherwise strike the wrong phosphor dots, causing color purity issues (discoloration of images); in other words, when set up correctly, the shadow mask helps ensure color purity. When the electrons strike the shadow mask, they release their energy as heat and x-rays. If the electrons have too much energy due to an anode voltage that is too high for example, the shadow mask can warp due to the heat, which can also happen during the Lehr baking at ~435 °C of the frit seal between the faceplate and the funnel of the CRT.
Shadow masks were replaced in TVs by slot masks in the 1970s, since slot masks let more electrons through, increasing image brightness. Shadow masks may be connected electrically to the anode of the CRT. Trinitron used a single electron gun with three cathodes instead of three complete guns. CRT PC monitors usually use shadow masks, except for Sony's Trinitron, Mitsubishi's Diamondtron and NEC's Cromaclear; Trinitron and Diamondtron use aperture grilles while Cromaclear uses a slot mask. Some shadow mask CRTs have color phosphors that are smaller in diameter than the electron beams used to light them, with the intention being to cover the entire phosphor, increasing image brightness. Shadow masks may be pressed into a curved shape.
Screen manufacture.
Early color CRTs did not have a black matrix, which was introduced by Zenith in 1969 and by Panasonic in 1970. The black matrix eliminates light leaking from one phosphor to another by isolating the phosphor dots from one another, so part of the electron beam lands on the black matrix. The black matrix is also made necessary by warping of the shadow mask. Light bleeding may still occur due to stray electrons striking the wrong phosphor dots. At high resolutions and refresh rates, phosphors receive only a very small amount of energy, limiting image brightness.
Several methods were used to create the black matrix. One method coated the screen in photoresist such as dichromate-sensitized polyvinyl alcohol photoresist which was then dried and exposed; the unexposed areas were removed and the entire screen was coated in colloidal graphite to create a carbon film, and then hydrogen peroxide was used to remove the remaining photoresist alongside the carbon that was on top of it, creating holes that in turn created the black matrix. The photoresist had to be of the correct thickness to ensure sufficient adhesion to the screen, while the exposure step had to be controlled to avoid holes that were too small or large with ragged edges caused by light diffraction, ultimately limiting the maximum resolution of large color CRTs. The holes were then filled with phosphor using the method described above. Another method used phosphors suspended in an aromatic diazonium salt that adhered to the screen when exposed to light; the phosphors were applied, then exposed to cause them to adhere to the screen, repeating the process once for each color. Then carbon was applied to the remaining areas of the screen while exposing the entire screen to light to create the black matrix, and a fixing process using an aqueous polymer solution was applied to the screen to make the phosphors and black matrix resistant to water. Black chromium may be used instead of carbon in the black matrix. Other methods were also used.
The phosphors are applied using photolithography. The inner side of the screen is coated with phosphor particles suspended in PVA photoresist slurry, which is then dried using infrared light, exposed, and developed. The exposure is done using a "lighthouse" that uses an ultraviolet light source with a corrector lens to allow the CRT to achieve color purity. Removable shadow masks with spring-loaded clips are used as photomasks. The process is repeated with all colors. Usually the green phosphor is the first to be applied. After phosphor application, the screen is baked to eliminate any organic chemicals (such as the PVA that was used to deposit the phosphor) that may remain on the screen. Alternatively, the phosphors may be applied in a vacuum chamber by evaporating them and allowing them to condense on the screen, creating a very uniform coating. Early color CRTs had their phosphors deposited using silkscreen printing. Phosphors may have color filters over them (facing the viewer), contain pigment of the color emitted by the phosphor, or be encapsulated in color filters to improve color purity and reproduction while reducing glare. Such technology was sold by Toshiba under the Microfilter brand name. Poor exposure due to insufficient light leads to poor phosphor adhesion to the screen, which limits the maximum resolution of a CRT, as the smaller phosphor dots required for higher resolutions cannot receive as much light due to their smaller size.
After the screen is coated with phosphor and aluminum and the shadow mask is installed, the screen is bonded to the funnel using a glass frit that may contain 65–88% lead oxide by weight. The lead oxide is necessary for the glass frit to have a low melting temperature. Boron(III) oxide may also be present to stabilize the frit, with alumina powder as a filler to control the thermal expansion of the frit. The frit may be applied as a paste consisting of frit particles suspended in amyl acetate, or in a polymer with an alkyl methacrylate monomer together with an organic solvent to dissolve the polymer and monomer. The CRT is then baked in an oven, in what is called a Lehr bake, to cure the frit, sealing the funnel and screen together. The frit contains a large quantity of lead, causing color CRTs to contain more lead than their monochrome counterparts. Monochrome CRTs, on the other hand, do not require frit; the funnel can be fused directly to the screen by melting and joining the edges of the funnel and screen using gas flames. Frit is used in color CRTs to prevent deformation of the shadow mask and screen during the fusing process; the edges of the screen and of the funnel that mate with the screen are never melted. A primer may be applied to the edges of the funnel and screen before the frit paste is applied to improve adhesion. The Lehr bake consists of several successive steps that heat and then cool the CRT gradually until it reaches a temperature of 435–475 °C (other sources may state different temperatures, such as 440 °C). After the Lehr bake, the CRT is flushed with air or nitrogen to remove contaminants, the electron gun is inserted and sealed into the neck of the CRT, and a vacuum is formed in the CRT.
Convergence and purity in color CRTs.
Due to limitations in the dimensional precision with which CRTs can be manufactured economically, it has not been practically possible to build color CRTs in which three electron beams could be aligned to hit phosphors of respective color in acceptable coordination, solely on the basis of the geometric configuration of the electron gun axes and gun aperture positions, shadow mask apertures, etc. The shadow mask ensures that one beam will only hit spots of certain colors of phosphors, but minute variations in physical alignment of the internal parts among individual CRTs will cause variations in the exact alignment of the beams through the shadow mask, allowing some electrons from, for example, the red beam to hit, say, blue phosphors, unless some individual compensation is made for the variance among individual tubes.
Color convergence and color purity are two aspects of this single problem. Firstly, for correct color rendering it is necessary that regardless of where the beams are deflected on the screen, all three hit the same spot (and nominally pass through the same hole or slot) on the shadow mask. This is called convergence. More specifically, the convergence at the center of the screen (with no deflection field applied by the yoke) is called static convergence, and the convergence over the rest of the screen area (especially at the edges and corners) is called dynamic convergence. The beams may converge at the center of the screen and yet stray from each other as they are deflected toward the edges; such a CRT would be said to have good static convergence but poor dynamic convergence. Secondly, each beam must only strike the phosphors of the color it is intended to strike and no others. This is called purity. Like convergence, there is static purity and dynamic purity, with the same meanings of "static" and "dynamic" as for convergence. Convergence and purity are distinct parameters; a CRT could have good purity but poor convergence, or vice versa. Poor convergence causes color "shadows" or "ghosts" along displayed edges and contours, as if the image on the screen were intaglio printed with poor registration. Poor purity causes objects on the screen to appear off-color while their edges remain sharp. Purity and convergence problems can occur at the same time, in the same or different areas of the screen, or over the whole screen, and either uniformly or to greater or lesser degrees over different parts of the screen.
The solution to the static convergence and purity problems is a set of color alignment ring magnets installed around the neck of the CRT. These movable weak permanent magnets are usually mounted on the back end of the deflection yoke assembly and are set at the factory to compensate for any static purity and convergence errors that are intrinsic to the unadjusted tube. Typically there are two or three pairs of two magnets in the form of rings made of plastic impregnated with a magnetic material, with their magnetic fields parallel to the planes of the magnets, which are perpendicular to the electron gun axes. Often, one pair of rings has 2 poles, another has 4, and the remaining ring has 6 poles. Each pair of magnetic rings forms a single effective magnet whose field vector can be fully and freely adjusted (in both direction and magnitude). By rotating a pair of magnets relative to each other, their relative field alignment can be varied, adjusting the effective field strength of the pair. (As they rotate relative to each other, each magnet's field can be considered to have two opposing components at right angles, and these four components [two each for two magnets] form two pairs, one pair reinforcing each other and the other pair opposing and canceling each other. Rotating away from alignment, the magnets' mutually reinforcing field components decrease as they are traded for increasing opposed, mutually cancelling components.) By rotating a pair of magnets together, preserving the relative angle between them, the direction of their collective magnetic field can be varied. Overall, adjusting all of the convergence/purity magnets allows a finely tuned slight electron beam deflection or lateral offset to be applied, which compensates for minor static convergence and purity errors intrinsic to the uncalibrated tube. Once set, these magnets are usually glued in place, but normally they can be freed and readjusted in the field (e.g. by a TV repair shop) if necessary.
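The "two rings rotated against each other" adjustment described above is ordinary vector addition: two equal fields at a relative angle θ sum to a resultant of magnitude 2B·cos(θ/2). The following is an idealised sketch assuming equal ring strengths, not a description of any particular yoke assembly.

```python
# Idealised model of a convergence/purity ring-magnet pair: two equal
# magnetic fields of strength B at a relative angle theta add to a
# resultant of magnitude 2*B*cos(theta/2). Assumes ideal, equal rings.

import math

def resultant_field(b: float, theta_deg: float) -> float:
    return 2 * b * math.cos(math.radians(theta_deg) / 2)

b_ring = 1.0  # arbitrary units for one ring's field strength
for theta in (0, 60, 120, 180):
    print(f"relative rotation {theta:>3} deg -> resultant {resultant_field(b_ring, theta):.2f}")
# 0 deg: the fields reinforce fully (2.0); 180 deg: they cancel (0.0).
```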
On some CRTs, additional fixed adjustable magnets are added for dynamic convergence or dynamic purity at specific points on the screen, typically near the corners or edges. Further adjustment of dynamic convergence and purity typically cannot be done passively, but requires active compensation circuits, one to correct convergence horizontally and another to correct it vertically. In this case the deflection yoke contains convergence coils, a set of two per color, wound on the same core, to which the convergence signals are applied. That means 6 convergence coils in groups of 3, with 2 coils per group, with one coil for horizontal convergence correction and another for vertical convergence correction, with each group sharing a core. The groups are separated 120° from one another. Dynamic convergence is necessary because the front of the CRT and the shadow mask are not spherical, compensating for electron beam defocusing and astigmatism. The fact that the CRT screen is not spherical leads to geometry problems which may be corrected using a circuit. The signals used for convergence are parabolic waveforms derived from three signals coming from a vertical output circuit. The parabolic signal is fed into the convergence coils, while the other two are sawtooth signals that, when mixed with the parabolic signals, create the necessary signal for convergence. A resistor and diode are used to lock the convergence signal to the center of the screen to prevent it from being affected by the static convergence. The horizontal and vertical convergence circuits are similar. Each circuit has two resonators, one usually tuned to 15,625 Hz and the other to 31,250 Hz, which set the frequency of the signal sent to the convergence coils. Dynamic convergence may be accomplished using electrostatic quadrupole fields in the electron gun. Dynamic convergence means that the electron beam does not travel in a perfectly straight line between the deflection coils and the screen, since the convergence coils cause it to become curved to conform to the screen.
The convergence signal may instead be a sawtooth signal with a slight sine wave appearance, the sine wave part is created using a capacitor in series with each deflection coil. In this case, the convergence signal is used to drive the deflection coils. The sine wave part of the signal causes the electron beam to move more slowly near the edges of the screen. The capacitors used to create the convergence signal are known as the s-capacitors. This type of convergence is necessary due to the high deflection angles and flat screens of many CRT computer monitors. The value of the s-capacitors must be chosen based on the scan rate of the CRT, so multi-syncing monitors must have different sets of s-capacitors, one for each refresh rate.
Dynamic convergence may instead be accomplished in some CRTs using only the ring magnets, magnets glued to the CRT, and by varying the position of the deflection yoke, whose position may be maintained using set screws, a clamp and rubber wedges. 90° deflection angle CRTs may use "self-convergence" without dynamic convergence, which, together with the in-line triad arrangement, eliminates the need for separate convergence coils and related circuitry, reducing costs, complexity and CRT depth by 10 millimeters. Self-convergence works by means of "nonuniform" magnetic fields. Dynamic convergence is necessary in 110° deflection angle CRTs, and quadrupole windings on the deflection yoke at a certain frequency may also be used for dynamic convergence.
Dynamic color convergence and purity are one of the main reasons why until late in their history, CRTs were long-necked (deep) and had biaxially curved faces; these geometric design characteristics are necessary for intrinsic passive dynamic color convergence and purity. Only starting around the 1990s did sophisticated active dynamic convergence compensation circuits become available that made short-necked and flat-faced CRTs workable. These active compensation circuits use the deflection yoke to finely adjust beam deflection according to the beam target location. The same techniques (and major circuit components) also make possible the adjustment of display image rotation, skew, and other complex raster geometry parameters through electronics under user control.
Alternatively, the guns can be aligned with one another (converged) using convergence rings placed right outside the neck; with one ring per gun. The rings can have north and south poles. There can be 4 sets of rings, one to adjust RGB convergence, a second to adjust Red and Blue convergence, a third to adjust vertical raster shift, and a fourth to adjust purity. The vertical raster shift adjusts the straightness of the scan line. CRTs may also employ dynamic convergence circuits, which ensure correct convergence at the edges of the CRT. Permalloy magnets may also be used to correct the convergence at the edges. Convergence is carried out with the help of a crosshatch (grid) pattern. Other CRTs may instead use magnets that are pushed in and out instead of rings. In early color CRTs, the holes in the shadow mask became progressively smaller as they extended outwards from the center of the screen, to aid in convergence.
Magnetic shielding and degaussing.
If the shadow mask or aperture grille becomes magnetized, its magnetic field alters the paths of the electron beams. This causes errors of "color purity" as the electrons no longer follow only their intended paths, and some will hit some phosphors of colors other than the one intended. For example, some electrons from the red beam may hit blue or green phosphors, imposing a magenta or yellow tint to parts of the image that are supposed to be pure red. (This effect is localized to a specific area of the screen if the magnetization is localized.) Therefore, it is important that the shadow mask or aperture grille not be magnetized. The earth's magnetic field may have an effect on the color purity of the CRT. Because of this, some CRTs have external magnetic shields over their funnels. The magnetic shield may be made of soft iron or mild steel and contain a degaussing coil. The magnetic shield and shadow mask may be permanently magnetized by the earth's magnetic field, adversely affecting color purity when the CRT is moved. This problem is solved with a built-in degaussing coil, found in many TVs and computer monitors. Degaussing may be automatic, occurring whenever the CRT is turned on. The magnetic shield may also be internal, being on the inside of the funnel of the CRT.
Color CRT displays in TV sets and computer monitors often have a built-in degaussing (demagnetizing) coil mounted around the perimeter of the CRT face. Upon power-up of the CRT display, the degaussing circuit produces a brief, alternating current through the coil which fades to zero over a few seconds, producing a decaying alternating magnetic field from the coil. This degaussing field is strong enough to remove shadow mask magnetization in most cases, maintaining color purity. In unusual cases of strong magnetization where the internal degaussing field is not sufficient, the shadow mask may be degaussed externally with a stronger portable degausser or demagnetizer. However, an excessively strong magnetic field, whether alternating or constant, may mechanically deform (bend) the shadow mask, causing a permanent color distortion on the display which looks very similar to a magnetization effect.
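The degaussing field described above can be pictured as a mains-frequency sine wave whose amplitude decays to zero over a few seconds; the sketch below assumes an exponential decay envelope and arbitrary values purely for illustration.

```python
# Minimal sketch of a degaussing coil's field: a mains-frequency
# alternating field whose amplitude decays to (near) zero over a few
# seconds. The exponential envelope and all values are assumptions
# for illustration only.

import math

def degauss_field(t_s: float, b0: float = 1.0, mains_hz: float = 50.0,
                  decay_s: float = 1.0) -> float:
    """Decaying alternating field B(t) = B0 * exp(-t/tau) * sin(2*pi*f*t)."""
    return b0 * math.exp(-t_s / decay_s) * math.sin(2 * math.pi * mains_hz * t_s)

for t in (0.005, 0.5, 1.0, 3.0):
    print(f"t = {t:5.3f} s -> B = {degauss_field(t):+.4f} (arbitrary units)")
```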
Resolution.
Dot pitch defines the maximum resolution of the display, assuming delta-gun CRTs. In these, as the scanned resolution approaches the dot pitch resolution, moiré appears, as the detail being displayed is finer than what the shadow mask can render. Aperture grille monitors do not suffer from vertical moiré, however, because their phosphor stripes have no vertical detail. In smaller CRTs, these stripes maintain position by themselves, but larger aperture-grille CRTs require one or two crosswise (horizontal) support strips: one for smaller CRTs, two for larger ones. The support wires block electrons, causing the wires to be visible. In aperture grille CRTs, dot pitch is replaced by stripe pitch. Hitachi developed the Enhanced Dot Pitch (EDP) shadow mask, which uses oval holes instead of circular ones, with respective oval phosphor dots. Moiré is reduced in shadow mask CRTs by arranging the holes in the shadow mask in a honeycomb-like pattern.
Projection CRTs.
Projection CRTs were used in CRT projectors and CRT rear-projection TVs, and are usually small (being 7–9 inches across); have a phosphor that generates either red, green or blue light, thus making them monochrome CRTs; and are similar in construction to other monochrome CRTs. Larger projection CRTs in general lasted longer, and were able to provide higher brightness levels and resolution, but were also more expensive. Projection CRTs have an unusually high anode voltage for their size (such as 27 or 25 kV for a 5 or 7-inch projection CRT respectively), and a specially made tungsten/barium cathode (instead of the pure barium oxide normally used) that consists of barium atoms embedded in 20% porous tungsten or barium and calcium aluminates or of barium, calcium and aluminum oxides coated on porous tungsten; the barium diffuses through the tungsten to emit electrons. The special cathode can deliver 2 mA of current instead of the 0.3mA of normal cathodes, which makes them bright enough to be used as light sources for projection. The high anode voltage and the specially made cathode increase the voltage and current, respectively, of the electron beam, which increases the light emitted by the phosphors, and also the amount of heat generated during operation; this means that projector CRTs need cooling. The screen is usually cooled using a container (the screen forms part of the container) with glycol; the glycol may itself be dyed, or colorless glycol may be used inside a container which may be colored (forming a lens known as a c-element). Colored lenses or glycol are used for improving color reproduction at the cost of brightness, and are only used on red and green CRTs. Each CRT has its own glycol, which has access to an air bubble to allow the glycol to shrink and expand as it cools and warms. Projector CRTs may have adjustment rings just like color CRTs to adjust astigmatism, which is flaring of the electron beam (stray light similar to shadows). They have three adjustment rings; one with two poles, one with four poles, and another with 6 poles. When correctly adjusted, the projector can display perfectly round dots without flaring. The screens used in projection CRTs were more transparent than usual, with 90% transmittance. The first projection CRTs were made in 1933.
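The need for liquid cooling follows from the beam power, which is roughly the anode voltage multiplied by the beam current. The calculation below uses the voltage and current figures quoted above and assumes, for comparison only, the same anode voltage for an ordinary cathode.

```python
# Illustrative beam-power comparison between a projection CRT and an
# ordinary CRT, using the voltage and current figures quoted in the text.
# Power dissipated at the screen is roughly anode voltage * beam current.

projection = {"anode_kv": 27, "beam_ma": 2.0}   # figures quoted for projection CRTs
ordinary   = {"anode_kv": 27, "beam_ma": 0.3}   # ordinary cathode current, same voltage assumed

for name, crt in (("projection", projection), ("ordinary", ordinary)):
    power_w = crt["anode_kv"] * 1000 * crt["beam_ma"] / 1000
    print(f"{name:>10}: ~{power_w:.0f} W delivered to the screen")
# ~54 W vs ~8 W concentrated on a small faceplate, hence the glycol cooling.
```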
Projector CRTs were available with electrostatic and electromagnetic focusing, the latter being more expensive. Electrostatic focusing used electronics to focus the electron beam, together with focusing magnets around the neck of the CRT for fine focusing adjustments. This type of focusing degraded over time. Electromagnetic focusing was introduced in the early 1990s and included an electromagnetic focusing coil in addition to the already existing focusing magnets. Electromagnetic focusing was much more stable over the lifetime of the CRT, retaining 95% of its sharpness by the end of life of the CRT.
Beam-index tube.
Beam-index tubes, also known as Uniray, Apple CRT or Indextron, were an attempt in the 1950s by Philco to create a color CRT without a shadow mask, eliminating convergence and purity problems and allowing for shallower CRTs with higher deflection angles. They also required a lower-voltage power supply for the final anode since they did not use a shadow mask, which normally blocks around 80% of the electrons generated by the electron gun. The lack of a shadow mask also made them immune to the earth's magnetic field, made degaussing unnecessary, and increased image brightness. A beam-index tube was constructed similarly to a monochrome CRT, with an aquadag outer coating, an aluminum inner coating, and a single electron gun, but with a screen bearing an alternating pattern of red, green, blue and UV (index) phosphor stripes (similar to a Trinitron) and with a side-mounted photomultiplier tube or photodiode pointed towards the rear of the screen and mounted on the funnel of the CRT to track the electron beam, so that the phosphors could be activated separately from one another using the same electron beam. Only the index phosphor stripe was used for tracking, and it was the only phosphor not covered by an aluminum layer. The design was shelved because of the precision required to produce it. It was revived by Sony in the 1980s as the Indextron, but its adoption was limited, at least in part due to the development of LCD displays. Beam-index CRTs also suffered from poor contrast ratios of only around 50:1, since the photodiodes required some light emission from the phosphors at all times in order to track the electron beam. The lack of a shadow mask allowed single-CRT color projectors; normally CRT projectors use three CRTs, one for each color, since a lot of heat is generated due to the high anode voltage and beam current, making a shadow mask impractical and inefficient, as it would warp under the heat produced (shadow masks absorb most of the electron beam, and hence most of the energy it carries). Using three CRTs meant that an involved calibration and adjustment procedure had to be carried out during installation of the projector, and moving the projector required it to be recalibrated. A single CRT eliminated the need for calibration, but brightness was decreased since the CRT screen had to be used for three colors instead of each color having its own CRT screen. A stripe pattern also imposes a horizontal resolution limit; in contrast, three-screen CRT projectors have no theoretical resolution limit, since each has a single, uniform phosphor coating.
Flat CRTs.
Flat CRTs are those with a flat screen. Despite having a flat screen, they may not be completely flat, especially on the inside, instead having a greatly increased curvature. A notable exception is the LG Flatron (made by LG.Philips Displays, later LP Displays) which is truly flat on the outside and inside, but has a bonded glass pane on the screen with a tensioned rim band to provide implosion protection. Such completely flat CRTs were first introduced by Zenith in 1986, and used flat tensioned shadow masks, where the shadow mask is held under tension, providing increased resistance to blooming. LG's Flatron technology is based on this technology developed by Zenith, now a subsidiary of LG.
Flat CRTs present a number of challenges, such as deflection. Vertical deflection boosters are required to increase the amount of current sent to the vertical deflection coils to compensate for the reduced curvature. The CRTs used in the Sinclair TV80 and in many Sony Watchmans were flat in that they were not deep and their front screens were flat, but their electron guns were placed to the side of the screen. The TV80 used electrostatic deflection while the Watchman used magnetic deflection with a phosphor screen that was curved inwards. Similar CRTs were used in video doorbells.
Radar CRTs.
Radar CRTs such as the 7JP4 had a circular screen and scanned the beam from the center outwards. The deflection yoke rotated, causing the beam to rotate in a circular fashion. The screen often had two colors, often a bright short persistence color that only appeared as the beam scanned the display and a long persistence phosphor afterglow. When the beam strikes the phosphor, the phosphor brightly illuminates, and when the beam leaves, the dimmer long persistence afterglow would remain lit where the beam struck the phosphor, alongside the radar targets that were "written" by the beam, until the beam re-struck the phosphor.
Oscilloscope CRTs.
In oscilloscope CRTs, electrostatic deflection is used, rather than the magnetic deflection commonly used with TV and other large CRTs. The beam is deflected horizontally by applying an electric field between a pair of plates to its left and right, and vertically by applying an electric field to plates above and below. TVs use magnetic rather than electrostatic deflection because the deflection plates obstruct the beam when the deflection angle is as large as is required for tubes that are relatively short for their size. Some oscilloscope CRTs incorporate post-deflection anodes (PDAs) that are spiral-shaped to ensure even anode potential across the CRT and operate at up to 15 kV. In PDA CRTs the electron beam is deflected before it is accelerated, improving sensitivity and legibility, especially when analyzing voltage pulses with short duty cycles.
Microchannel plate.
When displaying fast one-shot events, the electron beam must deflect very quickly, with few electrons impinging on the screen, leading to a faint or invisible image on the display. Oscilloscope CRTs designed for very fast signals can give a brighter display by passing the electron beam through a micro-channel plate just before it reaches the screen. Through the phenomenon of secondary emission, this plate multiplies the number of electrons reaching the phosphor screen, giving a significant improvement in writing rate (brightness) and improved sensitivity and spot size as well.
Graticules.
Most oscilloscopes have a graticule as part of the visual display, to facilitate measurements. The graticule may be permanently marked inside the face of the CRT, or it may be a transparent external plate made of glass or acrylic plastic. An internal graticule eliminates parallax error, but cannot be changed to accommodate different types of measurements. Oscilloscopes commonly provide a means for the graticule to be illuminated from the side, which improves its visibility.
Image storage tubes.
These are found in "analog phosphor storage oscilloscopes". These are distinct from "digital storage oscilloscopes" which rely on solid state digital memory to store the image.
Where a single brief event is monitored by an oscilloscope, such an event will be displayed by a conventional tube only while it actually occurs. The use of a long persistence phosphor may allow the image to be observed after the event, but only for a few seconds at best. This limitation can be overcome by the use of a direct view storage cathode-ray tube (storage tube). A storage tube will continue to display the event after it has occurred until such time as it is erased. A storage tube is similar to a conventional tube except that it is equipped with a metal grid coated with a dielectric layer located immediately behind the phosphor screen. An externally applied voltage to the mesh initially ensures that the whole mesh is at a constant potential. This mesh is constantly exposed to a low velocity electron beam from a 'flood gun' which operates independently of the main gun. This flood gun is not deflected like the main gun but constantly 'illuminates' the whole of the storage mesh. The initial charge on the storage mesh is such as to repel the electrons from the flood gun which are prevented from striking the phosphor screen.
When the main electron gun writes an image to the screen, the energy in the main beam is sufficient to create a 'potential relief' on the storage mesh. The areas where this relief is created no longer repel the electrons from the flood gun which now pass through the mesh and illuminate the phosphor screen. Consequently, the image that was briefly traced out by the main gun continues to be displayed after it has occurred. The image can be 'erased' by resupplying the external voltage to the mesh restoring its constant potential. The time for which the image can be displayed was limited because, in practice, the flood gun slowly neutralises the charge on the storage mesh. One way of allowing the image to be retained for longer is temporarily to turn off the flood gun. It is then possible for the image to be retained for several days. The majority of storage tubes allow for a lower voltage to be applied to the storage mesh which slowly restores the initial charge state. By varying this voltage a variable persistence is obtained. Turning off the flood gun and the voltage supply to the storage mesh allows such a tube to operate as a conventional oscilloscope tube.
Vector monitors.
Vector monitors were used in early computer aided design systems and are in some late-1970s to mid-1980s arcade games such as "Asteroids".
They draw graphics point-to-point, rather than scanning a raster. Either monochrome or color CRTs can be used in vector displays, and the essential principles of CRT design and operation are the same for either type of display; the main difference is in the beam deflection patterns and circuits.
Data storage tubes.
The Williams tube or Williams-Kilburn tube was a cathode-ray tube used to electronically store binary data. It was used in computers of the 1940s as a random-access digital storage device. In contrast to other CRTs in this article, the Williams tube was not a display device, and in fact could not be viewed since a metal plate covered its screen.
Cat's eye.
In some vacuum tube radio sets, a "Magic Eye" or "Tuning Eye" tube was provided to assist in tuning the receiver. Tuning would be adjusted until the width of a radial shadow was minimized. This was used instead of a more expensive electromechanical meter, which later came to be used on higher-end tuners when transistor sets lacked the high voltage required to drive the device. The same type of device was used with tape recorders as a recording level meter, and for various other applications including electrical test equipment.
Charactrons.
Some displays for early computers (those that needed to display more text than was practical using vectors, or that required high speed for photographic output) used Charactron CRTs. These incorporate a perforated metal character mask (stencil), which shapes a wide electron beam to form a character on the screen. The system selects a character on the mask using one set of deflection circuits, but that causes the extruded beam to be aimed off-axis, so a second set of deflection plates has to re-aim the beam so it is headed toward the center of the screen. A third set of plates places the character wherever required. The beam is unblanked (turned on) briefly to draw the character at that position. Graphics could be drawn by selecting the position on the mask corresponding to the code for a space (in practice, they were simply not drawn), which had a small round hole in the center; this effectively disabled the character mask, and the system reverted to regular vector behavior. Charactrons had exceptionally long necks, because of the need for three deflection systems.
Nimo.
Nimo was the trademark of a family of small specialised CRTs manufactured by Industrial Electronic Engineers. These had 10 electron guns which produced electron beams in the form of digits in a manner similar to that of the charactron. The tubes were either simple single-digit displays or more complex 4- or 6-digit displays produced by means of a suitable magnetic deflection system. Having little of the complexity of a standard CRT, the tube required a relatively simple driving circuit, and as the image was projected on the glass face, it provided a much wider viewing angle than competitive types (e.g., nixie tubes). However, their requirement for several supply voltages, including a high voltage, made them uncommon.
Flood-beam CRT.
Flood-beam CRTs are small tubes that are arranged as pixels for large video walls like Jumbotrons. The first screen using this technology (called Diamond Vision by Mitsubishi Electric) was introduced by Mitsubishi Electric for the 1980 Major League Baseball All-Star Game. It differs from a normal CRT in that the electron gun within does not produce a focused controllable beam. Instead, electrons are sprayed in a wide cone across the entire front of the phosphor screen, basically making each unit act as a single light bulb. Each one is coated with a red, green or blue phosphor, to make up the color sub-pixels. This technology has largely been replaced with light-emitting diode displays. Unfocused and undeflected CRTs were used as grid-controlled stroboscope lamps since 1958. Electron-stimulated luminescence (ESL) lamps, which use the same operating principle, were released in 2011.
Print-head CRT.
CRTs with an unphosphored front glass but with fine wires embedded in it were used as electrostatic print heads in the 1960s. The wires would pass the electron beam current through the glass onto a sheet of paper where the desired content was therefore deposited as an electrical charge pattern. The paper was then passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image.
Zeus – thin CRT display.
In the late 1990s and early 2000s Philips Research Laboratories experimented with a type of thin CRT known as the "Zeus" display, which contained CRT-like functionality in a flat-panel display. The cathode of this display was mounted under the front of the display, and electrons from the cathode were directed to the back of the display, where they stayed until extracted by electrodes near the front of the display and directed to the front, which had phosphor dots. The devices were demonstrated but never marketed.
Slimmer CRT.
Some CRT manufacturers, both LG.Philips Displays (later LP Displays) and Samsung SDI, innovated CRT technology by creating a slimmer tube. Slimmer CRT had the trade names Superslim, Ultraslim, Vixlim (by Samsung) and Cybertube and Cybertube+ (both by LG Philips displays). A flat CRT has a depth. The depth of Superslim was and Ultraslim was .
Health concerns.
Ionizing radiation.
CRTs can emit a small amount of X-ray radiation; this is a result of the electron beam's bombardment of the shadow mask/aperture grille and phosphors, which produces bremsstrahlung (braking radiation) as the high-energy electrons are decelerated. The amount of radiation escaping the front of the monitor is widely considered to be not harmful. Food and Drug Administration regulations are used to strictly limit, for instance, TV receivers to 0.5 milliroentgens per hour at a distance of from any external surface; since 2007, most CRTs have emissions that fall well below this limit. Note that the roentgen is an outdated unit and does not account for dose absorption. The conversion rate is about 0.877 rem per roentgen. Assuming that the viewer absorbed the entire dose (which is unlikely), and that they watched TV for 2 hours a day, a 0.5 milliroentgen hourly dose would increase the viewer's yearly dose by 320 millirem. For comparison, the average background radiation in the United States is 310 millirem a year. Negative effects of chronic radiation are not generally noticeable until doses of over 20,000 millirem.
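The yearly-dose figure can be reproduced from the assumptions quoted above (the emission limit, daily viewing time, and roentgen-to-rem conversion):

```python
# Reproduces the yearly-dose estimate from the quoted assumptions:
# the 0.5 mR/h regulatory limit, 2 hours of viewing per day, and a
# conversion of roughly 0.877 rem per roentgen.

limit_mr_per_h = 0.5        # regulatory emission limit, milliroentgen per hour
hours_per_day = 2
days_per_year = 365
rem_per_roentgen = 0.877    # approximate conversion factor

exposure_mr = limit_mr_per_h * hours_per_day * days_per_year   # ~365 mR/year
dose_mrem = exposure_mr * rem_per_roentgen                     # ~320 mrem/year
print(f"Exposure: ~{exposure_mr:.0f} mR/year -> dose ~{dose_mrem:.0f} mrem/year")
```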
The density of the x-rays that would be generated by a CRT is low because the raster scan of a typical CRT distributes the energy of the electron beam across the entire screen. Voltages above 15,000 volts are enough to generate "soft" x-rays. However, since CRTs may stay on for several hours at a time, the amount of x-rays generated by the CRT may become significant, hence the importance of using materials to shield against x-rays, such as the thick leaded glass and barium-strontium glass used in CRTs.
Concerns about x-rays emitted by CRTs began in 1967 when it was found that TV sets made by General Electric were emitting "X-radiation in excess of desirable levels". It was later found that TV sets from all manufacturers were also emitting radiation. This caused TV industry representatives to be brought before a U.S. congressional committee, which later proposed a federal radiation regulation bill, which became the 1968 Radiation Control for Health and Safety Act. It was recommended to TV set owners to always be at a distance of at least 6 feet from the screen of the TV set, and to avoid "prolonged exposure" at the sides, rear or underneath a TV set. It was discovered that most of the radiation was directed downwards. Owners were also told to not modify their set's internals to avoid exposure to radiation. Headlines about "radioactive" TV sets continued until the end of the 1960s. There once was a proposal by two New York congressmen that would have forced TV set manufacturers to "go into homes to test all of the nation's 15 million color sets and to install radiation devices in them". The FDA eventually began regulating radiation emissions from all electronic products in the US.
Toxicity.
Older color and monochrome CRTs may have been manufactured with toxic substances, such as cadmium, in the phosphors. The rear glass tube of modern CRTs may be made from leaded glass, which represents an environmental hazard if disposed of improperly. Since 1970, the glass in the front panel (the viewable portion of the CRT) has used strontium oxide rather than lead, though the rear of the CRT was still produced from leaded glass. Monochrome CRTs typically do not contain enough leaded glass to fail EPA TCLP tests. While the TCLP process grinds the glass into fine particles in order to expose them to weak acids to test for leachate, intact CRT glass does not leach (the lead is vitrified, contained inside the glass itself, similar to leaded glass crystalware).
Flicker.
At low refresh rates (60 Hz and below), the periodic scanning of the display may produce a flicker that some people perceive more easily than others, especially when viewed with peripheral vision. Flicker is commonly associated with CRTs because most TVs run at 50 Hz (PAL) or 60 Hz (NTSC), although there are some 100 Hz PAL TVs that are flicker-free. Typically only low-end monitors run at such low frequencies, with most computer monitors supporting at least 75 Hz and high-end monitors capable of 100 Hz or more to eliminate any perception of flicker. 100 Hz PAL operation was often achieved using interleaved scanning, dividing the circuit and scan into two beams of 50 Hz. Non-computer CRTs, or CRTs for sonar or radar, may have long-persistence phosphor and are thus flicker-free. If the persistence is too long on a video display, moving images will be blurred.
High-frequency audible noise.
50 Hz/60 Hz CRTs used for TV operate with horizontal scanning frequencies of 15,750 and 15,734.27 Hz (for NTSC systems) or 15,625 Hz (for PAL systems). These frequencies are at the upper range of human hearing and are inaudible to many people; however, some people (especially children) will perceive a high-pitched tone near an operating CRT TV. The sound is due to magnetostriction in the magnetic core and periodic movement of windings of the flyback transformer but the sound can also be created by movement of the deflection coils, yoke or ferrite beads.
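Those line rates follow directly from the line count and frame rate of each broadcast system; a short arithmetic check:

```python
# The horizontal scanning frequencies quoted above follow from the
# number of scan lines per frame times the frame rate of each system.

systems = {
    "PAL":          {"lines": 625, "frame_hz": 25},          # 15,625 Hz
    "NTSC (color)": {"lines": 525, "frame_hz": 30 / 1.001},  # ~15,734.27 Hz
    "NTSC (B&W)":   {"lines": 525, "frame_hz": 30},          # 15,750 Hz
}

for name, s in systems.items():
    h_freq = s["lines"] * s["frame_hz"]
    print(f"{name:>13}: {h_freq:,.2f} Hz horizontal scan frequency")
```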
This problem does not occur on 100/120 Hz TVs and on non-CGA (Color Graphics Adapter) computer displays, because they use much higher horizontal scanning frequencies that produce sound which is inaudible to humans (22 kHz to over 100 kHz).
Implosion.
If the glass wall is damaged, atmospheric pressure can implode the vacuum tube into dangerous fragments which accelerate inward and then spray at high speed in all directions. Although modern cathode-ray tubes used in TVs and computer displays have epoxy-bonded face-plates or other measures to prevent shattering of the envelope, CRTs must be handled carefully to avoid injury.
Implosion protection.
Early CRTs had a glass plate over the screen that was bonded to it using glue, creating a laminated glass screen: initially the glue was polyvinyl acetate (PVA), while later versions such as the LG Flatron used a resin, perhaps a UV-curable resin. The PVA degrades over time creating a "cataract", a ring of degraded glue around the edges of the CRT that does not allow light from the screen to pass through. Later CRTs instead use a tensioned metal rim band mounted around the perimeter that also provides mounting points for the CRT to be mounted to a housing. In a 19-inch CRT, the tensile stress in the rim band is 70 kg/cm2.
Older CRTs were mounted to the TV set using a frame. The band is tensioned by heating it, then mounting it on the CRT; the band cools afterwards, shrinking in size and putting the glass under compression, which strengthens the glass and reduces the necessary thickness (and hence weight) of the glass. This makes the band an integral component that should never be removed from an intact CRT that still has a vacuum; attempting to remove it may cause the CRT to implode.
The rim band prevents the CRT from imploding should the screen be broken. The rim band may be glued to the perimeter of the CRT using epoxy, preventing cracks from spreading beyond the screen and into the funnel.
Alternatively, the compression from the rim band may be used to make any cracks in the screen propagate laterally at high speed, so that they reach the funnel and fully penetrate it before they fully penetrate the screen. This is possible because the funnel walls are thinner than the screen. Penetrating the funnel first lets air enter the CRT from just behind the screen, preventing an implosion by ensuring that by the time the cracks break through the screen itself, the tube already contains air.
Electric shock.
To accelerate the electrons from the cathode to the screen with enough energy to achieve sufficient image brightness, a very high voltage (EHT or extra-high tension) is required, from a few thousand volts for a small oscilloscope CRT to tens of thousands for a larger-screen color TV. This is many times greater than household power supply voltage. Even after the power supply is turned off, some associated capacitors and the CRT itself may retain a charge for some time and can dissipate that charge suddenly through a ground path, such as an inattentive person handling a capacitor discharge lead. An average monochrome CRT may use 1–1.5 kV of anode voltage per inch.
Security concerns.
Under some circumstances, the signal radiated from the electron guns, scanning circuitry, and associated wiring of a CRT can be captured remotely and used to reconstruct what is shown on the CRT using a process called Van Eck phreaking. Special TEMPEST shielding can mitigate this effect. Such radiation of a potentially exploitable signal, however, occurs also with other display technologies and with electronics in general.
Recycling.
Due to the toxins contained in CRT monitors, the United States Environmental Protection Agency created rules in October 2001 stating that CRTs must be brought to special e-waste recycling facilities. In November 2002, the EPA began fining companies that disposed of CRTs through landfills or incineration. Regulatory agencies, local and statewide, monitor the disposal of CRTs and other computer equipment.
As electronic waste, CRTs are considered one of the hardest types to recycle. CRTs have a relatively high concentration of lead and phosphors, both of which are necessary for the display. There are several companies in the United States that charge a small fee to collect CRTs, then subsidize their labor by selling the harvested copper, wire, and printed circuit boards. The United States Environmental Protection Agency (EPA) includes discarded CRT monitors in its category of "hazardous household waste" but considers CRTs that have been set aside for testing to be commodities if they are not discarded, speculatively accumulated, or left unprotected from weather and other damage.
Various states participate in the recycling of CRTs, each with its own reporting requirements for collectors and recycling facilities. For example, in California the recycling of CRTs is governed by CalRecycle, the California Department of Resources Recycling and Recovery, through its Payment System. Recycling facilities that accept CRT devices from the business and residential sectors must obtain contact information such as address and phone number to ensure the CRTs come from a California source, in order to participate in the CRT Recycling Payment System.
In Europe, disposal of CRT TVs and monitors is covered by the WEEE Directive.
Multiple methods have been proposed for the recycling of CRT glass. The methods involve thermal, mechanical and chemical processes. All proposed methods remove the lead oxide content from the glass. Some companies operated furnaces to separate the lead from the glass. A coalition called the Recytube project was once formed by several European companies to devise a method to recycle CRTs. The phosphors used in CRTs often contain rare earth metals. A CRT contains about 7 grams of phosphor.
The funnel can be separated from the screen of the CRT using laser cutting, diamond saws or wires or using a resistively heated nichrome wire.
Leaded CRT glass was sold to be remelted into other CRTs, or broken down and used in road construction, tiles, concrete, concrete and cement bricks, and fiberglass insulation, or used as flux in metals smelting.
A considerable portion of CRT glass is landfilled, where it can pollute the surrounding environment. It is more common for CRT glass to be disposed of than recycled.
See also.
Applications of CRTs for different display purposes:
Historical aspects:
Safety and precautions:
|
6015
|
28481209
|
https://en.wikipedia.org/wiki?curid=6015
|
Crystal
|
A crystal or crystalline solid is a solid material whose constituents (such as atoms, molecules, or ions) are arranged in a highly ordered microscopic structure, forming a crystal lattice that extends in all directions. In addition, macroscopic single crystals are usually identifiable by their geometrical shape, consisting of flat faces with specific, characteristic orientations. The scientific study of crystals and crystal formation is known as crystallography. The process of crystal formation via mechanisms of crystal growth is called crystallization or solidification.
The word "crystal" derives from the Ancient Greek word (), meaning both "ice" and "rock crystal", from (), "icy cold, frost".
Examples of large crystals include snowflakes, diamonds, and table salt. Most inorganic solids are not crystals but polycrystals, i.e. many microscopic crystals fused together into a single solid. Polycrystals include most metals, rocks, ceramics, and ice. A third category of solids is amorphous solids, where the atoms have no periodic structure whatsoever. Examples of amorphous solids include glass, wax, and many plastics.
Despite the name, lead crystal, crystal glass, and related products are "not" crystals, but rather types of glass, i.e. amorphous solids.
Crystals, or crystalline solids, are often used in pseudoscientific practices such as crystal therapy, and, along with gemstones, are sometimes associated with spellwork in Wiccan beliefs and related religious movements.
Crystal structure (microscopic).
The scientific definition of a "crystal" is based on the microscopic arrangement of atoms inside it, called the crystal structure. A crystal is a solid where the atoms form a periodic arrangement. (Quasicrystals are an exception, see below).
Not all solids are crystals. For example, when liquid water starts freezing, the phase change begins with small ice crystals that grow until they fuse, forming a "polycrystalline" structure. In the final block of ice, each of the small crystals (called "crystallites" or "grains") is a true crystal with a periodic arrangement of atoms, but the whole polycrystal does "not" have a periodic arrangement of atoms, because the periodic pattern is broken at the grain boundaries. Most macroscopic inorganic solids are polycrystalline, including almost all metals, ceramics, ice, rocks, etc. Solids that are neither crystalline nor polycrystalline, such as glass, are called "amorphous solids", also called glassy, vitreous, or noncrystalline. These have no periodic order, even microscopically. There are distinct differences between crystalline solids and amorphous solids: most notably, the process of forming a glass does not release the latent heat of fusion, but forming a crystal does.
A crystal structure (an arrangement of atoms in a crystal) is characterized by its "unit cell", a small imaginary box containing one or more atoms in a specific spatial arrangement. The unit cells are stacked in three-dimensional space to form the crystal.
The symmetry of a crystal is constrained by the requirement that the unit cells stack perfectly with no gaps. There are 219 possible crystal symmetries (230 is commonly cited, but this treats chiral equivalents as separate entities), called crystallographic space groups. These are grouped into 7 crystal systems, such as cubic crystal system (where the crystals may form cubes or rectangular boxes, such as halite shown at right) or hexagonal crystal system (where the crystals may form hexagons, such as ordinary water ice).
Crystal faces, shapes and crystallographic forms.
Crystals are commonly recognized, macroscopically, by their shape, consisting of flat faces with sharp angles. These shape characteristics are not "necessary" for a crystal—a crystal is scientifically defined by its microscopic atomic arrangement, not its macroscopic shape—but the characteristic macroscopic shape is often present and easy to see.
Euhedral crystals are those that have obvious, well-formed flat faces. Anhedral crystals do not, usually because the crystal is one grain in a polycrystalline solid.
The flat faces (also called facets) of a euhedral crystal are oriented in a specific way relative to the underlying atomic arrangement of the crystal: they are planes of relatively low Miller index. This occurs because some surface orientations are more stable than others (lower surface energy). As a crystal grows, new atoms attach easily to the rougher and less stable parts of the surface, but less easily to the flat, stable surfaces. Therefore, the flat surfaces tend to grow larger and smoother, until the whole crystal surface consists of these plane surfaces. (See diagram on right.)
One of the oldest techniques in the science of crystallography consists of measuring the three-dimensional orientations of the faces of a crystal, and using them to infer the underlying crystal symmetry.
A crystal's crystallographic forms are sets of possible faces of the crystal that are related by one of the symmetries of the crystal. For example, crystals of galena often take the shape of cubes, and the six faces of the cube belong to a crystallographic form that displays one of the symmetries of the isometric crystal system. Galena also sometimes crystallizes as octahedrons, and the eight faces of the octahedron belong to another crystallographic form reflecting a different symmetry of the isometric system. A crystallographic form is described by placing the Miller indices of one of its faces within brackets. For example, the octahedral form is written as {111}, and the other faces in the form are implied by the symmetry of the crystal.
Forms may be closed, meaning that the form can completely enclose a volume of space, or open, meaning that it cannot. The cubic and octahedral forms are examples of closed forms. All the forms of the isometric system are closed, while all the forms of the monoclinic and triclinic crystal systems are open. A crystal's faces may all belong to the same closed form, or they may be a combination of multiple open or closed forms.
A crystal's habit is its visible external shape. This is determined by the crystal structure (which restricts the possible facet orientations), the specific crystal chemistry and bonding (which may favor some facet types over others), and the conditions under which the crystal formed.
Occurrence in nature.
Rocks.
By volume and weight, the largest concentrations of crystals in the Earth are part of its solid bedrock. Crystals found in rocks typically range in size from a fraction of a millimetre to several centimetres across, although exceptionally large crystals are occasionally found. The world's largest known naturally occurring crystal is a crystal of beryl from Malakialina, Madagascar.
Some crystals have formed by magmatic and metamorphic processes, giving origin to large masses of crystalline rock. The vast majority of igneous rocks are formed from molten magma and the degree of crystallization depends primarily on the conditions under which they solidified. Such rocks as granite, which have cooled very slowly and under great pressures, have completely crystallized; but many kinds of lava were poured out at the surface and cooled very rapidly, and in this latter group a small amount of amorphous or glassy matter is common. Other crystalline rocks, the metamorphic rocks such as marbles, mica-schists and quartzites, are recrystallized. This means that they were at first fragmental rocks like limestone, shale and sandstone and have never been in a molten condition nor entirely in solution, but the high temperature and pressure conditions of metamorphism have acted on them by erasing their original structures and inducing recrystallization in the solid state.
Other rock crystals have formed out of precipitation from fluids, commonly water, to form druses or quartz veins. Evaporites such as halite, gypsum and some limestones have been deposited from aqueous solution, mostly owing to evaporation in arid climates.
Ice.
Water-based ice in the form of snow, sea ice, and glaciers are common crystalline/polycrystalline structures on Earth and other planets. A single snowflake is a single crystal or a collection of crystals, while an ice cube is a polycrystal. Ice crystals may form from cooling liquid water below its freezing point, such as ice cubes or a frozen lake. Frost, snowflakes, or small ice crystals suspended in the air (ice fog) more often grow from a supersaturated gaseous-solution of water vapor and air, when the temperature of the air drops below its dew point, without passing through a liquid state. Another unusual property of water is that it expands rather than contracts when it crystallizes.
Organigenic crystals.
Many living organisms are able to produce crystals grown from an aqueous solution, for example calcite and aragonite in the case of most molluscs or hydroxylapatite in the case of bones and teeth in vertebrates.
Polymorphism and allotropy.
The same group of atoms can often solidify in many different ways. Polymorphism is the ability of a solid to exist in more than one crystal form. For example, water ice is ordinarily found in the hexagonal form Ice Ih, but can also exist as the cubic Ice Ic, the rhombohedral ice II, and many other forms. The different polymorphs are usually called different "phases".
In addition, the same atoms may be able to form noncrystalline phases. For example, water can also form amorphous ice, while SiO2 can form both fused silica (an amorphous glass) and quartz (a crystal). Likewise, if a substance can form crystals, it can also form polycrystals.
For pure chemical elements, polymorphism is referred to as allotropy. For example, diamond and graphite are two crystalline forms of carbon, while amorphous carbon is a noncrystalline form. Polymorphs, despite having the same atoms, may have very different properties. For example, diamond is the hardest substance known, while graphite is so soft that it is used as a lubricant. Chocolate can form six different types of crystals, but only one has the suitable hardness and melting point for candy bars and confections. Polymorphism in steel is responsible for its ability to be heat treated, giving it a wide range of properties.
Polyamorphism is a similar phenomenon where the same atoms can exist in more than one amorphous solid form.
Crystallization.
Crystallization is the process of forming a crystalline structure from a fluid or from materials dissolved in a fluid. (More rarely, crystals may be deposited directly from gas; see: epitaxy and frost.)
Crystallization is a complex and extensively-studied field, because depending on the conditions, a single fluid can solidify into many different possible forms. It can form a single crystal, perhaps with various possible phases, stoichiometries, impurities, defects, and habits. Or, it can form a polycrystal, with various possibilities for the size, arrangement, orientation, and phase of its grains. The final form of the solid is determined by the conditions under which the fluid is being solidified, such as the chemistry of the fluid, the ambient pressure, the temperature, and the speed with which all these parameters are changing.
Specific industrial techniques to produce large single crystals (called "boules") include the Czochralski process and the Bridgman technique. Other less exotic methods of crystallization may be used, depending on the physical properties of the substance, including hydrothermal synthesis, sublimation, or simply solvent-based crystallization.
Large single crystals can be created by geological processes. For example, selenite crystals in excess of 10 m are found in the Cave of the Crystals in Naica, Mexico. For more details on geological crystal formation, see above.
Crystals can also be formed by biological processes, see above. Conversely, some organisms have special techniques to "prevent" crystallization from occurring, such as antifreeze proteins.
Defects, impurities, and twinning.
An "ideal" crystal has every atom in a perfect, exactly repeating pattern. However, in reality, most crystalline materials have a variety of crystallographic defects: places where the crystal's pattern is interrupted. The types and structures of these defects may have a profound effect on the properties of the materials.
A few examples of crystallographic defects include vacancy defects (an empty space where an atom should fit), interstitial defects (an extra atom squeezed in where it does not fit), and dislocations (see figure at right). Dislocations are especially important in materials science, because they help determine the mechanical strength of materials.
Another common type of crystallographic defect is an impurity, meaning that the "wrong" type of atom is present in a crystal. For example, a perfect crystal of diamond would only contain carbon atoms, but a real crystal might perhaps contain a few boron atoms as well. These boron impurities change the diamond's color to slightly blue. Likewise, the only difference between ruby and sapphire is the type of impurities present in a corundum crystal.
In semiconductors, a special type of impurity, called a dopant, drastically changes the crystal's electrical properties. Semiconductor devices, such as transistors, are made possible largely by putting different semiconductor dopants into different places, in specific patterns.
Twinning is a phenomenon somewhere between a crystallographic defect and a grain boundary. Like a grain boundary, a twin boundary has different crystal orientations on its two sides. But unlike a grain boundary, the orientations are not random, but related in a specific, mirror-image way.
Mosaicity is a spread of crystal plane orientations. A mosaic crystal consists of smaller crystalline units that are somewhat misaligned with respect to each other.
Chemical bonds.
In general, solids can be held together by various types of chemical bonds, such as metallic bonds, ionic bonds, covalent bonds, van der Waals bonds, and others. None of these are necessarily crystalline or non-crystalline. However, there are some general trends as follows:
Metals crystallize rapidly and are almost always polycrystalline, though there are exceptions like amorphous metal and single-crystal metals. The latter are grown synthetically; for example, fighter-jet turbines are typically made by first growing a single crystal of titanium alloy, increasing its strength and melting point over polycrystalline titanium. A small piece of metal may naturally form into a single crystal, such as Type 2 telluric iron, but larger pieces generally do not unless extremely slow cooling occurs. For example, iron meteorites are often composed of a single crystal, or of many large crystals that may be several meters in size, due to very slow cooling in the vacuum of space. The slow cooling may allow the precipitation of a separate phase within the crystal lattice, which forms at specific angles determined by the lattice, called Widmanstätten patterns.
Ionic compounds typically form when a metal reacts with a non-metal, such as sodium with chlorine. These often form substances called salts, such as sodium chloride (table salt) or potassium nitrate (saltpeter), with crystals that are often brittle and cleave relatively easily. Ionic materials are usually crystalline or polycrystalline. In practice, large salt crystals can be created by solidification of a molten fluid, or by crystallization out of a solution. Some ionic compounds can be very hard, such as oxides like aluminium oxide found in many gemstones such as ruby and synthetic sapphire.
Covalently bonded solids (sometimes called covalent network solids) are typically formed from one or more non-metals, such as carbon or silicon and oxygen, and are often very hard, rigid, and brittle. These are also very common, notable examples being diamond and quartz respectively.
Weak van der Waals forces also help hold together certain crystals, such as crystalline molecular solids, as well as the interlayer bonding in graphite. Substances such as fats, lipids and wax form molecular bonds because the large molecules do not pack as tightly as atomically bonded solids. This leads to crystals that are much softer and more easily pulled apart or broken. Common examples include chocolates, candles, or viruses. Water ice and dry ice are examples of other materials with molecular bonding. Polymer materials generally will form crystalline regions, but the lengths of the molecules usually prevent complete crystallization, and sometimes polymers are completely amorphous.
Quasicrystals.
A quasicrystal consists of arrays of atoms that are ordered but not strictly periodic. They have many attributes in common with ordinary crystals, such as displaying a discrete pattern in x-ray diffraction, and the ability to form shapes with smooth, flat faces.
Quasicrystals are most famous for their ability to show five-fold symmetry, which is impossible for an ordinary periodic crystal (see crystallographic restriction theorem).
The International Union of Crystallography has redefined the term "crystal" to include both ordinary periodic crystals and quasicrystals ("any solid having an essentially discrete diffraction diagram").
Quasicrystals, first discovered in 1982, are quite rare in practice. Only about 100 solids are known to form quasicrystals, compared to about 400,000 periodic crystals known in 2004. The 2011 Nobel Prize in Chemistry was awarded to Dan Shechtman for the discovery of quasicrystals.
Special properties from anisotropy.
Crystals can have certain special electrical, optical, and mechanical properties that glass and polycrystals normally cannot. These properties are related to the anisotropy of the crystal, i.e. the lack of rotational symmetry in its atomic arrangement. One such property is the piezoelectric effect, where a voltage across the crystal can shrink or stretch it. Another is birefringence, where a double image appears when looking through a crystal. Moreover, various properties of a crystal, including electrical conductivity, electrical permittivity, and Young's modulus, may be different in different directions in a crystal. For example, graphite crystals consist of a stack of sheets, and although each individual sheet is mechanically very strong, the sheets are rather loosely bound to each other. Therefore, the mechanical strength of the material is quite different depending on the direction of stress.
Not all crystals have all of these properties. Conversely, these properties are not quite exclusive to crystals. They can appear in glasses or polycrystals that have been made anisotropic by working or stress—for example, stress-induced birefringence.
Crystallography.
"Crystallography" is the science of measuring the crystal structure (in other words, the atomic arrangement) of a crystal. One widely used crystallography technique is X-ray diffraction. Large numbers of known crystal structures are stored in crystallographic databases.
|
6016
|
42219488
|
https://en.wikipedia.org/wiki?curid=6016
|
Cytosine
|
Cytosine () (symbol C or Cyt) is one of the four nucleotide bases found in DNA and RNA, along with adenine, guanine, and thymine (uracil in RNA). It is a pyrimidine derivative, with a heterocyclic aromatic ring and two substituents attached (an amine group at position 4 and a keto group at position 2). The nucleoside of cytosine is cytidine. In Watson–Crick base pairing, it forms three hydrogen bonds with guanine.
History.
Cytosine was discovered and named by Albrecht Kossel and Albert Neumann in 1894 when it was hydrolyzed from calf thymus tissues. A structure was proposed in 1903, and was synthesized (and thus confirmed) in the laboratory in the same year.
In 1998, cytosine was used in an early demonstration of quantum information processing when Oxford University researchers implemented the Deutsch–Jozsa algorithm on a two qubit nuclear magnetic resonance quantum computer (NMRQC).
In March 2015, NASA scientists reported the formation of cytosine, along with uracil and thymine, from pyrimidine under space-like laboratory conditions, which is of interest because pyrimidine has been found in meteorites although its origin is unknown.
Chemical reactions.
Cytosine can be found as part of DNA, as part of RNA, or as a part of a nucleotide. As cytidine triphosphate (CTP), it can act as a co-factor to enzymes, and can transfer a phosphate to convert adenosine diphosphate (ADP) to adenosine triphosphate (ATP).
In DNA and RNA, cytosine is paired with guanine. However, it is inherently unstable and can change into uracil by spontaneous deamination. This can lead to a point mutation if not repaired by DNA repair enzymes such as uracil glycosylase, which cleaves a uracil in DNA.
Cytosine can also be methylated into 5-methylcytosine by an enzyme called DNA methyltransferase or be methylated and hydroxylated to make 5-hydroxymethylcytosine. The difference in rates of deamination of cytosine and 5-methylcytosine (to uracil and thymine) forms the basis of bisulfite sequencing.
Biological function.
When found third in a codon of RNA, cytosine is synonymous with uracil, as they are interchangeable as the third base.
When cytosine is the second base in a codon, the third base is fully interchangeable. For example, UCU, UCC, UCA and UCG all encode serine, regardless of the third base.
Active enzymatic deamination of cytosine or 5-methylcytosine by the APOBEC family of cytosine deaminases could have both beneficial and detrimental implications on various cellular processes as well as on organismal evolution. The implications of deamination on 5-hydroxymethylcytosine, on the other hand, remains less understood.
Theoretical aspects.
Until October 2021, cytosine had not been found in meteorites, which suggested the first strands of RNA and DNA had to look elsewhere to obtain this building block. Cytosine likely formed within some meteorite parent bodies, but did not persist within these bodies because of an efficient deamination reaction into uracil.
In October 2021, cytosine was announced as having been found in meteorites by researchers in a joint Japan/NASA project that used novel detection methods which avoided damaging nucleotides as they were extracted from meteorites.
|
6019
|
7903804
|
https://en.wikipedia.org/wiki?curid=6019
|
Computational chemistry
|
Computational chemistry is a branch of chemistry that uses computer simulations to assist in solving chemical problems. It uses methods of theoretical chemistry incorporated into computer programs to calculate the structures and properties of molecules, groups of molecules, and solids. The importance of this subject stems from the fact that, with the exception of some relatively recent findings related to the hydrogen molecular ion (dihydrogen cation), achieving an accurate quantum mechanical depiction of chemical systems analytically, or in a closed form, is not feasible. The complexity inherent in the many-body problem exacerbates the challenge of providing detailed descriptions of quantum mechanical systems. While computational results normally complement information obtained by chemical experiments, it can occasionally predict unobserved chemical phenomena.
Overview.
Computational chemistry differs from theoretical chemistry, which involves a mathematical description of chemistry. However, computational chemistry involves the usage of computer programs and additional mathematical skills in order to accurately model various chemical problems. In theoretical chemistry, chemists, physicists, and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions.
Historically, computational chemistry has had two different aspects:
As a result, a whole host of algorithms has been put forward by computational chemists.
History.
Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927, using valence bond theory. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 "Introduction to Quantum Mechanics – with Applications to Chemistry", Eyring, Walter and Kimball's 1944 "Quantum Chemistry", Heitler's 1945 "Elementary Wave Mechanics – with Applications to Quantum Chemistry", and later Coulson's 1952 textbook "Valence", each of which served as primary references for chemists in the decades to follow.
With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed. Theoretical chemists became extensive users of the early digital computers. One significant advancement was marked by Clemens C. J. Roothaan's 1951 paper in the Reviews of Modern Physics. This paper focused largely on the "LCAO MO" approach (Linear Combination of Atomic Orbitals Molecular Orbitals). For many years, it was the second-most cited paper in that journal. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe. The first "ab initio" Hartree–Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960. The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s. The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers. By 1971, when a bibliography of "ab initio" calculations was published, the largest molecules included were naphthalene and azulene. Abstracts of many earlier developments in "ab initio" theory have been published by Schaefer.
In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules, ranging in complexity from butadiene and benzene to ovalene, were generated on computers at Berkeley and Oxford. These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO.
In the early 1970s, efficient "ab initio" computer programs such as ATMOL, Gaussian, IBMOL, and POLYATOM began to be used to speed "ab initio" calculations of molecular orbitals. Of these four programs, only Gaussian, now vastly expanded, is still in use, but many other programs are now in use. At the same time, the methods of molecular mechanics, such as the MM2 force field, were developed, primarily by Norman Allinger.
One of the first mentions of the term "computational chemistry" can be found in the 1970 book "Computers and Their Role in the Physical Sciences" by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality." During the 1970s, widely different methods began to be seen as part of a new emerging discipline of "computational chemistry". The "Journal of Computational Chemistry" was first published in 1980.
Computational chemistry has featured in several Nobel Prize awards, most notably in 1998 and 2013. Walter Kohn, "for his development of the density-functional theory", and John Pople, "for his development of computational methods in quantum chemistry", received the 1998 Nobel Prize in Chemistry. Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems".
Applications.
There are several fields within computational chemistry.
These fields can give rise to several applications as shown below.
Catalysis.
Computational chemistry is a tool for analyzing catalytic systems without doing experiments. Modern electronic structure theory and density functional theory has allowed researchers to discover and understand catalysts. Computational studies apply theoretical chemistry to catalysis research. Density functional theory methods calculate the energies and orbitals of molecules to give models of those structures. Using these methods, researchers can predict values like activation energy, site reactivity and other thermodynamic properties.
Data that is difficult to obtain experimentally can be found using computational methods to model the mechanisms of catalytic cycles. Skilled computational chemists provide predictions that are close to experimental data with proper considerations of methods and basis sets. With good computational data, researchers can predict how catalysts can be improved to lower the cost and increase the efficiency of these reactions.
Drug development.
Computational chemistry is used in drug development to model potentially useful drug molecules and help companies save time and cost in drug development. The drug discovery process involves analyzing data, finding ways to improve current molecules, finding synthetic routes, and testing those molecules. Computational chemistry helps with this process by predicting which experiments would be the most useful to run, without having to conduct the others. Computational methods can also find values that are difficult to determine experimentally, such as the pKa values of compounds. Methods like density functional theory can be used to model drug molecules and find their properties, like their HOMO and LUMO energies and molecular orbitals. Computational chemists also help companies with developing informatics, infrastructure and designs of drugs.
Aside from drug synthesis, drug carriers based on nanomaterials are also researched by computational chemists. Simulation allows researchers to test the effectiveness and stability of drug carriers in modeled environments. Understanding how water interacts with these nanomaterials helps ensure the stability of the material in human bodies. These computational simulations help researchers optimize the material and find the best way to structure these nanomaterials before making them.
Computational chemistry databases.
Databases are useful for both computational and non-computational chemists in research and in verifying the validity of computational methods. Empirical data is used to analyze the error of computational methods against experimental data, helping researchers choose methods and basis sets and giving greater confidence in their results. Computational chemistry databases are also used in testing software or hardware for computational chemistry.
Databases can also use purely calculated data, which relies on calculated values rather than experimental values. Purely calculated data avoids the need to adjust for differing experimental conditions, such as zero-point energy, and can avoid experimental errors for molecules that are difficult to test. Though purely calculated data is often not perfect, identifying issues is often easier for calculated data than for experimental data.
Databases also give public access to information for researchers to use. They contain data that other researchers have found and uploaded to these databases so that anyone can search for them. Researchers use these databases to find information on molecules of interest and learn what can be done with those molecules. Some publicly available chemistry databases include the following.
Methods.
"Ab initio" method.
The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theory, with no inclusion of experimental data – are called "ab initio methods". A theoretical approximation is rigorously defined on first principles and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods must be used, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made).
Ab initio methods need to define a level of theory (the method) and a basis set. A basis set consists of functions centered on the molecule's atoms. These sets are then used to describe molecular orbitals via the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz.
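As an illustration of the LCAO ansatz, the following minimal Python sketch builds a toy molecular orbital from two s-type Gaussian basis functions centered on two atoms. The exponents, mixing coefficients and bond length are arbitrary placeholder values chosen for illustration, not taken from any published basis set.

import numpy as np

def gaussian_s(r, center, alpha):
    # Normalized s-type Gaussian basis function centered on an atom.
    norm = (2.0 * alpha / np.pi) ** 0.75
    return norm * np.exp(-alpha * np.sum((r - center) ** 2))

# Two atomic centers on the z-axis; all numerical values are illustrative only.
centers = [np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.4])]
alphas = [0.5, 0.5]
coeffs = [0.6, 0.6]  # LCAO coefficients; a real calculation obtains these by solving the Hartree-Fock or Kohn-Sham equations

def molecular_orbital(r):
    # LCAO ansatz: psi_MO(r) = sum_i c_i * phi_i(r)
    return sum(c * gaussian_s(r, R, a) for c, R, a in zip(coeffs, centers, alphas))

for z in (-1.0, 0.0, 0.7, 1.4, 2.4):
    print(f"z = {z:5.2f}  psi = {molecular_orbital(np.array([0.0, 0.0, z])):.4f}")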
A common type of "ab initio" electronic structure calculation is the Hartree–Fock method (HF), an extension of molecular orbital theory, where electron-electron repulsions in the molecule are not specifically taken into account; only the electrons' average effect is included in the calculation. As the basis set size increases, the energy and wave function tend towards a limit called the Hartree–Fock limit.
Many types of calculations begin with a Hartree–Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. These types of calculations are termed post–Hartree–Fock methods. By continually improving these methods, scientists can get increasingly closer to perfectly predicting the behavior of atomic and molecular systems under the framework of quantum mechanics, as defined by the Schrödinger equation. To obtain exact agreement with the experiment, it is necessary to include specific terms, some of which are far more important for heavy atoms than lighter ones.
In most cases, the Hartree–Fock wave function occupies a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is inadequate, and several configurations must be used.
The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without full knowledge of the complete surface.
Computational thermochemistry.
A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. To reach that accuracy in an economic way, it is necessary to use a series of post–Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods.
Chemical dynamics.
After the electronic and nuclear variables are separated within the Born–Oppenheimer representation, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms.
The most popular methods for propagating the wave packet associated to the molecular geometry are:
Split operator technique.
How a computational method solves quantum equations affects both the accuracy and the efficiency of the method. The split operator technique is one such method for solving differential equations. In computational chemistry, the split operator technique reduces the computational cost of simulating chemical systems; computational cost refers to how much time computers need to evaluate these systems, which can be days for more complex systems. Quantum systems are difficult and time-consuming to solve directly. Split operator methods help computers calculate these systems quickly by breaking a quantum differential equation into simpler sub-problems: the differential equation is separated into two (or more) equations, one for each operator or group of operators. Once solved, the split equations are combined again to give an easily calculable approximate solution.
This method is used in many fields that require solving differential equations, such as biology. However, the technique comes with a splitting error. For example, consider a differential equation with the following solution.
formula_1
The equation can be split, but the resulting solution will not be exact, only approximate. This is an example of first-order splitting.
formula_2
There are ways to reduce this error, which include taking an average of two split equations.
Another way to increase accuracy is to use higher-order splitting. Usually, second-order splitting is the most that is done, because higher-order splitting requires much more time to calculate and is generally not worth the cost: beyond second order, the methods become difficult to implement and, despite their higher accuracy, are not practical for solving these differential equations.
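The splitting error can be made concrete with a small numerical sketch. The code below, a toy example only, compares first-order and second-order (Strang) splitting of a matrix exponential against the exact result; the two small matrices are arbitrary non-commuting placeholders standing in for, say, kinetic and potential operators, not a real molecular Hamiltonian.

import numpy as np
from scipy.linalg import expm

# Two small non-commuting operators; the values are arbitrary toy choices.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, -1.0]])
h = 0.1  # step size

exact = expm(h * (A + B))                                      # exact one-step propagator
first = expm(h * A) @ expm(h * B)                              # first-order (Lie) splitting
second = expm(0.5 * h * A) @ expm(h * B) @ expm(0.5 * h * A)   # second-order (Strang) splitting

print("first-order splitting error :", np.linalg.norm(first - exact))
print("second-order splitting error:", np.linalg.norm(second - exact))

In this toy setting, halving the step size should shrink the first-order error per step roughly fourfold and the second-order error roughly eightfold, which illustrates why second-order splitting is usually the practical sweet spot.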
Computational chemists spend much time making systems calculated with the split operator technique more accurate while minimizing the computational cost. Choosing and tuning such methods is a major challenge for chemists trying to simulate molecules or chemical environments.
Density functional methods.
Density functional theory (DFT) methods are often considered to be "ab initio methods" for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are termed hybrid functional methods.
Semi-empirical methods.
Semi-empirical quantum chemistry methods are based on the Hartree–Fock method formalism, but make many approximations and obtain some parameters from empirical data. They were very important in computational chemistry from the 1960s to the 1990s, especially for treating large molecules where the full Hartree–Fock method without the approximations was too costly. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods.
Primitive semi-empirical methods were designed even earlier, in which the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann. Sometimes, Hückel methods are referred to as "completely empirical" because they do not derive from a Hamiltonian. Yet, the term "empirical methods", or "empirical force fields", is usually used to describe molecular mechanics.
Molecular mechanics.
In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use one classical expression for the energy of a compound, for instance, the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or "ab initio" calculations.
The database of compounds used for parameterization, together with the resulting set of parameters and functions (called the force field), is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance proteins, would be expected to be relevant only when describing other molecules of the same class. These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules.
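As a minimal sketch of the kind of classical energy expression a force field contains, the function below evaluates a single harmonic bond-stretch term. The force constant and equilibrium distance are placeholder values chosen for illustration, not parameters from any published force field.

def harmonic_bond_energy(r, r0, k):
    # Harmonic bond-stretch term: E = 0.5 * k * (r - r0)^2
    return 0.5 * k * (r - r0) ** 2

# Placeholder parameters: equilibrium length in angstroms, force constant in kcal/(mol*angstrom^2).
r0, k = 1.09, 340.0

for r in (1.00, 1.05, 1.09, 1.15, 1.20):
    print(f"r = {r:.2f} A  E = {harmonic_bond_energy(r, r0, k):7.3f} kcal/mol")

A complete force field sums many such terms (bonds, angles, torsions, and non-bonded interactions), with all constants obtained beforehand from experimental data or ab initio calculations, as described above.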
Molecular dynamics.
Molecular dynamics (MD) uses either quantum mechanics, molecular mechanics or a mixture of both to calculate forces, which are then used to solve Newton's laws of motion and examine the time-dependent behavior of systems. The result of a molecular dynamics simulation is a trajectory that describes how the position and velocity of particles vary with time. The phase point of a system, described by the positions and momenta of all its particles at a given time, determines the next phase point in time by integrating Newton's laws of motion.
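A minimal sketch of this integration loop is shown below, assuming a single particle in a one-dimensional harmonic potential and the velocity Verlet scheme (one common integrator for Newton's equations of motion); the mass, force constant and time step are arbitrary reduced units chosen only for illustration.

k, m, dt = 1.0, 1.0, 0.01   # toy force constant, mass and time step in reduced units

def force(x):
    # Force from a harmonic potential V(x) = 0.5 * k * x^2
    return -k * x

x, v = 1.0, 0.0             # initial position and velocity
f = force(x)

for step in range(1000):
    # Velocity Verlet: advance the position, recompute the force, then advance the velocity.
    x += v * dt + 0.5 * (f / m) * dt ** 2
    f_new = force(x)
    v += 0.5 * (f + f_new) / m * dt
    f = f_new

print("position and velocity after 1000 steps:", x, v)

The sequence of (x, v) values over the steps is the trajectory mentioned above; a real MD code does the same thing for thousands to millions of interacting particles in three dimensions.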
Monte Carlo.
Monte Carlo (MC) generates configurations of a system by making random changes to the positions of its particles, together with their orientations and conformations where appropriate. It is a random sampling method, which makes use of so-called "importance sampling". Importance sampling methods preferentially generate low-energy states, which enables properties to be calculated accurately. The potential energy of each configuration of the system can be calculated, together with the values of other properties, from the positions of the atoms.
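A hedged sketch of the Metropolis flavour of this idea is given below for a single particle in a toy one-dimensional potential; the potential, temperature and step size are arbitrary choices made only for illustration.

import numpy as np

rng = np.random.default_rng(0)

def potential(x):
    # Toy potential energy surface.
    return 0.5 * x ** 2

beta = 1.0        # inverse temperature 1/(kB*T) in reduced units
x = 0.0
samples = []

for _ in range(20000):
    x_trial = x + rng.uniform(-0.5, 0.5)          # random trial move
    dE = potential(x_trial) - potential(x)
    # Metropolis criterion: accept downhill moves, accept uphill moves with probability exp(-beta * dE).
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        x = x_trial
    samples.append(x)

print("average potential energy of sampled configurations:", np.mean([potential(s) for s in samples]))

Averaging the potential energy (or any other property) over the sampled configurations is how the properties mentioned above are calculated; for this toy harmonic system the average potential energy should approach about 0.5 in these reduced units.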
Quantum mechanics/molecular mechanics (QM/MM).
QM/MM is a hybrid method that attempts to combine the accuracy of quantum mechanics with the speed of molecular mechanics. It is useful for simulating very large molecules such as enzymes.
Quantum Computational Chemistry.
Quantum computational chemistry aims to exploit quantum computing to simulate chemical systems, distinguishing itself from the QM/MM (Quantum Mechanics/Molecular Mechanics) approach. While QM/MM uses a hybrid approach, combining quantum mechanics for a portion of the system with classical mechanics for the remainder, quantum computational chemistry exclusively uses quantum computing methods to represent and process information, such as Hamiltonian operators.
Conventional computational chemistry methods often struggle with the complex quantum mechanical equations, particularly due to the exponential growth of a quantum system's wave function. Quantum computational chemistry addresses these challenges using quantum computing methods, such as qubitization and quantum phase estimation, which are believed to offer scalable solutions.
Qubitization involves adapting the Hamiltonian operator for more efficient processing on quantum computers, enhancing the simulation's efficiency. Quantum phase estimation, on the other hand, assists in accurately determining energy eigenstates, which are critical for understanding the quantum system's behavior.
While these techniques have advanced the field of computational chemistry, especially in the simulation of chemical systems, their practical application is currently limited mainly to smaller systems due to technological constraints. Nevertheless, these developments may lead to significant progress towards achieving more precise and resource-efficient quantum chemistry simulations.
Computational costs in chemistry algorithms.
Computational cost and algorithmic complexity in chemistry help determine how feasible it is to simulate and predict chemical phenomena, and they guide which algorithms or computational methods to use when solving chemical problems. This section focuses on the scaling of computational complexity with molecule size and details the algorithms commonly used in both domains.
In quantum chemistry, particularly, the complexity can grow exponentially with the number of electrons involved in the system. This exponential growth is a significant barrier to simulating large or complex systems accurately.
Advanced algorithms in both fields strive to balance accuracy with computational efficiency. For instance, in MD, methods like Verlet integration or Beeman's algorithm are employed for their computational efficiency. In quantum chemistry, hybrid methods combining different computational approaches (like QM/MM) are increasingly used to tackle large biomolecular systems.
Algorithmic complexity examples.
The following list illustrates the impact of computational complexity on algorithms used in chemical computations. It is important to note that while this list provides key examples, it is not comprehensive and serves as a guide to understanding how computational demands influence the selection of specific computational methods in chemistry.
Molecular dynamics.
Algorithm.
Solves Newton's equations of motion for atoms and molecules.
Complexity.
The standard pairwise interaction calculation in MD leads to a complexity of formula_3 for formula_4 particles. This is because each particle interacts with every other particle, resulting in formula_5 interactions. Advanced algorithms, such as the Ewald summation or the Fast Multipole Method, reduce this to formula_6 or even formula_7 by grouping distant particles and treating them as a single entity or using clever mathematical approximations.
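To make the quadratic scaling concrete, the toy sketch below counts the interactions evaluated by a naive double loop over particle pairs; the count grows as N(N-1)/2, which is the source of the quadratic cost discussed above. The coordinates are random placeholders rather than a real molecular system.

import numpy as np

rng = np.random.default_rng(1)

def count_pair_evaluations(positions):
    # Naive double loop over unique pairs, as in a direct pairwise force evaluation.
    n = len(positions)
    pairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            _distance = np.linalg.norm(positions[i] - positions[j])  # stand-in for an interaction evaluation
            pairs += 1
    return pairs

for n in (10, 100, 1000):
    pos = rng.random((n, 3))
    print(f"N = {n:5d}  pair evaluations = {count_pair_evaluations(pos):7d}  N*(N-1)/2 = {n * (n - 1) // 2}")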
Quantum mechanics/molecular mechanics (QM/MM).
Algorithm.
Combines quantum mechanical calculations for a small region with molecular mechanics for the larger environment.
Complexity.
The complexity of QM/MM methods depends on both the size of the quantum region and the method used for quantum calculations. For example, if a Hartree-Fock method is used for the quantum part, the complexity can be approximated as formula_8, where formula_9 is the number of basis functions in the quantum region. This complexity arises from the need to solve a set of coupled equations iteratively until self-consistency is achieved.
Hartree-Fock method.
Algorithm.
Finds a single Fock state that minimizes the energy.
Complexity.
NP-hard or NP-complete, as demonstrated by embedding instances of the Ising model into Hartree-Fock calculations; this proof of NP-hardness or NP-completeness comes from embedding such known hard problems into the Hartree-Fock formalism. In practice, the Hartree-Fock method involves solving the Roothaan-Hall equations, which scale as formula_10 to formula_7 depending on implementation, with formula_4 being the number of basis functions. The computational cost mainly comes from evaluating and transforming the two-electron integrals.
Density functional theory.
Algorithm.
Investigates the electronic structure or nuclear structure of many-body systems such as atoms, molecules, and the condensed phases.
Complexity.
Traditional implementations of DFT typically scale as formula_10, mainly due to the need to diagonalize the Kohn-Sham matrix. The diagonalization step, which finds the eigenvalues and eigenvectors of the matrix, contributes most to this scaling. Recent advances in DFT aim to reduce this complexity through various approximations and algorithmic improvements.
Standard CCSD and CCSD(T) method.
Algorithm.
CCSD and CCSD(T) methods are advanced electronic structure techniques involving single, double, and in the case of CCSD(T), perturbative triple excitations for calculating electronic correlation effects.
Complexity.
CCSD.
Scales as formula_14 where formula_9 is the number of basis functions. This intense computational demand arises from the inclusion of single and double excitations in the electron correlation calculation.
CCSD(T).
With the addition of perturbative triples, the complexity increases to formula_16. This elevated complexity restricts practical usage to smaller systems, typically up to 20-25 atoms in conventional implementations.
Linear-scaling CCSD(T) method.
Algorithm.
An adaptation of the standard CCSD(T) method using local natural orbitals (NOs) to significantly reduce the computational burden and enable application to larger systems.
Complexity.
Achieves linear scaling with system size, a major improvement over the steep polynomial scaling of the conventional method. This advancement allows practical application to molecules of up to 100 atoms with reasonable basis sets, marking a significant step forward in computational chemistry's capability to handle larger systems with high accuracy.
Proving the complexity classes for algorithms involves a combination of mathematical proof and computational experiments. For example, in the case of the Hartree-Fock method, the proof of NP-hardness is a theoretical result derived from complexity theory, specifically through reductions from known NP-hard problems.
For other methods like MD or DFT, the computational complexity is often empirically observed and supported by algorithm analysis. In these cases, the proof of correctness is less about formal mathematical proofs and more about consistently observing the computational behaviour across various systems and implementations.
Accuracy.
Computational chemistry is not an "exact" description of real-life chemistry, as the mathematical and physical models of nature can only provide an approximation. However, the majority of chemical phenomena can be described to a certain degree in a qualitative or approximate quantitative computational scheme.
Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost.
Accuracy can always be improved with greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational cost of fully relativistic methods. This complicates the study of molecules interacting with high-atomic-mass atoms, such as transition metals, and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of small molecules that contain up to about 40 electrons with errors for energies of less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometers and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen atoms is computationally tractable by more approximate methods such as density functional theory (DFT).
There is some dispute within the field whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that use what is called molecular mechanics (MM). In QM-MM methods, small parts of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM).
Software packages.
Many self-sufficient computational chemistry software packages exist. Some include many methods covering a wide range, while others concentrate on a very specific range or even on a single method. Details of most of them can be found in:
Crash (Ballard novel)
Crash is a novel by British author J. G. Ballard, first published in 1973 with cover designed by Bill Botten. It follows a group of car-crash fetishists who, inspired by the famous crashes of celebrities, become sexually aroused by staging and participating in car accidents.
The novel was released to divided critical reception, with many reviewers horrified by its provocative content. It was adapted into a controversial 1996 film of the same name by David Cronenberg.
Synopsis.
The story is told through the eyes of narrator James Ballard, named after the author himself, but it centers on the sinister figure of Dr. Robert Vaughan, a former TV scientist turned "nightmare angel of the highways". James meets Vaughan after being injured in a car crash near London Airport. Gathering around Vaughan is a group of alienated people, all of them former crash victims, who follow him in his pursuit to re-enact the crashes of Hollywood celebrities such as Jayne Mansfield and James Dean, in order to experience what the narrator calls "a new sexuality, born from a perverse technology". Vaughan's ultimate fantasy is to die in a head-on collision with movie star Elizabeth Taylor.
Development.
The Papers of J. G. Ballard at the British Library include two revised drafts of "Crash" (Add MS 88938/3/8). Scanned extracts from Ballard's drafts are included in "Crash: The Collector's Edition," ed. Chris Beckett.
In 1971, Harley Cokeliss directed a short film entitled "Crash!" based on a chapter in J. G. Ballard's book "The Atrocity Exhibition", where Ballard is featured, talking about the ideas in his book. British actress Gabrielle Drake appeared as a passenger and car-crash victim. Ballard later developed the idea, resulting in "Crash". In his draft of the novel he mentioned Drake by name, but references to her were removed from the published version.
Interpretation.
"Crash" has been difficult to characterize as a novel. At some points in his career, Ballard claimed that "Crash" was a "cautionary tale", a view that he would later regret, asserting that it is in fact "a psychopathic hymn. But it is a psychopathic hymn which has a point". Likewise, Ballard previously characterized it a science fiction novel, a position he would later take back.
Jean Baudrillard wrote an analysis of "Crash" in "Simulacra and Simulation" in which he declared it "the first great novel of the universe of simulation". He noted how the fetish in the story conflates the functionality of the automobiles with that of the human body, and how the characters' injuries and the damage to the vehicles are used as equivalent signs. To him, this hyperfunctionality leads to the dysfunction in the story. He quoted extensively to illustrate that the language of the novel employs plain, mechanical terms for the parts of the automobile and proper, medical language for human sex organs and acts. He interpreted the story as showing a merger between technology, sexuality, and death, arguing the point further by observing that Vaughan's character takes and keeps photos of the car crashes and the mutilated bodies involved. Baudrillard stated that there is no moral judgment about the events within the novel, though Ballard himself intended it as a warning against a cultural trend.
The story can be classed as dystopic.
Critical reception.
The novel received divided reviews when originally published. One publisher's reader returned the verdict "This author is beyond psychiatric help. Do Not Publish!" A 1973 review in "The New York Times" was equally horrified: ""Crash" is, hands-down, the most repulsive book I've yet to come across."
However, retrospective opinion now considers "Crash" to be one of Ballard's best and most challenging works. Reassessing "Crash" in "The Guardian", Zadie Smith wrote, ""Crash" is an existential book about how "everybody uses everything". How everything uses everybody. And yet it is not a hopeless vision." On Ballard's legacy, she writes: "In Ballard's work there is always this mix of futuristic dread and excitement, a sweet spot where dystopia and utopia converge. For we cannot say we haven't got precisely what we dreamed of, what we always wanted, so badly."
References in popular art.
Music.
The Normal's 1978 song "Warm Leatherette" was inspired by the novel, and later covered in 1980 by Grace Jones. Similarly inspired was "Miss the Girl," a 1983 single by The Creatures.
The Manic Street Preachers' song "Mausoleum" from 1994's "The Holy Bible" contains the famous Ballard quote about his reasons for writing the book, "I wanted to rub the human face in its own vomit. I wanted to force it to look in the mirror." John Foxx's album "Metamatic" contains songs that have Ballardian themes, such as "No-one Driving".
Other film adaptations.
An apparently unauthorized adaptation of "Crash" called "Nightmare Angel" was filmed in 1986 by Susan Emerling and Zoe Beloff. This short film bears the credit "Inspired by J. G. Ballard".
C (programming language)
C is a general-purpose programming language. It was created in the 1970s by Dennis Ritchie and remains widely used and influential. By design, C gives the programmer relatively direct access to the features of the typical CPU architecture, customized for the target instruction set. It has been and continues to be used to implement operating systems (especially kernels), device drivers, and protocol stacks, but its use in application software has been decreasing. C is used on computers that range from the largest supercomputers to the smallest microcontrollers and embedded systems.
A successor to the programming language B, C was originally developed at Bell Labs by Ritchie between 1972 and 1973 to construct utilities running on Unix. It was applied to re-implementing the kernel of the Unix operating system. During the 1980s, C gradually gained popularity. It has become one of the most widely used programming languages, with C compilers available for practically all modern computer architectures and operating systems. The book "The C Programming Language", co-authored by the original language designer, served for many years as the "de facto" standard for the language. C has been standardized since 1989 by the American National Standards Institute (ANSI) and, subsequently, jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
C is an imperative procedural language, supporting structured programming, lexical variable scope, and recursion, with a static type system. It was designed to be compiled to provide low-level access to memory and language constructs that map efficiently to machine instructions, all with minimal runtime support. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. A standards-compliant C program written with portability in mind can be compiled for a wide variety of computer platforms and operating systems with few changes to its source code.
Although neither C nor its standard library provide some popular features found in other languages, it is flexible enough to support them. For example, object orientation and garbage collection are provided by external libraries GLib Object System and Boehm garbage collector, respectively.
Since 2000, C has consistently ranked among the top four languages in the TIOBE index, a measure of the popularity of programming languages. Originally, C was popular mostly due to being easier to use than other programming languages. Currently, C is popular mostly due to speed, efficiency, low memory usage, and simplicity. C uses approximately 80 times less energy than Python, Perl, and PHP. On average, C uses less energy than Fortran, despite Fortran being faster on average.
Characteristics.
The C language exhibits the following characteristics:
"Hello, world" example.
The "Hello, World!" program example that appeared in the first edition of "K&R" has become the model for an introductory program in most programming textbooks. The program prints "hello, world" to the standard output.
The original version was:
main()
{
    printf("hello, world\n");
}
A more modern version is:
#include <stdio.h>

int main(void)
{
    printf("hello, world\n");
}
The first line is a preprocessor directive, indicated by codice_10, which causes the preprocessor to replace that line of code with the text of the codice_11 header file, which contains declarations for input and output functions including codice_12. The angle brackets around codice_11 indicate that the header file can be located using a search strategy that selects header files provided with the compiler over files with the same name that may be found in project-specific directories.
The next code line declares the entry point function codice_14. The run-time environment calls this function to begin program execution. The type specifier codice_15 indicates that the function returns an integer value. The codice_7 parameter list indicates that the function takes no arguments. The run-time environment actually passes two arguments (typed codice_15 and codice_18), but this implementation ignores them. The ISO C standard (section 5.1.2.2.1) requires both forms of codice_14 to be supported, a special treatment not afforded to any other function.
The opening curly brace indicates the beginning of the code that defines the function.
The next line of code calls (diverts execution to) the C standard library function codice_12 with the address of the first character of a null-terminated string specified as a string literal. The text codice_20 is an escape sequence that denotes the newline character which when output in a terminal results in moving the cursor to the beginning of the next line. Even though codice_12 returns an codice_15 value, it is silently discarded. The semicolon codice_23 terminates the call statement.
The closing curly brace indicates the end of the codice_24 function. Prior to C99, an explicit codice_25 statement was required at the end of codice_24 function, but since C99, the codice_24 function (unlike other functions) implicitly returns codice_28 upon reaching its final closing curly brace.
History.
Early developments.
The origin of C is closely tied to the development of the Unix operating system, originally implemented in assembly language on a PDP-7 by Dennis Ritchie and Ken Thompson, incorporating several ideas from colleagues. Eventually, they decided to port the operating system to a PDP-11. The original PDP-11 version of Unix was also developed in assembly language.
B.
Thompson wanted a programming language for developing utilities for the new platform. He first tried writing a Fortran compiler, but he soon gave up the idea and instead created a cut-down version of the recently developed systems programming language called BCPL. The official description of BCPL was not available at the time, and Thompson modified the syntax to be less 'wordy' and similar to a simplified ALGOL known as SMALGOL. He called the result "B", describing it as "BCPL semantics with a lot of SMALGOL syntax". Like BCPL, B had a bootstrapping compiler to facilitate porting to new machines. Ultimately, few utilities were written in B because it was too slow and could not take advantage of PDP-11 features such as byte addressability.
Unlike BCPL's codice_29 marking comments up to the end of the line, B adopted codice_30 as the comment delimiter, more akin to PL/1, and allowing comments to appear in the middle of lines. (BCPL's comment style would be reintroduced in C++.)
New B and first C release.
In 1971 Ritchie started to improve B, to use the features of the more-powerful PDP-11. A significant addition was a character data type. He called this "New B" (NB). Thompson started to use NB to write the Unix kernel, and his requirements shaped the direction of the language development.
Through to 1972, richer types were added to the NB language. NB had arrays of codice_15 and codice_32, and to these types were added pointers, the ability to generate pointers to other types, arrays of all types, and types to be returned from functions. Arrays within expressions were effectively treated as pointers. A new compiler was written, and the language was renamed C.
The C compiler and some utilities made with it were included in Version 2 Unix, which is also known as Research Unix.
Structures and Unix kernel re-write.
At Version 4 Unix, released in November 1973, the Unix kernel was extensively re-implemented in C. By this time, the C language had acquired some powerful features such as codice_33 types.
The preprocessor was introduced around 1973 at the urging of Alan Snyder and also in recognition of the usefulness of the file-inclusion mechanisms available in BCPL and PL/I. Its original version provided only included files and simple string replacements: codice_10 and codice_35 of parameterless macros. Soon after that, it was extended, mostly by Mike Lesk and then by John Reiser, to incorporate macros with arguments and conditional compilation.
Unix was one of the first operating system kernels implemented in a language other than assembly. Earlier instances include the Multics system (which was written in PL/I) and Master Control Program (MCP) for the Burroughs B5000 (which was written in ALGOL) in 1961. In around 1977, Ritchie and Stephen C. Johnson made further changes to the language to facilitate portability of the Unix operating system. Johnson's Portable C Compiler served as the basis for several implementations of C on new platforms.
K&R C.
In 1978 Brian Kernighan and Dennis Ritchie published the first edition of "The C Programming Language". Known as "K&R" from the initials of its authors, the book served for many years as an informal specification of the language. The version of C that it describes is commonly referred to as "K&R C". As this was released in 1978, it is now also referred to as "C78". The second edition of the book covers the later ANSI C standard, described below.
"K&R" introduced several language features:
Even after the publication of the 1989 ANSI standard, for many years K&R C was still considered the "lowest common denominator" to which C programmers restricted themselves when maximum portability was desired, since many older compilers were still in use, and because carefully written K&R C code can be legal Standard C as well.
Although later versions of C require functions to have an explicit type declaration, K&R C only requires functions that return a type other than codice_15 to be declared before use. Functions used without prior declaration were presumed to return codice_15.
For example:
long long_function();

calling_function()
{
    long longvar;
    register intvar;

    longvar = long_function();
    if (longvar > 1)
        intvar = 0;
    else
        intvar = int_function();
    return intvar;
}
The declaration of long_function is required since it returns long rather than the default int. The function int_function can be called even though it has not been declared, because it is presumed to return int. Also, the variable intvar does not need to be declared as int, since int is the default type for the register keyword.
Since function declarations did not include information about arguments, type checks were not performed, although some compilers would issue a warning if different calls to a function used different numbers or types of arguments. Tools such as Unix's lint utility were developed that (among other things) checked for consistency of function use across multiple source files.
In the years following the publication of K&R C, several features were added to the language, supported by compilers from AT&T (in particular PCC) and other vendors. These included:
The popularity of the language, lack of agreement on standard library interfaces, and lack of compliance to the K&R specification, lead to standardization efforts.
ANSI C and ISO C.
During the late 1970s and 1980s, versions of C were implemented for a wide variety of mainframe computers, minicomputers, and microcomputers, including the IBM PC, as its popularity began to increase significantly.
In 1983 the American National Standards Institute (ANSI) formed a committee, X3J11, to establish a standard specification of C. X3J11 based the C standard on the Unix implementation; however, the non-portable portion of the Unix C library was handed off to the IEEE working group 1003 to become the basis for the 1988 POSIX standard. In 1989, the C standard was ratified as ANSI X3.159-1989 "Programming Language C". This version of the language is often referred to as ANSI C, Standard C, or sometimes C89.
In 1990 the ANSI C standard (with formatting changes) was adopted by the International Organization for Standardization (ISO) as ISO/IEC 9899:1990, which is sometimes called C90. Therefore, the terms "C89" and "C90" refer to the same programming language.
ANSI, like other national standards bodies, no longer develops the C standard independently, but defers to the international C standard, maintained by the working group ISO/IEC JTC1/SC22/WG14. National adoption of an update to the international standard typically occurs within a year of ISO publication.
One of the aims of the C standardization process was to produce a superset of K&R C, incorporating many of the subsequently introduced unofficial features. The standards committee also included several additional features such as function prototypes (borrowed from C++), codice_7 pointers, support for international character sets and locales, and preprocessor enhancements. Although the syntax for parameter declarations was augmented to include the style used in C++, the K&R interface continued to be permitted, for compatibility with existing source code.
C89 is supported by current C compilers, and most modern C code is based on it. Any program written only in Standard C and without any hardware-dependent assumptions will run correctly on any platform with a conforming C implementation, within its resource limits. Without such precautions, programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such as GUI libraries, or to a reliance on compiler- or platform-specific attributes such as the exact size of data types and byte endianness.
In cases where code must be compilable by either standard-conforming or K&R C-based compilers, the codice_54 macro can be used to split the code into Standard and K&R sections to prevent the use on a K&R C-based compiler of features available only in Standard C.
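For illustration, the following minimal sketch assumes that the macro in question is __STDC__, which conforming Standard C compilers define, and uses a hypothetical function named add:

#ifdef __STDC__
extern int add(int a, int b);   /* Standard C prototype */
#else
extern int add();               /* K&R-style declaration without parameter types */
#endif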
After the ANSI/ISO standardization process, the C language specification remained relatively static for several years. In 1995, Normative Amendment 1 to the 1990 C standard (ISO/IEC 9899/AMD1:1995, known informally as C95) was published, to correct some details and to add more extensive support for international character sets.
C99.
The C standard was further revised in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999, which is commonly referred to as "C99". It has since been amended three times by Technical Corrigenda.
C99 introduced several new features, including inline functions, several new data types (including codice_55 and a codice_56 type to represent complex numbers), variable-length arrays and flexible array members, improved support for IEEE 754 floating point, support for variadic macros (macros of variable arity), and support for one-line comments beginning with codice_57, as in BCPL or C++. Many of these had already been implemented as extensions in several C compilers.
C99 is for the most part backward compatible with C90, but is stricter in some ways; in particular, a declaration that lacks a type specifier no longer has codice_15 implicitly assumed. A standard macro codice_59 is defined with value codice_60 to indicate that C99 support is available. GCC, Solaris Studio, and other C compilers now support many or all of the new features of C99. The C compiler in Microsoft Visual C++, however, implements the C89 standard and those parts of C99 that are required for compatibility with C++11.
In addition, the C99 standard requires support for identifiers using Unicode in the form of escaped characters (e.g. or ) and suggests support for raw Unicode names.
C11.
Work began in 2007 on another revision of the C standard, informally called "C1X" until its official publication of ISO/IEC 9899:2011 on December 8, 2011. The C standards committee adopted guidelines to limit the adoption of new features that had not been tested by existing implementations.
The C11 standard adds numerous new features to C and the library, including type generic macros, anonymous structures, improved Unicode support, atomic operations, multi-threading, and bounds-checked functions. It also makes some portions of the existing C99 library optional, and improves compatibility with C++. The standard macro codice_59 is defined as codice_62 to indicate that C11 support is available.
C17.
C17 is an informal name for ISO/IEC 9899:2018, a standard for the C programming language published in June 2018. It introduces no new language features, only technical corrections, and clarifications to defects in C11. The standard macro codice_59 is defined as codice_64 to indicate that C17 support is available.
C23.
C23 is an informal name for the current major C language standard revision and was known as "C2X" through most of its development. It builds on past releases, introducing features like new keywords, types including codice_65 and codice_66, and expansions to the standard library.
C23 was published in October 2024 as ISO/IEC 9899:2024. The standard macro codice_59 is defined as codice_68 to indicate that C23 support is available.
C2Y.
C2Y is an informal name for the next major C language standard revision, after C23 (C2X), that is hoped to be released later in the 2020s, hence the '2' in "C2Y". An early working draft of C2Y was released in February 2024 as N3220 by the working group ISO/IEC JTC1/SC22/WG14.
Embedded C.
Historically, embedded C programming requires non-standard extensions to the C language to support exotic features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations.
In 2008, the C Standards Committee published a technical report extending the C language to address these issues by providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces, and basic I/O hardware addressing.
Definition.
C has a formal grammar specified by the C standard. Line endings are generally not significant in C; however, line boundaries do have significance during the preprocessing phase. Comments may appear either between the delimiters codice_69 and codice_70, or (since C99) following codice_57 until the end of the line. Comments delimited by codice_69 and codice_70 do not nest, and these sequences of characters are not interpreted as comment delimiters if they appear inside string or character literals.
C source files contain declarations and function definitions. Function definitions, in turn, contain declarations and statements. Declarations either define new types using keywords such as codice_33, codice_51, and codice_76, or assign types to and perhaps reserve storage for new variables, usually by writing the type followed by the variable name. Keywords such as codice_32 and codice_15 specify built-in types. Sections of code are enclosed in braces (codice_79 and codice_80, sometimes called "curly brackets") to limit the scope of declarations and to act as a single statement for control structures.
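A minimal illustrative fragment (not drawn from the standard) showing both comment styles, a structure declaration, and brace-delimited scope:

#include <stdio.h>

/* a block comment; these do not nest */
// a line comment, permitted since C99

struct point { int x; int y; };        /* declares a new structure type */

int main(void)
{
    struct point p = {1, 2};           /* defines and initializes a variable of that type */
    {                                  /* braces limit the scope of declarations */
        int sum = p.x + p.y;
        printf("%d\n", sum);
    }
    return 0;
}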
As an imperative language, C uses "statements" to specify actions. The most common statement is an "expression statement", consisting of an expression to be evaluated, followed by a semicolon; as a side effect of the evaluation, functions may be called and variables assigned new values. To modify the normal sequential execution of statements, C provides several control-flow statements identified by reserved keywords. Structured programming is supported by codice_1 ... [codice_82] conditional execution and by codice_3 ... codice_4, codice_4, and codice_2 iterative execution (looping). The codice_2 statement has separate initialization, testing, and reinitialization expressions, any or all of which can be omitted. codice_88 and codice_89 can be used within the loop. Break is used to leave the innermost enclosing loop statement and continue is used to skip to its reinitialisation. There is also a non-structured codice_90 statement which branches directly to the designated label within the function. codice_5 selects a codice_92 to be executed based on the value of an integer expression. Different from many other languages, control-flow will fall through to the next codice_92 unless terminated by a codice_88.
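The following short program is an illustrative sketch (not taken from the standard) of these control-flow statements, including the fall-through behaviour of switch:

#include <stdio.h>

int main(void)
{
    int n = 0;
    switch (n) {
    case 0:                    /* no break here: execution falls through to case 1 */
    case 1:
        puts("zero or one");
        break;                 /* break leaves the switch */
    default:
        puts("something else");
    }

    for (int i = 0; i < 3; i++) {   /* initialization, test, and reinitialization expressions */
        if (i == 1)
            continue;               /* skip the rest of this iteration */
        printf("%d\n", i);
    }
    return 0;
}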
Expressions can use a variety of built-in operators and may contain function calls. The order in which arguments to functions and operands to most operators are evaluated is unspecified. The evaluations may even be interleaved. However, all side effects (including storage to variables) will occur before the next "sequence point"; sequence points include the end of each expression statement, and the entry to and return from each function call. Sequence points also occur during evaluation of expressions containing certain operators (codice_95, codice_96, codice_97 and the comma operator). This permits a high degree of object code optimization by the compiler, but requires C programmers to take more care to obtain reliable results than is needed for other programming languages.
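A small sketch (illustrative only, with hypothetical functions f and g) of unspecified evaluation order and of the sequence point introduced by the && operator:

#include <stdio.h>

static int f(void) { puts("f"); return 1; }
static int g(void) { puts("g"); return 2; }

int main(void)
{
    int sum = f() + g();   /* the order in which f() and g() are called is unspecified */
    printf("%d\n", sum);

    /* && is a sequence point: f() and its side effects complete before g() runs,
       and g() is not called at all if f() returns 0 */
    if (f() && g())
        puts("both nonzero");
    return 0;
}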
Kernighan and Ritchie say in the Introduction of "The C Programming Language": "C, like any other language, has its blemishes. Some of the operators have the wrong precedence; some parts of the syntax could be better." The C standard did not attempt to correct many of these blemishes, because of the impact of such changes on already existing software.
Character set.
The basic C source character set includes the following characters:
The "newline" character indicates the end of a text line; it need not correspond to an actual single character, although for convenience C treats it as such.
The POSIX standard mandates a portable character set which adds a few characters (notably "@") to the basic C source character set. Neither standard prescribes a particular value encoding: both ASCII and EBCDIC comply, since each includes at least the basic characters, even though they assign different encoded values to those characters.
Additional multi-byte encoded characters may be used in string literals, but they are not entirely portable. Since C99 multi-national Unicode characters can be embedded portably within C source text by using codice_105 or codice_106 encoding (where codice_107 denotes a hexadecimal character).
The basic C execution character set contains the same characters, along with representations for alert, backspace, and carriage return. Run-time support for extended character sets has increased with each revision of the C standard.
Reserved words.
All versions of C have reserved words that are case sensitive. As reserved words, they cannot be used for variable names.
C89 has 32 reserved words: auto, break, case, char, const, continue, default, do, double, else, enum, extern, float, for, goto, if, int, long, register, return, short, signed, sizeof, static, struct, switch, typedef, union, unsigned, void, volatile and while.
C99 added five more reserved words (‡ indicates an alternative spelling alias for a C23 keyword): _Bool‡, _Complex, _Imaginary, inline and restrict.
C11 added seven more reserved words (‡ indicates an alternative spelling alias for a C23 keyword): _Alignas‡, _Alignof‡, _Atomic, _Generic, _Noreturn, _Static_assert‡ and _Thread_local‡.
C23 reserved fifteen more words: alignas, alignof, bool, constexpr, false, nullptr, static_assert, thread_local, true, typeof, typeof_unqual, _BitInt, _Decimal32, _Decimal64 and _Decimal128.
Most of the recently reserved words begin with an underscore followed by a capital letter, because identifiers of that form were previously reserved by the C standard for use only by implementations. Since existing program source code should not have been using these identifiers, it would not be affected when C implementations started supporting these extensions to the programming language. Some standard headers do define more convenient synonyms for underscored identifiers. Some of those words were added as keywords with their conventional spelling in C23 and the corresponding macros were removed.
Prior to C89, codice_167 was reserved as a keyword. In the second edition of their book "The C Programming Language", which describes what became known as C89, Kernighan and Ritchie wrote, "The ... [keyword] codice_167, formerly reserved but never used, is no longer reserved." and "The stillborn codice_167 keyword is withdrawn."
Operators.
C supports a rich set of operators, which are symbols used within an expression to specify the manipulations to be performed while evaluating that expression. C has operators for:
C uses the operator codice_175 (used in mathematics to express equality) to indicate assignment, following the precedent of Fortran and PL/I, but unlike ALGOL and its derivatives. C uses the operator codice_185 to test for equality. The similarity between the operators for assignment and equality may result in the accidental use of one in place of the other, and in many cases the mistake does not produce an error message (although some compilers produce warnings). For example, the conditional expression codice_207 might mistakenly be written as codice_208, which will be evaluated as codice_160 unless the value of codice_98 is codice_28 after the assignment.
The C operator precedence is not always intuitive. For example, the operator codice_185 binds more tightly than (is executed prior to) the operators codice_177 (bitwise AND) and codice_178 (bitwise OR) in expressions such as codice_215, which must be written as codice_216 if that is the coder's intent.
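The following sketch (illustrative only) shows both pitfalls: the equality operator where assignment could be written by mistake, and the need for parentheses because == binds more tightly than the bitwise operators:

#include <stdio.h>

int main(void)
{
    int x = 2;
    if (x == 1)            /* equality test; writing x = 1 here would assign instead */
        puts("x is one");

    unsigned flags = 0x06;
    /* == binds more tightly than &, so parentheses are required;
       flags & 0x02 == 0x02 would parse as flags & (0x02 == 0x02) */
    if ((flags & 0x02) == 0x02)
        puts("bit 1 is set");
    return 0;
}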
Data types.
The type system in C is static and weakly typed, which makes it similar to the type system of ALGOL descendants such as Pascal. There are built-in types for integers of various sizes, both signed and unsigned, floating-point numbers, and enumerated types (codice_76). Integer type codice_32 is often used for single-byte characters. C99 added a Boolean data type. There are also derived types including arrays, pointers, records (codice_33), and unions (codice_51).
C is often used in low-level systems programming where escapes from the type system may be necessary. The compiler attempts to ensure type correctness of most expressions, but the programmer can override the checks in various ways, either by using a "type cast" to explicitly convert a value from one type to another, or by using pointers or unions to reinterpret the underlying bits of a data object in some other way.
Some find C's declaration syntax unintuitive, particularly for function pointers. (Ritchie's idea was to declare identifiers in contexts resembling their use: "declaration reflects use".)
C's "usual arithmetic conversions" allow for efficient code to be generated, but can sometimes produce unexpected results. For example, a comparison of signed and unsigned integers of equal width requires a conversion of the signed value to unsigned. This can generate unexpected results if the signed value is negative.
Pointers.
C supports the use of pointers, a type of reference that records the address or location of an object or function in memory. Pointers can be "dereferenced" to access data stored at the address pointed to, or to invoke a pointed-to function. Pointers can be manipulated using assignment or pointer arithmetic. The run-time representation of a pointer value is typically a raw memory address (perhaps augmented by an offset-within-word field), but since a pointer's type includes the type of the thing pointed to, expressions including pointers can be type-checked at compile time. Pointer arithmetic is automatically scaled by the size of the pointed-to data type.
Pointers are used for many purposes in C. Text strings are commonly manipulated using pointers into arrays of characters. Dynamic memory allocation is performed using pointers; the result of a codice_221 is usually cast to the data type of the data to be stored. Many data types, such as trees, are commonly implemented as dynamically allocated codice_33 objects linked together using pointers. Pointers to other pointers are often used in multi-dimensional arrays and arrays of codice_33 objects. Pointers to functions ("function pointers") are useful for passing functions as arguments to higher-order functions (such as qsort or bsearch), in dispatch tables, or as callbacks to event handlers.
A "null pointer value" explicitly points to no valid location. Dereferencing a null pointer value is undefined, often resulting in a segmentation fault. Null pointer values are useful for indicating special cases such as no "next" pointer in the final node of a linked list, or as an error indication from functions returning pointers. In appropriate contexts in source code, such as for assigning to a pointer variable, a "null pointer constant" can be written as codice_28, with or without explicit casting to a pointer type, as the codice_225 macro defined by several standard headers or, since C23 with the constant codice_157. In conditional contexts, null pointer values evaluate to codice_156, while all other pointer values evaluate to codice_160.
Void pointers (codice_229) point to objects of unspecified type, and can therefore be used as "generic" data pointers. Since the size and type of the pointed-to object is not known, void pointers cannot be dereferenced, nor is pointer arithmetic on them allowed, although they can easily be (and in many contexts implicitly are) converted to and from any other object pointer type.
Careless use of pointers is potentially dangerous. Because they are typically unchecked, a pointer variable can be made to point to any arbitrary location, which can cause undesirable effects. Although properly used pointers point to safe places, they can be made to point to unsafe places by using invalid pointer arithmetic; the objects they point to may continue to be used after deallocation (dangling pointers); they may be used without having been initialized (wild pointers); or they may be directly assigned an unsafe value using a cast, union, or through another corrupt pointer. In general, C is permissive in allowing manipulation of and conversion between pointer types, although compilers typically provide options for various levels of checking. Some other programming languages address these problems by using more restrictive reference types.
Arrays.
Array types in C are traditionally of a fixed, static size specified at compile time. The more recent C99 standard also allows a form of variable-length arrays. However, it is also possible to allocate a block of memory (of arbitrary size) at run-time, using the standard library's codice_221 function, and treat it as an array.
Since arrays are always accessed (in effect) via pointers, array accesses are typically "not" checked against the underlying array size, although some compilers may provide bounds checking as an option. Array bounds violations are therefore possible and can lead to various repercussions, including illegal memory accesses, corruption of data, buffer overruns, and run-time exceptions.
C does not have a special provision for declaring multi-dimensional arrays, but rather relies on recursion within the type system to declare arrays of arrays, which effectively accomplishes the same thing. The index values of the resulting "multi-dimensional array" can be thought of as increasing in row-major order. Multi-dimensional arrays are commonly used in numerical algorithms (mainly from applied linear algebra) to store matrices. The structure of the C array is well suited to this particular task. However, in early versions of C the bounds of the array must be known fixed values or else explicitly passed to any subroutine that requires them, and dynamically sized arrays of arrays cannot be accessed using double indexing. (A workaround for this was to allocate the array with an additional "row vector" of pointers to the columns.) C99 introduced "variable-length arrays" which address this issue.
The following example using modern C (C99 or later) shows allocation of a two-dimensional array on the heap and the use of multi-dimensional array indexing for accesses (which can use bounds-checking on many C compilers):
int func(int N, int M)
{
    float (*p)[N][M] = malloc(sizeof *p);
    if (p == 0)
        return -1;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            (*p)[i][j] = i + j;
    print_array(N, M, p);
    free(p);
    return 1;
}
And here is a similar implementation using C99's "Auto VLA" feature:
int func(int N, int M)
{
    // Caution: checks should be made to ensure N*M*sizeof(float) does not exceed limitations for auto VLAs and is within the available stack size.
    float p[N][M];   // auto VLA is held on the stack, and sized when the function is invoked
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            p[i][j] = i + j;
    print_array(N, M, p);
    // no need to free(p) since it will disappear when the function exits, along with the rest of the stack frame
    return 1;
}
Array–pointer interchangeability.
The subscript notation codice_231 (where codice_232 designates a pointer) is syntactic sugar for codice_233. Taking advantage of the compiler's knowledge of the pointer type, the address that codice_234 points to is not the base address (pointed to by codice_232) incremented by codice_44 bytes, but rather is defined to be the base address incremented by codice_44 multiplied by the size of an element that codice_232 points to. Thus, codice_231 designates the codice_240th element of the array.
Furthermore, in most expression contexts (a notable exception is as operand of codice_130), an expression of array type is automatically converted to a pointer to the array's first element. This implies that an array is never copied as a whole when named as an argument to a function, but rather only the address of its first element is passed. Therefore, although function calls in C use pass-by-value semantics, arrays are in effect passed by reference.
The total size of an array codice_232 can be determined by applying codice_130 to an expression of array type. The size of an element can be determined by applying the operator codice_130 to any dereferenced element of an array codice_100, as in codice_246. Thus, the number of elements in a declared array codice_100 can be determined as codice_248. Note, that if only a pointer to the first element is available as it is often the case in C code because of the automatic conversion described above, the information about the full type of the array and its length are lost.
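For example (an illustrative sketch), the element count of a declared array can be computed with sizeof, while the same expression would not work on a plain pointer to the first element:

#include <stdio.h>

int main(void)
{
    double A[4] = {1.0, 2.0, 3.0, 4.0};

    /* sizeof the whole array divided by sizeof one element gives the element count,
       but only while the array type itself is visible */
    size_t n = sizeof A / sizeof A[0];
    printf("%zu elements\n", n);          /* prints 4 */

    double *p = A;                        /* the array name converts to a pointer to A[0] */
    printf("%f %f\n", p[2], *(p + 2));    /* p[2] is syntactic sugar for *(p + 2) */
    return 0;
}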
Memory management.
One of the most important functions of a programming language is to provide facilities for managing memory and the objects that are stored in memory. C provides three principal ways to allocate memory for objects: static allocation, in which space for the object is reserved at compile time and persists for the entire run of the program; automatic allocation, in which objects are created on the stack when a block is entered and released when it is exited; and dynamic allocation, in which blocks of memory of arbitrary size are requested at run time with library functions such as codice_221 and must be explicitly released.
These three approaches are appropriate in different situations and have various trade-offs. For example, static memory allocation has little allocation overhead, automatic allocation may involve slightly more overhead, and dynamic memory allocation can potentially have a great deal of overhead for both allocation and deallocation. The persistent nature of static objects is useful for maintaining state information across function calls, automatic allocation is easy to use but stack space is typically much more limited and transient than either static memory or heap space, and dynamic memory allocation allows convenient allocation of objects whose size is known only at run-time. Most C programs make extensive use of all three.
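A minimal sketch (illustrative only) showing all three kinds of allocation in one program:

#include <stdio.h>
#include <stdlib.h>

static int counter;                          /* static allocation: exists for the whole program run */

int main(void)
{
    int local = 42;                          /* automatic allocation: lives until main returns */

    int *heap = malloc(10 * sizeof *heap);   /* dynamic allocation: size chosen at run time */
    if (heap == NULL)                        /* failure is reported with a null pointer */
        return 1;
    heap[0] = counter + local;
    printf("%d\n", heap[0]);
    free(heap);                              /* must be released explicitly to avoid a memory leak */
    return 0;
}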
Where possible, automatic or static allocation is usually simplest because the storage is managed by the compiler, freeing the programmer of the potentially error-prone chore of manually allocating and releasing storage. However, many data structures can change in size at runtime, and since static allocations (and automatic allocations before C99) must have a fixed size at compile-time, there are many situations in which dynamic allocation is necessary. Prior to the C99 standard, variable-sized arrays were a common example of this. (See the article on C dynamic memory allocation for an example of dynamically allocated arrays.) Unlike automatic allocation, which can fail at run time with uncontrolled consequences, the dynamic allocation functions return an indication (in the form of a null pointer value) when the required storage cannot be allocated. (Static allocation that is too large is usually detected by the linker or loader, before the program can even begin execution.)
Unless otherwise specified, static objects contain zero or null pointer values upon program startup. Automatically and dynamically allocated objects are initialized only if an initial value is explicitly specified; otherwise they initially have indeterminate values (typically, whatever bit pattern happens to be present in the storage, which might not even represent a valid value for that type). If the program attempts to access an uninitialized value, the results are undefined. Many modern compilers try to detect and warn about this problem, but both false positives and false negatives can occur.
Heap memory allocation has to be synchronized with its actual usage in any program to be reused as much as possible. For example, if the only pointer to a heap memory allocation goes out of scope or has its value overwritten before it is deallocated explicitly, then that memory cannot be recovered for later reuse and is essentially lost to the program, a phenomenon known as a "memory leak." Conversely, it is possible for memory to be freed, but is referenced subsequently, leading to unpredictable results. Typically, the failure symptoms appear in a portion of the program unrelated to the code that causes the error, making it difficult to diagnose the failure. Such issues are ameliorated in languages with automatic garbage collection.
Libraries.
The C programming language uses libraries as its primary method of extension. In C, a library is a set of functions contained within a single "archive" file. Each library typically has a header file, which contains the prototypes of the functions contained within the library that may be used by a program, and declarations of special data types and macro symbols used with these functions. For a program to use a library, it must include the library's header file, and the library must be linked with the program, which in many cases requires compiler flags (e.g., codice_252, shorthand for "link the math library").
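For example (an illustrative sketch), a program using the math library includes its header for the declarations and is linked against the library itself, on many Unix-like systems with a command along the lines of cc prog.c -lm:

#include <stdio.h>
#include <math.h>      /* the header supplies the prototype of sqrt; the library supplies the code */

int main(void)
{
    printf("%f\n", sqrt(2.0));
    return 0;
}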
The most common C library is the C standard library, which is specified by the ISO and ANSI C standards and comes with every C implementation (implementations which target limited environments such as embedded systems may provide only a subset of the standard library). This library supports stream input and output, memory allocation, mathematics, character strings, and time values. Several separate standard headers (for example, codice_11) specify the interfaces for these and other standard library facilities.
Another common set of C library functions are those used by applications specifically targeted for Unix and Unix-like systems, especially functions which provide an interface to the kernel. These functions are detailed in various standards such as POSIX and the Single UNIX Specification.
Since many programs have been written in C, there are a wide variety of other libraries available. Libraries are often written in C because C compilers generate efficient object code; programmers then create interfaces to the library so that the routines can be used from higher-level languages like Java, Perl, and Python.
File handling and streams.
File input and output (I/O) is not part of the C language itself but instead is handled by libraries (such as the C standard library) and their associated header files (e.g. codice_11). File handling is generally implemented through high-level I/O which works through streams. A stream is from this perspective a data flow that is independent of devices, while a file is a concrete device. The high-level I/O is done through the association of a stream to a file. In the C standard library, a buffer (a memory area or queue) is temporarily used to store data before it is sent to the final destination. This reduces the time spent waiting for slower devices, for example a hard drive or solid-state drive. Low-level I/O functions are not part of the standard C library but are generally part of "bare metal" programming (programming that is independent of any operating system such as most embedded programming). With few exceptions, implementations include low-level I/O.
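A short illustrative sketch (using a hypothetical file name) of stream-based file output through the standard library:

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("example.txt", "w");   /* associates a stream with a file */
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "hello, file\n");           /* output passes through the stream's buffer */
    fclose(f);                             /* flushes the buffer and dissociates the stream */
    return 0;
}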
Language tools.
A number of tools have been developed to help C programmers find and fix statements with undefined behavior or possibly erroneous expressions, with greater rigor than that provided by the compiler.
Automated source code checking and auditing tools exist, such as Lint. A common practice is to use Lint to detect questionable code when a program is first written. Once a program passes Lint, it is then compiled using the C compiler. Also, many compilers can optionally warn about syntactically valid constructs that are likely to actually be errors. MISRA C is a proprietary set of guidelines to avoid such questionable code, developed for embedded systems.
There are also compilers, libraries, and operating system level mechanisms for performing actions that are not a standard part of C, such as bounds checking for arrays, detection of buffer overflow, serialization, dynamic memory tracking, and automatic garbage collection.
Memory management checking tools like Purify or Valgrind and linking with libraries containing special versions of the memory allocation functions can help uncover runtime errors in memory usage.
Uses.
Rationale for use in systems programming.
C is widely used for systems programming in implementing operating systems and embedded system applications. This is for several reasons:
Used for computationally-intensive libraries.
C enables programmers to create efficient implementations of algorithms and data structures, because the layer of abstraction from hardware is thin, and its overhead is low, an important criterion for computationally intensive programs. For example, the GNU Multiple Precision Arithmetic Library, the GNU Scientific Library, Mathematica, and MATLAB are completely or partially written in C. Many languages support calling library functions in C, for example, the Python-based framework NumPy uses C for the high-performance and hardware-interacting aspects.
Games.
Computer games are often built from a combination of languages. C has featured significantly, especially for those games attempting to obtain best performance from computer platforms. Examples include Doom from 1993.
C as an intermediate language.
C is sometimes used as an intermediate language by implementations of other languages. This approach may be used for portability or convenience; by using C as an intermediate language, additional machine-specific code generators are not necessary. C has some features, such as line-number preprocessor directives and optional superfluous commas at the end of initializer lists, that support compilation of generated code. However, some of C's shortcomings have prompted the development of other C-based languages specifically designed for use as intermediate languages, such as C--. Also, contemporary major compilers GCC and LLVM both feature an intermediate representation that is not C, and those compilers support front ends for many languages including C.
Other languages written in C.
A consequence of C's wide availability and efficiency is that compilers, libraries and interpreters of other programming languages are often implemented in C. For example, the reference implementations of Python, Perl, Ruby, and PHP are written in C.
Once used for web development.
Historically, C was sometimes used for web development using the Common Gateway Interface (CGI) as a "gateway" for information between the web application, the server, and the browser. C may have been chosen over interpreted languages because of its speed, stability, and near-universal availability. It is no longer common practice for web development to be done in C, and many other web development languages are popular. Applications where C-based web development continues include the HTTP configuration pages on routers, IoT devices and similar, although even here some projects have parts in higher-level languages e.g. the use of Lua within OpenWRT.
Web servers.
The two most popular web servers, Apache HTTP Server and Nginx, are both written in C. These web servers interact with the operating system, listen on TCP ports for HTTP requests, and then serve up static web content, or cause the execution of other languages handling to 'render' content such as PHP, which is itself primarily written in C. C's close-to-the-metal approach allows for the construction of these high-performance software systems.
End-user applications.
C has also been widely used to implement end-user applications. However, such applications can also be written in newer, higher-level languages.
Limitations.
Ritchie himself joked about the deficiencies of the language he created:
While C has been popular, influential and hugely successful, it has drawbacks, including:
For some purposes, restricted styles of C have been adopted, e.g. MISRA C or CERT C, in an attempt to reduce the opportunity for bugs. Databases such as CWE attempt to catalogue the ways in which C and other languages are vulnerable, along with recommendations for mitigation.
There are tools that can mitigate against some of the drawbacks. Contemporary C compilers include checks which may generate warnings to help identify many potential bugs.
Related languages.
Many languages developed after C were influenced by and borrowed aspects of C, including C++, C#, C shell, D, Go, Java, JavaScript, Julia, Limbo, LPC, Objective-C, Perl, PHP, Python, Ruby, Rust, Swift, Verilog and SystemVerilog. Some claim that the most pervasive influence has been syntactical: these languages combine the statement and expression syntax of C with type systems, data models and large-scale program structures that differ from those of C, sometimes radically.
Several C or near-C interpreters exist, including Ch and CINT, which can also be used for scripting.
When object-oriented programming languages became popular, C++ and Objective-C were two different extensions of C that provided object-oriented capabilities. Both languages were originally implemented as source-to-source compilers; source code was translated into C, and then compiled with a C compiler.
The C++ programming language (originally named "C with Classes") was devised by Bjarne Stroustrup as an approach to providing object-oriented functionality with a C-like syntax. C++ adds greater typing strength, scoping, and other tools useful in object-oriented programming, and permits generic programming via templates. Nearly a superset of C, C++ now supports most of C, with a few exceptions.
Objective-C was originally a thin layer on top of C, and remains a strict superset of C that permits object-oriented programming using a hybrid dynamic/static typing paradigm. Objective-C derives its syntax from both C and Smalltalk: syntax that involves preprocessing, expressions, function declarations, and function calls is inherited from C, while the syntax for object-oriented features was originally taken from Smalltalk.
In addition to C++ and Objective-C, Ch, Cilk, and Unified Parallel C are nearly supersets of C.
Castle of the Winds
Castle of the Winds is a tile-based roguelike video game for Microsoft Windows. It was developed by Rick Saada in 1989 and distributed by Epic MegaGames in 1993. The game was released around 1998 as a freeware download by the author. Though it is secondary to its hack and slash gameplay, "Castle of the Winds" has a plot loosely based on Norse mythology, told with setting changes, unique items, and occasional passages of text. The game is composed of two parts: A Question of Vengeance, released as shareware, and Lifthransir's Bane, sold commercially. A combined license for both parts was also sold.
Gameplay.
The game differs from most roguelikes in a number of ways. Its interface is mouse-dependent, but supports keyboard shortcuts (such as 'g' to get an item). "Castle of the Winds" also allows the player to restore saved games after dying.
The game favors the use of magic in combat, as spells are the only weapons that work from a distance. The player character automatically gains a spell with each experience level, and can permanently gain others using corresponding books, until all thirty spells available are learned. There are two opposing pairs of elements: cold vs. fire and lightning vs. acid/poison. Spells are divided into six categories: attack, defense, healing, movement, divination, and miscellaneous.
"Castle of the Winds" possesses an inventory system that limits a player's load based on weight and bulk, rather than by number of items. It allows the character to use different containers, including packs, belts, chests, and bags. Other items include weapons, armor, protective clothing, purses, and ornamental jewellery. Almost every item in the game can be normal, cursed, or enchanted, with curses and enchantments working in a manner similar to "NetHack". Although items do not break with use, they may already be broken or rusted when found. Most objects that the character currently carries can be renamed.
Wherever the player goes before entering the dungeon, there is always a town which offers the basic services of a temple for healing and curing curses, a junk store where anything can be sold for a few copper coins, a sage who can identify items and (from the second town onwards) a bank for storing the total capacity of coins to lighten the player's load. Other services that differ and vary in what they sell are outfitters, weaponsmiths, armoursmiths, magic shops and general stores.
The game tracks how much time has been spent playing the game. Although story events are not triggered by the passage of time, it does determine when merchants rotate their stock. Victorious players are listed as "Valhalla's Champions" in the order of time taken, from fastest to slowest. If the player dies, they are still put on the list, but are categorized as "Dead", with their experience point total listed as at the final killing blow. The amount of time spent also determines the difficulty of the last boss.
Plot.
The player begins in a tiny hamlet, near which they used to live. Their farm has been destroyed and godparents killed. After clearing out an abandoned mine, the player finds a scrap of parchment that reveals the death of the player's godparents was ordered by an unknown enemy. The player then returns to the hamlet to find it pillaged, and decides to travel to Bjarnarhaven.
Once in Bjarnarhaven, the player explores the levels beneath a nearby fortress, eventually facing Hrungnir, the Hill Giant Lord, responsible for ordering the player's godparents' death. Hrungnir carries the Enchanted Amulet of Kings. Upon activating the amulet, the player is informed of their past by their dead father, after which the player is transported to the town of Crossroads, and "Part I" ends. The game can be imported or started over in "Part II".
The town of Crossroads is run by a Jarl who at first does not admit the player, but later (on up to three occasions) provides advice and rewards. The player then enters the nearby ruined titular Castle of the Winds. There the player meets his/her deceased grandfather, who instructs them to venture into the dungeons below, defeat Surtur, and reclaim their birthright. Venturing deeper, the player encounters monsters run rampant, a desecrated crypt, a necromancer, and the installation of various special rooms for elementals. The player eventually meets and defeats the Wolf-Man leader, Bear-Man leader, the four Jotun kings, a Demon Lord, and finally Surtur. Upon defeating Surtur and escaping the dungeons, the player sits upon the throne, completing the game.
Development.
Inspired by his love of RPGs and while learning Windows programming in the 1980s, Rick Saada designed and completed "Castle of the Winds". The game sold 13,500 copies. By 1998, Saada decided to distribute the entirety of "Castle of the Winds" free of charge.
The game is public domain per Rick Saada's words:
Graphics.
All terrain tiles, some landscape features, all monsters and objects, and some spell/effect graphics take the form of Windows 3.1 icons and were done by Paul Canniff. Multi-tile graphics, such as ball spells and town buildings, are bitmaps included in the executable file. No graphics use colors other than the Windows-standard 16-color palette, plus transparency. They exist in monochrome versions as well, meaning that the game will display well on monochrome monitors.
The map view is identical to the playing-field view, except for scaling to fit on one screen. A simplified map view is available to improve performance on slower computers. The latter functionality also presents a cleaner display, as the aforementioned scaling routine does not always work correctly.
Reception.
"Computer Gaming World" rated the gameplay as good and the graphics simple but effective, while noticing the lack of audio, but regarded the game itself enjoyable.
Reformed Christianity
Reformed Christianity, also called Calvinism, is a major branch of Protestantism that began during the 16th-century Protestant Reformation. In the modern day, it is largely represented by the Continental Reformed, Presbyterian, and Congregational traditions, as well as parts of the Anglican (known as "Episcopal" in some regions), Baptist and Waldensian traditions, in addition to a minority of persons belonging to the Methodist faith (who are known as Calvinistic Methodists).
Reformed theology emphasizes the authority of the Bible and the sovereignty of God, as well as covenant theology, a framework for understanding the Bible based on God's covenants with people. Reformed churches emphasize simplicity in worship. Several forms of ecclesiastical polity are exercised by Reformed churches, including presbyterian, congregational, and some episcopal. Articulated by John Calvin, the Reformed faith holds to a spiritual (pneumatic) presence of Christ in the Lord's Supper.
Emerging in the 16th century, the Reformed tradition developed over several generations, especially in Switzerland, Scotland and the Netherlands. In the 17th century, Jacobus Arminius and the Remonstrants were expelled from the Dutch Reformed Church over disputes regarding predestination and salvation, and from that time Arminians are usually considered to be a distinct tradition from the Reformed. This dispute produced the Canons of Dort, the basis for the "doctrines of grace" also known as the "five points" of Calvinism.
Definition and terminology.
The term Reformed Christianity is derived from the denomination's self-designation "Reformed Church", which arose first in Switzerland and Germany and shortly thereafter in the Dutch Republic. "Calvinism" is the name derived from its most famous leader, John Calvin (born Jehan Cauvin), an influential Reformation-era theologian from Geneva, Switzerland. The term was first used by opposing Lutherans in the 1550s. Calvin did not approve of the use of this term, and religious scholars have argued its use is misleading, inaccurate, unhelpful, and "inherently distortive."
The definitions and boundaries of the terms "Reformed Christianity" and "Calvinism" are contested by scholars. As a historical movement, Reformed Christianity began during the Reformation with Huldrych Zwingli in Zürich, Switzerland. Following the failure of the Marburg Colloquy between Zwingli's followers and those of Martin Luther in 1529 to mediate disputes regarding the real presence of Christ in the Lord's Supper, Zwingli's followers were defined by their opposition to Lutherans (while Lutherans affirmed a corporeal presence of Christ in the Eucharist through a sacramental union, the Reformed came to hold a real spiritual presence of Christ in the Eucharist as propounded by Calvin and Bullinger). They also opposed Anabaptist radicals, thus remaining within the Magisterial Reformation. During the 17th-century Arminian Controversy, followers of Jacobus Arminius were forcibly removed from the Dutch Reformed Church for their views regarding predestination and salvation, and thenceforth Arminians would be considered outside the pale of Reformed orthodoxy, though some use the term "Reformed" to include Arminians while using the term "Calvinist" to exclude Arminians.
Reformed Christianity has historically included Anglicanism, the branch of Christianity originating in the Church of England. The Anglican confessions are considered Reformed Protestant, and leaders of the Protestant Reformation in England, such as Thomas Cranmer, the guiding Reformer who shaped Anglican theology, were influenced by and counted among Reformed (Calvinist) theologians. As with Lutheranism, the Church of England retained elements of Catholicism such as bishops and vestments, and was thus sometimes called "but halfly Reformed" or a middle way between Lutheranism and Reformed Christianity, being closer liturgically to the former and theologically aligned with the latter. Beginning in the 17th century, Anglicanism broadened to the extent that Reformed theology is no longer the sole dominant theology of Anglicanism.
Some scholars argue that the Particular Baptist (Reformed Baptist) strand of the Baptist tradition, who hold many of the same beliefs as Reformed Christians but not infant baptism, as expressed in the Second London Confession of Faith of 1689, should be considered part of Reformed Christianity, though this might not have been the view of early Reformed theologians. Others disagree, asserting that any type of Baptist should be considered separate from the Reformed branch of Christianity.
History.
The first wave of Reformed theologians included Zwingli, Martin Bucer, Wolfgang Capito, John Oecolampadius, and Guillaume Farel. While from diverse academic backgrounds, their work already contained key themes within Reformed theology, especially the priority of scripture as a source of authority. Scripture was also viewed as a unified whole, which led to a covenantal theology of the sacraments of baptism and the Lord's Supper as visible signs of the covenant of grace. Another shared perspective was their denial of the real presence of Christ in the Eucharist. Each understood salvation to be by grace alone and affirmed a doctrine of unconditional election, the teaching that some people are chosen by God to be saved. Luther and his successor Philipp Melanchthon were significant influences on these theologians and, to a larger extent, those who followed. The doctrine of justification by faith alone, also known as "sola fide", was a direct inheritance from Luther.
The second generation featured John Calvin, Heinrich Bullinger, Thomas Cranmer, Wolfgang Musculus, Peter Martyr Vermigli, Andreas Hyperius and John à Lasco. Written between 1536 and 1539, Calvin's "Institutes of the Christian Religion" was one of the most influential works of the era. Toward the middle of the 16th century, these beliefs were formed into one consistent creed which would shape the future definition of the Reformed faith. The 1549 "Consensus Tigurinus" unified Zwingli and Bullinger's memorialist theology of the Eucharist, which taught that it was simply a reminder of Christ's death, with Calvin's view of it as a means of grace with Christ actually present, though spiritually rather than bodily as in Catholic doctrine. The document demonstrates the diversity as well as unity in early Reformed theology, giving it a stability that enabled it to spread rapidly throughout Europe. This stands in marked contrast to the bitter controversy experienced by Lutherans prior to the 1577 Formula of Concord.
Through Calvin's missionary work in France, his program of reform eventually reached the French-speaking provinces of the Netherlands. Calvinism was adopted in the Electorate of the Palatinate under Frederick III, which led to the formulation of the Heidelberg Catechism in 1563. This and the Belgic Confession were adopted as confessional standards in the first synod of the Dutch Reformed Church in 1571.
In 1573, William the Silent joined the Calvinist Church. Calvinism was declared the official religion of the Kingdom of Navarre by the queen regnant Jeanne d'Albret after her conversion in 1560. Leading divines, either Calvinist or those sympathetic to Calvinism, settled in England, including Bucer, Martyr, and John Łaski, as did John Knox in Scotland. During the First English Civil War, English and Scots Presbyterians produced the Westminster Confession, which became the confessional standard for Presbyterians in the English-speaking world. Having established itself in Europe, the movement continued to spread to areas including North America, South Africa and Korea. While Calvin did not live to see the foundation of his work grow into an international movement, his death allowed his ideas to spread far beyond their city of origin and their borders and to establish their own distinct character.
Spread.
Although much of Calvin's work was in Geneva, his publications spread his ideas of a correctly Reformed church to many parts of Europe. In Switzerland, some cantons are still Reformed, and some are Catholic. Calvinism became the dominant doctrine within the Church of Scotland (Presbyterian Church), the Dutch Republic and parts of Germany, especially those adjacent to the Netherlands in the Palatinate, Kassel, and Lippe, spread by Caspar Olevian and Zacharias Ursinus among others. Protected by the local nobility, Calvinism became a significant religion in eastern Hungary and Hungarian-speaking areas of Transylvania. There are about 3.5 million Hungarian Reformed people worldwide.
Calvinism also initially spread in Flanders, Wallonia, France, Lithuania, and Poland before being mostly erased during the Counter-Reformation. One of the most important Polish Reformed theologians was Łaski, who was also involved in organising churches in East Frisia and the Strangers' Church in London. Later, a faction called the Polish Brethren broke away from Calvinism on January 22, 1556, when Piotr of Goniądz, a Polish student, spoke out against the doctrine of the Trinity during the general synod of the Reformed churches of Poland held in the village of Secemin. Calvinism gained some popularity in Scandinavia, especially Sweden, but was rejected in favor of Lutheranism after the Synod of Uppsala in 1593.
Many 17th century European settlers in the Thirteen Colonies in British America were Calvinists, who emigrated because of arguments over church structure, including the Pilgrim Fathers. Others were forced into exile, including the French Huguenots. Dutch and French Calvinist settlers were also among the first European colonizers of South Africa, beginning in the 17th century, who became known as Boers or Afrikaners.
Sierra Leone was largely colonized by Calvinist settlers from Nova Scotia, many of whom were Black Loyalists who fought for the British Empire during the American War of Independence. John Marrant had organized a congregation there under the auspices of the Huntingdon Connection. Some of the largest Calvinist communions were started by 19th- and 20th-century missionaries. Especially large are those in Indonesia, Korea and Nigeria. In South Korea, where Presbyterianism is the largest Christian denomination, there are 20,000 Presbyterian congregations with about 9–10 million church members, scattered in more than 100 Presbyterian denominations.
Demography.
A 2011 report of the Pew Forum on Religion and Public Life estimates that members of Presbyterian or Reformed churches make up 7% of the estimated 801 million Protestants globally, or approximately 56 million people.
The broadly defined Reformed faith is much larger, however, as it also takes in Congregationalists (0.5%), most of the United and uniting churches (unions of different denominations) (7.2%) and most likely some of the other Protestant denominations (38.2%). All three are distinct categories from Presbyterian or Reformed (7%) in this report. The Reformed family of churches is one of the largest Christian denominations, representing 75 million believers worldwide.
According to "Global Christianity: A Guide to the World's Largest Religion from Afghanistan to Zimbabwe", in 2020, Presbyterian and Reformed Christians numbered around 65,446,000 people, or 0.8% of the world's population. Congregationalists were listed at 4,986,000, with 0.1% of the world's population. Therefore, the three branches of Reformed Christianity totaled 70,432,000 people, or 0.9% of the global population.
The survey also listed 77,792,000 members (1% of the world's population) in United Churches, the majority of which are formed by the merger of churches of the Reformed Tradition with churches of other branches of Protestantism.
World Communions.
The World Communion of Reformed Churches (WCRC), which includes some United Churches, has 80 million believers. The WCRC is the fourth-largest Christian communion in the world, after the Roman Catholic Church, the Eastern Orthodox Churches, and the Anglican Communion. Many conservative Reformed churches which are strongly Calvinistic formed the World Reformed Fellowship, which has about 70 member denominations. Most are not part of the WCRC because of its ecumenical orientation. The International Conference of Reformed Churches is another conservative association.
Theology.
Revelation and scripture.
Reformed theologians believe that God communicates knowledge of himself to people through the Word of God. People are not able to know anything about God except through this self-revelation. (With the exception of general revelation of God; "His invisible attributes, His eternal power and divine nature, have been clearly seen, being understood through what has been made, so that they are without excuse" (Romans 1:20).) Speculation about anything which God has not revealed through his Word is not warranted. The knowledge people have of God is different from that which they have of anything else because God is infinite, and finite people are incapable of comprehending an infinite being. While the knowledge revealed by God to people is never incorrect, it is also never comprehensive.
According to Reformed theologians, God's self-revelation is always through his son Jesus Christ, because Christ is the only mediator between God and people. Revelation of God through Christ comes through two basic channels. The first is creation and providence, which is God's creating and continuing to work in the world. This action of God gives everyone knowledge about God, but this knowledge is only sufficient to make people culpable for their sin; it does not include knowledge of the gospel. The second channel through which God reveals himself is redemption, which is the gospel of salvation from condemnation which is punishment for sin.
In Reformed theology, the Word of God takes several forms. Jesus Christ is the Word Incarnate. The prophecies about him said to be found in the Old Testament and the ministry of the apostles who saw him and communicated his message are also the Word of God. Further, the preaching of ministers about God is the very Word of God because God is considered to be speaking through them. God also speaks through human writers in the Bible, which is composed of texts set apart by God for self-revelation. Reformed theologians emphasize the Bible as a uniquely important means by which God communicates with people. People gain knowledge of God from the Bible which cannot be gained in any other way.
Reformed theologians affirm that the Bible is true, but differences emerge among them over the meaning and extent of its truthfulness. Conservative followers of the Princeton theologians take the view that the Bible is true and inerrant, or incapable of error or falsehood, in every place. This view is similar to that of Catholic orthodoxy as well as modern Evangelicalism. Another view, influenced by the teaching of Karl Barth and neo-orthodoxy, is found in the Presbyterian Church (U.S.A.)'s Confession of 1967. Those who take this view believe the Bible to be the primary source of our knowledge of God, but also that some parts of the Bible may be false, not witnesses to Christ, and not normative for the church. In this view, Christ is the revelation of God, and the scriptures witness to this revelation rather than being the revelation itself.
Covenant theology.
Reformed theologians use the concept of covenant to describe the way God enters into fellowship with people in history. The concept of covenant is so prominent in Reformed theology that Reformed theology as a whole is sometimes called "covenant theology". However, sixteenth- and seventeenth-century theologians developed a particular theological system called "covenant theology" or "federal theology" which many conservative Reformed churches continue to affirm. This framework orders God's life with people primarily in two covenants: the covenant of works and the covenant of grace.
The covenant of works is made with Adam and Eve in the Garden of Eden. The terms of the covenant are that God provides a blessed life in the garden on condition that Adam and Eve obey God's law perfectly. Because Adam and Eve broke the covenant by eating the forbidden fruit, they became subject to death and were banished from the garden. This sin was passed down to all mankind because all people are said to be in Adam as a covenantal or "federal" head. Federal theologians usually imply that Adam and Eve would have gained immortality had they obeyed perfectly.
A second covenant, called the covenant of grace, is said to have been made immediately following Adam and Eve's sin. In it, God graciously offers salvation from death on condition of faith in God. This covenant is administered in different ways throughout the Old and New Testaments, but retains the substance of being free of a requirement of perfect obedience.
Through the influence of Karl Barth, many contemporary Reformed theologians have discarded the covenant of works, along with other concepts of federal theology. Barth saw the covenant of works as disconnected from Christ and the gospel, and rejected the idea that God works with people in this way. Instead, Barth argued that God always interacts with people under the covenant of grace, and that the covenant of grace is free of all conditions whatsoever. Barth's theology and that which follows him has been called "mono covenantal" as opposed to the "bi-covenantal" scheme of classical federal theology. Conservative contemporary Reformed theologians, such as John Murray, have also rejected the idea of covenants based on law rather than grace. Michael Horton, however, has defended the covenant of works as combining principles of law and love.
God.
For the most part, the Reformed tradition did not modify the medieval consensus on the doctrine of God. God's character is described primarily using three adjectives: eternal, infinite, and unchangeable. Reformed theologians such as Shirley Guthrie have proposed that rather than conceiving of God in terms of his attributes and freedom to do as he pleases, the doctrine of God is to be based on God's work in history and his freedom to live with and empower people.
Reformed theologians have also traditionally followed the medieval tradition going back to before the early church councils of Nicaea and Chalcedon on the doctrine of the Trinity. God is affirmed to be one God in three persons: Father, Son, and Holy Spirit. The Son (Christ) is held to be eternally begotten by the Father and the Holy Spirit eternally proceeding from the Father and Son. However, contemporary theologians have been critical of aspects of Western views here as well. Drawing on the Eastern tradition, these Reformed theologians have proposed a "social trinitarianism" where the persons of the Trinity only exist in their life together as persons-in-relationship. Contemporary Reformed confessions such as the Barmen Confession and Brief Statement of Faith of the Presbyterian Church (USA) have avoided language about the attributes of God and have emphasized his work of reconciliation and empowerment of people. Feminist theologian Letty Russell used the image of partnership for the persons of the Trinity. According to Russell, thinking this way encourages Christians to interact in terms of fellowship rather than reciprocity. Conservative Reformed theologian Michael Horton, however, has argued that social trinitarianism is untenable because it abandons the essential unity of God in favor of a community of separate beings.
Christ and atonement.
Reformed theologians affirm the historic Christian belief that Christ is eternally one person with a divine and a human nature. Reformed Christians have especially emphasized that Christ truly became human so that people could be saved. Christ's human nature has been a point of contention between Reformed and Lutheran Christology. In accord with the belief that finite humans cannot comprehend infinite divinity, Reformed theologians hold that Christ's human body cannot be in multiple locations at the same time. Because Lutherans believe that Christ is bodily present in the Eucharist, they hold that Christ is bodily present in many locations simultaneously. For Reformed Christians, such a belief denies that Christ actually became human. Some contemporary Reformed theologians have moved away from the traditional language of one person in two natures, viewing it as unintelligible to contemporary people. Instead, theologians tend to emphasize Jesus's context and particularity as a first-century Jew.
John Calvin and many Reformed theologians who followed him describe Christ's work of redemption in terms of three offices: prophet, priest, and king. Christ is said to be a prophet in that he teaches perfect doctrine, a priest in that he intercedes to the Father on believers' behalf and offered himself as a sacrifice for sin, and a king in that he rules the church and fights on believers' behalf. The threefold office links the work of Christ to God's work in ancient Israel. Many, but not all, Reformed theologians continue to make use of the threefold office as a framework because of its emphasis on the connection of Christ's work to Israel. They have, however, often reinterpreted the meaning of each of the offices. For example, Karl Barth interpreted Christ's prophetic office in terms of political engagement on behalf of the poor.
Christians believe Jesus' death and resurrection make it possible for believers to receive forgiveness for sin and reconciliation with God through the atonement. Reformed Protestants generally subscribe to a particular view of the atonement called penal substitutionary atonement, which explains Christ's death as a sacrificial payment for sin. Christ is believed to have died in place of the believer, who is accounted righteous as a result of this sacrificial payment.
Sin.
In Christian theology, people are created good and in the image of God but have become corrupted by sin, which causes them to be imperfect and overly self-interested. Reformed Christians, following the tradition of Augustine of Hippo, believe that this corruption of human nature was brought on by Adam and Eve's first sin, a doctrine called original sin.
Although earlier Christian authors taught the elements of physical death, moral weakness, and a sin propensity within original sin, Augustine was the first Christian to add the concept of inherited guilt ("reatus") from Adam whereby every infant is born eternally damned and humans lack any residual ability to respond to God. Reformed theologians emphasize that this sinfulness affects all of a person's nature, including their will. This view, that sin so dominates people that they are unable to avoid sin, has been called total depravity. As a consequence, every one of Adam and Eve's descendants inherited a stain of corruption and depravity. This condition, innate to all humans, is known in Christian theology as "original sin".
Calvin thought original sin was "a hereditary corruption and depravity of our nature, extending to all the parts of the soul." Calvin asserted people were so warped by original sin that "everything which our mind conceives, meditates, plans, and resolves, is always evil." The depraved condition of every human being is not the result of sins people commit during their lives. Instead, before we are born, while we are in our mother's womb, "we are in God's sight defiled and polluted." Calvin thought people were justly condemned to hell because their corrupted state is "naturally hateful to God."
In colloquial English, the term "total depravity" can be easily misunderstood to mean that people are absent of any goodness or unable to do any good. However the Reformed teaching is actually that while people continue to bear God's image and may do things that appear outwardly good, their sinful intentions affect all of their nature and actions so that they are not pleasing to God.
Salvation.
Reformed theologians, along with other Protestants, believe salvation from punishment for sin is to be given to all those who have faith in Christ. Faith is not purely intellectual, but involves trust in God's promise to save. Protestants do not hold there to be any other requirement for salvation, but that faith alone is sufficient. However, this faith in the Lord Jesus is understood as one that effects obedience. In a commentary on Ezekiel 18, Calvin stated: "faith cannot justify when it is without works, because it is dead, and a mere fiction...Thus faith can be no more separated from works than the sun from his heat."
Justification is the part of salvation where God pardons the sin of those who believe in Christ. It is historically held by Protestants to be the most important article of Christian faith, though more recently it is sometimes given less importance out of ecumenical concerns. People are not on their own able to fully repent of their sin or prepare themselves to repent because of their sinfulness. Therefore, justification is held to arise solely from God's free and gracious act.
Sanctification is the part of salvation in which God makes believers holy, by enabling them to exercise greater love for God and for other people. The good works accomplished by believers as they are sanctified are considered to be the necessary outworking of the believer's salvation, though they do not cause the believer to be saved. Sanctification, like justification, is by faith, because doing good works is simply living as the child of God one has become.
Predestination.
Stemming from the theology of John Calvin, Reformed theologians teach that sin so affects human nature that people are unable even to exercise faith in Christ by their own will. While people are said to retain free will, in that they willfully sin, they are unable not to sin because of the corruption of their nature due to original sin. Reformed Christians believe that God predestined some people to be saved and others to eternal damnation. This choice by God to save some is held to be unconditional and not based on any characteristic or action on the part of the person chosen. The Calvinist view is opposed to the Arminian view that God's choice of whom to save is conditional or based on his foreknowledge of who would respond positively to God.
Karl Barth reinterpreted the doctrine of predestination to apply only to Christ. Individual people are only said to be elected through their being in Christ. Reformed theologians who followed Barth, including Jürgen Moltmann, Daniel Migliore, and Shirley Guthrie, have argued that the traditional Reformed concept of predestination is speculative and have proposed alternative models. These theologians claim that a properly trinitarian doctrine emphasizes God's freedom to love all people, rather than choosing some for salvation and others for damnation. God's justice towards and condemnation of sinful people is spoken of by these theologians as out of his love for them and a desire to reconcile them to himself.
Five Points of Calvinism.
Much attention surrounding Calvinism focuses on the "Five Points of Calvinism" (also called the "doctrines of grace"). The five points have been summarized under the acrostic TULIP. The five points are popularly said to summarize the Canons of Dort; however, there is no historical relationship between them, and some scholars argue that their language distorts the meaning of the Canons, Calvin's theology, and the theology of 17th-century Calvinistic orthodoxy, particularly in the language of total depravity and limited atonement. The five points were more recently popularized in the 1963 booklet "The Five Points of Calvinism Defined, Defended, Documented" by David N. Steele and Curtis C. Thomas. The origins of the five points and the acrostic are uncertain, but they appear to be outlined in the Counter Remonstrance of 1611, a lesser-known Reformed reply to the Arminians, which was written prior to the Canons of Dort. The acrostic was used by Cleland Boyd McAfee as early as circa 1905. An early printed appearance of the acrostic can be found in Loraine Boettner's 1932 book, "The Reformed Doctrine of Predestination".
Church.
Reformed Christians see the Christian Church as the community with which God has made the covenant of grace, a promise of eternal life and relationship with God. This covenant extends to those under the "old covenant" whom God chose, beginning with Abraham and Sarah. The church is conceived of as both invisible and visible. The invisible church is the body of all believers, known only to God. The visible church is the institutional body which contains both members of the invisible church as well as those who appear to have faith in Christ, but are not truly part of God's elect.
In order to identify the visible church, Reformed theologians have spoken of certain marks of the Church. For some, the only mark is the pure preaching of the gospel of Christ. Others, including John Calvin, also include the right administration of the sacraments. Others, such as those following the Scots Confession, include a third mark of rightly administered church discipline, or exercise of censure against unrepentant sinners. These marks allowed the Reformed to identify the church based on its conformity to the Bible rather than the magisterium or church tradition.
Worship.
Regulative principle of worship.
The regulative principle of worship is a teaching shared by some Calvinists and Anabaptists on how the Bible orders public worship. The substance of the doctrine regarding worship is that God institutes in the Scriptures everything he requires for worship in the Church and that everything else is prohibited. As the regulative principle is reflected in Calvin's own thought, it is driven by his evident antipathy toward the Roman Catholic Church and its worship practices, and it associates musical instruments with icons, which he considered violations of the Ten Commandments' prohibition of graven images.
On this basis, many early Calvinists also eschewed musical instruments and advocated a cappella exclusive psalmody in worship, though Calvin himself allowed other scriptural songs as well as psalms, and this practice typified Presbyterian worship and the worship of other Reformed churches for some time. The original Lord's Day service designed by John Calvin was a highly liturgical service with the Creed, Alms, Confession and Absolution, the Lord's supper, Doxologies, prayers, Psalms being sung, the Lord's Prayer being sung, and Benedictions.
Since the 19th century, however, some of the Reformed churches have modified their understanding of the regulative principle and make use of musical instruments, believing that Calvin and his early followers went beyond the biblical requirements and that such things are circumstances of worship requiring biblically rooted wisdom, rather than an explicit command. Despite the protestations of those who hold to a strict view of the regulative principle, today hymns and musical instruments are in common use, as are contemporary worship music styles with elements such as worship bands.
Sacraments.
The Westminster Confession of Faith limits the sacraments to baptism and the Lord's Supper. Sacraments are denoted "signs and seals of the covenant of grace." Westminster speaks of "a sacramental relation, or a sacramental union, between the sign and the thing signified; whence it comes to pass that the names and effects of the one are attributed to the other." Baptism is for infant children of believers as well as believers, as it is for all the Reformed except Baptists and some Congregationalists. Baptism admits the baptized into the visible church, and in it all the benefits of Christ are offered to the baptized. On the Lord's supper, the Westminster Confession takes a position between Lutheran sacramental union and Zwinglian memorialism: "the Lord's supper really and indeed, yet not carnally and corporally, but spiritually, receive and feed upon Christ crucified, and all benefits of his death: the body and blood of Christ being then not corporally or carnally in, with, or under the bread and wine; yet, as really, but spiritually, present to the faith of believers in that ordinance as the elements themselves are to their outward senses."
The 1689 London Baptist Confession of Faith does not use the term sacrament, but describes baptism and the Lord's supper as ordinances, as do most Baptists, Calvinist or otherwise. Baptism is only for those who "actually profess repentance towards God", and not for the children of believers. Baptists also insist on immersion or dipping, in contradistinction to other Reformed Christians. The Baptist Confession describes the Lord's supper as "the body and blood of Christ being then not corporally or carnally, but spiritually present to the faith of believers in that ordinance", similarly to the Westminster Confession. There is significant latitude in Baptist congregations regarding the Lord's supper, and many hold the Zwinglian view.
Logical order of God's decree.
There are two schools of thought regarding the logical order of God's decree to ordain the fall of man: supralapsarianism (from the Latin "supra", "above", here meaning "before", + "lapsus", "fall") and infralapsarianism (from the Latin "infra", "beneath", here meaning "after", + "lapsus", "fall"). The former view, sometimes called "high Calvinism", argues that the Fall occurred partly to facilitate God's purpose to choose some individuals for salvation and some for damnation. Infralapsarianism, sometimes called "low Calvinism", is the position that, while the Fall was indeed planned, it was not planned with reference to who would be saved.
Supralapsarianism is based on the belief that God chose which individuals to save logically prior to the decision to allow the race to fall and that the Fall serves as the means of realization of that prior decision to send some individuals to hell and others to heaven (that is, it provides the grounds of condemnation in the reprobate and the need for salvation in the elect). In contrast, infralapsarians hold that God planned the race to fall logically prior to the decision to save or damn any individuals because, it is argued, in order to be "saved", one must first need to be saved from something and therefore the decree of the Fall must precede predestination to salvation or damnation.
These two views vied with each other at the Synod of Dort, an international body representing Calvinist Christian churches from around Europe, and the judgments that came out of that council sided with infralapsarianism (Canons of Dort, First Point of Doctrine, Article 7). The Westminster Confession of Faith also teaches (in Hodge's words "clearly impl[ies]") the infralapsarian view, but is sensitive to those holding to supralapsarianism. The Lapsarian controversy has a few vocal proponents on each side today, but overall it does not receive much attention among modern Calvinists.
Branches.
The Reformed tradition is historically represented by the Continental, Presbyterian, Reformed Anglican, Congregationalist, and Reformed Baptist denominational families.
Reformed churches practice several forms of church government, primarily presbyterian and congregational, but some adhere to episcopal polity. The largest interdenominational association is the World Communion of Reformed Churches with more than 100 million members in 211 member denominations around the world. Smaller, conservative Reformed associations include the World Reformed Fellowship and the International Conference of Reformed Churches.
Continental.
"Continental" Reformed churches originate in continental Europe, a term used by English speakers to distinguish them from traditions from the British Isles. Many uphold the Helvetic Confessions and Heidelberg Catechism, which were adopted in Zurich and Heidelberg, respectively. In the United States, immigrants belonging to the continental Reformed churches joined the Dutch Reformed Church there, as well as the Anglican Church.
Presbyterian.
Presbyterian churches are named for their order of government by assemblies of elders, or "presbyters". They are especially influenced by John Knox, who brought Reformed theology and polity to the Church of Scotland after spending time on the continent in Calvin's Geneva. Presbyterians historically uphold the Westminster Confession of Faith.
Congregational.
Congregationalism originates in Puritanism, a sixteenth-century movement to reform the Church of England. Unlike the Presbyterians, Congregationalists consider the local church to be rightfully self-ruled by their own officers, not higher ecclesiastical courts. The Savoy Declaration, a revision of Westminster, is the primary confession of historic Congregationalism. Evangelical Congregationalists are internationally represented by the World Evangelical Congregational Fellowship. Christian denominations in the Congregationalist tradition include the United Church of Christ, the National Association of Congregational Christian Churches and the Conservative Congregational Christian Conference in the United States, Evangelical Congregational Church in Argentina and Evangelical Fellowship of Congregational Churches in the United Kingdom, among others.
Anglican.
Though Anglicanism today is often described as a separate branch from the Reformed, historic Anglicanism is a part of the wider Reformed tradition. The foundational documents of the Anglican church "express a theology in keeping with the Reformed theology of the Swiss and South German Reformation." The Most Rev. Peter Robinson, presiding bishop of the United Episcopal Church of North America, writes:
Baptist.
Reformed or Calvinistic Baptists, unlike other Reformed groups, exclusively practice believer's baptism. They observe a more congregational polity, taken from the Congregationalists. Their primary confession is the Second London Confession of Faith of 1689, a revision of the Savoy Declaration from the Congregationalists and of the Westminster Confession of Faith from the Presbyterians, but other Baptist confessions like the First London Confession are also used. Not all Baptists are Particular Baptists; in fact, the Baptist tradition did not begin as Particular Baptist but as General Baptist. Many Reformed Baptists accept Reformed theology, especially its soteriology, and a covenantal theology known as Baptist covenant theology.
Variants in Reformed theology.
Amyraldism.
Amyraldism (or sometimes Amyraldianism, also known as the School of Saumur, hypothetical universalism, post redemptionism, moderate Calvinism, or four-point Calvinism) is the belief that God, prior to his decree of election, decreed Christ's atonement for all alike if they believe, but seeing that none would believe on their own, he then elected those whom he will bring to faith in Christ, thereby preserving the Calvinist doctrine of unconditional election. The efficacy of the atonement remains limited to those who believe.
Named after its formulator Moses Amyraut, this doctrine is still viewed as a variety of Calvinism in that it maintains the particularity of sovereign grace in the application of the atonement. However, detractors like B. B. Warfield have termed it "an inconsistent and therefore unstable form of Calvinism."
Hyper-Calvinism.
Hyper-Calvinism is the belief that emphasizes God's sovereignty in election and salvation to such an extent that it rejects the responsibility of all people to "repent and believe" the gospel. This belief system became prominent among some of the early English Particular Baptists in the 18th century. Historically, it has been associated with theologians such as John Gill and Joseph Hussey who contributed to the development of its distinct views. This variant of Reformed Theology was opposed by ministers such as Andrew Fuller and missionaries such as William Carey who argued against the Hyper-Calvinistic mindset that "if God wants to save the heathen, He will do it without your help or mine."
The Westminster Confession of Faith says that the gospel is to be freely offered to sinners, and the Larger Catechism makes clear that the gospel is offered to the non-elect.
The term is also used as a pejorative and occasionally appears in both theological and secular controversial contexts. It usually connotes a negative opinion about some variety of theological determinism, predestination, or a version of Evangelical Christianity or Calvinism that is deemed by the critic to be unenlightened, harsh, or extreme.
Neo-Calvinism.
Neo-Calvinism, a form of Dutch Calvinism that began in the 1880s, is the movement initiated by the theologian and later Dutch prime minister Abraham Kuyper. James Bratt has identified a number of different types of Dutch Calvinism: the Seceders, split into the Reformed Church "West" and the Confessionalists, and the Neo-Calvinists, comprising the Positives and the Antithetical Calvinists. The Seceders were largely infralapsarian and the Neo-Calvinists usually supralapsarian.
Kuyper wanted to awaken the church from what he viewed as its pietistic slumber. He declared:
No single piece of our mental world is to be sealed off from the rest and there is not a square inch in the whole domain of human existence over which Christ, who is sovereign over all, does not cry: 'Mine!'
This refrain has become something of a rallying call for Neo-Calvinists.
Christian Reconstructionism.
Christian Reconstructionism is a fundamentalist Calvinist theonomic movement that has remained rather obscure. Founded by R. J. Rushdoony, the movement has had an important influence on the Christian Right in the United States. The movement peaked in the 1990s. However, it lives on in small denominations such as the Reformed Presbyterian Church in the United States and as a minority position in other denominations. Christian Reconstructionists are usually postmillennialists and followers of the presuppositional apologetics of Cornelius Van Til. They tend to support a decentralized political order resulting in laissez-faire capitalism.
New Calvinism.
New Calvinism is a growing perspective within conservative Evangelicalism that embraces the fundamentals of 16th century Calvinism while also trying to be relevant in the present day world. In March 2009, "Time" magazine described the New Calvinism as one of the "10 ideas changing the world". Some of the major figures who have been associated with the New Calvinism are John Piper, Mark Driscoll, Al Mohler, Mark Dever, C. J. Mahaney, and Tim Keller. New Calvinists have been criticized for blending Calvinist soteriology with popular Evangelical positions on the sacraments and continuationism and for rejecting tenets seen as crucial to the Reformed faith such as confessionalism and covenant theology.
Social and economic influences.
Calvin expressed himself on usury in a 1545 letter to a friend, Claude de Sachin, in which he criticized the use of certain passages of scripture invoked by people opposed to the charging of interest. He reinterpreted some of these passages, and suggested that others of them had been rendered irrelevant by changed conditions. He also dismissed the argument (based upon the writings of Aristotle) that it is wrong to charge interest for money because money itself is barren. He said that the walls and the roof of a house are barren, too, but it is permissible to charge someone for allowing him to use them. In the same way, money can be made fruitful.
He qualified his view, however, by saying that money should be lent to people in dire need without hope of interest, while a modest interest rate of 5% should be permitted in relation to other borrowers.
In "The Protestant Ethic and the Spirit of Capitalism", Max Weber wrote that capitalism in Northern Europe evolved when the Protestant (particularly Calvinist) ethic influenced large numbers of people to engage in work in the secular world, developing their own enterprises and engaging in trade and the accumulation of wealth for investment. In other words, the Protestant work ethic was an important force behind the unplanned and uncoordinated emergence of modern capitalism.
Expert researchers and authors have referred to the United States as a "Protestant nation" or "founded on Protestant principles," specifically emphasizing its Calvinist heritage.
Politics and society.
Calvin's concepts of God and man led to ideas which were gradually put into practice after his death, in particular in the fields of politics and society. After their fight for independence from Spain (1579), the Netherlands, under Calvinist leadership, granted asylum to religious minorities, including French Huguenots, English Independents (Congregationalists), and Jews from Spain and Portugal. The ancestors of the philosopher Baruch Spinoza were Portuguese Jews. Aware of the trial against Galileo, René Descartes lived in the Netherlands, out of reach of the Inquisition, from 1628 to 1649. Pierre Bayle, a Reformed Frenchman, also felt safer in the Netherlands than in his home country. He was the first prominent philosopher who demanded tolerance for atheists. Hugo Grotius (1583–1645) was able to publish a rather liberal interpretation of the Bible and his ideas about natural law in the Netherlands. Moreover, the Calvinist Dutch authorities allowed the printing of books that could not be published elsewhere, such as Galileo's "Discorsi" (1638).
Alongside the liberal development of the Netherlands came the rise of modern democracy in England and North America. In the Middle Ages, state and church had been closely connected. Martin Luther's doctrine of the two kingdoms separated state and church in principle. His doctrine of the priesthood of all believers raised the laity to the same level as the clergy. Going one step further, Calvin included elected laymen (church elders, presbyters) in his concept of church government. The Huguenots added synods whose members were also elected by the congregations. The other Reformed churches took over this system of church self-government, which was essentially a representative democracy. Baptists, Quakers, and Methodists are organized in a similar way. These denominations and the Anglican Church were influenced by Calvin's theology in varying degrees.
As another factor in the rise of democracy in the Anglo-American world, Calvin favored a mixture of democracy and aristocracy as the best form of government (mixed government). He appreciated the advantages of democracy. His political thought aimed to safeguard the rights and freedoms of ordinary men and women. In order to minimize the misuse of political power he suggested dividing it among several institutions in a system of checks and balances (separation of powers). Finally, Calvin taught that if worldly rulers rise up against God they should be put down. In this way, he and his followers stood in the vanguard of resistance to political absolutism and furthered the cause of democracy. The Congregationalists who founded Plymouth Colony (1620) and Massachusetts Bay Colony (1628) were convinced that the democratic form of government was the will of God. Enjoying self-rule, they practiced separation of powers. Rhode Island, Connecticut, and Pennsylvania, founded by Roger Williams, Thomas Hooker, and William Penn, respectively, combined democratic government with a limited freedom of religion that did not extend to Catholics (Congregationalism being the established, tax-supported religion in Connecticut). These colonies became safe havens for persecuted religious minorities, including Jews.
In England, the Baptists Thomas Helwys (c. 1575 – c. 1616) and John Smyth (c. 1554 – c. 1612) influenced the liberal political thought of the Presbyterian poet and politician John Milton (1608–1674) and of the philosopher John Locke (1632–1704), who in turn had both a strong impact on the political development in their home country (English Civil War of 1642–1651, Glorious Revolution of 1688) as well as in North America. The ideological basis of the American Revolution was largely provided by the radical Whigs, who had been inspired by Milton, Locke, James Harrington (1611–1677), Algernon Sidney (1623–1683), and other thinkers. The Whigs' "perceptions of politics attracted widespread support in America because they revived the traditional concerns of a Protestantism that had always verged on Puritanism". The United States Declaration of Independence, the United States Constitution and (American) Bill of Rights initiated a tradition of human and civil rights that continued in the French Declaration of the Rights of Man and of the Citizen and the constitutions of numerous countries around the world, e.g. Latin America, Japan, India, Germany, and other European countries. It is also echoed in the United Nations Charter and the Universal Declaration of Human Rights.
In the 19th century, churches based on or influenced by Calvin's theology became deeply involved in social reforms, e.g. the abolition of slavery (William Wilberforce, Harriet Beecher Stowe, Abraham Lincoln, and others), women's suffrage, and prison reforms. Members of these churches formed co-operatives to help the impoverished masses. The founders of the Red Cross Movement, including Henry Dunant, were Reformed Christians. Their movement also initiated the Geneva Conventions.
Throughout the world, the Reformed churches operate hospitals, homes for handicapped or elderly people, and educational institutions on all levels. For example, American Congregationalists founded Harvard University (1636), Yale University (1701), and about a dozen other colleges. A particular stream of influence of Calvinism concerns art: visual art cemented society in the first modern nation state, the Netherlands, and Neo-Calvinism also put much weight on this aspect of life, with Hans Rookmaaker as the most prolific example. In literature, the non-fiction of Marilynne Robinson argues for the modernity of Calvin's thinking, calling him a humanist scholar (p. 174, The Death of Adam).
Criticism.
Calvinist influence has not always been viewed as solely positive. The Boers and Afrikaner Calvinists combined ideas from Calvinism and Kuyperian theology to justify apartheid in South Africa. As late as 1974 the majority of the Dutch Reformed Church in South Africa was convinced that their theological stances (including the story of the Tower of Babel) could justify apartheid. In 1990 the Dutch Reformed Church document "Church and Society" maintained that although they were changing their stance on apartheid, they believed that within apartheid and under God's sovereign guidance, "...everything was not without significance, but was of service to the Kingdom of God." These views were not universal and were condemned by many Calvinists outside South Africa. Pressure from both outside and inside the Dutch Reformed Calvinist church helped reverse apartheid in South Africa.
Countable set
In mathematics, a set is countable if either it is finite or it can be put in one-to-one correspondence with the set of natural numbers. Equivalently, a set is "countable" if there exists an injective function from it into the natural numbers; this means that each element of the set may be associated with a unique natural number, or that the elements of the set can be counted one at a time, although the counting may never finish because the set has infinitely many elements.
In more technical terms, assuming the axiom of countable choice, a set is "countable" if its cardinality (the number of elements of the set) is not greater than that of the natural numbers. A countable set that is not finite is said to be countably infinite.
The concept is attributed to Georg Cantor, who proved the existence of uncountable sets, that is, sets that are not countable; for example the set of the real numbers.
A note on terminology.
Although the terms "countable" and "countably infinite" as defined here are quite common, the terminology is not universal. An alternative style uses "countable" to mean what is here called countably infinite, and "at most countable" to mean what is here called countable.
The terms "enumerable" and denumerable may also be used, e.g. referring to countable and countably infinite respectively, definitions vary and care is needed respecting the difference with recursively enumerable.
Definition.
A set formula_1 is "countable" if:
All of these definitions are equivalent.
A set formula_1 is "countably infinite" if:
A set is "uncountable" if it is not countable, i.e. its cardinality is greater than formula_3.
History.
In 1874, in his first set theory article, Cantor proved that the set of real numbers is uncountable, thus showing that not all infinite sets are countable. In 1878, he used one-to-one correspondences to define and compare cardinalities. In 1883, he extended the natural numbers with his infinite ordinals, and used sets of ordinals to produce an infinity of sets having different infinite cardinalities.
Introduction.
A "set" is a collection of "elements", and may be described in many ways. One way is simply to list all of its elements; for example, the set consisting of the integers 3, 4, and 5 may be denoted formula_28, called roster form. This is only effective for small sets, however; for larger sets, this would be time-consuming and error-prone. Instead of listing every single element, sometimes an ellipsis ("...") is used to represent many elements between the starting element and the end element in a set, if the writer believes that the reader can easily guess what ... represents; for example, formula_29 presumably denotes the set of integers from 1 to 100. Even in this case, however, it is still "possible" to list all the elements, because the number of elements in the set is finite. If we number the elements of the set 1, 2, and so on, up to formula_30, this gives us the usual definition of "sets of size formula_30".
Some sets are "infinite"; these sets have more than formula_30 elements where formula_30 is any integer that can be specified. (No matter how large the specified integer formula_30 is, such as formula_35, infinite sets have more than formula_30 elements.) For example, the set of natural numbers, denotable by formula_37, has infinitely many elements, and we cannot use any natural number to give its size. It might seem natural to divide the sets into different classes: put all the sets containing one element together; all the sets containing two elements together; ...; finally, put together all infinite sets and consider them as having the same size. This view works well for countably infinite sets and was the prevailing assumption before Georg Cantor's work. For example, there are infinitely many odd integers, infinitely many even integers, and also infinitely many integers overall. We can consider all these sets to have the same "size" because we can arrange things such that, for every integer, there is a distinct even integer:
$\ldots,\ -2 \rightarrow -4,\ -1 \rightarrow -2,\ 0 \rightarrow 0,\ 1 \rightarrow 2,\ 2 \rightarrow 4,\ \ldots$
or, more generally, $n \rightarrow 2n$ (see picture). What we have done here is arrange the integers and the even integers into a "one-to-one correspondence" (or "bijection"), which is a function that maps between two sets such that each element of each set corresponds to a single element in the other set. This mathematical notion of "size", cardinality, is that two sets are of the same size if and only if there is a bijection between them. We call all sets that are in one-to-one correspondence with the integers "countably infinite" and say they have cardinality $\aleph_0$.
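To make this correspondence concrete, here is a minimal Python sketch (illustrative only; the function names are invented for this example and are not part of the article) of the map between integers and even integers and its inverse:

```python
# Illustrative sketch: n -> 2n pairs every integer with a distinct even
# integer, and m -> m // 2 undoes it, so the two sets have the same
# cardinality in Cantor's sense.
def to_even(n: int) -> int:
    """Send the integer n to the even integer 2n."""
    return 2 * n

def from_even(m: int) -> int:
    """Inverse map: send the even integer m back to m / 2."""
    assert m % 2 == 0
    return m // 2

# Check the round trip on a sample of integers.
for n in range(-5, 6):
    assert from_even(to_even(n)) == n

print([(n, to_even(n)) for n in range(-3, 4)])
# [(-3, -6), (-2, -4), (-1, -2), (0, 0), (1, 2), (2, 4), (3, 6)]
```

The round-trip check is the computational analogue of the bijection argument: every integer hits exactly one even integer and can be recovered from it.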
Georg Cantor showed that not all infinite sets are countably infinite. For example, the real numbers cannot be put into one-to-one correspondence with the natural numbers (non-negative integers). The set of real numbers has a greater cardinality than the set of natural numbers and is said to be uncountable.
Formal overview.
By definition, a set $S$ is "countable" if there exists a bijection between $S$ and a subset of the natural numbers $\mathbb{N} = \{0, 1, 2, 3, \dots\}$. For example, for the set $S = \{a, b, c\}$, define the correspondence
$a \leftrightarrow 1,\ b \leftrightarrow 2,\ c \leftrightarrow 3$
Since every element of $S$ is paired with "precisely one" element of $\{1, 2, 3\}$, "and" vice versa, this defines a bijection, and shows that $S$ is countable. Similarly we can show all finite sets are countable.
As for the case of infinite sets, a set formula_1 is countably infinite if there is a bijection between formula_1 and all of formula_4. As examples, consider the sets formula_51, the set of positive integers, and formula_52, the set of even integers. We can show these sets are countably infinite by exhibiting a bijection to the natural numbers. This can be achieved using the assignments formula_53 and formula_54, so that
formula_55
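A minimal Python sketch of these correspondences (assuming the conventional assignments n ↦ n + 1 and n ↦ 2n for the formulas elided above; the code is illustrative only):

```python
# Illustrative only: the maps below are the conventional choices for pairing
# the natural numbers with the positive integers and with the even integers.
def to_positive(n):
    return n + 1        # 0 -> 1, 1 -> 2, 2 -> 3, ...

def to_even(n):
    return 2 * n        # 0 -> 0, 1 -> 2, 2 -> 4, ...

sample = range(8)
print([(n, to_positive(n)) for n in sample])
print([(n, to_even(n)) for n in sample])

# Each natural number receives a distinct partner, so restricted to any
# initial segment the assignments are one-to-one.
assert len({to_positive(n) for n in sample}) == len(sample)
assert len({to_even(n) for n in sample}) == len(sample)
```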
Every countably infinite set is countable, and every infinite countable set is countably infinite. Furthermore, any subset of the natural numbers is countable and, more generally, any subset of a countable set is countable.
The set of all ordered pairs of natural numbers (the Cartesian product of two sets of natural numbers, formula_56) is countably infinite, as can be seen by following a path like the one in the picture. The resulting mapping proceeds as follows:
formula_57
This mapping covers all such ordered pairs.
This form of triangular mapping recursively generalizes to formula_30-tuples of natural numbers, i.e., formula_59 where formula_23 and formula_30 are natural numbers, by repeatedly mapping the first two elements of an formula_30-tuple to a natural number. For example, formula_63 can be written as formula_64. Then formula_65 maps to 5, so formula_64 maps to formula_67, which in turn maps to 39. Since a different 2-tuple, that is a pair such as formula_69, maps to a different natural number, two formula_30-tuples that differ in even a single element are mapped to different natural numbers. This establishes an injection from the set of formula_30-tuples into the set of natural numbers formula_4. For the set of formula_30-tuples made by the Cartesian product of finitely many different countable sets, each element in each tuple corresponds to a natural number, so every tuple can be rewritten as an formula_30-tuple of natural numbers, and the same argument then applies.
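One common concrete choice for the pairing step is the Cantor pairing function, which reproduces the values 5 and 39 in the example above. The following Python sketch (illustrative only) uses it to collapse a tuple step by step:

```python
# A minimal sketch assuming the standard Cantor pairing function
# pair(x, y) = (x + y)(x + y + 1)/2 + y, one common way to realize the
# triangular mapping described above.
def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

def encode_tuple(t):
    """Collapse an n-tuple of natural numbers to a single natural number by
    repeatedly pairing its first two elements."""
    while len(t) > 1:
        t = (pair(t[0], t[1]),) + t[2:]
    return t[0]

assert pair(0, 2) == 5                                # (0, 2) -> 5, as in the text
assert encode_tuple((0, 2, 3)) == pair(5, 3) == 39    # (0, 2, 3) -> 39
```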
The set of all integers formula_73 and the set of all rational numbers formula_74 may intuitively seem much bigger than formula_4. But looks can be deceiving. If a pair is treated as the numerator and denominator of a vulgar fraction (a fraction in the form of formula_76 where formula_77 and formula_78 are integers), then for every positive fraction, we can come up with a distinct natural number corresponding to it. This representation also includes the natural numbers, since every natural number formula_30 is also a fraction formula_80. So we can conclude that there are exactly as many positive rational numbers as there are positive integers. This is also true for all rational numbers, as can be seen below.
Sometimes more than one mapping is useful: to show that a set formula_83 is countable, it is enough to map it one-to-one (by an injection) into another set formula_84 that is in turn mapped one-to-one into the set of natural numbers. For example, the set of positive rational numbers can easily be mapped one-to-one to the set of natural number pairs (2-tuples), because formula_87 maps to formula_88. Since the set of natural number pairs is mapped one-to-one (in fact, by a one-to-one correspondence, or bijection) to the set of natural numbers, as shown above, the set of positive rational numbers is proved to be countable.
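The composition of these two maps can be written out directly. The Python sketch below is illustrative only; it repeats the pairing function from the previous sketch and reduces each fraction to lowest terms so that every positive rational receives exactly one code.

```python
from fractions import Fraction

# A minimal sketch: inject the positive rationals into the natural numbers by
# composing "rational -> pair of naturals" with the pairing function above.
def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

def rational_to_natural(p, q):
    r = Fraction(p, q)                       # reduce to lowest terms so that
    return pair(r.numerator, r.denominator)  # equal rationals share one code

assert rational_to_natural(1, 2) == rational_to_natural(2, 4)   # same rational
assert rational_to_natural(1, 2) != rational_to_natural(2, 1)   # different ones
```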
With the foresight of knowing that there are uncountable sets, we can wonder whether or not this last result can be pushed any further. The answer is both "yes" and "no": we can extend it, but we need to assume a new axiom to do so.
For example, given countable sets formula_89, we first assign each element of each set a tuple, then we assign each tuple an index using a variant of the triangular enumeration we saw above:
formula_90
We need the axiom of countable choice to index "all" the sets formula_89 simultaneously.
The set of all finite-length sequences of natural numbers is also countable: it is the union of the length-1 sequences, the length-2 sequences, the length-3 sequences, and so on, each of which is a countable set (a finite Cartesian product). The set is thus a countable union of countable sets, which is countable by the previous theorem.
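The triangular indexing of a countable union can be made concrete with an illustrative Python sketch; it sidesteps the axiom of countable choice by assuming that each set formula_89 is supplied together with an explicit enumeration.

```python
from itertools import count, islice

def kth_element(iterable_factory, n, m):
    """Return element m of the n-th set, or None if that set has fewer elements.
    `iterable_factory(n)` must return a fresh enumeration of the n-th set."""
    return next(islice(iterable_factory(n), m, m + 1), None)

def enumerate_union(iterable_factory):
    """Diagonal (triangular) enumeration of the union of the countable sets
    A_0, A_1, A_2, ...: on diagonal d we visit the pairs (n, m) with n + m = d,
    so every element of every set is reached after finitely many steps."""
    seen = set()
    for d in count(0):
        for n in range(d + 1):
            x = kth_element(iterable_factory, n, d - n)
            if x is not None and x not in seen:
                seen.add(x)
                yield x

# Example: A_n = the positive multiples of n + 1; the union is all positive integers.
multiples = lambda n: (k * (n + 1) for k in count(1))
print(list(islice(enumerate_union(multiples), 10)))
```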
The set of all finite subsets of the natural numbers is likewise countable: the elements of any finite subset can be ordered into a finite sequence, and there are only countably many finite sequences, so there are also only countably many finite subsets.
These results follow from the equivalent definitions of a countable set in terms of injective and surjective functions.
Cantor's theorem asserts that if formula_83 is a set and formula_93 is its power set, i.e. the set of all subsets of formula_83, then there is no surjective function from formula_83 to formula_93. A proof is given in the article Cantor's theorem. As an immediate consequence of this and the Basic Theorem above, the power set of the set of natural numbers is uncountable.
For an elaboration of this result see Cantor's diagonal argument.
The set of real numbers is uncountable, and so is the set of all infinite sequences of natural numbers.
Minimal model of set theory is countable.
If there is a set that is a standard model (see inner model) of ZFC set theory, then there is a minimal standard model (see Constructible universe). The Löwenheim–Skolem theorem can be used to show that this minimal model is countable. The fact that the notion of "uncountability" makes sense even in this model, and in particular that this model "M" contains elements that are uncountable from the point of view of "M" but countable when viewed from outside the model, was seen as paradoxical in the early days of set theory; see Skolem's paradox for more.
The minimal standard model includes all the algebraic numbers and all effectively computable transcendental numbers, as well as many other kinds of numbers.
Total orders.
Countable sets can be totally ordered in various ways, for example:
Well orders (see also ordinal number): the usual order of the natural numbers (0, 1, 2, 3, ...), and the integers in the order (0, 1, 2, 3, ...; −1, −2, −3, ...).
Non-well orders: the usual order of the integers (..., −3, −2, −1, 0, 1, 2, 3, ...), and the usual order of the rational numbers (which cannot be explicitly written as an ordered list).
In both examples of well orders here, any subset has a "least element"; and in both examples of non-well orders, "some" subsets do not have a "least element".
This is the key definition that determines whether a total order is also a well order.
Cahn–Ingold–Prelog priority rules
In organic chemistry, the Cahn–Ingold–Prelog (CIP) sequence rules (also the CIP priority convention; named after Robert Sidney Cahn, Christopher Kelk Ingold, and Vladimir Prelog) are a standard process to completely and unequivocally name a stereoisomer of a molecule. The purpose of the CIP system is to assign an "R" or "S" descriptor to each stereocenter and an "E" or "Z" descriptor to each double bond so that the configuration of the entire molecule can be specified uniquely by including the descriptors in its systematic name. A molecule may contain any number of stereocenters and any number of double bonds, and each usually gives rise to two possible isomers. A molecule with "n" stereocenters will usually have 2^"n" stereoisomers and 2^("n"−1) diastereomers, each having an associated pair of enantiomers. The CIP sequence rules contribute to the precise naming of every stereoisomer of every organic molecule with all atoms of ligancy of fewer than 4 (but including ligancy of 6 as well, this term referring to the "number of neighboring atoms" bonded to a center).
The key article setting out the CIP sequence rules was published in 1966, and was followed by further refinements, before it was incorporated into the rules of the International Union of Pure and Applied Chemistry (IUPAC), the official body that defines organic nomenclature, in 1974. The rules have since been revised, most recently in 2013, as part of the IUPAC book Nomenclature of Organic Chemistry. The IUPAC presentation of the rules constitutes the official, formal standard for their use, and it notes that "the method has been developed to cover all compounds with ligancy up to 4... and... [extended to the case of] ligancy 6... [as well as] for all configurations and conformations of such compounds." Nevertheless, though the IUPAC documentation presents a thorough introduction, it includes the caution that "it is essential to study the original papers, especially the 1966 paper, before using the sequence rule for other than fairly simple cases."
A recent paper argues for changes to some of the rules (sequence rules 1b and 2) to address certain molecules for which the correct descriptors were unclear. However, a different problem remains: in rare cases, two different stereoisomers of the same molecule can have the same CIP descriptors, so the CIP system may not be able to unambiguously name a stereoisomer, and other systems may be preferable.
Steps for naming.
The steps for naming molecules using the CIP system are often presented as: (1) identification of the stereocenters and double bonds; (2) assignment of priorities to the groups attached to each stereocenter or double-bonded atom; and (3) assignment of the "R"/"S" and "E"/"Z" descriptors.
Assignment of priorities.
and "E"/"Z" descriptors are assigned by using a system for ranking priority of the groups attached to each stereocenter. This procedure, often known as "the sequence rules", is the heart of the CIP system. The overview in this section omits some rules that are needed only in rare cases.
Isotopes.
If two groups differ only in isotopes, then the larger atomic mass is used to set the priority.
Double and triple bonds.
If an atom, A, is double-bonded to another atom, B, then atom A should be treated as though it is "connected to the same atom twice": once to B itself and once to a phantom duplicate of B. An atom that is double-bonded has a higher priority than an atom that is single-bonded. When dealing with double-bonded priority groups, one is allowed to visit the same atom twice as one creates an arc.
When B is replaced with a list of attached atoms, A itself, but not its "phantom", is excluded in accordance with the general principle of not doubling back along a bond that has just been followed. A triple bond is handled the same way except that A and B are each connected to two phantom atoms of the other.
Geometrical isomers.
If two substituents on an atom are geometric isomers of each other, the "Z"-isomer has higher priority than the "E"-isomer. A stereoisomer that contains two higher priority groups on the same face of the double bond ("cis") is classified as "Z." The stereoisomer with two higher priority groups on opposite sides of a carbon-carbon double bond ("trans") is classified as "E."
Cyclic molecules.
To handle a molecule containing one or more cycles, one must first expand it into a tree (called a hierarchical digraph) by traversing bonds in all possible paths starting at the stereocenter. When the traversal encounters an atom through which the current path has already passed, a phantom atom is generated in order to keep the tree finite. A single atom of the original molecule may appear in many places (some as phantoms, some not) in the tree.
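For illustration, here is a minimal Python sketch of this expansion on a toy ring; the adjacency map, atom labels, and phantom labelling are assumptions made for the example and are not taken from the CIP papers.

```python
# A minimal sketch (not from the CIP papers) of expanding a ring-containing
# molecule into the hierarchical digraph described above.
def expand(molecule, atom, came_from=None, on_path=frozenset()):
    """Return a nested tree (atom, children).  Bonds are traversed in all
    directions except straight back along the bond just followed; when an
    atom already on the current path is met again (ring closure), a phantom
    node with no further substituents is created to keep the tree finite."""
    children = []
    for neighbour in molecule[atom]:
        if neighbour == came_from:
            continue                                         # do not double back
        if neighbour in on_path:
            children.append((neighbour + " (phantom)", []))  # ring closure
        else:
            children.append(expand(molecule, neighbour, atom, on_path | {atom}))
    return (atom, children)

# Toy three-membered ring C1-C2-C3-C1 with a substituent X on C1.
molecule = {
    "C1": ["C2", "C3", "X"],
    "C2": ["C1", "C3"],
    "C3": ["C1", "C2"],
    "X":  ["C1"],
}
print(expand(molecule, "C1"))
```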
Assigning descriptors.
Stereocenters: "R"/"S".
A chiral sp3-hybridized stereocenter bears four different substituents. All four substituents are assigned priorities based on their atomic numbers. After the substituents of a stereocenter have been assigned their priorities, the molecule is oriented in space so that the group with the lowest priority points away from the observer. If the substituents are numbered from 1 (highest priority) to 4 (lowest priority), then the sense of rotation of a curve passing through 1, 2 and 3 distinguishes the stereoisomers. In a configurational isomer, the lowest-priority group (most often hydrogen) is positioned behind the plane, on the hatched bond going away from the reader. An arc is drawn from the highest-priority group through the second to the group of third priority. An arc drawn clockwise gives the "rectus" ("R") assignment; an arc drawn counterclockwise gives the "sinister" ("S") assignment. The names are derived from the Latin for 'right' and 'left', respectively. When naming an organic isomer, the abbreviation for either the rectus or sinister assignment is placed in front of the name in parentheses. For example, 3-methyl-1-pentene with a rectus assignment is formatted as ("R")-3-methyl-1-pentene.
A practical method of determining whether an enantiomer is "R" or "S" is by using the right-hand rule: one wraps the molecule with the fingers in the direction of decreasing priority (1 → 2 → 3). If the thumb points in the direction of the fourth substituent, the enantiomer is "R"; otherwise, it is "S".
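One common computational shortcut, sketched below in Python under assumed coordinates, uses the sign of a scalar triple product of the substituent positions (sorted by CIP priority and taken relative to the stereocenter) to distinguish the two arrangements; this is a hedged illustration, not the formal CIP procedure.

```python
# Illustrative sketch: decide R/S from 3D geometry once priorities are known.
def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rs_descriptor(p1, p2, p3, p4):
    """p1..p4: positions of the substituents in decreasing CIP priority,
    given relative to the stereocenter.  Returns 'R' or 'S'."""
    volume = dot(sub(p1, p4), cross(sub(p2, p4), sub(p3, p4)))
    return "R" if volume < 0 else "S"

# Lowest priority pointing "down" (away from a viewer above); 1 -> 2 -> 3
# runs clockwise when seen from above, which is the R arrangement.
example = rs_descriptor((1.0, 0.0, 0.3),
                        (-0.5, -0.87, 0.3),
                        (-0.5, 0.87, 0.3),
                        (0.0, 0.0, -1.0))
assert example == "R"
```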
It is possible in rare cases that two substituents on an atom differ only in their absolute configuration ("R" or "S"). If the relative priorities of these substituents need to be established, "R" takes priority over "S". When this happens, the descriptor of the stereocenter is a lowercase letter ("r" or "s") instead of the uppercase letter normally used.
Double bonds: "E"/"Z".
For double-bonded molecules, Cahn–Ingold–Prelog priority rules (CIP rules) are followed to determine the priority of the substituents of the double bond. If both of the high-priority groups are on the same side of the double bond ("cis" configuration), then the stereoisomer is assigned the configuration "Z" ("zusammen", German word meaning "together"). If the high-priority groups are on opposite sides of the double bond ("trans" configuration), then the stereoisomer is assigned the configuration "E" ("entgegen", German word meaning "opposed").
Coordination compounds.
In some cases a stereogenic center is formed only through a non-covalent interaction; without that interaction, the compound is achiral, yet the configuration of such a center must still be specified. Some professionals have proposed a new rule to account for this. This rule states that "non-covalent interactions have a fictitious number between 0 and 1" when assigning priority. Compounds in which this occurs are referred to as coordination compounds.
Spiro compounds.
Some spiro compounds, for example the SDP ligands (("R")- and ("S")-7,7'-bis(diphenylphosphaneyl)-2,2',3,3'-tetrahydro-1,1'-spirobi[indene]), represent chiral, C2-symmetrical molecules where the rings lie approximately at right angles to each other and each molecule cannot be superposed on its mirror image. The spiro carbon, C, is a stereogenic centre, and priority can be assigned a>a′>b>b′, in which one ring (both give the same answer) contains atoms a and b adjacent to the spiro carbon, and the other contains a′ and b′. The configuration at C may then be assigned as for any other stereocentre.
Examples.
The following are examples of application of the nomenclature.
Describing multiple centers.
If a compound has more than one chiral stereocenter, each center is denoted by either "R" or "S". For example, ephedrine exists in (1"R",2"S") and (1"S",2"R") stereoisomers, which are distinct mirror-image forms of each other, making them enantiomers. This compound also exists as the two enantiomers written (1"R",2"R") and (1"S",2"S"), which are named pseudoephedrine rather than ephedrine. All four of these isomers are named 2-methylamino-1-phenyl-1-propanol in systematic nomenclature. However, ephedrine and pseudoephedrine are diastereomers, or stereoisomers that are not enantiomers because they are not related as mirror-image copies. Pseudoephedrine and ephedrine are given different names because, as diastereomers, they have different chemical properties, even for racemic mixtures of each.
More generally, for any pair of enantiomers, all of the descriptors are opposite: ("R","R") and ("S","S") are enantiomers, as are ("R","S") and ("S","R"). Diastereomers have at least one descriptor in common; for example ("R","S") and ("R","R") are diastereomers, as are ("S","R") and ("S","S"). This holds true also for compounds having more than two stereocenters: if two stereoisomers have at least one descriptor in common, they are diastereomers. If all the descriptors are opposite, they are enantiomers.
A meso compound is an achiral molecule, despite having two or more stereogenic centers. Because a meso compound is superposable on its mirror image, it reduces the number of stereoisomers predicted by the 2^"n" rule. This occurs because the molecule possesses an internal plane of symmetry (for meso-tartaric acid, it is apparent in the conformation obtained by rotation around the central carbon–carbon bond). One example is meso-tartaric acid, in which the ("R","S") form is the same as the ("S","R") form. In meso compounds the "R" and "S" stereocenters occur in symmetrically positioned pairs.
Relative configuration.
The relative configuration of two stereoisomers may be denoted by the descriptors "R" and "S" with an asterisk (*). ("R"*,"R"*) means two centers having identical configurations, ("R","R") or ("S","S"); ("R"*,"S"*) means two centers having opposite configurations, ("R","S") or ("S","R"). To begin, the lowest-numbered (according to IUPAC systematic numbering) stereogenic center is given the "R"* descriptor.
To designate two anomers the relative stereodescriptors alpha (α) and beta (β) are used. In the α anomer the "anomeric carbon atom" and the "reference atom" do have opposite configurations ("R","S") or ("S","R"), whereas in the β anomer they are the same ("R","R") or ("S","S").
Faces.
Stereochemistry also plays a role in assigning "faces" to trigonal molecules such as ketones. A nucleophile in a nucleophilic addition can approach the carbonyl group from two opposite sides or faces. When an achiral nucleophile attacks acetone, both faces are identical and there is only one reaction product. When the nucleophile attacks butanone, the faces are not identical ("enantiotopic") and a racemic product results. When the nucleophile is a chiral molecule, diastereoisomers are formed. When one face of a molecule is shielded by substituents or geometric constraints compared to the other face, the faces are called diastereotopic. The same rules that determine the stereochemistry of a stereocenter ("R" or "S") also apply when assigning the face of a molecular group. The faces are then called the "Re"-face and "Si"-face. In the example displayed on the right, the compound acetophenone is viewed from the "Re"-face. Hydride addition, as in a reduction process, from this side will form the ("S")-enantiomer, and attack from the opposite "Si"-face will give the ("R")-enantiomer. However, one should note that adding a chemical group to the prochiral center from the "Re"-face will not always lead to an ("S")-stereocenter, as the priority of the chemical group has to be taken into account. That is, the absolute stereochemistry of the product is determined on its own and not by considering which face it was attacked from. In the above-mentioned example, if chloride ("Z" = 17) were added to the prochiral center from the "Re"-face, this would result in an ("R")-enantiomer.
Celibacy
Celibacy (from Latin "caelibatus") is the state of voluntarily being unmarried, sexually abstinent, or both. It is often in association with the role of a religious official or devotee. In its narrow sense, the term "celibacy" is applied only to those for whom the unmarried state is the result of a sacred vow, act of renunciation, or religious conviction. In a wider sense, it is commonly understood to only mean abstinence from sexual activity.
Celibacy has existed in one form or another throughout history, in virtually all the major religions of the world, and views on it have varied. Classical Hindu culture encouraged asceticism and celibacy in the later stages of life, after one has met one's societal obligations. Jainism, on the other hand, preached complete celibacy even for young monks and considered celibacy to be an essential behavior to attain moksha. Buddhism is similar to Jainism in this respect. There were, however, significant cultural differences in the various areas where Buddhism spread, which affected the local attitudes toward celibacy. A somewhat similar situation existed in Japan, where the Shinto tradition also opposed celibacy. In most native African and Native American religious traditions, celibacy has been viewed negatively as well, although there were exceptions like periodic celibacy practiced by some Mesoamerican warriors.
The Romans viewed celibacy as an aberration and legislated fiscal penalties against it, with the exception of the Vestal Virgins, who took a 30-year vow of chastity in order to devote themselves to the study and correct observance of state rituals. In Christianity, celibacy means the promise to live either virginal or celibate in the future. Such a vow of celibacy has been normal for some centuries for Catholic priests, Catholic and Eastern Orthodox monks, and nuns. In addition, a promise or vow of celibacy may be made in the Anglican Communion and some Protestant churches or communities, such as the Shakers; for members of religious orders and religious congregations; and for hermits, consecrated virgins, and deaconesses. Judaism and Islam have denounced celibacy, as both religions emphasize marriage and family life; however, the priests of the Essenes, a Jewish sect during the Second Temple period, practised celibacy. Several hadiths indicate that the Islamic prophet Muhammad denounced celibacy.
Etymology.
The English word "celibacy" derives from the Latin "caelibatus", "state of being unmarried", from the Latin "caelebs", meaning "unmarried". This word is thought to derive from two Proto-Indo-European stems meaning "alone" and "living".
Abstinence and celibacy.
The words "abstinence" and "celibacy" are often used interchangeably, but are not necessarily the same thing. Sexual abstinence, also known as "continence", is abstaining from some or all aspects of sexual activity, often for some limited period of time, while celibacy may be defined as a voluntary religious vow not to marry or engage in sexual activity. Asexuality is commonly conflated with celibacy and sexual abstinence, but it is considered distinct from the two, as celibacy and sexual abstinence are behavioral and those who use those terms for themselves are generally motivated by factors such as an individual's personal or religious beliefs.
A. W. Richard Sipe, while focusing on the topic of celibacy in Catholicism, states that "the most commonly assumed definition of "celibate" is simply an unmarried or single person, and celibacy is perceived as synonymous with sexual abstinence or restraint." Sipe adds that even in the relatively uniform milieu of Catholic priests in the United States there seems to be "simply no clear operational definition of celibacy". Elizabeth Abbott commented on the terminology in her "A History of Celibacy" (2001) writing that she "drafted a definition of celibacy that discarded the rigidly pedantic and unhelpful distinctions between celibacy, chastity, and virginity".
The concept of "new" celibacy was introduced by Gabrielle Brown in her 1980 book "The New Celibacy". In a revised version (1989) of her book, she claims abstinence to be "a response on the outside to what's going on, and celibacy is a response from the inside". According to her definition, celibacy (even short-term celibacy that is pursued for non-religious reasons) is much more than not having sex. It is more intentional than abstinence, and its goal is personal growth and empowerment. Although Brown repeatedly states that celibacy is a matter of choice, she clearly suggests that those who do not choose this route are somehow missing out. This new perspective on celibacy is echoed by several authors, including Elizabeth Abbott, Wendy Keller, and Wendy Shalit.
Buddhism.
The rule of celibacy in the Buddhist religion, whether Mahayana or Theravada, has a long history. Celibacy was advocated as an ideal rule of life for all monks and nuns by Gautama Buddha, except in Japan where it is not strictly followed due to historical and political developments following the Meiji Restoration. In Japan, celibacy was an ideal among Buddhist clerics for hundreds of years. But violations of clerical celibacy were so common for so long that finally, in 1872, state laws made marriage legal for Buddhist clerics. Subsequently, ninety percent of Buddhist monks/clerics married. An example is Higashifushimi Kunihide, a prominent Buddhist priest of Japanese royal ancestry who was married and a father whilst serving as a monk for most of his lifetime.
Gautama, later known as the Buddha, is known for his renunciation of his wife, Princess Yasodharā, and son, Rahula. In order to pursue an ascetic life, he needed to renounce aspects of the impermanent world, including his wife and son. Later on both his wife and son joined the ascetic community and are mentioned in the Buddhist texts to have become enlightened.
Christianity.
There is no commandment in the New Testament that Jesus Christ's disciples have to live in celibacy. However, it is a general view that Christ himself lived a life of perfect chastity; thus, "Voluntary chastity is the imitation of him who was the virgin Son of a virgin Mother". One of his invocations is "King of virgins and lover of stainless chastity" "(Rex virginum, amator castitatis)".
Furthermore, Christ, when his disciples suggest it is "better not to marry," stated "Not everyone can accept this word, but only those to whom it has been given. For there are eunuchs who have been so from birth, and there are eunuchs who have been made eunuchs by others, and there are eunuchs who have made themselves eunuchs for the sake of the kingdom of heaven. Let anyone accept this who can" (Matthew 19:10-12, NRSV). While eunuchs were not generally celibate, over subsequent centuries this statement has come to be interpreted as referring to celibacy.
Paul the Apostle emphasized the importance of overcoming the desires of the flesh and saw the state of celibacy as superior to that of marriage. Paul made parallels between the relations between spouses and God's relationship with the church. "Husbands, love your wives even as Christ loved the church. Husbands should love their wives as their own bodies" (Ephesians 5:25–28). Paul himself was celibate and said that his wish was "that all of you were as I am" (1 Corinthians 7:7). In fact, this entire chapter endorses celibacy while also clarifying that marriage is acceptable as well.
The early Christians lived in the belief that the end of the world would soon come upon them, and saw no point in planning new families and having children. According to Chadwick, this was why Paul encouraged both celibate and marital lifestyles among the members of the Corinthian congregation, regarding celibacy as the preferable of the two.
In the counsels of perfection (evangelical counsels), which include chastity alongside poverty and obedience, Jesus is said to have "[given] the rule of the higher life, founded upon his own most perfect life", for those who seek "the highest perfection" and feel "called to follow Christ in this way"—i.e. through such "exceptional sacrifices".
A number of early Christian martyrs were women or girls who had given themselves to Christ in perpetual virginity, such as Saint Agnes and Saint Lucy. According to most Christian thought, the first sacred virgin was Mary, the mother of Jesus, who was consecrated by the Holy Spirit during the Annunciation. Tradition also has it that the Apostle Matthew consecrated virgins. In the Catholic Church and the Orthodox churches, a consecrated virgin is a woman who has been consecrated by the church to a life of perpetual virginity in the service of the church.
Desert Fathers.
The Desert Fathers were Christian hermits and ascetics who had a major influence on the development of Christianity and celibacy. Paul of Thebes is often credited with being the first hermit or anchorite to go to the desert, but it was Anthony the Great who launched the movement that became the Desert Fathers. Sometime around AD 270, Anthony heard a Sunday sermon stating that perfection could be achieved by selling all of one's possessions, giving the proceeds to the poor, and following Christ (Matthew 19:21). He followed the advice and made the further step of moving deep into the desert to seek complete solitude.
Over time, the model of Anthony and other hermits attracted many followers, who lived alone in the desert or in small groups. They chose a life of extreme asceticism, renouncing all the pleasures of the senses, rich food, baths, rest, and anything that made them comfortable. Thousands joined them in the desert, mostly men but also a handful of women. Religious seekers also began going to the desert seeking advice and counsel from the early Desert Fathers. By the time of Anthony's death, there were so many men and women living in the desert in celibacy that it was described as "a city" by Anthony's biographer.
The first Conciliar document on clerical celibacy of the Western Church (Synod of Elvira, can. xxxiii) states that the discipline of celibacy is to refrain from the use of marriage, i.e. refrain from having carnal contact with one's spouse.
According to the later St. Jerome (420), celibacy is a moral virtue, consisting of living in the flesh, but outside the flesh, and so being not corrupted by it ("vivere in carne praeter carnem"). Celibacy excludes not only libidinous acts, but also sinful thoughts or desires of the flesh. Jerome referred to marriage prohibition for priests when he claimed in "Against Jovinianus" that Peter and the other apostles had been married before they were called, but subsequently gave up their marital relations.
In the Catholic, Orthodox and Oriental Orthodox traditions, bishops are required to be celibate. In the Eastern Catholic and Orthodox traditions, priests and deacons are allowed to be married, yet have to remain celibate if they are unmarried at the time of ordination.
Augustinian view.
In the early Church, higher clerics lived in marriages. Augustine taught that the original sin of Adam and Eve was either an act of foolishness "(insipientia)" followed by pride and disobedience to God, or else inspired by pride. The first couple disobeyed God, who had told them not to eat of the tree of the knowledge of good and evil (Gen 2:17). The tree was a symbol of the order of creation. Self-centeredness made Adam and Eve eat of it, thus failing to acknowledge and respect the world as it was created by God, with its hierarchy of beings and values. They would not have fallen into pride and lack of wisdom, if Satan had not sown into their senses "the root of evil" "(radix mali)". Their nature was wounded by concupiscence or libido, which affected human intelligence and will, as well as affections and desires, including sexual desire.
The sin of Adam is inherited by all human beings. Already in his pre-Pelagian writings, Augustine taught that original sin was transmitted by concupiscence, which he regarded as the passion of both soul and body, making humanity a "massa damnata" (mass of perdition, condemned crowd) and much enfeebling, though not destroying, the freedom of the will.
In the early 3rd century, the Canons of the Apostolic Constitutions decreed that only lower clerics might still marry after their ordination, but marriage of bishops, priests, and deacons was not allowed.
After Augustine.
One explanation for the origin of obligatory celibacy is that it is based on the writings of Saint Paul, who wrote of the advantages celibacy allowed a man in serving the Lord. Celibacy was popularised by early Christian theologians such as Saint Augustine of Hippo and Origen. Another possible explanation for the origins of obligatory celibacy revolves around a more practical reason: "the need to avoid claims on church property by priests' offspring". It remains a matter of Canon Law (and often a criterion for certain religious orders, especially Franciscans) that priests may not own land and therefore cannot pass it on to legitimate or illegitimate children. The land belongs to the Church through the local diocese as administered by the Local Ordinary (usually a bishop), who is often an "ex officio" corporation sole. Celibacy is viewed differently by the Catholic Church and the various Protestant communities. It includes clerical celibacy, celibacy of the consecrated life and voluntary celibacy.
The Protestant Reformation rejected celibate life and sexual continence for preachers. Protestant celibate communities have emerged, especially from Anglican and Lutheran backgrounds. A few minor Christian sects advocate celibacy as a better way of life. These groups included the Shakers, the Harmony Society and the Ephrata Cloister.
Many evangelicals prefer the term "abstinence" to "celibacy". Assuming everyone will marry, they focus their discussion on refraining from premarital sex and focusing on the joys of a future marriage. But some evangelicals, particularly older singles, desire a positive message of celibacy that moves beyond the "wait until marriage" message of abstinence campaigns. They seek a new understanding of celibacy that is focused on God rather than a future marriage or a lifelong vow to the Church.
There are also many Pentecostal churches which practice celibate ministry. For instance, the full-time ministers of the Pentecostal Mission are celibate and generally single. Married couples who enter full-time ministry may become celibate and could be sent to different locations.
Catholic Church.
During the first three or four centuries, no law was promulgated prohibiting clerical marriage. Celibacy was a matter of choice for bishops, priests, and deacons.
Statutes forbidding clergy from having wives were written beginning with the Council of Elvira (306) but these early statutes were not universal and were often defied by clerics and then retracted by hierarchy. The Synod of Gangra (345) condemned a false asceticism whereby worshipers boycotted celebrations presided over by married clergy. The Apostolic Constitutions () excommunicated a priest or bishop who left his wife "under the pretense of piety" (Mansi, 1:51).
"A famous letter of Synesius of Cyrene () is evidence both for the respecting of personal decision in the matter and for contemporary appreciation of celibacy. For priests and deacons clerical marriage continued to be in vogue".
"The Second Lateran Council (1139) seems to have enacted the first written law making sacred orders a direct impediment to marriage for the universal Church." Celibacy was first required of some clerics in 1123 at the First Lateran Council. Because clerics resisted it, the celibacy mandate was restated at the Second Lateran Council (1139) and the Council of Trent (1545–64). In places, coercion and enslavement of clerical wives and children was apparently involved in the enforcement of the law. "The earliest decree in which the children [of clerics] were declared to be slaves and never to be enfranchised [freed] seems to have been a canon of the Synod of Pavia in 1018. Similar penalties were promulgated against wives and concubines (see the Synod of Melfi, 1189 can. xii), who by the very fact of their unlawful connexion with a subdeacon or clerk of higher rank became liable to be seized by the over-lord".
In the Roman Catholic Church, the Twelve Apostles are considered to have been the first priests and bishops of the Church. Some say the call to be eunuchs for the sake of Heaven in Matthew 19 was a call to be sexually continent and that this developed into celibacy for priests as the successors of the apostles. Others see the call to be sexually continent in Matthew 19 to be a caution for men who were too readily divorcing and remarrying.
The view of the Church is that celibacy is a reflection of life in Heaven, a source of detachment from the material world which aids in one's relationship with God. Celibacy is designed so that ministers "consecrate themselves with undivided heart to the Lord and to 'the affairs of the Lord'; they give themselves entirely to God and to men. It is a sign of this new life to the service of which the Church's minister is consecrated; accepted with a joyous heart celibacy radiantly proclaims the Reign of God." In contrast, Saint Peter, whom the Church considers its first Pope, was married, given that he had a mother-in-law whom Christ healed (Matthew 8). But some argue that Peter was a widower, since this passage does not mention his wife, and it is his mother-in-law who serves Christ and the apostles after she is healed. Furthermore, Peter himself states: "Then Peter spoke up, 'We have left everything to follow you!' 'Truly I tell you', Jesus replied, 'no one who has left home or brothers or sisters or mother or father or children or fields for me and the gospel will fail to receive a hundred times as much'" (Mark 10:28–30).
Usually, only celibate men are ordained as priests in the Latin Church. Married clergy who have converted from other Christian denominations can be ordained Roman Catholic priests without becoming celibate. Priestly celibacy is not "doctrine" of the Church (such as the belief in the Assumption of Mary) but a matter of discipline, like the use of the vernacular (local) language in Mass or Lenten fasting and abstinence. As such, it can theoretically change at any time though it still must be obeyed by Catholics until the change were to take place. The Eastern Catholic Churches ordain both celibate and married men. However, in both the East and the West, bishops are chosen from among those who are celibate. In Ireland, several priests have fathered children, the two most prominent being bishop Eamonn Casey and Michael Cleary.
The classical heritage flourished throughout the Middle Ages in both the Byzantine Greek East and the Latin West. When discerning the population of Christendom in medieval Europe during the Middle Ages, Will Durant, referring to Plato's ideal community, stated on the "oratores" (clergy):
"The clergy, like Plato's guardians, were placed in authority not by the suffrages of the people, but by their talent as shown in ecclesiastical studies and administration, by their disposition to a life of meditation and simplicity, and (perhaps it should be added) by the influence of their relatives with the powers of state and church. In the latter half of the period in which they ruled [AD 800 onwards], the clergy were as free from family cares as even Plato could desire; and in some cases it would seem they enjoyed no little of the reproductive freedom accorded to the guardians. Celibacy was part of the psychological structure of the power of the clergy; for on the one hand they were unimpeded by the narrowing egoism of the family, and on the other their apparent superiority to the call of the flesh added to the awe in which lay sinners held them and to the readiness of these sinners to bare their lives in the confessional."
With respect to clerical celibacy, Richard P. O'Brien stated in 1995, that in his opinion, "greater understanding of human psychology has led to questions regarding the impact of celibacy on the human development of the clergy. The realization that many non-European countries view celibacy negatively has prompted questions concerning the value of retaining celibacy as an absolute and universal requirement for ordained ministry in the Roman Catholic Church".
Celibate homosexual Christians.
Some homosexual Christians choose to be celibate following their denomination's teachings on homosexuality.
In 2014, the American Association of Christian Counselors amended its code of ethics to eliminate the promotion of conversion therapy for homosexuals and encouraged them to be celibate instead.
Hinduism.
In Hinduism, celibacy is usually associated with the "sadhus" ("holy men"), ascetics who withdraw from society and renounce all worldly ties. Celibacy, termed "brahmacharya" in Vedic scripture, is the fourth of the "yamas" and the word literally translated means "dedicated to the Divinity of Life". The word is often used in yogic practice to refer to celibacy or denying pleasure, but this is only a small part of what "brahmacharya" represents. The purpose of practicing "brahmacharya" is to keep a person focused on the purpose in life, the things that instill a feeling of peace and contentment. It is also used to cultivate occult powers and many supernatural feats, called siddhi.
In the religious movement of Brahma Kumaris, celibacy is also promoted for peace and to defeat power of lust.
Islam.
Islamic attitudes toward celibacy have been complex: Muhammad denounced it, yet some Sufi orders embrace it. Islam does not promote celibacy; rather it condemns premarital sex and extramarital sex. In fact, according to Islam, marriage enables one to attain the highest form of righteousness within this sacred spiritual bond but the Qur'an does not state it as an obligation. The Qur'an () states, "But the Monasticism which they (who followed Jesus) invented for themselves, We did not prescribe for them but only to please God therewith, but that they did not observe it with the right observance." Therefore, religion is clearly not a reason to stay unmarried although people are allowed to live their lives however they are comfortable; but relationships and sex outside of marriage, let alone forced marriage, is definitely a sin, "Oh you who believe! You are forbidden to inherit women against their will" (). In addition, marriage partners can be distractions from practicing religion at the same time, "Your mates and children are only a trial for you" () however that still does not mean Islam does not encourage people who have sexual desires and are willing to marry. Anyone who does not (intend to) get married in this life can always do it in the Hereafter instead.
Celibacy appears as a peculiarity among some Sufis.
Celibacy was practiced by women saints in Sufism. Celibacy was debated along with women's roles in Sufism in medieval times.
Celibacy, poverty, meditation, and mysticism within an ascetic context along with worship centered around saints' tombs were promoted by the Qadiri Sufi order among Hui Muslims in China. In China, unlike other Muslim sects, the leaders (Shaikhs) of the Qadiriyya Sufi order are celibate. Unlike other Sufi orders in China, the leadership within the order is not a hereditary position, rather, one of the disciples of the celibate Shaikh is chosen by the Shaikh to succeed him. The 92-year-old celibate Shaikh Yang Shijun was the leader of the Qadiriya order in China as of 1998.
Celibacy is practiced by Haydariya Sufi dervishes.
Zoroastrianism.
The Zoroastrian text Videvdad (4:47) praises a married man by saying: "[T]he man who has a wife is far above him who lives in continence."
Meher Baba.
The spiritual teacher Meher Baba stated that "[F]or the [spiritual] aspirant a life of strict celibacy is preferable to married life, if restraint comes to him easily without undue sense of self-repression. Such restraint is difficult for most persons and sometimes impossible, and for them married life is decidedly more helpful than a life of celibacy. For ordinary persons, married life is undoubtedly advisable unless they have a special aptitude for celibacy". Baba also asserted that "The value of celibacy lies in the habit of restraint and the sense of detachment and independence which it gives" and that "The aspirant must choose one of the two courses which are open to him. He must take to the life of celibacy or to the married life, and he must avoid at all costs a cheap compromise between the two. Promiscuity in sex gratification is bound to land the aspirant in a most pitiful and dangerous chaos of ungovernable lust."
Ancient Greece and Rome.
In Sparta and many other Greek cities, failure to marry was grounds for loss of citizenship, and could be prosecuted as a crime. Both Cicero and Dionysius of Halicarnassus stated that Roman law forbade celibacy. There are no records of such a prosecution, nor is the Roman punishment for refusing to marry known.
Pythagoreanism was the system of esoteric and metaphysical beliefs held by Pythagoras and his followers. Pythagorean thinking was dominated by a profoundly mystical view of the world. The Pythagorean code further restricted its members from eating meat, fish, and beans, a practice they followed for religious, ethical and ascetic reasons, in particular the idea of metempsychosis – the transmigration of souls into the bodies of other animals.
"Pythagoras himself established a small community that set a premium on study, vegetarianism, and sexual restraint or abstinence. Later philosophers believed that celibacy would be conducive to the detachment and equilibrium required by the philosopher's calling."
The Balkans.
The tradition of sworn virgins developed out of the "Kanuni i Lekë Dukagjinit" ("The Code of Lekë Dukagjini", or simply the "Kanun"). The "Kanun" is not a religious document – many groups follow this code, including Roman Catholics, the Albanian Orthodox, and Muslims.
Women who become sworn virgins make a vow of celibacy, and are allowed to take on the social role of men: inheriting land, wearing male clothing, etc.
Political contexts.
During the May Fourth Movement in China, pledges of celibacy were a means through which participants resisted traditional marriage and devoted themselves to revolutionary causes.
Coalition government
A coalition government, or coalition cabinet, is a government by political parties that enter into a power-sharing arrangement of the executive. Coalition governments usually occur when no single party has achieved an absolute majority after an election. Such a situation, in which no party holds a majority, is common under proportional representation, but rare in nations with majoritarian electoral systems.
There are different forms of coalition government, including minority coalitions and surplus majority coalitions. A surplus majority coalition government controls more seats in parliament than the absolute majority needed to govern, whereas a minority coalition government does not hold a majority of legislative seats.
A coalition government may also be created in a time of national difficulty or crisis (for example, during wartime or an economic crisis) to give the government a high degree of perceived political legitimacy or collective identity; it can also play a role in diminishing internal political strife. In such times, parties have formed all-party coalitions (national unity governments, grand coalitions).
If a coalition collapses, the prime minister and cabinet may be ousted by a vote of no confidence, call snap elections, form a new majority coalition, or continue as a minority government.
Formation of coalition governments.
For a coalition to come about, the coalition partners need to compromise on their policy expectations. One prospective partner must lose on some points for the other to win on them, so that the parties reach a Nash equilibrium, which is necessary for a coalition to form. If the parties are not willing to compromise, the coalition will not come about.
Before parties form a coalition government, they formulate a coalition agreement, in which they state which policies they intend to adopt during the legislative period.
Coalition agreement.
In multi-party states, a coalition agreement is an agreement negotiated between the parties that form a coalition government. It codifies the most important shared goals and objectives of the cabinet. It is often written by the leaders of the parliamentary groups. Coalitions that have a written agreement are more productive than those that do not.
If an issue is discussed more deeply and in more detail in chamber than what appears in the coalition agreement, it indicates that the coalition parties do not share the same policy ideas. Hence, a more detailed written formulation of the issue helps parties in the coalition to limit 'agency loss' when the ministry overseeing that issue is managed by another coalition party.
Electoral accountability.
Coalition governments can also impact voting behavior by diminishing the clarity of responsibility.
Electoral accountability is harder to achieve in coalition governments than in single party governments because there is no direct responsibility within the governing parties in the coalition.
Retrospective voting has a strong influence on the outcome of an election. However, the risk of retrospective voting is much weaker for coalition governments than for single-party governments. Within the coalition, the party of the head of state bears the greatest risk of retrospective voting.
Governing cost.
Governing parties lose votes in the election that follows their legislative period; this is called "the governing cost". In comparison, a single-party government bears a higher electoral cost than a coalition party that holds the office of the prime minister. Furthermore, considering only the electoral cost created by being in the coalition government, the party that holds the office of prime minister suffers a smaller electoral cost than a junior coalition partner.
Distribution.
Countries which often operate with coalition cabinets include: the Nordic countries, the Benelux countries, Australia, Austria, Brazil, Chile, Cyprus, East Timor, France, Germany, Greece, Guinea-Bissau, India, Indonesia, Ireland, Israel, Italy, Japan, Kenya, Kosovo, Latvia, Lebanon, Lesotho, Lithuania, Malaysia, Nepal, New Zealand, Pakistan, Thailand, Spain, Trinidad and Tobago, Turkey, and Ukraine. Switzerland has been ruled by a consensus government with a coalition of the four strongest parties in parliament since 1959, called the "Magic Formula". Between 2010 and 2015, the United Kingdom also operated a formal coalition between the Conservative and the Liberal Democrat parties, but this was unusual: the UK usually has a single-party majority government. Not every parliament forms a coalition government, for example the European Parliament.
Armenia.
Armenia became an independent state in 1991, following the collapse of the Soviet Union. Since then, many political parties have been formed, and they mainly work with each other to form coalition governments. The country was governed by the My Step Alliance coalition after it successfully gained a majority in the National Assembly of Armenia following the 2018 Armenian parliamentary election.
Australia.
In federal Australian politics, the conservative Liberal, National, Country Liberal and Liberal National parties are united in a coalition, known simply as the Coalition.
While nominally two parties, the Coalition has become so stable, at least at the federal level, that in practice the lower house of Parliament has become a two-party system, with the Coalition and the Labor Party being the major parties. This coalition is also found in the states of New South Wales and Victoria. In South Australia and Western Australia the Liberal and National parties compete separately, while in the Northern Territory and Queensland the two parties have merged, forming the Country Liberal Party, in 1978, and the Liberal National Party, in 2008, respectively.
Coalition governments involving the Labor Party and the Australian Greens have occurred at state and territory level, for example following the 2010 Tasmanian state election and the 2016 and 2020 Australian Capital Territory elections.
Belgium.
In Belgium, a nation internally divided along linguistic lines (primarily between Dutch-speaking Flanders in the north and French-speaking Wallonia in the south, with Brussels also being by and large Francophone), each main political disposition (Social democracy, liberalism, right-wing populism, etc.) is, with the exception of the far-left Workers' Party of Belgium, split between Francophone and Dutch-speaking parties (e.g. the Dutch-speaking Vooruit and French-speaking Socialist Party being the two social-democratic parties). In the 2019 federal election, no party got more than 17% of the vote. Thus, forming a coalition government is an expected and necessary part of Belgian politics. In Belgium, coalition governments containing ministers from six or more parties are not uncommon; consequently, government formation can take an exceptionally long time. Between 2007 and 2011, Belgium operated under a caretaker government as no coalition could be formed.
Canada.
In Canada, the Great Coalition was formed in 1864 by the Clear Grits, Parti bleu, and Liberal-Conservative Party. During the First World War, Prime Minister Robert Borden attempted to form a coalition with the opposition Liberals to broaden support for controversial conscription legislation. The Liberal Party refused the offer but some of their members did cross the floor and join the government. Although sometimes referred to as a coalition government, according to the definition above, it was not. It was disbanded after the end of the war.
During the 2008–09 Canadian parliamentary dispute, two of Canada's opposition parties signed an agreement to form what would become the country's second federal coalition government since Canadian Confederation if the minority Conservative government was defeated on a vote of non-confidence, unseating Stephen Harper as Prime Minister. The agreement outlined a formal coalition consisting of two opposition parties, the Liberal Party and the New Democratic Party. The Bloc Québécois agreed to support the proposed coalition on confidence matters for 18 months. In the end, parliament was prorogued by the Governor General, and the coalition dispersed before parliament was reconvened.
According to historian Christopher Moore, coalition governments in Canada became much less possible in 1919, when the leaders of parties were no longer chosen by elected MPs but instead began to be chosen by party members. Such a manner of leadership election had never been tried in any parliamentary system before. According to Moore, as long as that kind of leadership selection process remains in place and concentrates power in the hands of the leader, as opposed to backbenchers, then coalition governments will be very difficult to form. Moore shows that the diffusion of power within a party tends to also lead to a diffusion of power in the parliament in which that party operates, thereby making coalitions more likely.
Provincial.
Several coalition governments have been formed within provincial politics. As a result of the 1919 Ontario election, the United Farmers of Ontario and the Labour Party, together with three independent MLAs, formed a coalition that governed Ontario until 1923.
In British Columbia, the governing Liberals formed a coalition with the opposition Conservatives in order to prevent the surging, left-wing Cooperative Commonwealth Federation from taking power in the 1941 British Columbia general election. Liberal premier Duff Pattullo refused to form a coalition with the third-place Conservatives, so his party removed him. The Liberal–Conservative coalition introduced a winner-take-all preferential voting system (the "Alternative Vote") in the hopes that their supporters would rank the other party as their second preference; however, this strategy backfired in the subsequent 1952 British Columbia general election where, to the surprise of many, the right-wing populist BC Social Credit Party won a minority. They were able to win a majority in the subsequent election as Liberal and Conservative supporters shifted their anti-CCF vote to Social Credit.
Manitoba has had more formal coalition governments than any other province. Following gains by the United Farmers/Progressive movement elsewhere in the country, the United Farmers of Manitoba unexpectedly won the 1921 election. Like their counterparts in Ontario, they had not expected to win and did not have a leader. They asked John Bracken, a professor in animal husbandry, to become leader and premier. Bracken changed the party's name to the Progressive Party of Manitoba. During the Great Depression, Bracken survived at a time when other premiers were being defeated by forming a coalition government with the Manitoba Liberals (eventually, the two parties would merge into the Liberal-Progressives, and decades later, the party would change its name to the Manitoba Liberal Party). In 1940, Bracken formed a wartime coalition government with almost every party in the Manitoba Legislature (the Conservatives, CCF, and Social Credit; however, the CCF broke with the coalition after a few years over policy differences). The only party not included was the small, communist Labor-Progressive Party, which had a handful of seats.
In Saskatchewan, NDP premier Roy Romanow formed a formal coalition with the Saskatchewan Liberals in 1999 after being reduced to a minority. After two years, the newly elected Liberal leader David Karwacki ordered the coalition be disbanded, the Liberal caucus disagreed with him and left the Liberals to run as New Democrats in the upcoming election. The Saskatchewan NDP was re-elected with a majority under its new leader Lorne Calvert, while the Saskatchewan Liberals lost their remaining seats and have not been competitive in the province since.
Denmark.
From the creation of the Folketing in 1849 through the introduction of proportional representation in 1918, there were only single-party governments in Denmark. Thorvald Stauning formed his second government and Denmark's first coalition government in 1929. Since then, the norm has been coalition governments, though there have been periods where single-party governments were frequent, such as the decade after the end of World War II, during the 1970s, and in the late 2010s. Every government from 1982 until the 2015 elections were coalitions. While Mette Frederiksen's first government only consisted of her own Social Democrats, her second government is a coalition of the Social Democrats, Venstre, and the Moderates.
When the Social Democrats under Stauning won 46% of the votes in the 1935 election, it was the closest any party has come to winning an outright majority in parliament since 1918. No single party has held a majority alone in that time, and even one-party governments have needed confidence agreements with at least one other party to govern. For example, though Frederiksen's first government consisted only of the Social Democrats, it also relied on the support of the Social Liberal Party, the Socialist People's Party, and the Red–Green Alliance.
Finland.
In Finland, no party has had an absolute majority in the parliament since independence, and multi-party coalitions have been the norm. Finland experienced its most stable government (Lipponen I and II) since independence with a five-party governing coalition, a so-called "rainbow government". The Lipponen cabinets set the stability record and were unusual in that both the centre-left (SDP) and radical left-wing (Left Alliance) parties sat in government with the major centre-right party (National Coalition). The Katainen cabinet was also a rainbow coalition of a total of five parties.
Germany.
In Germany, coalition governments are the norm, as it is rare for any single party to win a majority in parliament. The German political system makes extensive use of the constructive vote of no confidence, which requires governments to control an absolute majority of seats. Every government since the foundation of the Federal Republic in 1949 has involved at least two political parties. Typically, governments involve one of the two major parties forming a coalition with a smaller party. For example, from 1982 to 1998, the country was governed by a coalition of the CDU/CSU with the minor Free Democratic Party (FDP); from 1998 to 2005, a coalition of the Social Democratic Party of Germany (SPD) and the minor Greens held power. The CDU/CSU comprises an alliance of the Christian Democratic Union of Germany and Christian Social Union in Bavaria, described as "sister parties" which form a joint parliamentary group, and for this purpose are always considered a single party. Coalition arrangements are often given names based on the colours of the parties involved, such as "red-green" for the SPD and Greens. Coalitions of three parties are often named after countries whose flags contain those colours, such as the black-yellow-green Jamaica coalition.
Grand coalitions of the two major parties also occur, but these are relatively rare, as they typically prefer to associate with smaller ones. However, if the major parties are unable to assemble a majority, a grand coalition may be the only practical option. This was the case following the 2005 federal election, in which the incumbent SPD–Green government was defeated but the opposition CDU/CSU–FDP coalition also fell short of a majority. A grand coalition government was subsequently formed between the CDU/CSU and the SPD. Partnerships like these typically involve carefully structured cabinets: Angela Merkel of the CDU/CSU became Chancellor while the SPD was granted the majority of cabinet posts.
Coalition formation has become increasingly complex as voters migrated away from the major parties during the 2000s and 2010s. While coalitions of more than two parties were extremely rare in preceding decades, they have become common at the state level. These often include the liberal FDP and the Greens alongside one of the major parties, or "red–red–green" coalitions of the SPD, Greens, and The Left. In the eastern states, dwindling support for moderate parties has seen the rise of new forms of grand coalition such as the Kenya coalition. The rise of populist parties has also increased the time it takes for a successful coalition to form. By 2016, the Greens were participating in eleven governing coalitions at the state level in seven different constellations. During campaigns, parties often declare which coalitions or partners they prefer or reject. This tendency toward fragmentation has also spread to the federal level, particularly during the 2021 federal election, which saw the CDU/CSU and SPD fall short of a combined majority of votes for the first time in history.
India.
After India's Independence on 15 August 1947, the Indian National Congress, the major political party instrumental in the Indian independence movement, ruled the nation. The first Prime Minister, Jawaharlal Nehru, his successor Lal Bahadur Shastri, and the third Prime Minister, Indira Gandhi, were all members of the Congress party. However, Raj Narain, who had unsuccessfully contested an election against Indira from the constituency of Rae Bareli in 1971, lodged a case alleging electoral malpractice. In June 1975, Indira was found guilty and barred by the High Court from holding public office for six years. In response, a state of emergency was declared under the pretext of national security. The next election resulted in the formation of India's first ever national coalition government under the prime ministership of Morarji Desai, which was also the first non-Congress national government. It existed from 24 March 1977 to 15 July 1979, headed by the Janata Party, an amalgam of political parties opposed to the emergency imposed between 1975 and 1977. As the popularity of the Janata Party dwindled, Desai had to resign, and Chaudhary Charan Singh, a rival of his, became the fifth Prime Minister. However, due to lack of support, this coalition government did not complete its five-year term.
Congress returned to power in 1980 under Indira Gandhi, and later under Rajiv Gandhi as the sixth Prime Minister. However, the general election of 1989 once again brought a coalition government under the National Front, which lasted until 1991, with two Prime Ministers, the second one being supported by Congress. The 1991 election resulted in a Congress-led stable minority government for five years. The eleventh parliament produced three Prime Ministers in two years and forced the country back to the polls in 1998. The first successful coalition government in India to complete a whole five-year term was the Bharatiya Janata Party (BJP)-led National Democratic Alliance with Atal Bihari Vajpayee as Prime Minister from 1999 to 2004. Then another coalition, the Congress-led United Progressive Alliance, consisting of 13 separate parties, ruled India for two terms from 2004 to 2014 with Manmohan Singh as PM. However, in the 16th general election in May 2014, the BJP secured a majority on its own (becoming the first party to do so since the 1984 election), and the National Democratic Alliance came into power, with Narendra Modi as Prime Minister. In 2019, Narendra Modi was re-elected as Prime Minister as the National Democratic Alliance again secured a majority in the 17th general election. India returned to an NDA-led coalition government in 2024 after the BJP failed to achieve an outright majority.
Indonesia.
As a result of the toppling of Suharto, political freedom increased significantly. Whereas only three parties were allowed to exist in the New Order era, a total of 48 political parties participated in the 1999 election, and more than 10 parties have contested each election since. No party has won an outright majority in any of those elections, so coalition governments have been inevitable. The current government is a big-tent coalition of five parliamentary parties, the Advanced Indonesia Coalition, led by the major centre-right party Gerindra.
Ireland.
In Ireland, coalition governments are common; not since 1977 has a single party formed a majority government. Coalition governments to date have been led by either Fianna Fáil or Fine Gael. They have been joined in government by one or more smaller parties or independent members of parliament (TDs).
Ireland's first coalition government was formed after the 1948 general election, with five parties and independents represented at cabinet. Before 1989, Fianna Fáil had opposed participation in coalition governments, preferring single-party minority government instead. It formed a coalition government with the Progressive Democrats in that year.
The Labour Party has been in government on eight occasions. On all but one of those occasions, it was as a junior coalition party to Fine Gael. The exception was a government with Fianna Fáil from 1993 to 1994. The 29th Government of Ireland (2011–16) was a grand coalition of the two largest parties, as Fianna Fáil had fallen to third place in the Dáil.
The current government is a coalition of Fianna Fáil, Fine Gael and independent TDs. Although Fianna Fáil and Fine Gael have been serving in government together since 2020, they had never formed a coalition with each other before then because of their differing roots, which go back to the Irish Civil War (1922–23).
Israel.
A similar situation exists in Israel, which typically has at least 10 parties holding representation in the Knesset. The only faction to ever gain the majority of Knesset seats was Alignment, an alliance of the Labor Party and Mapam that held an absolute majority for a brief period from 1968 to 1969. Historically, control of the Israeli government has alternated between periods of rule by the right-wing Likud in coalition with several right-wing and religious parties and periods of rule by the center-left Labor in coalition with several left-wing parties. Ariel Sharon's formation of the centrist Kadima party in 2006 drew support from former Labor and Likud members, and Kadima ruled in coalition with several other parties.
Israel also formed a national unity government from 1984 to 1988. The premiership and the foreign ministry portfolio were each held by the head of one party for two years, and the two leaders switched roles in 1986.
Japan.
In Japan, controlling a majority in the House of Representatives is enough to decide the election of the prime minister: both houses of the National Diet hold recorded, two-round votes, but if the houses disagree, the decision of the House of Representatives automatically prevails once the mandatory conference committee procedure fails, which, by precedent, it does without any real attempt to reconcile the differing votes. Therefore, a party that controls the lower house can form a government on its own, and it can also pass a budget on its own. But passing any law (including important budget-related laws) requires either majorities in both houses of the legislature or, with the drawback of longer legislative proceedings, a two-thirds majority in the House of Representatives.
In recent decades, single-party full legislative control has been rare, and coalition governments have been the norm: most governments of Japan since the 1990s and, as of 2020, all since 1999 have been coalition governments, and some of them still fell short of a legislative majority. The Liberal Democratic Party (LDP) held a legislative majority of its own in the National Diet until 1989 (when it initially continued to govern alone), and between the 2016 and 2019 elections (when it remained in its previous ruling coalition). The Democratic Party of Japan (through accessions in the House of Councillors) briefly controlled a single-party legislative majority for a few weeks before it lost the 2010 election (it, too, continued to govern as part of its previous ruling coalition).
From the constitutional establishment of parliamentary cabinets and the introduction of the new, now directly elected upper house of parliament in 1947 until the formation of the LDP and the reunification of the Japanese Socialist Party in 1955, no single party formally controlled a legislative majority on its own. Only a few formal coalition governments (the 46th, 47th and, initially, 49th cabinets) alternated with technical minority governments and cabinets without technical control of the House of Councillors (later called "twisted Diets", "nejire kokkai", when the houses were not only technically but actually divided). During most of that period, however, the centrist Ryokufūkai was the strongest overall or decisive cross-bench group in the House of Councillors, and it was willing to cooperate with both centre-left and centre-right governments even when it was not formally part of the cabinet; and in the House of Representatives, minority governments of Liberals or Democrats (or their precursors; loose, indirect successors to the two major pre-war parties) could usually count on support from some members of the other major conservative party or from smaller conservative parties and independents. Finally, in 1955, when Hatoyama Ichirō's Democratic Party minority government called early House of Representatives elections and, despite substantial gains, remained in the minority, the Liberal Party refused to cooperate until negotiations on a long-debated "conservative merger" of the two parties were agreed upon; the merger was eventually completed.
After it was founded in 1955, the Liberal Democratic Party dominated Japan's governments for a long period: The new party governed alone without interruption until 1983, again from 1986 to 1993 and most recently between 1996 and 1999. The first time the LDP entered a coalition government followed its third loss of its House of Representatives majority in the 1983 House of Representatives general election. The LDP-New Liberal Club coalition government lasted until 1986 when the LDP won landslide victories in simultaneous double elections to both houses of parliament.
There have been coalition cabinets in which the post of prime minister was given to a junior coalition partner. In 1948, the JSP–DP–Cooperativist coalition government was led by prime minister Ashida Hitoshi (DP), who took over after his JSP predecessor Tetsu Katayama had been toppled by the left wing of his own party. In 1993, the JSP–Renewal–Kōmei–DSP–JNP–Sakigake–SDF–DRP coalition installed Morihiro Hosokawa (JNP) as compromise prime minister of the Ichirō Ozawa-negotiated rainbow coalition, which removed the LDP from power for the first time only to break up in less than a year. And the LDP–JSP–Sakigake government was formed in 1994 when the LDP agreed, amid internal turmoil and some defections, to bury the main post-war partisan rivalry and support the election of JSP prime minister Tomiichi Murayama in exchange for a return to government.
Malaysia.
Ever since Malaysia gained independence in 1957, none of its federal governments has ever been controlled by a single political party. Due to the multi-ethnic nature of the country, the first federal government was formed by the three-party Alliance coalition, composed of the United Malays National Organisation (UMNO), the Malaysian Chinese Association (MCA), and the Malaysian Indian Congress (MIC). It was later expanded and rebranded as Barisan Nasional (BN), which includes parties representing the Malaysian states of Sabah and Sarawak.
The 2018 Malaysian general election saw the first non-BN coalition federal government in the country's electoral history, formed through an alliance between the Pakatan Harapan (PH) coalition and the Sabah Heritage Party (WARISAN). The federal government formed after the 2020–2022 Malaysian political crisis was the first to be established through coordination between multiple political coalitions. This occurred when the newly formed Perikatan Nasional (PN) coalition partnered with BN and Gabungan Parti Sarawak (GPS). In 2022 after its registration, Sabah-based Gabungan Rakyat Sabah (GRS) formally joined the government (though it had been a part of an informal coalition since 2020). The current government led by Prime Minister Anwar Ibrahim is composed of four political coalitions and 19 parties.
New Zealand.
Mixed-member proportional representation (MMP) was introduced in New Zealand at the 1996 election.
To take power, a party or coalition needs a majority of the roughly 120 seats in parliament (there can be more if an overhang seat exists), normally 61. Since it is rare for a party to win a full majority, parties must form coalitions with other parties. For example, from 1996 to 1998, the country was governed by a coalition of National with the minor New Zealand First party; from 1999 to 2002, a coalition of Labour and the minor Alliance, with confidence and supply from the Green Party, held power. Between 2017 and 2020, Labour and New Zealand First formed a coalition government with confidence and supply from the Green Party. Following the 2023 general election, National, ACT and New Zealand First formed a coalition government after three weeks of negotiations.
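As a rough check of the threshold mentioned above (my own arithmetic, assuming the standard 120-seat house with no overhang seats), the smallest possible governing majority is

\left\lfloor \tfrac{120}{2} \right\rfloor + 1 = 60 + 1 = 61 \text{ seats.}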
Spain.
Since 2015, coalition governments have become much more common than before in municipalities and autonomous regions and, since 2020 (following the November 2019 Spanish general election), in the Spanish national government. There are two ways of forming them, both based on a shared programme and its institutional architecture: in one, the different areas of government are distributed among the parties making up the coalition; in the other, as in the Valencian Community, the ministries themselves are staffed with members of all the coalition parties, so that conflicts that do occur concern competences rather than fights between parties.
Coalition governments in Spain had already existed during the Second Republic, and they have been common in some specific autonomous communities since the 1980s. Nonetheless, the overall prevalence of two big parties has eroded, and since around 2015 the need for coalitions appears to be the new normal.
Turkey.
Turkey's first coalition government was formed after the 1961 general election, with two political parties and independents represented at cabinet. It was also Turkey's first grand coalition, as the two largest political parties of opposing political ideologies (the Republican People's Party and the Justice Party) united. Between 1960 and 2002, 17 coalition governments were formed in Turkey. The media and the general public view coalition governments as unfavorable and unstable because of their lack of effectiveness and short lifespans. Following Turkey's transition to a presidential system in 2017, political parties have focused more on forming electoral alliances. Because of the separation of powers, the government does not have to be formed by parliamentarians and is therefore not obliged to rest on a coalition. However, the parliament can dissolve the cabinet if the parliamentary opposition holds a majority.
United Kingdom.
In the United Kingdom, coalition governments (sometimes known as "national governments") have usually been formed only at times of national crisis. The most prominent was the National Government of 1931 to 1940. There were multi-party coalitions during both world wars. Apart from this, when no party has had a majority, minority governments have normally been formed, with one or more opposition parties agreeing to vote in favour of the legislation which governments need to function: for instance, the Labour government of James Callaghan formed a pact with the Liberals from March 1977 until July 1978, after a series of by-election defeats had eroded Labour's majority of three seats gained at the October 1974 election. However, in the run-up to the 1997 general election, Labour opposition leader Tony Blair was in talks with Liberal Democrat leader Paddy Ashdown about forming a coalition government if Labour failed to win a majority; there proved to be no need, as Labour won the election by a landslide. The 2010 general election resulted in a hung parliament (Britain's first for 36 years), and the Conservatives, led by David Cameron, who had won the largest number of seats, formed a coalition with the Liberal Democrats in order to gain a parliamentary majority, ending 13 years of Labour government. This was the first time that the Conservatives and Liberal Democrats had made a power-sharing deal at Westminster. It was also the first full coalition in Britain since 1945, having been formed 70 years, virtually to the day, after the establishment of Winston Churchill's wartime coalition.
Labour and the Liberal Democrats have entered into a coalition twice in the Scottish Parliament, as well as twice in the Welsh Assembly.
Uruguay.
Since the 1989 election, there have been four coalition governments, all of which included both the conservative National Party and the liberal Colorado Party. The first came after the election of the blanco Luis Alberto Lacalle and lasted until 1992, when it broke down over policy disagreements. The longest-lasting coalition was the Colorado-led coalition under the second government of Julio María Sanguinetti, in which the National Party leader Alberto Volonté was frequently described as a "Prime Minister". The next coalition, under president Jorge Batlle, was also Colorado-led, but it lasted only until after the 2002 Uruguay banking crisis, when the blancos abandoned the government. Following the 2019 Uruguayan general election, the blanco Luis Lacalle Pou formed the coalición multicolor, composed of his own National Party, the liberal Colorado Party, the eclectic Open Cabildo and the centre-left Independent Party.
Support and criticism.
Advocates of proportional representation suggest that a coalition government leads to more consensus-based politics, as a government comprising differing parties (often based on different ideologies) needs to compromise on governmental policy. Another stated advantage is that a coalition government better reflects the popular opinion of the electorate within a country; put differently, the political system then contains just one majority-based mechanism. Contrast this with district voting, in which the majority mechanism occurs twice: first, a majority of voters picks each representative and, second, the body of representatives makes a subsequent majority decision. The doubled majority decision can undermine voter support for that decision. The benefit of proportional representation is that it applies the majority mechanism only once. Additionally, coalition partnership may play an important role in moderating the level of affective polarization over parties, that is, the animosity and hostility against the opponent party's identifiers and supporters.
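A stylized arithmetic illustration of this "doubled majority" point (my own example, not from the source, assuming equal-sized single-member districts and bare majorities at both stages): a measure can be carried by winning just over half of the vote in just over half of the districts, that is, with the support of roughly

0.51 \times 0.51 \approx 0.26,

or about a quarter of all voters, whereas under a purely proportional, single-majority mechanism it would need just over half of all voters.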
Those who disapprove of coalition governments believe that such governments have a tendency to be fractious and prone to disharmony, as their component parties hold differing beliefs and thus may not always agree on policy. Sometimes the results of an election mean that the coalitions which are mathematically most probable are ideologically infeasible, for example in Flanders or Northern Ireland. A second difficulty might be the ability of minor parties to play "kingmaker" and, particularly in close elections, gain far more power in exchange for their support than the size of their vote would otherwise justify.
Germany was the largest nation to use proportional representation during the interbellum. After World War II, the German system, district-based but proportionally adjusted afterward, has included a threshold that keeps the number of parties limited. The threshold is set at five percent, so the parties that win seats have at least a minimum amount of political weight.
Coalition governments have also been criticized for sustaining a consensus on issues when disagreement and the consequent discussion would be more fruitful. To forge a consensus, the leaders of ruling coalition parties can agree to silence their disagreements on an issue to unify the coalition against the opposition. The coalition partners, if they control the parliamentary majority, can collude to make the parliamentary discussion on the issue irrelevant by consistently disregarding the arguments of the opposition and voting against the opposition's proposals, even if there is disagreement within the ruling parties about the issue. However, in winner-take-all systems this seems always to be the case.
Powerful parties can also act in an oligarchic way, forming an alliance to stifle the growth of emerging parties. Such an event is, of course, rare in coalition governments when compared with two-party systems, which typically exist because the growth of emerging parties is stifled, often through discriminatory nomination rules, plurality voting systems, and so on.
A single, more powerful party can shape the policies of the coalition disproportionately. Smaller or less powerful parties can be intimidated into not openly disagreeing. To maintain the coalition, they may have to vote against their own party's platform in the parliament. If they do not, the party has to leave the government and loses executive power. However, this is contradicted by the "kingmaker" factor mentioned above.
Finally, a strength that can also be seen as a weakness is that proportional representation puts the emphasis on collaboration. All parties involved look at the other parties in the best possible light, since they may be (future) coalition partners. The pendulum may therefore show less of a swing between political extremes. Yet external issues may then also be approached from a collaborative perspective, even when the outside force is not benevolent.
Legislative coalitions and agreements.
A legislative coalition or voting coalition is when political parties in a legislature align on voting to push forward specific policies or legislation, but do not engage in power-sharing of the executive branch like in coalition governments.
In a parliamentary system, political parties may form a confidence and supply arrangement, pledging to support the governing party on legislative bills and motions that carry a vote of confidence. Unlike a coalition government, which is a more formalised partnership characterised by the sharing of the executive branch, a confidence and supply arrangement does not entail executive "power-sharing". Instead, it involves the governing party supporting specific proposals and priorities of the other parties in the arrangement, in return for their continued support on motions of confidence.
United States.
In the United States, political parties have formed legislative coalitions in the past in order to push forward specific policies or legislation in the United States Congress. In 1855, a coalition was formed between members of the American Party, Opposition Party and Republican Party to elect Nathaniel P. Banks speaker of the House.
Later, in 1917, at the start of the 65th Congress, a coalition was formed between members of the Democratic Party, Progressive Party and Socialist Party of America to elect Champ Clark as the speaker of the United States House of Representatives. This was the only time a socialist party entered such a coalition in the House at the national level. More recently, during the 118th Congress, an informal legislative coalition formed between Democrats and mainline Republicans to pass critical legislation opposed by the Freedom Caucus, an extreme right-wing faction controlling a minority of seats in the Republican Conference.
A coalition government, in which "power-sharing" of executive offices is performed, has not occurred in the United States. The norms that allow coalition governments to form and persist do not exist in the United States.
|
6038
|
1300918047
|
https://en.wikipedia.org/wiki?curid=6038
|
Chemical engineering
|
Chemical engineering is an engineering field which deals with the study of the operation and design of chemical plants as well as methods of improving production. Chemical engineers develop economical commercial processes to convert raw materials into useful products. Chemical engineering uses principles of chemistry, physics, mathematics, biology, and economics to efficiently use, produce, design, transport and transform energy and materials. The work of chemical engineers can range from the utilization of nanotechnology and nanomaterials in the laboratory to large-scale industrial processes that convert chemicals, raw materials, living cells, microorganisms, and energy into useful forms and products. Chemical engineers are involved in many aspects of plant design and operation, including safety and hazard assessments, process design and analysis, modeling, control engineering, chemical reaction engineering, nuclear engineering, biological engineering, construction specification, and operating instructions.
Chemical engineers typically hold a degree in Chemical Engineering or Process Engineering. Practicing engineers may have professional certification and be accredited members of a professional body. Such bodies include the Institution of Chemical Engineers (IChemE) or the American Institute of Chemical Engineers (AIChE). A degree in chemical engineering is directly linked with all of the other engineering disciplines, to various extents.
Etymology.
A 1996 article cites James F. Donnelly for mentioning an 1839 reference to chemical engineering in relation to the production of sulfuric acid. In the same paper, however, George E. Davis, an English consultant, was credited with having coined the term. Davis also tried to found a Society of Chemical Engineering, but instead it was named the Society of Chemical Industry (1881), with Davis as its first secretary. The "History of Science in United States: An Encyclopedia" puts the use of the term around 1890. "Chemical engineering", describing the use of mechanical equipment in the chemical industry, became common vocabulary in England after 1850. By 1910, the professional title "chemical engineer" was already in common use in Britain and the United States.
History.
New concepts and innovations.
In the 1940s, it became clear that unit operations alone were insufficient for developing chemical reactors. While the predominance of unit operations in chemical engineering courses in Britain and the United States continued until the 1960s, transport phenomena started to receive greater focus. Along with other novel concepts, such as process systems engineering (PSE), a "second paradigm" was defined. Transport phenomena gave an analytical approach to chemical engineering while PSE focused on its synthetic elements, such as those of a control system and process design. Developments in chemical engineering before and after World War II were mainly incited by the petrochemical industry; however, advances in other fields were made as well. Advancements in biochemical engineering in the 1940s, for example, found application in the pharmaceutical industry, and allowed for the mass production of various antibiotics, including penicillin and streptomycin. Meanwhile, progress in polymer science in the 1950s paved the way for the "age of plastics".
Safety and hazard developments.
Concerns regarding the safety and environmental impact of large-scale chemical manufacturing facilities were also raised during this period. "Silent Spring", published in 1962, alerted its readers to the harmful effects of DDT, a potent insecticide. The 1974 Flixborough disaster in the United Kingdom resulted in 28 deaths, as well as damage to a chemical plant and three nearby villages. The 1984 Bhopal disaster in India resulted in almost 4,000 deaths. These and other incidents affected the reputation of the trade as industrial safety and environmental protection were given more focus. In response, the IChemE required safety to be part of every degree course that it accredited after 1982. By the 1970s, legislation and monitoring agencies had been instituted in various countries, such as France, Germany, and the United States. In time, the systematic application of safety principles to chemical and other process plants began to be considered a specific discipline, known as process safety.
Recent progress.
Advancements in computer science found applications for designing and managing plants, simplifying calculations and drawings that previously had to be done manually. The completion of the Human Genome Project is also seen as a major development, not only advancing chemical engineering but genetic engineering and genomics as well. Chemical engineering principles were used to produce DNA sequences in large quantities.
Concepts.
Plant design and construction.
Chemical engineering design concerns the creation of plans, specifications, and economic analyses for pilot plants, new plants, or plant modifications. Design engineers often work in a consulting role, designing plants to meet clients' needs. Design is limited by several factors, including funding, government regulations, and safety standards. These constraints dictate a plant's choice of process, materials, and equipment.
Plant construction is coordinated by project engineers and project managers, depending on the size of the investment. A chemical engineer may work as a project engineer full-time or part of the time, which requires additional training and job skills, or act as a consultant to the project group. In the United States, the education of chemical engineering graduates from baccalaureate programs accredited by ABET does not usually stress project engineering, which can be obtained by specialized training, as electives, or from graduate programs. Project engineering jobs are some of the largest employers of chemical engineers.
Process design and analysis.
A unit operation is a physical step in an individual chemical engineering process. Unit operations (such as crystallization, filtration, drying and evaporation) are used to prepare reactants, purify and separate products, recycle unspent reactants, and control energy transfer in reactors. A unit process, on the other hand, is the chemical equivalent of a unit operation. Along with unit operations, unit processes constitute a process operation. Unit processes (such as nitration, hydrogenation, and oxidation) involve the conversion of materials by biochemical, thermochemical and other means. Chemical engineers responsible for these are called process engineers.
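To make the idea of a single unit operation concrete, here is a minimal sketch (not from the source; the function name, the single two-component separator, and the stream values are illustrative assumptions) of the kind of steady-state mass balance a process engineer writes around one piece of equipment:

def separator_balance(feed_kg_h, feed_frac_a, frac_a_to_vapor, frac_b_to_vapor):
    """Steady-state mass balance around a hypothetical two-component separator.

    feed_kg_h       : total feed mass flow (kg/h)
    feed_frac_a     : mass fraction of component A in the feed
    frac_*_to_vapor : fraction of each component leaving in the vapor stream
    """
    feed_a = feed_kg_h * feed_frac_a
    feed_b = feed_kg_h - feed_a
    vapor_a = feed_a * frac_a_to_vapor
    vapor_b = feed_b * frac_b_to_vapor
    liquid_a = feed_a - vapor_a
    liquid_b = feed_b - vapor_b
    vapor = vapor_a + vapor_b
    liquid = liquid_a + liquid_b
    # The overall balance must close: what enters the unit leaves the unit.
    assert abs(feed_kg_h - (vapor + liquid)) < 1e-9
    return {"vapor_kg_h": vapor, "liquid_kg_h": liquid,
            "vapor_frac_a": vapor_a / vapor, "liquid_frac_a": liquid_a / liquid}

# Illustrative run: 1000 kg/h feed, 40% A; 90% of A and 20% of B leave as vapor.
print(separator_balance(1000.0, 0.40, 0.90, 0.20))

Real flowsheets chain many such balances (one per unit operation) together, which is essentially what process simulators automate.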
Process design requires the definition of equipment types and sizes as well as how they are connected and the materials of construction. Details are often printed on a Process Flow Diagram which is used to control the capacity and reliability of a new or existing chemical factory.
Education for chemical engineers in the first college degree, typically three or four years of study, stresses the principles and practices of process design. The same skills are used in existing chemical plants to evaluate their efficiency and make recommendations for improvements.
Transport phenomena.
Modeling and analysis of transport phenomena is essential for many industrial applications. Transport phenomena involve fluid dynamics, heat transfer and mass transfer, which are governed mainly by momentum transfer, energy transfer and transport of chemical species, respectively. Models often involve separate considerations for macroscopic, microscopic and molecular level phenomena. Modeling of transport phenomena, therefore, requires an understanding of applied mathematics.
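As an illustration of the kind of governing equation involved (a standard textbook relation rather than anything specific to this article), the one-dimensional unsteady heat conduction equation models energy transport driven by a temperature gradient:

\frac{\partial T}{\partial t} = \alpha \, \frac{\partial^2 T}{\partial x^2}, \qquad \alpha = \frac{k}{\rho c_p},

where k is the thermal conductivity, \rho the density, and c_p the specific heat capacity of the medium. Analogous equations, such as Fick's second law for mass transfer and the Navier–Stokes equations for momentum transfer, govern the other two transport phenomena.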
Applications and practice.
Chemical engineers develop economical ways of using materials and energy. Chemical engineers use chemistry and engineering to turn raw materials into usable products, such as medicine, petrochemicals, and plastics, in a large-scale industrial setting. They are also involved in waste management and research. Both applied and research facets can make extensive use of computers.
Chemical engineers may be involved in industry or university research where they are tasked with designing and performing experiments, by scaling up theoretical chemical reactions, to create better and safer methods for production, pollution control, and resource conservation. They may be involved in designing and constructing plants as a project engineer. Chemical engineers serving as project engineers use their knowledge in selecting optimal production methods and plant equipment to minimize costs and maximize safety and profitability. After plant construction, chemical engineering project managers may be involved in equipment upgrades, troubleshooting, and daily operations in either full-time or consulting roles.
|
6041
|
45293124
|
https://en.wikipedia.org/wiki?curid=6041
|
List of comedians
|
A comedian is one who entertains through comedy, such as jokes and other forms of humour. Following is a list of comedians, comedy groups, and comedy writers.
Comedians.
"(sorted alphabetically by surname)"
Comedy writers.
"(sorted alphabetically by surname)"
See also.
Lists of comedians by nationality
Other related lists
|
6042
|
1297587324
|
https://en.wikipedia.org/wiki?curid=6042
|
Compact space
|
In mathematics, specifically general topology, compactness is a property that seeks to generalize the notion of a closed and bounded subset of Euclidean space. The idea is that a compact space has no "punctures" or "missing endpoints", i.e., it includes all "limiting values" of points. For example, the open interval (0,1) would not be compact because it excludes the limiting values of 0 and 1, whereas the closed interval [0,1] would be compact. Similarly, the space of rational numbers formula_1 is not compact, because it has infinitely many "punctures" corresponding to the irrational numbers, and the space of real numbers formula_2 is not compact either, because it excludes the two limiting values formula_3 and formula_4. However, the "extended" real number line "would" be compact, since it contains both infinities. There are many ways to make this heuristic notion precise. These ways usually agree in a metric space, but may not be equivalent in other topological spaces.
One such generalization is that a topological space is "sequentially" compact if every infinite sequence of points sampled from the space has an infinite subsequence that converges to some point of the space. The Bolzano–Weierstrass theorem states that a subset of Euclidean space is compact in this sequential sense if and only if it is closed and bounded. Thus, if one chooses an infinite number of points in the closed unit interval [0, 1], some of those points will get arbitrarily close to some real number in that space.
For instance, some of the numbers in the sequence accumulate to 0 (while others accumulate to 1).
Since neither 0 nor 1 are members of the open unit interval (0, 1), those same sets of points would not accumulate to any point of it, so the open unit interval is not compact. Although subsets (subspaces) of Euclidean space can be compact, the entire space itself is not compact, since it is not bounded. For example, considering formula_5 (the real number line), the sequence of points has no subsequence that converges to any real number.
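A small worked example of the sequential definition (the particular sequences are my own illustration, chosen to match the discussion above): in the closed interval [0, 1] the alternating sequence

x_n = \frac{1 + (-1)^n}{2} = 0, 1, 0, 1, \ldots

does not converge, but its even-indexed subsequence is constantly 1 and therefore converges to 1, which lies in [0, 1]. By contrast, in the real line the sequence y_n = n has no convergent subsequence at all, which is one way of seeing that \mathbb{R} is not sequentially compact.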
Compactness was formally introduced by Maurice Fréchet in 1906 to generalize the Bolzano–Weierstrass theorem from spaces of geometrical points to spaces of functions. The Arzelà–Ascoli theorem and the Peano existence theorem exemplify applications of this notion of compactness to classical analysis. Following its initial introduction, various equivalent notions of compactness, including sequential compactness and limit point compactness, were developed in general metric spaces. In general topological spaces, however, these notions of compactness are not necessarily equivalent. The most useful notion—and the standard definition of the unqualified term "compactness"—is phrased in terms of the existence of finite families of open sets that "cover" the space, in the sense that each point of the space lies in some set contained in the family. This more subtle notion, introduced by Pavel Alexandrov and Pavel Urysohn in 1929, exhibits compact spaces as generalizations of finite sets. In spaces that are compact in this sense, it is often possible to patch together information that holds locally—that is, in a neighborhood of each point—into corresponding statements that hold throughout the space, and many theorems are of this character.
The term compact set is sometimes used as a synonym for compact space, but also often refers to a compact subspace of a topological space.
Historical development.
In the 19th century, several disparate mathematical properties were understood that would later be seen as consequences of compactness. On the one hand, Bernard Bolzano (1817) had been aware that any bounded sequence of points (in the line or plane, for instance) has a subsequence that must eventually get arbitrarily close to some other point, called a limit point.
Bolzano's proof relied on the method of bisection: the sequence was placed into an interval that was then divided into two equal parts, and a part containing infinitely many terms of the sequence was selected.
The process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts—until it closes down on the desired limit point. The full significance of Bolzano's theorem, and its method of proof, would not emerge until almost 50 years later when it was rediscovered by Karl Weierstrass.
In the 1880s, it became clear that results similar to the Bolzano–Weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points.
The idea of regarding functions as themselves points of a generalized space dates back to the investigations of Giulio Ascoli and Cesare Arzelà.
The culmination of their investigations, the Arzelà–Ascoli theorem, was a generalization of the Bolzano–Weierstrass theorem to families of continuous functions, the precise conclusion of which was that it was possible to extract a uniformly convergent sequence of functions from a suitable family of functions. The uniform limit of this sequence then played precisely the same role as Bolzano's "limit point". Towards the beginning of the twentieth century, results similar to that of Arzelà and Ascoli began to accumulate in the area of integral equations, as investigated by David Hilbert and Erhard Schmidt.
For a certain class of Green's functions coming from solutions of integral equations, Schmidt had shown that a property analogous to the Arzelà–Ascoli theorem held in the sense of mean convergence – or convergence in what would later be dubbed a Hilbert space. This ultimately led to the notion of a compact operator as an offshoot of the general notion of a compact space.
It was Maurice Fréchet who, in 1906, had distilled the essence of the Bolzano–Weierstrass property and coined the term "compactness" to refer to this general phenomenon (he used the term already in his 1904 paper which led to the famous 1906 thesis).
However, a different notion of compactness altogether had also slowly emerged at the end of the 19th century from the study of the continuum, which was seen as fundamental for the rigorous formulation of analysis.
In 1870, Eduard Heine showed that a continuous function defined on a closed and bounded interval was in fact uniformly continuous. In the course of the proof, he made use of a lemma that from any countable cover of the interval by smaller open intervals, it was possible to select a finite number of these that also covered it.
The significance of this lemma was recognized by Émile Borel (1895), and it was generalized to arbitrary collections of intervals by Pierre Cousin (1895) and Henri Lebesgue (1904). The Heine–Borel theorem, as the result is now known, is another special property possessed by closed and bounded sets of real numbers.
This property was significant because it allowed for the passage from local information about a set (such as the continuity of a function) to global information about the set (such as the uniform continuity of a function). This sentiment was expressed by Lebesgue, who also exploited it in the development of the integral now bearing his name. Ultimately, the Russian school of point-set topology, under the direction of Pavel Alexandrov and Pavel Urysohn, formulated Heine–Borel compactness in a way that could be applied to the modern notion of a topological space. Alexandrov and Urysohn showed that the earlier version of compactness due to Fréchet, now called (relative) sequential compactness, under appropriate conditions followed from the version of compactness that was formulated in terms of the existence of finite subcovers. It was this notion of compactness that became the dominant one, because it was not only a stronger property, but it could be formulated in a more general setting with a minimum of additional technical machinery, as it relied only on the structure of the open sets in a space.
Basic examples.
Any finite space is compact; a finite subcover can be obtained by selecting, for each point, an open set containing it. A nontrivial example of a compact space is the (closed) unit interval [0, 1] of real numbers. If one chooses an infinite number of distinct points in the unit interval, then there must be some accumulation point among these points in that interval. For instance, the odd-numbered terms of the sequence get arbitrarily close to 0, while the even-numbered ones get arbitrarily close to 1. The given example sequence shows the importance of including the boundary points of the interval, since the limit points must be in the space itself; an open (or half-open) interval of the real numbers is not compact. It is also crucial that the interval be bounded, since in the interval , one could choose the sequence of points , of which no sub-sequence ultimately gets arbitrarily close to any given real number.
In two dimensions, closed disks are compact since for any infinite number of points sampled from a disk, some subset of those points must get arbitrarily close either to a point within the disc, or to a point on the boundary. However, an open disk is not compact, because a sequence of points can tend to the boundary—without getting arbitrarily close to any point in the interior. Likewise, spheres are compact, but a sphere missing a point is not since a sequence of points can still tend to the missing point, thereby not getting arbitrarily close to any point "within" the space. Lines and planes are not compact, since one can take a set of equally-spaced points in any given direction without approaching any point.
Definitions.
Various definitions of compactness may apply, depending on the level of generality.
A subset of Euclidean space in particular is called compact if it is closed and bounded. This implies, by the Bolzano–Weierstrass theorem, that any infinite sequence from the set has a subsequence that converges to a point in the set. Various equivalent notions of compactness, such as sequential compactness and limit point compactness, can be developed in general metric spaces.
In contrast, the different notions of compactness are not equivalent in general topological spaces, and the most useful notion of compactness—originally called "bicompactness"—is defined using covers consisting of open sets (see "Open cover definition" below).
That this form of compactness holds for closed and bounded subsets of Euclidean space is known as the Heine–Borel theorem. Compactness, when defined in this manner, often allows one to take information that is known locally—in a neighbourhood of each point of the space—and to extend it to information that holds globally throughout the space. An example of this phenomenon is Dirichlet's theorem, to which it was originally applied by Heine, that a continuous function on a compact interval is uniformly continuous; here, continuity is a local property of the function, and uniform continuity the corresponding global property.
Open cover definition.
Formally, a topological space X is called "compact" if every open cover of X has a finite subcover. That is, X is compact if for every collection C of open subsets of X such that
formula_6
there is a finite subcollection F ⊆ C such that
formula_7
Some branches of mathematics such as algebraic geometry, typically influenced by the French school of Bourbaki, use the term "quasi-compact" for the general notion, and reserve the term "compact" for topological spaces that are both Hausdorff and "quasi-compact". A compact set is sometimes referred to as a "compactum", plural "compacta".
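A standard worked example of this definition (my own illustration, not taken from the article text): the open interval (0, 1) is not compact, because the open cover

(0, 1) = \bigcup_{n=2}^{\infty} \left( \tfrac{1}{n}, 1 \right)

has no finite subcover; any finite subcollection has a largest index N, its union is just (1/N, 1), and the points x with 0 < x \le 1/N are left uncovered. The Heine–Borel theorem, by contrast, guarantees that every open cover of the closed interval [0, 1] admits a finite subcover.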
Compactness of subsets.
A subset K of a topological space X is said to be compact if it is compact as a subspace (in the subspace topology). That is, K is compact if for every arbitrary collection C of open subsets of X such that
formula_8
there is a finite subcollection F ⊆ C such that
formula_9
Because compactness is a topological property, the compactness of a subset depends only on the subspace topology induced on it. It follows that, if formula_10, with the subset Z equipped with the subspace topology, then K is compact in Z if and only if K is compact in Y.
Characterization.
If X is a topological space then the following are equivalent:
Bourbaki defines a compact space (quasi-compact space) as a topological space where each filter has a cluster point (i.e., 8. in the above).
Euclidean space.
For any subset A of Euclidean space, A is compact if and only if it is closed and bounded; this is the Heine–Borel theorem.
As a Euclidean space is a metric space, the conditions in the next subsection also apply to all of its subsets. Of all of the equivalent conditions, it is in practice easiest to verify that a subset is closed and bounded, for example, for a closed interval or closed n-ball.
Metric spaces.
For any metric space (X, d), the following are equivalent (assuming countable choice):
A compact metric space also satisfies the following properties:
Ordered spaces.
For an ordered space X (i.e. a totally ordered set equipped with the order topology), the following are equivalent:
An ordered space satisfying (any one of) these conditions is called a complete lattice.
In addition, the following are equivalent for all ordered spaces X, and (assuming countable choice) are true whenever X is compact. (The converse in general fails if X is not also metrizable.):
Characterization by continuous functions.
Let X be a topological space and C(X) the ring of real continuous functions on X.
For each p in X, the evaluation map formula_12
given by ev_p(f) = f(p) is a ring homomorphism.
The kernel of ev_p is a maximal ideal, since the residue field C(X)/ker(ev_p) is the field of real numbers, by the first isomorphism theorem. A topological space X is pseudocompact if and only if every maximal ideal in C(X) has residue field the real numbers. For completely regular spaces, this is equivalent to every maximal ideal being the kernel of an evaluation homomorphism. There are pseudocompact spaces that are not compact, though.
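In symbols (a restatement of the argument just given, with the evaluation map written out explicitly):

\operatorname{ev}_p : C(X) \to \mathbb{R}, \qquad f \mapsto f(p), \qquad C(X)/\ker(\operatorname{ev}_p) \cong \mathbb{R},

and because the quotient is a field, \ker(\operatorname{ev}_p) is a maximal ideal of C(X).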
In general, for non-pseudocompact spaces there are always maximal ideals m in C(X) such that the residue field C(X)/m is a (non-Archimedean) hyperreal field. The framework of non-standard analysis allows for the following alternative characterization of compactness: a topological space X is compact if and only if every point of the natural extension *X is infinitely close to a point of X (more precisely, it is contained in the monad of that point).
Hyperreal definition.
A space X is compact if its hyperreal extension *X (constructed, for example, by the ultrapower construction) has the property that every point of *X is infinitely close to some point of X. For example, an open real interval such as X = (0, 1) is not compact because its hyperreal extension contains infinitesimals, which are infinitely close to 0, which is not a point of X.
Properties of compact spaces.
Functions and compact spaces.
Since a continuous image of a compact space is compact, the extreme value theorem holds for such spaces: a continuous real-valued function on a nonempty compact space is bounded above and attains its supremum.
(Slightly more generally, this is true for an upper semicontinuous function.) As a sort of converse to the above statements, the pre-image of a compact space under a proper map is compact.
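Stated symbolically, the extreme value theorem mentioned above reads (this is just a restatement, not an addition to the claim):

f : X \to \mathbb{R} \ \text{continuous, } X \ \text{compact and nonempty} \implies \exists\, x_0 \in X \ \text{with} \ f(x_0) = \sup_{x \in X} f(x).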
Compactifications.
Every topological space X is an open dense subspace of a compact space having at most one point more than X, by the Alexandroff one-point compactification.
By the same construction, every locally compact Hausdorff space X is an open dense subspace of a compact Hausdorff space having at most one point more than X.
Ordered compact spaces.
A nonempty compact subset of the real numbers has a greatest element and a least element.
Let X be a simply ordered set endowed with the order topology.
Then X is compact if and only if X is a complete lattice (i.e. all subsets have suprema and infima).
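Two quick illustrations of this criterion (examples of my own choosing, consistent with the statement above): the closed interval [0, 1] with its usual order is a complete lattice, since every subset S \subseteq [0, 1] has both \sup S and \inf S within [0, 1], so it is compact in the order topology; the real line \mathbb{R} is not a complete lattice, because an unbounded set such as \mathbb{N} has no supremum in \mathbb{R}, and indeed \mathbb{R} is not compact.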
|
6045
|
28481209
|
https://en.wikipedia.org/wiki?curid=6045
|
Clodius
|
Clodius is an alternate form of the Roman "nomen" Claudius, a patrician "gens" that was traditionally regarded as Sabine in origin. The alternation of "o" and "au" is characteristic of the Sabine dialect. The feminine form is Clodia.
Republican era.
Other Clodii of the Republic.
In addition to Clodius, Clodii from the Republican era include:
Women of the Claudii Marcelli branch were often called "Clodia" in the late Republic.
Imperial era.
People using the name "Clodius" during the period of the Roman Empire include:
Clodii Celsini.
The Clodii Celsini continued to practice the traditional religions of antiquity in the face of Christian hegemony through at least the 4th century, when Clodius Celsinus Adelphius (see below) converted. Members of this branch include:
|
6046
|
5957417
|
https://en.wikipedia.org/wiki?curid=6046
|
Cicero
|
Marcus Tullius Cicero ( ; ; 3 January 106 BC – 7 December 43 BC) was a Roman statesman, lawyer, scholar, philosopher, orator, writer and Academic skeptic, who tried to uphold optimate principles during the political crises that led to the establishment of the Roman Empire. His extensive writings include treatises on rhetoric, philosophy and politics. He is considered one of Rome's greatest orators and prose stylists and the innovator of what became known as "Ciceronian rhetoric". Cicero was educated in Rome and in Greece. He came from a wealthy municipal family of the Roman equestrian order, and served as consul in 63 BC.
He greatly influenced both ancient and modern reception of the Latin language. A substantial part of his work has survived, and he was admired by both ancient and modern authors alike. Cicero adapted the arguments of the chief schools of Hellenistic philosophy in Latin and coined a large portion of Latin philosophical vocabulary via lexical innovation (e.g. neologisms such as "generator" and "infinitio"), almost 150 of which were the result of translating Greek philosophical terms.
Though he was an accomplished orator and successful lawyer, Cicero believed his political career was his most important achievement. During his consulship in 63 BC, he suppressed the Catilinarian conspiracy. However, because he had summarily and controversially executed five of the conspirators without trial, he was exiled in 58 BC but recalled the next year. Spending much of the 50s unhappy with the state of Roman politics, he took a governorship in Cilicia in 51 and returned to Italy on the eve of Caesar's civil war. Supporting Pompey during the war, Cicero was pardoned after Caesar's victory. After Caesar's assassination in 44 BC, he led the Senate against Mark Antony, attacking him in a series of speeches. He elevated Caesar's heir Octavian to rally support against Antony in the ensuing violent conflict. But after Octavian and Antony reconciled to form the triumvirate, Cicero was proscribed and executed in late 43 BC while attempting to escape Italy for safety. His severed hands and head, taken by order of Antony to represent the repercussions of his anti-Antonian actions as a writer and as an orator respectively, were then displayed on the rostra.
Petrarch's rediscovery of Cicero's letters is often credited for initiating the 14th-century Renaissance in public affairs, humanism, and classical Roman culture. According to Polish historian Tadeusz Zieliński, "the Renaissance was above all things a revival of Cicero, and only after him and through him of the rest of Classical antiquity." The peak of Cicero's authority and prestige came during the 18th-century Enlightenment, and his impact on leading Enlightenment thinkers and political theorists such as John Locke, David Hume, Montesquieu, and Edmund Burke was substantial. His works rank among the most influential in global culture, and today still constitute one of the most important bodies of primary material for the writing and revision of Roman history, especially the last days of the Roman Republic.
Early life.
Marcus Tullius Cicero was born on 3 January 106 BC in Arpinum, a hill town southeast of Rome. He belonged to the "tribus" Cornelia. His father was a wealthy member of the equestrian order and possessed good connections in Rome. However, not being of robust health (he experienced poor digestion and inflammation of the eyes), he could not enter public life and studied extensively to compensate. Little is known about Cicero's mother Helvia, but Cicero's brother Quintus wrote in a letter that she was a thrifty housewife.
Cicero's cognomen, a hereditary nickname, comes from the Latin for chickpea, . Plutarch explains that the name was originally given to one of Cicero's ancestors who had a cleft in the tip of his nose resembling a chickpea. The famous family names of Fabius, Lentulus, and Piso come from the Latin names of beans, lentils, and peas, respectively. Plutarch writes that Cicero was urged to change this deprecatory name when he entered politics, but refused, saying that he would make "Cicero" more glorious than "Scaurus" ("Swollen-ankled") and "Catulus" ("Puppy").
At the age of 15, in 90 BC, Cicero started serving under Pompey Strabo and later Sulla in the Social War between Rome and its Italian allies. When in Rome during the turbulent plebeian tribunate of Publius Sulpicius Rufus in 88 BC, which saw a short bout of fighting between Sulpicius and Sulla, who had been elected consul for that year, Cicero found himself greatly impressed by Sulpicius' oratory even if he disagreed with his politics. He continued his studies at Rome, writing a pamphlet titled "On Invention" relating to rhetorical argumentation and studying philosophy with Greek academics who had fled the ongoing First Mithridatic War.
Education.
During this period in Roman history, Greek language and cultural studies were highly valued by the elite classes. Cicero was therefore educated in the teachings of the ancient Greek philosophers, poets and historians; he obtained much of his understanding of the theory and practice of rhetoric from the Greek poet Archias. Cicero used his knowledge of Greek to translate many of the theoretical concepts of Greek philosophy into Latin, thus making Greek philosophical works accessible to a larger audience. It was precisely his broad education that tied him to the traditional Roman elite.
Cicero's interest in philosophy figured heavily in his later career and led to him providing a comprehensive account of Greek philosophy for a Roman audience, including creating a philosophical vocabulary in Latin. In 87 BC, Philo of Larissa, the head of the Platonic Academy that had been founded by Plato in Athens about 300 years earlier, arrived in Rome. Cicero, "inspired by an extraordinary zeal for philosophy", sat enthusiastically at his feet and absorbed Carneades' Academic Skeptic philosophy.
According to Plutarch, Cicero was an extremely talented student, whose learning attracted attention from all over Rome, affording him the opportunity to study Roman law under Quintus Mucius Scaevola. Cicero's fellow students were Gaius Marius Minor, Servius Sulpicius Rufus (who became a famous lawyer, one of the few whom Cicero considered superior to himself in legal matters), and Titus Pomponius. The latter two became Cicero's friends for life, and Pomponius (who later received the nickname "Atticus", and whose sister married Cicero's brother) would become, in Cicero's own words, "as a second brother", with both maintaining a lifelong correspondence.
In 79 BC, Cicero left for Greece, Asia Minor and Rhodes. This was perhaps to avoid the potential wrath of Sulla, as Plutarch claims, though Cicero himself says it was to hone his skills and improve his physical fitness. In Athens he studied philosophy with Antiochus of Ascalon, the 'Old Academic' and initiator of Middle Platonism. In Asia Minor, he met the leading orators of the region and continued to study with them. Cicero then journeyed to Rhodes to meet his former teacher, Apollonius Molon, who had taught him in Rome. Molon helped Cicero hone the excesses in his style, as well as train his body and lungs for the demands of public speaking. Charting a middle path between the competing Attic and Asiatic styles, Cicero would ultimately become considered second only to Demosthenes among history's orators.
Early career.
Early legal activity.
While Cicero had feared that the law courts would be closed forever, they were reopened in the aftermath of Sulla's civil war and the purging of Sulla's political opponents in the proscriptions. Many of the orators whom Cicero had admired in his youth were now dead from age or political violence. His first major appearance in the courts was in 81 BC, at the age of 26, when he delivered "Pro Quinctio", a speech defending certain commercial transactions, which Cicero later recorded and disseminated.
His more famous speech defending Sextus Roscius of Ameria on charges of parricide in 80 BC was his first appearance in criminal court. In this high-profile case, Cicero accused a freedman of the dictator Sulla, Chrysogonus, of fabricating Roscius' father's proscription to obtain Roscius' family's property. Successful in his defence, Cicero tactfully avoided incriminating Sulla of any wrongdoing and developed a positive oratorical reputation for himself.
While Plutarch claims that Cicero left Rome shortly thereafter out of fear of Sulla's response, according to Kathryn Tempest, "most scholars now dismiss this suggestion" because Cicero left Rome after Sulla resigned his dictatorship. Cicero, for his part, later claimed that he left Rome, headed for Asia, to strengthen his physique and develop his oratory. After marrying his wife, Terentia, in 80 BC, he eventually left for Asia Minor with his brother Quintus, his friend Titus Atticus, and others on a long trip spanning most of 79 through 77 BC. Returning to Rome in 77 BC, Cicero again busied himself with legal defence.
Early political career.
In 76 BC, at the quaestorian elections, Cicero was elected to the post of quaestor at the minimum age required, 30 years, in the first returns from the "comitia tributa". Ex officio, he also became a member of the Senate. In the quaestorian lot, he was assigned to Sicily for 75 BC. The post, which was largely one related to financial administration in support of the state or provincial governors, proved for Cicero an important place where he could gain clients in the provinces. His time in Sicily saw him balance his duties – largely in terms of sending more grain back to Rome – with his support for the provincials, Roman businessmen in the area, and local potentates. Adeptly balancing those responsibilities, he won their gratitude. He was also appreciated by local Syracusans for the rediscovery of the lost tomb of Archimedes, which he personally financed.
Promising to lend the Sicilians his oratorical voice, he was called on a few years after his quaestorship to prosecute the Roman province's governor Gaius Verres, for abuse of power and corruption. In 70 BC, at the age of 36, Cicero launched his first high-profile prosecution against Verres, an emblem of the corrupt Sullan supporters who had risen in the chaos of the civil war.
The prosecution of Gaius Verres was a great forensic success for Cicero. Verres hired the prominent lawyer Quintus Hortensius; Cicero, after a lengthy period in Sicily collecting testimonials and evidence and persuading witnesses to come forward, returned to Rome and won the case in a series of dramatic court battles. His unique style of oratory set him apart from the flamboyant Hortensius. On the conclusion of this case, Cicero came to be considered the greatest orator in Rome. Cicero may well have taken the case partly for reasons of his own: Hortensius was, at this point, known as the best lawyer in Rome, and to beat him would guarantee much success and the prestige that Cicero needed to start his career. Cicero's oratorical ability is shown in his character assassination of Verres and various other techniques of persuasion used on the jury. One such example is found in the speech "In Verrem", where he states "with you on this bench, gentlemen, with Marcus Acilius Glabrio as your president, I do not understand what Verres can hope to achieve". Oratory was considered a great art in ancient Rome and an important tool for disseminating knowledge and promoting oneself in elections, in part because there were no regular newspapers or mass media. Cicero was neither a patrician nor a plebeian noble; his rise to political office despite his relatively humble origins has traditionally been attributed to his brilliance as an orator.
Cicero grew up in a time of civil unrest and war. Sulla's victory in the first of a series of civil wars led to a new constitutional framework that undermined "libertas" (liberty), the fundamental value of the Roman Republic. Nonetheless, Sulla's reforms strengthened the position of the equestrian class, contributing to that class's growing political power. Cicero was both an Italian and a "novus homo" ('new man'), but more importantly he was a Roman constitutionalist. His social class and loyalty to the Republic ensured that he would "command the support and confidence of the people as well as the Italian middle classes". He successfully ascended the cursus honorum, holding each magistracy at or near the youngest possible age: quaestor in 75 BC (age 30), aedile in 69 BC (age 36), and praetor in 66 BC (age 39), when he served as president of the extortion court. He was then elected consul at age 42.
Consulship.
Cicero, seizing the opportunity offered by optimate fear of reform, was elected consul for the year 63 BC; he was elected with the support of every unit of the centuriate assembly, rival members of the post-Sullan establishment, and the leaders of municipalities throughout post-Social War Italy. His co-consul for the year, Gaius Antonius Hybrida, played a minor role.
He began his consular year by opposing a land bill proposed by a plebeian tribune which would have appointed commissioners with semi-permanent authority over land reform. Cicero was also active in the courts, defending Gaius Rabirius from accusations of participating in the unlawful killing of plebeian tribune Lucius Appuleius Saturninus in 100 BC. The prosecution threatened to reopen the conflict between the Marian and Sullan factions at Rome. Cicero defended the use of force as being authorised by a "senatus consultum ultimum", which would prove similar to his own use of force under such conditions.
Catilinarian conspiracy.
Most famously, in part because of his own publicity, he thwarted a conspiracy led by Lucius Sergius Catilina to overthrow the Roman Republic with the help of foreign armed forces. Cicero procured a "senatus consultum ultimum" (a recommendation from the senate attempting to legitimise the use of force) and drove Catiline from the city with four vehement speeches (the Catilinarian orations), which remain outstanding examples of his rhetorical style. The Orations listed Catiline and his followers' debaucheries, and denounced Catiline's senatorial sympathizers as roguish and dissolute debtors clinging to Catiline as a final and desperate hope. Cicero demanded that Catiline and his followers leave the city. At the conclusion of Cicero's first speech (which was made in the Temple of Jupiter Stator), Catiline hurriedly left the Senate. In his following speeches, Cicero did not directly address Catiline. He delivered the second and third orations before the people, and the last one again before the Senate. By these speeches, Cicero wanted to prepare the Senate for the worst possible case; he also delivered more evidence against Catiline.
Catiline fled and left behind his followers to start the revolution from within while he himself assaulted the city with an army of "moral and financial bankrupts, or of honest fanatics and adventurers". It is alleged that Catiline had attempted to involve the Allobroges, a tribe of Transalpine Gaul, in the plot, but Cicero, working with the Gauls, was able to seize letters that incriminated the five conspirators and forced them to confess in front of the Senate. The senate then deliberated upon the conspirators' punishment. As it was the dominant advisory body to the various legislative assemblies rather than a judicial body, there were limits to its power; however, martial law was in effect, and it was feared that simple house arrest or exile – the standard options – would not remove the threat to the state. At first Decimus Junius Silanus spoke for the "extreme penalty"; but during the debate many were swayed by Julius Caesar, who decried the precedent it would set and argued in favor of life imprisonment in various Italian towns. Cato the Younger then rose in defense of the death penalty, and the Senate finally came down in its support. Cicero had the conspirators taken to the Tullianum, the notorious Roman prison, where they were strangled. Cicero himself accompanied the former consul Publius Cornelius Lentulus Sura, one of the conspirators, to the Tullianum.
Cicero received the honorific "pater patriae" for his efforts to suppress the conspiracy, but lived thereafter in fear of trial or exile for having put Roman citizens to death without trial. While the "senatus consultum ultimum" gave some legitimacy to the use of force against the conspirators, Cicero also argued that Catiline's conspiracy, by virtue of its treason, made the conspirators enemies of the state and forfeited the protections intrinsically possessed by Roman citizens. The consuls moved decisively. Antonius Hybrida was dispatched to defeat Catiline in battle that year, preventing Crassus or Pompey from exploiting the situation for their own political aims.
After the suppression of the conspiracy, Cicero was proud of his accomplishment. Some of his political enemies argued that though the act gained Cicero popularity, he exaggerated the extent of his success. He overestimated his popularity again several years later after being exiled from Italy and then allowed back from exile. At this time, he claimed that the republic would be restored along with him.
Shortly after completing his consulship, in late 62 BC, Cicero arranged the purchase of a large townhouse on the Palatine Hill previously owned by Rome's richest citizen, Marcus Licinius Crassus. To finance the purchase, Cicero borrowed some two million sesterces from Publius Cornelius Sulla, whom he had previously defended in court. Cicero boasted his house was "in conspectu prope totius urbis" ("in sight of nearly the whole city"), only a short walk from the Roman Forum.
Exile and return.
In 60 BC, Julius Caesar invited Cicero to be the fourth member of his existing partnership with Pompey and Marcus Licinius Crassus, an assembly that would eventually be called the First Triumvirate. Cicero refused the invitation because he suspected it would undermine the Republic, and because he was strongly opposed to anything unconstitutional that limited the powers of the consuls and replaced them with non-elected officials.
During Caesar's consulship of 59 BC, the triumvirate had achieved many of their goals of land reform, publicani debt forgiveness, ratification of Pompeian conquests, etc. With Caesar leaving for his provinces, they wished to maintain their hold on politics. They engineered the adoption of patrician Publius Clodius Pulcher into a plebeian family and had him elected as one of the ten tribunes of the plebs for 58 BC. Clodius used the triumvirate's backing to push through legislation that benefited them. He introduced several laws (the "leges Clodiae") that made him popular with the people, strengthening his power base; he then turned on Cicero. Clodius passed a law which made it illegal to offer "fire and water" (i.e. shelter or food) to anyone who executed a Roman citizen without a trial.
Cicero, having executed members of the Catiline conspiracy four years previously without formal trial, was clearly the intended target. Furthermore, many believed that Clodius acted in concert with the triumvirate who feared that Cicero would seek to abolish many of Caesar's accomplishments while consul the year before. Cicero argued that the "senatus consultum ultimum" indemnified him from punishment, and he attempted to gain the support of the senators and consuls, especially of Pompey.
Cicero grew out his hair, dressed in mourning and toured the streets. Clodius' gangs dogged him, hurling abuse, stones and even excrement. Hortensius, trying to rally to his old rival's support, was almost lynched. The Senate and the consuls were cowed. Caesar, who was still encamped near Rome, was apologetic but said he could do nothing when Cicero brought himself to grovel in the proconsul's tent. Everyone seemed to have abandoned Cicero.
After Clodius passed a law to deny to Cicero fire and water (i.e. shelter) within four hundred miles of Rome, Cicero went into exile. He arrived at Thessalonica, on 23 May 58 BC. In his absence, Clodius, who lived next door to Cicero on the Palatine, arranged for Cicero's house to be confiscated by the state, and was even able to purchase a part of the property in order to extend his own house. After demolishing Cicero's house, Clodius had the land consecrated and symbolically erected a temple of Liberty ("aedes Libertatis") on the vacant land.
Cicero's exile caused him to fall into depression. He wrote to Atticus: "Your pleas have prevented me from committing suicide. But what is there to live for? Don't blame me for complaining. My afflictions surpass any you ever heard of earlier". After the intervention of recently elected tribune Titus Annius Milo, acting on the behalf of Pompey who wanted Cicero as a client, the Senate voted in favor of recalling Cicero from exile. Clodius cast the single vote against the decree. Cicero returned to Italy on 5 August 57 BC, landing at Brundisium. He was greeted by a cheering crowd, and, to his delight, his beloved daughter Tullia. In his "Oratio De Domo Sua Ad Pontifices", Cicero convinced the College of Pontiffs to rule that the consecration of his land was invalid, thereby allowing him to regain his property and rebuild his house on the Palatine.
Cicero tried to re-enter politics as an independent operator, but his attempts to attack portions of Caesar's legislation were unsuccessful and encouraged Caesar to re-solidify his political alliance with Pompey and Crassus. The conference at Luca in 56 BC left the three-man alliance in domination of the republic's politics; this forced Cicero to recant and support the triumvirate for fear of being entirely excluded from public life. After the conference, Cicero lavishly praised Caesar's achievements, got the Senate to vote a thanksgiving for Caesar's victories, and grant money to pay his troops. He also delivered a speech 'On the consular provinces' which checked an attempt by Caesar's enemies to strip him of his provinces in Gaul. After this, a cowed Cicero concentrated on his literary works. It is uncertain whether he was directly involved in politics for the following few years. His legal work largely consisted of defending allies of the ruling triumvirs and his own personal friends and allies; he defended his former pupil Marcus Caelius Rufus against a charge of murder in 56. Under the influence of the triumvirs, he had also defended his former enemies Publius Vatinius (in August 54 BC), Marcus Aemilius Scaurus (between July and September) and Gnaeus Plancius in September, which weakened his prestige and sparked attacks on his integrity: Luca Grillo has suggested these cases as the source of the poet Catullus's double-edged comment that Cicero was "the best defender of anybody".
Governorship of Cilicia.
In 51 BC he reluctantly accepted a promagistracy (as proconsul) in Cilicia for the year; there were few other former consuls eligible as a result of a legislative requirement enacted by Pompey in 52 BC specifying an interval of five years between a consulship or praetorship and a provincial command. He served as proconsul of Cilicia from May 51 BC, arriving in the provinces three months later around August.
In 53 BC Marcus Licinius Crassus had been defeated by the Parthians at the Battle of Carrhae. This opened the Roman East for a Parthian invasion, causing unrest in Syria and Cilicia. Cicero restored calm by his mild system of government. He discovered that a great amount of public property had been embezzled by corrupt previous governors and members of their staff, and did his utmost to restore it. Thus he greatly improved the condition of the cities. He retained the civil rights of, and exempted from penalties, the men who gave the property back. Besides this, he was extremely frugal in his outlays for staff and private expenses during his governorship, and this made him highly popular among the natives.
Besides his activity in ameliorating the hard pecuniary situation of the province, Cicero was also creditably active in the military sphere. Early in his governorship he received information that prince Pacorus, son of Orodes II the king of the Parthians, had crossed the Euphrates, and was ravaging the Syrian countryside and had even besieged Cassius (the interim Roman commander in Syria) in Antioch. Cicero eventually marched with two understrength legions and a large contingent of auxiliary cavalry to Cassius's relief. Pacorus and his army had already given up on besieging Antioch and were heading south through Syria, ravaging the countryside again. Cassius and his legions followed them, harrying them wherever they went, eventually ambushing and defeating them near Antigonea.
Another large troop of Parthian horsemen was defeated by Cicero's cavalry who happened to run into them while scouting ahead of the main army. Cicero next defeated some robbers who were based on Mount Amanus and was hailed as imperator by his troops. Afterwards he led his army against the independent Cilician mountain tribes, besieging their fortress of Pindenissum. It took him 47 days to reduce the place, which fell in December. On 30 July 50 BC Cicero left the province to his brother Quintus, who had accompanied him on his governorship as his legate. On his way back to Rome he stopped in Rhodes and then went to Athens, where he caught up with his old friend Titus Pomponius Atticus and met men of great learning.
Julius Caesar's civil war.
Cicero arrived in Rome on 4 January 49 BC. He stayed outside the pomerium, to retain his promagisterial powers: either in expectation of a triumph or to retain his independent command authority in the coming civil war. The struggle between Pompey and Julius Caesar grew more intense in 50 BC. Cicero favored Pompey, seeing him as a defender of the senate and Republican tradition, but at that time avoided openly alienating Caesar. When Caesar invaded Italy in 49 BC, Cicero fled Rome. Caesar, seeking an endorsement by a senior senator, courted Cicero's favor, but even so Cicero slipped out of Italy and traveled to Dyrrhachium where Pompey's staff was situated. Cicero traveled with the Pompeian forces to Pharsalus in Macedonia in 48 BC, though he was quickly losing faith in the competence and righteousness of the Pompeian side. Eventually, he provoked the hostility of his fellow senator Cato, who told him that he would have been of more use to the cause of the "optimates" if he had stayed in Rome. After Caesar's victory at the Battle of Pharsalus on 9 August, Cicero refused to take command of the Pompeian forces and continue the war. He returned to Rome, still as a promagistrate with his lictors, in 47 BC, and dismissed them upon his crossing the pomerium and renouncing his command.
In a letter to Varro, Cicero outlined his strategy under Caesar's dictatorship. Cicero, however, was taken by surprise when the "Liberatores" assassinated Caesar on the ides of March, 44 BC. Cicero was not included in the conspiracy, even though the conspirators were sure of his sympathy. Marcus Junius Brutus called out Cicero's name, asking him to restore the republic when he lifted his bloodstained dagger after the assassination. A letter Cicero wrote in February 43 BC to Trebonius, one of the conspirators, began, "How I could wish that you had invited me to that most glorious banquet on the Ides of March!" Cicero became a popular leader during the period of instability following the assassination. He had no respect for Mark Antony, who was scheming to take revenge upon Caesar's murderers. In exchange for amnesty for the assassins, he arranged for the Senate to agree not to declare Caesar to have been a tyrant, which allowed the Caesarians to have lawful support and kept Caesar's reforms and policies intact.
Opposition to Mark Antony and death.
In April 43 BC, "diehard republicans" may have revived the ancient position of "princeps senatus" (leader of the senate) for Cicero. This position had been very prestigious until the constitutional reforms of Sulla in 82–80 BC, which removed most of its importance.
On the other side, Antony was consul and leader of the Caesarian faction, and unofficial executor of Caesar's public will. Relations between the two were never friendly and worsened after Cicero claimed that Antony was taking liberties in interpreting Caesar's wishes and intentions. Octavian was Caesar's adopted son and heir. After he returned to Italy, Cicero began to play him against Antony. He praised Octavian, declaring he would not make the same mistakes as his father. He attacked Antony in a series of speeches he called the "Philippics", named after Demosthenes's denunciations of Philip II of Macedon. At the time, Cicero's popularity as a public figure was unrivalled.
Cicero supported Decimus Junius Brutus Albinus as governor of Cisalpine Gaul ("Gallia Cisalpina") and urged the Senate to name Antony an enemy of the state. The speech of Lucius Piso, Caesar's father-in-law, delayed proceedings against Antony. Antony was later declared an enemy of the state when he refused to lift the siege of Mutina, which was in the hands of Decimus Brutus. Cicero's plan to drive out Antony failed. Antony and Octavian reconciled and allied with Lepidus to form the Second Triumvirate after the successive battles of Forum Gallorum and Mutina. The alliance came into official existence with the "lex Titia", passed on 27 November 43 BC, which gave each triumvir a consular "imperium" for five years. The Triumvirate immediately began a proscription of their enemies, modeled after that of Sulla in 82 BC. Cicero and all of his contacts and supporters were numbered among the enemies of the state, even though Octavian argued for two days against Cicero being added to the list.
Cicero was one of the most viciously and doggedly hunted among the proscribed. He was viewed with sympathy by a large segment of the public and many people refused to report that they had seen him. He was caught on 7 December 43 BC leaving his villa in Formiae in a litter heading to the seaside, where he hoped to embark on a ship destined for Macedonia. When his killers – Herennius (a Centurion) and Popilius (a Tribune) – arrived, Cicero's own slaves said they had not seen him, but he was given away by Philologus, a freedman of his brother Quintus Cicero.
As reported by Seneca the Elder, according to the historian Aufidius Bassus, Cicero's last words are said to have been:
He bowed to his captors, leaning his head out of the litter in a gladiatorial gesture to ease the task. By baring his neck and throat to the soldiers, he was indicating that he would not resist. According to Plutarch, Herennius first slew him, then cut off his head. On Antony's instructions his hands, which had penned the Philippics against Antony, were cut off as well; these were nailed along with his head on the Rostra in the Forum Romanum according to the tradition of Marius and Sulla, both of whom had displayed the heads of their enemies in the Forum. Cicero was the only victim of the proscriptions who was displayed in that manner. According to Cassius Dio, in a story often mistakenly attributed to Plutarch, Antony's wife Fulvia took Cicero's head, pulled out his tongue, and jabbed it repeatedly with her hairpin in final revenge against Cicero's power of speech.
Cicero's son, Marcus Tullius Cicero Minor, during his year as a consul in 30 BC, avenged his father's death, to a certain extent, when he announced to the Senate Mark Antony's naval defeat at Actium in 31 BC by Octavian.
Octavian is reported to have praised Cicero in later times, within the circle of his family, as a patriot and a scholar. However, it was Octavian's acquiescence that had allowed Cicero to be killed, as Cicero was condemned by the new triumvirate.
Cicero's career as a statesman was marked by inconsistencies and a tendency to shift his position in response to changes in the political climate. His indecision may be attributed to his sensitive and impressionable personality; he was prone to overreaction in the face of political and private change. "Would that he had been able to endure prosperity with greater self-control, and adversity with more fortitude!" wrote C. Asinius Pollio, a contemporary Roman statesman and historian.
Personal life and family.
Cicero married Terentia probably at the age of 27, in 79 BC. According to the upper-class mores of the day it was a marriage of convenience but lasted harmoniously for nearly 30 years. Terentia's family was wealthy, probably the plebeian noble house of Terenti Varrones, thus meeting the needs of Cicero's political ambitions in both economic and social terms. She had a half-sister named Fabia, who as a child had become a Vestal Virgin, a great honour. Terentia was a strong-willed woman and (citing Plutarch) "took more interest in her husband's political career than she allowed him to take in household affairs".
In the 50s BC, Cicero's letters to Terentia became shorter and colder. He complained to his friends that Terentia had betrayed him but did not specify in which sense. Perhaps the marriage could not outlast the strain of the political upheaval in Rome, Cicero's involvement in it, and various other disputes between the two. The divorce appears to have taken place in 51 BC or shortly before. In 46 or 45 BC, Cicero married a young girl, Publilia, who had been his ward. It is thought that Cicero needed her money, particularly after having to repay the dowry of Terentia, who came from a wealthy family.
Although his marriage to Terentia was one of convenience, it is commonly known that Cicero held great love for his daughter Tullia. When she suddenly became ill in February 45 BC and died after having seemingly recovered from giving birth to a son in January, Cicero was stunned. "I have lost the one thing that bound me to life," he wrote to Atticus. Atticus told him to come for a visit during the first weeks of his bereavement, so that he could comfort him when his pain was at its greatest. In Atticus's large library, Cicero read everything that the Greek philosophers had written about overcoming grief, "but my sorrow defeats all consolation." Caesar and Brutus, as well as Servius Sulpicius Rufus, sent him letters of condolence.
Cicero hoped that his son Marcus would become a philosopher like him, but Marcus himself wished for a military career. He joined the army of Pompey in 49 BC, and after Pompey's defeat at Pharsalus in 48 BC, he was pardoned by Caesar. Cicero sent him to Athens to study as a disciple of the peripatetic philosopher Kratippos in 48 BC, but he used this absence from "his father's vigilant eye" to "eat, drink, and be merry." After Cicero's death, he joined the army of the "Liberatores" but was later pardoned by Augustus. Augustus's bad conscience over having acquiesced in Cicero's placement on the proscription list during the Second Triumvirate led him to aid Marcus Minor's career considerably. He became an augur and was nominated consul in 30 BC together with Augustus. In that office he announced the revocation of Mark Antony's honors, thereby taking a measure of revenge on the man responsible for the proscription. Later he was appointed proconsul of Syria and the province of Asia.
Legacy.
Cicero has been traditionally considered the master of Latin prose, with Quintilian declaring that Cicero was "not the name of a man, but of eloquence itself." The English words "Ciceronian" (meaning "eloquent") and "cicerone" (meaning "local guide") derive from his name. He is credited with transforming Latin from a modest utilitarian language into a versatile literary medium capable of expressing abstract and complicated thoughts with clarity. Julius Caesar praised Cicero's achievement by saying "it is more important to have greatly extended the frontiers of the Roman spirit than the frontiers of the Roman empire". According to John William Mackail, "Cicero's unique and imperishable glory is that he created the language of the civilized world, and used that language to create a style which nineteen centuries have not replaced, and in some respects have hardly altered."
Cicero was also an energetic writer with an interest in a wide variety of subjects, in keeping with the Hellenistic philosophical and rhetorical traditions in which he was trained. The quality and ready accessibility of Ciceronian texts favored very wide distribution and inclusion in teaching curricula, as suggested by a graffito at Pompeii, admonishing: "You will like Cicero, or you will be whipped".
Cicero was greatly admired by influential Church Fathers such as Augustine of Hippo, who credited Cicero's lost "Hortensius" for his eventual conversion to Christianity, and St. Jerome, who had a feverish vision in which he was accused of being a "follower of Cicero and not of Christ" before the judgment seat.
This influence further increased after the Early Middle Ages in Europe, where more of his writings survived than any other Latin author's. Medieval philosophers were influenced by Cicero's writings on natural law and innate rights.
Petrarch's rediscovery of Cicero's letters provided the impetus for searches for ancient Greek and Latin writings scattered throughout European monasteries, and the subsequent rediscovery of classical antiquity led to the Renaissance. Subsequently, Cicero became synonymous with classical Latin to such an extent that a number of humanist scholars began to assert that no Latin word or phrase should be used unless it appeared in Cicero's works, a stance criticised by Erasmus.
His voluminous correspondence, much of it addressed to his friend Atticus, has been especially influential, introducing the art of refined letter writing to European culture. Cornelius Nepos, the first century BC biographer of Atticus, remarked that Cicero's letters contained such a wealth of detail "concerning the inclinations of leading men, the faults of the generals, and the revolutions in the government" that their reader had little need for a history of the period.
Among Cicero's admirers were Desiderius Erasmus, Martin Luther, and John Locke. Following the invention of Johannes Gutenberg's printing press, "De Officiis" was the second book printed in Europe, after the Gutenberg Bible. Scholars note Cicero's influence on the rebirth of religious toleration in the 17th century.
Cicero was especially popular with the Philosophes of the 18th century, including Edward Gibbon, Diderot, David Hume, Montesquieu, and Voltaire. Gibbon wrote of his first experience reading the author's collective works thus: "I tasted the beauty of the language; I breathed the spirit of freedom; and I imbibed from his precepts and examples the public and private sense of a man...after finishing the great author, a library of eloquence and reason, I formed a more extensive plan of reviewing the Latin classics..."
Voltaire called Cicero "the greatest as well as the most elegant of Roman philosophers" and even staged a play based on Cicero's role in the Catilinarian conspiracy, called "Rome Sauvée, ou Catilina", to "make young people who go to the theatre acquainted with Cicero." Voltaire was spurred to pen the drama as a rebuff to his rival Claude Prosper Jolyot de Crébillon's own play "Catilina", which had portrayed Cicero as a coward and villain who hypocritically married his own daughter to Catiline.
Montesquieu produced his "Discourse on Cicero" in 1717, in which he heaped praise on the author because he rescued "philosophy from the hands of scholars, and freed it from the confusion of a foreign language". Montesquieu went on to declare that Cicero was "of all the ancients, the one who had the most personal merit, and whom I would prefer to resemble."
Cicero the republican inspired the Founding Fathers of the United States and the revolutionaries of the French Revolution. John Adams said, "As all the ages of the world have not produced a greater statesman and philosopher united than Cicero, his authority should have great weight." Thomas Jefferson names Cicero as one of a handful of major figures who contributed to a tradition "of public right" that informed his draft of the Declaration of Independence and shaped American understandings of "the common sense" basis for the right of revolution. Camille Desmoulins said of the French republicans in 1789 that they were "mostly young people who, nourished by the reading of Cicero at school, had become passionate enthusiasts for liberty".
In the modern era, American libertarian Jim Powell starts his history of liberty with the sentence: "Marcus Tullius Cicero expressed principles that became the bedrock of liberty in the modern world."
Likewise, no other ancient personality has inspired as much venomous dislike as Cicero, especially in more modern times. His commitment to the values of the Republic accommodated a hatred of the poor and persistent opposition to the advocates and mechanisms of popular representation. Friedrich Engels referred to him as "the most contemptible scoundrel in history" for upholding republican "democracy" while at the same time denouncing land and class reforms. Cicero has faced criticism for exaggerating the democratic qualities of republican Rome, and for defending the Roman oligarchy against the popular reforms of Caesar. Michael Parenti admits Cicero's abilities as an orator, but finds him a vain, pompous and hypocritical personality who, when it suited him, could show public support for popular causes that he privately despised. Parenti presents Cicero's prosecution of the Catiline conspiracy as legally flawed at least, and possibly unlawful.
Cicero also had an influence on modern astronomy. Nicolaus Copernicus, searching for ancient views on earth motion, said that he "first ... found in Cicero that Hicetas supposed the earth to move."
Notably, "Cicero" was the name attributed to size 12 font in typesetting table drawers. For ease of reference, type sizes 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16, and 20 were all given different names.
Works.
Cicero was declared a righteous pagan by the Early Church. Subsequent Roman and medieval Christian writers quoted liberally from his works "De re publica" ("On the Commonwealth") and "De Legibus" ("On the Laws"), and much of his work has been recreated from these surviving fragments. Cicero also articulated an early, abstract conceptualization of rights, based on ancient law and custom. Of Cicero's books, six on rhetoric have survived, as well as parts of seven on philosophy. Of his speeches, 88 were recorded, but only 52 survive.
In archaeology.
Cicero's great repute in Italy has led to numerous ruins being identified as having belonged to him, though none have been substantiated with absolute certainty. In Formia, two Roman-era ruins are popularly believed to be Cicero's mausoleum, the "Tomba di Cicerone", and the villa where he was assassinated in 43 BC. The latter building is built around a central hall with Doric columns and a coffered vault, with a separate nymphaeum, on five acres of land near Formia. A modern villa was built on the site after the Rubino family purchased the land from Ferdinand II of the Two Sicilies in 1868. Cicero's supposed tomb is a 24-meter (79 feet) tall tower on an "opus quadratum" base on the ancient Via Appia outside of Formia. Some suggest that it is not in fact Cicero's tomb, but a monument built on the spot where Cicero was intercepted and assassinated while trying to reach the sea.
In Pompeii, a large villa excavated in the mid 18th century just outside the Herculaneum Gate was widely believed to have been Cicero's, who was known to have owned a holiday villa in Pompeii he called his "Pompeianum". The villa was stripped of its fine frescoes and mosaics and then re-buried after 1763 – it has yet to be re-excavated. However, contemporaneous descriptions of the building from the excavators combined with Cicero's own references to his "Pompeianum" differ, making it unlikely that it is Cicero's villa.
In Rome, the location of Cicero's house has been roughly identified from excavations of the Republican-era stratum on the northwestern slope of the Palatine Hill. Cicero's "domus" has long been known to have stood in the area, according to his own descriptions and those of later authors, but there is some debate about whether it stood near the base of the hill, very close to the Roman Forum, or nearer to the summit. During his life the area was the most desirable in Rome, densely occupied with Patrician houses including the "Domus Publica" of Julius Caesar and the home of Cicero's mortal enemy Clodius.
Notable fictional portrayals.
In Dante's 1320 poem the "Divine Comedy", the author encounters Cicero, among other philosophers, in Limbo. Ben Jonson dramatised the conspiracy of Catiline in his play "Catiline His Conspiracy", featuring Cicero as a character. Cicero also appears as a minor character in William Shakespeare's play "Julius Caesar".
Cicero was portrayed on the motion picture screen by British actor Alan Napier in the 1953 film "Julius Caesar", based on Shakespeare's play. He has also been played by such noted actors as Michael Hordern (in "Cleopatra"), and André Morell (in the 1970 "Julius Caesar"). Most recently, Cicero was portrayed by David Bamber in the HBO series "Rome" (2005–2007) and appeared in both seasons.
In the historical novel series "Masters of Rome", Colleen McCullough presents a not-so-flattering depiction of Cicero's career, showing him struggling with an inferiority complex and vanity, morally flexible and fatally indiscreet, while his rival Julius Caesar is shown in a more approving light. Cicero is portrayed as a hero in the novel "A Pillar of Iron" by Taylor Caldwell (1965). Robert Harris' novels "Imperium", "Lustrum" (published under the name "Conspirata" in the United States) and "Dictator" comprise a three-part series based on the life of Cicero. In these novels Cicero's character is depicted in a more favorable way than in those of McCullough, with his positive traits equaling or outweighing his weaknesses (while conversely Caesar is depicted as more sinister than in McCullough). Cicero is a major recurring character in the "Roma Sub Rosa" series of mystery novels by Steven Saylor. He also appears several times as a peripheral character in John Maddox Roberts' "SPQR" series.
Samuel Barnett portrays Cicero in a 2017 audio drama series pilot produced by Big Finish Productions. A full series was released the following year. All episodes are written by David Llewellyn and directed and produced by Scott Handcock.
Consul
Consul (abbrev. "cos."; Latin plural "consules") was the title of one of the two chief magistrates of the Roman Republic, and subsequently also an important title under the Roman Empire. The title was used in other European city-states through antiquity and the Middle Ages, in particular in the Republics of Genoa and Pisa, then revived in modern states, notably in the First French Republic. The related adjective is consular, from the Latin "consularis".
This usage contrasts with modern terminology, where a consul is a type of diplomat.
Roman consul.
A consul held the highest elected political office of the Roman Republic (509 to 27 BC), and ancient Romans considered the consulship the highest level of the "cursus honorum" (an ascending sequence of public offices to which politicians aspired). Consuls were elected to office and held power for one year. There were always two consuls in power at any time.
Other uses in antiquity.
Private sphere.
It was not uncommon for an organization under Roman private law to copy the terminology of state and city institutions for its own statutory agents. The founding statute, or contract, of such an organisation was called "lex", 'law'. The people elected each year were patricians, members of the upper class.
City-states.
While many cities, including the Gallic states and the Carthaginian Republic, had a double-headed chief magistracy, another title was often used, such as the Punic "sufet", "Duumvir", or native styles like "Meddix".
Medieval city-states, communes and municipalities.
Republic of Genoa.
The city-state of Genoa, unlike ancient Rome, bestowed the title of "consul" on various state officials, not necessarily restricted to the highest. Among these were Genoese officials stationed in various Mediterranean ports, whose role included helping Genoese merchants and sailors in difficulties with the local authorities. Great Britain reciprocated by appointing consuls to Genoa from 1722. This institution, with its name, was later emulated by other powers and is reflected in the modern usage of the word (see Consul (representative)).
Republic of Pisa.
In addition to the Genoese Republic, the Republic of Pisa also used the title of consul in the early stages of its government. The Consulate of the Republic of Pisa was the major government institution in Pisa from 1087 to 1189. Although from 1190 the office lost ground within the government to the Podestà, some citizens were again elected as consuls during parts of the 13th century.
Other uses in the Medieval period.
Throughout most of southern France, a consul was an office equivalent to the "échevins" of the north and roughly similar to English aldermen. The most prominent were those of Bordeaux and Toulouse, which came to be known as jurats and capitouls, respectively. The capitouls of Toulouse were granted transmittable nobility. In many other smaller towns the first consul was the equivalent of a mayor today, assisted by a variable number of secondary consuls and jurats. His main task was to levy and collect tax.
The Dukes of Gaeta also often used the title of "consul" in its Greek form "Hypatos" (see List of Hypati and Dukes of Gaeta).
French Revolution.
French Republic 1799–1804.
After Napoleon Bonaparte staged a coup against the Directory government in November 1799, the French Republic adopted a constitution which conferred executive powers upon three consuls, elected for a period of ten years. In reality, the first consul, Bonaparte, dominated his two colleagues and held supreme power, soon making himself consul for life (1802) and eventually, in 1804, emperor.
The office was held by:
Bolognese Republic, 1796.
The short-lived Bolognese Republic, proclaimed in 1796 as a French client republic in the Central Italian city of Bologna, had a government consisting of nine consuls and its head of state was the "Presidente del Magistrato", i.e., chief magistrate, a presiding office held for four months by one of the consuls. Bologna already had consuls at some parts of its Medieval history.
Roman Republic, 1798–1800.
The French-sponsored Roman Republic (15 February 1798 – 23 June 1800) was headed by multiple consuls:
Consular rule was interrupted by the Neapolitan occupation (27 November – 12 December 1798), which installed a Provisional Government:
Rome was occupied by France (11 July – 28 September 1799) and again by Naples (30 September 1799 – 23 June 1800), bringing an end to the Roman Republic.
Revolutionary Greece, 1821.
Among the many petty local republics that were formed during the first year of the Greek Revolution, prior to the creation of a unified Provisional Government at the First National Assembly at Epidaurus, were:
"Note: in Greek, the term for "consul" is "hypatos" (ὕπατος), which translates as "supreme one", and hence does not necessarily imply a joint office."
Paraguay, 1813–1844.
In between a series of juntas and various other short-lived regimes, the young republic was governed by "consuls of the republic", with two consuls alternating in power every 4 months:
After a few presidents of the Provisional Junta, there were again consuls of the republic from 14 March 1841 to 13 March 1844 (ruling jointly, but occasionally styled "first consul" and "second consul"): Carlos Antonio López Ynsfrán (b. 1792 – d. 1862) and Mariano Roque Alonzo Romero (d. 1853), the last of the aforementioned juntistas and Commandant-General of the Army.
Thereafter all republican rulers were styled "president".
Modern uses of the term.
In modern terminology, a consul is a type of diplomat. The "American Heritage Dictionary" defines consul as "an official appointed by a government to reside in a foreign country and represent its interests there." "The Devil's Dictionary" defines Consul as "in American politics, a person who having failed to secure an office from the people is given one by the Administration on condition that he leave the country".
In most governments, the consul is the head of the consular section of an embassy, and is responsible for all consular services such as immigrant and non-immigrant visas, passports, and citizen services for expatriates living or traveling in the host country.
A less common modern usage is when the consul of one country takes a governing role in the host country.
List of equations in classical mechanics
Classical mechanics is the branch of physics used to describe the motion of macroscopic objects. It is the most familiar of the theories of physics. The concepts it covers, such as mass, acceleration, and force, are commonly used and known. The subject is based upon a three-dimensional Euclidean space with fixed axes, called a frame of reference. The point of concurrency of the three axes is known as the origin of the particular space.
Classical mechanics utilises many equations—as well as other mathematical concepts—which relate various physical quantities to one another. These include differential equations, manifolds, Lie groups, and ergodic theory. This article gives a summary of the most important of these.
This article lists equations from Newtonian mechanics; see analytical mechanics for the more general formulation of classical mechanics (which includes Lagrangian and Hamiltonian mechanics).
Classical mechanics.
General energy definitions.
Every conservative force has a potential energy. By following two principles one can consistently assign a non-relative value to "U":
Kinematics.
In the following rotational definitions, the angle can be any angle about the specified axis of rotation. It is customary to use "θ", but this does not have to be the polar angle used in polar coordinate systems. The unit axial vector
formula_1
defines the axis of rotation, formula_2 = unit vector in the direction of "r", formula_3 = unit vector tangential to the angle.
Dynamics.
Precession.
The precession angular speed of a spinning top is given by:
formula_4
where "w" is the weight of the spinning flywheel.
Energy.
The mechanical work done by an external agent on a system is equal to the change in kinetic energy of the system:
General work-energy theorem (translation and rotation).
The work done "W" by an external agent which exerts a force F (at r) and torque τ on an object along a curved path "C" is:
formula_5
where θ is the angle of rotation about an axis defined by a unit vector n.
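Numerically, this line integral can be approximated by summing F · dr plus (τ · n) dθ over small segments of a discretised path, as the surrounding text indicates. The following sketch is a minimal illustration; the particular force field, torque, axis and path are assumptions chosen only to make the example runnable.

```python
import numpy as np

# Approximate W = integral over C of (F . dr + (tau . n) dtheta) by summing
# contributions over small segments of a discretised path.

def work_done(points, thetas, force, torque, n_hat):
    """points: (N, 3) positions along the path; thetas: (N,) rotation angles.
    force(r) and torque(r) return 3-vectors evaluated at position r."""
    W = 0.0
    for i in range(len(points) - 1):
        r_mid = 0.5 * (points[i] + points[i + 1])   # midpoint of the segment
        dr = points[i + 1] - points[i]
        dtheta = thetas[i + 1] - thetas[i]
        W += np.dot(force(r_mid), dr) + np.dot(torque(r_mid), n_hat) * dtheta
    return W

# Illustrative inputs: uniform force, constant torque, straight-line path.
force = lambda r: np.array([0.0, 0.0, -9.81])           # e.g. gravity on 1 kg
torque = lambda r: np.array([0.0, 0.0, 0.2])            # 0.2 N m about the z axis
n_hat = np.array([0.0, 0.0, 1.0])                       # rotation axis
points = np.linspace([0.0, 0.0, 2.0], [1.0, 0.0, 0.0], 200)  # descend 2 m
thetas = np.linspace(0.0, np.pi, 200)                   # half a turn along the way
print(f"W = {work_done(points, thetas, force, torque, n_hat):.3f} J")
```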
Kinetic energy.
The change in kinetic energy for an object initially traveling at speed formula_6 and later at speed formula_7 is:
formula_8
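The snippet below evaluates this change, assuming the standard result that it equals (1/2) m (v2^2 − v1^2) for a point mass; the values are illustrative.

```python
# Change in kinetic energy of a point mass whose speed changes from v1 to v2,
# assuming the standard result Delta_Ek = 0.5 * m * (v2**2 - v1**2).

def kinetic_energy_change(m, v1, v2):
    """Return the change in kinetic energy in joules (SI units assumed)."""
    return 0.5 * m * (v2 ** 2 - v1 ** 2)

print(kinetic_energy_change(m=2.0, v1=3.0, v2=5.0))  # 16.0 J
```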
Elastic potential energy.
For a stretched spring fixed at one end obeying Hooke's law, the elastic potential energy is
formula_9
where "r"2 and "r"1 are collinear coordinates of the free end of the spring, in the direction of the extension/compression, and k is the spring constant.
Euler's equations for rigid body dynamics.
Euler also worked out laws of motion analogous to those of Newton; see Euler's laws of motion. These extend the scope of Newton's laws to rigid bodies, but are essentially the same as above. A new equation Euler formulated is:
formula_10
where I is the moment of inertia tensor.
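The equation itself is only referenced above; the sketch below assumes the usual form of Euler's rigid-body equation, τ = Iα + ω × (Iω), evaluated in the body frame with α the angular acceleration and with illustrative values.

```python
import numpy as np

# Torque on a rigid body in the body frame, assuming the usual form of
# Euler's equation: tau = I @ alpha + omega x (I @ omega), where I is the
# inertia tensor, omega the angular velocity and alpha the angular acceleration.

def required_torque(inertia, omega, alpha):
    return inertia @ alpha + np.cross(omega, inertia @ omega)

inertia = np.diag([0.10, 0.20, 0.30])   # principal moments of inertia, kg m^2
omega = np.array([1.0, 2.0, 3.0])       # angular velocity, rad/s
alpha = np.array([0.0, 0.5, 0.0])       # angular acceleration, rad/s^2
print(required_torque(inertia, omega, alpha))   # [0.6, -0.5, 0.2] N m
```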
General planar motion.
The previous equations for planar motion can be used here: corollaries of momentum, angular momentum etc. can immediately follow by applying the above definitions. For any object moving in any path in a plane,
formula_11
the following general results apply to the particle.
Central force motion.
For a massive body moving in a central potential due to another object, which depends only on the radial separation between the centers of masses of the two objects, the equation of motion is:
formula_12
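Because the equation of motion itself is only referenced above, the sketch below integrates planar motion under an attractive inverse-square central force, a common special case; the force law, step size and initial conditions are assumptions chosen for illustration.

```python
import numpy as np

# Planar motion under an attractive inverse-square central force
# F(r) = -k r_hat / r^2, integrated with a simple semi-implicit Euler step.

def integrate_orbit(r0, v0, k=1.0, m=1.0, dt=1e-3, steps=20000):
    r, v = np.array(r0, dtype=float), np.array(v0, dtype=float)
    for _ in range(steps):
        a = -k * r / (m * np.linalg.norm(r) ** 3)   # acceleration toward the origin
        v += a * dt                                 # update velocity first
        r += v * dt                                 # then position
    return r, v

# With these values |v0| equals the circular-orbit speed sqrt(k / (m * |r0|)),
# so the trajectory stays close to a circle of radius 1.
r_final, v_final = integrate_orbit(r0=[1.0, 0.0], v0=[0.0, 1.0])
print(r_final, v_final)
```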
Equations of motion (constant acceleration).
These equations can be used only when acceleration is constant. If acceleration is not constant then the general calculus equations above must be used, found by integrating the definitions of position, velocity and acceleration (see above).
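A minimal sketch of these constant-acceleration relations, using the common symbols u (initial velocity), v (final velocity), a (acceleration), t (time) and s (displacement); the numbers are illustrative.

```python
# Kinematics under constant acceleration. The standard relations are
#   v = u + a*t,   s = u*t + 0.5*a*t**2,   v**2 = u**2 + 2*a*s.

def constant_acceleration_state(u, a, t):
    """Return final velocity and displacement after time t."""
    v = u + a * t
    s = u * t + 0.5 * a * t ** 2
    return v, s

u, a, t = 2.0, 9.81, 3.0
v, s = constant_acceleration_state(u, a, t)
print(f"v = {v:.2f} m/s, s = {s:.2f} m")
print(f"consistency check: v^2 - (u^2 + 2*a*s) = {v**2 - (u**2 + 2*a*s):.1e}")
```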
Galilean frame transforms.
For classical (Galileo-Newtonian) mechanics, the transformation law from one inertial or accelerating (including rotation) frame (reference frame traveling at constant velocity - including zero) to another is the Galilean transform.
Unprimed quantities refer to position, velocity and acceleration in one frame F; primed quantities refer to position, velocity and acceleration in another frame F' moving at translational velocity V or angular velocity Ω relative to F. Conversely F moves at velocity (−V or −Ω) relative to F'. The situation is similar for relative accelerations.
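For the purely translational case (no rotation), the transformation reduces to r' = r − Vt, v' = v − V and a' = a. A minimal sketch, with illustrative values:

```python
import numpy as np

# Galilean transformation between frame F and frame F' moving at constant
# translational velocity V relative to F (no rotation):
#   r' = r - V*t,   v' = v - V,   a' = a.

def galilean_transform(r, v, a, V, t):
    return r - V * t, v - V, a

r = np.array([10.0, 0.0, 0.0])     # position in F, metres
v = np.array([3.0, 1.0, 0.0])      # velocity in F, m/s
a = np.array([0.0, -9.81, 0.0])    # acceleration in F, m/s^2
V = np.array([2.0, 0.0, 0.0])      # velocity of F' relative to F, m/s
print(galilean_transform(r, v, a, V, t=4.0))
```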
Mechanical oscillators.
SHM, DHM, SHO, and DHO refer to simple harmonic motion, damped harmonic motion, simple harmonic oscillator and damped harmonic oscillator respectively.
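As a concrete illustration of damped harmonic motion, the sketch below integrates m x'' + c x' + k x = 0 with a semi-implicit Euler step; the parameter values and step size are assumptions chosen only for the example.

```python
# Damped harmonic oscillator m*x'' + c*x' + k*x = 0, integrated with a
# semi-implicit Euler step as a simple illustration of DHM.

def simulate_dho(m=1.0, c=0.2, k=4.0, x0=1.0, v0=0.0, dt=1e-3, steps=5000):
    x, v = x0, v0
    for _ in range(steps):
        a = -(c * v + k * x) / m   # acceleration from the equation of motion
        v += a * dt                # update velocity first (semi-implicit)
        x += v * dt                # then position
    return x, v

print(simulate_dho())   # position and velocity after 5 simulated seconds
```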
Cursus honorum
The "cursus honorum" (more colloquially, the 'ladder of offices') was the sequential order of public offices held by aspiring politicians in the Roman Republic and the early Roman Empire. It was designed for men of senatorial rank. The "cursus honorum" comprised a mixture of military and political administration posts; the ultimate prize for winning election to each "rung" in the sequence was to become one of the two consuls in a given year.
These rules were altered and flagrantly ignored in the course of the last century of the Republic. For example, Gaius Marius held consulships for five years in a row between 104 and 100 BC. He was consul seven times in all, also serving in 107 and 86. Officially presented as opportunities for public service, the offices often became mere opportunities for self-aggrandizement. The constitutional reforms of Sulla between 82 and 79 BC required a ten-year interval before holding the same office again for another term.
To have held each office at the youngest possible age ("suo anno", 'in his year') was considered a great political success. For instance, to miss out on a praetorship at 39 meant that one could not become consul at 42. Cicero expressed extreme pride not only in being a "novus homo" ('new man'; comparable to a "self-made man") who became consul even though none of his ancestors had ever served as a consul, but also in having become consul "in his year".
Military service.
Prior to entering political life and the "cursus honorum", a young man of senatorial rank was expected to serve around ten years of military duty. The years of service were intended to be mandatory in order to qualify for political office.
Advancement and honors would improve his political prospects, and a successful military career might culminate in the office of military tribune, to which 24 men were elected by the Tribal Assembly each year. The rank of military tribune is sometimes described as the first office of the "cursus honorum".
Quaestor.
The first official post was that of quaestor. Ever since the reforms of Sulla, candidates had to be at least 30 years old to hold the office. From the time of Augustus onwards, twenty quaestors served in the financial administration at Rome or as second-in-command to a governor in the provinces. They could also serve as the paymaster for a legion.
Aedile.
At 36 years of age, a promagistrate could stand for election to one of the aedile (from "aedes", "temple edifice") positions. Of these aediles, two were plebeian and two were patrician, with the patrician aediles called curule aediles. The plebeian aediles were elected by the Plebeian Council and the curule aediles were either elected by the Tribal Assembly or appointed by the reigning consul. The aediles had administrative responsibilities in Rome. They had to take care of the temples (whence their title, from the Latin "aedes", "temple"), organize games, and be responsible for the maintenance of the public buildings in Rome. Moreover, they took charge of Rome's water and food supplies; in their capacity as market superintendents, they served sometimes as judges in mercantile affairs.
The aedile was the supervisor of public works; the words "edifice" and "edification" stem from the same root. He oversaw the public works, temples and markets. Therefore, the aediles would have been in some cooperation with the current censors, who had similar or related duties. Also, they oversaw the organization of festivals and games ("ludi"), which made this a very sought-after office for a career minded politician of the late Republic, as it was a good means of gaining popularity by staging spectacles.
Curule aediles were added at a later date in the 4th century BC; their duties do not differ substantially from plebeian aediles. However, unlike plebeian aediles, curule aediles were allowed certain symbols of rank—the "sella curulis" or curule chair, for example—and only patricians could stand for election to curule aedile. This later changed, and both plebeians and patricians could stand for curule aedileship.
The elections for curule aedile were at first alternated between patricians and plebeians, until late in the 2nd century BC, when the practice was abandoned and both classes became free to run during all years.
While part of the "cursus honorum", this step was optional and not required to hold future offices. Though the office was usually held after the quaestorship and before the praetorship, there are some cases with former praetors serving as aediles.
Praetor.
After serving either as quaestor or as aedile, a man of 39 years could run for praetor. During the reign of Augustus this requirement was lowered to 30, at the request of Gaius Maecenas. The number of praetors elected varied through history, generally increasing with time. During the republic, six or eight were generally elected each year to serve judicial functions throughout Rome and other governmental responsibilities. In the absence of the consuls, a praetor would be given command of the garrison in Rome or in Italy. Also, a praetor could exercise the functions of the consuls throughout Rome, but their main function was that of a judge. They would preside over trials involving criminal acts, grant court orders and validate "illegal" acts as acts of administering justice. A praetor was escorted by six lictors, and wielded "imperium". After a term as praetor, the magistrate could serve as a provincial governor with the title of propraetor, wielding "propraetor imperium", commanding the province's legions, and possessing ultimate authority within his province(s).
Two of the praetors were more prestigious than the others. The first was the Praetor Peregrinus, the chief judge in trials involving one or more foreigners. The other was the Praetor Urbanus, the chief judicial officer in Rome, who had the power to overturn any verdict of any other court and served as judge in cases involving criminal charges against provincial governors. The Praetor Urbanus was not allowed to leave the city for more than ten days. If one of these two praetors was absent from Rome, the other would perform the duties of both.
Consul.
The office of consul was the most prestigious of all of the offices on the "cursus honorum", and represented the summit of a successful career. The minimum age was 42. Years were identified by the names of the two consuls elected for a particular year; for instance, "M. Messalla et M. Pisone consulibus", "in the consulship of Messalla and Piso", dates an event to 61 BC. Consuls were responsible for the city's political agenda, commanded large-scale armies and controlled important provinces. The consuls served for only a year (a restriction intended to limit the amassing of power by individuals) and could only rule when they agreed, because each consul could veto the other's decision.
The consuls would alternate monthly as the chairman of the Senate. They also were the supreme commanders in the Roman army, with each being granted two legions during their consular year. Consuls also exercised the highest juridical power in the Republic, being the only office with the power to override the decisions of the Praetor Urbanus. Only laws and the decrees of the Senate or the People's assembly limited their powers, and only the veto of a fellow consul or a tribune of the plebs could supersede their decisions.
A consul was escorted by twelve lictors, held "imperium" and wore the toga "praetexta". Because the consul was the highest executive office within the Republic, they had the power to veto any action or proposal by any other magistrate, save that of the Tribune of the Plebs. After a consulship, a consul was assigned one of the more important provinces and acted as the governor in the same way that a propraetor did, only owning proconsular "imperium". A second consulship could only be attempted after an interval of 10 years to prevent one man holding too much power.
Governor.
Although not part of the "cursus honorum", upon completing a term as either praetor or consul, an officer was required to serve a term as propraetor and proconsul, respectively, in one of Rome's many provinces. These propraetors and proconsuls held near autocratic authority within their selected province or provinces. Because each governor held equal "imperium" to the equivalent magistrate, they were escorted by the same number of lictors (12) and could only be vetoed by a reigning consul or praetor. Their abilities to govern were only limited by the decrees of the Senate or the people's assemblies, and the Tribune of the Plebs was unable to veto their acts as long as the governor remained at least a mile outside of Rome.
Censor.
After a term as consul, the final step in the "cursus honorum" was the office of "censor". This was the only office in the Roman Republic whose term was a period of eighteen months instead of the usual twelve. Censors were elected every five years and although the office held no military "imperium", it was considered a great honour. The censors took a regular census of the people and then apportioned the citizens into voting classes on the basis of income and tribal affiliation. The censors enrolled new citizens in tribes and voting classes as well. The censors were also in charge of the membership roll of the Senate, every five years adding new senators who had been elected to the requisite offices. Censors could also remove unworthy members from the Senate. This ability was lost during the dictatorship of Sulla. Censors were also responsible for construction of public buildings and the moral status of the city.
Censors also had financial duties, in that they had to put out to tender projects that were to be financed by the state. Also, the censors were in charge of the leasing out of conquered land for public use and auction. Though this office owned no "imperium", meaning no lictors for protection, they were allowed to wear the toga "praetexta".
Tribune of the Plebs.
The office of Tribune of the Plebs was an important step in the political career of plebeians. Patricians could not hold the office, and the tribunate was not an official step in the "cursus honorum". The tribunate was first created to protect the rights of the common man in Roman politics, and the tribune served as the head of the Plebeian Council. In the mid-to-late Republic, however, plebeians were often just as wealthy and powerful as patricians, and sometimes more so. Those who held the office were granted sacrosanctity (the right to be legally protected from any physical harm), the power to rescue any plebeian from the hands of a patrician magistrate, and the right to veto any act or proposal of any magistrate, including another tribune of the people and the consuls. The tribune also had the power to exercise capital punishment against any person who interfered in the performance of his duties. The tribunes could even convene a Senate meeting, lay legislation before it, and arrest magistrates. Their houses had to remain open for visitors even during the night, and they were not allowed to be more than a day's journey from Rome. Because of their unique power of sacrosanctity, tribunes had no need of lictors for protection and held no "imperium", nor could they wear the toga "praetexta". For a period after Sulla's reforms, a person who had held the office of Tribune of the Plebs could no longer qualify for any other office, and the powers of the tribunes were more limited, but these restrictions were subsequently lifted.
"Princeps senatus".
Another office not officially a step in the "cursus honorum" was the "princeps senatus", an extremely prestigious office for a patrician. The "princeps senatus" served as the leader of the Senate and was chosen to serve a five-year term by each pair of Censors every five years. Censors could, however, confirm a "princeps senatus" for a period of another five years. The "princeps senatus" was chosen from all Patricians who had served as a Consul, with former Censors usually holding the office. The office originally granted the holder the ability to speak first at session on the topic presented by the presiding magistrate, but eventually gained the power to open and close the senate sessions, decide the agenda, decide where the session should take place, impose order and other rules of the session, meet in the name of the senate with embassies of foreign countries, and write in the name of the senate letters and dispatches. This office, like the Tribune, did not own "imperium", was not escorted by lictors, and could not wear the "toga praetexta".
Dictator and "magister equitum".
Of all the offices within the Roman Republic, none granted as much power and authority as the position of dictator, known as the Master of the People. In times of emergency, the Senate would declare that a dictator was required, and the current consuls would appoint a dictator. This was the only decision that could not be vetoed by the Tribune of the Plebs. The dictator was the sole exception to the Roman legal principles of having multiple magistrates in the same office and being legally able to be held to answer for actions in office. Essentially by definition, only one dictator could serve at a time, and no dictator could ever be held legally responsible for any action during his time in office for any reason.
The dictator was the highest magistrate in degree of "imperium" and was attended by twenty-four lictors (as were the former Kings of Rome). Although his term lasted only six months instead of twelve (except for the Dictatorships of Sulla and Caesar), all other magistrates reported to the dictator (except for the tribunes of the plebs – although they could not veto any of the dictator's acts), granting the dictator absolute authority in both civil and military matters throughout the Republic. The dictator was free from the control of the Senate in all that he did, could execute anyone without a trial for any reason, and could ignore any law in the performance of his duties. The dictator was the sole magistrate under the Republic that was truly independent in discharging his duties. All of the other offices were extensions of the Senate's executive authority and thus answerable to the Senate. Since the dictator exercised his own authority, he did not suffer this limitation, which was the cornerstone of the office's power.
When a dictator entered office, he appointed to serve as his second-in-command a "magister equitum", the Master of the Horse, whose office ceased to exist once the dictator left office. The "magister equitum" held "praetorian imperium", was attended by six lictors, and was charged with assisting the dictator in managing the State. When the dictator was away from Rome, the "magister equitum" usually remained behind to administer the city. The "magister equitum", like the dictator, had unchallengeable authority in all civil and military affairs, with his decisions only being overturned by the dictator himself.
The dictatorship was definitively abolished in 44 BC after the assassination of Gaius Julius Caesar ("Lex Antonia").
|
6056
|
10340692
|
https://en.wikipedia.org/wiki?curid=6056
|
Continental drift
|
Continental drift is a highly supported scientific theory, originating in the early 20th century, that Earth's continents move or drift relative to each other over geologic time. The theory of continental drift has since been validated and incorporated into the science of plate tectonics, which studies the movement of the continents as they ride on plates of the Earth's lithosphere.
The speculation that continents might have "drifted" was first put forward by Abraham Ortelius in 1596. A pioneer of the modern view of mobilism was the Austrian geologist Otto Ampferer. The concept was independently and more fully developed by Alfred Wegener in his 1915 publication, "The Origin of Continents and Oceans". However, at that time his hypothesis was rejected by many for lack of any motive mechanism. In 1931, the English geologist Arthur Holmes proposed mantle convection for that mechanism.
History.
Early history.
Abraham Ortelius (1596), Theodor Christoph Lilienthal (1756), Alexander von Humboldt (1801 and 1845), Antonio Snider-Pellegrini (1858), and others had noted earlier that the shapes of continents on opposite sides of the Atlantic Ocean (most notably, Africa and South America) seem to fit together. W. J. Kious described Ortelius's thoughts in this way:
In 1889, Alfred Russel Wallace remarked, "It was formerly a very general belief, even amongst geologists, that the great features of the earth's surface, no less than the smaller ones, were subject to continual mutations, and that during the course of known geological time the continents and great oceans had, again and again, changed places with each other." He quotes Charles Lyell as saying, "Continents, therefore, although permanent for whole geological epochs, shift their positions entirely in the course of ages." and claims that the first to throw doubt on this was James Dwight Dana in 1849.
In his "Manual of Geology" (1863), Dana wrote, "The continents and oceans had their general outline or form defined in earliest time. This has been proved with regard to North America from the position and distribution of the first beds of the Lower Silurian, – those of the Potsdam epoch. The facts indicate that the continent of North America had its surface near tide-level, part above and part below it (p.196); and this will probably be proved to be the condition in Primordial time of the other continents also. And, if the outlines of the continents were marked out, it follows that the outlines of the oceans were no less so". Dana was enormously influential in America—his "Manual of Mineralogy" is still in print in revised form—and the theory became known as the "Permanence theory".
This appeared to be confirmed by the exploration of the deep sea beds conducted by the "Challenger" expedition, 1872–1876, which showed that contrary to expectation, land debris brought down by rivers to the ocean is deposited comparatively close to the shore on what is now known as the continental shelf. This suggested that the oceans were a permanent feature of the Earth's surface, rather than them having "changed places" with the continents.
Eduard Suess had proposed a supercontinent Gondwana in 1885 and the Tethys Ocean in 1893, assuming a land-bridge between the present continents submerged in the form of a geosyncline, and John Perry had written an 1895 paper proposing that the Earth's interior was fluid, and disagreeing with Lord Kelvin on the age of the Earth.
Wegener and his predecessors.
Apart from the earlier speculations mentioned above, the idea that the American continents had once formed a single landmass with Eurasia and Africa was postulated by several scientists before Alfred Wegener's 1912 paper. Although Wegener's theory was formed independently and was more complete than those of his predecessors, Wegener later credited a number of past authors with similar ideas: Franklin Coxworthy (between 1848 and 1890), Roberto Mantovani (between 1889 and 1909), William Henry Pickering (1907) and Frank Bursley Taylor (1908).
The similarity of southern continent geological formations had led Roberto Mantovani to conjecture in 1889 and 1909 that all the continents had once been joined into a supercontinent; Wegener noted the similarity of Mantovani's and his own maps of the former positions of the southern continents. In Mantovani's conjecture, this continent broke due to volcanic activity caused by thermal expansion, and the new continents drifted away from each other because of further expansion of the rip-zones, where the oceans now lie. This led Mantovani to propose a now-discredited Expanding Earth theory.
Continental drift without expansion was proposed by Frank Bursley Taylor, who suggested in 1908 (published in 1910) that the continents were moved into their present positions by a process of "continental creep", later proposing a mechanism of increased tidal forces during the Cretaceous dragging the crust towards the equator. He was the first to realize that one of the effects of continental motion would be the formation of mountains, attributing the formation of the Himalayas to the collision of the Indian subcontinent with Asia. Wegener said that of all those theories, Taylor's had the most similarities to his own. For a time in the mid-20th century, the theory of continental drift was referred to as the "Taylor-Wegener hypothesis".
Alfred Wegener first presented his hypothesis to the German Geological Society on 6 January 1912. He proposed that the continents had once formed a single landmass, called Pangaea, before breaking apart and drifting to their present locations.
Wegener was the first to use the phrase "continental drift" (1912, 1915) and to publish the hypothesis that the continents had somehow "drifted" apart. Although he presented much evidence for continental drift, he was unable to provide a convincing explanation for the physical processes which might have caused this drift. He suggested that the continents had been pulled apart by the centrifugal pseudoforce of the Earth's rotation or by a small component of astronomical precession, but calculations showed that the force was not sufficient. The hypothesis was also studied by Paul Sophus Epstein in 1920 and found to be implausible.
Rejection of Wegener's theory, 1910s–1950s.
Although the theory is now accepted, and it retained a minority of scientific proponents throughout the decades, continental drift was largely rejected for many years, with the evidence in its favor considered insufficient. One problem was that a plausible driving force was missing. A second problem was that Wegener's estimate of the speed of continental motion, about 250 cm per year, was implausibly high. (The currently accepted rate for the separation of the Americas from Europe and Africa is about 2.5 cm per year.) Furthermore, Wegener was taken less seriously because he was not a geologist. Even today, the details of the forces propelling the plates are poorly understood.
The English geologist Arthur Holmes championed the theory of continental drift at a time when it was deeply unfashionable. He proposed in 1931 that the Earth's mantle contained convection cells which dissipated heat produced by radioactive decay and moved the crust at the surface. His "Principles of Physical Geology", ending with a chapter on continental drift, was published in 1944.
Geological maps of the time showed huge land bridges spanning the Atlantic and Indian oceans to account for the similarities of fauna and flora and the divisions of the Asian continent in the Permian period, but these failed to account for glaciation in India, Australia and South Africa.
The fixists.
Hans Stille and Leopold Kober opposed the idea of continental drift and worked on a "fixist" geosyncline model with Earth contraction playing a key role in the formation of orogens. Other geologists who opposed continental drift were Bailey Willis, Charles Schuchert, Rollin Chamberlin, Walther Bucher and Walther Penck. In 1939 an international geological conference was held in Frankfurt. This conference came to be dominated by the fixists, especially as those geologists specializing in tectonics were all fixists except Willem van der Gracht. Criticism of continental drift and mobilism was abundant at the conference not only from tectonicists but also from sedimentological (Nölke), paleontological (Nölke), mechanical (Lehmann) and oceanographic (Troll, Wüst) perspectives. Hans Cloos, the organizer of the conference, was also a fixist who together with Troll held the view that, excepting the Pacific Ocean, continents were not radically different from oceans in their behaviour. The mobilist theory of Émile Argand for the Alpine orogeny was criticized by Kurt Leuchs. The few drifters and mobilists at the conference appealed to biogeography (Kirsch, Wittmann), paleoclimatology (Wegener, K), paleontology (Gerth) and geodetic measurements (Wegener, K). F. Bernauer correctly equated Reykjanes in south-west Iceland with the Mid-Atlantic Ridge, arguing with this that the floor of the Atlantic Ocean was undergoing extension just like Reykjanes. Bernauer thought this extension had drifted the continents apart by only the approximate width of the volcanic zone in Iceland.
David Attenborough, who attended university in the second half of the 1940s, recounted an incident illustrating its lack of acceptance then: "I once asked one of my lecturers why he was not talking to us about continental drift and I was told, sneeringly, that if I could prove there was a force that could move continents, then he might think about it. The idea was moonshine, I was informed."
As late as 1953—just five years before Carey introduced the theory of plate tectonics—the theory of continental drift was rejected by the physicist Scheidegger on the following grounds.
Road to acceptance.
From the 1930s to the late 1950s, works by Vening-Meinesz, Holmes, Umbgrove, and numerous others outlined concepts that were close or nearly identical to modern plate tectonics theory. In particular, the English geologist Arthur Holmes proposed in 1920 that plate junctions might lie beneath the sea, and in 1928 that convection currents within the mantle might be the driving force. Holmes's views were particularly influential: in his bestselling textbook, "Principles of Physical Geology," he included a chapter on continental drift, proposing that Earth's mantle contained convection cells which dissipated radioactive heat and moved the crust at the surface. Holmes's proposal resolved the phase disequilibrium objection (the underlying fluid was kept from solidifying by radioactive heating from the core). However, scientific communication in the 1930s and 1940s was inhibited by World War II, and the theory still required work to avoid foundering on the orogeny and isostasy objections. Worse, the most viable forms of the theory predicted the existence of convection cell boundaries reaching deep into the Earth, that had yet to be observed.
In 1947, a team of scientists led by Maurice Ewing confirmed the existence of a rise in the central Atlantic Ocean, and found that the floor of the seabed beneath the sediments was chemically and physically different from continental crust. As oceanographers continued to survey the bathymetry of the ocean basins, a system of mid-oceanic ridges was detected. An important conclusion was that along this system, new ocean floor was being created, which led to the concept of the "Great Global Rift".
Meanwhile, scientists began recognizing odd magnetic variations across the ocean floor using devices developed during World War II to detect submarines. Over the next decade, it became increasingly clear that the magnetization patterns were not anomalies, as had been originally supposed. In a series of papers published between 1959 and 1963, Heezen, Dietz, Hess, Mason, Vine, Matthews, and Morley collectively realized that the magnetization of the ocean floor formed extensive, zebra-like patterns: one stripe would exhibit normal polarity and the adjoining stripes reversed polarity. The best explanation was the "conveyor belt" or Vine–Matthews–Morley hypothesis. New magma from deep within the Earth rises easily through these weak zones and eventually erupts along the crest of the ridges to create new oceanic crust. The new crust is magnetized by the Earth's magnetic field, which undergoes occasional reversals. Formation of new crust then displaces the magnetized crust apart, akin to a conveyor belt – hence the name.
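The geometry behind this "conveyor belt" picture can be illustrated with simple arithmetic: if crust moves away from the ridge at a roughly constant half-rate, each polarity interval is recorded on each flank as a stripe whose width is the half-rate multiplied by the interval's duration. The sketch below is only illustrative; the half-rate and the reversal-interval durations are hypothetical round numbers chosen for the example, not values from the papers cited above.

```python
# Illustrative sketch: mapping magnetic-reversal intervals to stripe widths on
# one flank of a spreading ridge. All numeric values are hypothetical examples.
half_spreading_rate_cm_per_yr = 2.0          # assumed constant half-rate
polarity_intervals_yr = [780_000, 250_000,   # hypothetical durations of successive
                         1_000_000, 300_000] # polarity intervals, youngest first

CM_PER_KM = 100_000

for i, duration_yr in enumerate(polarity_intervals_yr):
    width_km = half_spreading_rate_cm_per_yr * duration_yr / CM_PER_KM
    polarity = "normal" if i % 2 == 0 else "reversed"
    print(f"Stripe {i}: {polarity:8s} polarity, ~{width_km:.1f} km wide")
```

Under these assumptions the stripes come out a few kilometres to a few tens of kilometres wide, which is why alternating bands of normal and reversed magnetization are resolvable in shipborne magnetic surveys.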
Without workable alternatives to explain the stripes, geophysicists were forced to conclude that Holmes had been right: ocean rifts were sites of perpetual orogeny at the boundaries of convection cells. By 1967, barely two decades after discovery of the mid-oceanic rifts, and a decade after discovery of the striping, plate tectonics had become axiomatic to modern geophysics.
In addition, Marie Tharp, working in collaboration with Bruce Heezen (who was initially sceptical of her observation that her maps supported continental drift theory), provided essential corroboration for the theory through her skills in cartography and her use of seismographic data.
Modern evidence.
Geophysicist Jack Oliver is credited with providing seismologic evidence supporting plate tectonics which encompassed and superseded continental drift with the article "Seismology and the New Global Tectonics", published in 1968, using data collected from seismologic stations, including those he set up in the South Pacific. The modern theory of plate tectonics, refining Wegener, explains that there are two kinds of crust of different composition: continental crust and oceanic crust, both floating above a much deeper "plastic" mantle. Continental crust is inherently lighter. Oceanic crust is created at spreading centers, and this, along with subduction, drives the system of plates in a chaotic manner, resulting in continuous orogeny and areas of isostatic imbalance.
Evidence for the movement of continents on tectonic plates is now extensive. Similar plant and animal fossils are found around the shores of different continents, suggesting that they were once joined. The fossils of "Mesosaurus", a freshwater reptile rather like a small crocodile, found both in Brazil and South Africa, are one example; another is the discovery of fossils of the land reptile "Lystrosaurus" in rocks of the same age at locations in Africa, India, and Antarctica. There is also living evidence, with the same animals being found on two continents. Some earthworm families (such as Ocnerodrilidae, Acanthodrilidae, Octochaetidae) are found in South America and Africa.
The complementary arrangement of the facing sides of South America and Africa is an obvious and temporary coincidence. In millions of years, slab pull, ridge-push, and other forces of tectonophysics will further separate and rotate those two continents. It was that temporary feature that inspired Wegener to study what he defined as continental drift although he did not live to see his hypothesis generally accepted.
The widespread distribution of Permo-Carboniferous glacial sediments in South America, Africa, Madagascar, Arabia, India, Antarctica and Australia was one of the major pieces of evidence for the theory of continental drift. The continuity of glaciers, inferred from oriented glacial striations and deposits called tillites, suggested the existence of the supercontinent of Gondwana, which became a central element of the concept of continental drift. Striations indicated glacial flow away from the equator and toward the poles, based on continents' current positions and orientations, and supported the idea that the southern continents had previously been in dramatically different locations that were contiguous with one another.
GPS evidence.
Continental drift can be measured directly with GPS. For example, relative to other locations whose positions were also measured by GPS, a GPS device located on Maui, Hawaii moved about 48 cm latitudinally and about 84 cm longitudinally over a period of 14 years.
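A rough back-of-the-envelope check of these figures takes only a few lines. The sketch below is an illustrative calculation, not part of any cited study: it treats the quoted latitudinal and longitudinal displacements as orthogonal offsets on a locally flat surface (a reasonable approximation at sub-metre scales) and converts them into an approximate total displacement and average yearly rate.

```python
import math

# Displacements quoted above for a GPS device on Maui, Hawaii.
north_south_cm = 48.0   # latitudinal displacement over the measurement period
east_west_cm = 84.0     # longitudinal displacement over the measurement period
years = 14.0

total_cm = math.hypot(north_south_cm, east_west_cm)  # straight-line displacement
rate_cm_per_year = total_cm / years

print(f"Total displacement: {total_cm:.0f} cm")            # ~97 cm
print(f"Average drift rate: {rate_cm_per_year:.1f} cm/yr")  # ~6.9 cm/yr
```

The resulting rate of a few centimetres per year is the same order of magnitude as typical plate motions measured elsewhere.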
|
6057
|
1296668906
|
https://en.wikipedia.org/wiki?curid=6057
|
Commodores
|
Commodores, often billed as The Commodores, are an American funk and soul group. The group's most successful period was in the late 1970s and early 1980s when Lionel Richie was the co-lead singer.
The members of the group met as mostly freshmen at Tuskegee Institute (now Tuskegee University) in 1968, and signed with Motown in November 1972, having first caught the public eye opening for the Jackson 5 while on tour.
The band's biggest hit singles are ballads such as "Easy", "Three Times a Lady", and "Nightshift"; and funk-influenced dance songs, including "Brick House", "Fancy Dancer", "Lady (You Bring Me Up)", and "Too Hot ta Trot".
Commodores were inducted into the Alabama Music Hall of Fame and Vocal Group Hall of Fame. The band has also won one Grammy Award out of nine nominations. The Commodores have sold over 70 million albums worldwide.
History.
Commodores were formed from two former student groups: the Mystics and the Jays. Richie described some members of the Mystics as "jazz buffs". The new six-man band featured Lionel Richie, Thomas McClary, and William King from the Mystics, and Andre Callahan, Michael Gilbert, and Milan Williams from the Jays. They chose their present name when King flipped open a dictionary and ran his finger down the page. "We lucked out," he remarked with a laugh when telling this story to "People" magazine. "We almost became 'The Commodes.'"
The bandmembers attended Tuskegee Institute in Alabama. After winning the college's annual freshman talent contest, they played at fraternity parties as well as a weekend gig at the Black Forest Inn, one of a few clubs in Tuskegee that catered to college students. They performed cover tunes and some original songs with their first singer, James Ingram (not the famous solo artist). Ingram, older than the rest of the band, left to serve in Vietnam, and was later replaced by drummer Walter "Clyde" Orange, who wrote or co-wrote many of their hits. Lionel Richie and Orange alternated as lead singers. Orange was the lead singer on the Top 10 hits "Brick House" (1977) and "Nightshift" (1985).
The early band was managed by Benny Ashburn, who brought them to his family's vacation lodge on Martha's Vineyard in 1971 and 1972. There, Ashburn test-marketed the group by having them play in parking lots and summer festivals.
"Machine Gun" (1974), the instrumental title track from the band's debut album, became a staple at American sporting events, and is also heard in many films, including "Boogie Nights" and "Looking for Mr. Goodbar". It reached No. 22 on the "Billboard" Hot 100 in 1974. Another 1974 song "I Feel Sanctified" has been called a "prototype" of Wild Cherry's 1976 big hit "Play That Funky Music". Of the three albums released in 1975 and 1976, "Caught in the Act" was funk album, but "Movin' On" and "Hot on the Tracks" were pop albums. After those recordings the group developed the mellower sound hinted at in their 1976 top-ten hits, "Sweet Love" and "Just to Be Close to You". In 1977, the Commodores released "Easy", which became the group's biggest hit yet, reaching No. 4 in the US, followed by funky single "Brick House", also top 5, both from their album "Commodores", as was "Zoom". The group reached No. 1 in 1978 with "Three Times a Lady". In 1979, the Commodores scored another top-five ballad, "Sail On", before reaching the top of the charts once again with another ballad, "Still". In 1981 they released two top-ten hits with "Oh No" (No. 4) and their first upbeat single in almost five years, "Lady (You Bring Me Up)" (No. 8).
Commodores made a brief appearance in the 1978 film "Thank God It's Friday". They performed the song "Too Hot ta Trot" during the dance contest; the songs "Brick House" and "Easy" were also played in the movie.
In 1982, the group decided to take a hiatus from touring and recording, during which time Lionel Richie recorded a solo album at the suggestion of Motown and the other group members. Its success encouraged Richie to pursue a solo career, and Skyler Jett replaced him as co-lead singer. Also in 1982, Ashburn died of a heart attack at the age of 54.
Founding member McClary left in 1984 (shortly after Richie) to pursue a solo career, and to develop a gospel music company. McClary was replaced by guitarist-vocalist Sheldon Reynolds. Then LaPread left in 1986 and moved to Auckland, New Zealand. Reynolds departed for Earth, Wind & Fire in 1987, which prompted trumpeter William "WAK" King to take over primary guitar duties for live performances. Keyboardist Milan Williams exited the band in 1989 after allegedly refusing to tour South Africa.
The group gradually abandoned its funk roots and moved into the more commercial pop arena. In 1984, former Heatwave singer James Dean "J.D." Nicholas assumed co-lead vocal duties with drummer Walter Orange. That line-up was hitless until 1985 when their final Motown album "Nightshift", produced by Dennis Lambert (prior albums were produced by James Anthony Carmichael, who would continue to work with Richie on his albums), delivered the title track "Nightshift", a loving tribute to Marvin Gaye and Jackie Wilson, both of whom had died the previous year. "Nightshift" hit no. 3 in the US and won the Commodores their first Grammy for Best R&B Performance by a Duo or Group With Vocals in 1985.
In 2010 a new version was recorded, dedicated to Michael Jackson. The Commodores were on a European tour performing at Wembley Arena, London, on June 25, 2009, when they walked off the stage after they were told that Michael Jackson had died. Initially the band thought it was a hoax. However, back in their dressing rooms they received confirmation and broke down in tears. The next night at Birmingham's NIA Arena, J.D. Nicholas added Jackson's name to the lyrics of the song, and henceforth the Commodores have mentioned Jackson and other deceased R&B singers. Thus came the inspiration upon the first anniversary of Jackson's death to re-record, with new lyrics, the hit song "Nightshift" as a tribute.
In 1990, they formed Commodores Records and re-recorded their 20 greatest hits as "Commodores Hits Vol. I & II". They have recorded a live album, "Commodores Live", along with a DVD of the same name, and a Christmas album titled "Commodores Christmas". In 2012, the band was working on new material, with some contributions written by current and former members.
Commodores as of 2020 consist of Walter "Clyde" Orange, James Dean "J.D." Nicholas, and William "WAK" King, along with their five-piece band The Mean Machine. They continue to perform, playing at arenas, theaters, and festivals around the world.
Accolades.
Grammy awards.
The Commodores have won one Grammy Award out of ten nominations.
Alabama Music Hall of Fame.
During 1995 the Commodores were inducted into the Alabama Music Hall of Fame.
Vocal Group Hall of Fame.
During 2003 the Commodores were also inducted into the Vocal Group Hall of Fame.
|
6058
|
7903804
|
https://en.wikipedia.org/wiki?curid=6058
|
Collagen
|
Collagen () is the main structural protein in the extracellular matrix of the connective tissues of many animals. It is the most abundant protein in mammals, making up 25% to 35% of protein content. Amino acids are bound together to form a triple helix of elongated fibril known as a collagen helix. It is mostly found in cartilage, bones, tendons, ligaments, and skin. Vitamin C is vital for collagen synthesis.
Depending on the degree of mineralization, collagen tissues may be rigid (bone) or compliant (tendon) or have a gradient from rigid to compliant (cartilage). Collagen is also abundant in corneas, blood vessels, the gut, intervertebral discs, and the dentin in teeth. In muscle tissue, it serves as a major component of the endomysium. Collagen constitutes 1% to 2% of muscle tissue and 6% by weight of skeletal muscle. The fibroblast is the most common cell creating collagen in animals. Gelatin, which is used in food and industry, is collagen that was irreversibly hydrolyzed using heat, basic solutions, or weak acids.
Etymology.
The name "collagen" comes from the Greek κόλλα ("kólla"), meaning "glue", and suffix -γέν, "-gen", denoting "producing".
Types.
As of 2011, 28 types of human collagen have been identified, described, and classified according to their structure. This diversity shows collagen's diverse functionality. All of the types contain at least one triple helix. Over 90% of the collagen in humans is type I & III collagen.
The five most common types are:
In humans.
Cardiac.
The collagenous cardiac skeleton, which includes the four heart valve rings, is histologically, elastically and uniquely bound to cardiac muscle. The cardiac skeleton also includes the separating septa of the heart chambers – the interventricular septum and the atrioventricular septum. Collagen contribution to the measure of cardiac performance summarily represents a continuous torsional force opposed to the fluid mechanics of blood pressure emitted from the heart. The collagenous structure that divides the upper chambers of the heart from the lower chambers is an impermeable membrane that excludes both blood and electrical impulses through typical physiological means. With support from collagen, atrial fibrillation never deteriorates to ventricular fibrillation. Collagen is layered in variable densities with smooth muscle mass. The mass, distribution, age, and density of collagen all contribute to the compliance required to move blood back and forth. Individual cardiac valvular leaflets are folded into shape by specialized collagen under variable pressure. Gradual calcium deposition within collagen occurs as a natural function of aging. Calcified points within collagen matrices show contrast in a moving display of blood and muscle, enabling methods of cardiac imaging technology to arrive at ratios essentially stating blood in (cardiac input) and blood out (cardiac output). Pathology of the collagen underpinning of the heart is understood within the category of connective tissue disease.
Bone grafts.
As the skeleton forms the structure of the body, it is vital that it maintains its strength, even after breaks and injuries. Collagen is used in bone grafting because its triple-helix structure makes it a very strong molecule. It is ideal for use in bones, as it does not compromise the structural integrity of the skeleton. The triple helical structure prevents collagen from being broken down by enzymes, it enables adhesiveness of cells and it is important for the proper assembly of the extracellular matrix.
Tissue regeneration.
Collagen scaffolds are used in tissue regeneration, whether in sponges, thin sheets, gels, or fibers. Collagen has favorable properties for tissue regeneration, such as pore structure, permeability, hydrophilicity, and stability in vivo. Collagen scaffolds also support deposition of cells, such as osteoblasts and fibroblasts, and once inserted, facilitate growth to proceed normally.
Reconstructive surgery.
Collagens are widely used in the construction of artificial skin substitutes used for managing severe burns and wounds. These collagens may be derived from cow, horse, pig, or even human sources; and are sometimes used in combination with silicones, glycosaminoglycans, fibroblasts, growth factors and other substances.
Wound healing.
Collagen is one of the body's key natural resources and a component of skin tissue that can benefit all stages of wound healing. When collagen is made available to the wound bed, closure can occur. This avoids wound deterioration and procedures such as amputation.
Collagen is used as a natural wound dressing because it has properties that artificial wound dressings do not have. It resists bacteria, which is vitally important in wound dressing. As a burn dressing, collagen helps the burn heal quickly by helping granulation tissue to grow over it.
Throughout the four phases of wound healing, collagen performs the following functions:
Use in basic research.
Collagen is used in laboratory studies for cell culture, studying cell behavior and cellular interactions with the extracellular environment. Collagen is also widely used as a bioink for 3D bioprinting and biofabrication of 3D tissue models.
Biology.
The collagen protein is composed of a triple helix, which generally consists of two identical chains (α1) and an additional chain that differs slightly in its chemical composition (α2). The amino acid composition of collagen is atypical for proteins, particularly with respect to its high hydroxyproline content. The most common motifs in collagen's amino acid sequence are glycine-proline-X and glycine-X-hydroxyproline, where X is any amino acid other than glycine, proline or hydroxyproline.
The table below lists average amino acid composition for fish and mammal skin.
Synthesis.
First, a three-dimensional stranded structure is assembled, mostly composed of the amino acids glycine and proline. This is the collagen precursor procollagen. Then, procollagen is modified by the addition of hydroxyl groups to the amino acids proline and lysine. This step is important for later glycosylation and the formation of collagen's triple helix structure. Because the hydroxylase enzymes performing these reactions require vitamin C as a cofactor, a long-term deficiency in this vitamin results in impaired collagen synthesis and scurvy. These hydroxylation reactions are catalyzed by the enzymes prolyl 4-hydroxylase and lysyl hydroxylase. The reaction consumes one ascorbate molecule per hydroxylation. Collagen synthesis occurs inside and outside cells.
The most common form of collagen is fibrillary collagen. Another common form is meshwork collagen, which is often involved in the formation of filtration systems. All types of collagen are triple helices, but differ in the make-up of their alpha peptides created in step 2. Below we discuss the formation of fibrillary collagen.
Amino acids.
Collagen has an unusual amino acid composition and sequence:
Cortisol stimulates degradation of (skin) collagen into amino acids.
Collagen I formation.
Most collagen forms in a similar manner, but the following process is typical for type I:
Molecular structure.
A single collagen molecule, tropocollagen, is used to make up larger collagen aggregates, such as fibrils. It is approximately 300 nm long and 1.5 nm in diameter, and it is made up of three polypeptide strands (called alpha peptides, see step 2), each of which has the conformation of a left-handed helix – this should not be confused with the right-handed alpha helix. These three left-handed helices are twisted together into a right-handed triple helix or "super helix", a cooperative quaternary structure stabilized by many hydrogen bonds. With type I collagen and possibly all fibrillar collagens, if not all collagens, each triple-helix associates into a right-handed super-super-coil referred to as the collagen microfibril. Each microfibril is interdigitated with its neighboring microfibrils to a degree that might suggest they are individually unstable, although within collagen fibrils, they are so well ordered as to be crystalline.
A distinctive feature of collagen is the regular arrangement of amino acids in each of the three chains of these collagen subunits. The sequence often follows the pattern Gly-Pro-X or Gly-X-Hyp, where X may be any of various other amino acid residues. Proline or hydroxyproline constitute about 1/6 of the total sequence. With glycine accounting for 1/3 of the sequence, this means approximately half of the collagen sequence is not glycine, proline or hydroxyproline, a fact often missed due to the distraction of the unusual GX1X2 character of collagen alpha-peptides. The high glycine content of collagen is important with respect to stabilization of the collagen helix, as this allows the very close association of the collagen fibers within the molecule, facilitating hydrogen bonding and the formation of intermolecular cross-links. This kind of regular repetition and high glycine content is found in only a few other fibrous proteins, such as silk fibroin.
Collagen is not only a structural protein. Due to its key role in the determination of cell phenotype, cell adhesion, tissue regulation, and infrastructure, many sections of its non-proline-rich regions have cell or matrix association/regulation roles. The relatively high content of proline and hydroxyproline rings, with their geometrically constrained carboxyl and (secondary) amino groups, along with the rich abundance of glycine, accounts for the tendency of the individual polypeptide strands to form left-handed helices spontaneously, without any intrachain hydrogen bonding.
Because glycine is the smallest amino acid with no side chain, it plays a unique role in fibrous structural proteins. In collagen, Gly is required at every third position because the assembly of the triple helix puts this residue at the interior (axis) of the helix, where there is no space for a larger side group than glycine's single hydrogen atom. For the same reason, the rings of the Pro and Hyp must point outward. These two amino acids help stabilize the triple helix – Hyp even more so than Pro because of a stereoelectronic effect; a lower concentration of them is required in animals such as fish, whose body temperatures are lower than most warm-blooded animals. Lower proline and hydroxyproline contents are characteristic of cold-water, but not warm-water fish; the latter tend to have similar proline and hydroxyproline contents to mammals. The lower proline and hydroxyproline contents of cold-water fish and other poikilotherm animals lead to their collagen having a lower thermal stability than mammalian collagen. This lower thermal stability means that gelatin derived from fish collagen is not suitable for many food and industrial applications.
The tropocollagen subunits spontaneously self-assemble, with regularly staggered ends, into even larger arrays in the extracellular spaces of tissues. Additional assembly of fibrils is guided by fibroblasts, which deposit fully formed fibrils from fibripositors. In the fibrillar collagens, molecules are staggered to adjacent molecules by about 67 nm (a unit that is referred to as 'D' and changes depending upon the hydration state of the aggregate). In each D-period repeat of the microfibril, there is a part containing five molecules in cross-section, called the "overlap", and a part containing only four molecules, called the "gap". These overlap and gap regions are retained as microfibrils assemble into fibrils, and are thus viewable using electron microscopy. The triple helical tropocollagens in the microfibrils are arranged in a quasihexagonal packing pattern.
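A rough consistency check, using only the figures quoted above (a tropocollagen length of about 300 nm and a stagger D of about 67 nm), shows why the packing alternates between five- and four-molecule cross-sections:

$$
\begin{aligned}
L &\approx 300\,\text{nm}, \qquad D \approx 67\,\text{nm}, \qquad L/D \approx 4.5,\\
\text{overlap} &= L - 4D \approx 300 - 268 = 32\,\text{nm} \quad (\text{five molecules in cross-section}),\\
\text{gap} &= 5D - L \approx 335 - 300 = 35\,\text{nm} \quad (\text{four molecules in cross-section}).
\end{aligned}
$$

The gap length estimated this way is of the same order as the roughly 40 nm gaps between tropocollagen ends described for bone later in this article; the exact values depend on the hydration-dependent value of D.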
There is some covalent crosslinking within the triple helices and a variable amount of covalent crosslinking between tropocollagen helices forming well-organized aggregates (such as fibrils). Larger fibrillar bundles are formed with the aid of several different classes of proteins (including different collagen types), glycoproteins, and proteoglycans to form the different types of mature tissues from alternate combinations of the same key players. Collagen's insolubility was a barrier to the study of monomeric collagen until it was found that tropocollagen from young animals can be extracted because it is not yet fully crosslinked. However, advances in microscopy techniques (i.e. electron microscopy (EM) and atomic force microscopy (AFM)) and X-ray diffraction have enabled researchers to obtain increasingly detailed images of collagen structure "in situ". These later advances are particularly important to better understanding the way in which collagen structure affects cell–cell and cell–matrix communication and how tissues are constructed in growth and repair and changed in development and disease. For example, using AFM–based nanoindentation it has been shown that a single collagen fibril is a heterogeneous material along its axial direction with significantly different mechanical properties in its gap and overlap regions, correlating with its different molecular organizations in these two regions.
Collagen fibrils/aggregates are arranged in different combinations and concentrations in various tissues to provide varying tissue properties. In bone, entire collagen triple helices lie in a parallel, staggered array. 40 nm gaps between the ends of the tropocollagen subunits (approximately equal to the gap region) probably serve as nucleation sites for the deposition of long, hard, fine crystals of the mineral component, hydroxylapatite (approximately Ca10(OH)2(PO4)6). Type I collagen gives bone its tensile strength.
Associated disorders.
Collagen-related diseases most commonly arise from genetic defects or nutritional deficiencies that affect the biosynthesis, assembly, posttranslational modification, secretion, or other processes involved in normal collagen production.
In addition to the above-mentioned disorders, excessive deposition of collagen occurs in scleroderma.
Diseases.
One thousand mutations have been identified in 12 out of more than 20 types of collagen. These mutations can lead to various diseases at the tissue level.
Osteogenesis imperfecta – Caused by a mutation in "type 1 collagen", dominant autosomal disorder, results in weak bones and irregular connective tissue, some cases can be mild while others can be lethal. Mild cases have lowered levels of collagen type 1 while severe cases have structural defects in collagen.
Chondrodysplasias – Skeletal disorder believed to be caused by a mutation in "type 2 collagen", further research is being conducted to confirm this.
Ehlers–Danlos syndrome – Thirteen different types of this disorder, which lead to deformities in connective tissue, are known. Some of the rarer types can be lethal, leading to the rupture of arteries. Each syndrome is caused by a different mutation. For example, the vascular type (vEDS) of this disorder is caused by a mutation in "collagen type 3".
Alport syndrome – Can be passed on genetically, usually as X-linked dominant, but also as both an autosomal dominant and autosomal recessive disorder, those with the condition have problems with their kidneys and eyes, loss of hearing can also develop during the childhood or adolescent years.
Knobloch syndrome – Caused by a mutation in the COL18A1 gene that codes for the production of collagen XVIII. Patients present with protrusion of the brain tissue and degeneration of the retina; an individual who has family members with the disorder is at an increased risk of developing it themselves since there is a hereditary link.
Animal harvesting.
When not synthesized, collagen can be harvested from animal skin. This has led to deforestation, as has occurred in Paraguay, where large collagen producers buy large amounts of cattle hides from regions that have been clear-cut for cattle grazing.
Characteristics.
Collagen is one of the long, fibrous structural proteins whose functions are quite different from those of globular proteins, such as enzymes. Tough bundles of collagen called "collagen fibers" are a major component of the extracellular matrix that supports most tissues and gives cells structure from the outside, but collagen is also found inside certain cells. Collagen has great tensile strength, and is the main component of fascia, cartilage, ligaments, tendons, bone and skin. Along with elastin and soft keratin, it is responsible for skin strength and elasticity, and its degradation leads to wrinkles that accompany aging. It strengthens blood vessels and plays a role in tissue development. It is present in the cornea and lens of the eye in crystalline form. It may be one of the most abundant proteins in the fossil record, given that it appears to fossilize frequently, even in bones from the Mesozoic and Paleozoic.
Mechanical properties.
Collagen is a complex hierarchical material with mechanical properties that vary significantly across different scales.
On the molecular scale, atomistic and coarse-grained modeling simulations, as well as numerous experimental methods, have led to several estimates of the Young's modulus of collagen at the molecular level. Only above a certain strain rate is there a strong relationship between elastic modulus and strain rate, possibly due to the large number of atoms in a collagen molecule. The length of the molecule is also important, where longer molecules have lower tensile strengths than shorter ones due to short molecules having a large proportion of hydrogen bonds being broken and reformed.
On the fibrillar scale, collagen has a lower modulus compared to the molecular scale, and varies depending on geometry, scale of observation, deformation state, and hydration level. By increasing the crosslink density from zero to 3 per molecule, the maximum stress the fibril can support increases from 0.5 GPa to 6 GPa.
Limited tests have been done on the tensile strength of the collagen fiber, but generally it has been shown to have a lower Young's modulus compared to fibrils.
When studying the mechanical properties of collagen, tendon is often chosen as the ideal material because it is close to a pure and aligned collagen structure. However, at the macro (tissue) scale, the vast number of structures that collagen fibers and fibrils can be arranged into results in highly variable properties. For example, tendon has primarily parallel fibers, whereas skin consists of a net of wavy fibers, resulting in a much higher strength and lower ductility in tendon compared to skin. The mechanical properties of collagen at multiple hierarchical levels are given in the literature.
Collagen is known to be a viscoelastic solid. When the collagen fiber is modeled as two Kelvin-Voigt models in series, each consisting of a spring and a dashpot in parallel, the strain in the fiber can be modeled according to the following equation:
formula_1
where α, β, and γ are defined materials properties, εD is fibrillar strain, and εT is total strain.
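The exact expression referred to above (formula_1) is not reproduced here, but the general behaviour of two Kelvin–Voigt elements in series can be sketched numerically. The following is a minimal illustration of the spring-and-dashpot construction under an assumed constant stress (a creep test); the moduli, viscosities, and function name are hypothetical placeholders and are not the collagen-specific fiber model or parameters referred to above.

```python
import math

def kelvin_voigt_series_creep(stress, E1, eta1, E2, eta2, t):
    """Creep strain of two Kelvin-Voigt elements (spring E in parallel with
    dashpot eta) connected in series, under a constant applied stress.

    Each element i relaxes toward stress/E_i with time constant eta_i/E_i,
    and the strains of elements in series simply add.
    """
    strain1 = (stress / E1) * (1.0 - math.exp(-E1 * t / eta1))
    strain2 = (stress / E2) * (1.0 - math.exp(-E2 * t / eta2))
    return strain1 + strain2

# Hypothetical, illustrative parameters (not measured collagen values).
stress = 10.0                # MPa, constant applied stress
E1, eta1 = 500.0, 2_000.0    # MPa, MPa*s -> fast element, time constant 4 s
E2, eta2 = 200.0, 20_000.0   # MPa, MPa*s -> slow element, time constant 100 s

for t in (0.0, 1.0, 10.0, 100.0, 1000.0):  # seconds
    eps = kelvin_voigt_series_creep(stress, E1, eta1, E2, eta2, t)
    print(f"t = {t:7.1f} s   strain = {eps:.4f}")
```

At long times both exponentials saturate and the strain approaches stress/E1 + stress/E2, illustrating how the compliances of elements in series add; the article's fitted constants α, β, and γ play the role of such material parameters for the collagen fiber.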
Uses.
Collagen has a wide variety of applications. In the medical industry, it is used in cosmetic surgery and burn surgery. An example of collagen use for food manufacturing is in casings for sausages.
If collagen is subject to sufficient denaturation, such as by heating, the three tropocollagen strands separate partially or completely into globular domains, containing a different secondary structure to the normal collagen polyproline II (PPII) of random coils. This process describes the formation of gelatin, which is used in many foods, including flavored gelatin desserts. Besides food, gelatin has been used in pharmaceutical, cosmetic, and photography industries. It is also used as a dietary supplement, and has been advertised as a potential remedy against the ageing process.
From the Greek for glue, "kolla", the word collagen means "glue producer" and refers to the early process of boiling the skin and sinews of horses and other animals to obtain glue. Collagen adhesive was used by Egyptians about 4,000 years ago, and Native Americans used it in bows about 1,500 years ago. The oldest glue in the world, carbon-dated as more than 8,000 years old, was found to be collagen – used as a protective lining on rope baskets and embroidered fabrics, to hold utensils together, and in crisscross decorations on human skulls. Collagen normally converts to gelatin, but survived due to dry conditions. Animal glues are thermoplastic, softening again upon reheating, so they are still used in making musical instruments such as fine violins and guitars, which may have to be reopened for repairs – an application incompatible with tough, synthetic plastic adhesives, which are permanent. Animal sinews and skins, including leather, have been used to make useful articles for millennia.
Gelatin-resorcinol-formaldehyde glue (and with formaldehyde replaced by less-toxic pentanedial and ethanedial) has been used to repair experimental incisions in rabbit lungs.
Cosmetics.
Bovine collagen is widely used in dermal fillers for aesthetic correction of wrinkles and skin aging. Collagen cremes are also widely sold even though collagen cannot penetrate the skin because its fibers are too large. Collagen is a vital protein in skin, hair, nails, and other tissues. Its production decreases with age and factors like sun damage and smoking. Collagen supplements, derived from sources like fish and cattle, are marketed to improve skin, hair, and nails. Studies show some skin benefits, but these supplements often contain other beneficial ingredients, making it unclear if collagen alone is effective. There's minimal evidence supporting collagen's benefits for hair and nails. Overall, the effectiveness of oral collagen supplements is not well-proven, and focusing on a healthy lifestyle and proven skincare methods like sun protection is recommended.
History.
The molecular and packing structures of collagen eluded scientists over decades of research. The first evidence that it possesses a regular structure at the molecular level was presented in the mid-1930s. Research then concentrated on the conformation of the collagen monomer, producing several competing models, although correctly dealing with the conformation of each individual peptide chain. The triple-helical "Madras" model, proposed by G. N. Ramachandran in 1955, provided an accurate model of quaternary structure in collagen. This model was supported by further studies of higher resolution in the late 20th century.
The packing structure of collagen has not been defined to the same degree outside of the fibrillar collagen types, although it has been long known to be hexagonal. As with its monomeric structure, several conflicting models propose either that the packing arrangement of collagen molecules is 'sheet-like', or is microfibrillar. The microfibrillar structure of collagen fibrils in tendon, cornea and cartilage was imaged directly by electron microscopy in the late 20th century and early 21st century. The microfibrillar structure of rat tail tendon was modeled as being closest to the observed structure, although it oversimplified the topological progression of neighboring collagen molecules, and so did not predict the correct conformation of the discontinuous D-periodic pentameric arrangement termed "microfibril".
|
6059
|
2304267
|
https://en.wikipedia.org/wiki?curid=6059
|
Calvin and Hobbes
|
Calvin and Hobbes is a daily American comic strip created by cartoonist Bill Watterson that was syndicated from November 18, 1985, to December 31, 1995. Commonly described as "the last great newspaper comic", "Calvin and Hobbes" has enjoyed enduring popularity and influence, as well as academic and even philosophical interest.
"Calvin and Hobbes" follows the humorous antics of the title characters: Calvin, a mischievous and adventurous six-year-old boy; and his friend Hobbes, a sardonic tiger. Set in the suburban United States of the 1980s and 1990s, the strip depicts Calvin's frequent flights of fancy and friendship with Hobbes. It also examines Calvin's relationships with his long-suffering parents and with his classmates, especially his neighbor Susie Derkins. Hobbes's dual nature is a defining motif for the strip: to Calvin, Hobbes is a living anthropomorphic tiger, while all the other characters seem to see Hobbes as an inanimate stuffed toy, though Watterson has not clarified exactly how Hobbes is perceived by others, or whether he is real or an imaginary friend. Though the series does not frequently mention specific political figures or ongoing events, it does explore broad issues like environmentalism, public education, and philosophical quandaries.
At the height of its popularity, "Calvin and Hobbes" was featured in over 2,400 newspapers worldwide. As of 2010, reruns of the strip appeared in more than 50 countries, and nearly 45 million copies of the "Calvin and Hobbes" books had been sold worldwide.
History.
Development.
"Calvin and Hobbes" was conceived when Bill Watterson, while working in an advertising job he detested, began devoting his spare time to developing a newspaper comic for potential syndication. He explored various strip ideas but all were rejected by the syndicates. United Feature Syndicate finally responded positively to one strip called "The Doghouse", which featured a side character (the main character's little brother) who had a stuffed tiger. United identified these characters as the strongest and encouraged Watterson to develop them as the center of their own strip. Ironically, United Feature ultimately rejected the new strip as lacking in marketing potential, although Universal Press Syndicate took it up.
Launch and early success (1985–1990).
The first "Calvin and Hobbes" strip was published on November 18, 1985 in 35 newspapers. The strip quickly became popular. Within a year of syndication, the strip was published in roughly 250 newspapers and proved to have international appeal with translation and wide circulation outside the United States.
Although "Calvin and Hobbes" underwent continual artistic development and creative innovation over the period of syndication, the earliest strips demonstrated a remarkable consistency with the latest. Watterson introduced all the major characters within the first three weeks and made no changes to the central cast over the strip's 10-year history.
By April 5, 1987, Watterson was featured in an article in the "Los Angeles Times". "Calvin and Hobbes" earned Watterson the Reuben Award from the National Cartoonists Society in the Outstanding Cartoonist of the Year category, first in 1986 and again in 1988. He was nominated another time in 1992. The Society awarded him the Humor Comic Strip Award for 1988. "Calvin and Hobbes" has also won several more awards.
As his creation grew in popularity, there was strong interest from the syndicate to merchandise the characters and expand into other forms of media. Watterson's contract with the syndicate allowed the characters to be licensed without the creator's consent, as was standard at the time. Nevertheless, Watterson had leverage by threatening to simply walk away from the comic strip.
This dynamic played out in a long and emotionally draining battle between Watterson and his syndicate editors. By 1991, Watterson had achieved his goal of securing a new contract that granted him legal control over his creation and all future licensing arrangements.
Creative control (1991–1995).
Once he had achieved his objective of creative control, Watterson's desire for privacy reasserted itself. He ceased all media interviews, relocated to New Mexico, and largely disappeared from public engagements, refusing to attend the ceremonies of any of the cartooning awards he won. The pressures of the battle over merchandising led to Watterson taking an extended break from May 5, 1991, to February 1, 1992, a move that was virtually unprecedented in the world of syndicated cartoonists.
During Watterson's first sabbatical from the strip, Universal Press Syndicate continued to charge newspapers full price to re-run old "Calvin and Hobbes" strips. Few editors approved of the move, but the strip was so popular that they had no choice but to continue to run it for fear that competing newspapers might pick it up and draw its fans away. Watterson returned to the strip in 1992 with plans to produce his Sunday strip as an unbreakable half of a newspaper or tabloid page. This made him only the second cartoonist since Garry Trudeau to have sufficient popularity to demand more space and control over the presentation of his work.
Watterson took a second sabbatical from April 3 through December 31, 1994. His return came with an announcement that "Calvin and Hobbes" would be concluding at the end of 1995. Stating his belief that he had achieved everything that he wanted to within the medium, he announced his intention to work on future projects at a slower pace with fewer artistic compromises.
The final strip ran on Sunday, December 31, 1995, depicting Calvin and Hobbes sledding down a snowy hill after a fresh snowfall with Calvin exclaiming "Let's go exploring!"
Speaking to NPR in 2005, animation critic Charles Solomon opined that the final strip "left behind a hole in the comics page that no strip has been able to fill."
Sunday formatting.
Syndicated comics were typically published six times a week in black and white, with a Sunday supplement version in a larger, full color format. This larger format version of the strip was constrained by mandatory layout requirements that made it possible for newspaper editors to format the strip for different page sizes and layouts.
Watterson grew increasingly frustrated by the shrinking of the available space for comics in the newspapers and by the mandatory panel divisions that restricted his ability to produce better artwork and more creative storytelling. He felt that without space for anything more than simple dialogue or sparse artwork, comics as an art form were becoming diluted, bland, and unoriginal.
Watterson longed for the artistic freedom allotted to classic strips such as "Little Nemo" and "Krazy Kat", and in 1989 he gave a sample of what could be accomplished with such liberty in the opening pages of the Sunday strip compilation "The Calvin and Hobbes Lazy Sunday Book": an 8-page, previously unpublished Calvin story fully illustrated in watercolor. The same book contained an afterword from the artist himself, reflecting on a time when comic strips were allocated a whole page of the newspaper and every comic was like a "color poster".
Within two years, Watterson was ultimately successful in negotiating a deal that provided him more space and creative freedom. Following his 1991 sabbatical, Universal Press announced that Watterson had decided to sell his Sunday strip as an unbreakable half of a newspaper or tabloid page. Many editors and even a few cartoonists, including Bil Keane ("The Family Circus") and Bruce Beattie ("Snafu"), criticized him for what they perceived as arrogance and an unwillingness to abide by the normal practices of the cartoon business. Others, including Bill Amend ("Foxtrot"), Johnny Hart ("BC", "Wizard of Id") and Barbara Brandon ("Where I'm Coming From"), supported him. The American Association of Sunday and Feature Editors even formally requested that Universal reconsider the changes. Watterson's own comment on the matter was that "editors will have to judge for themselves whether or not Calvin and Hobbes deserves the extra space. If they don't think the strip carries its own weight, they don't have to run it." Ultimately only 15 newspapers cancelled the strip in response to the layout changes.
Sabbaticals.
Bill Watterson took two sabbaticals from the daily requirements of producing the strip. The first took place from May 5, 1991, to February 1, 1992, and the second from April 3 through December 31, 1994. These sabbaticals were included in the new contract Watterson managed to negotiate with Universal Features in 1990. The sabbaticals were proposed by the syndicate themselves, who, fearing Watterson's complete burnout, endeavored to get another five years of work from their star artist.
Watterson remains only the third cartoonist with sufficient popularity and stature to receive a sabbatical from their syndicate, the first two being Garry Trudeau ("Doonesbury") in 1983 and Gary Larson ("The Far Side") in 1989. Typically, cartoonists are expected to produce sufficient strips to cover any period that they may wish to take off. Watterson's lengthy sabbaticals received some mild criticism from his fellow cartoonists including Greg Evans ("Luann"), and Charles Schulz ("Peanuts"), one of Watterson's major artistic influences, who even called it a "puzzle". Some cartoonists resented the idea that Watterson worked harder than others, while others supported it. At least one newspaper editor noted that the strip was the most popular in the country and stated that he "earned it".
Merchandising.
"Calvin and Hobbes" had almost no official product merchandising. Watterson held that comic strips should stand on their own as an art form and although he did not start out completely opposed to merchandising in all forms (or even for all comic strips), he did reject an early syndication deal that involved incorporating a more marketable, licensed character into his strip. In spite of being an unproven cartoonist, and having been flown all the way to New York to discuss the proposal, Watterson reflexively resented the idea of "cartooning by committee" and turned it down.
When "Calvin and Hobbes" was accepted by Universal Syndicate, and began to grow in popularity, Watterson found himself at odds with the syndicate, which urged him to begin merchandising the characters and touring the country to promote the first collections of comic strips. Watterson refused, believing that the integrity of the strip and its artist would be undermined by commercialization, which he saw as a major negative influence in the world of cartoon art, and that licensing his character would only violate the spirit of his work. He gave an example of this in discussing his opposition to a Hobbes plush toy: that if the essence of Hobbes' nature in the strip is that it remain unresolved whether he is a real tiger or a stuffed toy, then creating a real stuffed toy would only destroy the magic. However, having initially signed away control over merchandising in his initial contract with the syndicate, Watterson commenced a lengthy and emotionally draining battle with Universal to gain control over his work. Ultimately Universal did not approve any products against Watterson's wishes, understanding that, unlike other comic strips, it would be nearly impossible to separate the creator from the strip if Watterson chose to walk away.
One estimate places the value of licensing revenue forgone by Watterson at $300–$400 million. Almost no legitimate "Calvin and Hobbes" merchandise exists. Exceptions produced during the strip's original run include two 16-month calendars (1988–89 and 1989–90), a t-shirt for the Smithsonian Exhibit, "Great American Comics: 100 Years of Cartoon Art" (1990) and the textbook "Teaching with Calvin and Hobbes", which has been described as "perhaps the most difficult piece of official "Calvin and Hobbes" memorabilia to find." In 2010, Watterson did allow his characters to be included in a series of United States Postal Service stamps honoring five classic American comics. Licensed prints of "Calvin and Hobbes" were made available and have also been included in various academic works.
The strip's immense popularity has led to the appearance of various counterfeit items such as window decals and T-shirts that often feature crude humor, binge drinking and other themes that are not found in Watterson's work. Images from one strip in which Calvin and Hobbes dance to loud music at night were commonly used for copyright violations. After threat of a lawsuit alleging infringement of copyright and trademark, some sticker makers replaced Calvin with a different boy, while other makers made no changes. Watterson wryly commented, "I clearly miscalculated how popular it would be to show Calvin urinating on a Ford logo," but later added, "long after the strip is forgotten, [they] are my ticket to immortality".
Animation.
Watterson has expressed admiration for animation as an artform. In a 1989 interview in "The Comics Journal" he described the appeal of being able to do things with a moving image that cannot be done by a simple drawing: the distortion, the exaggeration and the control over the length of time an event is viewed. However, although the visual possibilities of animation appealed to Watterson, the idea of finding a voice for Calvin made him uncomfortable, as did the idea of working with a team of animators. Ultimately, "Calvin and Hobbes" was never made into an animated series. Watterson later stated in "The Calvin and Hobbes Tenth Anniversary Book" that he liked the fact that his strip was a "low-tech, one-man operation," and that he took great pride in the fact that he drew every line and wrote every word on his own. Calls from major Hollywood figures interested in an adaptation of his work, including Jim Henson, George Lucas and Steven Spielberg, were never returned and in a 2013 interview Watterson stated that he had "zero interest" in an animated adaptation as there was really no upside for him in doing so.
Style and influences.
The strip borrows several elements and themes from three major influences: Walt Kelly's "Pogo", George Herriman's "Krazy Kat" and Charles M. Schulz's "Peanuts". Schulz and Kelly particularly influenced Watterson's outlook on comics during his formative years.
Elements of Watterson's artistic style are his characters' diverse and often exaggerated expressions (particularly those of Calvin), elaborate and bizarre backgrounds for Calvin's flights of imagination, expressions of motion and frequent visual jokes and metaphors. In the later years of the strip, with more panel space available for his use, Watterson experimented more freely with different panel layouts, art styles, stories without dialogue and greater use of white space. He also experimented with his tools, once inking a strip with a stick from his yard in order to achieve a particular look. He also makes a point of not showing certain things explicitly: the "Noodle Incident" and the children's book "Hamster Huey and the Gooey Kablooie" are left to the reader's imagination, where Watterson was sure they would be "more outrageous" than he could portray.
Production and technique.
Watterson's technique started with minimalist sketches drawn in light pencil (though the larger Sunday strips often required more elaborate work) on a piece of Bristol board, with his brand of choice being Strathmore because he felt it held the drawings better on the page than the cheaper brands (Watterson said he initially used any cheap pad of Bristol board his local supply store had, but switched to Strathmore after he found himself growing more and more displeased with the results). He would then use a small sable brush and India ink to fill in the rest of the drawing, saying that he did not want to simply trace over his penciling, which kept the inking more spontaneous. He lettered dialogue with a Rapidograph fountain pen and used a crowquill pen for odds and ends. Mistakes were covered with various forms of correction fluid, including the type used on typewriters. Watterson was careful in his use of color, often spending a great deal of time choosing the right colors for the weekly Sunday strip; his technique was to cut the color tabs the syndicate sent him into individual squares, lay out the colors, paint a watercolor approximation of the strip on tracing paper over the Bristol board, and then mark the strip accordingly before sending it on. When "Calvin and Hobbes" began there were 64 colors available for the Sunday strips. For the later Sunday strips Watterson had 125 colors as well as the ability to fade the colors into each other.
Characters.
In addition to the two titular characters, six-year-old Calvin and his stuffed tiger Hobbes, the strip features a small recurring cast that also includes Calvin's unnamed parents, his classmate and neighbor Susie Derkins, his teacher Miss Wormwood, his school bully Moe, and his babysitter Rosalyn.
Recurring elements and themes.
Art and academia.
Watterson used the strip to poke fun at the art world, principally through Calvin's unconventional creations of snowmen but also through other expressions of childhood art. When Miss Wormwood complains that he is wasting class time drawing impossible things (a "Stegosaurus" in a rocket ship, for example), Calvin proclaims himself "on the cutting edge of the "avant-garde"." He begins exploring the medium of snow when a warm day melts his snowman. His next sculpture "speaks to the horror of our own mortality, inviting the viewer to contemplate the evanescence of life." In later strips, Calvin's creative instincts diversify to include sidewalk drawings (or, as he terms them, examples of "suburban postmodernism").
Watterson also lampooned the academic world. In one example, Calvin carefully crafts an "artist's statement", claiming that such essays convey more messages than artworks themselves ever do (Hobbes blandly notes, "You misspelled "Weltanschauung""). He indulges in what Watterson calls "pop psychobabble" to justify his destructive rampages and shift blame to his parents, citing "toxic codependency." In one instance, he pens a book report based on the theory that the purpose of academic writing is to "inflate weak ideas, obscure poor reasoning and inhibit clarity," entitled "The Dynamics of Interbeing and Monological Imperatives in Dick and Jane: A Study in Psychic Transrelational Gender Modes". Displaying his creation to Hobbes, he remarks, "Academia, here I come!" Watterson explains that he adapted this jargon (and similar examples from several other strips) from an actual book of art criticism.
Overall, Watterson's satirical essays serve to attack both sides, criticizing both the commercial mainstream and the artists who are supposed to be "outside" it. The strip on Sunday, June 21, 1992, criticized the naming of the Big Bang theory as not evocative of the wonders behind it and coined the term "Horrendous Space Kablooie", an alternative that achieved some informal popularity among scientists and was often shortened to "the HSK". The term has also been referred to in newspapers, books and university courses.
Calvin's alter-egos.
Calvin imagines himself as many great creatures and other people, including dinosaurs, elephants, jungle-farers and superheroes. Three of his alter egos are well-defined and recurrent: Spaceman Spiff, a spacefaring hero; Tracer Bullet, a hardboiled private eye; and Stupendous Man, a superhero.
Cardboard boxes.
Calvin also has several adventures involving corrugated cardboard boxes, which he adapts for many imaginative and elaborate uses. In one strip, when Calvin shows off his Transmogrifier, a device that transforms its user into any desired creature or item, Hobbes remarks, "It's amazing what they do with corrugated cardboard these days." Calvin is able to change the function of the boxes by rewriting the label and flipping the box onto another side. In this way, a box can be used not only for its conventional purposes (a storage container for water balloons, for example), but also as a flying time machine, a duplicator, a transmogrifier or, with the attachment of a few wires and a colander, a "Cerebral Enhance-o-tron."
In the real world, Calvin's antics with his box have had varying effects. When he transmogrified into a tiger, he still appeared as a regular human child to his parents. However, in a story where he made several duplicates of himself, his parents are seen interacting with what does seem like multiple Calvins, including in a strip where two of him are seen in the same panel as his father. It is ultimately unknown what his parents do or do not see, as Calvin tries to hide most of his creations (or conceal their effects) so as not to traumatize them.
In addition, Calvin uses a cardboard box as a sidewalk kiosk to sell things. Often, Calvin offers merchandise no one would want, such as "suicide drink", "a swift kick in the butt" for one dollar or a "frank appraisal of your looks" for fifty cents. In one strip, he sells "happiness" for ten cents, hitting the customer in the face with a water balloon and explaining that he meant his own happiness. In another strip, he sold "insurance", firing a slingshot at those who refused to buy it. In some strips, he tried to sell "great ideas" and, in one earlier strip, he attempted to sell the family car to obtain money for a grenade launcher. In yet another strip, he sells "life" for five cents, where the customer receives nothing in return, which, in Calvin's opinion, is life.
The box has also functioned as an alternate secret meeting place for G.R.O.S.S., as the "Box of Secrecy".
Calvinball.
Calvinball is an improvisational sport/game introduced in a 1990 storyline that involved Calvin's negative experience of joining the school baseball team. Calvinball is a nomic or self-modifying game, a contest of wits, skill and creativity rather than stamina or athletic skill. The game is portrayed as a rebellion against conventional team sports and became a staple of the final five years of the comic. The only consistent rules of the game are that Calvinball may never be played with the same rules twice and that each participant must wear a mask.
When asked how to play, Watterson stated: "It's pretty simple: you make up the rules as you go." In most appearances of the game, a comical array of conventional and non-conventional sporting equipment is involved, including a croquet set, a badminton set, assorted flags, bags, signs, a hobby horse, water buckets and balloons, with humorous allusions to unseen elements such as "time-fracture wickets". Scoring is portrayed as arbitrary and nonsensical ("Q to 12" and "oogy to boogy") and the lack of fixed rules leads to lengthy argument between the participants as to who scored, where the boundaries are, and when the game is finished. Usually, the contest results in Calvin being outsmarted by Hobbes. The game has been described in one academic work not as a new game based on fragments of an older one, but as the "constant connecting and disconnecting of parts, the constant evasion of rules or guidelines based on collective creativity."
Snowmen and other snow art.
Calvin often creates darkly humorous scenes with his snowmen and other snow sculptures. He uses the snowman for social commentary, revenge, or pure enjoyment. Examples include Snowman Calvin being yelled at by Snowman Dad to shovel the snow; one snowman eating snow cones scooped out of a second snowman, who is lying on the ground with an ice-cream scoop in his back; a "snowman house of horror"; and snowmen representing people he hates. "The ones I "really" hate are small, so they'll melt faster," he says. There was even an occasion on which Calvin accidentally brought a snowman to life and it made itself and a small army into "deranged mutant killer monster snow goons."
Calvin's snow art is often used as a commentary on art in general. For example, Calvin has complained more than once about the lack of originality in other people's snow art and compared it with his own grotesque snow sculptures. In one of these instances, Calvin and Hobbes claim to be the sole guardians of high culture; in another, Hobbes admires Calvin's willingness to put artistic integrity above marketability, causing Calvin to reconsider and make an ordinary snowman.
Wagon and sled rides.
Calvin and Hobbes frequently ride downhill in a wagon or sled (depending on the season), as a device to add some physical comedy to the strip and because, according to Watterson, "it's a lot more interesting ... than talking heads." While the ride is sometimes the focus of the strip, it also frequently serves as a counterpoint or visual metaphor while Calvin ponders the meaning of life, death, God, philosophy or a variety of other weighty subjects. Many of their rides end in spectacular crashes which leave them battered, beaten up and broken, a fact which convinces Hobbes to sometimes hop off before a ride even begins. In the final strip, Calvin and Hobbes depart on their sled to go exploring. This theme is similar (perhaps even an homage) to scenes in Walt Kelly's "Pogo". Calvin and Hobbes' sled has been described as the most famous sled in American arts since "Citizen Kane".
G.R.O.S.S. (Get Rid of Slimy GirlS).
G.R.O.S.S. (which is a backronym for Get Rid Of Slimy GirlS, "otherwise it doesn't spell anything") is a club in which Calvin and Hobbes are the only members. The club was founded in the garage of their house, but to clear space for its activities, Calvin and (purportedly) Hobbes push Calvin's parents' car, causing it to roll into a ditch (but not suffer damage); the incident prompts the duo to change the club's location to Calvin's treehouse. They hold meetings that involve finding ways to annoy and discomfort Susie Derkins, a girl and enemy of their club. Actions include planting a fake secret tape near her in an attempt to draw her into a trap, trapping her in a closet at their house and creating elaborate water balloon traps. Calvin gave himself and Hobbes important positions in the club, Calvin being "Dictator-for-Life" and Hobbes being "President-and-First-Tiger". They go into Calvin's treehouse for their club meetings and often get into fights during them. The password to get into the treehouse is intentionally long and difficult, which has, on at least one occasion, ruined Calvin's plans. As Hobbes is able to climb the tree without the rope, he is usually the one who comes up with the password, which often involves heaping praise upon tigers. In one strip, Calvin, rushing to get into the treehouse so he can throw things at a passing Susie Derkins, insults Hobbes, who is in the treehouse and thus has to let down the rope; Hobbes forces Calvin to say the password as penance for the insult. Susie arrives while Calvin is still reciting, stumbling through ""Verse Seven:" Tigers are perfect!/The E-pit-o-me/of good looks and grace/and quiet..uh..um..dignity". The opportunity to pelt Susie with something having passed, Calvin threatens to turn Hobbes into a rug.
Dinosaurs.
Dinosaurs play a heavy role in many of Calvin's imagination sequences. These strips will often begin with hyper-realistic scenes of dinosaur interactions, only to end with a cut to Calvin acting out these scenes as part of a daydream, often to his embarrassment. Watterson placed a heavy focus on accurately depicting dinosaurs, due to his own interest in them as well as to reinforce how real they are to Calvin.
Books.
There are 18 "Calvin and Hobbes" books, published from 1987 to 1997. These include 11 collections, which form a complete archive of the newspaper strips, except for a single daily strip from November 28, 1985. (The collections "do" contain a strip for this date, but it is not the same strip that appeared in some newspapers.) Treasuries usually combine the two preceding collections with bonus material and include color reprints of Sunday comics.
Watterson included some new material in the treasuries. In "The Essential Calvin and Hobbes", which includes cartoons from the collections "Calvin and Hobbes" and "Something Under the Bed Is Drooling", the back cover features a scene of a giant Calvin rampaging through a town. The scene is based on Watterson's home town of Chagrin Falls, Ohio, and Calvin is holding the Chagrin Falls Popcorn Shop, an iconic candy and ice cream shop overlooking the town's namesake falls. Several of the treasuries incorporate additional poetry; "The Indispensable Calvin and Hobbes" book features a set of poems, ranging from just a few lines to an entire page, that cover topics such as Calvin's mother's "hindsight" and exploring the woods. In "The Essential Calvin and Hobbes", Watterson presents a long poem explaining a night's battle against a monster from Calvin's perspective. "The Authoritative Calvin and Hobbes" includes a story based on Calvin's use of the Transmogrifier to finish his reading homework.
A complete collection of "Calvin and Hobbes" strips, in three hardcover volumes totaling 1440 pages, was released on October 4, 2005, by Andrews McMeel Publishing. It includes color prints of the art used on paperback covers, the treasuries' extra illustrated stories and poems and a new introduction by Bill Watterson in which he talks about his inspirations and his story leading up to the publication of the strip. The alternate 1985 strip is still omitted, and three other strips (January 7 and November 24, 1987, and November 25, 1988) have altered dialogue. A four-volume paperback version was released November 13, 2012.
To celebrate the release (which coincided with the strip's 20th anniversary and the tenth anniversary of its absence from newspapers), Bill Watterson answered 15 questions submitted by readers.
Early books were printed in a smaller format in black and white. These strips were later reproduced in color, two collections at a time, in the "Treasuries" ("Essential", "Authoritative" and "Indispensable"), except for the contents of "Attack of the Deranged Mutant Killer Monster Snow Goons". Those Sunday strips were not reprinted in color until the "Complete" collection was finally published in 2005.
Watterson claims he named the books the ""Essential", "Authoritative" and "Indispensable"" because, as he says in "The Calvin and Hobbes Tenth Anniversary Book", the books are "obviously none of these things."
"Teaching with Calvin and Hobbes".
An officially licensed children's textbook entitled "Teaching with Calvin and Hobbes" was published in a single print run in Fargo, North Dakota, in 1993. The book is composed of "Calvin and Hobbes" strips that form story arcs, including "The Binoculars" and "The Bug Collection", followed by lessons based on the stories.
The book is rare and highly sought. It has been called the "Holy Grail" for "Calvin and Hobbes" collectors.
Reception.
Reviewing "Calvin and Hobbes" in 1990, "Entertainment Weekly" Ken Tucker gave the strip an A+ rating, writing "Watterson summons up the pain and confusion of childhood as much as he does its innocence and fun."
Academic response.
In 1993, paleontologist and paleoartist Gregory S. Paul praised Bill Watterson for the scientific accuracy of the dinosaurs appearing in "Calvin and Hobbes".
In her 1994 book "When Toys Come Alive", Lois Rostow Kuznets theorizes that Hobbes serves both as a figure of Calvin's childish fantasy life and as an outlet for the expression of libidinous desires more associated with adults. Kuznets also analyzes Calvin's other fantasies, suggesting that they are a second tier of fantasies utilized in places like school where transitional objects such as Hobbes would not be socially acceptable.
Political scientist James Q. Wilson, in a paean to "Calvin and Hobbes" upon Watterson's decision to end the strip in 1995, characterized it as "our only popular explication of the moral philosophy of Aristotle."
A collection of original Sunday strips was exhibited at Ohio State University's Billy Ireland Cartoon Library & Museum in 2001. Watterson himself selected the strips and provided his own commentary for the exhibition catalog, which was later published by Andrews McMeel as "Calvin and Hobbes: Sunday Pages 1985–1995".
Since the discontinuation of "Calvin and Hobbes", individual strips have been licensed for reprint in schoolbooks, including the Christian homeschooling book "The Fallacy Detective" in 2002, and the university-level philosophy reader "Open Questions: Readings for Critical Thinking and Writing" in 2005; in the latter, the ethical views of Watterson and his characters Calvin and Hobbes are discussed in relation to the views of professional philosophers. In a 2009 evaluation of the entire body of "Calvin and Hobbes" strips using grounded theory methodology, Christijan D. Draper found that: "Overall, "Calvin and Hobbes" suggests that meaningful time use is a key attribute of a life well lived," and that "the strip suggests one way to assess the meaning associated with time use is through preemptive retrospection by which a person looks at current experiences through the lens of an anticipated future..."
"Calvin and Hobbes" strips were again exhibited at the Billy Ireland Cartoon Library & Museum at The Ohio State University in 2014, in an exhibition entitled "Exploring Calvin and Hobbes". An exhibition catalog by the same title, which also contained an interview with Watterson conducted by Jenny Robb, the curator of the museum, was published by Andrews McMeel in 2015.
Legacy.
Years after its original newspaper run, "Calvin and Hobbes" has continued to exert influence in entertainment, art, and fandom.
In television, Calvin and Hobbes have been satirically depicted in stop motion animation in the 2006 and 2018 "Robot Chicken" episodes "Lust for Puppets" and "Jew No. 1 Opens a Treasure Chest" respectively, and in traditional animation in the 2009 "Family Guy" episode "Not All Dogs Go to Heaven." In the 2013 "Community" episode "Paranormal Parentage," the characters Abed Nadir (Danny Pudi) and Troy Barnes (Donald Glover) dress as Calvin and Hobbes, respectively, for Halloween.
British artists, merchandisers, booksellers, and philosophers were interviewed for a 2009 BBC Radio 4 half-hour programme about the abiding popularity of the comic strip, narrated by Phill Jupitus.
The first book-length study of the strip, "Looking for Calvin and Hobbes: The Unconventional Story of Bill Watterson and His Revolutionary Comic Strip" by Nevin Martell, was first published in 2009; an expanded edition was published in 2010. The book chronicles Martell's quest to tell the story of "Calvin and Hobbes" and Watterson through research and interviews with people connected to the cartoonist and his work. The director of the later documentary "Dear Mr. Watterson" referenced "Looking for Calvin and Hobbes" in discussing the production of the movie, and Martell appears in the film.
The American documentary film "Dear Mr. Watterson", released in 2013, explores the impact and legacy of "Calvin and Hobbes" through interviews with authors, curators, historians, and numerous professional cartoonists.
The enduring significance of "Calvin and Hobbes" to international cartooning was recognized by the jury of the Angoulême International Comics Festival in 2014 by the awarding of its Grand Prix to Watterson, only the fourth American to ever receive the honor (after Will Eisner, Robert Crumb, and Art Spiegelman).
From 2016 to 2021, author Berkeley Breathed included "Calvin and Hobbes" in various "Bloom County" cartoons. He launched the first cartoon on April Fool's Day 2016 and jokingly issued a statement suggesting that he had acquired "Calvin and Hobbes" from Bill Watterson, who was "out of the Arizona facility, continent and looking forward to some well-earned financial security." While bearing Watterson's signature and drawing style as well as featuring characters from both "Calvin and Hobbes" and Breathed's "Bloom County", it is unclear whether Watterson had any input into these cartoons or not.
"Calvin and Hobbes" remains the most viewed comic on GoComics, which cycles through old strips with an approximately 30-year delay.
Grown-up Calvin.
Portraying Calvin as a teenager/adult has inspired writers.
In 2011, a comic strip appeared by cartoonists Dan and Tom Heyerman called "Hobbes and Bacon". The strip depicts Calvin as an adult, married to Susie Derkins with a young daughter named after philosopher Francis Bacon, to whom Calvin gives Hobbes. Though consisting of only four strips originally, "Hobbes and Bacon" received considerable attention when it appeared and was continued by other cartoonists and artists.
A novel titled "Calvin" by CLA Young Adult Book Award–winning author Martine Leavitt was published in 2015. The story tells of seventeen-year-old Calvin—who was born on the day that "Calvin and Hobbes" ended, and who has now been diagnosed with schizophrenia—and his hallucination of Hobbes, his childhood stuffed tiger. With his friend Susie, who might also be a hallucination, Calvin sets off to find Bill Watterson in the hope that the cartoonist can provide aid for Calvin's condition.
The titular character of the comic strip "Frazz" has been noted for his similar appearance and personality to a grown-up Calvin. Creator Jef Mallett has stated that although Watterson is an inspiration to him, the similarities are unintentional.
|
6060
|
7903804
|
https://en.wikipedia.org/wiki?curid=6060
|
Campaign for Real Ale
|
The Campaign for Real Ale (CAMRA) is an independent voluntary consumer organisation headquartered in St Albans, which promotes real ale, cider and perry and traditional British pubs and clubs.
History.
The organisation was founded on 16 March 1971 in Kruger's Bar, Dunquin, County Kerry, Ireland, by Michael Hardman, Graham Lees, Jim Makin, and Bill Mellor, who were opposed to the growing mass production of beer and the homogenisation of the British brewing industry. The original name was the Campaign for the Revitalisation of Ale. Following the formation of the Campaign, the first annual general meeting took place in 1972, at the Rose Inn in Coton Road, Nuneaton.
Early membership consisted of the four founders and their friends. Interest in CAMRA and its objectives spread rapidly, with 5,000 members signed up by 1973. Other early influential members included Christopher Hutt, author of "Death of the English Pub", who succeeded Hardman as chairman, Frank Baillie, author of "The Beer Drinker's Companion", and later the many times "Good Beer Guide" editor, Roger Protz.
In 1991, CAMRA had 30,000 members across the UK and abroad and, a year later, helped to launch the European Beer Consumers Union.
Activities.
CAMRA's campaigns include promoting small brewing and pub businesses, reforming licensing laws, reducing tax on beer, and stopping continued consolidation among local British brewers. It also makes an effort to promote less common varieties of beer, including stout, porter, and mild, as well as traditional cider and perry.
CAMRA states that real ale should be served without the use of additional carbonation. This means that "any beer brand which is produced in both cask and keg versions" is not admitted to CAMRA festivals if the brewery's marketing is deemed to imply an equivalence of quality or character between the two versions.
Organisation.
CAMRA is organised on a federal basis, with over 200 local branches, each covering a particular geographical area of the UK, contributing to the central body of the organisation based in St Albans. It is governed by a National Executive, made up of 12 voluntary unpaid directors elected by the membership. The local branches are grouped into 16 regions across the UK, such as the West Midlands or Wessex.
Publications and websites.
CAMRA publishes the "Good Beer Guide", an annually compiled directory of the best 4,500 real ale outlets and listing of real ale brewers.
CAMRA members received a monthly newspaper called "What's Brewing" until its April 2021 issue and there is a quarterly colour magazine called "Beer". It also maintains a National Inventory of Historic Pub Interiors to help bring greater recognition and protection to Britain's most historic pubs.
Festivals.
CAMRA supports and promotes beer and cider festivals around the country, which are organised by local CAMRA branches. Generally, each festival charges an entry fee which either covers entry only or also includes a commemorative glass showing the details of the festival. A festival programme is usually also provided, with a list and description of the drinks available. Members may get discounted entrance to CAMRA festivals.
The Campaign also organises the annual Great British Beer Festival in August. It is now held in the Great, National & West Halls at the Olympia Exhibition Centre, in Kensington, London, having been held for a few years at Earl's Court as well as regionally in the past at venues such as Brighton and Leeds. This is the UK's largest beer festival, with over 900 beers, ciders and perries available over the week long event.
For many years, CAMRA also organised the National Winter Ales Festival. However, in 2017 this was re-branded as the Great British Beer Festival Winter where they award the Champion Winter Beer of Britain. Unlike the Great British Beer Festival, the Winter event does not have a permanent venue and is rotated throughout the country every three years. Recent hosts have been Derby and Norwich, with the event currently held each February in Birmingham. In 2020 CAMRA also launched the Great Welsh Beer Festival, to be held in Cardiff in April.
Awards.
CAMRA presents awards for beers and pubs, such as the National Pub of the Year. The competition begins in the preceding year with branches choosing their local pub of the year through either a ballot or a panel of judges. The branch winners are entered into 16 regional competitions and are then visited by several individuals, who agree on the best using a scoring system that considers beer quality, aesthetics and welcome. The four finalists are announced each year before a ceremony to crown the winner in the spring. There are also the Pub Design Awards, which are held in association with English Heritage and the Victorian Society. These comprise several categories, including new build, refurbished and converted pubs.
The best known CAMRA award is the Champion Beer of Britain, which is selected at the Great British Beer Festival. Other awards include the Champion Beer of Scotland and the Champion Beer of Wales.
National Beer Scoring Scheme.
CAMRA developed the National Beer Scoring Scheme (NBSS) as an easy to use scheme for judging beer quality in pubs, to assist CAMRA branches in selecting pubs for the "Good Beer Guide". CAMRA members input their beer scores online via WhatPub or through the Good Beer Guide app.
Pub heritage.
The CAMRA Pub Heritage Group identifies, records and helps to protect pub interiors of historic and/or architectural importance, and seeks to get them listed.
The group maintains two inventories of heritage pubs. The first, the National Inventory (NI), contains only those pubs that have been maintained in their original condition (or modified very little) for at least thirty years, and usually since at least World War II. The second, larger inventory is the Regional Inventory (RI), which is broken down by county and contains both those pubs listed in the NI and other pubs that are not eligible for the NI, for reasons such as having been overly modified, but that are still considered historically important or have particular architectural value.
LocAle.
The LocAle scheme was launched in 2007 to promote locally brewed beers. The scheme is managed by each branch and functions slightly differently in each area, but the outlines are similar: to be promoted as a LocAle, a beer must come from a brewery within a predetermined number of miles set by the CAMRA branch, generally around 20, although the North London branch has set it at 30 miles measured from brewery to pub, even if the beer is delivered via a distribution centre further away. In addition, each participating pub must keep at least one LocAle on sale at all times.
Investment club.
CAMRA members may join the CAMRA Members' Investment Club which, since 1989, has invested in real ale breweries and pub chains. As of January 2021 the club had over 3,000 members and owned investments worth over £17 million. Although all investors must be CAMRA members, the CAMRA Members' Investment Club is not part of CAMRA Ltd.
|
6061
|
20818238
|
https://en.wikipedia.org/wiki?curid=6061
|
CNO cycle
|
In astrophysics, the carbon–nitrogen–oxygen (CNO) cycle, sometimes called Bethe–Weizsäcker cycle, after Hans Albrecht Bethe and Carl Friedrich von Weizsäcker, is one of the two known sets of fusion reactions by which stars convert hydrogen to helium, the other being the proton–proton chain reaction (p–p cycle), which is more efficient at the Sun's core temperature. The CNO cycle is hypothesized to be dominant in stars that are more than 1.3 times as massive as the Sun.
Unlike the proton-proton reaction, which consumes all its constituents, the CNO cycle is a catalytic cycle. In the CNO cycle, four protons fuse, using carbon, nitrogen, and oxygen isotopes as catalysts, each of which is consumed at one step of the CNO cycle, but re-generated in a later step. The end product is one alpha particle (a stable helium nucleus), two positrons, and two electron neutrinos.
There are various alternative paths and catalysts involved in the CNO cycles, but all these cycles have the same net result:
4 ¹H + 2 e⁻ → ⁴He + 2 ν_e + 3 γ
The positrons will almost instantly annihilate with electrons, releasing energy in the form of gamma rays. The neutrinos escape from the star carrying away some energy. One nucleus goes on to become carbon, nitrogen, and oxygen isotopes through a number of transformations in a repeating cycle.
The proton–proton chain is more prominent in stars the mass of the Sun or less. This difference stems from temperature dependency differences between the two reactions; the pp-chain reaction starts at temperatures around 4×10⁶ K (4 megakelvin), making it the dominant energy source in smaller stars. A self-maintaining CNO chain starts at approximately 15×10⁶ K, but its energy output rises much more rapidly with increasing temperatures, so that it becomes the dominant source of energy at approximately 17×10⁶ K.
The Sun has a core temperature of around 15.7×10⁶ K, and only about 1.7% of the helium-4 nuclei produced in the Sun are born in the CNO cycle.
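The steepness of this temperature dependence can be illustrated with the approximate textbook power laws for energy generation near these temperatures, roughly T⁴ for the pp chain and roughly T¹⁷ for the CNO cycle. The sketch below is illustrative only: the exponents are standard approximations, and the normalisation is an assumption chosen so that the two curves cross near the 17 million K figure quoted above, not a value taken from a solar model.

```python
# Rough sketch (not a stellar model): compare how pp-chain and CNO energy
# generation scale with core temperature, using the common power-law
# approximations eps_pp ~ T^4 and eps_CNO ~ T^17, valid only near 15-25 MK.
# The normalisation is arbitrary, fixed so the curves cross at ~17 MK.

PP_EXP, CNO_EXP = 4.0, 17.0   # approximate local power-law exponents
T_CROSS = 17.0                # assumed crossover temperature in megakelvin

def eps_pp(t_mk: float) -> float:
    """Relative pp-chain energy generation rate (arbitrary units)."""
    return (t_mk / T_CROSS) ** PP_EXP

def eps_cno(t_mk: float) -> float:
    """Relative CNO-cycle energy generation rate (arbitrary units)."""
    return (t_mk / T_CROSS) ** CNO_EXP

for t in (10, 14, 15.7, 17, 20, 25):      # 15.7 MK is roughly the solar core
    ratio = eps_cno(t) / eps_pp(t)
    print(f"T = {t:5.1f} MK   CNO/pp = {ratio:9.4f}   dominant: "
          f"{'CNO' if ratio > 1 else 'pp'}")
```

Because the normalisation here is assumed, the printed ratios only convey how sharply the CNO contribution rises with temperature; detailed solar models give the much smaller solar CNO share quoted above.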
The CNO-I process was independently proposed by Carl von Weizsäcker and Hans Bethe in the late 1930s.
The first reports of the experimental detection of the neutrinos produced by the CNO cycle in the Sun were published in 2020 by the BOREXINO collaboration. This was also the first experimental confirmation that the Sun had a CNO cycle, that the proposed magnitude of the cycle was accurate, and that von Weizsäcker and Bethe were correct.
Cold CNO cycles.
Under typical conditions found in stars, catalytic hydrogen burning by the CNO cycles is limited by proton captures. Specifically, the timescale for beta decay of the radioactive nuclei produced is faster than the timescale for fusion. Because of the long timescales involved, the cold CNO cycles convert hydrogen to helium slowly, allowing them to power stars in quiescent equilibrium for many years.
CNO-I.
The first proposed catalytic cycle for the conversion of hydrogen into helium was initially called the carbon–nitrogen cycle (CN-cycle), also referred to as the Bethe–Weizsäcker cycle in honor of the independent work of Carl Friedrich von Weizsäcker in 1937–38 and Hans Bethe. Bethe's 1939 papers on the CN-cycle drew on three earlier papers written in collaboration with Robert Bacher and Milton Stanley Livingston, which came to be known informally as "Bethe's Bible". It was considered the standard work on nuclear physics for many years and was a significant factor in his being awarded the 1967 Nobel Prize in Physics. Bethe's original calculations suggested the CN-cycle was the Sun's primary source of energy. This conclusion arose from a belief, now known to be mistaken, that the abundance of nitrogen in the Sun is approximately 10%; it is actually less than half a percent. The CN-cycle, so named because it contains no stable isotope of oxygen, involves the following cycle of transformations: ¹²C → ¹³N → ¹³C → ¹⁴N → ¹⁵O → ¹⁵N → ¹²C.
This cycle is now understood as being the first part of a larger process, the CNO-cycle. The main reactions in this part of the cycle (CNO-I) are:
¹²C + ¹H → ¹³N + γ
¹³N → ¹³C + e⁺ + ν_e
¹³C + ¹H → ¹⁴N + γ
¹⁴N + ¹H → ¹⁵O + γ
¹⁵O → ¹⁵N + e⁺ + ν_e
¹⁵N + ¹H → ¹²C + ⁴He
where the carbon-12 nucleus used in the first reaction is regenerated in the last reaction. After the two positrons emitted annihilate with two ambient electrons, producing an additional 2.04 MeV, the total energy released in one cycle is 26.73 MeV; some texts erroneously lump the positron annihilation energy in with the beta-decay Q-value and then neglect the equal amount of energy released by annihilation, which can lead to confusion. All values are calculated with reference to the Atomic Mass Evaluation 2003.
The limiting (slowest) reaction in the CNO-I cycle is the proton capture on ¹⁴N. In 2006 it was experimentally measured down to stellar energies, revising the calculated age of globular clusters by around 1 billion years.
The neutrinos emitted in beta decay will have a spectrum of energies, because although momentum is conserved, it can be shared in any way between the positron and the neutrino, with either emitted at rest and the other taking away the full energy, or anything in between, so long as all the energy from the Q-value is used. The total momentum received by the positron and the neutrino is not great enough to cause a significant recoil of the much heavier daughter nucleus, and hence its contribution to the kinetic energy of the products can be neglected at the precision of the values given here. Thus the neutrino emitted during the decay of nitrogen-13 can have an energy from zero up to about 1.20 MeV, and the neutrino emitted during the decay of oxygen-15 can have an energy from zero up to about 1.73 MeV. On average, about 1.7 MeV of the total energy output is taken away by neutrinos for each loop of the cycle, leaving about 25 MeV available for producing luminosity.
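As a cross-check of this bookkeeping, the 26.73 MeV figure follows from the atomic mass defect of four hydrogen-1 atoms relative to one helium-4 atom. Because atomic (rather than nuclear) masses include the electrons, this single number already contains the roughly 2.04 MeV released by the two positron annihilations, which is exactly the accounting subtlety described above. A minimal sketch using rounded standard atomic masses:

```python
# Back-of-envelope check of the 26.73 MeV per cycle using atomic masses.
# Atomic masses include the electrons, so the mass defect of 4 H-1 -> He-4
# already accounts for the 2 x 1.022 MeV from the two positron annihilations.

U_TO_MEV = 931.494      # energy equivalent of one atomic mass unit, in MeV
M_H1 = 1.0078250        # atomic mass of hydrogen-1, in u (rounded)
M_HE4 = 4.0026032       # atomic mass of helium-4, in u (rounded)

q_total = (4 * M_H1 - M_HE4) * U_TO_MEV
print(f"Total energy released per cycle: {q_total:.2f} MeV")   # ~26.73 MeV

# Subtracting the ~1.7 MeV carried off by neutrinos on average (see text)
# leaves roughly 25 MeV per cycle to heat the star and supply luminosity.
print(f"Retained by the star: {q_total - 1.7:.1f} MeV")
```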
CNO-II.
In a minor branch of the above reaction, occurring in the Sun's core 0.04% of the time, the final reaction involving ¹⁵N shown above does not produce carbon-12 and an alpha particle, but instead produces oxygen-16 and a photon, and the cycle continues
In detail:
Like the carbon, nitrogen, and oxygen involved in the main branch, the fluorine produced in the minor branch is merely an intermediate product; at steady state, it does not accumulate in the star.
CNO-III.
This subdominant branch is significant only for massive stars. The reactions are started when one of the reactions in CNO-II results in fluorine-18 and a photon instead of nitrogen-14 and an alpha particle, and continues
In detail:
CNO-IV.
Like the CNO-III, this branch is also only significant in massive stars. The reactions are started when one of the reactions in CNO-III results in fluorine-19 and a photon instead of nitrogen-15 and an alpha particle, and continues
In detail:
In some instances the oxygen-16 produced can capture a helium nucleus to form neon-20, starting a neon-sodium cycle in which successive proton captures and beta decays lead from ²⁰Ne through ²¹Na, ²¹Ne, ²²Na, ²²Ne and ²³Na, with a final proton capture on ²³Na releasing an alpha particle and regenerating ²⁰Ne.
The sodium-23 can also turn into magnesium-24 after proton bombardment, initiating the magnesium-aluminum cycle.
Hot CNO cycles.
Under conditions of higher temperature and pressure, such as those found in novae and X-ray bursts, the rate of proton captures exceeds the rate of beta-decay, pushing the burning to the proton drip line. The essential idea is that a radioactive species will capture a proton before it can beta decay, opening new nuclear burning pathways that are otherwise inaccessible. Because of the higher temperatures involved, these catalytic cycles are typically referred to as the hot CNO cycles; because the timescales are limited by beta decays instead of proton captures, they are also called the beta-limited CNO cycles.
HCNO-I.
The difference between the CNO-I cycle and the HCNO-I cycle is that ¹³N captures a proton instead of decaying, leading to the total sequence
In detail:
HCNO-II.
The notable difference between the CNO-II cycle and the HCNO-II cycle is that ¹⁷F captures a proton instead of decaying, with neon (¹⁸Ne) produced as a result, leading to the total sequence
In detail:
HCNO-III.
An alternative to the HCNO-II cycle is that ¹⁸F captures a proton, moving towards higher mass and using the same helium production mechanism as the CNO-IV cycle, as
In detail:
Use in astronomy.
While the total number of "catalytic" nuclei is conserved in the cycle, in stellar evolution the relative proportions of the nuclei are altered. When the cycle is run to equilibrium, the ratio of carbon-12 to carbon-13 nuclei is driven to 3.5, and nitrogen-14 becomes the most numerous nucleus, regardless of initial composition. During a star's evolution, convective mixing episodes move material within which the CNO cycle has operated from the star's interior to the surface, altering the observed composition of the star. Red giant stars are observed to have lower carbon-12/carbon-13 and carbon-12/nitrogen-14 ratios than do main sequence stars, which is considered to be convincing evidence for the operation of the CNO cycle.
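A naive mixing estimate can make this diagnostic concrete. In the sketch below, the CNO-equilibrium ratio of 3.5 is taken from the text; the roughly solar initial ratio of about 89, the assumption that both components carry the same total amount of carbon, and the mixed fractions are illustrative assumptions rather than results of a stellar evolution calculation.

```python
# Illustration only: how mixing CNO-processed material into a star's
# envelope lowers the observable 12C/13C ratio. The equilibrium ratio 3.5
# is from the text; the initial ratio (~89, roughly solar), the assumption
# of equal total carbon in both components, and the mixed fractions f are
# assumed numbers chosen purely for illustration.

R_ENVELOPE = 89.0    # assumed unprocessed 12C/13C ratio (roughly solar)
R_PROCESSED = 3.5    # CNO-equilibrium 12C/13C ratio (from the text)

def mixed_ratio(f: float) -> float:
    """12C/13C at the surface after mixing in a fraction f of processed gas."""
    c12 = (1 - f) * R_ENVELOPE / (R_ENVELOPE + 1) + f * R_PROCESSED / (R_PROCESSED + 1)
    c13 = (1 - f) / (R_ENVELOPE + 1) + f / (R_PROCESSED + 1)
    return c12 / c13

for f in (0.0, 0.1, 0.2, 0.5):
    print(f"mixed fraction {f:.1f}:  12C/13C = {mixed_ratio(f):5.1f}")
```

Even a modest admixture of processed material drags the ratio well below the unprocessed value, which is why the lowered ratios seen in red giants are taken as a signature of CNO processing combined with convective dredge-up.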
|
6062
|
5217210
|
https://en.wikipedia.org/wiki?curid=6062
|
Craps
|
Craps is a dice game in which players bet on the outcomes of the roll of a pair of dice. Players can wager money against each other (playing "street craps") or against a bank ("casino craps"). Because it requires little equipment, "street craps" can be played in informal settings. While shooting craps, players may use slang terminology to place bets and actions.
History.
Craps developed in the United States from a simplification of the western European game of Hazard, also spelled Hazzard or Hasard. The origins of Hazard are obscure and may date to the Crusades; a detailed description of Hazard was provided by Edmond Hoyle in "Hoyle's Games, Improved" (1790). At approximately the same time (1788), "Krabs" was documented as a French variation on Hazard.
In aristocratic London, crabs was the epithet for the sum combinations of two and three for two rolled dice, which in Hazard are instant-losing numbers for the first dice roll, regardless of the shooter's selected main number. The name craps is derived from the corruption of this term crabs (or Krabs) to creps and then craps.
According to some accounts, Hazard was brought from London to New Orleans in approximately 1805 by the returning Bernard Xavier Philippe de Marigny de Mandeville, the young gambler and scion of a family of wealthy landowners in colonial Louisiana. Hazard allows the dice shooter to choose any number from five to nine as their "main" number; in a pamphlet published in 1933, Edward Tinker claimed that Marigny simplified the game by making the main always seven, which is the mathematically optimal choice, i.e., the choice with the lowest disadvantage for the shooter. However, more recent research indicates that Marigny played an unmodified version of Hazard, which had been played in America since at least the 1600s. Instead, John Scarne credits anonymous Black American inventors with simplifying and streamlining Hazard, increasing the pace of the game and adding a variety of wagers.
Regardless of who deserves credit for simplifying Hazard, the game initially was called Pass from the French word "pas" (meaning "pace" or "step"), and was popularized by the underclass starting in the early 19th century. Field hands taught their friends and deckhands, who carried the new game up the Mississippi River and its tributaries, although the game was never popular amongst the riverboat gamblers. Marigny gave the name Rue de Craps to a street in his new subdivision in New Orleans; in that city, craps experienced a resurgence of popularity in the late 1830s, but was not played in gaming houses until the 1890s. Budd Theobald credits the cultural exchange between attendants and railroad passengers on Pullman cars for popularizing the game, which eventually spread throughout America by the 1910s, when it was described as "the gambling game of [the country]" in "Foster's Complete Hoyle" (1914).
The craps numbers of 2, 3, and 12 are similarly derived from Hazard. If the main is seven, then the two-dice sum of twelve is added to the crabs as a losing number on the first dice roll. This condition is retained in the simplified game called Pass. All three losing numbers (2, 3, and 12) on the first roll of Pass are jointly called the craps numbers. The central game Pass gradually has been supplemented over the decades by many companion games and wagers which can be played simultaneously with Pass; these are now collectively known as craps.
Early versions of bank craps played in casinos made money either by charging a commission to shooters or offering short odds on the various wagers, primarily on the "Pass line" bet for the shooter to win against the house. In approximately 1907, a dicemaker named John H. Winn in Philadelphia introduced a layout which featured a space to wager on "Don't Pass" (i.e., for the shooter to lose) in addition to "Pass". Virtually all modern casinos use his innovation, which incentivizes casinos to use fair dice. As introduced by Winn, "Don't Pass" bets were taken with a 5 percent commission to ensure the house retained an edge in running the game; this was replaced by the Bar-3 push for "Don't Pass", and later by the Bar-12 (or Bar-2) push.
Craps exploded in popularity during World War II, which brought most young American men of every social class into the military. The street version of craps was popular among service members who often played it using a blanket as a shooting surface. Their military memories led to craps becoming the dominant casino game in postwar Las Vegas and the Caribbean.
After 1960, a few casinos in Europe, Australia, and Macau began offering craps, and, after 2004, online casinos extended the game's spread globally. Craps has also been featured in a number of newer casinos, including in formerly unavailable locales along the coastline.
Bank craps.
Bank craps or casino craps is played by one or more players betting against the casino rather than each other. Both the players and the dealers stand around a large rectangular craps table. Sitting is discouraged by most casinos unless a player has medical reasons for requiring a seat.
The basic flow of a single game is as follows: the shooter makes a come-out roll, which either resolves the Pass and Don't Pass bets immediately or establishes a point; if a point is established, the shooter continues rolling until either the point is repeated (Pass wins) or a seven appears (Don't Pass wins), after which a new round begins.
Craps table.
Players use casino chips rather than cash to bet on the Craps "layout", a fabric surface which displays the various bets. The bets vary somewhat among casinos in availability, locations, and payouts. The tables roughly resemble bathtubs and come in various sizes. In some locations, chips may be called checks, tokens, or plaques.
Against one long side is the casino's table bank: as many as two thousand casino chips in stacks of 20. The opposite long side is usually a long mirror. The U-shaped ends of the table have duplicate layouts and standing room for approximately eight players. In the center of the layout is an additional group of side bets which are used by players from both ends. The vertical walls at each end are usually covered with a rubberized target surface embossed with small pyramid shapes that randomize the dice which strike them. The top edges of the table walls have one or two horizontal grooves in which players may store their reserve chips.
The table is run by up to four casino employees: a boxman seated (usually the only seated employee) behind the casino's bank, who manages the chips, supervises the dealers, and handles "coloring up" players (exchanging small chip denominations for larger denominations in order to preserve the chips at a table); two base dealers who stand to either side of the boxman and collect and pay bets to players around their half of the table; and a stickman who stands directly across the table from the boxman, takes and pays (or directs the base dealers to do so) the bets in the center of the table, announces the results of each roll (usually with a distinctive patter), and moves the dice across the layout with an elongated wooden stick.
Each employee also watches for mistakes by the others because of the sometimes large number of bets and frantic pace of the game. In smaller casinos or at quiet times of day, one or more of these positions may be left unstaffed, with the duties covered by another employee or the table's player capacity reduced accordingly.
Some smaller casinos have introduced "mini-craps" tables which are operated with only two dealers; rather than being two essentially identical sides and the center area, a single set of major bets is presented, split by the center bets. Responsibility of the dealers is adjusted: while the stickman continues to handle the center bets, it is the base dealer who handles all other bets (as well as cash and chip exchanges).
By contrast, in "street craps", there is no marked table and often the game is played with no back-stop against which the dice are to hit. Despite the name "street craps", this game is often played in houses, usually on an un-carpeted garage or kitchen floor. The wagers are made in cash, never in chips, and are usually thrown down onto the ground or floor by the players. There are no attendants, and so the progress of the game, fairness of the throws, and the way that the payouts are made for winning bets are self-policed by the players.
Dice.
The dice used at casinos for craps and many other games are sometimes called "perfect" or "gambling house dice". These are generally made from translucent extruded cellulose, with perfectly square edges each in length, with pips drilled deep and filled with opaque paint matching the density of cellulose, which ensures the dice remain balanced. The dice are buffed and polished to a high glossy finish after the pips are set, and the edges usually are left sharp, also called square or razor edge. To discourage cheating and dice substitution, each die carries a serial number and the casino's logo or name. New Jersey specifies the maximum size of the die is on a side.
Under New Jersey regulations, the shooter selects two dice from a set of at least five.
Rules of play.
Each casino may set which bets are offered and different payouts for them, though a core set of bets and payouts is typical. Players take turns rolling two dice and whoever is throwing the dice is called the "shooter". Players can bet on the various options by placing chips directly on the appropriately-marked sections of the layout, or asking the base dealer or stickman to do so, depending on which bet is being made.
While acting as the shooter, a player must have a bet on either the "Pass" or the "Don't Pass" line or both. "Pass" and "Don't Pass" are sometimes called "Win" and "Lose", "Do" and "Don't", or "Right" and "Wrong". The game is played in rounds and these "Pass" and "Don't Pass" bets are betting on the outcome of a single round. The shooter is presented with multiple dice (typically five) by the "stickman", and must choose two for the round. The remaining dice are returned to the stickman's bowl and are not used.
Each round has two phases: "come-out" and "point". Dice are passed to the left.
Phase 1 (Come-out).
To start a round, the shooter makes one or more "come-out" rolls. While the come-out roll may specifically refer to the first roll of a new shooter, any roll where no point is established may be referred to as a come-out; by this definition, the start of any new round, regardless of whether it is the shooter's first toss, can be referred to as a come-out roll. The shooter must shoot toward the far back wall and is generally required to hit it with both dice. Casinos may allow a few warnings before requiring that the dice hit the back wall, and are generally lenient if at least one die does so. Both dice must be tossed in one throw; if only one die is thrown, the shot is invalid.
A come-out roll of 2, 3, or 12 is called "craps" or "crapping out", and anyone betting the Pass line loses. On the other hand, anyone betting the Don't Pass line on come out wins with a roll of 2 or 3 and ties (pushes) if a 12 is rolled; in some rules, the 2 pushes instead of the 12, in which case the 3 and 12 win a Don't Pass bet. Shooters may keep rolling after crapping out; the dice are only required to be passed if a shooter sevens out (rolls a seven after a point has been established). A come-out roll of 7 or 11 is a "natural"; the Pass line wins and Don't Pass loses. The other possible numbers are the point numbers: 4, 5, 6, 8, 9, and 10. If the shooter rolls one of these numbers on the come-out roll, this establishes the "point" – to "pass" or "win", the point number must be rolled again before a seven.
Phase 2 (Point).
The dealer flips a button to the "On" side and moves it to the point number signifying the second phase of the round. If the shooter "hits" the point value again (any value of the dice that sum to the point will do; the shooter does not have to exactly repeat the exact combination of the come-out roll) before rolling a seven, the Pass line wins and a new round starts. If the shooter rolls any seven before repeating the point number (a "seven-out"), the Pass line loses, the Don't Pass line wins, and the dice pass clockwise to the next new shooter for the next round. Once a point has been established, any multi-roll bets (including line bets and odds for Pass, Don't Pass, or both) are unaffected by the 2, 3, 11, or 12; the only numbers which affect the round are the established point, any specific bet on a number, or any 7. Any single roll bet is always affected (win or lose) by the outcome of any roll.
Basic wagering rules.
Any player can make a bet on Pass or Don't Pass as long as a point has not been established, or Come or Don't Come as long as a point is established. All other bets, including an increase in odds behind the Pass and Don't Pass lines, may be made at any time. All bets other than Pass line and Come may be removed or reduced any time before the bet loses. This is known as "taking it down" in craps.
The maximum bet for Place, Buy, Lay, Pass, and Come bets is generally equal to the table maximum. The Lay bet maximum is equal to the table maximum win, so players wishing to lay the 4 or 10 may bet twice the table maximum so that the potential win equals the table maximum. Odds behind Pass, Come, Don't Pass, and Don't Come may be as large as the offered odds multiple allows and can be greater than the table maximum in some casinos. Don't odds are capped on the maximum allowed win. Some casinos allow the odds bet itself to be larger than the maximum bet allowed as long as the win is capped at maximum odds. Single-roll bets can be lower than the table minimum, but the maximum bet allowed is also lower than the table maximum; the maximum allowed single-roll bet is based on the maximum allowed win from a single roll.
In all the above scenarios, whenever the Pass line wins, the Don't Pass line loses, and vice versa, with one exception: on the come-out roll, a roll of 12 will cause Pass Line bets to lose, but Don't Pass bets are pushed (or "barred"), neither winning nor losing; this is done to establish a house edge for Don't Pass bets. (The same applies to "Come" and "Don't Come" bets, discussed below.)
Joining a game.
A player wishing to play craps without being the shooter should approach the craps table and first check to see if the dealer's "On" button is on any of the point numbers.
In either case, all single or multi-roll proposition bets may be placed in either of the two phases.
Between dice rolls there is a period for dealers to make payouts and collect losing bets, after which players can place new bets. The stickman monitors the action at a table and decides when to give the shooter the dice, after which no more betting is allowed.
When joining the game, one should place money on the table rather than passing it directly to a dealer. The dealer's exaggerated movements during the process of "making change" or "change only" (converting currency to an equivalent in casino cheques) are required so that any disputes can be later reviewed against security camera footage.
Rolling.
The dealers will insist that the shooter roll with one hand and that the dice bounce off the far wall surrounding the table. These requirements are meant to keep the game fair (preventing switching the dice or making a "controlled shot"). If a die leaves the table, the shooter will usually be asked to select another die from the remaining three but can request permission to use the same die if it passes the boxman's inspection. This requirement exists to keep the game fair and reduce the chance of loaded dice.
Names of rolls.
There are many local variants of the calls made by the stickman for rolls during a craps game. These frequently incorporate a reminder to the dealers as to which bets to pay or collect.
Two is known as "snake eyes" because the two ones that compose it look like a pair of small, beady eyes. During actual play, more common terms are "two craps two" during the comeout roll, because the Pass line bet is lost on a comeout crap roll and/or because a bet on any craps would win. "Aces; double the field" would be a more common call when not on the comeout roll, to remind the dealers to pay double on the field bets and encourage the field bettor to place subsequent bets, and/or when no crap bets have been placed. Other names for the two are "loose deuce" and "Snickies", the latter due to it sounding like "snake eyes" spoken with an accent.
Three is typically called as "three craps three" during the comeout roll, or "three, ace deuce, come away single" when not on the comeout, to signify that the come bet has been lost and to pay single to any field bettors. Three may also be referred to as "ace caught a deuce", "Tracy", or, even less often, "acey deucey".
Four, usually hard, is sometimes referred to as "Little Joe from Kokomo", "Little Joe on the front row", or just "Little Joe". A hard four can be called a "ballerina" because it is two-two ("tutu").
Five is frequently called "no field five" in casinos in which five is not one of the field rolls and thus not paid in the field bets. Other names for a five are "fever" and "little Phoebe".
Six may be referred to as "Jimmie Hicks" or "Jimmie Hicks from the sticks", examples of rhyming slang. On a win, the six is often called "666 winner 6" followed by "came hard" or "came easy".
Seven rolled as 6–1 is sometimes called "six ace" or "up pops the Devil". Older dealers and players may use the term "Big Red" because craps tables once prominently featured a large red "7" in the center of the layout for the one-roll seven bet. During the comeout, the seven is called "seven, front line winner", frequently followed by "pay the line" and/or "take the don'ts". After the point is established, a seven is typically called simply "7 out" or "7 out 7".
Eight rolled the hard way, as opposed to an "easy eight", is sometimes called an "eighter from Decatur". It can also be known as a "square pair", "mom and dad", or "Ozzie and Harriet".
is called a "centerfield nine" in casinos in which nine is one of the field rolls, because nine is the center number shown on the layout in such casinos (2–3–4–9–10–11–12). In Atlantic City, a 4–5 is called a "railroad nine". The 4–5 nine is also known as "Jesse James" because the outlaw Jesse James was killed by a .45 caliber pistol. Other names for the nine include "Nina from Pasadena", "Nina at the Marina", and "niner from Carolina". Nine can also be referred to as "Old Mike", named after NBA Hall-of-Famer Michael Jordan, who wore No. 9 in his FIBA international career, when players could only wear numbers 4 to 15.
Ten rolled the hard way is "a hard ten", "dos equis" (Spanish, meaning "two X's", because the pip arrangement on both dice on this roll resembles "XX"), or "Hard ten – a woman's best friend", an example of both rhyming slang and sexual double entendre. Ten as a pair of 5's may also be known as "puppy paws", "a pair of sunflowers", "Big Dick", or "Big John". Another slang term for a hard ten is "moose head", because it resembles a moose's antlers; this phrase came from players in the Pittsburgh area.
called out as "yo" or "yo-leven" to prevent being misheard as "seven". An older term for eleven is "six five, no jive" because it is a winning roll. During the comeout, eleven is typically followed by "front line winner". After the point is established, "good field and come" is often added.
known as "boxcars" because the spots on the two dice that show 6–6 look like schematic drawings of railroad boxcars; it is also called "midnight", referring to twelve o'clock; and also as "double-action field traction", because of the (standard) 2-to-1 pay on Field bets for this roll and the fact that the arrangement of the pips on the two dice, when laid end-to-end, resemble tire tracks. On tables that pay triple the field on a twelve roll, the stickman will often loudly exclaim "triple" either alone or in combination with "12 craps 12" or "come away triple".
Rolls of 4, 6, 8, and 10 are called "hard" or "gag", when rolled as a double, or "easy", when rolled with two different numbers. For example, rolls will be called "six the hard way", "easy eight", "hard ten", etc., because of their significance in center table bets known as the "hard ways". Hard way rolls are so named because there is only one way to roll them (i.e., the value on each die is the same when the number is rolled). Consequently, it is more likely to roll the number in different-number combinations (easy) rather than as a double (hard).
Note: Individual casinos may pay some of these bets at different payout ratios than those listed below. Some bets are listed more than once below – the most common payout in North American casinos is listed first, followed by other known variants.
Note: "True Odds" do not vary.
Bet odds and summary.
The probability of each dice combination determines the odds of the payout. There are a total of 36 (6 × 6) possible combinations when rolling two dice. The following chart shows the dice combinations needed to roll each number. The two and twelve are the hardest to roll since only one combination of dice is possible for each. The game of craps is built around the dice roll of seven, since it is the most easily rolled dice combination.
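Because these combination counts drive every payout in the game, they are easy to verify directly. The following minimal Python sketch (an illustration only, not part of any casino procedure) enumerates the 36 equally likely outcomes and counts the ways to make each total:

from collections import Counter

# Count how many of the 36 equally likely two-die combinations produce each total.
ways = Counter(a + b for a in range(1, 7) for b in range(1, 7))
for total in range(2, 13):
    print(f"{total:2d}: {ways[total]} way(s), probability {ways[total]}/36")

Running this shows six ways to make a seven and only one way each for two and twelve, matching the statements above.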
The expected value of all bets is usually negative, such that the average player will always lose money. This is because the house always sets the paid odds below the actual odds. The only exception is the "odds" bet that the player is allowed to make after a point is established on a Pass/Come or Don't Pass/Don't Come bet (the odds portion of the bet has a long-term expected value of 0). However, this "free odds" bet cannot be made independently, so the expected value of the entire bet, including odds, is still negative. Since there is no correlation between die rolls, there is normally no possible long-term winning strategy in craps.
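As an illustration of how the paid odds fall short of the true odds, the Pass line expectation can be derived from the combination counts alone. The short Python sketch below is an informal check, not a betting tool; it conditions on the come-out roll and reproduces the standard result of a 244/495 win probability and a house edge of roughly 1.41 percent on the flat Pass line bet:

from fractions import Fraction

ways = {t: sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == t) for t in range(2, 13)}
p = lambda t: Fraction(ways[t], 36)

win = p(7) + p(11)                       # come-out naturals win immediately
for point in (4, 5, 6, 8, 9, 10):        # otherwise the point must repeat before a 7
    win += p(point) * Fraction(ways[point], ways[point] + ways[7])

print(win)                  # 244/495, about 49.29%
print(float(1 - 2 * win))   # house edge on an even-money bet: 7/495, about 1.41%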
There are occasional promotional variants that provide either no house edge or even a player edge. One example is a field bet that pays 3:1 on 12 and 2:1 on either 3 or 11. Overall, given the 5:4 true odds of this bet, and the weighted average paid odds of approximately 7:5, the player has a 5% advantage on this bet. This is sometimes seen at casinos running limited-time incentives, in jurisdictions or gaming houses that require the game to be fair, or in layouts for use in informal settings using play money. No casino currently runs a craps table with a bet that yields a player edge full-time.
Maximizing the size of the odds bet in relation to the line bet will reduce, but never eliminate the house edge, and will increase variance. Most casinos have a limit on how large the odds bet can be in relation to the line bet, with single, double, and five times odds common. Some casinos offer 3–4–5 odds, referring to the maximum multiple of the line bet a player can place in odds for the points of 4 and 10, 5 and 9, and 6 and 8, respectively. During promotional periods, a casino may even offer 100× odds bets, which reduces the house edge to almost nothing, but dramatically increases variance, as the player will be betting in large betting units.
Since several of the multiple roll bets pay off in ratios of fractions on the dollar, it is important that the player bets in multiples that will allow a correct payoff in complete dollars. Normally, payoffs will be rounded down to the nearest dollar, resulting in a higher house advantage. These bets include all place bets, taking odds, and buying on numbers 6, 8, 5, and 9, as well as laying all numbers.
Types of wagers.
Line bets.
The shooter is required to make either a Pass line bet or a Don't Pass bet if he wants to shoot. On the come-out roll each player may only make one bet on the Pass or Don't Pass, but may bet both if desired. The Pass line and Don't Pass bets are optional for any player not shooting. In rare cases, some casinos require all players to make a minimum Pass line or Don't Pass bet (if they want to make any other bet), whether they are currently shooting or not.
Pass line.
The basic bet in craps is the Pass line bet, which is a bet for the shooter to win. This bet must be at least the table minimum and at most the table maximum.
The Pass line bet pays even money.
The Pass line bet is a contract bet. Once a Pass line bet is made, it is always working and cannot be turned "Off", taken down, or reduced until a decision is reached – the point is made, or the shooter sevens out. A player may increase any corresponding odds (up to the table limit) behind the Pass line at any time after a point is established. Players may only bet the Pass line on the come out roll when no point has been established, unless the casino allows put betting where the player can bet Pass line or increase an existing Pass line bet whenever desired and may take odds immediately if the point is already on.
Don't Pass.
A Don't Pass bet is a bet for the shooter to lose ("seven out, line away") and is almost the opposite of the Pass line bet. Like the Pass bet, this bet must be at least the table minimum and at most the table maximum.
The Don't Pass bet pays even money.
The Don't Pass bet is a no-contract bet. After a point is established, a player may take down or reduce a Don't Pass bet and any corresponding odds at any time because the odds of rolling a 7 before the point are in the player's favor. Once taken down or reduced, however, the Don't Pass bet may not be restored or increased. Because the shooter must have a line bet, the shooter generally may not reduce a Don't Pass bet below the table minimum. In Las Vegas, a majority of casinos will allow the shooter to move the bet to the Pass line in lieu of taking it down; however, in other areas such as Pennsylvania and Atlantic City, this is not allowed. Even though players are allowed to remove the Don't Pass line bet after a point has been established, the bet cannot be turned "Off" without being removed. Players choosing to remove the Don't Pass line bet can no longer lay odds behind the Don't Pass line. The player can, however, still make standard lay bets on any of the point numbers (4, 5, 6, 8, 9, 10).
The casino chooses either Bar-2 or Bar-12, but not both. The push on 12 or 2 is mathematically necessary to maintain the house edge over the player. Other casinos allow the player to choose to either push on 2 ("Bar Aces") or push on 12 ("Bar Sixes") depending on where it is placed on the layout. Some older bank crap games used Bar-3 ("Bar Ace-Deuce"), which increases the house edge.
There are two different ways to calculate the odds and house edge of this bet. The summary table gives the numbers considering that the game ends in a push when a 12 is rolled, rather than being undetermined. Betting on Don't Pass is often called "playing the dark side", and it is considered by some players to be in poor taste, or even taboo, because it goes directly against conventional play, winning when most of the players lose.
Pass odds.
If a 4, 5, 6, 8, 9, or 10 is thrown on the come-out roll (i.e., when a point is established), most casinos allow Pass line players to take odds by placing up to some predetermined multiple of the Pass line bet, behind the Pass line. This additional bet wins if the point is rolled again before a 7 is rolled (the point is made) and pays at the true odds: 2 to 1 if the point is 4 or 10, 3 to 2 if the point is 5 or 9, and 6 to 5 if the point is 6 or 8.
Unlike the Pass line bet itself, the Pass line odds bet can be turned "Off" (not working), removed, or reduced anytime before it loses. In Las Vegas, generally odds bets are required to be the table minimum. In Atlantic City and Pennsylvania, the combined odds and Pass bet must be table minimum, so players can bet the minimum single unit on odds depending on the point. If the point is a 4 or 10, players can bet as little as $1 on odds if the table minimum is low, such as $5, $10, or $15. If the player requests the Pass odds be not working ("Off") and the shooter sevens-out or hits the point, the Pass line bet will be lost or doubled and the Pass odds returned.
Individual casinos (and sometimes tables within a casino) vary greatly in the maximum odds they offer, from single or double odds (one or two times the Pass line bet) up to 100× or even unlimited odds. A variation often seen is "3-4-5× Odds", where the maximum allowed odds bet depends on the point: three times if the point is 4 or 10; four times on points of 5 or 9; or five times on points of 6 or 8. This rule simplifies the calculation of winnings: a maximum Pass odds bet on a 3–4–5× table will always be paid at six times the Pass line bet regardless of the point.
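The arithmetic behind the 3–4–5× rule can be checked in a few lines. The sketch below assumes a hypothetical $10 flat bet and uses the true-odds payouts listed above; it confirms that the maximum odds win is the same regardless of the point:

# True odds for the Pass odds bet follow from the combination counts:
# points 4 and 10 pay 2:1, points 5 and 9 pay 3:2, points 6 and 8 pay 6:5.
true_odds = {4: (2, 1), 10: (2, 1), 5: (3, 2), 9: (3, 2), 6: (6, 5), 8: (6, 5)}
max_multiple = {4: 3, 10: 3, 5: 4, 9: 4, 6: 5, 8: 5}   # the 3-4-5x rule

line_bet = 10                      # hypothetical $10 Pass line bet
for point, mult in max_multiple.items():
    num, den = true_odds[point]
    odds_win = mult * line_bet * num / den
    print(point, odds_win)         # 60.0 for every point: six times the line bet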
As odds bets are paid at true odds, in contrast with the Pass line which is always even money, taking odds on a minimum Pass line bet lessens the house advantage compared with betting the same total amount on the Pass line only. A maximum odds bet on a minimum Pass line bet often gives the lowest house edge available in any game in the casino. However, the odds bet cannot be made independently, so the house retains an edge on the Pass line bet itself.
Don't Pass odds.
If a player is playing Don't Pass instead of Pass, they also may lay odds by placing chips behind the Don't Pass line. If a 7 comes before the point is rolled, the Don't Pass odds pay at true odds: 1 to 2 against a point of 4 or 10, 2 to 3 against 5 or 9, and 5 to 6 against 6 or 8.
Typically the maximum lay bet will be expressed such that a player may win up to an amount equal to the maximum odds multiple at the table. If a player lays maximum odds with a point of 4 or 10 on a table offering five-times odds, he would be able to lay a maximum of ten times the amount of his Don't Pass bet. At a 5× odds table, the maximum amount the combined bet can win will always be 6× the amount of the Don't Pass bet. Players can bet table minimum odds if desired and win less than table minimum.
Like the Don't Pass bet, the odds can be removed or reduced. Unlike the Don't Pass bet itself, the Don't Pass odds can be turned "Off" (not working). In Las Vegas, generally odds bets are required to be the table minimum. In Atlantic City and Pennsylvania, the combined lay odds and Don't Pass bet must be table minimum, so players may bet as little as the minimum two units on odds depending on the point. If the point is a 4 or 10, players can bet as little as $2 if the table minimum is low, such as at $5, $10, or $15 tables. If the player requests the Don't Pass odds to be not working ("Off") and the shooter hits the point or sevens-out, the Don't Pass bet will be lost or doubled and the Don't Pass odds returned. Unlike a standard lay bet on a point, lay odds behind the Don't Pass line do not charge commission (vig).
Come bet.
A player making a Come bet is wagering on the first number that "comes" from the shooter's next roll, regardless of the table's phase. In other words, a Come bet can be considered as starting an entirely new Pass line bet, unique to that player.
The Come bet pays off at even money, like the Pass line bet.
Come bets can only be made after a point has been established since, on the come-out roll, a Come bet would be the same as a Pass line bet. Like the Pass line bet, each player may only make one Come bet per roll; this does not exclude a player from betting odds on an already established come-bet point. The Come bet must be at least the table minimum and at most the table maximum. Players may bet both the Come and Don't Come on the same roll if desired.
Also like a Pass line bet, the Come bet is a contract bet and is always working, and cannot be turned "Off", removed, or reduced until it wins or loses. However, the odds taken behind a Come bet can be turned "Off" (not working), removed, or reduced anytime before the bet loses. In Las Vegas, generally odds bets are required to be the table minimum. In Atlantic City and Pennsylvania, the combined odds and Come bet must be table minimum, so players can bet the minimum single unit depending on the point. If the point is a 4 or 10, players can bet as little as $1 if the table minimum is low, such as $5, $10, or $15 minimums. If the player requests the Come odds to be not working ("Off") and the shooter sevens-out or hits the Come bet point, the Come bet will be lost or doubled and the Come odds returned. If the casino allows put betting, a player may increase a Come bet after a point has been established and bet larger odds behind if desired. Put betting also allows a player to bet on a Come and take odds immediately on a point number without a Come bet point being established.
The dealer will place the odds on top of the come bet, but slightly off center in order to differentiate between the original bet and the odds. The second round wins if the shooter rolls the come bet point again before a seven. Winning come bets are paid the same as winning Pass line bets: even money for the original bet and true odds for the odds bet. If, instead, the seven is rolled before the come-bet point, the come bet (and any odds bet) loses.
Because of the come bet, if the shooter makes their point, a player can find themselves in the situation where they still have a come bet (possibly with odds on it) and the next roll is a come-out roll. In this situation, odds bets on the come wagers are usually presumed to be not working for the come-out roll. That means that if the shooter rolls a 7 on the come-out roll, any players with active come bets waiting for a come-bet point lose their initial wager but will have their odds bets returned to them.
If the come-bet point is rolled on the come-out roll, the odds do not win but the come bet does and the odds bet is returned (along with the come bet and its payoff). The player can tell the dealer that they want their odds working, such that if the shooter rolls a number that matches the come point, the odds bet will win along with the come bet, and if a seven is rolled, both lose.
Many players will use a come bet as "insurance" against sevening out: if the shooter rolls a seven, the come bet pays 1:1, offsetting the loss of the Pass line bet. The risk in this strategy is the situation where the shooter does not hit a seven for several rolls, leading to multiple come bets that will be lost if the shooter eventually sevens out.
Don't Come bet.
In the same way that a Come bet is similar to a Pass line bet, a Don't Come bet is similar to a Don't Pass bet. Like the Come, the Don't Come can only be bet after a point has already been established as it is the same as a Don't Pass line bet when no point is established. This bet must be at least the table minimum and at most the table maximum. A Don't Come bet is played in two phases, just like the Don't Pass line bet.
Like the Don't Pass, each player may only make one Don't Come bet per roll; this does not exclude a player from laying odds on already established Don't Come points. Players may bet both the Don't Come and Come on the same roll if desired.
The player may lay odds on a Don't Come bet, just like a Don't Pass bet; in this case, the dealer (not the player) places the odds bet on top of the bet in the box, because of limited space, slightly offset to signify that it is an odds bet and not part of the original Don't Come bet. Lay odds behind a Don't Come are subject to the same rules as Don't Pass lay odds. Unlike a standard lay bet on a point, lay odds behind a Don't Come point do not charge commission (vig) and give the player true odds. Like the Don't Pass line bet, Don't Come bets are no-contract, and can be removed or reduced after a Don't Come point has been established, but cannot be turned off ("not working") without being removed. A player may also call "No Action" when a point is established, and the bet will not be moved to its point; this play is not to the player's advantage. If the bet is removed, the player can no longer lay odds behind the Don't Come point and cannot restore or increase the same Don't Come bet. Players must wait until the next roll as long as a Pass line point has been established (players cannot bet Don't Come on come-out rolls) before they can make a new Don't Come bet. Las Vegas casinos which allow put betting allow players to move the Don't Come directly to any Come point as a put; however, this is not allowed in Atlantic City or Pennsylvania. Unlike the Don't Come bet itself, the Don't Come odds can be turned "Off" (not working), removed, or reduced if desired. In Las Vegas, players generally must lay at least table minimum on odds, even though the resulting win may be less than table minimum; in Atlantic City and Pennsylvania a player's combined bet must be at least table minimum, so depending on the point number players may lay as little as 2 minimum units (e.g. if the point is 4 or 10). If the player requests the Don't Come odds be not working ("Off") and the shooter hits the Don't Come point or sevens-out, the Don't Come bet will be lost or doubled and the Don't Come odds returned.
Winning Don't Come bets are paid the same as winning Don't Pass bets: even money for the original bet and true odds for the odds lay. Unlike come bets, the odds laid behind points established by Don't Come bets are always working including come out rolls unless the player specifies otherwise.
Multi-roll bets.
These are bets that may not be settled on the first roll and may need one or more subsequent rolls before an outcome is determined.
Most multi-roll bets may fall into the situation where a point is made by the shooter before the outcome of the multi-roll bet is decided. These bets are often considered "not working" on the new come-out roll until the next point is established, unless the player calls the bet as "working."
Casino rules vary on this; some of these bets may not be callable, while others may be considered "working" during the come-out. Dealers will usually announce if bets are working unless otherwise called off. If the number of a non-working place, buy, or lay bet becomes the new point as the result of a come-out, the bet is usually refunded, or can be moved to another number for free.
Place.
Players can bet any point number (4, 5, 6, 8, 9, 10) by placing their wager in the come area and telling the dealer how much and on what number(s), "30 on the 6", "5 on the 5", or "25 on the 10". These are typically "Place Bets to Win". These are bets that the number bet on will be rolled before a 7 is rolled, similar to the Pass odds bets. These bets are considered working bets, and will continue to be paid out each time a shooter rolls the number bet. On a come-out roll, a place bet is considered to be not in effect unless the player who made it specifies otherwise. This bet may be removed or reduced at any time until it loses; in the latter case, the player must abide by any table minimums.
Place bets to win pay out at slightly worse than the true odds: 9-to-5 on points 4 or 10, 7-to-5 on points 5 or 9, and 7-to-6 on points 6 or 8. The place bets on the outside numbers (4,5,9,10) should be made in units of $5, (on a $5 minimum table), in order to receive the correct exact payout of $5 paying $7 or $5 paying $9. The place bets on the 6 & 8 should be made in units of $6, (on a $5 minimum table), in order to receive the correct exact payout of $6 paying $7. For the 4 and 10, it is to the player's advantage to 'buy' the bet (see below).
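The house edges of the place-to-win bets follow directly from the payouts just listed. The following Python sketch is illustrative only; it compares each payout with the probability of rolling the number before a 7:

from fractions import Fraction

ways = {4: 3, 5: 4, 6: 5, 8: 5, 9: 4, 10: 3}            # ways to roll each point number
payout = {4: Fraction(9, 5), 10: Fraction(9, 5),
          5: Fraction(7, 5), 9: Fraction(7, 5),
          6: Fraction(7, 6), 8: Fraction(7, 6)}

for point in (4, 5, 6, 8, 9, 10):
    p_win = Fraction(ways[point], ways[point] + 6)      # the number before any of the 6 sevens
    ev = p_win * payout[point] - (1 - p_win)            # expectation per unit wagered
    print(point, float(-ev))   # house edge: about 6.67% on 4/10, 4.00% on 5/9, 1.52% on 6/8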
An alternative form, rarely offered by casinos, is the "place bet to lose." This bet is the opposite of the place bet to win and pays off if a 7 is rolled before the specific point number. The place bet to lose typically carries a lower house edge than a place bet to win. Payouts are 4-to-5 on points 6 or 8, 5-to-8 on 5 or 9, and 5-to-11 on 4 or 10.
Buy.
Players can also buy a bet, which is paid at true odds, but a 5% commission is charged on the amount of the bet. A buy bet is a wager that a specific point number will be rolled before the shooter sevens out. The buy bet must be at least table minimum excluding commission; however, some casinos require the minimum buy bet amount to be at least $20 to match the $1 charged on the 5% commission. Traditionally, the buy bet commission is paid no matter what, but in recent years a number of casinos have changed their policy to charge the commission only when the buy bet wins. Some casinos charge the commission as a one-time fee to buy the number; payouts are then always at true odds. Most casinos usually charge only $1 for a $25 green-chip bet (4% commission), or $2 for $50 (two green chips), reducing the house advantage a bit more. Players may remove or reduce this bet (the bet must remain at least table minimum excluding vig) anytime before it loses. Buy bets, like place bets, are not working when no point has been established unless the player specifies otherwise.
Where commission is charged only on wins, the commission is often deducted from the winning payoff—a winning $25 buy bet on the 10 would pay $49, for instance. The house edges stated in the table assume the commission is charged on all bets. They are reduced by at least a factor of two if commission is charged on winning bets only.
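A small worked example shows how charging the commission only on wins shrinks the edge. The sketch below assumes a $20 buy bet on the 4 with a $1 (5%) commission, as described above, and is an informal check rather than a statement of any particular casino's rules:

from fractions import Fraction

p_win = Fraction(3, 9)            # a 4 (3 ways) rolls before a 7 (6 ways)
bet, vig = 20, 1                  # hypothetical $20 buy bet with a $1 commission
win = 2 * bet                     # paid at true odds of 2:1

ev_upfront = p_win * win - (1 - p_win) * bet - vig      # commission paid win or lose
ev_on_win = p_win * (win - vig) - (1 - p_win) * bet     # commission deducted from wins only

print(float(ev_upfront / (bet + vig)))   # about -4.8% of the $21 outlay
print(float(ev_on_win / bet))            # about -1.7% of the $20 bet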
Lay.
A lay bet is the opposite of a buy bet, where a player bets on a 7 to roll before the number that is laid. Players may only lay the 4, 5, 6, 8, 9, or 10 and may lay multiple numbers if desired. Just like the buy bet, lay bets pay true odds, but because the lay bet is the opposite of the buy bet, the payout is reversed: players get 1 to 2 for the numbers 4 and 10, 2 to 3 for the numbers 5 and 9, and 5 to 6 for the numbers 6 and 8. A 5% commission (vigorish, vig, juice) is charged up front on the possible winning amount. For example, a $40 lay bet on the 4 would pay $20 on a win; the 5% vig would be $1, based on the $20 win (not $2 based on the $40 bet, as buy bet commissions are figured). Like the buy bet, the commission is adjusted to suit the betting unit so that fraction-of-a-dollar payouts are not needed. Casinos may charge the vig up front, thereby requiring the player to pay a vig win or lose; other casinos may only take the vig if the bet wins, which lowers the house edge. Players may remove or reduce this bet (the bet must remain at least table minimum) anytime before it loses. Some casinos in Las Vegas allow players to lay table minimum plus vig if desired and win less than table minimum. Lay bet maximums are equal to the table maximum win, so if a player wishes to lay the 4 or 10, he or she may bet twice the amount of the table maximum for the win to be table maximum. Other casinos require the minimum bet to win to be $20, even at the lowest minimum tables, in order to match the $1 vig; this requires a $40 bet. Unlike place and buy bets, lay bets are always working even when no point has been established; the player must specify otherwise if he or she wishes to have the bet not working.
If a player is unsure of whether a bet is a single or multi-roll bet, it can be noted that all single-roll bets will be displayed on the playing surface in one color (usually red), while all multi-roll bets will be displayed in a different color (usually yellow).
Put.
A put bet is a bet which allows players to increase or make a Pass line bet after a point has been established (after the come-out roll). Players may make a put bet on the Pass line and take odds immediately, or increase the odds behind if adding money to an already existing Pass line bet. Put betting also allows players to increase an existing Come bet for additional odds after a come point has been established, or to make a new Come bet and take odds immediately behind it without a Come bet point being established. If increased or added, put bets on the Pass line and Come cannot be turned "Off", removed, or reduced, but the odds bet behind can be turned "Off", removed, or reduced. The odds bet is generally required to be the table minimum. Players cannot put bet the Don't Pass or Don't Come. Put betting may give a larger house edge than place betting unless the casino offers high odds.
Put bets are generally allowed in Las Vegas, but not allowed in Atlantic City and Pennsylvania.
Put bets are better than place bets (to win) when betting more than 5-times odds over the flat bet portion of the put bet. For example, a player wants a $30 bet on the six. Looking at two possible bets: 1) Place the six, or 2) Put the six with odds. A $30 place bet on the six pays $35 if it wins. A $30 put bet would be a $5 flat line bet plus $25 (5-times) in odds, and also would pay $35 if it wins. Now, with a $60 bet on the six, the place bet wins $70, where the put bet ($5 + $55 in odds) would pay $71. The player needs to be at a table which not only allows put bets, but also high-times odds, to take this advantage.
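The comparison above can be verified with a few lines of Python. The $5 flat portion and the 7:6 and 6:5 payouts are those used in the example; the helper functions here are purely illustrative:

def place_six(bet):
    return bet * 7 / 6                 # a place bet on the 6 pays 7:6

def put_six(total, flat=5):
    odds = total - flat                # a $5 flat Pass line portion plus odds behind it
    return flat + odds * 6 / 5         # flat pays even money, odds pay true 6:5

for total in (30, 60):
    print(total, place_six(total), put_six(total))
# 30 -> place wins 35.0, put wins 35.0
# 60 -> place wins 70.0, put wins 71.0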
Hard way.
This bet can only be placed on the numbers 4, 6, 8, and 10. In order for this bet to win, the chosen number must be rolled the "hard way" (as doubles) before a 7 or any other non-double combination ("easy way") totaling that number is rolled. For example, a player who bets a hard 6 can only win by seeing a 3–3 roll come up before any 7 or any easy roll totaling 6 (4–2 or 5–1); otherwise, the player loses.
In Las Vegas casinos, this bet is generally working, including when no point has been established, unless the player specifies otherwise. In other casinos such as those in Atlantic City, hard ways are not working when the point is off unless the player requests to have it working on the come out roll.
Like single-roll bets, hard way bets can be lower than the table minimum; however, the maximum bet allowed is also lower than the table maximum. The minimum hard way bet can be a minimum one unit. For example, lower stake table minimums of $5 or $10, generally allow minimum hard ways bets of $1. The maximum bet is based on the maximum allowed win from a single roll.
Easy way is not a specific bet offered in standard casinos, but a term used to define any number combination which has two ways to roll. For example, (6–4, 4–6) would be a "10 easy". The 4, 6, 8 or 10 can be made both hard and easy ways. Betting point numbers (which pays off on easy or hard rolls of that number) or single-roll ("hop") bets (e.g., "hop the 2–4" is a bet for the next roll to be an easy six rolled as a two and four) are methods of betting easy ways.
Big 6 and Big 8.
A player can wager on either the 6 or 8 being rolled before the shooter throws a seven. These wagers are usually avoided by experienced craps players since they create a large house edge by paying even money (1:1) while the true odds are 6:5; experienced players realize the house edge would be reduced by instead making place bets on the 6 or the 8, since those pay more (7:6) and are closer to the true odds. Some casinos (especially all those in Atlantic City) do not even offer the Big 6 & 8. The bets are located in the corners behind the Pass line, and bets may be placed directly by players.
The only real advantage offered by the Big 6 & 8 is that they can be bet for the table minimum, whereas a place bet minimum may sometimes be greater than the table minimum (e.g. $6 place bet on a $3 minimum game.) In addition place bets are usually not working, except by agreement, when the shooter is "coming out" i.e. shooting for a point, and Big 6 and 8 bets always work. Some modern layouts no longer show the Big 6/Big 8 bet.
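As an illustration of why experienced players avoid the Big 6 and Big 8, the informal sketch below contrasts the even-money payout with the 7:6 place bet on the same number, using the probabilities described above:

from fractions import Fraction

p_win = Fraction(5, 11)                              # a 6 (5 ways) before a 7 (6 ways)
big_six_ev = p_win - (1 - p_win)                     # Big 6 pays even money
place_six_ev = p_win * Fraction(7, 6) - (1 - p_win)  # place bet on 6 pays 7:6

print(float(-big_six_ev))    # about 9.09% house edge
print(float(-place_six_ev))  # about 1.52% house edge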
Single-roll bets.
Single-roll (proposition) bets are resolved in one dice roll by the shooter. Most of these are called "service bets", and they are located at the center of most craps tables. Only the stickman or a dealer can place a service bet. Single-roll bets can be lower than the table minimum, but the maximum bet allowed is also lower than the table maximum. The maximum bet is based on the maximum allowed win from a single roll. The lowest single-roll bet can be a minimum one unit bet. For example, tables with minimums of $5 or $10 generally allow minimum single-roll bets of $1. Single bets are always working by default unless the player specifies otherwise. The bets include:
Player bets.
Fire Bet: Before the shooter begins, some casinos will allow a bet known as a fire bet to be placed. A fire bet is a bet of as little as $1 and generally up to a maximum of $5 to $10 (sometimes higher, depending on the casino), made in the hope that the next shooter will have a hot streak of setting and hitting many points of different values. As different individual points are made by the shooter, they will be marked on the craps layout with a fire symbol.
The first three points will not pay out on the fire bet, but the fourth, fifth, and sixth will pay out at increasing odds. The fourth point pays at 24-to-1, the fifth point pays at 249-to-1, and the sixth point pays at 999-to-1. (The points must all be different numbers for them to count toward the fire bet.) For example, a shooter who successfully hits a point of 10 twice will only garner credit for the first one on the fire bet. Players must hit the established point in order for it to count toward the fire bet. The payout is determined by the number of points which have been established and hit after the shooter sevens out.
Bonus Craps: Prior to the initial "come out roll", players may place an optional wager (usually a $1 minimum to a maximum $25) on one or more of the three Bonus Craps wagers, "All Small", "All Tall", or "All or Nothing at All." For players to win the "All Small" wager, the shooter must hit all five small numbers (2, 3, 4, 5, 6) before a seven is rolled; similarly, "All Tall" wins if all five high numbers (8, 9, 10, 11, 12) are hit before a seven is rolled.
These bets pay 35-for-1, for a house advantage of 7.76%. "All or Nothing at All" wins if the shooter hits all 10 numbers before a seven is rolled. This pays 176-for-1, for a house edge of 7.46%. For all three wagers, the order in which the numbers are hit does not matter. Whenever a seven is hit, including on the come out roll, all bonus bets lose, the bonus board is reset, and new bonus bets may be placed.
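The win probabilities implied by these payouts and edges (roughly 2.6% for "All Small" or "All Tall") can be estimated with a simple Monte Carlo simulation. The sketch below is an approximation for illustration only; the trial count and random seed are arbitrary:

import random

def all_small_wins(rng):
    """One shooter: do 2, 3, 4, 5 and 6 all appear before any 7?"""
    needed = {2, 3, 4, 5, 6}
    while needed:
        roll = rng.randint(1, 6) + rng.randint(1, 6)
        if roll == 7:
            return False
        needed.discard(roll)
    return True

rng = random.Random(1)
trials = 200_000
wins = sum(all_small_wins(rng) for _ in range(trials))
print(wins / trials)   # roughly 0.026, consistent with a 7.76% edge at a 35-for-1 payout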
Multiple different bets.
A player may wish to make multiple different bets. For example, a player may wish to bet $1 on all hard ways and the horn. If one of the bets wins, the dealer may automatically replenish the losing bet with profits from the winning bet. In this example, if the shooter rolls a hard 8 (which pays 9:1), the horn loses. The dealer may return $5 to the player and place the other $4 on the horn bet which lost. If the player does not want the bet replenished, he or she should request that any or all bets be taken down.
Working and not working bets.
A working bet is a live bet. Bets may also be on the board, but not in play and therefore not working. Pass line and come bets are always working meaning the chips are in play and the player is therefore wagering live money. Other bets may be working or not working depending whether a point has been established or player's choice. Place and buy bets are working by default when a point is established and not working when the point is off unless the player specifies otherwise. Lay bets are always working even if a point has not been established unless the player requests otherwise. At any time, a player may wish to take any bet or bets out of play. The dealer will put an "Off" button on the player's specific bet or bets; this allows the player to keep his chips on the board without a live wager. For example, if a player decides not to wager a place bet mid-roll but wishes to keep the chips on the number, he or she may request the bet be "not working" or "Off". The chips remain on the table, but the player cannot win from or lose chips which are not working.
The opposite is also allowed. By default place and buy bets are not working without an established point; a player may wish to wager chips before a point has been established. In this case, the player would request the bet be working in which the dealer will place an "On" button on the specified chips.
Betting variants.
These variants depend on the casino and the table, and sometimes a casino will have different tables that use or omit these variants and others.
Optimal betting.
When craps is played in a casino, all bets have a house advantage. That is, it can be shown mathematically that a player will (with 100% probability) lose all his or her money to the casino in the long run, while in the short run the player is more likely to lose money than make money. There may be players who are lucky and get ahead for a period of time, but in the long run these winning streaks are eroded away. One can slow, but not eliminate, one's average losses by only placing bets with the smallest house advantage.
The Pass/Don't Pass line, Come/Don't Come line, place 6, place 8, buy 4 and buy 10 (only under the casino rules where commission is charged only on wins) have the lowest house edge in the casino, and all other bets will, on average, lose money between three and twelve times faster because of the difference in house edges.
The place bets and buy bets differ from the Pass line and Come line in that place bets and buy bets can be removed at any time, since, while they are multi-roll bets, their odds of winning do not change from roll to roll, whereas Pass line bets and Come line bets are a combination of different odds on their first roll and subsequent rolls. The first roll of a Pass line bet is a 2:1 advantage for the player (8 wins, 4 losses), but it is "paid for" by subsequent rolls that are at the same disadvantage to the player as the Don't Pass bets were at an advantage. As such, the casino cannot profitably let the player take down the bet after the first roll. Players can bet or lay odds behind an established point, depending on whether it was a Pass/Come or Don't Pass/Don't Come, to lower the house edge by receiving true odds on the point. Casinos which allow put betting allow players to increase or make new Pass/Come bets after the come-out roll. This bet generally has a higher house edge than place betting, unless the casino offers high odds.
Conversely, a player can take back (pick up) a Don't Pass or Don't Come bet after the first roll, but this cannot be recommended, because they already endured the disadvantaged part of the combination – the first roll. On that come-out roll, they win just 3 times (2 and 3), while losing 8 of them (7 and 11) and pushing one (12) out of the 36 possible rolls. On the other 24 rolls that become a point, their Don't Pass bet is now to their advantage by 6:3 (4 and 10), 6:4 (5 and 9) and 6:5 (6 and 8). If a player chooses to remove the initial Don't Come and/or Don't Pass line bet, he or she can no longer lay odds behind the bet and cannot re-bet the same Don't Pass and/or Don't Come number (players must make a new Don't Pass or come bets if desired). However, players can still make standard lay bets odds on any of the point numbers (4,5,6,8,9,10).
Among these, and the remaining numbers and possible bets, there are a myriad of systems and progressions that can be used with many combinations of numbers.
An important alternative metric is house advantage per roll (rather than per bet), which may be expressed as loss per hour. The typical pace of rolls varies depending on the number of players, but 102 rolls per hour is a cited rate for a nearly full table. The same reference states that only "29.6% of total rolls are come out rolls, on average", so this alternative metric accounts for the extra rolls needed to resolve, for example, the Pass line bet. These numbers then permit calculation of the rate of loss per hour, and per gambling trip (for example, four days at five hours per day).
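A rough per-hour sketch, using the pace and come-out share quoted above and a hypothetical $10 flat Pass line bet, might look like the following; the figures are illustrative rather than a model of any particular casino:

edge_per_bet = 7 / 495          # Pass line house edge per bet resolved, about 1.41%
comeout_share = 0.296           # share of rolls that are come-out rolls (quoted above);
                                # each Pass line decision begins with one come-out roll,
                                # so this is also the number of decisions per roll
rolls_per_hour = 102            # cited pace for a nearly full table
flat_bet = 10                   # hypothetical $10 Pass line bet

edge_per_roll = edge_per_bet * comeout_share             # about 0.42% of the flat bet per roll
loss_per_hour = flat_bet * rolls_per_hour * edge_per_roll
print(round(edge_per_roll, 4), round(loss_per_hour, 2))   # ~0.0042 and ~$4.27 per hour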
Table rules.
Besides the rules of the game itself, a number of formal and informal rules are commonly applied in the table form of Craps, especially when played in a casino.
To reduce the potential opportunity for switching dice by sleight-of-hand, players are not supposed to handle the dice with more than one hand (such as shaking them in cupped hands before rolling) nor take the dice past the edge of the table. If a player wishes to change shooting hands, they may set the dice on the table, let go, then take them with the other hand.
When throwing the dice, the player is expected to hit the farthest wall at the opposite end of the table (these walls are typically augmented with pyramidal structures to ensure highly unpredictable bouncing after impact). Casinos will sometimes allow a roll that does not hit the opposite wall as long as the dice are thrown past the middle of the table; a very short roll will be nullified as a "no roll". The dice may not be slid across the table and must be tossed. These rules are intended to prevent dexterous players from physically influencing the outcome of the roll.
Players are generally asked not to throw the dice above a certain height (such as the eye level of the dealers). This is both for the safety of those around the table, and to eliminate the potential use of such a throw as a distraction device in order to cheat.
Dice are still considered "in play" if they land on players' bets on the table, the dealer's working stacks, on the marker puck, or with one die resting on top of the other. The roll is invalid if either or both dice land in the boxman's bank, the stickman's bowl (where the extra three dice are kept between rolls), or in the rails around the top of the table where players chips are kept. If one or both dice hits a player or dealer and rolls back onto the table, the roll counts as long as the person being hit did not intentionally interfere with either of the dice, though some casinos will rule "no roll" for this situation. If one or both leave the table, it is also a "no roll", and the dice may either be replaced or examined by the boxman and returned to play.
Shooters may wish to "set" the dice to a particular starting configuration before throwing (such as showing a particular number or combination, stacking the dice, or spacing them to be picked up between different fingers), but if they do, they are often asked to be quick about it so as not to delay the game. Some casinos disallow such rituals to speed up the pace of the game. Some may also discourage or disallow unsanitary practices such as kissing or spitting on the dice.
In most casinos, players are not allowed to hand anything directly to dealers, and vice versa. Items such as cash, checks, and chips are exchanged by laying them down on the table; for example, when "buying in" (paying cash for chips), players are expected to place the cash on the layout: the dealer will take it and then place the chips in front of the player. This rule is enforced in order to allow the casino to easily monitor and record all transfers via overhead surveillance cameras, and to reduce the opportunity for cheating via sleight-of-hand.
Most casinos prohibit "call bets", and may have a warning such as "No Call Bets" printed on the layout to make this clear. This means a player may not call out a bet without also placing the corresponding chips on the table. Such a rule reduces the potential for misunderstanding in loud environments, as well as disputes over the amount that the player intended to bet after the outcome has been decided. Some casinos choose to allow call bets once players have bought-in. When allowed, they are usually made when a player wishes to bet at the last second, immediately before the dice are thrown, to avoid the risk of obstructing the roll.
Etiquette.
Craps is among the most social and most superstitious of all gambling games, which leads to an enormous variety of informal rules of etiquette that players may be expected to follow. An exhaustive list of these is beyond the scope of this article, but the guidelines below are most commonly given.
Tips.
Tipping the dealers is universal and expected in Craps. As in most other casino games, a player may simply place (or toss) chips onto the table and say, "For the dealers", "For the crew", etc. In craps, it is also common to place a bet for the dealers. This is usually done one of three ways: by placing an ordinary bet and simply declaring it for the dealers, as a "two-way", or "on top". A "Two-Way" is a bet for both parties: for example, a player may toss in two chips and say "Two Way Hard Eight", which will be understood to mean one chip for the player and one chip for the dealers. Players may also place a stack of chips for a bet as usual, but leave the top chip off-center and announce "on top for the dealers". The dealer's portion is often called a "toke" bet, which comes from the practice of using $1 slot machine tokens to place dealer bets in some casinos.
In some cases, players may also tip each other, for example as a show of gratitude to the thrower for a roll on which they win a substantial bet.
Superstition.
Craps players routinely practice a wide range of superstitious behaviors, and may expect or demand these from other players as well.
Most prominently, it is universally considered bad luck to say the word "seven" (after the "come-out", a roll of 7 is a loss for "pass" bets). Dealers themselves often make significant efforts to avoid calling out the number. When necessary, participants may refer to seven with a "nickname" such as "Big Red" (or just "Red"), "the S-word", etc.
Dice setting or dice control.
An approach to achieving an advantage is to "set" the dice in a particular orientation, and then throw them in such a manner that they do not tumble randomly. The theory is that given exactly the same throw from exactly the same starting configuration, the dice will tumble in the same way and therefore show the same or similar values every time.
Casinos take steps to prevent this. The dice are usually required to hit the back wall of the table, which is normally faced with a jagged angular texture such as pyramids, making controlled spins more difficult. There has been no independent evidence that such methods can be successfully applied in a real casino.
Variants.
Bank craps is a variation of the original craps game and is sometimes known as Las Vegas Craps. This variant is quite popular in Nevada gambling houses, and its availability online has now made it a globally played game. Bank craps uses a special table layout and all bets must be made against the house. In Bank Craps, the dice are thrown over a wire or a string that is normally stretched a few inches from the table's surface. The lowest house edge (for the Pass/Don't Pass) in this variation is around 1.4%. Generally, if the word "craps" is used without any modifier, it can be inferred to mean this version of the game, to which most of this article refers.
Crapless craps, also known as bastard craps, is a simple version of the original craps game, and is normally played as an online private game. The biggest difference between crapless craps and original craps is that the shooter (person throwing the dice) is at a far greater disadvantage and has a house edge of 5.38%. Another difference is that this is one of the craps games in which a player can bet on rolling a 2, 3, 11 or 12 before a 7 is thrown. In crapless craps, 2 and 12 have odds of 11:2 and have a house edge of 7.143% while 3 and 11 have odds of 11:4 with a house edge of 6.25%.
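These house edges follow from straightforward probability arithmetic: the bet wins if the chosen number is rolled before any of the six combinations that make a 7. The short Common Lisp sketch below reproduces the quoted figures; the function name house-edge is made up for this illustration.
;; ways-to-win: dice combinations that roll the chosen number
;; (1 way for 2 or 12, 2 ways for 3 or 11); a 7 can be rolled 6 ways.
;; payout is the ratio paid on a win, e.g. 11/2 for 11:2.
(defun house-edge (ways-to-win payout)
  (let ((p-win (/ ways-to-win (+ ways-to-win 6))))
    ;; expected loss per unit wagered, as a percentage
    (* 100.0 (- (- 1 p-win) (* p-win payout)))))

(house-edge 1 11/2) ; => 7.142857 (betting on 2 or 12 at 11:2)
(house-edge 2 11/4) ; => 6.25 (betting on 3 or 11 at 11:4)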
New York Craps is a variation of craps played mostly on the East Coast of the US, true to its name. The game has also historically been found in casinos in Yugoslavia, the UK, and the Bahamas. In this craps variant, the house edge is greater than in Las Vegas Craps or Bank craps. The table layout is also different, and is called a double-end-dealer table. This variation differs from the original craps game in several ways, but the primary difference is that New York craps does not allow Come or Don't Come bets. New York craps players bet on box numbers such as 4, 5, 6, 8, 9, or 10. The overall house edge in New York craps is 5%.
Card-based variations.
In order to get around Californian laws barring the payout of a game being directly related to the roll of dice, Indian reservations have adapted the game to substitute cards for dice.
Cards replacing dice.
To replicate the original dice odds exactly without dice or the possibility of card-counting, one scheme uses two shuffle machines, each holding a single deck of Ace through 6. Each machine selects one of its 6 cards at random, and the two cards together form the roll. The selected cards are then replaced and the decks are reshuffled for the next roll.
In one variation, two shoes are used, each containing some number of regular card decks that have been stripped down to just the Aces and deuces through sixes. The boxman simply deals one card from each shoe and that is the roll on which bets are settled. Since a card-counting scheme is easily devised to make use of the information of cards that have already been dealt, a relatively small portion (less than 50%) of each shoe is usually dealt in order to protect the house.
In a similar variation, cards representing dice are dealt directly from a continuous shuffling machine (CSM). Typically, the CSM will hold approximately 264 cards, or 44 sets of 1 through 6 spot cards. Two cards are dealt from the CSM for each roll. The game is played exactly as regular craps, but the roll distribution of the remaining cards in the CSM is slightly skewed from the normal symmetric distribution of dice.
Even if the dealer were to shuffle each roll back into the CSM, the effect of buffering a number of cards in the chute of the CSM provides information about the skew of the next roll. Analysis shows this type of game is biased towards the Don't Pass and Don't Come bets. A player betting Don't Pass and Don't Come every roll and laying 10x odds receives a 2% profit on the initial Don't Pass / Don't Come bet each roll. Using a counting system allows the player to attain a similar return at lower variance.
Cards mapping physical dice.
In this game variation, one red deck and one blue deck of six cards each (A through 6), and a red die and a blue die are used. Each deck is shuffled separately, usually by machine. Each card is then dealt onto the layout, into the 6 red and 6 blue numbered boxes. The shooter then shoots the dice. The red card in the red-numbered box corresponding to the red die, and the blue card in the blue-numbered box corresponding to the blue die are then turned over to form the roll on which bets are settled.
Another variation uses a red and a blue deck of 36 custom playing cards each. Each card has a picture of a two-die roll on it – from 1–1 to 6–6. The shooter shoots what looks like a red and a blue die, called "cubes". They are numbered such that they can never throw a pair, and that the blue one will show a higher value than the red one exactly half the time. One such scheme could be 222555 on the red die and 333444 on the blue die.
One card is dealt from the red deck and one is dealt from the blue deck. The shooter throws the "cubes" and the color of the cube that is higher selects the color of the card to be used to settle bets. On one such table, an additional one-roll prop bet was offered: If the card that was turned over for the "roll" was either 1–1 or 6–6, the other card was also turned over. If the other card was the "opposite" (6–6 or 1–1, respectively) of the first card, the bet paid 500:1 for this 647:1 proposition.
An additional variation uses a single set of 6 cards and regular dice. The roll of the dice maps to the card in that position, and if a pair is rolled, then the mapped card is used twice, as a pair.
Rules of play against other players ("Street Craps").
Recreational or informal playing of craps outside of a casino is referred to as street craps or private craps. The most notable difference between playing street craps and bank craps is that there is no bank or house to cover bets in street craps. Players must bet against each other by covering or fading each other's bets for the game to be played. If money is used instead of chips and depending on the laws of where it is being played, street craps can be an illegal form of gambling.
There are many variations of street craps. The simplest way is to either agree on or roll a number as the point, then roll the point again before rolling a seven. Unlike more complex proposition bets offered by casinos, street craps has more simplified betting options. The shooter is required to make either a Pass or a Don't Pass bet if he wants to roll the dice. Another player must choose to cover the shooter to create a stake for the game to continue.
If there are several players, the rotation of the player who must cover the shooter may change with the shooter (comparable to a blind in poker). The person covering the shooter will always bet against the shooter. For example, if the shooter made a "Pass" bet, the person covering the shooter would make a "Don't Pass" bet to win. Once the shooter is covered, other players may make Pass/Don't Pass bets, or any other proposition bets, as long as there is another player willing to cover.
In popular culture.
Due to the random nature of the game, in popular culture a "crapshoot" is often used to describe an action with an unpredictable outcome.
The prayer or invocation "Baby needs a new pair of shoes!" is associated with shooting craps.
Floating craps.
Floating craps is an illegal operation of craps. The term "floating" refers to the practice of the game's operators using portable tables and equipment to quickly move the game from location to location to stay ahead of the law enforcement authorities. The term may have originated in the 1930s when Benny Binion (later known for founding the downtown Las Vegas hotel Binion's) set up an illegal craps game utilizing tables created from portable crates for the Texas Centennial Exposition.
The 1950 Broadway musical "Guys and Dolls" features a major plot point revolving around a floating craps game.
In the 1950s and 1960s The Sands Hotel in Las Vegas had a craps table that floated in the swimming pool, as a joke reference to the notoriety of the term.
Records.
A Golden Arm is a craps player who rolls the dice for longer than one hour without losing. Likely the first known Golden Arm was Oahu native Stanley Fujitake, who rolled 118 times without sevening out in 3 hours and 6 minutes at the California Hotel and Casino on May 28, 1989.
The current record for length of a "hand" (successive rounds won by the same shooter) is 154 rolls including 25 passes by Patricia DeMauro of New Jersey, lasting 4 hours and 18 minutes, at the Borgata in Atlantic City, New Jersey, on May 23–24, 2009. She bested by over an hour the record held for almost 20 years – that of Fujitake.
|
6066
|
88026
|
https://en.wikipedia.org/wiki?curid=6066
|
Carl von Clausewitz
|
Carl Philipp Gottlieb von Clausewitz (born Carl Philipp Gottlieb Clauswitz; 1 July 1780 – 16 November 1831) was a Prussian general and military theorist who stressed the "moral" (in modern terms meaning psychological) and political aspects of waging war. His most notable work, "Vom Kriege" ("On War"), though unfinished at his death, is considered a seminal treatise on military strategy and science.
Clausewitz stressed the multiplex interaction of diverse factors in war, noting how unexpected developments unfolding under the "fog of war" (i.e., in the face of incomplete, dubious, and often erroneous information and great fear, doubt, and excitement) call for rapid decisions by alert commanders. He saw history as a vital check on erudite abstractions that did not accord with experience. In contrast to the early work of Antoine-Henri Jomini, he argued that war could not be quantified or reduced to mapwork, geometry, and graphs. Clausewitz had many aphorisms, of which one of the most famous is, "War is the continuation of policy with other means."
Name.
Clausewitz's Christian names are variously given in English-language sources as "Karl", "Carl Philipp Gottlieb", or "Carl Maria." He spelled his own given name with a "C" in order to identify with the classical Western tradition; writers who use "Karl" are often seeking to emphasize his German (rather than European) identity. "Carl Philipp Gottfried" appears on Clausewitz's tombstone. "Encyclopædia Britannica" continues to use Gottlieb instead of Gottfried based on older sources, such as military historian Peter Paret, and historian Sir Michael Howard originated the use of "Carl Maria." However, more modern scholars like Christopher Bassford (editor of "ClausewitzStudies.org") and Vanya Eftimova Bellinger (who wrote the 2016 biography of Carl's wife Marie von Clausewitz) consider his tombstone a more reliable source than the hand-written birth records used by Paret.
Life and military career.
Clausewitz was born on 1 July 1780 in Burg bei Magdeburg in the Prussian Duchy of Magdeburg as the fourth and youngest son of a family that made claims to a noble status which Carl accepted. Clausewitz's family claimed descent from the Barons of Clausewitz in Upper Silesia, though scholars question the connection. His grandfather, the son of a Lutheran pastor, had been a professor of theology. Clausewitz's father, once a lieutenant in the army of Frederick the Great, King of Prussia, held a minor post in the Prussian internal-revenue service. Clausewitz entered the Prussian military service at the age of twelve as a lance corporal, eventually attaining the rank of major general.
Clausewitz served in the Rhine campaigns (1793–1794) including the siege of Mainz, when the Prussian Army invaded France during the French Revolution, and fought in the Napoleonic Wars from 1806 to 1815. He entered the "Kriegsakademie" (also cited as "The German War School", the "Military Academy in Berlin", and the "Prussian Military Academy," later the "War College") in Berlin in 1801 (aged 21), probably studied the writings of the philosophers Immanuel Kant and/or Johann Gottlieb Fichte and Friedrich Schleiermacher and won the regard of General Gerhard von Scharnhorst, the future first chief-of-staff of the newly reformed Prussian Army (appointed 1809). Clausewitz, Hermann von Boyen (1771–1848) and Karl von Grolman (1777–1843) were among Scharnhorst's primary allies in his efforts to reform the Prussian army between 1807 and 1814.
Clausewitz served during the Jena Campaign as aide-de-camp to Prince August. At the Battle of Jena-Auerstedt on 14 October 1806—when Napoleon invaded Prussia and defeated the Prussian-Saxon army commanded by Karl Wilhelm Ferdinand, Duke of Brunswick—he was captured, one of the 25,000 prisoners taken that day as the Prussian army disintegrated. He was 26. Clausewitz was held prisoner with his prince in France from 1807 to 1808. Returning to Prussia, he assisted in the reform of the Prussian army and state. Johann Gottlieb Fichte wrote "On Machiavelli, as an Author, and Passages from His Writings" in June 1807. ("Über Machiavell, als Schriftsteller, und Stellen aus seinen Schriften" ). Carl Clausewitz wrote an interesting and anonymous Letter to Fichte (1809) about his book on "Machiavelli." The letter was published in Fichte's "Verstreute kleine Schriften" 157–166. For an English translation of the letter see "Carl von Clausewitz Historical and Political Writings" Edited by: Peter Paret and D. Moran (1992).
On 10 December 1810, he married the socially prominent Countess Marie von Brühl, whom he had first met in 1803. She was a member of the noble German Brühl family originating in Thuringia. The couple moved in the highest circles, socialising with Berlin's political, literary, and intellectual élite. Marie was well-educated and politically well-connected—she played an important role in her husband's career progress and intellectual evolution. She also edited, published, and introduced his collected works.
Opposed to Prussia's enforced alliance with Napoleon, Clausewitz left the Prussian army and served in the Imperial Russian Army from 1812 to 1813 during the Russian campaign, taking part in the Battle of Borodino (1812). Like many Prussian officers serving in Russia, he joined the Russian–German Legion in 1813. In the service of the Russian Empire, Clausewitz helped negotiate the Convention of Tauroggen (1812), which prepared the way for the coalition of Prussia, Russia, and the United Kingdom that ultimately defeated Napoleon and his allies.
In 1815 the Russian-German Legion became integrated into the Prussian Army and Clausewitz re-entered Prussian service as a colonel. He was soon appointed chief-of-staff of Johann von Thielmann's III Corps. In that capacity he served at the Battle of Ligny and the Battle of Wavre during the Waterloo campaign in 1815. An army led personally by Napoleon defeated the Prussians at Ligny (south of Mont-Saint-Jean and the village of Waterloo) on 16 June 1815, but they withdrew in good order. Napoleon's failure to destroy the Prussian forces led to his defeat a few days later at the Battle of Waterloo (18 June 1815), when the Prussian forces arrived on his right flank late in the afternoon to support the Anglo-Dutch-Belgian forces pressing his front. Napoleon had convinced his troops that the field grey uniforms were those of Marshal Grouchy's grenadiers. Clausewitz's unit fought heavily outnumbered at Wavre (18–19 June 1815), preventing large reinforcements from reaching Napoleon at Waterloo. After the war, Clausewitz served as the director of the "Kriegsakademie", a post he held until 1830. In that year he returned to active duty with the army. Soon afterward, the outbreak of several revolutions around Europe and a crisis in Poland appeared to presage another major European war. Clausewitz was appointed chief of staff of the only army Prussia was able to mobilise in this emergency, which was sent to the Polish border. Its commander, Gneisenau, died of cholera (August 1831), and Clausewitz took command of the Prussian army's efforts to construct a cordon sanitaire to contain the great cholera outbreak (the first time cholera had appeared in modern heartland Europe, causing a continent-wide panic). Clausewitz himself died of the same disease shortly afterwards, on 16 November 1831.
His widow edited, published, and wrote the introduction to his "magnum opus" on the philosophy of war in 1832. (He had started working on the text in 1816 but had not completed it.) She wrote the preface for "On War" and had published most of his collected works by 1835. She died in January 1836.
Theory of war.
Clausewitz was a professional combat soldier and a staff officer who was involved in numerous military campaigns, but he is famous primarily as a military theorist interested in the examination of war, utilising the campaigns of Frederick the Great and Napoleon as frames of reference for his work. He wrote a careful, systematic, philosophical examination of war in all its aspects. The result was his principal book, "Vom Kriege" (in English, "On War"), a major work on the philosophy of war. It was unfinished when Clausewitz died and contains material written at different stages in his intellectual evolution, producing some significant contradictions between different sections. The sequence and precise character of that evolution is a source of much debate as to the exact meaning behind some seemingly contradictory observations in discussions pertinent to the tactical, operational and strategic levels of war, for example (though many of these apparent contradictions are simply the result of his dialectical method). Clausewitz constantly sought to revise the text, particularly between 1827 and his departure on his last field assignments, to include more material on "people's war" and forms of war other than high-intensity warfare between states, but relatively little of this material was included in the book. Soldiers before this time had written treatises on various military subjects, but none had undertaken a great philosophical examination of war on the scale of those written by Clausewitz and Leo Tolstoy, both of whom were inspired by the events of the Napoleonic Era.
Clausewitz's work is still studied today, demonstrating its continued relevance. More than sixteen major English-language books that focused specifically on his work were published between 2005 and 2014, whereas his 19th-century rival Jomini has faded from influence. The historian Lynn Montross said that this outcome "may be explained by the fact that Jomini produced a system of war, Clausewitz a philosophy. The one has been outdated by new weapons, the other still influences the strategy behind those weapons." Jomini did not attempt to define war but Clausewitz did, providing (and dialectically comparing) a number of definitions. The first is his dialectical thesis: "War is thus an act of force to compel our enemy to do our will." The second, often treated as Clausewitz's 'bottom line,' is in fact merely his dialectical antithesis: "War is merely the continuation of policy [or politics—the German original is "Politik", which encompasses both of those rather different English words] with other means." The synthesis of his dialectical examination of the nature of war—and thus his actual definition of war—is his famous "trinity," saying that war is, "when regarded as a whole, in relation to the tendencies predominating in it, a strange trinity, composed of the original violence of its essence, the hate and enmity which are to be regarded as a blind, natural impulse; of the play of probabilities and chance, which make it a free activity of the emotions; and of the subordinate character of a political tool, through which it belongs to the province of pure intelligence." Christopher Bassford says the best shorthand for Clausewitz's trinity should be something like "violent emotion/chance/rational calculation." However, it is frequently presented as "people/army/government," a misunderstanding based on a later paragraph in the same section. This misrepresentation was popularised by U.S. Army Colonel Harry Summers' Vietnam-era interpretation, facilitated by weaknesses in the 1976 Howard/Paret translation.
The degree to which Clausewitz managed to revise his manuscript to reflect that synthesis is the subject of much debate. His final reference to war and "Politik", however, goes beyond his widely quoted antithesis: "War is simply the continuation of political intercourse with the addition of other means. We deliberately use the phrase 'with the addition of other means' because we also want to make it clear that war in itself does not suspend political intercourse or change it into something entirely different. In essentials that intercourse continues, irrespective of the means it employs. The main lines along which military events progress, and to which they are restricted, are political lines that continue throughout the war into the subsequent peace."
Clausewitz introduced systematic philosophical contemplation into Western military thinking, with powerful implications not only for historical and analytical writing but also for practical policy, military instruction, and operational planning. He relied on his own experiences, contemporary writings about Napoleon, and on deep historical research. His historiographical approach is evident in his first extended study, written when he was 25, of the Thirty Years' War. In "On War", Clausewitz sees all wars as the sum of decisions, actions, and reactions in an uncertain and dangerous context, and also as a socio-political phenomenon. He also stressed the complex nature of war, which encompasses both the socio-political and the operational and stresses the primacy of state policy. (One should be careful not to limit his observations on war to war between states, however, as he certainly discusses other kinds of protagonists). Clausewitz, according to Azar Gat, expressed in the field of military theory the main themes of the Romantic reaction against the worldview of the Enlightenment, rejecting universal principles and stressing historical diversity and the forces of the human spirit. This explains the strength and value of many of his arguments, derived from this great cultural movement, but also his often harsh rhetoric against his predecessors.
The word "strategy" had only recently come into usage in modern Europe, and Clausewitz's definition is quite narrow: "the use of engagements for the object of war" (which many today would call "the operational level" of war). Clausewitz conceived of war as a political, social, and military phenomenon which might—depending on circumstances—involve the entire population of a political entity at war. In any case, Clausewitz saw military force as an instrument that states and other political actors use to pursue the ends of their policy, in a dialectic between opposing wills, each with the aim of imposing his policies and will upon his enemy.
Clausewitz's emphasis on the inherent superiority of the defense suggests that habitual aggressors are likely to end up as failures. The inherent superiority of the defense obviously does not mean that the defender will always win, however: there are other asymmetries to be considered. He was interested in co-operation between the regular army and militia or partisan forces, or citizen soldiers, as one possible—sometimes the only—method of defense. In the circumstances of the Wars of the French Revolution and those with Napoleon, which were energised by a rising spirit of nationalism, he emphasised the need for states to involve their entire populations in the conduct of war. This point is especially important, as these wars demonstrated that such energies could be of decisive importance and for a time led to a democratisation of the armed forces much as universal suffrage democratised politics.
While Clausewitz was intensely aware of the value of intelligence at all levels, he was also very skeptical of the accuracy of much military intelligence: "Many intelligence reports in war are contradictory; even more are false, and most are uncertain... In short, most intelligence is false." This circumstance is generally described as part of the fog of war. Such skeptical comments apply only to intelligence at the tactical and operational levels; at the strategic and political levels he constantly stressed the requirement for the best possible understanding of what today would be called strategic and political intelligence. His conclusions were influenced by his experiences in the Prussian Army, which was often in an intelligence fog due partly to the superior abilities of Napoleon's system but even more simply to the nature of war. Clausewitz acknowledges that friction creates enormous difficulties for the realization of any plan, and the "fog of war" hinders commanders from knowing what is happening. It is precisely in the context of this challenge that he develops the concept of military genius, evidenced above all in the execution of operations. 'Military genius' is not simply a matter of intellect, but a combination of qualities of intellect, experience, personality, and temperament (and there are many possible such combinations) that create a very highly developed mental aptitude for the waging of war.
Principal ideas.
Key ideas discussed in "On War" include:
Interpretation and misinterpretation.
Clausewitz used a dialectical method to construct his argument, leading to frequent misinterpretation of his ideas. British military theorist B. H. Liddell Hart contends that the enthusiastic acceptance by the Prussian military establishment—especially Moltke the Elder, a former student of Clausewitz—of what they believed to be Clausewitz's ideas, and the subsequent widespread adoption of the Prussian military system worldwide, had a deleterious effect on military theory and practice, due to their egregious misinterpretation of his ideas:
As described by Christopher Bassford, then-professor of strategy at the National War College of the United States:
Another example of this confusion is the idea that Clausewitz was a proponent of total war as used in the Third Reich's propaganda in the 1940s. In fact, Clausewitz never used the term "total war": rather, he discussed "absolute war," a concept which evolved into the much more abstract notion of "ideal war" discussed at the very beginning of "Vom Kriege"—the purely "logical" result of the forces underlying a "pure," Platonic "ideal" of war. In what he called a "logical fantasy," war cannot be waged in a limited way: the rules of competition will force participants to use all means at their disposal to achieve victory. But in the "real world," he said, such rigid logic is unrealistic and dangerous. As a practical matter, the military objectives in "real" war that support political objectives generally fall into two broad types: limited aims or the effective "disarming" of the enemy "to render [him] politically helpless or militarily impotent." Thus, the complete defeat of the enemy may not be necessary, desirable, or even possible.
According to Azar Gat, the opposing interpretations of Clausewitz are rooted in Clausewitz’s own conceptual journey. The centerpiece of Clausewitz’s theory of war throughout his life was his concept of all-out fighting and energetic conduct leading to the great battle of annihilation. He believed such conduct expressed the very “nature”, or “lasting spirit” of war. Accordingly, Clausewitz disparaged the significance of the maneuver, surprise, and cunning in war, as distracting from the centrality of battle, and argued that defense was legitimate only if and as long as one was weaker than the enemy. Nevertheless, in the last years of his life, after the first six out of the eight books of "On War" had already been drafted, Clausewitz came to recognize that this concept was not universal and did not even apply to the Napoleonic Wars, the supreme model of his theory of war. This was demonstrated by the Spanish and Russian campaigns and by guerrilla warfare, in all of which battle was systematically avoided. Consequently, from 1827 on, Clausewitz recognized the legitimacy of limited war and explained it by the influence of politics that harnessed the unlimited nature of war to serve its objectives. Clausewitz died in 1831 before he completed the revision he planned along these lines. He incorporated his new ideas only into the end of Book VI, Book VIII and the beginning of Book I of "On War". As a result, when published, "On War" encompassed both his old and new ideas, at odds with each other.
Thus, against common interpretations of "On War", Gat points out that Clausewitz's transformed views regarding the relationship between politics and war and the admission of limited war into his theory constituted a U-turn against his own life-long fundamental view of the nature of war. Gat further argues the readers’ miscomprehension of the theory in "On War" as complete and dialectical, rather than a draft undergoing a radical change of mind, has thus generated a range of reactions. People of each age have found in "On War" the Clausewitz who suited their own views on war and its conduct. Between 1870 and 1914, he was celebrated mainly for his insistence on the clash of forces and the decisive battle, and his emphasis on moral forces. By contrast, after 1945, during the nuclear age, his reputation has reached a second pinnacle for his later acceptance of the primacy of politics and the concept of limited war.
Referring to much of the current interpretation of "On War" as the Emperor's New Clothes syndrome, Gat argues that instead of critically addressing the puzzling contradictions in "On War," Clausewitz has been set in stone and could not be wrong.
In modern times the reconstruction of Clausewitzian theory has been a matter of much dispute. One analysis was that of Panagiotis Kondylis, a Greek writer and philosopher, who opposed the interpretations of Raymond Aron in "Penser la Guerre, Clausewitz," and other liberal writers. According to Aron, Clausewitz was one of the first writers to condemn the militarism of the Prussian general staff and its war-proneness, based on Clausewitz's argument that "war is a continuation of policy by other means." In "Theory of War," Kondylis claims that this is inconsistent with Clausewitzian thought. He claims that Clausewitz was morally indifferent to war (though this probably reflects a lack of familiarity with personal letters from Clausewitz, which demonstrate an acute awareness of war's tragic aspects) and that his advice regarding politics' dominance over the conduct of war has nothing to do with pacifist ideas.
Other notable writers who have studied Clausewitz's texts and translated them into English are historians Peter Paret of the Institute for Advanced Study and Sir Michael Howard. Howard and Paret edited the most widely used edition of "On War" (Princeton University Press, 1976/1984) and have produced comparative studies of Clausewitz and other theorists, such as Tolstoy. Bernard Brodie's "A Guide to the Reading of "On War,"" in the 1976 Princeton translation, expressed his interpretations of the Prussian's theories and provided students with an influential synopsis of this vital work. The 1873 translation by Colonel James John Graham was heavily—and controversially—edited by the philosopher, musician, and game theorist Anatol Rapoport.
The British military historian John Keegan attacked Clausewitz's theory in his book "A History of Warfare". Keegan argued that Clausewitz assumed the existence of states, yet 'war antedates the state, diplomacy and strategy by many millennia.'
Influence.
Clausewitz died without completing "Vom Kriege," but despite this his ideas have been widely influential in military theory and have had a strong influence on German military thought specifically. Later Prussian and German generals, such as Helmuth Graf von Moltke, were clearly influenced by Clausewitz: Moltke's widely quoted statement that "No operational plan extends with high certainty beyond the first encounter with the main enemy force" is a classic reflection of Clausewitz's insistence on the roles of chance, friction, "fog," uncertainty, and interactivity in war.
Clausewitz's influence spread to British thinking as well, though at first more as a historian and analyst than as a theorist. See for example Wellington's extended essay discussing Clausewitz's study of the Campaign of 1815—Wellington's only serious written discussion of the battle, which was widely discussed in 19th-century Britain. Clausewitz's broader thinking came to the fore following Britain's military embarrassments in the Boer War (1899–1902). One example of a heavy Clausewitzian influence in that era is Spenser Wilkinson, a journalist and the first Chichele Professor of Military History at Oxford University, and perhaps the most prominent military analyst in Britain until well into the interwar period. Another is naval historian Julian Corbett (1854–1922), whose work reflected a deep if idiosyncratic adherence to Clausewitz's concepts and frequently an emphasis on Clausewitz's ideas about 'limited objectives' and the inherent strengths of the defensive form of war. Corbett's practical strategic views were often in prominent public conflict with Wilkinson's—see, for example, Wilkinson's article "Strategy at Sea", "The Morning Post", 12 February 1912. Following the First World War, however, the influential British military commentator B. H. Liddell Hart in the 1920s erroneously attributed to him the doctrine of "total war" that during the First World War had been embraced by many European general staffs and emulated by the British. More recent scholars typically see that war as so confused in terms of political rationale that it in fact contradicts much of "On War." That view assumes, however, a set of values as to what constitutes "rational" political objectives—in this case, values not shaped by the fervid Social Darwinism that was rife in 1914 Europe. One of the most influential British Clausewitzians today is Colin S. Gray; historian Hew Strachan (like Wilkinson also the Chichele Professor of Military History at Oxford University, since 2001) has been an energetic proponent of the "study" of Clausewitz, but his own views on Clausewitz's ideas are somewhat ambivalent.
With some interesting exceptions (e.g., John McAuley Palmer, Robert M. Johnston, Hoffman Nickerson), Clausewitz had little influence on American military thought before 1945 other than via British writers, though Generals Eisenhower and Patton were avid readers of English translations. He did influence Karl Marx, Friedrich Engels, Vladimir Lenin, Leon Trotsky, Võ Nguyên Giáp, Ferdinand Foch, and Mao Zedong, and thus the Communist Soviet and Chinese traditions, as Lenin emphasized the inevitability of wars among capitalist states in the age of imperialism and presented the armed struggle of the working class as the only path toward the eventual elimination of war. Because Lenin was an admirer of Clausewitz and called him "one of the great military writers," his influence on the Red Army was immense. The Russian historian A.N. Mertsalov commented that "It was an irony of fate that the view in the USSR was that it was Lenin who shaped the attitude towards Clausewitz, and that Lenin's dictum that war is a continuation of politics is taken from the work of this [allegedly] anti-humanist anti-revolutionary." The American mathematician Anatol Rapoport wrote in 1968 that Clausewitz as interpreted by Lenin formed the basis of all Soviet military thinking since 1917, and quoted the remarks by Marshal V.D. Sokolovsky:
Henry A. Kissinger, however, described Lenin's approach as being that politics is a continuation of war by other means, thus turning Clausewitz's argument "on its head."
Rapoport argued that:
Clausewitz directly influenced Mao Zedong, who read "On War" in 1938 and organised a seminar on Clausewitz for the Party leadership in Yan'an. Thus the "Clausewitzian" content in many of Mao's writings is not merely a regurgitation of Lenin but reflects Mao's own study. The idea that war involves inherent "friction" that distorts, to a greater or lesser degree, all prior arrangements, has become common currency in fields such as business strategy and sport. The phrase "fog of war" derives from Clausewitz's stress on how confused warfare can seem while one is immersed within it. The term center of gravity, used in a military context derives from Clausewitz's usage, which he took from Newtonian mechanics. In U.S. military doctrine, "center of gravity" refers to the basis of an opponent's power at the operational, strategic, or political level, though this is only one aspect of Clausewitz's use of the term.
Late 20th and early 21st century.
The deterrence strategy of the United States in the 1950s was closely inspired by President Dwight Eisenhower's reading of Clausewitz as a young officer in the 1920s. Eisenhower was greatly impressed by Clausewitz's example of a theoretical, idealized "absolute war" in "Vom Kriege" as a way of demonstrating how absurd it would be to attempt such a strategy in practice. For Eisenhower, the age of nuclear weapons had made what was for Clausewitz in the early-19th century only a theoretical vision an all too real possibility in the mid-20th century. From Eisenhower's viewpoint, the best deterrent to war was to show the world just how appalling and horrific a nuclear "absolute war" would be if it should ever occur, hence a series of much-publicized nuclear tests in the Pacific, giving first priority in the defense budget to nuclear weapons and to their delivery-systems over conventional weapons, and making repeated statements in public that the United States was able and willing at all times to use nuclear weapons. In this way, through the massive retaliation doctrine and the closely related foreign-policy concept of brinkmanship, Eisenhower hoped to hold out a credible vision of Clausewitzian nuclear "absolute war" in order to deter the Soviet Union and/or China from ever risking a war or even conditions that might lead to a war with the United States.
After 1970, some theorists claimed that nuclear proliferation made Clausewitzian concepts obsolete after the 20th-century period in which they dominated the world. John E. Sheppard Jr., argues that by developing nuclear weapons, state-based conventional armies simultaneously both perfected their original purpose, to destroy a mirror image of themselves, and made themselves obsolete. No two powers have used nuclear weapons against each other, instead using diplomacy, conventional means, or proxy wars to settle disputes. If such a conflict did occur, presumably both combatants would be annihilated. Heavily influenced by the war in Vietnam and by antipathy to American strategist Henry Kissinger, the American biologist, musician, and game-theorist Anatol Rapoport argued in 1968 that a Clausewitzian view of war was not only obsolete in the age of nuclear weapons, but also highly dangerous as it promoted a "zero-sum paradigm" to international relations and a "dissolution of rationality" amongst decision-makers.
The end of the 20th century and the beginning of the 21st century have seen many instances of state armies attempting to suppress insurgencies and terrorism, and engaging in other forms of asymmetrical warfare. Clausewitz did not focus solely on wars between countries with well-defined armies. The era of the French Revolution and Napoleon was full of revolutions, rebellions, and violence by "non-state actors" - witness the wars in the French Vendée and in Spain. Clausewitz wrote a series of "Lectures on Small War" and studied the rebellion in the Vendée (1793–1796) and the Tyrolean uprising of 1809. In his famous "Bekenntnisdenkschrift" of 1812 he called for a "Spanish war in Germany" and laid out a comprehensive guerrilla strategy to be waged against Napoleon. In "On War" he included a famous chapter on "The People in Arms".
One prominent critic of Clausewitz is the Israeli military historian Martin van Creveld. In his 1991 book "The Transformation of War", Creveld argued that Clausewitz's famous "Trinity" of people, army, and government was an obsolete socio-political construct based on the state, which was rapidly passing from the scene as the key player in war, and that he (Creveld) had constructed a new "non-trinitarian" model for modern warfare. Creveld's work has had great influence. Daniel Moran replied, 'The most egregious misrepresentation of Clausewitz's famous metaphor must be that of Martin van Creveld, who has declared Clausewitz to be an apostle of Trinitarian War, by which he means, incomprehensibly, a war of 'state against state and army against army,' from which the influence of the people is entirely excluded." Christopher Bassford went further, noting that one need only "read" the paragraph in which Clausewitz defined his Trinity to see "that the words 'people,' 'army,' and 'government' appear nowhere at all in the list of the Trinity's components... Creveld's and Keegan's assault on Clausewitz's Trinity is not only a classic 'blow into the air,' i.e., an assault on a position Clausewitz doesn't occupy. It is also a pointless attack on a concept that is quite useful in its own right. In any case, their failure to read the actual wording of the theory they so vociferously attack, and to grasp its deep relevance to the phenomena they describe, is hard to credit."
Some have gone further and suggested that Clausewitz's best-known aphorism, that war is a continuation of policy with other means, is not only irrelevant today but also inapplicable historically. For an opposing view see the sixteen essays presented in "Clausewitz in the Twenty-First Century" edited by Hew Strachan and Andreas Herberg-Rothe.
In military academies, schools, and universities worldwide, Clausewitz's "Vom Kriege" is often (usually in translation) mandatory reading.
Some theorists of management look to Clausewitz - just as some look to Sun Tzu - to bolster ideas on the concept of leadership.
|
6068
|
18872885
|
https://en.wikipedia.org/wiki?curid=6068
|
Common Lisp
|
Common Lisp (CL) is a dialect of the Lisp programming language, published in American National Standards Institute (ANSI) standard document "ANSI INCITS 226-1994 (S2018)" (formerly "X3.226-1994 (R1999)"). The Common Lisp HyperSpec, a hyperlinked HTML version, has been derived from the ANSI Common Lisp standard.
The Common Lisp language was developed as a standardized and improved successor of Maclisp. By the early 1980s several groups were already at work on diverse successors to MacLisp: Lisp Machine Lisp (aka ZetaLisp), Spice Lisp, NIL and S-1 Lisp. Common Lisp sought to unify, standardise, and extend the features of these MacLisp dialects. Common Lisp is not an implementation, but rather a language specification. Several implementations of the Common Lisp standard are available, including free and open-source software and proprietary products.
Common Lisp is a general-purpose, multi-paradigm programming language. It supports a combination of procedural, functional, and object-oriented programming paradigms. As a dynamic programming language, it facilitates evolutionary and incremental software development, with iterative compilation into efficient run-time programs. This incremental development is often done interactively without interrupting the running application.
It also supports optional type annotation and casting, which can be added as necessary at the later profiling and optimization stages, to permit the compiler to generate more efficient code. For instance, a variable declared as a fixnum can hold an unboxed integer in a range supported by the hardware and implementation, permitting more efficient arithmetic than on big integers or arbitrary precision types. Similarly, the compiler can be told on a per-module or per-function basis which safety level is wanted, using "optimize" declarations.
Common Lisp includes CLOS, an object system that supports multimethods and method combinations. It is often implemented with a Metaobject Protocol.
Common Lisp is extensible through standard features such as "Lisp macros" (code transformations) and "reader macros" (input parsers for characters).
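As a brief illustration of a Lisp macro (a sketch, not an example from the standard; the name unless* is made up to avoid clashing with the built-in unless), a macro receives its arguments unevaluated and returns code to be compiled in their place:
(defmacro unless* (test &body body)
  "Run BODY only when TEST is false; expands into an IF form."
  `(if ,test nil (progn ,@body)))

(unless* (> 1 2)
  (print "1 is not greater than 2"))
; The call above macroexpands to:
; (IF (> 1 2) NIL (PROGN (PRINT "1 is not greater than 2")))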
Common Lisp provides partial backwards compatibility with Maclisp and John McCarthy's original Lisp. This allows older Lisp software to be ported to Common Lisp.
History.
Work on Common Lisp started in 1981 after an initiative by ARPA manager Bob Engelmore to develop a single community standard Lisp dialect. Much of the initial language design was done via electronic mail. In 1982, Guy L. Steele Jr. gave the first overview of Common Lisp at the 1982 ACM Symposium on LISP and functional programming.
The first language documentation was published in 1984 as Common Lisp the Language (known as CLtL1), first edition. A second edition (known as CLtL2), published in 1990, incorporated many changes to the language, made during the ANSI Common Lisp standardization process: extended LOOP syntax, the Common Lisp Object System, the Condition System for error handling, an interface to the pretty printer and much more. But CLtL2 does not describe the final ANSI Common Lisp standard and thus is not a documentation of ANSI Common Lisp. The final ANSI Common Lisp standard then was published in 1994. Since then no update to the standard has been published. Various extensions and improvements to Common Lisp (examples are Unicode, Concurrency, CLOS-based IO) have been provided by implementations and libraries.
Syntax.
Common Lisp is a dialect of Lisp. It uses S-expressions to denote both code and data structure. Function calls, macro forms and special forms are written as lists, with the name of the operator first, as in these examples:
(+ 2 2) ; adds 2 and 2, yielding 4. The function's name is '+'. Lisp has no operators as such.
(defvar *x*) ; Ensures that a variable *x* exists,
; without giving it a value. The asterisks are part of
; the name, by convention denoting a special (global) variable.
; The symbol *x* is also hereby endowed with the property that
; subsequent bindings of it are dynamic, rather than lexical.
(setf *x* 42.1) ; Sets the variable *x* to the floating-point value 42.1
;; Define a function that squares a number:
(defun square (x)
(* x x))
;; Execute the function:
(square 3) ; Returns 9
;; The 'let' construct creates a scope for local variables. Here
;; the variable 'a' is bound to 6 and the variable 'b' is bound
;; to 4. Inside the 'let' is a 'body', where the last computed value is returned.
;; Here the result of adding a and b is returned from the 'let' expression.
;; The variables a and b have lexical scope, unless the symbols have been
;; marked as special variables (for instance by a prior DEFVAR).
(let ((a 6)
(b 4))
(+ a b)) ; returns 10
Data types.
Common Lisp has many data types.
Scalar types.
"Number" types include integers, ratios, floating-point numbers, and complex numbers. Common Lisp uses bignums to represent numerical values of arbitrary size and precision. The ratio type represents fractions exactly, a facility not available in many languages. Common Lisp automatically coerces numeric values among these types as appropriate.
The Common Lisp "character" type is not limited to ASCII characters. Most modern implementations allow Unicode characters.
The "symbol" type is common to Lisp languages, but largely unknown outside them. A symbol is a unique, named data object with several parts: name, value, function, property list, and package. Of these, "value cell" and "function cell" are the most important. Symbols in Lisp are often used similarly to identifiers in other languages: to hold the value of a variable; however there are many other uses. Normally, when a symbol is evaluated, its value is returned. Some symbols evaluate to themselves, for example, all symbols in the keyword package are self-evaluating. Boolean values in Common Lisp are represented by the self-evaluating symbols T and NIL. Common Lisp has namespaces for symbols, called 'packages'.
A number of functions are available for rounding scalar numeric values in various ways. The function round rounds the argument to the nearest integer, with halfway cases rounded to the even integer. The functions truncate, floor, and ceiling round towards zero, down, or up respectively. All these functions return the discarded fractional part as a secondary value. For example, (floor -2.5) yields −3, 0.5; (ceiling -2.5) yields −2, −0.5; (floor 2.5) yields 2, 0.5; and (ceiling 3.5) yields 4, −0.5.
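For illustration, the halfway-to-even behaviour of round, and one way a caller can capture both returned values (multiple values are described later in this article):
(round 2.5) ; => 2, 0.5 (2 is even, so 2.5 rounds down)
(round 3.5) ; => 4, -0.5 (4 is even, so 3.5 rounds up)

;; Receiving both the quotient and the remainder:
(multiple-value-bind (q r)
    (truncate 10 3)
  (list q r)) ; => (3 1)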
Data structures.
"Sequence" types in Common Lisp include lists, vectors, bit-vectors, and strings. There are many operations that can work on any sequence type.
As in almost all other Lisp dialects, "lists" in Common Lisp are composed of "conses", sometimes called "cons cells" or "pairs". A cons is a data structure with two slots, called its "car" and "cdr". A list is a linked chain of conses or the empty list. Each cons's car refers to a member of the list (possibly another list). Each cons's cdr refers to the next cons—except for the last cons in a list, whose cdr refers to the nil value. Conses can also easily be used to implement trees and other complex data structures, though it is usually advised to use structure or class instances instead. It is also possible to create circular data structures with conses.
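A short sketch of building and inspecting lists from conses:
(cons 1 2) ; => (1 . 2), a single cons with car 1 and cdr 2
(cons 1 (cons 2 nil)) ; => (1 2), a proper list built from two conses
(list 1 2 3) ; => (1 2 3), shorthand for the nested conses above
(car '(1 2 3)) ; => 1
(cdr '(1 2 3)) ; => (2 3)
(cdr '(3)) ; => NIL, the cdr of the last cons in a list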
Common Lisp supports multidimensional "arrays", and can dynamically resize "adjustable" arrays if required. Multidimensional arrays can be used for matrix mathematics. A "vector" is a one-dimensional array. Arrays can carry any type as members (even mixed types in the same array) or can be specialized to contain a specific type of members, as in a vector of bits. Usually, only a few types are supported. Many implementations can optimize array functions when the array used is type-specialized. Two type-specialized array types are standard: a "string" is a vector of characters, while a "bit-vector" is a vector of bits.
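A sketch of array creation and use, including an adjustable vector with a fill pointer (the variable names are illustrative):
;; A 2x3 array with every element initialised to 0
(defvar *grid* (make-array '(2 3) :initial-element 0))
(setf (aref *grid* 0 2) 7) ; set row 0, column 2
(aref *grid* 0 2) ; => 7

;; An adjustable vector specialised to characters, i.e. a growable string
(defvar *buffer* (make-array 0 :element-type 'character
                               :adjustable t :fill-pointer 0))
(vector-push-extend #\a *buffer*)
(vector-push-extend #\b *buffer*)
*buffer* ; => "ab"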
"Hash tables" store associations between data objects. Any object may be used as key or value. Hash tables are automatically resized as needed.
"Packages" are collections of symbols, used chiefly to separate the parts of a program into namespaces. A package may "export" some symbols, marking them as part of a public interface. Packages can use other packages.
"Structures", similar in use to C structs and Pascal records, represent arbitrary complex data structures with any number and type of fields (called "slots"). Structures allow single-inheritance.
"Classes" are similar to structures, but offer more dynamic features and multiple-inheritance. (See CLOS). Classes have been added late to Common Lisp and there is some conceptual overlap with structures. Objects created of classes are called "Instances". A special case is Generic Functions. Generic Functions are both functions and instances.
Functions.
Common Lisp supports first-class functions. For instance, it is possible to write functions that take other functions as arguments or return functions as well. This makes it possible to describe very general operations.
The Common Lisp library relies heavily on such higher-order functions. For example, the sort function takes a relational operator as an argument and a key function as an optional keyword argument. This can be used not only to sort any sequence type, but also to sort data structures according to a key.
;; Sorts the list using the > and < function as the relational operator.
(sort (list 5 2 6 3 1 4) #'>) ; Returns (6 5 4 3 2 1)
(sort (list 5 2 6 3 1 4) #'<) ; Returns (1 2 3 4 5 6)
;; Sorts the list according to the first element of each sub-list.
(sort (list '(9 A) '(3 B) '(4 C)) #'< :key #'first) ; Returns ((3 B) (4 C) (9 A))
The evaluation model for functions is very simple. When the evaluator encounters a form (f a1 a2 ... an), it presumes that the symbol named f is a special operator, a previously defined macro, or the name of a function.
If f is the name of a function, then the arguments a1, a2, ..., an are evaluated in left-to-right order, and the function is found and invoked with those values supplied as parameters.
Defining functions.
The macro defun defines functions, where a function definition gives the name of the function, the names of any arguments, and a function body:
(defun square (x)
(* x x))
Function definitions may include compiler directives, known as "declarations", which provide hints to the compiler about optimization settings or the data types of arguments. They may also include "documentation strings" (docstrings), which the Lisp system may use to provide interactive documentation:
(defun square (x)
"Calculates the square of the single-float x."
(declare (single-float x) (optimize (speed 3) (debug 0) (safety 1)))
(the single-float (* x x)))
Anonymous functions (function literals) are defined using lambda expressions, e.g. (lambda (x) (* x x)) for a function that squares its argument. Lisp programming style frequently uses higher-order functions, for which it is useful to provide anonymous functions as arguments.
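Two short examples of passing anonymous and named functions to higher-order functions:
;; Square every element of a list with an anonymous function
(mapcar (lambda (x) (* x x)) '(1 2 3 4)) ; => (1 4 9 16)

;; Keep only the even numbers, passing the predicate EVENP by name
(remove-if-not #'evenp '(1 2 3 4 5 6)) ; => (2 4 6)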
Local functions can be defined with flet and labels.
(flet ((square (x)
(* x x)))
(square 3))
There are several other operators related to the definition and manipulation of functions. For instance, a function may be compiled with the compile operator. (Some Lisp systems run functions using an interpreter by default unless instructed to compile; others compile every function).
Defining generic functions and methods.
The macro defgeneric defines generic functions. A generic function is a collection of methods.
The macro defmethod defines methods.
Methods can specialize their parameters over CLOS "standard classes", "system classes", "structure classes" or individual objects. For many types, there are corresponding "system classes".
When a generic function is called, multiple dispatch determines the effective method to use.
(defgeneric add (a b))
(defmethod add ((a number) (b number))
(+ a b))
(defmethod add ((a vector) (b number))
(map 'vector (lambda (n) (+ n b)) a))
(defmethod add ((a vector) (b vector))
(map 'vector #'+ a b))
(defmethod add ((a string) (b string))
  (concatenate 'string a b))
(add 2 3) ; returns 5
(add #(1 2 3 4) 7) ; returns #(8 9 10 11)
(add #(1 2 3 4) #(4 3 2 1)) ; returns #(5 5 5 5)
(add "COMMON " "LISP") ; returns "COMMON LISP"
Generic Functions are also a first class data type. There are many more features to Generic Functions and Methods than described above.
The function namespace.
The namespace for function names is separate from the namespace for data variables. This is a key difference between Common Lisp and Scheme. For Common Lisp, operators that define names in the function namespace include defun, flet, labels, defmethod and defgeneric.
To pass a function by name as an argument to another function, one must use the function special operator, commonly abbreviated as #'. The first sort example above refers to the function named by the symbol > in the function namespace, with the code #'>. Conversely, to call a function passed in such a way, one would use the funcall operator on the argument.
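A sketch of receiving a function as an argument and invoking it with funcall (the helper name apply-twice is made up for this example):
(defun apply-twice (f x)
  "Call the function F on X, then call it again on the result."
  (funcall f (funcall f x)))

(apply-twice #'1+ 10) ; => 12
(apply-twice (lambda (n) (* n n)) 3) ; => 81, i.e. (3*3) squared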
Scheme's evaluation model is simpler: there is only one namespace, and all positions in the form are evaluated (in any order) – not just the arguments. Code written in one dialect is therefore sometimes confusing to programmers more experienced in the other. For instance, many Common Lisp programmers like to use descriptive variable names such as "list" or "string" which could cause problems in Scheme, as they would locally shadow function names.
Whether a separate namespace for functions is an advantage is a source of contention in the Lisp community. It is usually referred to as the "Lisp-1 vs. Lisp-2 debate". Lisp-1 refers to Scheme's model and Lisp-2 refers to Common Lisp's model. These names were coined in a 1988 paper by Richard P. Gabriel and Kent Pitman, which extensively compares the two approaches.
Multiple return values.
Common Lisp supports the concept of "multiple values", where any expression always has a single "primary value", but it might also have any number of "secondary values", which might be received and inspected by interested callers. This concept is distinct from returning a list value, as the secondary values are fully optional, and passed via a dedicated side channel. This means that callers may remain entirely unaware of the secondary values being there if they have no need for them, and it makes it convenient to use the mechanism for communicating information that is sometimes useful, but not always necessary. For example,
(let ((x 1266778)
(y 458))
(multiple-value-bind (quotient remainder)
(truncate x y)
(format nil "~A divided by ~A is ~A remainder ~A" x y quotient remainder)))
(defun get-answer (library)
(gethash 'answer library 42))
(defun the-answer-1 (library)
(format nil "The answer is ~A" (get-answer library)))
(defun the-answer-2 (library)
(multiple-value-bind (answer sure-p)
(get-answer library)
(if (not sure-p)
"I don't know"
(format nil "The answer is ~A" answer))))
Multiple values are supported by a handful of standard forms, most common of which are the codice_35 special form for accessing secondary values and codice_36 for returning multiple values:
"Return an outlook prediction, with the probability as a secondary value"
(values "Outlook good" (random 1.0)))
Other types.
Other data types in Common Lisp include:
Scope.
Like programs in many other programming languages, Common Lisp programs make use of names to refer to variables, functions, and many other kinds of entities. Named references are subject to scope.
The association between a name and the entity which the name refers to is called a binding.
Scope refers to the set of circumstances in which a name is determined to have a particular binding.
Determiners of scope.
The circumstances which determine scope in Common Lisp include:
To understand what a symbol refers to, the Common Lisp programmer must know what kind of reference is being expressed, what kind of scope it uses if it is a variable reference (dynamic versus lexical scope), and also the run-time situation: in what environment is the reference resolved, where was the binding introduced into the environment, et cetera.
Kinds of environment.
Global.
Some environments in Lisp are globally pervasive. For instance, if a new type is defined, it is known everywhere thereafter. References to that type look it up in this global environment.
Dynamic.
One type of environment in Common Lisp is the dynamic environment. Bindings established in this environment have dynamic extent, which means that a binding is established at the start of the execution of some construct, such as a codice_51 block, and disappears when that construct finishes executing: its lifetime is tied to the dynamic activation and deactivation of a block. However, a dynamic binding is not just visible within that block; it is also visible to all functions invoked from that block. This type of visibility is known as indefinite scope. Bindings which exhibit dynamic extent (lifetime tied to the activation and deactivation of a block) and indefinite scope (visible to all functions which are called from that block) are said to have dynamic scope.
Common Lisp has support for dynamically scoped variables, which are also called special variables. Certain other kinds of bindings are necessarily dynamically scoped also, such as restarts and catch tags. Function bindings cannot be dynamically scoped using codice_17 (which only provides lexically scoped function bindings), but function objects (a first-level object in Common Lisp) can be assigned to dynamically scoped variables, bound using codice_51 in dynamic scope, then called using codice_32 or codice_57.
Dynamic scope is extremely useful because it adds referential clarity and discipline to global variables. Global variables are frowned upon in computer science as potential sources of error, because they can give rise to ad-hoc, covert channels of communication among modules that lead to unwanted, surprising interactions.
In Common Lisp, a special variable which has only a top-level binding behaves just like a global variable in other programming languages. A new value can be stored into it, and that value simply replaces what is in the top-level binding. Careless replacement of the value of a global variable is at the heart of bugs caused by the use of global variables. However, another way to work with a special variable is to give it a new, local binding within an expression. This is sometimes referred to as "rebinding" the variable. Binding a dynamically scoped variable temporarily creates a new memory location for that variable, and associates the name with that location. While that binding is in effect, all references to that variable refer to the new binding; the previous binding is hidden. When execution of the binding expression terminates, the temporary memory location is gone, and the old binding is revealed, with the original value intact. Of course, multiple dynamic bindings for the same variable can be nested.
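A minimal sketch of such rebinding (the names *greeting* and greet are chosen for illustration):
(defvar *greeting* "Hello")            ; special variable with a top-level binding
(defun greet ()
  (format t "~A~%" *greeting*))
(greet)                                ; prints Hello
(let ((*greeting* "Bonjour"))          ; establishes a new dynamic binding
  (greet))                             ; prints Bonjour, although GREET is not lexically inside the LET
(greet)                                ; prints Hello again; the old binding is restored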
In Common Lisp implementations which support multithreading, dynamic scopes are specific to each thread of execution. Thus special variables serve as an abstraction for thread local storage. If one thread rebinds a special variable, this rebinding has no effect on that variable in other threads. The value stored in a binding can only be retrieved by the thread which created that binding. If each thread binds some special variable codice_58, then codice_58 behaves like thread-local storage. Among threads which do not rebind codice_58, it behaves like an ordinary global: all of these threads refer to the same top-level binding of codice_58.
Dynamic variables can be used to extend the execution context with additional context information which is implicitly passed from function to function without having to appear as an extra function parameter. This is especially useful when the control transfer has to pass through layers of unrelated code, which simply cannot be extended with extra parameters to pass the additional data. A situation like this usually calls for a global variable. That global variable must be saved and restored, so that the scheme doesn't break under recursion: dynamic variable rebinding takes care of this. And that variable must be made thread-local (or else a big mutex must be used) so the scheme doesn't break under threads: dynamic scope implementations can take care of this also.
In the Common Lisp library, there are many standard special variables. For instance, all standard I/O streams are stored in the top-level bindings of well-known special variables. The standard output stream is stored in *standard-output*.
Suppose a function foo writes to standard output:
(defun foo ()
(format t "Hello, world"))
To capture its output in a character string, *standard-output* can be bound to a string stream and called:
(with-output-to-string (*standard-output*)
(foo))
-> "Hello, world" ; gathered output returned as a string
Lexical.
Common Lisp supports lexical environments. Formally, the bindings in a lexical environment have lexical scope and may have either an indefinite extent or dynamic extent, depending on the type of namespace. Lexical scope means that visibility is physically restricted to the block in which the binding is established. References which are not textually (i.e. lexically) embedded in that block simply do not see that binding.
The tags in a TAGBODY have lexical scope. The expression (GO X) is erroneous if it is not embedded in a TAGBODY which contains a label X. However, the label bindings disappear when the TAGBODY terminates its execution, because they have dynamic extent. If that block of code is re-entered by the invocation of a lexical closure, it is invalid for the body of that closure to try to transfer control to a tag via GO:
(defvar *stashed*) ;; will hold a function
(tagbody
(setf *stashed* (lambda () (go some-label)))
(go end-label) ;; skip the (print "Hello")
some-label
(print "Hello")
end-label)
-> NIL
When the TAGBODY is executed, it first evaluates the setf form which stores a function in the special variable *stashed*. Then the (go end-label) transfers control to end-label, skipping the code (print "Hello"). Since end-label is at the end of the tagbody, the tagbody terminates, yielding NIL. Suppose that the previously remembered function is now called:
(funcall *stashed*) ;; Error!
This situation is erroneous. One implementation's response is an error condition containing the message, "GO: tagbody for tag SOME-LABEL has already been left". The function tried to evaluate (go some-label), which is lexically embedded in the tagbody, and resolves to the label. However, the tagbody isn't executing (its extent has ended), and so the control transfer cannot take place.
Local function bindings in Lisp have lexical scope, and variable bindings also have lexical scope by default. By contrast with GO labels, both of these have indefinite extent. When a lexical function or variable binding is established, that binding continues to exist for as long as references to it are possible, even after the construct which established that binding has terminated. References to lexical variables and functions after the termination of their establishing construct are possible thanks to lexical closures.
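A typical illustration is a counter closure (the names make-counter and *counter* are chosen for this sketch); the lexical binding of count outlives the call that established it:
(defun make-counter ()
  (let ((count 0))                ; lexical binding with indefinite extent
    (lambda () (incf count))))    ; the returned closure keeps COUNT alive
(defparameter *counter* (make-counter))
(funcall *counter*)               ; returns 1
(funcall *counter*)               ; returns 2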
Lexical binding is the default binding mode for Common Lisp variables. For an individual symbol, it can be switched to dynamic scope, either by a local declaration or by a global declaration. The latter may occur implicitly through the use of a construct like DEFVAR or DEFPARAMETER. It is an important convention in Common Lisp programming that special (i.e. dynamically scoped) variables have names which begin and end with an asterisk sigil codice_62 in what is called the "earmuff convention". If adhered to, this convention effectively creates a separate namespace for special variables, so that variables intended to be lexical are not accidentally made special.
Lexical scope is useful for several reasons.
Firstly, references to variables and functions can be compiled to efficient machine code, because the run-time environment structure is relatively simple. In many cases it can be optimized to stack storage, so opening and closing lexical scopes has minimal overhead. Even in cases where full closures must be generated, access to the closure's environment is still efficient; typically each variable becomes an offset into a vector of bindings, and so a variable reference becomes a simple load or store instruction with a base-plus-offset addressing mode.
Secondly, lexical scope (combined with indefinite extent) gives rise to the lexical closure, which in turn creates a whole paradigm of programming centered around the use of functions being first-class objects, which is at the root of functional programming.
Thirdly, perhaps most importantly, even if lexical closures are not exploited, the use of lexical scope isolates program modules from unwanted interactions. Due to their restricted visibility, lexical variables are private. If one module A binds a lexical variable X, and calls another module B, references to X in B will not accidentally resolve to the X bound in A. B simply has no access to X. For situations in which disciplined interactions through a variable are desirable, Common Lisp provides special variables. Special variables allow for a module A to set up a binding for a variable X which is visible to another module B, called from A. Being able to do this is an advantage, and being able to prevent it from happening is also an advantage; consequently, Common Lisp supports both lexical and dynamic scope.
Macros.
A "macro" in Lisp superficially resembles a function in usage. However, rather than representing an expression which is evaluated, it represents a transformation of the program source code. The macro gets the source it surrounds as arguments, binds them to its parameters and computes a new source form. This new form can also use a macro. The macro expansion is repeated until the new source form does not use a macro. The final computed form is the source code executed at runtime.
Typical uses of macros in Lisp:
Various standard Common Lisp features also need to be implemented as macros, such as:
Macros are defined by the "defmacro" macro. The special operator "macrolet" allows the definition of local (lexically scoped) macros. It is also possible to define macros for symbols using "define-symbol-macro" and "symbol-macrolet".
Paul Graham's book On Lisp describes the use of macros in Common Lisp in detail. Doug Hoyte's book Let Over Lambda extends the discussion on macros, claiming "Macros are the single greatest advantage that lisp has as a programming language and the single greatest advantage of any programming language." Hoyte provides several examples of iterative development of macros.
Example using a macro to define a new control structure.
Macros allow Lisp programmers to create new syntactic forms in the language. One typical use is to create new control structures. The example macro provides an codice_73 looping construct. The syntax is:
(until test form*)
The macro definition for "until":
(defmacro until (test &body body)
(let ((start-tag (gensym "START"))
(end-tag (gensym "END")))
`(tagbody ,start-tag
(when ,test (go ,end-tag))
(progn ,@body)
(go ,start-tag)
,end-tag)))
"tagbody" is a primitive Common Lisp special operator which provides the ability to name tags and use the "go" form to jump to those tags. The backquote "`" provides a notation that provides code templates, where the value of forms preceded with a comma are filled in. Forms preceded with comma and at-sign are "spliced" in. The tagbody form tests the end condition. If the condition is true, it jumps to the end tag. Otherwise, the provided body code is executed and then it jumps to the start tag.
An example of using the above "until" macro:
(write-line "Hello"))
The code can be expanded using the function "macroexpand-1". The expansion for the above example looks like this:
(TAGBODY
#:START1136
(WHEN (ZEROP (RANDOM 10))
(GO #:END1137))
(PROGN (WRITE-LINE "Hello"))
(GO #:START1136)
#:END1137)
During macro expansion the value of the variable "test" is "(= (random 10) 0)" and the value of the variable "body" is "((write-line "Hello"))". The body is a list of forms.
Symbols are usually automatically upcased. The expansion uses the TAGBODY with two labels. The symbols for these labels are computed by GENSYM and are not interned in any package. Two "go" forms use these tags as jump targets. Since "tagbody" is a primitive operator in Common Lisp (and not a macro), it will not be expanded into something else. The expanded form uses the "when" macro, which also will be expanded. Fully expanding a source form is called "code walking".
In the fully expanded ("walked") form, the "when" form is replaced by the primitive "if":
(TAGBODY
#:START1136
(IF (ZEROP (RANDOM 10))
(PROGN (GO #:END1137))
NIL)
(PROGN (WRITE-LINE "Hello"))
(GO #:START1136)
#:END1137)
All macros must be expanded before the source code containing them can be evaluated or compiled normally. Macros can be considered functions that accept and return S-expressions – similar to abstract syntax trees, but not limited to those. These functions are invoked before the evaluator or compiler to produce the final source code. Macros are written in normal Common Lisp, and may use any Common Lisp (or third-party) operator available.
Variable capture and shadowing.
Common Lisp macros are capable of what is commonly called "variable capture", where symbols in the macro-expansion body coincide with those in the calling context, allowing the programmer to create macros wherein various symbols have special meaning. The term "variable capture" is somewhat misleading, because all namespaces are vulnerable to unwanted capture, including the operator and function namespace, the tagbody label namespace, catch tag, condition handler and restart namespaces.
"Variable capture" can introduce software defects. This happens in one of the following two ways:
The Scheme dialect of Lisp provides a macro-writing system which provides the referential transparency that eliminates both types of capture problem. This type of macro system is sometimes called "hygienic", in particular by its proponents (who regard macro systems which do not automatically solve this problem as unhygienic).
In Common Lisp, macro hygiene is ensured one of two different ways.
One approach is to use gensyms: guaranteed-unique symbols which can be used in a macro-expansion without threat of capture. The use of gensyms in a macro definition is a manual chore, but macros can be written which simplify the instantiation and use of gensyms. Gensyms solve type 2 capture easily, but they are not applicable to type 1 capture in the same way, because the macro expansion cannot rename the interfering symbols in the surrounding code which capture its references. Gensyms could be used to provide stable aliases for the global symbols which the macro expansion needs. The macro expansion would use these secret aliases rather than the well-known names, so redefinition of the well-known names would have no ill effect on the macro.
Another approach is to use packages. A macro defined in its own package can simply use internal symbols in that package in its expansion. The use of packages deals with type 1 and type 2 capture.
However, packages don't solve the type 1 capture of references to standard Common Lisp functions and operators. The reason is that the use of packages to solve capture problems revolves around the use of private symbols (symbols in one package, which are not imported into, or otherwise made visible in, other packages). The Common Lisp library symbols, by contrast, are external, and frequently imported into or made visible in user-defined packages.
The following is an example of unwanted capture in the operator namespace, occurring in the expansion of a macro:
;; expansion of UNTIL makes liberal use of DO
(defmacro until (expression &body body)
`(do () (,expression) ,@body))
;; macrolet establishes lexical operator binding for DO
(macrolet ((do (...) ... something else ...))
(until (= (random 10) 0) (write-line "Hello")))
The codice_73 macro will expand into a form which calls codice_75 which is intended to refer to the standard Common Lisp macro codice_75. However, in this context, codice_75 may have a completely different meaning, so codice_73 may not work properly.
Common Lisp solves the problem of the shadowing of standard operators and functions by forbidding their redefinition. Because it redefines the standard operator codice_75, the preceding is actually a fragment of non-conforming Common Lisp, which allows implementations to diagnose and reject it.
Condition system.
The "condition system" is responsible for exception handling in Common Lisp. It provides "conditions", "handler"s and "restart"s. "Condition"s are objects describing an exceptional situation (for example an error). If a "condition" is signaled, the Common Lisp system searches for a "handler" for this condition type and calls the handler. The "handler" can now search for restarts and use one of these restarts to automatically repair the current problem, using information such as the condition type and any relevant information provided as part of the condition object, and call the appropriate restart function.
These restarts, if unhandled by code, can be presented to users (as part of a user interface, that of a debugger for example), so that the user can select and invoke one of the available restarts. Since the condition handler is called in the context of the error (without unwinding the stack), full error recovery is possible in many cases, where other exception handling systems would have already terminated the current routine. The debugger itself can also be customized or replaced using the codice_80 dynamic variable. Code found within "unwind-protect" forms such as finalizers will also be executed as appropriate despite the exception.
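A schematic example (the condition, function and restart names are invented for illustration) of a handler repairing a problem by invoking a restart, without unwinding the stack past the signaling function:
(define-condition negative-input (error)
  ((value :initarg :value :reader negative-input-value)))
(defun careful-sqrt (x)
  (restart-case
      (if (minusp x)
          (error 'negative-input :value x)
          (sqrt x))
    (use-zero () 0)))                        ; a restart that handlers or users may invoke
(handler-bind ((negative-input
                 (lambda (condition)
                   (declare (ignore condition))
                   (invoke-restart 'use-zero))))
  (careful-sqrt -4))                         ; returns 0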
In the following example (using Symbolics Genera) the user tries to open a file in a Lisp function "test" called from the Read-Eval-Print-LOOP (REPL), when the file does not exist. The Lisp system presents four restarts. The user selects the "Retry OPEN using a different pathname" restart and enters a different pathname (lispm-init.lisp instead of lispm-int.lisp). The user code does not contain any error handling code. The whole error handling and restart code is provided by the Lisp system, which can handle and repair the error without terminating the user code.
Command: (test ">zippy>lispm-int.lisp")
Error: The file was not found.
For lispm:>zippy>lispm-int.lisp.newest
LMFS:OPEN-LOCAL-LMFS-1
Arg 0: #P"lispm:>zippy>lispm-int.lisp.newest"
s-A, <Resume>: Retry OPEN of lispm:>zippy>lispm-int.lisp.newest
s-B: Retry OPEN using a different pathname
s-C, <Abort>: Return to Lisp Top Level in a TELNET server
s-D: Restart process TELNET terminal
-> Retry OPEN using a different pathname
Use what pathname instead [default lispm:>zippy>lispm-int.lisp.newest]:
lispm:>zippy>lispm-init.lisp.newest
...the program continues
Common Lisp Object System (CLOS).
Common Lisp includes a toolkit for object-oriented programming, the Common Lisp Object System or CLOS. Peter Norvig explains how many Design Patterns are simpler to implement in a dynamic language with the features of CLOS (Multiple Inheritance, Mixins, Multimethods, Metaclasses, Method combinations, etc.).
Several extensions to Common Lisp for object-oriented programming have been proposed to be included into the ANSI Common Lisp standard, but eventually CLOS was adopted as the standard object-system for Common Lisp. CLOS is a dynamic object system with multiple dispatch and multiple inheritance, and differs radically from the OOP facilities found in static languages such as C++ or Java. As a dynamic object system, CLOS allows changes at runtime to generic functions and classes. Methods can be added and removed, classes can be added and redefined, objects can be updated for class changes and the class of objects can be changed.
CLOS has been integrated into ANSI Common Lisp. Generic functions can be used like normal functions and are a first-class data type. Every CLOS class is integrated into the Common Lisp type system. Many Common Lisp types have a corresponding class. There is more potential use of CLOS for Common Lisp. The specification does not say whether conditions are implemented with CLOS. Pathnames and streams could be implemented with CLOS. These further usage possibilities of CLOS for ANSI Common Lisp are not part of the standard. Actual Common Lisp implementations use CLOS for pathnames, streams, input–output, conditions, the implementation of CLOS itself and more.
Compiler and interpreter.
A Lisp interpreter directly executes Lisp source code provided as Lisp objects (lists, symbols, numbers, ...) read from s-expressions. A Lisp compiler generates bytecode or machine code from Lisp source code. Common Lisp allows both individual Lisp functions to be compiled in memory and the compilation of whole files to externally stored compiled code ("fasl" files).
Several implementations of earlier Lisp dialects provided both an interpreter and a compiler. Unfortunately often the semantics were different. These earlier Lisps implemented lexical scoping in the compiler and dynamic scoping in the interpreter. Common Lisp requires that both the interpreter and compiler use lexical scoping by default. The Common Lisp standard describes both the semantics of the interpreter and a compiler. The compiler can be called using the function "compile" for individual functions and using the function "compile-file" for files. Common Lisp allows type declarations and provides ways to influence the compiler code generation policy. For the latter various optimization qualities can be given values between 0 (not important) and 3 (most important): "speed", "space", "safety", "debug" and "compilation-speed".
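A sketch of such declarations (the function name dot is chosen for illustration):
(defun dot (a b)
  "Dot product of two single-float vectors, tuned for speed over safety."
  (declare (optimize (speed 3) (safety 0) (debug 0))
           (type (simple-array single-float (*)) a b))
  (loop for x across a
        for y across b
        sum (* x y) of-type single-float))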
There is also a function to evaluate Lisp code: codice_81. codice_81 takes code as pre-parsed s-expressions and not, as in some other languages, as text strings. This way code can be constructed with the usual Lisp functions for constructing lists and symbols, and then evaluated with codice_81. Several Common Lisp implementations (such as Clozure CL and SBCL) implement codice_81 using their compiler, so code is compiled even though it is evaluated using the function codice_81.
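For example, a form can be built as an ordinary list and then handed to codice_81:
(let ((form (list '+ 1 2 3)))  ; construct the list (+ 1 2 3)
  (eval form))                 ; returns 6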
The file compiler is invoked using the function "compile-file". The generated file with compiled code is called a "fasl" (from "fast load") file. These "fasl" files and also source code files can be loaded with the function "load" into a running Common Lisp system. Depending on the implementation, the file compiler generates byte-code (for example for the Java Virtual Machine), C language code (which then is compiled with a C compiler) or, directly, native code.
Common Lisp implementations can be used interactively, even though the code gets fully compiled. The idea of an interpreted language thus does not apply to interactive Common Lisp.
The language makes a distinction between read-time, compile-time, load-time, and run-time, and allows user code to also make this distinction to perform the wanted type of processing at the wanted step.
Some special operators are provided to especially suit interactive development; for instance, codice_86 will only assign a value to its provided variable if it wasn't already bound, while codice_87 will always perform the assignment. This distinction is useful when interactively evaluating, compiling and loading code in a live image.
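A short sketch of the difference (the variable names *cache* and *limit* are chosen for illustration):
(defvar *cache* (make-hash-table))     ; binds *CACHE* only because it is currently unbound
(defvar *cache* (make-hash-table))     ; already bound: the initial value form is not evaluated, the existing table is kept
(defparameter *limit* 10)              ; always assigns
(defparameter *limit* 20)              ; *LIMIT* is now 20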
Some features are also provided to help with writing compilers and interpreters. Symbols are first-class objects and are directly manipulable by user code. The codice_88 special operator allows lexical bindings to be created programmatically, while packages are also manipulable. The Lisp compiler is available at runtime to compile files or individual functions. These make it easy to use Lisp as an intermediate compiler or interpreter for another language.
Code examples.
Birthday paradox.
The following program calculates the smallest number of people in a room for whom the probability of unique birthdays is less than 50% (the birthday paradox, where for 1 person the probability is obviously 100%, for 2 it is 364/365, etc.). The answer is 23.
In Common Lisp, by convention, constants are enclosed with + characters.
(defconstant +year-size+ 365)
(defun birthday-paradox (probability number-of-people)
(let ((new-probability (* (/ (- +year-size+ number-of-people)
+year-size+)
probability)))
(if (< new-probability 0.5)
(1+ number-of-people)
(birthday-paradox new-probability (1+ number-of-people)))))
Calling the example function using the REPL (Read Eval Print Loop):
CL-USER > (birthday-paradox 1.0 1)
23
Sorting a list of person objects.
We define a class codice_89 and a method for displaying the name and age of a person.
Next we define a group of persons as a list of codice_89 objects.
Then we iterate over the sorted list.
(defclass person ()
((name :initarg :name :accessor person-name)
(age :initarg :age :accessor person-age))
(:documentation "The class PERSON with slots NAME and AGE."))
"Displaying a PERSON object to an output stream."
(with-slots (name age) object
(format stream "~a (~a)" name age)))
(defparameter *group*
(list (make-instance 'person :name "Bob" :age 33)
(make-instance 'person :name "Chris" :age 16)
(make-instance 'person :name "Ash" :age 23))
"A list of PERSON objects.")
(dolist (person (sort (copy-list *group*)
#'>
:key #'person-age))
(display person *standard-output*)
(terpri))
It prints the three names in descending order of age.
Bob (33)
Ash (23)
Chris (16)
Exponentiating by squaring.
Use of the LOOP macro is demonstrated:
(defun power (x n)
(loop with result = 1
while (plusp n)
when (oddp n) do (setf result (* result x))
do (setf x (* x x)
n (truncate n 2))
finally (return result)))
Example use:
CL-USER > (power 2 200)
1606938044258990275541962092341162602522202993782792835301376
Compare with the built in exponentiation:
CL-USER > (= (expt 2 200) (power 2 200))
T
Find the list of available shells.
WITH-OPEN-FILE is a macro that opens a file and provides a stream. When the form is returning, the file is automatically closed. FUNCALL calls a function object. The LOOP collects all lines that match the predicate.
"Returns a list of lines in file, for which the predicate applied to
the line returns T."
(with-open-file (stream file)
(loop for line = (read-line stream nil nil)
while line
when (funcall predicate line)
collect it)))
The function AVAILABLE-SHELLS calls the above function LIST-MATCHING-LINES with a pathname and an anonymous function as the predicate. The predicate returns the pathname of a shell or NIL (if the string is not the filename of a shell).
(defun available-shells (&optional (file #p"/etc/shells"))
(list-matching-lines
file
(lambda (line)
(and (plusp (length line))
(char= (char line 0) #\/)
(pathname
(string-right-trim '(#\space #\tab) line))))))
Example results (on Mac OS X 10.6):
CL-USER > (available-shells)
Comparison with other Lisps.
Common Lisp is most frequently compared with, and contrasted to, Scheme—if only because they are the two most popular Lisp dialects. Scheme predates CL, and comes not only from the same Lisp tradition but from some of the same engineers—Guy Steele, with whom Gerald Jay Sussman designed Scheme, chaired the standards committee for Common Lisp.
Common Lisp is a general-purpose programming language, in contrast to Lisp variants such as Emacs Lisp and AutoLISP which are extension languages embedded in particular products (GNU Emacs and AutoCAD, respectively). Unlike many earlier Lisps, Common Lisp (like Scheme) uses lexical variable scope by default for both interpreted and compiled code.
Most of the Lisp systems whose designs contributed to Common Lisp—such as ZetaLisp and Franz Lisp—used dynamically scoped variables in their interpreters and lexically scoped variables in their compilers. Scheme introduced the sole use of lexically scoped variables to Lisp, an inspiration taken from ALGOL 68. CL supports dynamically scoped variables as well, but they must be explicitly declared as "special". There are no differences in scoping between ANSI CL interpreters and compilers.
Common Lisp is sometimes termed a "Lisp-2" and Scheme a "Lisp-1", referring to CL's use of separate namespaces for functions and variables. (In fact, CL has "many" namespaces, such as those for go tags, block names, and codice_72 keywords). There is a long-standing controversy between CL and Scheme advocates over the tradeoffs involved in multiple namespaces. In Scheme, it is (broadly) necessary to avoid giving variables names that clash with functions; Scheme functions frequently have arguments named codice_92, codice_93, or codice_94 so as not to conflict with the system function codice_95. However, in CL it is necessary to explicitly refer to the function namespace when passing a function as an argument—which is also a common occurrence, as in the codice_11 example above.
CL also differs from Scheme in its handling of Boolean values. Scheme uses the special values #t and #f to represent truth and falsity. CL follows the older Lisp convention of using the symbols T and NIL, with NIL standing also for the empty list. In CL, "any" non-NIL value is treated as true by conditionals, such as codice_68, whereas in Scheme all non-#f values are treated as true. These conventions allow some operators in both languages to serve both as predicates (answering a Boolean-valued question) and as returning a useful value for further computation; however, in Scheme the value '(), which is equivalent to NIL in Common Lisp, evaluates to true in a Boolean expression.
Lastly, the Scheme standards documents require tail-call optimization, which the CL standard does not. Most CL implementations do offer tail-call optimization, although often only when the programmer uses an optimization directive. Nonetheless, common CL coding style does not favor the ubiquitous use of recursion that Scheme style prefers—what a Scheme programmer would express with tail recursion, a CL user would usually express with an iterative expression in codice_75, codice_99, codice_72, or (more recently) with the codice_101 package.
Implementations.
Common Lisp is defined by a specification (like Ada and C) rather than by one implementation (like Perl). There are many implementations, and the standard details areas in which they may validly differ.
In addition, implementations tend to come with extensions, which provide functionality not covered in the standard:
Free and open-source software libraries have been created to support extensions to Common Lisp in a portable way, and are most notably found in the repositories of the Common-Lisp.net and CLOCC (Common Lisp Open Code Collection) projects.
Common Lisp implementations may use any mix of native code compilation, byte code compilation or interpretation. Common Lisp has been designed to support incremental compilers, file compilers and block compilers. Standard declarations to optimize compilation (such as function inlining or type specialization) are proposed in the language specification. Most Common Lisp implementations compile source code to native machine code. Some implementations can create (optimized) stand-alone applications. Others compile to interpreted bytecode, which is less efficient than native code, but eases binary-code portability. Some compilers compile Common Lisp code to C code. The misconception that Lisp is a purely interpreted language arises most likely because Lisp environments provide an interactive prompt and because code is compiled one function at a time, in an incremental way. With Common Lisp incremental compilation is widely used.
Some Unix-based implementations (CLISP, SBCL) can be used as a scripting language; that is, invoked by the system transparently in the way that a Perl or Unix shell interpreter is.
Applications.
Common Lisp is used to develop research applications (often in Artificial Intelligence), for rapid development of prototypes or for deployed applications.
Common Lisp is used in many commercial applications, including the Yahoo! Store web-commerce site, which originally involved Paul Graham and was later rewritten in C++ and Perl. Other notable examples include:
There also exist open-source applications written in Common Lisp, such as:
Bibliography.
A chronological list of books published (or about to be published) about Common Lisp (the language) or about programming with Common Lisp (especially AI programming).
|
6069
|
194203
|
https://en.wikipedia.org/wiki?curid=6069
|
Color code
|
A color code is a system for encoding and representing non-color information with colors to facilitate communication. This information tends to be categorical (representing unordered/qualitative categories) though may also be sequential (representing an ordered/quantitative variable).
History.
The earliest examples of color codes in use are for long-distance communication by use of flags, as in semaphore communication. The United Kingdom adopted a color code scheme for such communication wherein red signified danger and white signified safety, with other colors having similar assignments of meaning.
As chemistry and other technologies advanced, it became expedient to use coloration as a signal for telling apart things that would otherwise be confusingly similar, such as wiring in electrical and electronic devices, and pharmaceutical pills.
Encoded variable.
A color code encodes a variable, which may have different representations, where the color code type should match the variable type:
Types.
The types of color code are:
Categorical.
When color is the only varied attribute, the color code is "unidimensional". When other attributes are varied (e.g. shape, size), the code is "multidimensional", where the dimensions can be "independent" (each encoding separate variables) or "redundant" (encoding the same variable). Partial redundancy sees one variable as a subset of another. For example, playing card suits are multidimensional with color (black, red) and shape (club, diamond, heart, spade), which are partially redundant since clubs and spades are always black and diamonds and hearts are always red. Tasks using categorical color codes can be classified as identification tasks, in which a single stimulus is shown and must be identified (connotatively or denotatively), or search tasks, in which a color stimulus must be found within a field of heterogeneous stimuli. Performance in these tasks is measured by speed and/or accuracy.
The ideal color scheme for a categorical color code depends on whether speed or accuracy is more important. Although humans can distinguish 150 distinct colors along the hue dimension in comparative tasks, evidence suggests that color schemes in which colors differ only by hue (equal luminosity and colorfulness) should have a maximum of eight categories with optimized stimulus spacing along the hue dimension, though this would not be color blind accessible. The IALA recommends categorical color codes in seven colors: red, orange, yellow, green, blue, white and black. Adding redundant coding of luminosity and colorfulness adds information and increases the speed and accuracy of color decoding tasks. Color codes are superior to other codes (letters, shapes, sizes, etc.) in certain types of tasks. Adding color as a redundant attribute to a numeral or letter encoding in search tasks decreased time by 50–75%, but in unidimensional identification tasks, alphanumeric or line-inclination codes caused fewer errors than color codes.
Several studies demonstrate a subjective preference for color codes over achromatic codes (e.g. shapes), even in studies where color coding did not increase performance over achromatic coding. Subjects reported the tasks as less monotonous and less inducing of eye strain and fatigue.
The ability to discriminate color differences decreases rapidly as the visual angle subtends less than 12' (0.2°, or about 2 mm at a viewing distance of 50 cm), so a color stimulus of at least 3 mm in diameter or thickness is recommended when the color appears on paper or on a screen. Under normal conditions, colored backgrounds do not affect the interpretation of color codes, but chromatic (and/or low) illumination of a surface color code can degrade performance.
Criticism.
Color codes present some potential problems. On forms and signage, the use of color can distract from black and white text.
Color codes are often designed without consideration for accessibility to color blind and blind people, and may even be inaccessible for those with normal color vision, since using many colors to encode many variables can lead to confusingly similar colors. Only 15–40% of colorblind people, most of whom test as only mildly colorblind, can correctly name surface color codes with 8–10 color categories. This finding uses ideal illumination; when dimmer illumination is used, performance drops sharply.
Examples.
Systems incorporating color-coding include:
|
6085
|
1298193398
|
https://en.wikipedia.org/wiki?curid=6085
|
Cauchy sequence
|
In mathematics, a Cauchy sequence is a sequence whose elements become arbitrarily close to each other as the sequence progresses. More precisely, given any small positive distance, all excluding a finite number of elements of the sequence are less than that given distance from each other. Cauchy sequences are named after Augustin-Louis Cauchy; they may occasionally be known as fundamental sequences.
It is not sufficient for each term to become arbitrarily close to the preceding term. For instance, in the sequence of square roots of natural numbers:
formula_1
the consecutive terms become arbitrarily close to each other – their differences
formula_2
tend to zero as the index grows. However, with growing values of the index "n", the terms formula_3 become arbitrarily large. So, for any index "n" and distance "d", there exists an index "m" big enough such that formula_4 As a result, no matter how far one goes, the remaining terms of the sequence never get close to each other; hence the sequence is not Cauchy.
The utility of Cauchy sequences lies in the fact that in a complete metric space (one where all such sequences are known to converge to a limit), the criterion for convergence depends only on the terms of the sequence itself, as opposed to the definition of convergence, which uses the limit value as well as the terms. This is often exploited in algorithms, both theoretical and applied, where an iterative process can be shown relatively easily to produce a Cauchy sequence, consisting of the iterates, thus fulfilling a logical condition, such as termination.
Generalizations of Cauchy sequences in more abstract uniform spaces exist in the form of Cauchy filters and Cauchy nets.
In real numbers.
A sequence
formula_5
of real numbers is called a Cauchy sequence if for every positive real number formula_6 there is a positive integer "N" such that for all natural numbers formula_7
formula_8
where the vertical bars denote the absolute value. In a similar way one can define Cauchy sequences of rational or complex numbers. Cauchy formulated such a condition by requiring formula_9 to be infinitesimal for every pair of infinite "m", "n".
For any real number "r", the sequence of truncated decimal expansions of "r" forms a Cauchy sequence. For example, when formula_10 this sequence is (3, 3.1, 3.14, 3.141, ...). The "m"th and "n"th terms differ by at most formula_11 when "m" < "n", and as "m" grows this becomes smaller than any fixed positive number formula_12
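The bound behind this statement can be made explicit. Written in LaTeX notation, with x_m denoting the truncation of "r" to "m" decimal places, a short sketch of the argument is:
|x_m - x_n| \le 10^{-m} \quad \text{whenever } m < n,
\qquad \text{so, given } \varepsilon > 0, \text{ choosing } N \text{ with } 10^{-N} < \varepsilon
\text{ gives } |x_m - x_n| < \varepsilon \text{ for all } m, n > N.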
Modulus of Cauchy convergence.
If formula_13 is a sequence in the set formula_14 then a "modulus of Cauchy convergence" for the sequence is a function formula_15 from the set of natural numbers to itself, such that for all natural numbers formula_16 and natural numbers formula_17 formula_18
Any sequence with a modulus of Cauchy convergence is a Cauchy sequence. The existence of a modulus for a Cauchy sequence follows from the well-ordering property of the natural numbers (let formula_19 be the smallest possible formula_20 in the definition of Cauchy sequence, taking formula_21 to be formula_22). The existence of a modulus also follows from the principle of countable choice. "Regular Cauchy sequences" are sequences with a given modulus of Cauchy convergence (usually formula_23 or formula_24). Any Cauchy sequence with a modulus of Cauchy convergence is equivalent to a regular Cauchy sequence; this can be proven without using any form of the axiom of choice.
Moduli of Cauchy convergence are used by constructive mathematicians who do not wish to use any form of choice. Using a modulus of Cauchy convergence can simplify both definitions and theorems in constructive analysis. Regular Cauchy sequences have been used in constructive mathematics textbooks.
In a metric space.
Since the definition of a Cauchy sequence only involves metric concepts, it is straightforward to generalize it to any metric space "X".
To do so, the absolute difference formula_25 is replaced by the distance formula_26 (where "d" denotes a metric) between formula_27 and formula_28
Formally, given a metric space formula_29 a sequence of elements of formula_30
formula_5
is Cauchy, if for every positive real number formula_32 there is a positive integer formula_20 such that for all positive integers formula_7 the distance
formula_35
Roughly speaking, the terms of the sequence are getting closer and closer together in a way that suggests that the sequence ought to have a limit in "X".
Nonetheless, such a limit does not always exist within "X": the property of a space that every Cauchy sequence converges in the space is called "completeness", and is detailed below.
Completeness.
A metric space ("X", "d") in which every Cauchy sequence converges to an element of "X" is called complete.
Examples.
The real numbers are complete under the metric induced by the usual absolute value, and one of the standard constructions of the real numbers involves Cauchy sequences of rational numbers. In this construction, each equivalence class of Cauchy sequences of rational numbers with a certain tail behavior—that is, each class of sequences that get arbitrarily close to one another— is a real number.
A rather different type of example is afforded by a metric space "X" which has the discrete metric (where any two distinct points are at distance 1 from each other). Any Cauchy sequence of elements of "X" must be constant beyond some fixed point, and converges to the eventually repeating term.
Non-example: rational numbers.
The rational numbers formula_36 are not complete (for the usual distance):
There are sequences of rationals that converge (in formula_37) to irrational numbers; these are Cauchy sequences having no limit in formula_38 In fact, if a real number "x" is irrational, then the sequence ("x""n"), whose "n"-th term is the truncation to "n" decimal places of the decimal expansion of "x", gives a Cauchy sequence of rational numbers with irrational limit "x". Irrational numbers certainly exist in formula_39 for example, the square root of 2 and the number "e" are two well-known examples.
Non-example: open interval.
The open interval formula_46 in the set of real numbers with an ordinary distance in formula_37 is not a complete space: there is a sequence formula_48 in it, which is Cauchy (for an arbitrarily small distance bound formula_49 all terms formula_50 of formula_51 fit in the formula_52 interval), but which does not converge in formula_30; its 'limit', the number 0, does not belong to the space formula_54
Other properties.
These last two properties, together with the Bolzano–Weierstrass theorem, yield one standard proof of the completeness of the real numbers, closely related to both the Bolzano–Weierstrass theorem and the Heine–Borel theorem. Every Cauchy sequence of real numbers is bounded, hence by Bolzano–Weierstrass has a convergent subsequence, hence is itself convergent. This proof of the completeness of the real numbers implicitly makes use of the least upper bound axiom. The alternative approach, mentioned above, of the real numbers as the completion of the rational numbers, makes the completeness of the real numbers tautological.
One of the standard illustrations of the advantage of being able to work with Cauchy sequences and make use of completeness is provided by consideration of the summation of an infinite series of real numbers
(or, more generally, of elements of any complete normed linear space, or Banach space). Such a series
formula_62 is considered to be convergent if and only if the sequence of partial sums formula_63 is convergent, where formula_64 It is a routine matter to determine whether the sequence of partial sums is Cauchy or not, since for positive integers formula_65
formula_66
If formula_67 is a uniformly continuous map between the metric spaces "M" and "N" and ("x""n") is a Cauchy sequence in "M", then formula_68 is a Cauchy sequence in "N". If formula_69 and formula_70 are two Cauchy sequences in the rational, real or complex numbers, then the sum formula_71 and the product formula_72 are also Cauchy sequences.
Generalizations.
In topological vector spaces.
There is also a concept of Cauchy sequence for a topological vector space formula_30: Pick a local base formula_74 for formula_30 about 0; then (formula_76) is a Cauchy sequence if for each member formula_77 there is some number formula_20 such that whenever
formula_79 is an element of formula_80 If the topology of formula_30 is compatible with a translation-invariant metric formula_82 the two definitions agree.
In topological groups.
Since the topological vector space definition of Cauchy sequence requires only that there be a continuous "subtraction" operation, it can just as well be stated in the context of a topological group: A sequence formula_83 in a topological group formula_84 is a Cauchy sequence if for every open neighbourhood formula_85 of the identity in formula_84 there exists some number formula_20 such that whenever formula_88 it follows that formula_89 As above, it is sufficient to check this for the neighbourhoods in any local base of the identity in formula_90
As in the construction of the completion of a metric space, one can furthermore define the binary relation on Cauchy sequences in formula_84 that formula_83 and formula_93 are equivalent if for every open neighbourhood formula_85 of the identity in formula_84 there exists some number formula_20 such that whenever formula_88 it follows that formula_98 This relation is an equivalence relation: It is reflexive since the sequences are Cauchy sequences. It is symmetric since formula_99 which by continuity of the inverse is another open neighbourhood of the identity. It is transitive since formula_100 where formula_101 and formula_102 are open neighbourhoods of the identity such that formula_103; such pairs exist by the continuity of the group operation.
In groups.
There is also a concept of Cauchy sequence in a group formula_84:
Let formula_105 be a decreasing sequence of normal subgroups of formula_84 of finite index.
Then a sequence formula_69 in formula_84 is said to be Cauchy (with respect to formula_109) if and only if for any formula_110 there is formula_20 such that for all formula_112
Technically, this is the same thing as a topological group Cauchy sequence for a particular choice of topology on formula_113 namely that for which formula_109 is a local base.
The set formula_115 of such Cauchy sequences forms a group (for the componentwise product), and the set formula_116 of null sequences (sequences such that formula_117) is a normal subgroup of formula_118 The factor group formula_119 is called the completion of formula_84 with respect to formula_121
One can then show that this completion is isomorphic to the inverse limit of the sequence formula_122
An example of this construction familiar in number theory and algebraic geometry is the construction of the formula_123-adic completion of the integers with respect to a prime formula_124 In this case, formula_84 is the integers under addition, and formula_126 is the additive subgroup consisting of integer multiples of formula_127
If formula_109 is a cofinal sequence (that is, any normal subgroup of finite index contains some formula_126), then this completion is canonical in the sense that it is isomorphic to the inverse limit of formula_130 where formula_109 varies over normal subgroups of finite index. For further details, see Ch. I.10 in Lang's "Algebra".
In a hyperreal continuum.
A real sequence formula_132 has a natural hyperreal extension, defined for hypernatural values "H" of the index "n" in addition to the usual natural "n". The sequence is Cauchy if and only if for every infinite "H" and "K", the values formula_133 and formula_134 are infinitely close, or adequal, that is,
formula_135
where "st" is the standard part function.
Cauchy completion of categories.
A notion of Cauchy completion of a category has also been introduced. Applied to formula_36 (the category whose objects are rational numbers, and there is a morphism from "x" to "y" if and only if formula_137), this Cauchy completion yields formula_138 (again interpreted as a category using its natural ordering).
|
6088
|
3632083
|
https://en.wikipedia.org/wiki?curid=6088
|
Common Era
|
Common Era (CE) and Before the Common Era (BCE) are year notations for the Gregorian calendar (and its predecessor, the Julian calendar), the world's most widely used calendar era. Common Era and Before the Common Era are alternatives to the original Anno Domini (AD) and Before Christ (BC) notations used for the same calendar era. The two notation systems are numerically equivalent: "2025 CE" and "AD 2025" each describe the current year; "400 BCE" and "400 BC" are the same year.
The expression can be traced back to 1615, when it first appears in a book by Johannes Kepler as a Latin phrase, and to 1635 in English as "Vulgar Era". The term "Common Era" can be found in English as early as 1708, and became more widely used in the mid-19th century by Jewish religious scholars. Since the late 20th century, BCE and CE have become popular in academic and scientific publications on the grounds that BCE and CE are religiously neutral terms. They have been promoted as more sensitive to non-Christians by not referring to Jesus, the central figure of Christianity, especially via the religious terms "Christ" and "Lord" used by the other abbreviations. Nevertheless, its epoch remains the same as that used for the Anno Domini era.
History.
Origins.
Around the year 525, the Christian monk Dionysius Exiguus devised the principle of taking the moment that he believed to be the date of the incarnation of Jesus to be the point from which years are numbered (the epoch) of the Christian ecclesiastical calendar. Dionysius labeled the column of the table in which he introduced the new era with a Latin heading meaning "the years of our Lord Jesus Christ". He did this to replace the Era of the Martyrs system (then used for some Easter tables) because he did not wish to continue the memory of a tyrant who persecuted Christians.
This way of numbering years became more widespread in Europe, with its use by Bede in England in 731. Bede also introduced the practice of dating years before what he supposed to have been the year of birth of Jesus, without a year zero.
Vulgar Era.
The term "Common Era" is traced back in English to its appearance as "Vulgar Era" to distinguish years of the Anno Domini era, which was in popular use, from dates of the regnal year (the year of the reign of a sovereign) typically used in national law. (The word 'vulgar' originally meant 'of the ordinary people', with no derogatory associations.)
The first use of the Latin term may be in a 1615 book by Johannes Kepler. Kepler uses it again in a 1616 table of ephemerides, and again in 1617. An English edition of that book from 1635 may contain the earliest known use of "Vulgar Era" in its title page. A 1701 book edited by John Le Clerc includes the phrase "Before Christ according to the Vulgar Æra".
The Merriam-Webster Dictionary gives 1716 as the date of first use of the term "vulgar era" (which it defines as "Christian era").
The first published use of "Christian Era" may be a Latin phrase on the title page of a 1584 theology book. In 1649, a Latin form of the phrase appeared in the title of an English almanac. A 1652 ephemeris may be the first instance of the English use of "Christian Era".
The English phrase "Common Era" appears at least as early as 1708, and in a 1715 book on astronomy, it is used interchangeably with "Christian Era" and "Vulgar Era". A 1759 history book uses "common æra" in a generic sense to refer to "the common era of the Jews". The phrase "before the common era" may have first appeared in a 1770 work that also uses "common era" and "vulgar era" as synonyms in a translation of a book originally written in German. The 1797 edition of the Encyclopædia Britannica uses the terms "vulgar era" and "common era" synonymously.
In 1835, in his book "Living Oracles", Alexander Campbell wrote: "The vulgar Era, or Anno Domini; the fourth year of Jesus Christ, the first of which was but eight days". He refers to the "common era" as a synonym for "vulgar era": "the fact that our Lord was born on the 4th year before the vulgar era, called Anno Domini, thus making (for example) the 42d year from his birth to correspond with the 38th of the common era". The "Catholic Encyclopedia" (1909), in at least one article, reports all three terms (Christian, Vulgar, Common Era) being commonly understood by the early 20th century.
The phrase "common era", in lower case, also appeared in the 19th century in a "generic" sense, not necessarily to refer to the Christian Era, but to any system of dates in everyday use throughout a civilization. Thus, "the common era of the Jews", "the common era of the Mahometans", "common era of the world", or "the common era of the foundation of Rome". When it did refer to the Christian Era, it was sometimes qualified (e.g., "common era of the Incarnation", "common era of the Nativity", or "common era of the birth of Christ").
An adapted translation of "Common Era" into Latin was adopted in the 20th century by some followers of Aleister Crowley, and thus the abbreviation "e.v." or "EV" may sometimes be seen as a replacement for AD.
History of the use of the CE/BCE abbreviation.
Although Jews have the Hebrew calendar, they often use the Gregorian calendar without the AD prefix, as Judaism does not recognize Jesus as the Messiah. As early as 1825, the abbreviation VE (for Vulgar Era) was in use among Jews to denote years in the Western calendar. Common Era notation has also been in use for Hebrew lessons for more than a century. Jews have also used the term Current Era.
Contemporary usage.
Some academics in the fields of theology, education, archaeology and history have adopted CE and BCE notation despite some disagreement. A study conducted in 2014 found that the BCE/CE notation is not growing at the expense of BC and AD notation in the scholarly literature, and that both notations are used in a relatively stable fashion.
Australia.
In 2011, media reports suggested that the BC/AD notation in Australian school textbooks would be replaced by BCE/CE notation. The change drew opposition from some politicians and church leaders. Weeks after the story broke, the Australian Curriculum, Assessment and Reporting Authority denied the rumours and stated that the BC/AD notation would remain, with CE and BCE as an optional suggested learning activity.
Canada.
In 2013, the Canadian Museum of Civilization (now the Canadian Museum of History) in Gatineau (opposite Ottawa), which had previously switched to BCE/CE, decided to change back to BC/AD in material intended for the public while retaining BCE/CE in academic content.
Nepal.
The notation is in particularly common use in Nepal in order to disambiguate dates from the local (Indian or Hindu) calendar, Bikram or Vikram Sambat. Disambiguation is needed because the era of the Hindu calendar is quite close to the Common Era.
United Kingdom.
In 2002, an advisory panel for the religious education syllabus for England and Wales recommended introducing BCE/CE dates to schools, and by 2018 some local education authorities were using them.
In 2018, the National Trust said it would continue to use BC/AD as its house style. English Heritage explains its era policy thus: "It might seem strange to use a Christian calendar system when referring to British prehistory, but the BC/AD labels are widely used and understood." Some parts of the BBC use BCE/CE, but some presenters have said they will not. As of October 2019, the BBC News style guide has entries for AD and BC, but not for CE or BCE. The style guide for "The Guardian" says, under the entry for CE/BCE: "some people prefer CE (common era, current era, or Christian era) and BCE (before common era, etc.) to AD and BC, which, however, remain our style".
United States.
In the United States, the use of the BCE/CE notation in textbooks was reported in 2005 to be growing. Some publications have transitioned to using it exclusively. For example, the 2007 World Almanac was the first edition to switch to BCE/CE, ending a period of 138 years in which the traditional BC/AD dating notation was used. BCE/CE is used by the College Board in its history tests, and by the Norton Anthology of English Literature. Others have taken a different approach. The US-based History Channel uses BCE/CE notation in articles on non-Christian religious topics such as Jerusalem and Judaism. The 2006 style guide for the Episcopal Diocese "Maryland Church News" says that BCE and CE should be used.
In June 2006, in the United States, the Kentucky State School Board reversed its decision to use BCE and CE in the state's new Program of Studies, leaving education of students about these concepts a matter of local discretion.
Rationales.
Support.
The use of CE in Jewish scholarship was historically motivated by the desire to avoid the implicit "Our Lord" in the abbreviation "AD". Although other aspects of dating systems are based in Christian origins, AD is a direct reference to Jesus as Lord. Proponents of the Common Era notation assert that the use of BCE/CE shows sensitivity to those who use the same year numbering system as the one that originated with and is currently used by Christians, but who are not themselves Christian. Former United Nations Secretary-General Kofi Annan has made a similar argument.
Adena K. Berkowitz, in her application to argue before the United States Supreme Court, opted to use BCE and CE because, "Given the multicultural society that we live in, the traditional Jewish designations B.C.E. and C.E. cast a wider net of inclusion." In the World History Encyclopedia, Joshua J. Mark wrote "Non-Christian scholars, especially, embraced [CE and BCE] because they could now communicate more easily with the Christian community. Jewish, Islamic, Hindu and Buddhist scholars could retain their [own] calendar but refer to events using the Gregorian Calendar as BCE and CE without compromising their own beliefs about the divinity of Jesus of Nazareth." In "History Today", Michael Ostling wrote: "BC/AD Dating: In the year of whose Lord? The continuing use of AD and BC is not only factually wrong but also offensive to many who are not Christians."
Opposition.
Critics note that there is no difference in the epoch of the two systems, which was chosen to be close to the date of birth of Jesus. Since the year numbers are the same, BCE and CE dates should be as offensive to other religions as BC and AD. Roman Catholic priest and writer on interfaith issues Raimon Panikkar argued that the BCE/CE usage is the less inclusive option, since it still uses the Christian calendar's numbering and imposes it on other nations. In 1993, the English-language expert Kenneth G. Wilson described a slippery slope scenario in his style guide, speculating that, "if we do end by casting aside the AD/BC convention, almost certainly some will argue that we ought to cast aside as well the conventional numbering system [that is, the method of numbering years] itself, given its Christian basis."
Some Christians are offended by the removal of the reference to Jesus, including the Southern Baptist Convention.
Conventions in style guides.
The abbreviation BCE, just as with BC, always follows the year number. Unlike AD, which still often precedes the year number, CE always follows the year number (if context requires that it be written at all). Thus, the current year is written as 2025 in both notations (or, if further clarity is needed, as 2025 CE, or as AD 2025), and the year that Socrates died is represented as 399 BCE (the same year that is represented by 399 BC in the BC/AD notation). The abbreviations are sometimes written with small capital letters, or with periods (e.g., "B.C.E." or "C.E."). The US-based Society of Biblical Literature style guide for academic texts on religion prefers BCE/CE to BC/AD.
|
6091
|
39191556
|
https://en.wikipedia.org/wiki?curid=6091
|
Charles Robert Malden
|
Charles Robert Malden (9 August 1797 – 23 May 1855) was a nineteenth-century British naval officer, surveyor and educator. He is the discoverer of Malden Island in the central Pacific, which is named in his honour. He also founded Windlesham House School at Brighton, England.
Biography.
Malden was born in Putney, Surrey, son of Jonas Malden, a surgeon. He entered British naval service at the age of 11 on 22 June 1809. He served nine years as a volunteer 1st class, midshipman, and shipmate, including one year in the English Channel and Bay of Biscay (1809), four years at the Cape of Good Hope and in the East Indies (1809–14), two and a half years on the North American and West Indian stations (1814–16), and a year and a half in the Mediterranean (1817–18). He was present at the capture of Mauritius and Java, and at the battles of Baltimore and New Orleans.
He passed the examination in the elements of mathematics and the theory of navigation at the Royal Naval Academy on 2–4 September 1816, and became a 1st Lieutenant on 1 September 1818. In eight years of active service as an officer, he served two and a half years in a surveying ship in the Mediterranean (1818–21), one and a half years in a surveying sloop in the English Channel and off the coast of Ireland (1823–24), and one and a half years as Surveyor of the frigate HMS "Blonde" during a voyage (1824–26) to and from the Hawaiian Islands (then known as the "Sandwich Islands").
In Hawaii he surveyed harbours which, he noted, were "said not to exist by Captains Cook and Vancouver." On the return voyage he discovered and explored uninhabited Malden Island in the central Pacific on 30 July 1825. After his return he left active service but remained at half pay. He served for several years as hydrographer to King William IV.
He married Frances Cole, daughter of Rev. William Hodgson Cole, rector of West Clandon and Vicar of Wonersh, near Guildford, Surrey, on 8 April 1828. Malden became the father of seven sons and a daughter.
From 1830 to 1836 he took pupils for the Royal Navy at Ryde, Isle of Wight. He purchased the school of Henry Worsley at Newport, Isle of Wight, in December 1836, reopened it as a preparatory school on 20 February 1837, and moved it to Montpelier Road in Brighton in December 1837. He built the Windlesham House School at Brighton in 1844, and conducted the school until his death there in 1855. He was succeeded as headmaster by his son Henry Charles Malden.
|
6095
|
1301408383
|
https://en.wikipedia.org/wiki?curid=6095
|
Chechnya
|
Chechnya, officially the Chechen Republic, is a republic of Russia. It is situated in the North Caucasus of Eastern Europe, between the Caspian Sea and Black Sea. The republic forms a part of the North Caucasian Federal District, and shares land borders with Georgia to its south; with the Russian republics of Dagestan, Ingushetia, and North Ossetia–Alania to its east, north, and west; and with Stavropol Krai to its northwest.
After the dissolution of the Soviet Union in 1991, the Checheno-Ingush ASSR split into two parts: the Republic of Ingushetia and the Chechen Republic. The latter proclaimed the Chechen Republic of Ichkeria, which declared independence, while the former sided with Russia. Following the First Chechen War of 1994–1996 with Russia, Chechnya gained "de facto" independence as the Chechen Republic of Ichkeria, although "de jure" it remained a part of Russia. Russian federal control was restored in the Second Chechen War of 1999–2009, with Chechen politics being dominated by the former Ichkerian mufti Akhmad Kadyrov, and later his son Ramzan Kadyrov.
The republic has a population of over 1.5 million residents. It is home to the indigenous Chechens, part of the Nakh peoples, who adhere primarily to the Islamic faith. Grozny is the capital and largest city.
History.
Origin of Chechnya's population.
According to Leonti Mroveli, the 11th-century Georgian chronicler, the word "Caucasus" is derived from the Nakh ancestor Kavkas.
George Anchabadze of Ilia State University has likewise described the Nakh peoples as among the ancient native inhabitants of the Caucasus.
American linguist Johanna Nichols "has used language to connect the modern people of the Caucasus region to the ancient farmers of the Fertile Crescent" and her research suggests that "farmers of the region were proto-Nakh-Daghestanians". Nichols stated: "The Nakh–Dagestanian languages are the closest thing we have to a direct continuation of the cultural and linguistic community that gave rise to Western civilisation."
Prehistory.
Traces of human settlement dating back to 40,000 BC were found near Lake Kezenoyam. Cave paintings, artifacts, and other archaeological evidence indicate continuous habitation for some 8,000 years. People living in these settlements used tools, fire, and clothing made of animal skins.
The Caucasian Epipaleolithic and early Caucasian Neolithic era saw the introduction of agriculture, irrigation, and the domestication of animals in the region. Settlements near Ali-Yurt and Magas, discovered in modern times, revealed tools made out of stone: stone axes, polished stones, stone knives, stones with holes drilled in them, clay dishes, etc. Settlements made out of clay bricks were discovered in the plains. In the mountains there were settlements made from stone and surrounded by walls; some of them dated back to 8000 BC. This period also saw the appearance of the wheel (3000 BC), horseback riding, metal works (copper, gold, silver, iron), dishes, armor, daggers, knives and arrow tips in the region. The artifacts were found near Nasare-Cort, Muzhichi, Ja-E-Bortz (alternatively known as Surkha-khi), Abbey-Gove (also known as Nazran or Nasare).
Pre-imperial era.
In the 14th and 15th centuries, there was frequent warfare between the Chechens, Tamerlane and Tokhtamysh, culminating in the Battle of the Terek River (see Tokhtamysh–Timur war). The Chechen tribes built fortresses, castles, and defensive walls, protecting the mountains from the invaders (see Vainakh tower architecture). Some of the lowland tribes were occupied by the Mongols. However, during the mid-14th century a strong Chechen princedom called Simsim emerged under Khour II, a Chechen king who led Chechen politics and wars. He led an army of Chechens against the rogue warlord Mamai and defeated him in the Battle of Tatar-tup in 1362. The kingdom of Simsim was almost destroyed during the Timurid invasion of the Caucasus, when Khour II allied himself with the Golden Horde Khan Tokhtamysh in the Battle of the Terek River. Timur sought to punish the highlanders for their allegiance to Tokhtamysh and as a consequence invaded Simsim in 1395.
The 16th century saw the first Russian involvement in the Caucasus. In 1558, Temryuk of Kabarda sent his emissaries to Moscow requesting help from Ivan the Terrible against the Vainakh tribes. Ivan the Terrible married Temryuk's daughter Maria Temryukovna. An alliance was formed to gain the ground in the central Caucasus for the expanding Tsardom of Russia against Vainakh defenders.
In 1667 Mehk-Da Aldaman Gheza defended the borders of Chechnya from invasions of Kabardinians and Avars during the Battle of Khachara. The Chechens converted over the next few centuries to Sunni Islam, as Islam was associated with resistance to Russian encroachment.
Imperial rule.
Russian Emperor Peter the Great first sought to increase Russia's political influence in the Caucasus and the Caspian Sea at the expense of Safavid Persia when he launched the Russo-Persian War of 1722–1723. Russian forces succeeded in taking much of the Caucasian territories from Persia for several years.
As the Imperial Russian Army took control of the Caspian corridor and moved into Persian-ruled Dagestan, Peter's forces ran into mountain tribes. Peter sent a cavalry force to subdue them, but the Chechens routed them. In 1732, after Russia had already ceded back most of the Caucasus to Persia, now led by Nader Shah, following the Treaty of Resht, Russian troops clashed again with Chechens in a village called Chechen-aul along the Argun River. The Russians were defeated again and withdrew, but this battle is responsible for the apocryphal story of how the Nokhchiy came to be known as "Chechens" – the people ostensibly named after the place where the battle took place. However, the name "Chechen" had already been used as early as 1692.
In 1783, the eastern Georgian kingdom of Kartl-Kakheti, which had been under intermittent Persian rule since 1555 and was led by Erekle II, signed the Treaty of Georgievsk with Russia. According to this treaty, Kartl-Kakheti received protection from Russia, and Georgia abjured any dependence on Iran. To increase its influence in the Caucasus and secure communication with Kartli and other Christian-inhabited regions of Transcaucasia, which it considered useful in its wars against Persia and the Ottoman Empire, the Russian Empire began conquering the Northern Caucasus mountains. The Russian Empire used Christianity to justify its conquests. This allowed Islam to spread widely among the Chechens, as it positioned itself as the religion of liberation from the Tsardom of Russia, which viewed Nakh tribes as "bandits". A rebellion against Russian expansion was led by Mansur Ushurma, a Chechen sheikh belonging to the Naqshbandi Sufi order, with wavering military support from other North Caucasian tribes. Mansur hoped to establish an Islamic state based in the Transcaucasus under "Sharia" law. He was unable to fully achieve this because, in the course of the war, he was betrayed by the Ottoman Turks, handed over to the Russians, and executed in 1794.
After Persia was forced to cede the current territories of Dagestan, most of Azerbaijan, and Georgia to Russia following the Russo-Persian War of 1804–1813 and its resultant Treaty of Gulistan, Russia significantly widened its foothold in the Caucasus at Persia's expense. Another successful Caucasus war against Persia several years later, starting in 1826 and ending in 1828 with the Treaty of Turkmenchay, and a successful war against the Ottoman Empire in 1828–1829, enabled Russia to use a much larger portion of its army in subduing the natives of the North Caucasus.
The resistance of the Nakh tribes never ended and was a fertile ground for a new Muslim-Avar commander, Imam Shamil, who fought against the Russians from 1834 to 1859 (see Murid War). In 1859, Shamil was captured by the Russians at aul Gunib. Shamil left Baysangur of Benoa, a Chechen with one arm, one eye, and one leg, in charge of command at Gunib. Baysangur broke through the siege and continued to fight Russia for another two years until he was captured and killed by Russians. The Russian Tsar hoped that by sparing the life of Shamil, the resistance in the North Caucasus would stop, but it did not. Russia began to use a colonization tactic by destroying Nakh settlements and building Cossack defense lines in the lowlands. The Cossacks suffered defeat after defeat and were constantly attacked by mountaineers, who robbed them of food and weaponry.
The Russian Tsarist regime used a different approach at the end of the 1860s: it offered the Chechens and Ingush the option of leaving the Caucasus for the Ottoman Empire (see Muhajir (Caucasus)). It is estimated that about 80% of Chechens and Ingush left the Caucasus during the deportation. It weakened the resistance, which went from open warfare to insurgent warfare. Notable Chechen resistance fighters at the end of the 19th century included the Chechen abrek Zelimkhan Gushmazukaev and his comrade-in-arms, the Ingush abrek Sulom-Beck Sagopshinski. Together they built up small units which constantly harassed Russian military convoys, government mints, and the postal service, mainly in Ingushetia and Chechnya. The Ingush aul of Kek was completely burned when the Ingush refused to hand over Zelimkhan. Zelimkhan was killed at the beginning of the twentieth century. The war between the Nakh tribes and Russia resurfaced during the Russian Revolution, which saw the Nakh struggle against Anton Denikin and later against the Soviet Union.
On 21 December 1917, Ingushetia, Chechnya, and Dagestan declared independence from Russia and formed a single state: the United Mountain Dwellers of the North Caucasus, which was recognized by major world powers of the time. The capital of the new state was moved to Temir-Khan-Shura (today in Dagestan). Tapa Tchermoeff, a prominent Chechen statesman, was elected the first prime minister of the state. The second prime minister elected was Vassan-Girey Dzhabagiev, an Ingush statesman, who also was the author of the constitution of the republic in 1917, and in 1920 he was re-elected for the third term. In 1921 the Russians attacked and occupied the country and forcibly absorbed it into the Soviet state. The Caucasian war for independence restarted, and the government went into exile.
Soviet rule.
Under the Soviet Union, Chechnya and Ingushetia were combined to form the Checheno-Ingush Autonomous Soviet Socialist Republic. In the 1930s, Chechnya received many Ukrainians fleeing a famine; many of them settled in the Chechen-Ingush ASSR permanently and survived the famine there. Although over 50,000 Chechens and over 12,000 Ingush were fighting against Nazi Germany on the front line (including Heroes of the USSR Abukhadzhi Idrisov, Khanpasha Nuradilov, and Movlid Visaitov), and although Nazi German troops advanced as far as the Ossetian ASSR city of Ordzhonikidze and the Chechen-Ingush ASSR city of Malgobek after capturing half of the Caucasus in less than a month, Chechens and Ingush were falsely accused of being Nazi supporters, and both nations were deported wholesale during Operation Lentil to the Kazakh SSR (later Kazakhstan) in 1944, near the end of World War II, where over 60% of the Chechen and Ingush populations perished. American historian Norman Naimark has described these deportations in his work on ethnic cleansing.
The deportation was justified by materials prepared by NKVD officer Bogdan Kobulov accusing the Chechens and Ingush of a mass conspiracy to prepare a rebellion and provide assistance to the German forces. Many of the materials were later proven to be fabricated. Even distinguished Red Army officers who fought bravely against the Germans (e.g. the commander of the 255th Separate Chechen-Ingush Regiment, Movlid Visaitov, the first to make contact with American forces at the Elbe river) were deported. One theory holds that the real reason the Chechens and Ingush were deported was Russia's desire to attack Turkey, an anti-communist country, a plan the Chechens and Ingush could have impeded. In 2004, the European Parliament recognized the deportation of the Chechens and Ingush as an act of genocide.
The territory of the Chechen-Ingush Autonomous Soviet Socialist Republic was divided between Stavropol Krai (where Grozny Okrug was formed), the Dagestan ASSR, the North Ossetian ASSR, and the Georgian SSR.
The Chechens and Ingush were allowed to return to their land after 1956, during de-Stalinisation under Nikita Khrushchev, when the Chechen-Ingush ASSR was restored, though with both the boundaries and the ethnic composition of the territory significantly changed. There were many (predominantly Russian) migrants from other parts of the Soviet Union, who often settled in the abandoned family homes of Chechens and Ingush. The republic lost its Prigorodny District, which was transferred to the North Ossetian ASSR, but gained the predominantly Russian Naursky and Shelkovskoy districts, which are considered the homeland of the Terek Cossacks.
The Russification policies towards Chechens continued after 1956, with Russian language proficiency required in many aspects of life in order to provide Chechens better opportunities for advancement in the Soviet system. On 26 November 1990, the Supreme Council of the Chechen-Ingush ASSR adopted the "Declaration of State Sovereignty of the Chechen-Ingush Republic". This declaration was part of a planned reorganisation of the Soviet Union under a new union treaty, to be signed on 22 August 1991, which would have transformed the 15 republic states into more than 80. The 19–21 August 1991 Soviet coup d'état attempt led to the abandonment of this reorganisation.
With the impending dissolution of the Soviet Union in 1991, an independence movement, the Chechen National Congress, was formed, led by ex-Soviet Air Force general and new Chechen President Dzhokhar Dudayev. It campaigned for the recognition of Chechnya as a separate nation. This movement was opposed by Boris Yeltsin's Russian Federation, which argued that Chechnya had not been an independent entity within the Soviet Union—as the Baltic, Central Asian, and other Caucasian states such as Georgia had—but was part of the Russian Soviet Federative Socialist Republic and hence did not have a right under the Soviet constitution to secede. It also argued that other republics of Russia, such as Tatarstan, would consider seceding from the Russian Federation if Chechnya were granted that right. Finally, it argued that Chechnya was a major hub in the oil infrastructure of Russia and hence its secession would hurt the country's economy and energy access.
During the Chechen Revolution, the Soviet Chechen leader Doku Zavgayev was overthrown and Dzhokhar Dudayev seized power. On 1 November 1991, Dudaev's Chechnya issued a unilateral declaration of independence. In the ensuing decade, the territory was locked in an ongoing struggle between various factions, usually fighting unconventionally.
Chechen Wars and brief independence.
The First Chechen War, during which Russian forces attempted to regain control over Chechnya, took place from 1994 to 1996. Despite overwhelming numerical superiority in troops, weaponry, and air support, the Russian forces were unable to establish effective permanent control over the mountainous area due to numerous successful full-scale battles and insurgency raids. The Budyonnovsk hospital hostage crisis in 1995 shocked the Russian public. In April 1996, the first democratically elected president of Chechnya, Dzhokhar Dudayev, was killed by Russian forces using a booby trap bomb and a missile fired from a warplane after he was located by triangulating the position of a satellite phone he was using.
The widespread demoralisation of the Russian Army in the area and a successful offensive to retake Grozny by Chechen rebel forces led by Aslan Maskhadov prompted Russian President Boris Yeltsin to declare a ceasefire in 1996, and sign a peace treaty a year later that saw a withdrawal of Russian troops.
After the war, parliamentary and presidential elections took place in January 1997 in Chechnya and brought to power new President Aslan Maskhadov, chief of staff and prime minister in the Chechen coalition government, for a five-year term. Maskhadov sought to maintain Chechen sovereignty while pressing the Russian government to help rebuild the republic, whose formal economy and infrastructure were virtually destroyed. Russia continued to send money for the rehabilitation of the republic; it also provided pensions and funds for schools and hospitals. Nearly half a million people (40% of Chechnya's prewar population) had been internally displaced and lived in refugee camps or overcrowded villages. There was an economic downturn. Two Russian brigades were permanently stationed in Chechnya.
In light of the devastated economic structure, kidnapping emerged as the principal source of income countrywide, procuring over US$200 million during the three-year independence of the chaotic fledgling state, although victims were rarely killed. In 1998, 176 people were kidnapped, 90 of whom were released, according to official accounts. President Maskhadov started a major campaign against hostage-takers, and on 25 October 1998, Shadid Bargishev, Chechnya's top anti-kidnapping official, was killed in a remote-controlled car bombing. Bargishev's colleagues then insisted they would not be intimidated by the attack and would go ahead with their offensive. Political violence and religious extremism, blamed on Salafism and Wahhabism, was rife. In 1998, Grozny authorities declared a state of emergency. Tensions led to open clashes between the Chechen National Guard and Islamist militants, such as the July 1998 confrontation in Gudermes.
The War of Dagestan began on 7 August 1999, during which the Islamic International Peacekeeping Brigade (IIPB) began an unsuccessful incursion into the neighboring Russian republic of Dagestan in favor of the Shura of Dagestan, which sought independence from Russia. In September, a series of apartment bombings that killed around 300 people in several Russian cities, including Moscow, were blamed on Chechen separatists. Some journalists contested the official explanation, instead blaming the Russian secret services for blowing up the buildings to initiate a new military campaign against Chechnya. In response to the bombings, a prolonged air campaign of retaliatory strikes against the Ichkerian regime and a ground offensive that began in October 1999 marked the beginning of the Second Chechen War. Much better organized and planned than the First Chechen War, the Russian armed forces took control of most regions. The Russian forces used brutal force, killing 60 Chechen civilians during a mop-up operation in Aldy, Chechnya on 5 February 2000. After the re-capture of Grozny in February 2000, the Ichkerian regime fell apart.
Post-war reconstruction and insurgency.
Chechen separatists continued to fight Russian troops and conduct terror attacks after the occupation of Grozny. In October 2002, 40–50 Chechen rebels seized a Moscow theater and took about 900 civilians hostage. The crisis ended with 117 hostages and up to 50 rebels dead, mostly due to an unknown aerosol pumped into the building by Russian special forces to incapacitate the people inside.
In response to these attacks, Russia tightened its grip on Chechnya and expanded its anti-terrorist operations throughout the region. Russia installed a pro-Russian Chechen regime. In 2003, a referendum was held on a constitution that reintegrated Chechnya within Russia but provided limited autonomy. According to the Chechen government, the referendum passed with 95.5% of the votes and almost 80% turnout. "The Economist" was skeptical of the results, arguing that "few outside the Kremlin regard the referendum as fair".
In September 2004, separatist rebels occupied a school in the town of Beslan, North Ossetia, demanding recognition of the independence of Chechnya and a Russian withdrawal. 1,100 people (including 777 children) were taken hostage. The attack lasted three days, resulting in the deaths of over 331 people, including 186 children. After the 2004 school siege, Russian President Vladimir Putin announced sweeping security and political reforms, sealing borders in the Caucasus region and revealing plans to give the central government more power. He also vowed to take tougher action against domestic terrorism, including preemptive strikes against Chechen separatists. In 2005 and 2006, separatist leaders Aslan Maskhadov and Shamil Basayev were killed.
Since 2007, Chechnya has been governed by Ramzan Kadyrov. Kadyrov's rule has been characterized by high-level corruption, a poor human rights record, widespread use of torture, and a growing cult of personality. Allegations of anti-gay purges in Chechnya were initially reported on 1 April 2017.
In April 2009, Russia ended its counter-terrorism operations and pulled out the bulk of its army. The insurgency in the North Caucasus continued even after this date. The Caucasus Emirate fully adopted Salafi-jihadist tenets, adhering strictly to the Sunni Hanbali school and to a literal interpretation of the Quran and the Sunnah.
The Chechen government has been outspoken in its support for the 2022 Russian invasion of Ukraine, where a Chechen military force, the Kadyrovtsy, which is under Kadyrov's personal command, has played a leading role, notably in the Siege of Mariupol. Meanwhile, a substantial number of Chechen separatists have allied themselves to the Ukrainian cause and are fighting a mutual Russian enemy in the Donbas.
In March 2025, Chechnya blocked the Telegram app due to concerns that it could be used by "enemies".
Geography.
Situated in the eastern part of the North Caucasus in Eastern Europe, Chechnya is surrounded on nearly all sides by Russian federal territory. In the west, it borders North Ossetia and Ingushetia; in the north, Stavropol Krai; in the east, Dagestan; and in the south, Georgia. Its capital is Grozny. Chechnya is well known for being mountainous, but it is in fact split between the flatter areas north of the Terek River and the highlands south of it.
Climate.
Despite a relatively small territory, Chechnya is characterized by a variety of climatic conditions.
Administrative divisions.
The Chechen Republic is divided into 15 districts and three cities of republican significance.
Demographics.
According to the 2021 Census, the population of the republic is 1,510,824, up from 1,268,989 in the 2010 Census. As of the 2021 Census, Chechens at 1,456,792 make up 96.4% of the republic's population. Other groups include Russians (18,225, or 1.2%), Kumyks (12,184, or 0.8%) and a host of other small groups, each accounting for less than 0.5% of the total population. The birth rate was 25.41 in 2004 (25.7 in Achkhoi-Martan, 19.8 in Grozny, 17.5 in Kurchaloi, 28.3 in Urus-Martan, and 11.1 in Vedeno).
The languages used in the Republic are Chechen and Russian. Chechen belongs to the Vaynakh or North-central Caucasian language family, which also includes Ingush and Batsbi. Some scholars place it in a wider North Caucasian language family.
Life expectancy.
Despite its difficult past, Chechnya has a high life expectancy, one of the highest in Russia. The pattern of life expectancy is unusual, however, and according to numerous statistics, Chechnya stands out from the overall picture. In 2020, Chechnya had the deepest fall in life expectancy, but in 2021 it had the biggest rise. Chechnya also has the largest excess of rural life expectancy over urban life expectancy.
Religion.
Islam.
Sunni Islam is the predominant religion in Chechnya, practiced by 95% of those polled in Grozny in 2010. Most of the population is Sunni and follows either the Shafi'i or the Hanafi schools of Islamic jurisprudence. The Shafi'i school of jurisprudence has a long tradition among the Chechens, and thus it remains the most practiced. Many Chechens are also Sufis, of either the Qadiri or Naqshbandi orders.
Following the collapse of the Soviet Union, there has been an Islamic revival in Chechnya; in 2011 it was estimated that there were 465 mosques, including the Akhmad Kadyrov Mosque in Grozny, which accommodates 10,000 worshippers, as well as 31 madrasas, including an Islamic university named after Kunta-haji, the Kurchaloy Islamic Institute named after Akhmad Kadyrov, and the Center of Islamic Medicine in Grozny, which is the largest such institution in Europe. The supreme Islamic administrative territorial organisation in Chechnya is the Spiritual Administration of the Muslims of the Chechen Republic, or the Muftiate of the Chechen Republic.
Christianity.
From the 11th to 13th centuries (i.e. before Mongol invasions of Durdzuketia), there was a mission of Georgian Orthodox missionaries to the Nakh peoples. Their success was limited, though a couple of highland teips did convert to Christianity (conversion was largely by teips). However, during the Mongol invasions of Durdzuketia, these Christianized teips gradually reverted to paganism, perhaps due to the loss of Transcaucasian contacts, as the Georgians fought the Mongols and briefly fell under their dominion.
The once-strong Russian minority in Chechnya, mostly Terek Cossacks and estimated at approximately 25,000 in 2012, is predominantly Russian Orthodox, although at the time only one church existed in Grozny. In August 2011, Archbishop Zosima of Vladikavkaz and Makhachkala performed the first mass baptism ceremony in the history of the Chechen Republic in the Terek River of Naursky District, in which 35 citizens of Naursky and Shelkovsky districts were converted to Russian Orthodoxy. As of 2020, there are eight Eastern Orthodox churches in Chechnya, the largest of which is the temple of the Archangel Michael in Grozny.
Politics.
Since 1990, the Chechen Republic has had many legal, military, and civil conflicts involving separatist movements and pro-Russian authorities. Chechnya has enjoyed a period of relative stability under the Russian-appointed government, although there is still some separatist movement activity. Its regional constitution entered into effect on 2 April 2003, after an all-Chechen referendum was held on 23 March 2003. Some Chechens were controlled by regional teips, or clans, despite the existence of pro- and anti-Russian political structures.
In the 2024 Russian presidential election, which critics called rigged and fraudulent, Russian President Vladimir Putin won 98.99% of the vote in Chechnya.
Regional government.
The former separatist religious leader (mufti) Akhmad Kadyrov was elected president with 83% of the vote in an internationally monitored election on 5 October 2003. Incidents of ballot stuffing and voter intimidation by Russian soldiers, and the exclusion of separatist parties from the polls, were subsequently reported by Organization for Security and Co-operation in Europe (OSCE) monitors. On 9 May 2004, Kadyrov was assassinated in the Grozny football stadium by an explosive device that had been planted beneath a VIP stage and was detonated during a parade; Sergey Abramov was appointed acting prime minister after the incident. However, since 2005 Ramzan Kadyrov (son of Akhmad Kadyrov) has been the caretaker prime minister, and in 2007 he was appointed as the new president. Many allege he is the wealthiest and most powerful man in the republic, with control over a large private militia (the Kadyrovites). The militia, which began as his father's security force, has been accused of killings and kidnappings by human rights organisations such as Human Rights Watch.
Separatist government.
Ichkeria was a member of the Unrepresented Nations and Peoples Organisation between 1991 and 2010. Former president of Georgia Zviad Gamsakhurdia, deposed in a military coup of 1991 and a participant in the Georgian Civil War, recognized the independence of the Chechen Republic of Ichkeria in 1993. Diplomatic relations with Ichkeria were also established by the partially recognised Islamic Emirate of Afghanistan under the Taliban government on 16 January 2000. This recognition ceased with the fall of the Taliban in 2001. However, despite Taliban recognition, there were no friendly relations between the Taliban and Ichkeria; Maskhadov rejected their recognition, stating that the Taliban were illegitimate. Ichkeria also received vocal support from the Baltic countries, a group of Ukrainian nationalists, and Poland; Estonia once voted to recognize it, but the act was never followed through due to pressure applied by both Russia and the EU.
The president of this government was Aslan Maskhadov, and the foreign minister was Ilyas Akhmadov, who was the spokesman for the president. Maskhadov had been elected for four years in an internationally monitored election in 1997, which took place after signing a peace agreement with Russia. In 2001, he issued a decree prolonging his office for one additional year; he was unable to participate in the 2003 presidential election since separatist parties were barred by the Russian government, and Maskhadov faced accusations of terrorist offenses in Russia. Maskhadov left Grozny and moved to the separatist-controlled areas of the south at the onset of the Second Chechen War. Maskhadov was unable to influence a number of warlords who retain effective control over Chechen territory, and his power was diminished as a result. Russian forces killed Maskhadov on 8 March 2005, and the assassination was widely criticized since it left no legitimate Chechen separatist leader with whom to conduct peace talks. Akhmed Zakayev, deputy prime minister and a foreign minister under Maskhadov, was appointed shortly after the 1997 election and is currently living under asylum in England. He and others chose Abdul Khalim Saidullayev, a relatively unknown Islamic judge who was previously the host of an Islamic program on Chechen television, to replace Maskhadov following his death. On 17 June 2006, it was reported that Russian special forces killed Abdul Khalim Saidullayev in a raid in the Chechen town of Argun. On 10 July 2006, Shamil Basayev, a leader of the Chechen rebel movement, was killed in a truck explosion during an arms deal.
Doku Umarov became the successor of Saidullayev. On 31 October 2007, Umarov abolished the Chechen Republic of Ichkeria and its presidency and in its place proclaimed the Caucasus Emirate, with himself as its Emir. This change of status has been rejected by many Chechen politicians and military leaders, who continue to support the existence of the republic.
During the 2022 Russian invasion of Ukraine, the Ukrainian parliament voted to recognize the "Chechen Republic of Ichkeria as territory temporarily occupied by the Russian Federation".
Human rights.
The Internal Displacement Monitoring Center reports that after hundreds of thousands of ethnic Russians and Chechens fled their homes following inter-ethnic and separatist conflicts in Chechnya in 1994 and 1999, more than 150,000 people still remain displaced in Russia today.
Human rights organizations criticized the conduct of the 2005 parliamentary elections as unfairly influenced by the central Russian government and military. In 2006, Human Rights Watch reported that pro-Russian Chechen forces under the command of Ramzan Kadyrov, as well as Russian federal police personnel, used torture to get information about separatist forces. "If you are detained in Chechnya, you face a real and immediate risk of torture. And there is little chance that your torturer will be held accountable", said Holly Cartner, Director of the Europe and Central Asia division of Human Rights Watch.
In 2009, the U.S. government-financed American organization Freedom House included Chechnya in its "Worst of the Worst" list of the most repressive societies in the world, together with Burma, North Korea, Tibet, and others. Memorial considers Chechnya under Kadyrov to be a totalitarian regime.
On February 1, 2009, "The New York Times" released extensive evidence to support allegations of consistent torture and executions under the Kadyrov government. The accusations were sparked by the assassination in Austria of a former Chechen rebel who had gained access to Kadyrov's inner circle, 27-year-old Umar Israilov.
On July 1, 2009, Amnesty International released a detailed report covering the human rights violations committed by the Russian Federation against Chechen citizens. Among the report's most prominent findings was that those abused had no means of redress against assaults ranging from kidnapping to torture, while those responsible were never held accountable. This led to the conclusion that Chechnya was being ruled without law and was being driven into further devastating destabilization.
On 10 March 2011, Human Rights Watch reported that since Chechenization, the government has pushed for an enforced Islamic dress code. President Ramzan Kadyrov is quoted as saying: "I have the right to criticize my wife. She doesn't [have the right to criticize me]. With us [in Chechen society], a wife is a housewife. A woman should know her place. A woman should give her love to us [men]... She would be [man's] property. And the man is the owner. Here, if a woman does not behave properly, her husband, father, and brother are responsible. According to our tradition, if a woman fools around, her family members kill her... That's how it happens, a brother kills his sister or a husband kills his wife... As a president, I cannot allow for them to kill. So, let women not wear shorts...". He has also openly defended honor killings on several occasions.
On 9 July 2017, the Russian newspaper "Novaya Gazeta" reported that a number of people were extrajudicially executed on the night of 26 January 2017. It published a list of 27 names of people known to be dead, but stressed that the list was "not all [of those killed]"; the newspaper asserted that 50 people may have been executed. Some of the dead were gay, but not all. The killings appeared to have been precipitated by the death of a policeman; according to the author of the report, Elena Milashina, the victims were executed for engaging in terrorism.
In December 2021, up to 50 family members of critics of the Kadyrov government were abducted in a wave of mass kidnappings beginning on 22 December. In a case-study published during the same year, Freedom House reported that Kadyrov also conducts a total transnational repression campaign against Chechen exiles outside of Russia, including assassinations of critics and digital intimidation.
LGBT rights.
Although homosexuality is officially legal in Chechnya per Russian law, it is de facto illegal. Chechen authorities have reportedly arrested, imprisoned and killed persons based on their perceived sexual orientation.
In 2017, it was reported by "Novaya Gazeta" and human rights groups that Chechen authorities had set up concentration camps, one of which is in Argun, where gay men are interrogated and subjected to physical violence. On 27 June 2018, the Parliamentary Assembly of the Council of Europe noted "cases of abduction, arbitrary detention, and torture ... with the direct involvement of Chechen law enforcement officials and on the orders of top-level Chechen authorities" and expressed dismay "at the statements of Chechen and Russian public officials denying the existence of LGBTI people in the Chechen Republic". Kadyrov's spokesman Alvi Karimov told Interfax that gay people "simply do not exist in the republic" and made an approving reference to honor killings by family members "if there were such people in Chechnya". In a 2021 Council of Europe report into anti-LGBTI hate-crimes, rapporteur Foura ben Chikha described the "state-sponsored attacks carried out against LGBTI people in Chechnya in 2017" as "the single most egregious example of violence against LGBTI people in Europe that has occurred in decades".
On 11 January 2019, it was reported that another "gay purge" had begun in the country in December 2018, with several men and women being detained. The Russian LGBT Network believes that around 40 people were detained and two killed.
Economy.
During the First Chechen War, the Chechen economy fell apart. In 1994, the separatists planned to introduce a new currency, but the change did not occur due to the re-taking of Chechnya by Russian troops in the Second Chechen War.
The economic situation in Chechnya has improved considerably since 2000. According to the "New York Times", major efforts to rebuild Grozny have been made, and improvements in the political situation have led some officials to consider setting up a tourism industry, though there are claims that construction workers are being irregularly paid and that poor people have been displaced.
Chechnya's unemployment was 67% in 2006 and fell to 21.5% in 2014.
Total revenue of the budget of Chechnya for 2017 was 59.2 billion rubles. Of these, 48.5 billion rubles were grants from the federal budget of the Russian Federation.
In the late 1970s, Chechnya produced up to 20 million tons of oil annually. Production declined sharply to approximately 3 million tons in the late 1980s, and to below 2 million tons before 1994. The first (1994–1996) and second (beginning in 1999) Russian invasions of Chechnya inflicted material damage on the oil-sector infrastructure; oil production decreased to 750,000 tons in 2001, rose again to 2 million tons in 2006, and by 2012 stood at 1 million tons.
Culture.
The culture of Chechnya is based on the native traditions of the Chechen people. Chechen mythology, along with art, has helped shape the culture for over 1,000 years.
From April 2024, all music must have a tempo between 80 and 116 beats per minute, to comply with Chechen traditions. Borrowing musical culture from other peoples is not allowed.
|
6097
|
37031437
|
https://en.wikipedia.org/wiki?curid=6097
|
Canonization
|
Canonization is the declaration of a deceased person as an officially recognized saint, specifically, the official act of a Christian communion declaring a person worthy of public veneration and entering their name in the canon catalogue of saints, or authorized list of that communion's recognized saints.
Catholic Church.
Canonization is a papal declaration that the Catholic faithful may venerate a particular deceased member of the church. Popes began making such decrees in the tenth century. Up to that point, the local bishops governed the veneration of holy men and women within their own dioceses; and there may have been, for any particular saint, no formal decree at all. In subsequent centuries, the procedures became increasingly regularized and the Popes began restricting to themselves the right to declare someone a Catholic saint. In contemporary usage, the term is understood to refer to the act by which any Christian church declares that a person who has died is a saint, upon which declaration the person is included in the list of recognized saints, called the "canon".
Biblical roots.
In the Roman Martyrology, the following entry is given for the Penitent Thief: "At Jerusalem, the commemoration of the good Thief, who confessed Christ on the cross, and deserved to hear from Him these words: 'This day thou shalt be with Me in paradise.'"
Historical development.
The Roman Canon, the historical Eucharistic Prayer or Anaphora of the Roman Rite, contains only the names of apostles and martyrs, along with that of the Blessed Virgin Mary and, since 1962, that of Saint Joseph, her spouse.
By the fourth century, however, "confessors"—people who had confessed their faith not by dying but by word and life—began to be venerated publicly. Examples of such people are Saint Hilarion and Saint Ephrem the Syrian in the East, and Saint Martin of Tours and Saint Hilary of Poitiers in the West. Their names were inserted in the diptychs, the lists of saints explicitly venerated in the liturgy, and their tombs were honoured in like manner as those of the martyrs. Since the witness of their lives was not as unequivocal as that of the martyrs, they were venerated publicly only with the approval by the local bishop. This process is often referred to as "local canonization".
This approval was required even for veneration of a reputed martyr. In his history of the Donatist heresy, Saint Optatus recounts that at Carthage a Catholic matron, named Lucilla, incurred the censures of the Church for having kissed the relics of a reputed martyr whose claims to martyrdom had not been juridically proved. And Saint Cyprian (died 258) recommended that the utmost diligence be observed in investigating the claims of those who were said to have died for the faith. All the circumstances accompanying the martyrdom were to be inquired into; the faith of those who suffered, and the motives that animated them were to be rigorously examined, in order to prevent the recognition of undeserving persons. Evidence was sought from the court records of the trials or from people who had been present at the trials.
Augustine of Hippo (died 430) tells of the procedure which was followed in his day for the recognition of a martyr. The bishop of the diocese in which the martyrdom took place set up a canonical process for conducting the inquiry with the utmost severity. The acts of the process were sent either to the metropolitan or primate, who carefully examined the cause, and, after consultation with the suffragan bishops, declared whether the deceased was worthy of the name of "martyr" and public veneration.
Though not "canonizations" in the narrow sense, acts of formal recognition, such as the erection of an altar over the saint's tomb or transferring the saint's relics to a church, were preceded by formal inquiries into the sanctity of the person's life and the miracles attributed to that person's intercession.
Such acts of recognition of a saint were authoritative, in the strict sense, only for the diocese or ecclesiastical province for which they were issued, but with the spread of the fame of a saint, were often accepted elsewhere also.
Nature.
In the Catholic Church, both in the Latin and the constituent Eastern churches, the act of canonization is reserved to the Apostolic See and occurs at the conclusion of a long process requiring extensive proof that the candidate for canonization lived and died in such an exemplary and holy way that they are worthy to be recognized as a saint. The Church's official recognition of sanctity implies that the person is now in Heaven and that they may be publicly invoked and mentioned officially in the liturgy of the Church, including in the Litany of the Saints.
In the Catholic Church, canonization is a decree that allows universal veneration of the saint. For permission to venerate merely locally, only beatification is needed.
Procedure prior to reservation to the Apostolic See.
For several centuries the bishops, or in some places only the primates and patriarchs, could grant martyrs and confessors public ecclesiastical honor; such honor, however, was always decreed only for the local territory of which the grantors had jurisdiction. Only acceptance of the "cultus" by the Pope made the "cultus" universal, because he alone can rule the universal Catholic Church. Abuses, however, crept into this discipline, due as well to indiscretions of popular fervor as to the negligence of some bishops in inquiring into the lives of those whom they permitted to be honoured as saints.
In the Medieval West, the Apostolic See was asked to intervene in the question of canonizations so as to ensure more authoritative decisions. The canonization of Saint Udalric, Bishop of Augsburg by Pope John XV in 993 was the first undoubted example of papal canonization of a saint from outside of Rome being declared worthy of liturgical veneration for the entire church.
Thereafter, recourse to the judgment of the Pope occurred more frequently. Toward the end of the 11th century, the Popes began asserting their exclusive right to authorize the veneration of a saint against the older rights of bishops to do so for their dioceses and regions. Popes therefore decreed that the virtues and miracles of persons proposed for public veneration should be examined in councils, more specifically in general councils. Pope Urban II, Pope Calixtus II, and Pope Eugene III conformed to this discipline.
Exclusive reservation to the Apostolic See.
Hugh de Boves, Archbishop of Rouen, canonized Walter of Pontoise, or St. Gaultier, in 1153, the final saint in Western Europe to be canonized by an authority other than the Pope: "The last case of canonization by a metropolitan is said to have been that of St. Gaultier, or Gaucher, [A]bbot of Pontoise, by the Archbishop of Rouen. A decree of Pope Alexander III [in] 1170 gave the prerogative to the [P]ope thenceforth, so far as the Western Church was concerned." In a decretal of 1173, Pope Alexander III reprimanded some bishops for permitting veneration of a man who was merely killed while intoxicated, prohibited veneration of the man, and most significantly decreed that "you shall not therefore presume to honor him in the future; for, even if miracles were worked through him, it is not lawful for you to venerate him as a saint without the authority of the Catholic Church." Theologians disagree as to the full import of the decretal of Pope Alexander III: either a new law was instituted, in which case the Pope then for the first time reserved the right of beatification to himself, or an existing law was confirmed.
However, the procedure initiated by the decretal of Pope Alexander III was confirmed by a bull of Pope Innocent III issued on the occasion of the canonization of Cunigunde of Luxembourg in 1200. The bull of Pope Innocent III resulted in increasingly elaborate inquiries to the Apostolic See concerning canonizations. Because the decretal of Pope Alexander III did not end all controversy and some bishops did not obey it in so far as it regarded beatification, the right of which they had certainly possessed hitherto, Pope Urban VIII issued the Apostolic letter "Caelestis Hierusalem cives" of 5 July 1634 that exclusively reserved to the Apostolic See both its immemorial right of canonization and that of beatification. He further regulated both of these acts by issuing his "Decreta servanda in beatificatione et canonizatione Sanctorum" on 12 March 1642.
Procedure from 1734–1738 to 1983.
In his "De Servorum Dei beatificatione et de Beatorum canonizatione" of five volumes the eminent canonist Prospero Lambertini (1675–1758), who later became Pope Benedict XIV, elaborated on the procedural norms of Pope Urban VIII's Apostolic letter "Caelestis Hierusalem cives" of 1634 and "Decreta servanda in beatificatione et canonizatione Sanctorum" of 1642, and on the conventional practice of the time. His work published from 1734 to 1738 governed the proceedings until 1917. The article "Beatification and canonization process in 1914" describes the procedures followed until the promulgation of the "Codex" of 1917. The substance of "De Servorum Dei beatifιcatione et de Beatorum canonizatione" was incorporated into the "Codex Iuris Canonici" ("Code of Canon Law") of 1917, which governed until the promulgation of the revised "Codex Iuris Canonici" in 1983 by Pope John Paul II. Prior to promulgation of the revised "Codex" in 1983, Pope Paul VI initiated a simplification of the procedures.
Since 1983.
The Apostolic constitution "Divinus Perfectionis Magister" of Pope John Paul II of 25 January 1983 and the norms issued by the Congregation for the Causes of Saints on 7 February 1983 to implement the constitution in dioceses, continued the simplification of the process initiated by Pope Paul VI. Contrary to popular belief, the reforms did not eliminate the office of the Promoter of the Faith (Latin: "Promotor Fidei"), popularly known as the Devil's advocate, whose office is to question the material presented in favor of canonization. The reforms were intended to reduce the adversarial nature of the process. In November 2012 Pope Benedict XVI appointed Monsignor Carmello Pellegrino as Promoter of the Faith.
Candidates for canonization undergo the following process:
Canonization is a statement of the Church that the person certainly enjoys the beatific vision of Heaven. The title of "Saint" (Latin: "Sanctus" or "Sancta") is then proper, reflecting that the saint is a refulgence of the holiness ("sanctitas") of God himself, which alone comes from God's gift. The saint is assigned a feast day which may be celebrated anywhere in the universal Church, although it is not necessarily added to the General Roman Calendar or local calendars as an "obligatory" feast; parish churches may be erected in their honor; and the faithful may freely celebrate and honor the saint.
Although recognition of sainthood by the Pope does not directly concern a fact of Divine revelation, nonetheless it must be "definitively held" by the faithful as "infallible" pursuant to, at the least, the Universal Magisterium of the Church, because it is a truth related to revelation by historical necessity.
Equipollent canonization.
Popes have several times permitted, without carrying out the ordinary judicial process of canonization described above, the veneration as a saint throughout the universal Church of a person long venerated as such locally, that is, an extension of the person's "cultus". This act of a Pope is denominated "equipollent" or "equivalent canonization" and "confirmation of "cultus"". In such cases, there is no need to have a miracle attributed to the saint to allow their canonization. According to the rules Pope Benedict XIV ("regnat" 17 August 1740 – 3 May 1758) instituted, there are three conditions for an equipollent canonization: (1) existence of an ancient "cultus" of the person, (2) a general and constant attestation to the virtues or martyrdom of the person by credible historians, and (3) uninterrupted fame of the person as a worker of miracles.
Protestant denominations.
The majority of Protestant denominations do not formally recognize saints because the Bible uses the term in a way that suggests all Christians are saints. However, some denominations do, as shown below.
Anglican Communion.
The Church of England, the Mother Church of the Anglican Communion, canonized Charles I as a saint in the Convocations of Canterbury and York of 1660.
United Methodist Church.
The General Conference of the United Methodist Church has formally declared individuals "martyrs", including Dietrich Bonhoeffer (in 2008) and Martin Luther King Jr. (in 2012).
Eastern Orthodox Church.
Various terms are used for canonization by the autocephalous Eastern Orthodox Churches: канонизация ("canonization") or прославление ("glorification", in the Russian Orthodox Church), კანონიზაცია ("kanonizats’ia", Georgian Orthodox Church), канонизација (Serbian Orthodox Church), "canonizare" (Romanian Orthodox Church), and Канонизация (Bulgarian Orthodox Church). Additional terms are used for canonization by other autocephalous Eastern Orthodox Churches: "agiokatataxi/agiokatataxis", "ranking among saints" (Ecumenical Patriarchate of Constantinople, Church of Cyprus, Church of Greece), "kanonizim" (Albanian Orthodox Church), "kanonizacja" (Polish Orthodox Church), and "kanonizace/kanonizácia" (Czech and Slovak Orthodox Church).
The Orthodox Church in America, an Eastern Orthodox Church partly recognized as autocephalous, uses the term "glorification" for the official recognition of a person as a saint.
Oriental Orthodox Church.
Within the Armenian Apostolic Church, part of Oriental Orthodoxy, there had been discussions since the 1980s about canonizing the victims of the Armenian genocide. On 23 April 2015, all of the victims of the genocide were canonized.
|
6099
|
28481209
|
https://en.wikipedia.org/wiki?curid=6099
|
Carboxylic acid
|
In organic chemistry, a carboxylic acid is an organic acid that contains a carboxyl group (−COOH) attached to an R-group. The general formula of a carboxylic acid is often written as R−COOH or R−CO2H, sometimes as R−C(=O)−OH, with R referring to an organyl group (e.g., alkyl, alkenyl, aryl), or hydrogen, or other groups. Carboxylic acids occur widely. Important examples include the amino acids and fatty acids. Deprotonation of a carboxylic acid gives a carboxylate anion.
Examples and nomenclature.
Carboxylic acids are commonly identified by their trivial names. They often have the suffix "-ic acid". IUPAC-recommended names also exist; in this system, carboxylic acids have an "-oic acid" suffix. For example, butyric acid () is butanoic acid by IUPAC guidelines. For nomenclature of complex molecules containing a carboxylic acid, the carboxyl can be considered position one of the parent chain even if there are other substituents, such as 3-chloropropanoic acid. Alternately, it can be named as a "carboxy" or "carboxylic acid" substituent on another parent structure, such as 2-carboxyfuran.
The carboxylate anion (R−COO− or R−CO2−) of a carboxylic acid is usually named with the suffix "-ate", in keeping with the general pattern of "-ic acid" and "-ate" for a conjugate acid and its conjugate base, respectively. For example, the conjugate base of acetic acid is acetate.
Carbonic acid, which occurs in bicarbonate buffer systems in nature, is not generally classed as one of the carboxylic acids, despite it having a moiety that looks like a COOH group.
Physical properties.
Solubility.
Carboxylic acids are polar. Because they are both hydrogen-bond acceptors (the carbonyl group, C=O) and hydrogen-bond donors (the hydroxyl group, −OH), they also participate in hydrogen bonding. Together, the hydroxyl and carbonyl group form the functional group carboxyl. Carboxylic acids usually exist as dimers in nonpolar media due to their tendency to "self-associate". Smaller carboxylic acids (1 to 5 carbons) are soluble in water, whereas bigger carboxylic acids have limited solubility due to the increasing hydrophobic nature of the alkyl chain. These longer chain acids tend to be soluble in less-polar solvents such as ethers and alcohols. Aqueous sodium hydroxide and carboxylic acids, even hydrophobic ones, react to yield water-soluble sodium salts. For example, enanthic acid has a low solubility in water (0.2 g/L), but its sodium salt is very soluble in water.
Boiling points.
Carboxylic acids tend to have higher boiling points than water, because of their greater surface areas and their tendency to form stabilized dimers through hydrogen bonds. For boiling to occur, either the dimer bonds must be broken or the entire dimer arrangement must be vaporized, increasing the enthalpy of vaporization requirements significantly.
Acidity.
Carboxylic acids are Brønsted–Lowry acids because they are proton (H+) donors. They are the most common type of organic acid.
Carboxylic acids are typically weak acids, meaning that they only partially dissociate into cations and anions in neutral aqueous solution. For example, at room temperature, in a 1-molar solution of acetic acid, only about 0.4% of the acid molecules are dissociated (roughly 0.004 moles out of 1 mol). Electron-withdrawing substituents such as trifluoromethyl (−CF3) give stronger acids (the p"K"a of acetic acid is 4.76 whereas trifluoroacetic acid, with a trifluoromethyl substituent, has a p"K"a of 0.23). Electron-donating substituents give weaker acids (the p"K"a of formic acid is 3.75 whereas acetic acid, with a methyl substituent, has a p"K"a of 4.76).
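As a rough check on the figure above, the degree of dissociation can be estimated from the p"K"a; the following is a minimal worked example for a 1-molar acetic acid solution, assuming the usual dilute weak-acid approximation:
\[ K_a = 10^{-4.76} \approx 1.7 \times 10^{-5}, \qquad [\mathrm{H^+}] \approx \sqrt{K_a \, c} = \sqrt{(1.7 \times 10^{-5})(1)} \approx 4.2 \times 10^{-3}\ \mathrm{M}, \]
which corresponds to roughly 0.4% of the acetic acid molecules being dissociated.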
Deprotonation of carboxylic acids gives carboxylate anions; these are resonance stabilized, because the negative charge is delocalized over the two oxygen atoms, increasing the stability of the anion. Each of the carbon–oxygen bonds in the carboxylate anion has a partial double-bond character. The carbonyl carbon's partial positive charge is also weakened by the −1/2 negative charges on the 2 oxygen atoms.
Odour.
Carboxylic acids often have strong sour odours. Esters of carboxylic acids tend to have fruity, pleasant odours, and many are used in perfume.
Characterization.
Carboxylic acids are readily identified as such by infrared spectroscopy. They exhibit a sharp band associated with vibration of the C=O carbonyl bond ("ν"C=O) between 1680 and 1725 cm−1. A characteristic "ν"O–H band appears as a broad peak in the 2500 to 3000 cm−1 region. By 1H NMR spectrometry, the hydroxyl hydrogen appears in the 10–13 ppm region, although it is often either broadened or not observed owing to exchange with traces of water.
Occurrence and applications.
Many carboxylic acids are produced industrially on a large scale. They are also frequently found in nature. Esters of fatty acids are the main components of lipids, and polyamides of aminocarboxylic acids are the main components of proteins.
Carboxylic acids are used in the production of polymers, pharmaceuticals, solvents, and food additives. Industrially important carboxylic acids include acetic acid (component of vinegar, precursor to solvents and coatings), acrylic and methacrylic acids (precursors to polymers, adhesives), adipic acid (polymers), citric acid (a flavor and preservative in food and beverages), ethylenediaminetetraacetic acid (chelating agent), fatty acids (coatings), maleic acid (polymers), propionic acid (food preservative), terephthalic acid (polymers). Important carboxylate salts are soaps.
Synthesis.
Industrial routes.
In general, industrial routes to carboxylic acids differ from those used on a smaller scale because they require specialized equipment.
Laboratory methods.
Preparative methods for small scale reactions for research or for production of fine chemicals often employ expensive consumable reagents.
Less-common reactions.
Many reactions produce carboxylic acids but are used only in specific cases or are mainly of academic interest.
Reactions.
Acid-base reactions.
Carboxylic acids react with bases to form carboxylate salts, in which the hydrogen of the hydroxyl (–OH) group is replaced with a metal cation. For example, acetic acid found in vinegar reacts with sodium bicarbonate (baking soda) to form sodium acetate, carbon dioxide, and water:
CH3COOH + NaHCO3 → CH3COONa + CO2 + H2O
Conversion to esters, amides, anhydrides.
Widely practiced reactions convert carboxylic acids into esters, amides, carboxylate salts, acid chlorides, and alcohols.
Their conversion to esters is widely used, e.g. in the production of polyesters. Likewise, carboxylic acids are converted into amides, but this conversion typically does not occur by direct reaction of the carboxylic acid and the amine. Instead esters are typical precursors to amides. The conversion of amino acids into peptides is a significant biochemical process that requires ATP.
Converting a carboxylic acid to an amide is possible, but not straightforward. Instead of acting as a nucleophile, an amine will react as a base in the presence of a carboxylic acid to give the ammonium carboxylate salt. Heating the salt to above 100 °C will drive off water and lead to the formation of the amide. This method of synthesizing amides is industrially important, and has laboratory applications as well. In the presence of a strong acid catalyst, carboxylic acids can condense to form acid anhydrides. The condensation produces water, however, which can hydrolyze the anhydride back to the starting carboxylic acids. Thus, the formation of the anhydride via condensation is an equilibrium process.
Under acid-catalyzed conditions, carboxylic acids will react with alcohols to form esters via the Fischer esterification reaction, which is also an equilibrium process. Alternatively, diazomethane can be used to convert an acid to an ester. While esterification reactions with diazomethane often give quantitative yields, diazomethane is only useful for forming methyl esters.
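In schematic form, with R and R′ standing for the organyl groups of the acid and the alcohol respectively, the Fischer esterification equilibrium can be written as:
RCOOH + R′OH ⇌ RCOOR′ + H2O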
Reduction.
Like esters, most carboxylic acids can be reduced to alcohols by hydrogenation, or using hydride transferring agents such as lithium aluminium hydride. Strong alkyl transferring agents, such as organolithium compounds but not Grignard reagents, will reduce carboxylic acids to ketones along with transfer of the alkyl group.
The Vilsmeier reagent ("N","N"-Dimethyl(chloromethylene)ammonium chloride) is a highly chemoselective agent for carboxylic acid reduction. It selectively activates the carboxylic acid to give the carboxymethyleneammonium salt, which can be reduced by a mild reductant like lithium tris("t"-butoxy)aluminum hydride to afford an aldehyde in a one-pot procedure. This procedure is known to tolerate reactive carbonyl functionalities such as ketone as well as moderately reactive ester, olefin, nitrile, and halide moieties.
Conversion to acyl halides.
The hydroxyl group on carboxylic acids may be replaced with a chlorine atom using thionyl chloride to give acyl chlorides. In nature, carboxylic acids are converted to thioesters. Thionyl chloride can be used to convert carboxylic acids to their corresponding acyl chlorides. First, carboxylic acid 1 attacks thionyl chloride, and chloride ion leaves. The resulting oxonium ion 2 is activated towards nucleophilic attack and has a good leaving group, setting it apart from a normal carboxylic acid. In the next step, 2 is attacked by chloride ion to give tetrahedral intermediate 3, a chlorosulfite. The tetrahedral intermediate collapses with the loss of sulfur dioxide and chloride ion, giving protonated acyl chloride 4. Chloride ion can remove the proton on the carbonyl group, giving the acyl chloride 5 with a loss of HCl.
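The overall stoichiometry of this sequence, with R standing for the organyl group, can be summarized as:
RCOOH + SOCl2 → RCOCl + SO2 + HCl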
Phosphorus(III) chloride (PCl3) and phosphorus(V) chloride (PCl5) will also convert carboxylic acids to acid chlorides, by a similar mechanism. One equivalent of PCl3 can react with three equivalents of acid, producing one equivalent of H3PO3, or phosphorous acid, in addition to the desired acid chloride. PCl5 reacts with carboxylic acids in a 1:1 ratio, and produces phosphorus(V) oxychloride (POCl3) and hydrogen chloride (HCl) as byproducts.
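In schematic form, the two routes can be written as:
3 RCOOH + PCl3 → 3 RCOCl + H3PO3
RCOOH + PCl5 → RCOCl + POCl3 + HCl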
Reactions with carbanion equivalents.
Carboxylic acids react with Grignard reagents and organolithiums to form ketones. The first equivalent of nucleophile acts as a base and deprotonates the acid. A second equivalent will attack the carbonyl group to create a geminal alkoxide dianion, which is protonated upon workup to give the hydrate of a ketone. Because most ketone hydrates are unstable relative to their corresponding ketones, the equilibrium between the two is shifted heavily in favor of the ketone. For example, the equilibrium constant for the formation of acetone hydrate from acetone is only 0.002. The carboxyl group is among the most acidic functional groups commonly found in organic compounds.
Carboxyl radical.
The carboxyl radical, •COOH, only exists briefly. The acid dissociation constant of •COOH has been measured using electron paramagnetic resonance spectroscopy. The carboxyl radical tends to dimerise to form oxalic acid.
|
6100
|
45618159
|
https://en.wikipedia.org/wiki?curid=6100
|
Chernobyl
|
Chernobyl, officially called Chornobyl, is a partially abandoned city in Vyshhorod Raion, Kyiv Oblast, Ukraine. It is located within the Chernobyl Exclusion Zone, to the north of Kyiv and to the southwest of Gomel in neighbouring Belarus. Prior to being evacuated in the aftermath of the Chernobyl disaster in 1986, it was home to approximately 14,000 residents—considerably less than adjacent Pripyat, which was completely abandoned following the incident. Since then, although living anywhere within the Chernobyl Exclusion Zone is technically illegal, Ukrainian authorities have tolerated those who have taken up living in some of the city's less irradiated areas; Chernobyl's 2020 population estimate was 150 people.
First mentioned as a ducal hunting lodge in Kievan Rus' in 1193, the city has changed hands multiple times over the course of its history. In the 16th century, Jews began moving into Chernobyl, and at the end of the 18th century, it had become a major centre of Hasidic Judaism under the Twersky dynasty. During the early 20th century, pogroms and associated emigration caused the local Jewish community to dwindle significantly. By World War II, all remaining Jews in the city were murdered by Nazi Germany as part of the Holocaust.
In 1972, Chernobyl rose to prominence in the Soviet Union when it was selected as the site of the Chernobyl Nuclear Power Plant; Pripyat was constructed nearby to house the facility's workers. The plant, located to the north of Chernobyl proper, opened in 1977.
Workers on watch and administrative personnel of the Chernobyl Exclusion Zone are stationed in the city, which has two general stores and a hotel. Though the city's atmosphere remained calm after the disaster was contained, the beginning of the Russian invasion of Ukraine in February 2022 sparked international concern about the stability of Ukrainian nuclear facilities, especially pursuant to reports that Russia's occupation of the Chernobyl Exclusion Zone until April 2022 had caused a spike in radiation levels.
Etymology.
The city's name is the same as one of the Ukrainian names for "Artemisia vulgaris", mugwort or common wormwood: (or more commonly , 'common artemisia'). The name is inherited from or , a compound of + , the parts related to and , 'stalk', so named in distinction to the lighter-stemmed wormwood "A. absinthium".
The name in languages used nearby is:
The name in languages formerly used in the area is:
In English, the Russian-derived spelling "Chernobyl" has been commonly used, but some style guides recommend the spelling "Chornobyl", or the use of romanized Ukrainian names for Ukrainian places generally.
History.
The Polish Geographical Dictionary of the Kingdom of Poland of 1880–1902 states that the time the city was founded is not known.
Identity of Ptolemy's "Azagarium".
Some older geographical dictionaries and descriptions of modern Eastern Europe mention "Czernobol" (Chernobyl) with reference to Ptolemy's world map (2nd century AD). Czernobol is identified as "oppidium Sarmatiae" (Lat., "a city in Sarmatia"), by the 1605 "Lexicon geographicum" of Filippo Ferrari and the 1677 "Lexicon Universale" of Johann Jakob Hofmann. According to the "Dictionary of Ancient Geography" of Alexander Macbean (London, 1773), Azagarium is "a town of Sarmatia Europaea, on the Borysthenes" (Dnieper), 36° East longitude and 50°40' latitude. The city is "now supposed to be "Czernobol", a town of Poland, in Red Russia [Red Ruthenia], in the Palatinate of Kiow [Kiev Voivodeship], not far from the Borysthenes."
Whether Azagarium is indeed Czernobol is debatable. The question of Azagarium's correct location was raised in 1842 by Habsburg-Slovak historian, Pavel Jozef Šafárik, who published a book titled "Slavic Ancient History" ("Sławiańskie starożytności"), where he claimed Azagarium to be the hill of Zaguryna, which he found on an old Russian map "Bolzoj czertez" (Big drawing) near the city of Pereiaslav, now in central Ukraine.
In 2019, Ukrainian architect Boris Yerofalov-Pylypchak published a book, "Roman Kyiv or Castrum Azagarium at Kyiv-Podil".
Kievan Rus' and post-medieval era (880–1793).
The archaeological excavations that were conducted in 2005–2008 found a cultural layer from the 10–12th centuries AD, which predates the first documentary mention of Chernobyl.
Around the 12th century, Chernobyl was part of the land of Kievan Rus′. The first known mention of the settlement as Chernobyl is from an 1193 charter, which describes it as a hunting lodge of Knyaz Rurik Rostislavich. In 1362 it was a crown village of the Grand Duchy of Lithuania. Around that time the town had its own castle, which was destroyed on at least two occasions, in 1473 and 1482. The Chernobyl castle was rebuilt in the first quarter of the 16th century at a hard-to-reach site near the settlement. With the revival of the castle, Chernobyl became a county seat. In 1552 it had 196 buildings and 1,372 residents, of whom over 1,160 were considered town dwellers. Various craft trades, such as blacksmithing and cooperage, developed in the town, and bog iron ore excavated near Chernobyl was smelted into iron. The village was granted to Filon Kmita, a captain of the royal cavalry, as a fiefdom in 1566. Following the Union of Lublin, the province where Chernobyl is located was transferred to the Crown of the Kingdom of Poland in 1569. Under the Polish Crown, Chernobyl became the seat of an eldership (starostwo). During that period Chernobyl was inhabited by Ukrainian peasants, some Polish people, and a relatively large number of Jews, who had been brought to Chernobyl by Filon Kmita during the Polish campaign of colonization. The first mention of a Jewish community in Chernobyl dates from the 17th century. In 1600 the first Roman Catholic church was built in the town, and the local population was persecuted for holding Eastern Orthodox rite services. The traditionally Eastern Orthodox Ukrainian peasantry around the town were forcibly converted, by Poland, to the Ruthenian Uniate Church. In 1626, during the Counter-Reformation, a Dominican church and monastery were founded by Lukasz Sapieha. A group of Old Catholics opposed the decrees of the Council of Trent. The residents of Chernobyl actively supported the Khmelnytsky Uprising (1648–1657).
With the signing of the Truce of Andrusovo in 1667, Chernobyl was confirmed as a possession of the Sapieha family. Sometime in the 18th century, the place passed to the Chodkiewicz family. In the mid-18th century the area around Chernobyl was engulfed in a number of peasant riots, which caused Prince Riepnin to write from Warsaw to Major General Krechetnikov in 1768, requesting that hussars be sent from Kharkiv to deal with the uprising near Chernobyl. The 8th Lithuanian Infantry Regiment was stationed in the town in 1791. By the end of the 18th century, the town had 2,865 residents and 642 buildings.
Imperial Russian era (1793–1917).
Following the Second Partition of Poland, in 1793 Chernobyl was annexed by the Russian Empire and became part of Radomyshl county ("uezd") as a supernumerary town ("zashtatny gorod"). Many of the Uniate Church converts returned to Eastern Orthodoxy.
In 1832, following the failed Polish November Uprising, the Dominican monastery was sequestrated. The church of the Old Catholics was disbanded in 1852.
Until the end of the 19th century, Chernobyl was a privately owned city that belonged to the Chodkiewicz family. In 1896 they sold the city to the state, but until 1910 they owned a castle and a house in the city.
Hasidic Jewish dynasty of Chernobyl.
In the second half of the 18th century, Chernobyl became a major centre of Hasidic Judaism. The Chernobyl Hasidic dynasty had been founded by Rabbi Menachem Nachum Twersky. The Jewish population suffered greatly from pogroms in October 1905 and in March–April 1919; many Jews were killed or robbed at the instigation of the Russian nationalist Black Hundreds. When the Twersky Dynasty left Chernobyl in 1920, it ceased to exist as a center of Hasidism.
Chernobyl had a population of 10,800 in 1898, including 7,200 Jews. At the beginning of March 1918, during World War I, Chernobyl was occupied by German forces in accordance with the Treaty of Brest-Litovsk.
Soviet era (1920–1991).
Ukrainians and Bolsheviks fought over the city in the ensuing Civil War. In the Polish–Soviet War of 1919–20, Chernobyl was taken first by the Polish Army and then by the cavalry of the Red Army. From 1921 onwards, it was officially incorporated into the Ukrainian SSR.
Holodomor.
Between 1929 and 1933, Chernobyl suffered from killings during Stalin's collectivization campaign. It was also affected by the famine that resulted from Stalin's policies. The Polish and German community of Chernobyl was deported to Kazakhstan in 1936, during the Frontier Clearances.
World War II and the Holocaust.
During World War II, Chernobyl was occupied by the German Army from 25 August 1941 to 17 November 1943. When the Germans arrived, only 400 Jews remained in Chernobyl; they were murdered during the Holocaust.
Chernobyl Nuclear Power Plant.
In 1972, construction began on the Duga-1 radio receiver, part of the larger Duga over-the-horizon radar array, west-northwest of Chernobyl. It was the origin of the Russian Woodpecker and was designed as part of an anti-ballistic missile early-warning radar network.
On 15 August 1972, construction began on the Chernobyl Nuclear Power Plant (officially the Vladimir Ilyich Lenin Nuclear Power Plant), northwest of Chernobyl. The plant was built alongside Pripyat, an "atomograd" city founded on 4 February 1970 that was intended to serve the nuclear power plant. The decision to build the power plant was adopted by the Central Committee of the Communist Party of the Soviet Union and the Council of Ministers of the Soviet Union on the recommendation of the State Planning Committee that the Ukrainian SSR be its location. It was the first nuclear power plant to be built in Ukraine.
26 April 1986: Chernobyl disaster.
After the nuclear disaster at the Chernobyl Nuclear Power Plant, the worst nuclear disaster in history, the city of Chernobyl was evacuated on 5 May 1986. Along with the residents of the nearby city of Pripyat, built as a home for the plant's workers, the population was relocated to the newly built city of Slavutych. While Pripyat remains completely abandoned with no remaining inhabitants, Chernobyl has since hosted a small population.
Independent Ukrainian era (1991–present).
With the dissolution of the Soviet Union in 1991, Chernobyl remained part of Ukraine within the Chernobyl Exclusion Zone which Ukraine inherited from the Soviet Union.
2022 Russian occupation of Chernobyl.
During the Russian invasion of Ukraine, Russian forces captured the city on 24 February. Following the capture of Chernobyl, the Russian army used the city as a staging point for attacks on Kyiv. Ukrainian officials reported that the radiation levels in the city had started to rise due to recent military activity causing radioactive dust to ascend into the air. Hundreds of Russian soldiers were suffering from radiation poisoning after digging trenches in a contaminated area, and one died. On 31 March it was reported that Russian forces had left the exclusion zone. Ukrainian authorities reasserted control over the area on 2 April.
Geography.
Chernobyl is located north of Kyiv and southwest of the Belarusian city of Gomel.
Climate.
Chernobyl has a humid continental climate (Dfb) with very warm, wet summers with cool nights and long, cold, and snowy winters.
Aftermath of the Chernobyl disaster and evacuation.
On 26 April 1986, one of the reactors at the Chernobyl Nuclear Power Plant exploded after a scheduled test on the reactor was carried out improperly by plant operators. The resulting loss of control was due to design flaws of the RBMK reactor, which made it unstable when operated at low power, and prone to thermal runaway where increases in temperature increase reactor power output.
Chernobyl city was evacuated nine days after the disaster. The level of contamination with caesium-137 was around 555 kBq/m2 (surface ground deposition in 1986).
Later analyses concluded that, even with very conservative estimates, relocation of the city (or of any area below 1500 kBq/m2) could not be justified on the grounds of radiological health.
This however does not account for the uncertainty in the first few days of the accident about further depositions and weather patterns.
Moreover, an earlier short-term evacuation could have averted more significant doses from short-lived isotope radiation (specifically iodine-131, which has a half-life of eight days).
The long-term health effects of the Chernobyl disaster are a subject of some controversy.
In 1998, average caesium-137 doses from the accident (estimated at 1–2 mSv per year) did not exceed those from other sources of exposure. Current effective caesium-137 dose rates as of 2019 are 200–250 nSv/h, or roughly 1.7–2.2 mSv per year,
which is comparable to the worldwide average background radiation from natural sources.
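For reference, the conversion from the quoted hourly rate to an annual dose is simple arithmetic (taking 8,760 hours per year):
\[ 200\text{–}250\ \mathrm{nSv/h} \times 8760\ \mathrm{h/yr} \approx 1.8\text{–}2.2 \times 10^{6}\ \mathrm{nSv/yr} \approx 1.7\text{–}2.2\ \mathrm{mSv/yr}. \]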
The base of operations for the administration and monitoring of the Chernobyl Exclusion Zone was moved from Pripyat to Chernobyl. Chernobyl currently contains offices for the State Agency of Ukraine on the Exclusion Zone Management and accommodations for visitors. Apartment blocks have been repurposed as accommodations for employees of the State Agency. The length of time that workers may spend within the Chernobyl Exclusion Zone is restricted by regulations that have been implemented to limit radiation exposure. Today, visits are allowed to Chernobyl but limited by strict rules.
In 2003, the United Nations Development Programme launched a project, called the Chernobyl Recovery and Development Programme (CRDP), for the recovery of the affected areas. The main goal of the CRDP's activities is supporting the efforts of the Government of Ukraine to mitigate the long-term social, economic, and ecological consequences of the Chernobyl disaster.
The city has become overgrown and many types of animals live there. According to census information collected over an extended period of time, it is estimated that more mammals live there now than before the disaster.
Notably, Mikhail Gorbachev, the final leader of the Soviet Union, stated in respect to the Chernobyl disaster that, "More than anything else, (Chernobyl) opened the possibility of much greater freedom of expression, to the point that the (Soviet) system as we knew it could no longer continue."
|
6102
|
49194911
|
https://en.wikipedia.org/wiki?curid=6102
|
Cyan
|
Cyan () is the color between blue and green on the visible spectrum of light. It is evoked by light with a predominant wavelength between 500 and 520 nm, between the wavelengths of green and blue.
In the subtractive color system, or CMYK color model, which can be overlaid to produce all colors in paint and color printing, cyan is one of the primary colors, along with magenta and yellow. In the additive color system, or RGB color model, used to create all the colors on a computer or television display, cyan is made by mixing equal amounts of green and blue light. Cyan is the complement of red; it can be made by the removal of red from white. Mixing red light and cyan light at the right intensity will make white light. It is commonly seen on a bright, sunny day in the sky.
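As an informal illustration of the additive relationship described above, the following sketch (the helper functions and names are hypothetical, not taken from any particular graphics library) shows cyan as white minus red, and cyan plus red giving white:
# 8-bit additive RGB: cyan = green + blue = white - red.
WHITE = (255, 255, 255)
RED = (255, 0, 0)
def subtract(c1, c2):
    # Channel-wise subtraction, clamped to the valid 0-255 range.
    return tuple(max(0, a - b) for a, b in zip(c1, c2))
def add(c1, c2):
    # Channel-wise addition, clamped to the valid 0-255 range.
    return tuple(min(255, a + b) for a, b in zip(c1, c2))
cyan = subtract(WHITE, RED)
print(cyan)            # (0, 255, 255): removing red from white leaves cyan
print(add(cyan, RED))  # (255, 255, 255): cyan light plus red light gives white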
Shades and variations.
Different shades of cyan can vary in terms of hue, chroma (also known as saturation, intensity, or colorfulness), or lightness (or value, tone, or brightness), or any combination of these characteristics. Differences in value can also be referred to as tints and shades, with a tint being a cyan mixed with white, and a shade being mixed with black.
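Numerically, a tint or shade can be pictured as linear interpolation of an RGB cyan toward white or black; the sketch below is purely illustrative, and the 0.5 mixing weights are arbitrary assumptions:
def mix(color, other, t):
    # Move each channel of `color` toward `other` by fraction t (0 = unchanged, 1 = fully `other`).
    return tuple(round(a + (b - a) * t) for a, b in zip(color, other))
CYAN, WHITE, BLACK = (0, 255, 255), (255, 255, 255), (0, 0, 0)
tint = mix(CYAN, WHITE, 0.5)   # (128, 255, 255): a lighter cyan
shade = mix(CYAN, BLACK, 0.5)  # (0, 128, 128): a darker cyan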
Color nomenclature is subjective. Many shades of cyan with a bluish hue are called blue. Similarly, those with a greenish hue are referred to as green. A cyan with a dark shade is commonly known as teal. A teal blue shade leans toward the blue end of the spectrum. Variations of teal with a greener tint are commonly referred to as teal green.
Turquoise, reminiscent of the stone with the same name, is a shade in the green spectrum of cyan hues. Celeste is a lightly tinted cyan that represents the color of a clear sky. Other colors in the cyan color range are electric blue, aquamarine, and others described as blue-green.
History.
Cyan boasts a rich and diverse history, holding cultural significance for millennia. In ancient civilizations, turquoise, valued for its aesthetic appeal, served as a highly regarded precious gem. Turquoise comes in a variety of shades from green to blue, but cyan hues are particularly prevalent. Approximately 3,700 years ago, an intricately crafted dragon-shaped treasure made from over 2,000 pieces of turquoise and jade was created. This artifact is widely recognized as the oldest Chinese dragon totem by many Chinese scholars.
Turquoise jewelry also held significant importance among the Aztecs, who often featured this precious gemstone in vibrant frescoes for both symbolic and decorative purposes. The Aztecs revered turquoise, associating its color with the heavens and sacredness. Additionally, ancient Egyptians interpreted cyan hues as representing faith and truth, while Tibetans viewed them as a symbol of infinity.
After earlier uses in various contexts, cyan hues found increased use in diverse cultures due to their appealing aesthetic qualities in religious structures and art pieces. For example, the prominent dome of the Goharshad Mosque in Iran, built in 1418, showcases this trend. Additionally, Jacopo da Pontormo's use of a teal shade for Mary's robe in the 1528 painting "Carmignano Visitation" demonstrates the allure of these hues. During the 16th century, speakers of the English language began using the term "turquoise" to describe the cyan color of objects that resembled the color of the stone.
In the 1870s, the French sculptor Frédéric Bartholdi began the construction of what would later become the Statue of Liberty. Over time, exposure to the elements caused the copper structure to develop its distinctive patina, now recognized as iconic cyan. Following this, there was a significant advancement in the use of cyan during the late 19th and early 20th centuries.
Impressionist artists, such as Claude Monet in his renowned "Water Lilies", effectively incorporated cyan hues into their works. Deviating from traditional interpretations of local color under neutral lighting conditions, the focus of artists was on accurately depicting perceived color and the influence of light on altering object hues. Specifically, daylight plays a significant role in shifting the perceived color of objects toward cyan hues. In 1917, the color term "teal" was introduced to describe deeper shades of cyan.
In the late 19th century, while "traditional" nomenclature of red, yellow, and blue persisted, the printing industry initiated a shift towards utilizing magenta and cyan inks for red and blue hues, respectively. This transition aimed to establish a more versatile color gamut achievable with only three primary colors. In 1949, a document in the printing industry stated: “The four-color set comprises Yellow, Red (magenta), Blue (cyan), Black”. This practice of labeling magenta, yellow, and cyan as red, yellow, and blue persisted until 1961. As the hues evolved, the printing industry maintained the use of the "traditional" terms red, yellow, and blue. Consequently, pinpointing the exact date of origin for CMYK, in which cyan serves as a primary color, proves "challenging".
In August 1991, the HP Deskwriter 500C became the first Deskwriter to offer color printing as an option. It used interchangeable black and color (cyan, magenta, and yellow) inkjet print cartridges. With the inclusion of cyan ink in printers, the term "cyan" has become widely recognized in both home and office settings. According to TUP/Technology User Profile 2020, approximately 70% of online American adults regularly use a home printer.
Etymology and terminology.
Its name is derived from the Ancient Greek word "kyanos" (κύανος), meaning "dark blue enamel, Lapis lazuli". It was formerly known as "cyan blue" or cyan-blue, and its first recorded use as a color name in English was in 1879. Further origins of the color name can be traced back to a dye produced from the cornflower ("Centaurea cyanus").
In most languages, 'cyan' is not a basic color term and it phenomenologically appears as a greenish vibrant hue of blue to most English speakers. Other English terms for this "borderline" hue region include "blue green", "aqua", "turquoise", "teal", and "grue".
On the web and in printing.
Web colors cyan and aqua.
The web color cyan shown at right is a secondary color in the RGB color model, which uses combinations of red, green and blue light to create all the colors on computer and television displays. In X11 colors, this color is called both cyan and aqua. In the HTML color list, this same color is called "aqua", a name also used due to the color's common association with water, such as the appearance of the water at a tropical beach.
The web colors are more vivid than the cyan used in the CMYK color system, and the web colors cannot be accurately reproduced on a printed page. To reproduce the web color cyan in inks, it is necessary to add some white ink to the printer's cyan below, so when it is reproduced in printing, it is not a primary subtractive color.
Process cyan.
Cyan is also one of the common inks used in four-color printing, along with magenta, yellow, and black; this set of colors is referred to as CMYK. In printing, the cyan ink is sometimes known as printer's cyan, process cyan, or process blue.
While both the additive secondary and the subtractive primary are called "cyan", they can be substantially different from one another. Cyan printing ink is typically more saturated than the RGB secondary cyan, depending on what RGB color space and ink are considered. That is, process cyan is usually outside the RGB gamut, and there is no fixed conversion from CMYK primaries to RGB. Different formulations are used for printer's ink, so there can be variations in the printed color that is pure cyan ink. This is because real-world subtractive (unlike additive) color mixing does not consistently produce the same result when mixing apparently identical colors, since the specific frequencies filtered out to produce that color affect how it interacts with other colors. Phthalocyanine blue is one such commonly used pigment. A typical formulation of "process cyan" is shown in the color box on the right.
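Since, as noted above, there is no fixed conversion between process-ink CMYK and RGB, any formula is only a nominal approximation; the sketch below uses the common naive mapping (an assumption for illustration, not a color-managed conversion, which would require device ICC profiles):
def cmyk_to_rgb_naive(c, m, y, k):
    # Nominal approximation; real printed color depends on inks, paper and press conditions.
    r = round(255 * (1 - c) * (1 - k))
    g = round(255 * (1 - m) * (1 - k))
    b = round(255 * (1 - y) * (1 - k))
    return (r, g, b)
print(cmyk_to_rgb_naive(1.0, 0.0, 0.0, 0.0))  # (0, 255, 255): pure process cyan maps, nominally, to the RGB secondary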
|
6105
|
7903804
|
https://en.wikipedia.org/wiki?curid=6105
|
Conventional insulin therapy
|
Conventional insulin therapy is a therapeutic regimen for treatment of diabetes mellitus which contrasts with the newer intensive insulin therapy.
This older method (prior to the development of home blood glucose monitoring) is still in use in a proportion of cases.
Characteristics.
Conventional insulin therapy is characterized by:
Effects.
The downside of this method is that it is difficult to achieve glycemic control as good as that achieved with intensive insulin therapy. The advantage is that, for diabetics with a regular lifestyle, the regimen is less intrusive than intensive therapy.
|
6109
|
47404959
|
https://en.wikipedia.org/wiki?curid=6109
|
Cream
|
Cream is a dairy product composed of the higher-fat layer skimmed from the top of milk before homogenization. In un-homogenized milk, the fat, which is less dense, eventually rises to the top. In the industrial production of cream, this process is accelerated by using centrifuges called "separators". In many countries, it is sold in several grades depending on the total butterfat content. It can be dried to a powder for shipment to distant markets, and contains high levels of saturated fat.
Cream skimmed from milk may be called "sweet cream" to distinguish it from cream skimmed from whey, a by-product of cheese-making. Whey cream has a lower fat content and tastes more salty, tangy, and "cheesy". In many countries, partially fermented cream is also sold as sour cream, crème fraîche, and so on. Both forms have many culinary uses in both sweet and savoury dishes.
Cream produced by cattle (particularly Jersey cattle) grazing on natural pasture often contains some fat-soluble carotenoid pigments derived from the plants they eat; traces of these intensely colored pigments concentrated during separation give cream a slightly yellow hue, hence the name of the yellow-tinged off-white color cream. Carotenoids are also the origin of butter's yellow color. Cream from goat's milk, water buffalo milk, or from cows fed indoors on grain or grain-based pellets, is white.
Cuisine.
Cream is used as an ingredient in many foods, including ice cream, many sauces, soups, stews, puddings, and some custard bases, and is also used for cakes. Whipped cream is served as a topping on ice cream sundaes, milkshakes, lassi, eggnog, sweet pies, strawberries, blueberries, or peaches. Cream is also used in Indian curries such as masala dishes. Both single and double cream (see Types for definitions) can be used in cooking. Double cream or full-fat crème fraîche is often used when the cream is added to a hot sauce, to prevent it separating or "splitting". Double cream can be thinned with milk to make an approximation of single cream.
Cream (usually light/single cream or half and half) may be added to coffee.
The French word "crème" denotes not only dairy cream but also other thick liquids such as sweet and savory custards, which are normally made with milk, not cream.
Types.
Different grades of cream are distinguished by their fat content, whether they have been heat-treated, whipped, and so on. In many jurisdictions, there are regulations for each type.
Australia and New Zealand.
The Australia New Zealand Food Standards Code – Standard 2.5.2 – defines cream as a milk product comparatively rich in fat, in the form of an emulsion of fat-in-skim milk, which can be obtained by separation from milk. Cream sold without further specification must contain no less than 350 g/kg (35%) milk fat.
Manufacturers' labels may distinguish between different fat contents; a general guideline is as follows:
Canada.
Canadian cream definitions are similar to those used in the United States, except for "light cream", which is very low-fat cream, usually with 5 or 6 percent butterfat. Specific product characteristics are generally uniform throughout Canada, but names vary by both geographic and linguistic area and by manufacturer: "coffee cream" may be 10 or 18 percent cream and "half-and-half" () may be 3, 5, 6 or 10 percent, all depending on location and brand.
Regulations allow cream to contain acidity regulators and stabilizers. For whipping cream, allowed additives include skim milk powder (≤ 0.25%), glucose solids (≤ 0.1%), calcium sulphate (≤ 0.005%), and xanthan gum (≤ 0.02%). The content of milk fat in canned cream must be displayed as a percentage followed by "milk fat", "B.F", or "M.F".
France.
In France, the use of the term "cream" for food products is defined by decree 80-313 of April 23, 1980. It specifies the minimum milk fat content (12%) as well as the rules for pasteurisation or UHT sterilisation. The designation "crème fraîche" (fresh cream) can only be used for pasteurised creams packaged at the production site within 24 hours of pasteurisation. Even though food additives complying with French and European laws are allowed, usually none will be found in plain "crèmes" and "crèmes fraîches" apart from lactic ferments (some low-cost creams, or near-creams, can contain thickening agents, but rarely). Fat content is commonly shown as "XX% M.G." ("matière grasse").
Russia.
Russia, as well as other EAC countries, legally separates cream into two classes: normal (10–34% butterfat) and heavy (35–58%), but the industry has largely standardized around the following types:
Sweden.
In Sweden, cream is usually sold as:
Mellangrädde (27%) is, nowadays, a less common variant.
Gräddfil (usually 12%) and Creme Fraiche (usually around 35%) are two common sour cream products.
Switzerland.
In Switzerland, the types of cream are legally defined as follows:
Sour cream and crème fraîche (German: Sauerrahm, Crème fraîche; French: crème acidulée, crème fraîche; Italian: panna acidula, crème fraîche) are defined as cream soured by bacterial cultures.
Thick cream is defined as cream thickened using thickening agents.
United Kingdom.
In the United Kingdom, these types of cream are produced. Fat content must meet the Food Labelling Regulations 1996.
United States.
In the United States, cream is usually sold as:
Not all grades are defined by all jurisdictions, and the exact fat content ranges vary. The above figures, except for "manufacturer's cream", are based on the Code of Federal Regulations, Title 21, Part 131.
Processing and additives.
Cream may have thickening agents and stabilizers added. Thickeners include sodium alginate, carrageenan, gelatine, sodium bicarbonate, tetrasodium pyrophosphate, and alginic acid.
Other processing may be carried out. For example, cream has a tendency to produce oily globules (called "feathering") when added to coffee. The stability of the cream may be increased by increasing the non-fat solids content, which can be done by partial demineralisation and addition of sodium caseinate, although this is expensive.
Other items called "cream".
Some non-edible substances are called creams due to their consistency: shoe cream is runny, unlike regular waxy shoe polish; hand/body "creme" or "skin cream" is meant for moisturizing the skin.
Regulations in many jurisdictions restrict the use of the word "cream" for foods. Words such as "creme", "kreme", "creame", or "whipped topping" (e.g., Cool Whip) are often used for products which cannot legally be called cream, though in some jurisdictions even these spellings may be disallowed, for example under the doctrine of "idem sonans". Oreo and Hydrox cookies are a type of sandwich cookie in which two biscuits have a soft, sweet filling between them that is called "crème filling." In some cases, foods can be described as cream although they do not contain predominantly milk fats; for example, in Britain, "ice cream" can contain non-milk fat (declared on the label) in addition to or instead of cream, and salad cream is the customary name for a non-dairy condiment that has been produced since the 1920s.
In other languages, cognates of "cream" are also sometimes used for non-food products, such as fogkrém (Hungarian for toothpaste), or Sonnencreme (German for sunscreen).
Some products are described as "cream alternatives". For example, "Elmlea Double", etc. are blends of buttermilk or lentils and vegetable oil with other additives sold by Upfield in the United Kingdom packaged and shelved in the same way as cream, labelled as having "a creamy taste".
|
6111
|
6908984
|
https://en.wikipedia.org/wiki?curid=6111
|
Chemical vapor deposition
|
Chemical vapor deposition (CVD) is a vacuum deposition method used to produce high-quality, and high-performance, solid materials. The process is often used in the semiconductor industry to produce thin films.
In typical CVD, the wafer (substrate) is exposed to one or more volatile precursors, which react and/or decompose on the substrate surface to produce the desired deposit. Frequently, volatile by-products are also produced, which are removed by gas flow through the reaction chamber.
Microfabrication processes widely use CVD to deposit materials in various forms, including: monocrystalline, polycrystalline, amorphous, and epitaxial. These materials include: silicon (dioxide, carbide, nitride, oxynitride), carbon (fiber, nanofibers, nanotubes, diamond and graphene), fluorocarbons, filaments, tungsten, titanium nitride and various high-κ dielectrics.
The term "chemical vapour deposition" was coined in 1960 by "John M. Blocher, Jr." who intended to differentiate "chemical" from "physical vapour deposition" (PVD).
Types.
CVD is practiced in a variety of formats. These processes generally differ in the means by which chemical reactions are initiated.
Most modern CVD is either LPCVD or UHVCVD.
Uses.
CVD is commonly used to deposit conformal films and augment substrate surfaces in ways that more traditional surface modification techniques are not capable of. CVD is extremely useful in the process of atomic layer deposition at depositing extremely thin layers of material. A variety of applications for such films exist. Gallium arsenide is used in some integrated circuits (ICs) and photovoltaic devices. Amorphous polysilicon is used in photovoltaic devices. Certain carbides and nitrides confer wear-resistance. Polymerization by CVD, perhaps the most versatile of all applications, allows for super-thin coatings which possess some very desirable qualities, such as lubricity, hydrophobicity and weather-resistance to name a few. The CVD of metal-organic frameworks, a class of crystalline nanoporous materials, has recently been demonstrated. Recently scaled up as an integrated cleanroom process depositing large-area substrates, the applications for these films are anticipated in gas sensing and low-κ dielectrics. CVD techniques are advantageous for membrane coatings as well, such as those in desalination or water treatment, as these coatings can be sufficiently uniform (conformal) and thin that they do not clog membrane pores.
Commercially important materials prepared by CVD.
Polysilicon.
Polycrystalline silicon is deposited from trichlorosilane (SiHCl3) or silane (SiH4), using the following reactions:
SiHCl3 → Si + Cl2 + HCl
SiH4 → Si + 2 H2
This reaction is usually performed in LPCVD systems, with either pure silane feedstock or silane diluted with 70–80% nitrogen. Temperatures between 600 and 650 °C and pressures between 25 and 150 Pa yield a growth rate between 10 and 20 nm per minute. An alternative process uses a hydrogen-based feed mixture. The hydrogen reduces the growth rate, but the temperature is raised to 850 or even 1050 °C to compensate. Polysilicon may be grown directly with doping, if gases such as phosphine, arsine or diborane are added to the CVD chamber. Diborane increases the growth rate, but arsine and phosphine decrease it.
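To put the quoted growth rates in perspective, the time needed for a given film thickness follows directly; for an illustrative 300 nm polysilicon layer (a thickness chosen here only as an example, not from the text):
\[ t = \frac{d}{r} = \frac{300\ \mathrm{nm}}{10\text{–}20\ \mathrm{nm/min}} \approx 15\text{–}30\ \mathrm{min}. \]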
Silicon dioxide.
Silicon dioxide (usually called simply "oxide" in the semiconductor industry) may be deposited by several different processes. Common source gases include silane and oxygen, dichlorosilane (SiCl2H2) and nitrous oxide (N2O), or tetraethylorthosilicate (TEOS; Si(OC2H5)4). The reactions are as follows:
SiH4 + O2 → SiO2 + 2 H2
SiCl2H2 + 2 N2O → SiO2 + 2 N2 + 2 HCl
Si(OC2H5)4 → SiO2 + byproducts
The choice of source gas depends on the thermal stability of the substrate; for instance, aluminium is sensitive to high temperature. Silane deposits between 300 and 500 °C, dichlorosilane at around 900 °C, and TEOS between 650 and 750 °C, resulting in a layer of "low-temperature oxide" (LTO). However, silane produces a lower-quality oxide than the other methods (lower dielectric strength, for instance), and it deposits nonconformally. Any of these reactions may be used in LPCVD, but the silane reaction is also done in APCVD. CVD oxide invariably has lower quality than thermal oxide, but thermal oxidation can only be used in the earliest stages of IC manufacturing.
Oxide may also be grown with impurities (alloying or "doping"). This may have two purposes. During further process steps that occur at high temperature, the impurities may diffuse from the oxide into adjacent layers (most notably silicon) and dope them. Oxides containing 5–15% impurities by mass are often used for this purpose. In addition, silicon dioxide alloyed with phosphorus pentoxide ("P-glass") can be used to smooth out uneven surfaces. P-glass softens and reflows at temperatures above 1000 °C. This process requires a phosphorus concentration of at least 6%, but concentrations above 8% can corrode aluminium. Phosphorus is deposited from phosphine gas and oxygen:
4 PH3 + 5 O2 → 2 P2O5 + 6 H2
Glasses containing both boron and phosphorus (borophosphosilicate glass, BPSG) undergo viscous flow at lower temperatures; around 850 °C is achievable with glasses containing around 5 weight % of both constituents, but stability in air can be difficult to achieve. Phosphorus oxide in high concentrations interacts with ambient moisture to produce phosphoric acid. Crystals of BPO4 can also precipitate from the flowing glass on cooling; these crystals are not readily etched in the standard reactive plasmas used to pattern oxides, and will result in circuit defects in integrated circuit manufacturing.
Besides these intentional impurities, CVD oxide may contain byproducts of the deposition. TEOS produces a relatively pure oxide, whereas silane introduces hydrogen impurities, and dichlorosilane introduces chlorine.
Lower temperature deposition of silicon dioxide and doped glasses from TEOS using ozone rather than oxygen has also been explored (350 to 500 °C). Ozone glasses have excellent conformality but tend to be hygroscopic – that is, they absorb water from the air due to the incorporation of silanol (Si-OH) in the glass. Infrared spectroscopy and mechanical strain as a function of temperature are valuable diagnostic tools for diagnosing such problems.
Silicon nitride.
Silicon nitride is often used as an insulator and chemical barrier in manufacturing ICs. The following two reactions deposit silicon nitride from the gas phase:
3 SiH4 + 4 NH3 → Si3N4 + 12 H2
3 SiCl2H2 + 4 NH3 → Si3N4 + 6 HCl + 6 H2
Silicon nitride deposited by LPCVD contains up to 8% hydrogen. It also experiences strong tensile stress, which may crack films thicker than 200 nm. However, it has higher resistivity and dielectric strength than most insulators commonly available in microfabrication (10^16 Ω·cm and 10 MV/cm, respectively).
Another two reactions may be used in plasma to deposit SiNH:
2 SiH4 + N2 → 2 SiNH + 3 H2
SiH4 + NH3 → SiNH + 3 H2
These films have much less tensile stress, but worse electrical properties (resistivity 10^6 to 10^15 Ω·cm, and dielectric strength 1 to 5 MV/cm).
Metals.
Tungsten CVD, used for forming conductive contacts, vias, and plugs on a semiconductor device, is achieved from tungsten hexafluoride (WF6), which may be deposited in two ways:
WF6 → W + 3 F2
WF6 + 3 H2 → W + 6 HF
Other metals, notably aluminium and copper, can be deposited by CVD. Commercially cost-effective CVD for copper did not exist, although volatile sources exist, such as Cu(hfac)2. Copper is typically deposited by electroplating. Aluminium can be deposited from triisobutylaluminium (TIBAL) and related organoaluminium compounds.
CVD for molybdenum, tantalum, titanium, nickel is widely used. These metals can form useful silicides when deposited onto silicon. Mo, Ta and Ti are deposited by LPCVD, from their pentachlorides. Nickel, molybdenum, and tungsten can be deposited at low temperatures from their carbonyl precursors. In general, for an arbitrary metal "M", the chloride deposition reaction is as follows:
2 MCl5 + 5 H2 → 2 M + 10 HCl
whereas the carbonyl decomposition reaction can happen spontaneously under thermal treatment or acoustic cavitation and is as follows:
M(CO)n → M + n CO
The decomposition of metal carbonyls is often violently precipitated by moisture or air, where oxygen reacts with the metal precursor to form metal or metal oxide along with carbon dioxide.
Niobium(V) oxide layers can be produced by the thermal decomposition of niobium(V) ethoxide with the loss of diethyl ether according to the equation:
2 Nb(OC2H5)5 → Nb2O5 + 5 C2H5OC2H5
Graphene.
Many variations of CVD can be utilized to synthesize graphene. Although many advancements have been made, the processes listed below are not commercially viable yet.
The most popular carbon source that is used to produce graphene is methane gas. One of the less popular choices is petroleum asphalt, notable for being inexpensive but more difficult to work with.
Although methane is the most popular carbon source, hydrogen is required during the preparation process to promote carbon deposition on the substrate. If the flow ratio of methane to hydrogen is not appropriate, it will cause undesirable results. During the growth of graphene, the role of methane is to provide a carbon source, and the role of hydrogen is to provide H atoms to corrode amorphous carbon and improve the quality of graphene. But excessive H atoms can also corrode graphene. As a result, the integrity of the crystal lattice is destroyed, and the quality of the graphene deteriorates. Therefore, by optimizing the flow rates of methane and hydrogen in the growth process, the quality of graphene can be improved.
The use of a catalyst can change the physical process of graphene production. Notable examples include iron nanoparticles, nickel foam, and gallium vapor. These catalysts can either be used in situ during graphene buildup or situated at some distance from the deposition area. Some catalysts require an additional step to remove them from the sample material.
The direct growth of high-quality, large single-crystalline domains of graphene on a dielectric substrate is of vital importance for applications in electronics and optoelectronics. Combining the advantages of both catalytic CVD and the ultra-flat dielectric substrate, gaseous catalyst-assisted CVD paves the way for synthesizing high-quality graphene for device applications while avoiding the transfer process.
Physical conditions such as surrounding pressure, temperature, carrier gas, and chamber material play a big role in production of graphene.
Most systems use LPCVD with pressures ranging from 1 to 1500 Pa. However, some still use APCVD. Low pressures are used more commonly as they help prevent unwanted reactions and produce more uniform thickness of deposition on the substrate.
On the other hand, temperatures used range from 800 to 1050 °C. High temperatures translate to an increased rate of reaction. Caution must be exercised, however, as higher temperatures pose greater hazards in addition to greater energy costs.
Hydrogen gas and inert gases such as argon are flowed into the system. These gases act as a carrier, enhancing surface reaction and improving reaction rate, thereby increasing deposition of graphene onto the substrate.
Standard quartz tubing and chambers are used in CVD of graphene. Quartz is chosen because it has a very high melting point and is chemically inert. In other words, quartz does not interfere with any physical or chemical reactions regardless of the conditions.
Raman spectroscopy, X-ray spectroscopy, transmission electron microscopy (TEM), and scanning electron microscopy (SEM) are used to examine and characterize the graphene samples.
Raman spectroscopy is used to characterize and identify the graphene particles; X-ray spectroscopy is used to characterize chemical states; TEM is used to provide fine details regarding the internal composition of graphene; SEM is used to examine the surface and topography.
Sometimes, atomic force microscopy (AFM) is used to measure local properties such as friction and magnetism.
Cold wall CVD technique can be used to study the underlying surface science involved in graphene nucleation and growth as it allows unprecedented control of process parameters like gas flow rates, temperature and pressure as demonstrated in a recent study. The study was carried out in a home-built vertical cold wall system utilizing resistive heating by passing direct current through the substrate. It provided conclusive insight into a typical surface-mediated nucleation and growth mechanism involved in two-dimensional materials grown using catalytic CVD under conditions sought out in the semiconductor industry.
Graphene nanoribbon.
In spite of graphene's exciting electronic and thermal properties, it is unsuitable for use as a transistor in future digital devices, due to the absence of a bandgap between the conduction and valence bands. This makes it impossible to switch between on and off states with respect to electron flow. Scaling things down, graphene nanoribbons of less than 10 nm in width do exhibit electronic bandgaps and are therefore potential candidates for digital devices. Precise control over their dimensions, and hence electronic properties, however, represents a challenging goal, and the ribbons typically possess rough edges that are detrimental to their performance.
Diamond.
CVD can be used to produce a synthetic diamond by creating the circumstances necessary for carbon atoms in a gas to settle on a substrate in crystalline form. CVD of diamonds has received much attention in the materials sciences because it allows many new applications that had previously been considered too expensive. CVD diamond growth typically occurs under low pressure (1–27 kPa; 0.145–3.926 psi; 7.5–203 Torr) and involves feeding varying amounts of gases into a chamber, energizing them and providing conditions for diamond growth on the substrate. The gases always include a carbon source, and typically include hydrogen as well, though the amounts used vary greatly depending on the type of diamond being grown. Energy sources include hot filament, microwave power, and arc discharges, among others. The energy source is intended to generate a plasma in which the gases are broken down and more complex chemistries occur. The actual chemical process for diamond growth is still under study and is complicated by the very wide variety of diamond growth processes used.
Using CVD, films of diamond can be grown over large areas of substrate with control over the properties of the diamond produced. In the past, when high pressure high temperature (HPHT) techniques were used to produce a diamond, the result was typically very small free-standing diamonds of varying sizes. With CVD diamond, growth areas of greater than fifteen centimeters (six inches) in diameter have been achieved, and much larger areas are likely to be successfully coated with diamond in the future. Improving this process is key to enabling several important applications.
The growth of diamond directly on a substrate allows the addition of many of diamond's important qualities to other materials. Since diamond has the highest thermal conductivity of any bulk material, layering diamond onto high heat-producing electronics (such as optics and transistors) allows the diamond to be used as a heat sink. Diamond films are being grown on valve rings, cutting tools, and other objects that benefit from diamond's hardness and exceedingly low wear rate. In each case the diamond growth must be carefully done to achieve the necessary adhesion onto the substrate. Diamond's very high scratch resistance and thermal conductivity, combined with a lower coefficient of thermal expansion than Pyrex glass, a coefficient of friction close to that of Teflon (polytetrafluoroethylene) and strong lipophilicity would make it a nearly ideal non-stick coating for cookware if large substrate areas could be coated economically.
CVD growth allows one to control the properties of the diamond produced. In the area of diamond growth, the word "diamond" is used as a description of any material primarily made up of sp3-bonded carbon, and there are many different types of diamond included in this. By regulating the processing parameters—especially the gases introduced, but also including the pressure the system is operated under, the temperature of the diamond, and the method of generating plasma—many different materials that can be considered diamond can be made. Single-crystal diamond can be made containing various dopants. Polycrystalline diamond consisting of grain sizes from several nanometers to several micrometers can be grown. Some polycrystalline diamond grains are surrounded by thin, non-diamond carbon, while others are not. These different factors affect the diamond's hardness, smoothness, conductivity, optical properties and more.
Chalcogenides.
Commercially, mercury cadmium telluride is of continuing interest for detection of infrared radiation. Consisting of an alloy of CdTe and HgTe, this material can be prepared from the dimethyl derivatives of the respective elements.
|
6112
|
46546583
|
https://en.wikipedia.org/wiki?curid=6112
|
CN Tower
|
The CN Tower () is a communications and observation tower in Toronto, Ontario, Canada. Completed in 1976, it is located in downtown Toronto, built on the former Railway Lands. Its name "CN" referred to Canadian National, the railway company that built the tower. Following the railway's decision to divest non-core freight railway assets prior to the company's privatization in 1995, it transferred the tower to the Canada Lands Company, a federal Crown corporation responsible for the government's real estate portfolio.
The CN Tower held the record for the world's tallest free-standing structure for 32 years, from 1975 until 2007, when it was surpassed by the Burj Khalifa, and was the world's tallest tower until 2009 when it was surpassed by the Canton Tower. It is currently the tenth-tallest free-standing structure in the world and remains the tallest free-standing structure on land in the Western Hemisphere. In 1995, the CN Tower was declared one of the modern Seven Wonders of the World by the American Society of Civil Engineers. It also belongs to the World Federation of Great Towers.
It is a signature icon of Toronto's skyline and attracts more than two million international visitors annually. It houses several observation decks, a revolving restaurant at some , and an entertainment complex.
History.
The original concept of the CN Tower was first conceived in 1968 when the Canadian National Railway wanted to build a large television and radio communication platform to serve the Toronto area, and to demonstrate the strength of Canadian industry and CN in particular. These plans evolved over the next few years, and the project became official in 1972.
The tower would have been part of Metro Centre (see CityPlace), a large development south of Front Street on the Railway Lands, a large railway switching yard that was being made redundant after the opening of the MacMillan Yard north of the city in 1965 (then known as Toronto Yard). Key project team members were NCK Engineering as structural engineer; John Andrews Architects; Webb, Zerafa, Menkes, Housden Architects; Foundation Building Construction; and Canron (Eastern Structural Division).
As Toronto grew rapidly during the late 1960s and early 1970s, multiple skyscrapers were constructed in the downtown core, most notably First Canadian Place, which houses the Bank of Montreal's head offices. The reflective nature of the new buildings reduced the quality of broadcast signals, requiring new, higher antennas that were at least tall. The radio wire is estimated to be long in 44 pieces, the heaviest of which weighs around .
At the time, most data communications took place over point-to-point microwave links, whose dish antennas covered the roofs of large buildings. As each new skyscraper was added to the downtown, former line-of-sight links were no longer possible. CN intended to rent "hub" space for microwave links, visible from almost any building in the Toronto area.
The original plan for the tower envisioned a tripod consisting of three independent cylindrical "pillars" linked at various heights by structural bridges. Had it been built, this design would have been considerably shorter, with the metal antenna located roughly where the concrete section between the main level and The Top lies today. As the design effort continued, it evolved into the current design with a single continuous hexagonal core to The Top, with three support legs blended into the hexagon below the main level, forming a large Y-shape structure at the ground level.
The idea for the main level in its current form evolved around this time, but the Space Deck (currently named The Top) was not part of the plans until later. One engineer in particular felt that visitors would consider a higher observation deck worth paying extra for, and that the additional construction costs were not prohibitive. Also around this time, it was realized that the tower could become the world's tallest free-standing structure to improve signal quality and attract tourists, and plans were changed to incorporate subtle modifications throughout the structure to this end.
Construction.
The CN Tower was built by Canada Cement Company (also known as the Cement Foundation Company of Canada at the time), a subsidiary of Sweden's Skanska, a global project-development and construction group.
Construction began on February 6, 1973, with massive excavations at the tower base for the foundation. By the time the foundation was complete, of earth and shale were removed to a depth of in the centre, and a base incorporating of concrete with of rebar and of steel cable had been built to a thickness of . This portion of the construction was fairly rapid, with only four months needed between the start and the foundation being ready for construction on top.
To create the main support pillar, workers constructed a hydraulically raised slipform at the base. This was a fairly unprecedented engineering feat on its own, consisting of a large metal platform that raised itself on jacks at about per day as the concrete below set. Concrete was poured Monday to Friday (not continuously) by a small team of people until February 22, 1974, at which time it had already become the tallest structure in Canada, surpassing the recently built tall Inco Superstack in Sudbury, built using similar methods.
The tower contains of concrete, all of which was mixed on-site in order to ensure batch consistency. Through the pour, the vertical accuracy of the tower was maintained by comparing the slip form's location to massive plumb bobs hanging from it, observed by small telescopes from the ground. Over the height of the tower, it varies from true vertical accuracy by only .
In August 1974, construction of the main level commenced. Using 45 hydraulic jacks attached to cables strung from a temporary steel crown anchored to the top of the tower, twelve giant steel and wooden bracket forms were slowly raised, ultimately taking about a week to crawl up to their final position. These forms were used to create the brackets that support the main level, as well as a base for the construction of the main level itself. The Top was built of concrete poured into a wooden frame attached to rebar at the lower level deck, and then reinforced with a large steel compression band around the outside.
While still under construction, the CN Tower officially became the world's tallest free-standing structure on March 31, 1975.
The antenna was originally to be raised by crane as well, but, during construction, the Sikorsky S-64 Skycrane helicopter became available when the United States Army sold one to civilian operators. The helicopter, named "Olga", was first used to remove the crane, and then flew the antenna up in 36 sections.
The flights of the antenna pieces were a minor tourist attraction of their own, and the schedule was printed in local newspapers. Use of the helicopter saved months of construction time, with this phase taking only three and a half weeks instead of the planned six months. The tower was topped-off on April 2, 1975, after 26 months of construction, officially capturing the height record from Moscow's Ostankino Tower, and bringing the total mass to .
Two years into the construction, plans for Metro Centre were scrapped, leaving the tower isolated on the Railway Lands in what was then a largely abandoned light-industrial space. This caused serious problems for tourists to access the tower. Ned Baldwin, project architect with John Andrews, wrote at the time that "All of the logic which dictated the design of the lower accommodation has been upset," and that "Under such ludicrous circumstances Canadian National would hardly have chosen this location to build."
Opening.
The CN Tower opened on June 26, 1976. The construction costs of approximately ($ in dollars) were repaid in fifteen years.
From the mid-1970s to the mid-1980s, the CN Tower was practically the only development along Front Street West; it was still possible to see Lake Ontario from the foot of the CN Tower due to the expansive parking lots and lack of development in the area at the time. As the area around the tower was developed, particularly with the completion of the Metro Toronto Convention Centre (north building) in 1984 and SkyDome in 1989 (renamed Rogers Centre in 2005), the former Railway Lands were redeveloped and the tower became the centre of a newly developing entertainment area. Access was greatly improved with the construction of the SkyWalk in 1989, which connected the tower and SkyDome to the nearby Union Station railway and subway station, and, in turn, to the city's Path underground pedestrian system. By the mid-1990s, it was the centre of a thriving tourist district. The entire area continues to be an area of intense building, notably a boom in condominium construction in the early 21st century, as well as the 2013 opening of the Ripley's Aquarium by the base of the tower.
Early years.
When the CN Tower opened in 1976, there were three public observation points: The Top (then known as the Space Deck) that stands at , the Indoor Observation Level (later named Main Observation Level) at , and the Outdoor Observation Terrace (at the same level as the Glass Floor) at . One floor above the Indoor Observation Level was the Top of Toronto Restaurant (now named "360 The Restaurant at the CN Tower"), which completed a revolution once every 72 minutes.
The tower would garner worldwide media attention when stuntman Dar Robinson jumped off the CN Tower on two occasions in 1979 and 1980. The first was for a scene from the movie "Highpoint", in which Robinson received ($ in dollars) for the stunt. The second was for a personal documentary. The first stunt had him use a parachute which he deployed three seconds before impact with the ground, while the second one used a wire decelerator attached to his back.
On June 26, 1986, the tenth anniversary of the tower's opening, high-rise firefighting and rescue advocate Dan Goodwin, in a sponsored publicity event, used his hands and feet to climb the outside of the tower, a feat he performed twice on the same day. Following both ascents, he used multiple rappels to descend to the ground.
From 1985 to 1992, the CN Tower basement level hosted the world's first flight simulator ride, Tour of the Universe, based on the flight of a Space Shuttle. The ride was replaced in 1992 with a similar attraction entitled "Space Race." It was later dismantled and replaced by two other rides in 1998 and 1999.
The 1990s and 2000s.
A glass floor at an elevation of was installed in 1994. Canadian National Railway sold the tower to Canada Lands Company prior to privatizing the company in 1995, when it divested all operations not directly related to its core freight shipping businesses. The tower's name and wordmark were adjusted to remove the CN railways logo, and the tower was officially renamed Canada's National Tower (from Canadian National Tower), though it is still commonly called the CN Tower.
Further changes were made from 1997 to January 2004: TrizecHahn Corporation managed the tower and instituted several expansion projects, including an entertainment expansion and the 1997 addition of two new elevators (for a total of six) with the consequent relocation of the staircase from the north side leg to inside the core of the building, a conversion that also added nine stairs to the climb. At approximately the same time, TrizecHahn also owned the Willis Tower (then known as the Sears Tower) in Chicago.
In 2007, light-emitting diode (LED) lights replaced the incandescent lights that lit the CN Tower at night, in order to take advantage of the cost savings of LEDs over incandescent bulbs. Unlike the constant white of the incandescent lights, the colour of the LED lights can be changed. On September 12, 2007, Burj Khalifa in Dubai, then under construction and known as Burj Dubai, surpassed the CN Tower as the world's tallest free-standing structure on land. In 2008, glass panels were installed in one of the CN Tower elevators, establishing a world record for the highest glass-floor-panelled elevator in the world.
2010s: EdgeWalk.
On August 1, 2011, the CN Tower opened the EdgeWalk, an amusement in which thrill-seekers can walk on and around the roof of the main pod of the tower at , which is directly above the 360 Restaurant. It is the world's highest full-circle, hands-free walk. Visitors are tethered to an overhead rail system and walk around the edge of the CN Tower's main pod above the 360 Restaurant on a metal floor. The attraction is closed throughout the winter and during periods of electrical storms and high winds.
One of the notable guests who visited EdgeWalk was Canadian comedian Rick Mercer, featured as the first episode of the ninth season of his CBC Television news satire show, "Rick Mercer Report". There, he was accompanied by Canadian pop singer Jann Arden. The episode first aired on April 10, 2013.
2015 Pan Am Games.
The tower and surrounding areas were prominent in the 2015 Pan American Games production. In the opening ceremony, a pre-recorded segment featured track-and-field athlete Bruny Surin passing the flame to sprinter Donovan Bailey on the EdgeWalk and parachuting into Rogers Centre. A fireworks display off the tower concluded both the opening and closing ceremonies.
Canada 150.
On July 1, 2017, as part of the nationwide celebrations for Canada 150, which celebrated the 150th anniversary of Canadian Confederation, fireworks were once again shot from the tower in a five-minute display coordinated with the tower lights and music broadcast on a local radio station.
2020s.
The CN Tower was closed for much of the COVID-19 pandemic. During the closure, the gift shop was renovated to take advantage of the absence of visitors.
Structure.
The CN Tower consists of several substructures. The main portion of the tower is a hollow concrete hexagonal pillar containing the stairwells and power and plumbing connections. The tower's six elevators are located in the three inverted angles created by the Tower's hexagonal shape (two elevators per angle). Each of the three elevator shafts is lined with glass, allowing for views of the city as the glass-windowed elevators make their way through the tower. The stairwell was originally located in one of these angles (the one facing north), but was moved into the central hollow of the tower; the tower's new fifth and sixth elevators were placed in the hexagonal angle that once contained the stairwell. On top of the main concrete portion of the tower is a tall metal broadcast antenna, carrying television and radio signals. There are three visitor areas:
The hexagonal shape is visible between the two highest areas; however, below the main deck, three large supporting legs give the tower the appearance of a large tripod.
The main deck level has seven storeys, some of which are open to the public. Below the public areas—at —is a large white donut-shaped radome containing the structure's UHF transmitters. The glass floor at the Lower Observation Level has an area of and can withstand a pressure of . The floor's thermal glass units are thick, consisting of a pane of laminated glass, airspace and a pane of laminated glass. In 2008, one elevator was upgraded to add a glass floor panel, believed to have the highest vertical rise of any elevator equipped with this feature. The Horizons Cafe and the lookout level are at . The 360 Restaurant (formally "360 The Restaurant at the CN Tower"), a revolving restaurant that completes a full rotation once every 72 minutes, is at . When the tower first opened, it also featured a discotheque named Sparkles (at the Indoor Observation Level), billed as the highest disco and dance floor in the world.
The Top was once the highest public observation deck in the world until it was surpassed by the Shanghai World Financial Center in 2008.
A metal staircase reaches the main deck level after 1,776 steps, and The Top above after 2,579 steps; it is the tallest metal staircase on Earth. These stairs are intended for emergency use only, except during charity stair-climb events held twice a year. The average climber takes approximately 30 minutes to climb to the base of the radome, but the fastest climb on record is 7 minutes and 52 seconds, set in 1989 by Brendan Keenoy, an Ontario Provincial Police officer. In 2002, Canadian Olympian and Paralympic champion Jeff Adams climbed the stairs of the tower in a specially designed wheelchair. The stairs were originally on one of the three sides of the tower (facing north), with a glass view, but these were later replaced with the third elevator pair and the stairs were moved to the inside of the core. Top climbs on the new, windowless stairwell used since around 2003 have generally taken over ten minutes.
Falling ice danger.
A freezing rain storm on March 2, 2007, resulted in a layer of ice several centimetres thick forming on the side of the tower and other downtown buildings. The sun thawed the ice, then winds of up to blew some of it away from the structure. There were fears that cars and windows of nearby buildings would be smashed by large chunks of ice. In response, police closed some streets surrounding the tower. During morning rush hour on March 5 of the same year, police expanded the area of closed streets to include the Gardiner Expressway away from the tower as increased winds blew the ice farther, as far north as King Street West, away, where a taxicab window was shattered. Subsequently, on March 6, 2007, the Gardiner Expressway reopened after winds abated.
On April 16, 2018, falling ice from the CN Tower punctured the roof of the nearby Rogers Centre stadium, causing the Toronto Blue Jays to postpone the game that day to the following day as a doubleheader; this was the third doubleheader held at the Rogers Centre. On April 20 of the same year, the CN Tower reopened.
Safety features.
In August 2000, a fire broke out at the Ostankino Tower in Moscow, killing three people and causing extensive damage. The fire was blamed on poor maintenance and outdated equipment. The failure of the fire-suppression systems and the lack of proper equipment for firefighters allowed the fire to destroy most of the interior and sparked fears the tower might even collapse.
The Ostankino Tower was completed nine years before the CN Tower and is only shorter. The parallels between the towers led to some concern that the CN Tower could be at risk of a similar tragedy. However, Canadian officials subsequently stated that it is "highly unlikely" that a similar disaster could occur at the CN Tower, as it has important safeguards that were not present in the Ostankino Tower. Specifically, officials cited:
Officials also noted that the CN Tower has an excellent safety record, although there was an electrical fire in the antennas on August 16, 2017 — the tower's first fire. Moreover, other supertall structures built between 1967 and 1976 — such as the Willis Tower (formerly the Sears Tower), the World Trade Center (until its destruction on September 11, 2001), the Fernsehturm Berlin, the Aon Center, 875 North Michigan Avenue (formerly the John Hancock Center), and First Canadian Place — also have excellent safety records, which suggests that the Ostankino Tower accident was a rare safety failure, and that the likelihood of similar events occurring at other supertall structures is extremely low.
Lighting.
The CN Tower was originally lit at night with incandescent lights, which were removed in 1997 because they were inefficient and expensive to repair. In June 2007, the tower was outfitted with 1,330 super-bright LED lights inside the elevator shafts, shooting over the main pod and upward to the top of the tower's mast to light the tower from dusk until 2 a.m. the next calendar day. The official opening ceremony took place on June 28, 2007, before the Canada Day holiday weekend.
The tower changes its lighting scheme on holidays and to commemorate major events. After the 95th Grey Cup in Toronto, the tower was lit in green and white to represent the colours of the Grey Cup champion Saskatchewan Roughriders. From sundown on August 27, 2011, to sunrise the following day, the tower was lit in orange, the official colour of the New Democratic Party (NDP), to commemorate the death of federal NDP leader and leader of the official opposition Jack Layton. When former South African president Nelson Mandela died, the tower was lit in the colours of the South African flag. When former federal finance minister under Stephen Harper's Conservatives Jim Flaherty died, the tower was lit in green to reflect his Irish Canadian heritage. On the night of the attacks on Paris on November 13, 2015, the tower displayed the colours of the French flag. On June 8, 2021, the tower displayed the colours of the Toronto Maple Leafs' archrivals Montreal Canadiens after they advanced to the semifinals of 2021 Stanley Cup playoffs. The CN Tower was lit in the colours of the Ukrainian flag during the beginning of the Russian invasion of Ukraine in late February 2022. On June 4, 2025, the CN Tower was lit in the colours of the Edmonton Oilers for the 2025 Stanley Cup Final.
Programmed remotely from a desktop computer with a wireless network interface controller in Burlington, Ontario, the LEDs use less energy to light than the previous incandescent lights (10% less energy than the dimly lit version and 60% less than the brightly lit version). The estimated cost to use the LEDs is $1,000 per month.
During the spring and autumn bird migration seasons, the lights are turned off to comply with the voluntary Fatal Light Awareness Program, which "encourages buildings to dim unnecessary exterior lighting to mitigate bird mortality during spring and summer migration."
Height comparisons.
The CN Tower is the tallest freestanding structure in the Western Hemisphere. As of 2013, there were two other freestanding structures in the Western Hemisphere exceeding in height: the Willis Tower in Chicago, which stands at when measured to its pinnacle, and One World Trade Center in New York City, which has a pinnacle height of , or approximately shorter than the CN Tower. Due to the symbolism of the number 1776 (the year of the signing of the United States Declaration of Independence), the height of One World Trade Center is unlikely to be increased. The proposed Chicago Spire was expected to exceed the height of the CN Tower, but its construction was halted early due to financial difficulties amid the Great Recession, and was eventually cancelled in 2010.
Height distinction debate.
"World's Tallest Tower" title.
"Guinness World Records" has called the CN Tower "the world's tallest self-supporting tower" and "the world's tallest free-standing tower". Although Guinness did list this description of the CN Tower under the heading "tallest building" at least once, it has also listed it under "tallest tower", omitting it from its list of "tallest buildings." In 1996, Guinness changed the tower's classification to "World's Tallest Building and Freestanding Structure". Emporis and the Council on Tall Buildings and Urban Habitat both listed the CN Tower as the world's tallest free-standing structure on land, and specifically state that the CN Tower is not a true building, thereby awarding the title of world's tallest building to Taipei 101, which is shorter than the CN Tower. The issue of what was tallest became moot when Burj Khalifa, then under construction, exceeded the height of the CN Tower in 2007 (see below).
Although the CN Tower contains a restaurant, a gift shop and multiple observation levels, it does not have floors continuously from the ground, and therefore it is not considered a building by the Council on Tall Buildings and Urban Habitat (CTBUH) or Emporis. CTBUH defines a building as "a structure that is designed for residential, business, or manufacturing purposes. An essential characteristic of a building is that it has floors." The CN Tower and other similar structures—such as the Ostankino Tower in Moscow, Russia; the Oriental Pearl Tower in Shanghai, China; The Strat in Las Vegas, Nevada, United States; and the Eiffel Tower in Paris, France—are categorized as "towers", which are free-standing structures that may have observation decks and a few other habitable levels, but do not have floors from the ground up. The CN Tower was the tallest tower by this definition until 2010 (see below).
Taller than the CN Tower are numerous radio masts and towers, which are held in place by guy-wires, the tallest being the KVLY-TV mast in Blanchard, North Dakota, in the United States at tall, leading to a distinction between these and "free-standing" structures. Additionally, the Petronius Platform stands above its base on the bottom of the Gulf of Mexico near the Mississippi River Delta, but only the top of this oil and natural gas platform are above water, and the structure is thus partially supported by its buoyancy. Like the CN Tower, none of these taller structures are commonly considered buildings.
On September 12, 2007, Burj Khalifa, which is a hotel, residential and commercial building in Dubai, United Arab Emirates (formerly known as Burj Dubai before opening), passed the CN Tower's 553.33-m height. The CN Tower held the record of the tallest freestanding structure on land for over 30 years.
After Burj Khalifa had been formally recognized by Guinness World Records as the world's tallest freestanding structure, Guinness re-certified the CN Tower as the world's tallest freestanding tower. Guinness adopted the Council on Tall Buildings and Urban Habitat's definition of a tower as 'a building in which less than 50% of the construction is usable floor space'. "Guinness World Records" editor-in-chief Craig Glenday announced that Burj Khalifa was not classified as a tower because it has too much usable floor space. At the time of the recertification, the CN Tower still held world records for the highest above-ground wine cellar (in 360 Restaurant) at 351 m, the highest above-ground restaurant at 346 m (Horizons Restaurant), and the tallest free-standing concrete tower. The CN Tower was surpassed in 2009 by the Canton Tower in Guangzhou, China, which stands at tall, as the world's tallest tower; which in turn was surpassed by the Tokyo Skytree in 2011, which is currently the tallest tower at in height. The CN Tower, as of 2022, stands as the tenth-tallest free-standing structure on land, remains the tallest free-standing structure in the Western Hemisphere, and is the third-tallest tower. The CN Tower is the second-tallest free-standing structure in the Commonwealth of Nations behind Merdeka 118 in Kuala Lumpur, Malaysia.
Height records.
Since its construction, the tower has gained the following world height records:
Use.
The CN Tower has been and continues to be used as a communications tower for a number of different media and by numerous companies.
Radio.
There is no AM broadcasting from the CN Tower. The FM transmitters are situated in a metal broadcast antenna, on top of the main concrete portion of the tower at an elevation above from the ground.
In popular culture.
The CN Tower has been featured in numerous films, television shows, music recording covers, and video games. The tower also has its own official mascot, which resembles the tower itself.
|
6113
|
6727347
|
https://en.wikipedia.org/wiki?curid=6113
|
Chain rule
|
In calculus, the chain rule is a formula that expresses the derivative of the composition of two differentiable functions and in terms of the derivatives of and . More precisely, if formula_1 is the function such that formula_2 for every , then the chain rule is, in Lagrange's notation,
formula_3
or, equivalently,
formula_4
The chain rule may also be expressed in Leibniz's notation. If a variable depends on the variable , which itself depends on the variable (that is, and are dependent variables), then depends on as well, via the intermediate variable . In this case, the chain rule is expressed as
formula_5
and
formula_6
for indicating at which points the derivatives have to be evaluated.
In integration, the counterpart to the chain rule is the substitution rule.
Intuitive explanation.
Intuitively, the chain rule states that knowing the instantaneous rate of change of relative to and that of relative to allows one to calculate the instantaneous rate of change of relative to as the product of the two rates of change.
As put by George F. Simmons: "If a car travels twice as fast as a bicycle and the bicycle is four times as fast as a walking man, then the car travels 2 × 4 = 8 times as fast as the man."
The relationship between this example and the chain rule is as follows. Let , and be the (variable) positions of the car, the bicycle, and the walking man, respectively. The rate of change of relative positions of the car and the bicycle is formula_7 Similarly, formula_8 So, the rate of change of the relative positions of the car and the walking man is
formula_9
The rate of change of positions is the ratio of the speeds, and the speed is the derivative of the position with respect to the time; that is,
formula_10
or, equivalently,
formula_11
which is also an application of the chain rule.
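A minimal numeric sketch of this example in Python, with made-up linear positions chosen purely for illustration, confirms that the overall rate is the product of the two intermediate rates.

# Illustrative positions: the bicycle is 4x as fast as the man, the car 2x as fast as the bicycle.
def man(t):      return t
def bicycle(w):  return 4 * w
def car(b):      return 2 * b

t, dt = 3.0, 1e-6
w0, w1 = man(t), man(t + dt)
rate_car_vs_man = (car(bicycle(w1)) - car(bicycle(w0))) / (w1 - w0)
print(rate_car_vs_man)   # about 8.0, i.e. 2 * 4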
History.
The chain rule seems to have first been used by Gottfried Wilhelm Leibniz. He used it to calculate the derivative of formula_12 as the composite of the square root function and the function formula_13. He first mentioned it in a 1676 memoir (with a sign error in the calculation). The common notation of the chain rule is due to Leibniz. Guillaume de l'Hôpital used the chain rule implicitly in his "Analyse des infiniment petits". The chain rule does not appear in any of Leonhard Euler's analysis books, even though they were written over a hundred years after Leibniz's discovery. It is believed that the first "modern" version of the chain rule appears in Lagrange's 1797 "Théorie des fonctions analytiques"; it also appears in Cauchy's 1823 "Résumé des Leçons données a L’École Royale Polytechnique sur Le Calcul Infinitesimal".
Statement.
The simplest form of the chain rule is for real-valued functions of one real variable. It states that if ' is a function that is differentiable at a point ' (i.e. the derivative exists) and ' is a function that is differentiable at , then the composite function formula_14 is differentiable at ', and the derivative is
formula_15
The rule is sometimes abbreviated as
formula_16
If and , then this abbreviated form is written in Leibniz notation as:
formula_17
The points where the derivatives are evaluated may also be stated explicitly:
formula_18
Carrying the same reasoning further, given functions formula_19 with the composite function formula_20, if each function formula_21 is differentiable at its immediate input, then the composite function is also differentiable, by repeated application of the chain rule, where the derivative is (in Leibniz's notation):
formula_22
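The statement can be checked numerically. The following Python sketch, with arbitrary illustrative choices of the two functions, compares a central-difference estimate of the derivative of the composite with the value predicted by the chain rule.

import math

# Illustrative choice: the composite is sin(x**2), i.e. the outer function is sin
# and the inner function is x -> x**2.
outer, d_outer = math.sin, math.cos
inner   = lambda x: x * x
d_inner = lambda x: 2 * x

def composite(x):
    return outer(inner(x))

x, eps = 1.3, 1e-6
numeric = (composite(x + eps) - composite(x - eps)) / (2 * eps)   # central difference
chain   = d_outer(inner(x)) * d_inner(x)                          # chain-rule value
print(numeric, chain)   # the two values agree to roughly nine decimal places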
Applications.
Composites of more than two functions.
The chain rule can be applied to composites of more than two functions. To take the derivative of a composite of more than two functions, notice that the composite of , , and ' (in that order) is the composite of with . The chain rule states that to compute the derivative of , it is sufficient to compute the derivative of ' and the derivative of . The derivative of can be calculated directly, and the derivative of can be calculated by applying the chain rule again.
For concreteness, consider the function
formula_23
This can be decomposed as the composite of three functions:
formula_24
So that formula_25.
Their derivatives are:
formula_26
The chain rule states that the derivative of their composite at the point is:
formula_27
In Leibniz's notation, this is:
formula_28
or for short,
formula_29
The derivative function is therefore:
formula_30
Another way of computing this derivative is to view the composite function as the composite of and "h". Applying the chain rule in this manner would yield:
formula_31
This is the same as what was computed above. This should be expected because .
Sometimes, it is necessary to differentiate an arbitrarily long composition of the form formula_32. In this case, define
formula_33
where formula_34 and formula_35 when formula_36. Then the chain rule takes the form
formula_37
or, in the Lagrange notation,
formula_38
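In code, differentiating such a long composition amounts to multiplying together the derivative of each link, evaluated at the appropriate intermediate value. The sketch below uses an arbitrary illustrative list of functions and checks the result against a finite difference.

import math

# Each entry is (function, derivative); the composition applies the first entry innermost.
links = [
    (lambda x: x * x, lambda x: 2 * x),
    (math.sin,        math.cos),
    (math.exp,        math.exp),
]

def value_and_derivative(x, links):
    value, deriv = x, 1.0
    for f, dfdx in links:
        deriv *= dfdx(value)   # derivative of this link at the current intermediate value
        value = f(value)       # advance to the next intermediate value
    return value, deriv

x, eps = 0.7, 1e-6
_, deriv = value_and_derivative(x, links)
numeric = (value_and_derivative(x + eps, links)[0]
           - value_and_derivative(x - eps, links)[0]) / (2 * eps)
print(deriv, numeric)   # agree closely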
Quotient rule.
The chain rule can be used to derive some well-known differentiation rules. For example, the quotient rule is a consequence of the chain rule and the product rule. To see this, write the function as the product . First apply the product rule:
formula_39
To compute the derivative of , notice that it is the composite of with the reciprocal function, that is, the function that sends to . The derivative of the reciprocal function is formula_40. By applying the chain rule, the last expression becomes:
formula_41
which is the usual formula for the quotient rule.
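This derivation can be verified symbolically; the sketch below uses the SymPy library with placeholder choices for the numerator and (nonvanishing) denominator.

import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)        # any differentiable numerator; sin is only an illustration
g = sp.exp(x) + 2    # any differentiable, nonvanishing denominator

quotient_rule = (sp.diff(f, x) * g - f * sp.diff(g, x)) / g**2
direct        = sp.diff(f / g, x)
print(sp.simplify(quotient_rule - direct))   # 0, so the two expressions agree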
Derivatives of inverse functions.
Suppose that has an inverse function. Call its inverse function so that we have . There is a formula for the derivative of in terms of the derivative of . To see this, note that and satisfy the formula
formula_42
And because the functions formula_43 and are equal, their derivatives must be equal. The derivative of is the constant function with value 1, and the derivative of formula_43 is determined by the chain rule. Therefore, we have that:
formula_45
To express as a function of an independent variable , we substitute formula_46 for wherever it appears. Then we can solve for .
formula_47
For example, consider the function . It has an inverse . Because , the above formula says that
formula_48
This formula is true whenever is differentiable and its inverse is also differentiable. This formula can fail when one of these conditions is not true. For example, consider . Its inverse is , which is not differentiable at zero. If we attempt to use the above formula to compute the derivative of at zero, then we must evaluate . Since and , we must evaluate 1/0, which is undefined. Therefore, the formula fails in this case. This is not surprising because is not differentiable at zero.
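A numeric sketch of the inverse-function formula, using the exponential function and its inverse, the natural logarithm (chosen purely as an illustration):

import math

f, f_inv, df = math.exp, math.log, math.exp   # f, its inverse, and f'

x = 2.5
# Derivative of the inverse at x: 1 / f'(f_inv(x))
inverse_derivative = 1.0 / df(f_inv(x))
print(inverse_derivative, 1.0 / x)   # both equal 1/x = 0.4, the derivative of log at x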
Back propagation.
The chain rule forms the basis of the back propagation algorithm, which is used in gradient descent of neural networks in deep learning (artificial intelligence).
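A minimal sketch of how the chain rule drives back propagation: a single made-up 'neuron' computes sigmoid(w*x + b), and the gradient of a squared-error loss with respect to the weight w is the product of the local derivatives taken backwards through the composition. The data, parameters, and network here are hypothetical illustrations, not any particular library's API.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 0.5, 0.8   # one illustrative training example
w, b = 0.3, 0.1        # illustrative parameters

# Forward pass: loss = (sigmoid(w*x + b) - target)**2
z = w * x + b
y = sigmoid(z)
loss = (y - target) ** 2

# Backward pass: chain rule, link by link.
dloss_dy = 2 * (y - target)          # derivative of the loss with respect to y
dy_dz    = y * (1 - y)               # derivative of the sigmoid
dz_dw    = x                         # derivative of w*x + b with respect to w
dloss_dw = dloss_dy * dy_dz * dz_dw  # product of the three local derivatives
print(loss, dloss_dw)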
Higher derivatives.
Faà di Bruno's formula generalizes the chain rule to higher derivatives. Assuming that and , then the first few derivatives are:
formula_49
Proofs.
First proof.
One proof of the chain rule begins by defining the derivative of the composite function , where we take the limit of the difference quotient for as approaches :
formula_50
Assume for the moment that formula_51 does not equal formula_52 for any formula_53 near formula_54. Then the previous expression is equal to the product of two factors:
formula_55
If formula_56 oscillates near , then it might happen that no matter how close one gets to , there is always an even closer such that . For example, this happens near for the continuous function defined by for and otherwise. Whenever this happens, the above expression is undefined because it involves division by zero. To work around this, introduce a function formula_57 as follows:
formula_58
We will show that the difference quotient for is always equal to:
formula_59
Whenever is not equal to , this is clear because the factors of cancel. When equals , then the difference quotient for is zero because equals , and the above product is zero because it equals times zero. So the above product is always equal to the difference quotient, and to show that the derivative of at exists and to determine its value, we need only show that the limit as goes to of the above product exists and determine its value.
To do this, recall that the limit of a product exists if the limits of its factors exist. When this happens, the limit of the product of these two factors will equal the product of the limits of the factors. The two factors are and . The latter is the difference quotient for at , and because is differentiable at by assumption, its limit as tends to exists and equals .
As for , notice that is defined wherever ' is. Furthermore, ' is differentiable at by assumption, so is continuous at , by definition of the derivative. The function is continuous at because it is differentiable at , and therefore is continuous at . So its limit as ' goes to ' exists and equals , which is .
This shows that the limits of both factors exist and that they equal and , respectively. Therefore, the derivative of at "a" exists and equals .
Second proof.
Another way of proving the chain rule is to measure the error in the linear approximation determined by the derivative. This proof has the advantage that it generalizes to several variables. It relies on the following equivalent definition of differentiability at a point: A function "g" is differentiable at "a" if there exists a real number "g"′("a") and a function "ε"("h") that tends to zero as "h" tends to zero, and furthermore
formula_60
Here the left-hand side represents the true difference between the value of "g" at "a" and at , whereas the right-hand side represents the approximation determined by the derivative plus an error term.
In the situation of the chain rule, such a function "ε" exists because "g" is assumed to be differentiable at "a". Again by assumption, a similar function also exists for "f" at "g"("a"). Calling this function "η", we have
formula_61
The above definition imposes no constraints on "η"(0), even though it is assumed that "η"("k") tends to zero as "k" tends to zero. If we set , then "η" is continuous at 0.
Proving the theorem requires studying the difference as "h" tends to zero. The first step is to substitute for using the definition of differentiability of "g" at "a":
formula_62
The next step is to use the definition of differentiability of "f" at "g"("a"). This requires a term of the form for some "k". In the above equation, the correct "k" varies with "h". Set and the right hand side becomes . Applying the definition of the derivative gives:
formula_63
To study the behavior of this expression as "h" tends to zero, expand "k""h". After regrouping the terms, the right-hand side becomes:
formula_64
Because "ε"("h") and "η"("k""h") tend to zero as "h" tends to zero, the first two bracketed terms tend to zero as "h" tends to zero. Applying the same theorem on products of limits as in the first proof, the third bracketed term also tends zero. Because the above expression is equal to the difference , by the definition of the derivative is differentiable at "a" and its derivative is
The role of "Q" in the first proof is played by "η" in this proof. They are related by the equation:
formula_65
The need to define "Q" at "g"("a") is analogous to the need to define "η" at zero.
Third proof.
Constantin Carathéodory's alternative definition of the differentiability of a function can be used to give an elegant proof of the chain rule.
Under this definition, a function is differentiable at a point if and only if there is a function , continuous at and such that . There is at most one such function, and if is differentiable at then .
Given the assumptions of the chain rule and the fact that differentiable functions and compositions of continuous functions are continuous, we have that there exist functions , continuous at , and , continuous at , and such that,
formula_66
and
formula_67
Therefore,
formula_68
but the function given by is continuous at , and we get, for this
formula_69
A similar approach works for continuously differentiable (vector-)functions of many variables. This method of factoring also allows a unified approach to stronger forms of differentiability, when the derivative is required to be Lipschitz continuous, Hölder continuous, etc. Differentiation itself can be viewed as the polynomial remainder theorem (the little Bézout theorem, or factor theorem), generalized to an appropriate class of functions.
Proof via infinitesimals.
If formula_70 and formula_71 then choosing infinitesimal formula_72 we compute the corresponding formula_73 and then the corresponding formula_74, so that
formula_75
and applying the standard part we obtain
formula_76
which is the chain rule.
Multivariable case.
The full generalization of the chain rule to multi-variable functions (such as formula_77) is rather technical. However, it is simpler to write in the case of functions of the form
formula_78
where formula_79, and formula_80 for each formula_81
As this case occurs often in the study of functions of a single variable, it is worth describing it separately.
Case of scalar-valued functions with multiple inputs.
Let formula_79, and formula_80 for each formula_81
To write the chain rule for the composition of functions
formula_85
one needs the partial derivatives of with respect to its arguments. The usual notations for partial derivatives involve names for the arguments of the function. As these arguments are not named in the above formula, it is simpler and clearer to use "D"-Notation, and to denote by
formula_86
the partial derivative of with respect to its th argument, and by
formula_87
the value of this derivative at .
With this notation, the chain rule is
formula_88
Example: arithmetic operations.
If the function is addition, that is, if
formula_89
then formula_90 and formula_91. Thus, the chain rule gives
formula_92
For multiplication
formula_93
the partials are formula_94 and formula_95. Thus,
formula_96
The case of exponentiation
formula_97
is slightly more complicated, as
formula_98
and, as formula_99
formula_100
It follows that
formula_101
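As a numeric check of the exponentiation case: for y = g(x)**h(x), the chain rule gives y' = y * (h'(x) * ln g(x) + h(x) * g'(x) / g(x)). The particular g and h in the sketch below are arbitrary illustrations (with g kept positive so that the power is defined).

import math

g,  dg = lambda x: x * x + 1, lambda x: 2 * x   # g > 0 everywhere
h,  dh = math.sin,            math.cos

def y(x):
    return g(x) ** h(x)

x, eps = 0.9, 1e-6
numeric   = (y(x + eps) - y(x - eps)) / (2 * eps)
predicted = y(x) * (dh(x) * math.log(g(x)) + h(x) * dg(x) / g(x))
print(numeric, predicted)   # agree closely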
General rule: Vector-valued functions with multiple inputs.
The simplest way for writing the chain rule in the general case is to use the total derivative, which is a linear transformation that captures all directional derivatives in a single formula. Consider differentiable functions and , and a point in . Let denote the total derivative of at and denote the total derivative of at . These two derivatives are linear transformations and , respectively, so they can be composed. The chain rule for total derivatives is that their composite is the total derivative of at :
formula_102
or for short,
formula_103
The higher-dimensional chain rule can be proved using a technique similar to the second proof given above.
Because the total derivative is a linear transformation, the functions appearing in the formula can be rewritten as matrices. The matrix corresponding to a total derivative is called a Jacobian matrix, and the composite of two derivatives corresponds to the product of their Jacobian matrices. From this perspective the chain rule therefore says:
formula_104
or for short,
formula_105
That is, the Jacobian of a composite function is the product of the Jacobians of the composed functions (evaluated at the appropriate points).
The higher-dimensional chain rule is a generalization of the one-dimensional chain rule. If , , and are 1, so that and , then the Jacobian matrices of and are . Specifically, they are:
formula_106
The Jacobian of is the product of these matrices, so it is , as expected from the one-dimensional chain rule. In the language of linear transformations, is the function which scales a vector by a factor of and is the function which scales a vector by a factor of . The chain rule says that the composite of these two linear transformations is the linear transformation , and therefore it is the function that scales a vector by .
Another way of writing the chain rule is used when "f" and "g" are expressed in terms of their components as and . In this case, the above rule for Jacobian matrices is usually written as:
formula_107
The chain rule for total derivatives implies a chain rule for partial derivatives. Recall that when the total derivative exists, the partial derivative in the -th coordinate direction is found by multiplying the Jacobian matrix by the -th basis vector. By doing this to the formula above, we find:
formula_108
Since the entries of the Jacobian matrix are partial derivatives, we may simplify the above formula to get:
formula_109
More conceptually, this rule expresses the fact that a change in the direction may change all of through , and any of these changes may affect .
In the special case where , so that is a real-valued function, then this formula simplifies even further:
formula_110
This can be rewritten as a dot product. Recalling that , the partial derivative is also a vector, and the chain rule says that:
formula_111
Example.
Given where and , determine the value of and using the chain rule.
formula_112
and
formula_113
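As a concrete sketch of the Jacobian form of the rule, the following Python code (using the NumPy library, with made-up maps g : R^2 -> R^2 and f : R^2 -> R chosen only for illustration) checks numerically that the Jacobian of the composite at a point equals the matrix product of the Jacobian of f at g(a) with the Jacobian of g at a.

import numpy as np

def g(a):
    x, y = a
    return np.array([x * y, x + y])            # illustrative inner map

def f(u):
    u1, u2 = u
    return np.array([np.sin(u1) + u2 ** 2])    # illustrative outer map

def jacobian(func, a, eps=1e-6):
    """Numerical Jacobian by central differences."""
    a = np.asarray(a, dtype=float)
    m = func(a).size
    J = np.zeros((m, a.size))
    for j in range(a.size):
        step = np.zeros_like(a)
        step[j] = eps
        J[:, j] = (func(a + step) - func(a - step)) / (2 * eps)
    return J

a = np.array([1.0, 2.0])
composite = jacobian(lambda v: f(g(v)), a)
product   = jacobian(f, g(a)) @ jacobian(g, a)   # Jacobian of f at g(a) times Jacobian of g at a
print(composite)
print(product)   # the two 1x2 matrices agree to numerical precision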
Higher derivatives of multivariable functions.
Faà di Bruno's formula for higher-order derivatives of single-variable functions generalizes to the multivariable case. If is a function of as above, then the second derivative of is:
formula_114
Further generalizations.
All extensions of calculus have a chain rule. In most of these, the formula remains the same, though the meaning of that formula may be vastly different.
One generalization is to manifolds. In this situation, the chain rule represents the fact that the derivative of is the composite of the derivative of and the derivative of . This theorem is an immediate consequence of the higher dimensional chain rule given above, and it has exactly the same formula.
The chain rule is also valid for Fréchet derivatives in Banach spaces. The same formula holds as before. This case and the previous one admit a simultaneous generalization to Banach manifolds.
In differential algebra, the derivative is interpreted as a morphism of modules of Kähler differentials. A ring homomorphism of commutative rings determines a morphism of Kähler differentials which sends an element to , the exterior differential of . The formula holds in this context as well.
The common feature of these examples is that they are expressions of the idea that the derivative is part of a functor. A functor is an operation on spaces and functions between them. It associates to each space a new space and to each function between two spaces a new function between the corresponding new spaces. In each of the above cases, the functor sends each space to its tangent bundle and it sends each function to its derivative. For example, in the manifold case, the derivative sends a -manifold to a -manifold (its tangent bundle) and a -function to its total derivative. There is one requirement for this to be a functor, namely that the derivative of a composite must be the composite of the derivatives. This is exactly the formula .
There are also chain rules in stochastic calculus. One of these, Itō's lemma, expresses the composite of an Itō process (or more generally a semimartingale) "dX""t" with a twice-differentiable function "f". In Itō's lemma, the derivative of the composite function depends not only on "dX""t" and the derivative of "f" but also on the second derivative of "f". The dependence on the second derivative is a consequence of the non-zero quadratic variation of the stochastic process, which broadly speaking means that the process can move up and down in a very rough way. This variant of the chain rule is not an example of a functor because the two functions being composed are of different types.
|
6115
|
27015025
|
https://en.wikipedia.org/wiki?curid=6115
|
P versus NP problem
|
The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved.
Here, "quickly" means an algorithm exists that solves the task and runs in polynomial time (as opposed to, say, exponential time), meaning the task completion time is bounded above by a polynomial function on the size of the input to the algorithm. The general class of questions that some algorithm can answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if provided with an answer, it can be verified quickly. The class of questions where an answer can be "verified" in polynomial time is "NP", standing for "nondeterministic polynomial time".
An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time.
The problem has been called the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields.
It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution.
Example.
Consider the following yes/no problem: given an incomplete Sudoku grid of size formula_1, is there at least one legal solution where every row, column, and formula_2 square contains the integers 1 through formula_3? It is straightforward to verify "yes" instances of this generalized Sudoku problem given a candidate solution. However, it is not known whether there is a polynomial-time algorithm that can correctly answer "yes" or "no" to all instances of this problem. Therefore, generalized Sudoku is in NP (quickly verifiable), but may or may not be in P (quickly solvable). (It is necessary to consider a generalized version of Sudoku, as any fixed size Sudoku has only a finite number of possible grids. In this case the problem is in P, as the answer can be found by table lookup.)
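A short sketch of the "quickly verifiable" half of this observation: checking a proposed completed grid only requires scanning each row, column, and box once, which is polynomial in the grid size. The Python code below is an illustrative check for a completed grid; verifying a proposed completion of a partial grid works the same way.

def is_valid_sudoku(grid, n):
    """Check a completed (n*n) x (n*n) Sudoku grid in polynomial time."""
    size = n * n
    expected = set(range(1, size + 1))
    rows  = [set(row) for row in grid]
    cols  = [set(col) for col in zip(*grid)]
    boxes = [
        {grid[br * n + r][bc * n + c] for r in range(n) for c in range(n)}
        for br in range(n) for bc in range(n)
    ]
    return all(group == expected for group in rows + cols + boxes)

# 4x4 example (n = 2), with illustrative values.
grid = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]
print(is_valid_sudoku(grid, 2))   # True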
History.
The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973).
Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the National Security Agency, speculating that the time required to crack a sufficiently complex code would increase exponentially with the length of the key. If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated.
Context.
The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem).
In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is "deterministic" (given the computer's present state and any inputs, there is only one possible action that the computer might take) and "sequential" (it performs actions one after the other).
In this theory, the class P consists of all "decision problems" (defined below) solvable on a deterministic sequential machine in a duration polynomial in the size of the input; the class NP consists of all decision problems whose positive solutions are verifiable in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes:
Is P equal to NP?
Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing: in 2019, 88% of respondents believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, 99% of the 2019 respondents believed P ≠ NP. These polls say nothing about whether P = NP is actually true; Gasarch himself stated: "This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era."
NP-completeness.
To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are problems that any other NP problem is reducible to in polynomial time and whose solution is still verifiable in polynomial time. That is, any NP problem can be transformed into any NP-complete problem. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP.
NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time.
For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so "any" instance of "any" problem in NP can be transformed mechanically into a Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems are NP-complete, and no fast algorithm for any of them is known.
From the definition alone it is unintuitive that NP-complete problems exist; however, a trivial NP-complete problem can be formulated as follows: given a Turing machine "M" guaranteed to halt in polynomial time, does a polynomial-size input that "M" will accept exist? It is in NP because (given an input) it is simple to check whether "M" accepts the input by simulating "M"; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine "M" that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists.
The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tri-partite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial time solution of satisfiability, which in turn can be used to solve any other NP-problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem".
Harder problems.
Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have "exponential" running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2"p"("n")) time, where "p"("n") is a polynomial function of "n". A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an "N" × "N" board and similar problems for other board games.
The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length "n" has a runtime of at least formula_4 for some constant "c". Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all.
It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?". Clearly, a #P problem must be at least as hard as the corresponding NP problem, since the count of solutions immediately tells whether at least one solution exists: it does exactly when the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial-time solution to any of them would allow a polynomial-time solution to all other #P problems.
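A brute-force sketch (not part of the article) contrasting the two kinds of question for Boolean satisfiability; the helper names are hypothetical, and both routines here are exponential, since they enumerate every assignment.
from itertools import product

def clause_satisfied(clause, assignment):
    # A clause is a list of integers: k stands for variable k, -k for its
    # negation (variables are numbered from 1).
    return any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)

def count_solutions(cnf, n_vars):
    # #P-style question: how many satisfying assignments are there?
    return sum(
        all(clause_satisfied(c, a) for c in cnf)
        for a in product([False, True], repeat=n_vars)
    )

def is_satisfiable(cnf, n_vars):
    # NP-style question: is there at least one satisfying assignment?
    return count_solutions(cnf, n_vars) > 0

cnf = [[1, 2], [-1, 2]]                                  # (x1 or x2) and (not x1 or x2)
print(count_solutions(cnf, 2), is_satisfiable(cnf, 2))   # 2 True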
Problems in NP not known to be in P or NP-complete.
In 1975, Richard E. Ladner showed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem, and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time.
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than "k". No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The most efficient known algorithm for integer factorization is the general number field sieve, which takes expected time
formula_5
to factor an "n"-bit integer. The best known quantum algorithm for this problem, Shor's algorithm, runs in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes.
Does P mean "easy"?
All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as "Cobham's thesis". It is a common assumption in complexity theory, but there are caveats.
First, it can be false in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, rendering it impractical. For example, the problem of deciding whether a graph "G" contains "H" as a minor, where "H" is fixed, can be solved in a running time of "O"("n"2), where "n" is the number of vertices in "G". However, the big O notation hides a constant that depends superexponentially on "H". The constant is greater than formula_6 in Knuth's up-arrow notation, where "h" is the number of vertices in "H".
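As an aside (not part of the article), a few lines show how quickly up-arrow towers grow, which is why a constant of this shape is hopeless in practice.
def up_arrow(a, n, b):
    # Knuth's up-arrow notation, defined recursively:
    # a ↑^1 b = a**b, and a ↑^n b = a ↑^(n-1) (a ↑^n (b-1)), with a ↑^n 0 = 1.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(2, 2, 3))  # 2↑↑3 = 2**(2**2) = 16
print(up_arrow(2, 2, 4))  # 2↑↑4 = 2**16 = 65536; 2↑↑5 already has 19,729 digits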
On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms.
Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms.
Reasons to believe P ≠ NP or P = NP.
Cook provides a restatement of the problem in "The P Versus NP Problem" as "Does P = NP?" According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3,000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH.
It is also intuitively argued that the existence of problems that are hard to solve but whose solutions are easy to verify matches real-world experience.
On the other hand, some researchers believe that it is overconfident to believe P ≠ NP and that researchers should also explore proofs of P = NP. For example, in 2002 these statements were made:
DLIN vs NLIN.
When one substitutes "linear time on a multitape Turing machine" for "polynomial time" in the definitions of P and NP, one obtains the classes DLIN and NLIN.
It is known that DLIN ≠ NLIN.
Consequences of solution.
One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well.
P = NP.
A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields.
It is also very possible that a proof would "not" lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them.
A solution showing P = NP could upend the field of cryptography, which relies on certain problems being difficult. A constructive and efficient solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems.
These would need modification or replacement with information-theoretically secure solutions that do not assume P ≠ NP.
There are also enormous benefits that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; making these problems efficiently solvable could considerably advance life sciences and biotechnology.
These changes could be insignificant compared to the revolution that efficiently solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics.
Stephen Cook has made a similar point, assuming not only a proof but a practically efficient algorithm.
Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method guaranteed to find a proof if a "reasonable" size proof exists, would essentially end this struggle.
Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof.
P ≠ NP.
A proof of P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would represent a great advance in computational complexity theory and guide future research. It would demonstrate that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place.
P ≠ NP still leaves open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds.
Results about difficulty of proof.
Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are insufficient for answering the question, suggesting novel technical approaches are required.
As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into a few broad classifications (relativizing proofs, natural proofs, and algebrizing proofs), each of which is known to be insufficient to prove P ≠ NP.
These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results.
These barriers lead some computer scientists to suggest the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). An independence result could imply that either P ≠ NP and this is unprovable in (e.g.) ZFC, or that P = NP but it is unprovable in ZFC that any polynomial-time algorithms are correct. However, if the problem is undecidable even with much weaker assumptions extending the Peano axioms for integer arithmetic, then nearly polynomial-time algorithms exist for all NP problems. Therefore, assuming (as most complexity theorists do) some NP problems don't have efficient algorithms, proofs of independence with those techniques are impossible. This also implies proving independence from PA or ZFC with current techniques is no easier than proving all NP problems have efficient algorithms.
Logical characterizations.
The P = NP problem can be restated as certain classes of logical statements, as a result of work in descriptive complexity.
Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P are expressible in first-order logic with the addition of a suitable least fixed-point combinator. Recursive functions can be defined with this and the order relation. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P.
Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?". The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH).
Polynomial-time algorithms.
No known algorithm for an NP-complete problem runs in polynomial time. However, algorithms are known for NP-complete problems with the property that, if P = NP, they run in polynomial time on accepting instances (although with enormous constants, making them impractical). These algorithms do not qualify as polynomial time, because their running time on rejecting instances is not polynomial. The following algorithm, attributed to Levin (without any citation), is such an example. It correctly accepts the NP-complete language SUBSET-SUM, and it runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP:
"// Algorithm that accepts the NP-complete language SUBSET-SUM."
"// this is a polynomial-time algorithm if and only if P = NP."
"// "Polynomial-time" means it returns "yes" in polynomial time when"
"// the answer should be "yes", and runs forever when it is "no"."
"// Input: S = a finite set of integers"
"// Output: "yes" if any subset of S adds up to 0."
"// Runs forever with no output otherwise."
"// Note: "Program number M" is the program obtained by"
"// writing the integer M in binary, then"
"// considering that string of bits to be a"
"// program. Every possible program can be"
"// generated this way, though most do nothing"
"// because of syntax errors."
FOR K = 1...∞
  FOR M = 1...K
    Run program number M for K steps with input S
    IF the program outputs a list of distinct integers
      AND the integers are all in S
      AND the integers sum to 0
    THEN
      OUTPUT "yes" and HALT
This is a polynomial-time algorithm accepting an NP-complete language only if P = NP. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a "semi-algorithm").
This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is "b" bits long, the above algorithm will try at least 2"b" − 1 other programs first.
Formal definitions.
P and NP.
A "decision problem" is a problem that takes as input some string "w" over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that produces the correct answer for any input string of length "n" in at most "cnk" steps, where "k" and "c" are constants independent of the input string, then we say that the problem can be solved in "polynomial time" and we place it in the class P. Formally, P is the set of languages that can be decided by a deterministic polynomial-time Turing machine. Meaning,
formula_7
where
formula_8
and a deterministic polynomial-time Turing machine is a deterministic Turing machine "M" that satisfies two conditions:
formula_11
formula_12
NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach uses the concept of "certificate" and "verifier". Formally, NP is the set of languages with a finite alphabet and verifier that runs in polynomial time. The following defines a "verifier":
Let "L" be a language over a finite alphabet, Σ.
"L" ∈ NP if, and only if, there exists a binary relation formula_13 and a positive integer "k" such that the following two conditions are satisfied:
A Turing machine that decides "LR" is called a "verifier" for "L" and a "y" such that ("x", "y") ∈ "R" is called a "certificate of membership" of "x" in "L".
Not all verifiers must be polynomial-time. However, for "L" to be in NP, there must be a verifier that runs in polynomial time.
Example.
Let
formula_14
formula_15
Whether a value of "x" is composite is equivalent to of whether "x" is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations).
COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test.
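A minimal sketch (not part of the article) of a verifier in the sense just defined: a certificate of membership of "x" in COMPOSITE is a nontrivial divisor, which can be checked with a single division, so the verifier runs in polynomial time.
def composite_verifier(x, certificate):
    # Accept exactly when the certificate is a nontrivial divisor of x.
    d = certificate
    return 1 < d < x and x % d == 0

print(composite_verifier(15, 3))  # True: 3 certifies that 15 is composite
print(composite_verifier(13, 5))  # False: 5 is not a divisor of 13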
NP-completeness.
There are many equivalent ways of describing NP-completeness.
Let "L" be a language over a finite alphabet Σ.
"L" is NP-complete if, and only if, the following two conditions are satisfied:
Alternatively, if "L" ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to "L", then "L" is NP-complete. This is a common way of proving some new problem is NP-complete.
Claimed solutions.
While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger compiled a list of 116 purported proofs from 1986 to 2016, of which 61 were proofs of P = NP, 49 were proofs of P ≠ NP, and 6 proved other results, e.g. that the problem is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have been refuted.
Popular culture.
The film "Travelling Salesman", by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem.
In "Treehouse of Horror VI", the sixth episode of the seventh season of "The Simpsons", the equation P = NP is seen shortly after Homer accidentally stumbles into the "third dimension".
In "Solve for X", the second episode of season 2 of "Elementary", Sherlock and Watson investigate the murders of mathematicians who were attempting to solve P versus NP.
|
6117
|
47693289
|
https://en.wikipedia.org/wiki?curid=6117
|
Charles Sanders Peirce
|
Charles Sanders Peirce (September 10, 1839 – April 19, 1914) was an American scientist, mathematician, logician, and philosopher who is sometimes known as "the father of pragmatism". According to philosopher Paul Weiss, Peirce was "the most original and versatile of America's philosophers and America's greatest logician". Bertrand Russell wrote "he was one of the most original minds of the later nineteenth century and certainly the greatest American thinker ever".
Educated as a chemist and employed as a scientist for thirty years, Peirce meanwhile made major contributions to logic, such as theories of relations and quantification. C. I. Lewis wrote, "The contributions of C. S. Peirce to symbolic logic are more numerous and varied than those of any other writer—at least in the nineteenth century." For Peirce, logic also encompassed much of what is now called epistemology and the philosophy of science. He saw logic as the formal branch of semiotics or study of signs, of which he is a founder, which foreshadowed the debate among logical positivists and proponents of philosophy of language that dominated 20th-century Western philosophy. Peirce's study of signs also included a tripartite theory of predication.
Additionally, he defined the concept of abductive reasoning, as well as rigorously formulating mathematical induction and deductive reasoning. He was one of the founders of statistics. As early as 1886, he saw that logical operations could be carried out by electrical switching circuits. The same idea was used decades later to produce digital computers.
In metaphysics, Peirce was an "objective idealist" in the tradition of German philosopher Immanuel Kant as well as a scholastic realist about universals. He also held a commitment to the ideas of continuity and chance as real features of the universe, views he labeled synechism and tychism respectively. Peirce believed that an epistemic fallibilism and anti-skepticism went along with these views.
Biography.
Early life.
Peirce was born at 3 Phillips Place in Cambridge, Massachusetts. He was the son of Sarah Hunt Mills and Benjamin Peirce, himself a professor of mathematics and astronomy at Harvard University. At age 12, Charles read his older brother's copy of Richard Whately's "Elements of Logic", then the leading English-language text on the subject. So began his lifelong fascination with logic and reasoning.
He suffered from his late teens onward from a nervous condition then known as "facial neuralgia", which would today be diagnosed as trigeminal neuralgia. His biographer, Joseph Brent, says that when in the throes of its pain "he was, at first, almost stupefied, and then aloof, cold, depressed, extremely suspicious, impatient of the slightest crossing, and subject to violent outbursts of temper". Its consequences may have led to the social isolation of his later life.
Education.
Peirce went on to earn a Bachelor of Arts degree and a Master of Arts degree (1862) from Harvard. In 1863 the Lawrence Scientific School awarded him a Bachelor of Science degree, Harvard's first "summa cum laude" chemistry degree. His academic record was otherwise undistinguished. At Harvard, he began lifelong friendships with Francis Ellingwood Abbot, Chauncey Wright, and William James. One of his Harvard instructors, Charles William Eliot, formed an unfavorable opinion of Peirce. This proved fateful, because Eliot, while President of Harvard (1869–1909—a period encompassing nearly all of Peirce's working life), repeatedly vetoed Peirce's employment at the university.
United States Coast Survey.
Between 1859 and 1891, Peirce was intermittently employed in various scientific capacities by the United States Coast Survey, which in 1878 was renamed the United States Coast and Geodetic Survey, where he enjoyed his highly influential father's protection until the latter's death in 1880. At the Survey, he worked mainly in geodesy and gravimetry, refining the use of pendulums to determine small local variations in the Earth's gravity.
American Civil War.
This employment exempted Peirce from having to take part in the American Civil War; it would have been very awkward for him to do so, as the Boston Brahmin Peirces sympathized with the Confederacy. No members of the Peirce family volunteered or enlisted. Peirce grew up in a home where white supremacy was taken for granted, and slavery was considered natural. Peirce's father had described himself as a secessionist until the outbreak of the war, after which he became a Union partisan, providing donations to the Sanitary Commission, the leading Northern war charity.
Peirce liked to use the following syllogism to illustrate the unreliability of traditional forms of logic (for the first premise arguably assumes the conclusion):
All Men are equal in their political rights.
Negroes are Men.
Therefore, negroes are equal in political rights to whites.
Travels to Europe.
He was elected a resident fellow of the American Academy of Arts and Sciences in January 1867. The Survey sent him to Europe five times, first in 1871 as part of a group sent to observe a solar eclipse. There, he sought out Augustus De Morgan, William Stanley Jevons, and William Kingdon Clifford, British mathematicians and logicians whose turn of mind resembled his own.
Harvard observatory.
From 1869 to 1872, he was employed as an assistant in Harvard's astronomical observatory, doing important work on determining the brightness of stars and the shape of the Milky Way. In 1872 he founded the Metaphysical Club, a conversational philosophical club that Peirce, the future Supreme Court Justice Oliver Wendell Holmes Jr., the philosopher and psychologist William James, amongst others, formed in January 1872 in Cambridge, Massachusetts, and dissolved in December 1872. Other members of the club included Chauncey Wright, John Fiske, Francis Ellingwood Abbot, Nicholas St. John Green, and Joseph Bangs Warner. The discussions eventually birthed Peirce's notion of pragmatism.
National Academy of Sciences.
On April 20, 1877, he was elected a member of the National Academy of Sciences. Also in 1877, he proposed measuring the meter as so many wavelengths of light of a certain frequency, the kind of definition employed from 1960 to 1983.
In 1879 Peirce developed the Peirce quincuncial projection, having been inspired by H. A. Schwarz's 1869 conformal transformation of a circle onto a polygon of "n" sides (known as the Schwarz–Christoffel mapping).
1880 to 1891.
During the 1880s, Peirce's indifference to bureaucratic detail waxed while his Survey work's quality and timeliness waned. Peirce took years to write reports that he should have completed in months. Meanwhile, he wrote entries, ultimately thousands, during 1883–1909 on philosophy, logic, science, and other subjects for the encyclopedic "Century Dictionary". In 1885, an investigation by the Allison Commission exonerated Peirce, but led to the dismissal of Superintendent Julius Hilgard and several other Coast Survey employees for misuse of public funds. In 1891, Peirce resigned from the Coast Survey at Superintendent Thomas Corwin Mendenhall's request.
Johns Hopkins University.
In 1879, Peirce was appointed lecturer in logic at Johns Hopkins University, which had strong departments in areas that interested him, such as philosophy (Royce and Dewey completed their PhDs at Hopkins), psychology (taught by G. Stanley Hall and studied by Joseph Jastrow, who coauthored a landmark empirical study with Peirce), and mathematics (taught by J. J. Sylvester, who came to admire Peirce's work on mathematics and logic). His "Studies in Logic by Members of the Johns Hopkins University" (1883) contained works by himself and Allan Marquand, Christine Ladd, Benjamin Ives Gilman, and Oscar Howard Mitchell, several of whom were his graduate students. Peirce's nontenured position at Hopkins was the only academic appointment he ever held.
Brent documents something Peirce never suspected, namely that his efforts to obtain academic employment, grants, and scientific respectability were repeatedly frustrated by the covert opposition of a major Canadian-American scientist of the day, Simon Newcomb. Newcomb had been a favourite student of Peirce's father; although "no doubt quite bright", "like Salieri in Peter Shaffer's Amadeus he also had just enough talent to recognize he was not a genius and just enough pettiness to resent someone who was". Additionally "an intensely devout and literal-minded Christian of rigid moral standards", he was appalled by what he considered Peirce's personal shortcomings. Peirce's efforts may also have been hampered by what Brent characterizes as "his difficult personality". In contrast, Keith Devlin believes that Peirce's work was too far ahead of his time to be appreciated by the academic establishment of the day and that this played a large role in his inability to obtain a tenured position.
Personal life.
Peirce's personal life undoubtedly worked against his professional success. After his first wife, Harriet Melusina Fay ("Zina"), left him in 1875, Peirce, while still legally married, became involved with Juliette, whose last name, given variously as Froissy and Pourtalai, and nationality (she spoke French) remain uncertain. When his divorce from Zina became final in 1883, he married Juliette. That year, Newcomb pointed out to a Johns Hopkins trustee that Peirce, while a Hopkins employee, had lived and traveled with a woman to whom he was not married; the ensuing scandal led to his dismissal in January 1884. Over the years Peirce sought academic employment at various universities without success. He had no children by either marriage.
Later life and poverty.
In 1887, Peirce spent part of his inheritance from his parents to buy rural land near Milford, Pennsylvania, which never yielded an economic return. There he had an 1854 farmhouse remodeled to his design. The Peirces named the property "Arisbe". There they lived with few interruptions for the rest of their lives, Charles writing prolifically, with much of his work remaining unpublished to this day (see Works). Living beyond their means soon led to grave financial and legal difficulties. Charles spent much of his last two decades unable to afford heat in winter and subsisting on old bread donated by the local baker. Unable to afford new stationery, he wrote on the verso side of old manuscripts. An outstanding warrant for assault and unpaid debts led to his being a fugitive in New York City for a while. Several people, including his brother James Mills Peirce and his neighbors, relatives of Gifford Pinchot, settled his debts and paid his property taxes and mortgage.
Peirce did some scientific and engineering consulting and wrote much for meager pay, mainly encyclopedic dictionary entries, and reviews for "The Nation" (with whose editor, Wendell Phillips Garrison, he became friendly). He did translations for the Smithsonian Institution, at its director Samuel Langley's instigation. Peirce also did substantial mathematical calculations for Langley's research on powered flight. Hoping to make money, Peirce tried inventing. He began but did not complete several books. In 1888, President Grover Cleveland appointed him to the Assay Commission.
From 1890 on, he had a friend and admirer in Judge Francis C. Russell of Chicago, who introduced Peirce to editor Paul Carus and owner Edward C. Hegeler of the pioneering American philosophy journal "The Monist", which eventually published at least 14 articles by Peirce. He wrote many texts in James Mark Baldwin's "Dictionary of Philosophy and Psychology" (1901–1905); half of those credited to him appear to have been written actually by Christine Ladd-Franklin under his supervision. He applied in 1902 to the newly formed Carnegie Institution for a grant to write a systematic book describing his life's work. The application was doomed; his nemesis, Newcomb, served on the Carnegie Institution executive committee, and its president had been president of Johns Hopkins at the time of Peirce's dismissal.
The one who did the most to help Peirce in these desperate times was his old friend William James, dedicating his "Will to Believe" (1897) to Peirce, and arranging for Peirce to be paid to give two series of lectures at or near Harvard (1898 and 1903). Most important, each year from 1907 until James's death in 1910, James wrote to his friends in the Boston intelligentsia to request financial aid for Peirce; the fund continued even after James died. Peirce reciprocated by designating James's eldest son as his heir should Juliette predecease him. It has been believed that this was also why Peirce used "Santiago" ("St. James" in English) as a middle name, but he appeared in print as early as 1890 as Charles Santiago Peirce. (See Charles Santiago Sanders Peirce for discussion and references).
Death and legacy.
Peirce died destitute in Milford, Pennsylvania, twenty years before his widow. Juliette Peirce kept the urn with Peirce's ashes at Arisbe. In 1934, Pennsylvania Governor Gifford Pinchot arranged for Juliette's burial in Milford Cemetery. The urn with Peirce's ashes was interred with Juliette.
Bertrand Russell (1959) wrote "Beyond doubt [...] he was one of the most original minds of the later nineteenth century and certainly the greatest American thinker ever". Russell and Whitehead's "Principia Mathematica", published from 1910 to 1913, does not mention Peirce (Peirce's work was not widely known until later). A. N. Whitehead, while reading some of Peirce's unpublished manuscripts soon after arriving at Harvard in 1924, was struck by how Peirce had anticipated his own "process" thinking. (On Peirce and process metaphysics, see Lowe 1964.) Karl Popper viewed Peirce as "one of the greatest philosophers of all times". Yet Peirce's achievements were not immediately recognized. His imposing contemporaries William James and Josiah Royce admired him, and Cassius Jackson Keyser at Columbia and C. K. Ogden wrote about Peirce with respect, but to no immediate effect.
The first scholar to give Peirce his considered professional attention was Royce's student Morris Raphael Cohen, the editor of an anthology of Peirce's writings entitled "Chance, Love, and Logic" (1923), and the author of the first bibliography of Peirce's scattered writings. John Dewey studied under Peirce at Johns Hopkins. From 1916 onward, Dewey's writings repeatedly mention Peirce with deference. His 1938 "Logic: The Theory of Inquiry" is much influenced by Peirce. The publication of the first six volumes of "Collected Papers" (1931–1935) was the most important event to date in Peirce studies and one that Cohen made possible by raising the needed funds; however it did not prompt an outpouring of secondary studies. The editors of those volumes, Charles Hartshorne and Paul Weiss, did not become Peirce specialists. Early landmarks of the secondary literature include the monographs by Buchler (1939), Feibleman (1946), and Goudge (1950), the 1941 PhD thesis by Arthur W. Burks (who went on to edit volumes 7 and 8), and the studies edited by Wiener and Young (1952). The Charles S. Peirce Society was founded in 1946. Its "Transactions", an academic quarterly specializing in Peirce's pragmatism and American philosophy has appeared since 1965. (See Phillips 2014, 62 for discussion of Peirce and Dewey relative to transactionalism.)
By 1943 such was Peirce's reputation, in the US at least, that "Webster's Biographical Dictionary" said that Peirce was "now regarded as the most original thinker and greatest logician of his time".
In 1949, while doing unrelated archival work, the historian of mathematics Carolyn Eisele (1902–2000) chanced on an autograph letter by Peirce. So began her forty years of research on Peirce, “the mathematician and scientist,” culminating in Eisele (1976, 1979, 1985). In 1952, the Scottish philosopher W. B. Gallie had his book "Peirce and Pragmatism" published, which introduced the work of Peirce to an international readership. A.J. Ayer, the English philosopher, provided the Editorial Foreword to Gallie's book. In it he credited Peirce's philosophy as being 'not only of great historical significance, as one of the original sources of American pragmatism, but also extremely important in itself.' Ayer concluded: 'it is clear from Professor Gallie’s exposition of his doctrines that he is a philosopher from whom we still have much to learn.'
Beginning around 1960, Max Fisch (1900-1995), the philosopher and historian of ideas, emerged as an authority on Peirce (Fisch, 1986). He included many of his relevant articles in a survey (Fisch 1986: 422–448) of the impact of Peirce's thought through 1983.
Peirce has gained an international following, marked by university research centers devoted to Peirce studies and pragmatism in Brazil (CeneP/CIEP and Centro de Estudos de Pragmatismo), Finland (HPRC), Germany (Wirth's group, Hoffman's and Otte's group, and Deuser's and Härle's group), France (L'I.R.S.C.E.), Spain (GEP), and Italy (CSP). His writings have been translated into several languages, including German, French, Finnish, Spanish, and Swedish. Since 1950, there have been French, Italian, Spanish, British, and Brazilian Peirce scholars of note. For many years, the North American philosophy department most devoted to Peirce was the University of Toronto, thanks in part to the leadership of Thomas Goudge and David Savan. In recent years, U.S. Peirce scholars have clustered at Indiana University – Purdue University Indianapolis (home of the Peirce Edition Project, PEP) and Pennsylvania State University.
In recent years, Peirce's trichotomy of signs has been exploited by a growing number of practitioners for marketing and design tasks.
John Deely writes that Peirce was the last of the "moderns" and "first of the postmoderns". He lauds Peirce's doctrine of signs as a contribution to the dawn of the Postmodern epoch. Deely additionally comments that "Peirce stands...in a position analogous to the position occupied by Augustine as last of the Western Fathers and first of the medievals".
Works.
Peirce's reputation rests largely on academic papers published in American scientific and scholarly journals such as "Proceedings of the American Academy of Arts and Sciences", the "Journal of Speculative Philosophy", "The Monist", "Popular Science Monthly", the "American Journal of Mathematics", "Memoirs of the National Academy of Sciences", "The Nation", and others. See Articles by Peirce, published in his lifetime for an extensive list with links to them online. The only full-length book (neither extract nor pamphlet) that Peirce authored and saw published in his lifetime was "Photometric Researches" (1878), a 181-page monograph on the applications of spectrographic methods to astronomy. While at Johns Hopkins, he edited "Studies in Logic" (1883), containing chapters by himself and his graduate students. Besides lectures during his years (1879–1884) as lecturer in Logic at Johns Hopkins, he gave at least nine series of lectures, many now published; see Lectures by Peirce.
After Peirce's death, Harvard University obtained from Peirce's widow the papers found in his study, but did not microfilm them until 1964. Only after Richard Robin (1967) catalogued this "Nachlass" did it become clear that Peirce had left approximately 1,650 unpublished manuscripts, totaling over 100,000 pages, mostly still unpublished except on microfilm. On the vicissitudes of Peirce's papers, see Houser (1989). Reportedly the papers remain in unsatisfactory condition.
The first published anthology of Peirce's articles was the one-volume "Chance, Love and Logic: Philosophical Essays", edited by Morris Raphael Cohen, 1923, still in print. Other one-volume anthologies were published in 1940, 1957, 1958, 1972, 1994, and 2009, most still in print. The main posthumous editions of Peirce's works in their long trek to light, often multi-volume, and some still in print, have included:
1931–1958: "Collected Papers of Charles Sanders Peirce" (CP), 8 volumes, includes many published works, along with a selection of previously unpublished work and a smattering of his correspondence. This long-time standard edition drawn from Peirce's work from the 1860s to 1913 remains the most comprehensive survey of his prolific output from 1893 to 1913. It is organized thematically, but texts (including lecture series) are often split up across volumes, while texts from various stages in Peirce's development are often combined, requiring frequent visits to editors' notes. Edited (1–6) by Charles Hartshorne and Paul Weiss and (7–8) by Arthur Burks, in print and online.
1975–1987: "Charles Sanders Peirce: Contributions to" The Nation, 4 volumes, includes Peirce's more than 300 reviews and articles published 1869–1908 in "The Nation". Edited by Kenneth Laine Ketner and James Edward Cook, online.
1976: "The New Elements of Mathematics by Charles S. Peirce", 4 volumes in 5, included many previously unpublished Peirce manuscripts on mathematical subjects, along with Peirce's important published mathematical articles. Edited by Carolyn Eisele, back in print.
1977: "Semiotic and Significs: The Correspondence between C. S. Peirce and Victoria Lady Welby" (2nd edition 2001), included Peirce's entire correspondence (1903–1912) with Victoria, Lady Welby. Peirce's other published correspondence is largely limited to the 14 letters included in volume 8 of the "Collected Papers", and the 20-odd pre-1890 items included so far in the "Writings". Edited by Charles S. Hardwick with James Cook, out of print.
1982–now: "Writings of Charles S. Peirce, A Chronological Edition" (W), Volumes 1–6 & 8, of a projected 30. The limited coverage, and defective editing and organization, of the "Collected Papers" led Max Fisch and others in the 1970s to found the Peirce Edition Project (PEP), whose mission is to prepare a more complete critical chronological edition. Only seven volumes have appeared to date, but they cover the period from 1859 to 1892, when Peirce carried out much of his best-known work. "Writings of Charles S. Peirce", 8 was published in November 2010; and work continues on "Writings of Charles S. Peirce", 7, 9, and 11. In print and online.
1985: "Historical Perspectives on Peirce's Logic of Science: A History of Science", 2 volumes. Auspitz has said, "The extent of Peirce's immersion in the science of his day is evident in his reviews in the "Nation" [...] and in his papers, grant applications, and publishers' prospectuses in the history and practice of science", referring latterly to "Historical Perspectives". Edited by Carolyn Eisele, back in print.
1992: "Reasoning and the Logic of Things" collects in one place Peirce's 1898 series of lectures invited by William James. Edited by Kenneth Laine Ketner, with commentary by Hilary Putnam, in print.
1992–1998: "The Essential Peirce" (EP), 2 volumes, is an important recent sampler of Peirce's philosophical writings. Edited (1) by Nathan Hauser and Christian Kloesel and (2) by "Peirce Edition Project" editors, in print.
1997: "Pragmatism as a Principle and Method of Right Thinking" collects Peirce's 1903 Harvard "Lectures on Pragmatism" in a study edition, including drafts, of Peirce's lecture manuscripts, which had been previously published in abridged form; the lectures now also appear in "The Essential Peirce", 2. Edited by Patricia Ann Turisi, in print.
2010: "Philosophy of Mathematics: Selected Writings" collects important writings by Peirce on the subject, many not previously in print. Edited by Matthew E. Moore, in print.
Mathematics.
Peirce's most important work in pure mathematics was in logical and foundational areas. He also worked on linear algebra, matrices, various geometries, topology and Listing numbers, Bell numbers, graphs, the four-color problem, and the nature of continuity.
He worked on applied mathematics in economics, engineering, and map projections, and was especially active in probability and statistics.
The Peirce arrow (↓), the symbol for "(neither) ... nor ...", also called the "Quine dagger".
Peirce made a number of striking discoveries in formal logic and foundational mathematics, nearly all of which came to be appreciated only long after he died:
In 1860, he suggested a cardinal arithmetic for infinite numbers, years before any work by Georg Cantor (who completed his dissertation in 1867) and without access to Bernard Bolzano's 1851 (posthumous) "Paradoxien des Unendlichen".
In 1880–1881, he showed how Boolean algebra could be done via a single sufficient binary operation applied repeatedly (logical NOR), anticipating Henry M. Sheffer by 33 years; a brief illustration follows this list of discoveries. (See also De Morgan's Laws.)
In 1881, he set out the axiomatization of natural number arithmetic, a few years before Richard Dedekind and Giuseppe Peano. In the same paper Peirce gave, years before Dedekind, the first purely cardinal definition of a finite set in the sense now known as "Dedekind-finite", and implied by the same stroke an important formal definition of an infinite set (Dedekind-infinite), as a set that can be put into a one-to-one correspondence with one of its proper subsets.
In 1885, he distinguished between first-order and second-order quantification. In the same paper he set out what can be read as the first (primitive) axiomatic set theory, anticipating Zermelo by about two decades (Brady 2000, pp. 132–133).
In 1886, he saw that Boolean calculations could be carried out via electrical switches, anticipating Claude Shannon by more than 50 years.
By the later 1890s he was devising existential graphs, a diagrammatic notation for the predicate calculus. Based on them are John F. Sowa's conceptual graphs and Sun-Joo Shin's diagrammatic reasoning.
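The 1880–1881 observation about NOR can be made concrete with a short sketch (not part of the article): each of the usual connectives is expressible by repeating that single operation.
def nor(a, b):
    return not (a or b)

def not_(a):
    return nor(a, a)

def or_(a, b):
    return nor(nor(a, b), nor(a, b))

def and_(a, b):
    return nor(nor(a, a), nor(b, b))

# Exhaustive check over all truth values confirms the three definitions.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)
print("NOT, OR, and AND all reduce to NOR")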
Peirce wrote drafts for an introductory textbook, with the working title "The New Elements of Mathematics", that presented mathematics from an original standpoint. Those drafts and many other of his previously unpublished mathematical manuscripts finally appeared in "The New Elements of Mathematics by Charles S. Peirce" (1976), edited by mathematician Carolyn Eisele.
Peirce agreed with Auguste Comte in regarding mathematics as more basic than philosophy and the special sciences (of nature and mind). Peirce classified mathematics into three subareas: (1) mathematics of logic, (2) discrete series, and (3) pseudo-continua (as he called them, including the real numbers) and continua. Influenced by his father Benjamin, Peirce argued that mathematics studies purely hypothetical objects and is not just the science of quantity but is more broadly the science which draws necessary conclusions; that mathematics aids logic, not vice versa; and that logic itself is part of philosophy and is the science "about" drawing conclusions necessary and otherwise.
Mathematics of logic.
Mathematical logic and foundations, some noted articles
Probability and statistics.
Peirce held that science achieves statistical probabilities, not certainties, and that spontaneity ("absolute chance") is real (see Tychism on his view). Most of his statistical writings promote the frequency interpretation of probability (objective ratios of cases), and many of his writings express skepticism about (and criticize the use of) probability when such models are not based on objective randomization. Though Peirce was largely a frequentist, his possible world semantics introduced the "propensity" theory of probability before Karl Popper. Peirce (sometimes with Joseph Jastrow) investigated the probability judgments of experimental subjects, "perhaps the very first" elicitation and estimation of subjective probabilities in experimental psychology and (what came to be called) Bayesian statistics.
Peirce formulated modern statistics in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883). With a repeated measures design, Charles Sanders Peirce and Joseph Jastrow introduced blinded, controlled randomized experiments in 1884 (Hacking 1990:205) (before Ronald A. Fisher). He invented optimal design for experiments on gravity, in which he "corrected the means". He used correlation and smoothing. Peirce extended the work on outliers by Benjamin Peirce, his father. He introduced the terms "confidence" and "likelihood" (before Jerzy Neyman and Fisher). (See Stephen Stigler's historical books and Ian Hacking 1990.)
As a philosopher.
Peirce was a working scientist for 30 years, and arguably was a professional philosopher only during the five years he lectured at Johns Hopkins. He learned philosophy mainly by reading, each day, a few pages of Immanuel Kant's "Critique of Pure Reason", in the original German, while a Harvard undergraduate. His writings bear on a wide array of disciplines, including mathematics, logic, philosophy, statistics, astronomy, metrology, geodesy, experimental psychology, economics, linguistics, and the history and philosophy of science. This work has enjoyed renewed interest and approval, a revival inspired not only by his anticipations of recent scientific developments but also by his demonstration of how philosophy can be applied effectively to human problems.
Peirce's philosophy includes a pervasive three-category system: belief that truth is immutable and is both independent from actual opinion (fallibilism) and discoverable (no radical skepticism), logic as formal semiotic on signs, on arguments, and on inquiry's ways—including philosophical pragmatism (which he founded), critical common-sensism, and scientific method—and, in metaphysics: Scholastic realism, e.g. John Duns Scotus, belief in God, freedom, and at least an attenuated immortality, objective idealism, and belief in the reality of continuity and of absolute chance, mechanical necessity, and creative love. In his work, fallibilism and pragmatism may seem to work somewhat like skepticism and positivism, respectively, in others' work. However, for Peirce, fallibilism is balanced by an anti-skepticism and is a basis for belief in the reality of absolute chance and of continuity, and pragmatism commits one to anti-nominalist belief in the reality of the general (CP 5.453–457).
For Peirce, First Philosophy, which he also called cenoscopy, is less basic than mathematics and more basic than the special sciences (of nature and mind). It studies positive phenomena in general, phenomena available to any person at any waking moment, and does not settle questions by resorting to special experiences. He divided such philosophy into (1) phenomenology (which he also called phaneroscopy or categorics), (2) normative sciences (esthetics, ethics, and logic), and (3) metaphysics; his views on them are discussed in order below.
Peirce did not write extensively in aesthetics and ethics, but came by 1902 to hold that aesthetics, ethics, and logic, in that order, comprise the normative sciences. He characterized aesthetics as the study of the good (grasped as the admirable), and thus of the ends governing all conduct and thought.
Influence and legacy.
Peirce was described by Umberto Eco as "undoubtedly the greatest unpublished writer of our generation" and by Karl Popper as "one of the greatest philosophers of all time". The Internet Encyclopedia of Philosophy says of Peirce that although "long considered an eccentric figure whose contribution to pragmatism was to provide its name and whose importance was as an influence upon James and Dewey, Peirce's significance in his own right is now largely accepted."
Pragmatism.
Peirce's pragmatism is expounded in a number of noted articles and lectures.
As a movement, pragmatism began in the early 1870s in discussions among Peirce, William James, and others in the Metaphysical Club. James among others regarded some articles by Peirce such as "The Fixation of Belief" (1877) and especially "How to Make Our Ideas Clear" (1878) as foundational to pragmatism. Peirce (CP 5.11–12), like James ("Pragmatism: A New Name for Some Old Ways of Thinking", 1907), saw pragmatism as embodying familiar attitudes, in philosophy and elsewhere, elaborated into a new deliberate method for fruitful thinking about problems. Peirce differed from James and the early John Dewey, in some of their tangential enthusiasms, in being decidedly more rationalistic and realistic, in several senses of those terms, throughout the preponderance of his own philosophical moods.
In 1905 Peirce coined the new name pragmaticism "for the precise purpose of expressing the original definition", saying that "all went happily" with James's and F.C.S. Schiller's variant uses of the old name "pragmatism" and that he coined the new name because of the old name's growing use in "literary journals, where it gets abused". Yet he cited as causes, in a 1906 manuscript, his differences with James and Schiller and, in a 1908 publication, his differences with James as well as literary author Giovanni Papini's declaration of pragmatism's indefinability. Peirce in any case regarded his views that truth is immutable and infinity is real, as being opposed by the other pragmatists, but he remained allied with them on other issues.
Pragmatism begins with the idea that belief is that on which one is prepared to act. Peirce's pragmatism is a method of clarification of conceptions of objects. It equates any conception of an object to a conception of that object's effects to a general extent of the effects' conceivable implications for informed practice. It is a method of sorting out conceptual confusions occasioned, for example, by distinctions that make (sometimes needed) formal yet not practical differences. He formulated both pragmatism and statistical principles as aspects of scientific logic, in his "Illustrations of the Logic of Science" series of articles. In the second one, "How to Make Our Ideas Clear", Peirce discussed three grades of clearness of conception: the clearness of a conception familiar from everyday use, the clearness of an abstract definition, and the pragmatic clearness attained by conceiving the conception's conceivable practical effects.
By way of example of how to clarify conceptions, he addressed conceptions about truth and the real as questions of the presuppositions of reasoning in general. In clearness's second grade (the "nominal" grade), he defined truth as a sign's correspondence to its object, and the real as the object of such correspondence, such that truth and the real are independent of that which you or I or any actual, definite community of inquirers think. After that needful but confined step, next in clearness's third grade (the pragmatic, practice-oriented grade) he defined truth as that opinion which "would" be reached, sooner or later but still inevitably, by research taken far enough, such that the real does depend on that ideal final opinion—a dependence to which he appeals in theoretical arguments elsewhere, for instance for the long-run validity of the rule of induction. Peirce argued that even to argue against the independence and discoverability of truth and the real is to presuppose that there is, about that very question under argument, a truth with just such independence and discoverability.
Peirce said that a conception's meaning consists in "all general modes of rational conduct" implied by "acceptance" of the conception—that is, if one were to accept, first of all, the conception as true, then what could one conceive to be consequent general modes of rational conduct by all who accept the conception as true?—the whole of such consequent general modes is the whole meaning. His pragmatism does not equate a conception's meaning, its intellectual purport, with the conceived benefit or cost of the conception itself, like a meme (or, say, propaganda), outside the perspective of its being true, nor, since a conception is general, is its meaning equated with any definite set of actual consequences or upshots corroborating or undermining the conception or its worth. His pragmatism also bears no resemblance to "vulgar" pragmatism, which misleadingly connotes a ruthless and Machiavellian search for mercenary or political advantage. Instead the pragmatic maxim is the heart of his pragmatism as a method of experimentational mental reflection arriving at conceptions in terms of conceivable confirmatory and disconfirmatory circumstances—a method hospitable to the formation of explanatory hypotheses, and conducive to the use and improvement of verification.
Peirce's pragmatism, as method and theory of definitions and conceptual clearness, is part of his theory of inquiry, which he variously called speculative, general, formal or universal rhetoric or simply methodeutic. He applied his pragmatism as a method throughout his work.
Theory of inquiry.
In "" (1877), Peirce gives his take on the psychological origin and aim of inquiry. On his view, individuals are motivated to inquiry by desire to escape the feelings of anxiety and unease which Peirce takes to be characteristic of the state of doubt. Doubt is described by Peirce as an "uneasy and dissatisfied state from which we struggle to free ourselves and pass into the state of belief." Peirce uses words like "irritation" to describe the experience of being in doubt and to explain why he thinks we find such experiences to be motivating. The irritating feeling of doubt is appeased, Peirce says, through our efforts to achieve a settled state of satisfaction with what we land on as our answer to the question which led to that doubt in the first place. This settled state, namely, belief, is described by Peirce as "a calm and satisfactory state which we do not wish to avoid." Our efforts to achieve the satisfaction of belief, by whichever methods we may pursue, are what Peirce calls "inquiry". Four methods which Peirce describes as having been actually pursued throughout the history of thought are summarized below in the section after next.
Critical common-sensism.
Critical common-sensism, treated by Peirce as a consequence of his pragmatism, is his combination of Thomas Reid's common-sense philosophy with a fallibilism that recognizes that propositions of our more or less vague common sense now indubitable may later come into question, for example because of transformations of our world through science. It includes efforts to raise genuine doubts in tests for a core group of common indubitables that change slowly, if at all.
Rival methods of inquiry.
In "" (1877), Peirce described inquiry in general not as the pursuit of truth "per se" but as the struggle to move from irritating, inhibitory doubt born of surprise, disagreement, and the like, and to reach a secure belief, belief being that on which one is prepared to act. That let Peirce frame scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal, quarrelsome, or hyperbolic doubt, which he held to be fruitless. Peirce sketched four methods of settling opinion, ordered from least to most successful:
Peirce held that, in practical affairs, slow and stumbling ratiocination is often dangerously inferior to instinct and traditional sentiment, and that the scientific method is best suited to theoretical research, which in turn should not be trammeled by the other methods and practical ends; reason's "first rule" is that, in order to learn, one must desire to learn and, as a corollary, must not block the way of inquiry. Scientific method excels over the others finally by being deliberately designed to arrive—eventually—at the most secure beliefs, upon which the most successful practices can be based. Starting from the idea that people seek not truth "per se" but instead to subdue irritating, inhibitory doubt, Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief's integrity, seek as truth the guidance of potential conduct correctly to its given goal, and wed themselves to the scientific method.
Scientific method.
Insofar as clarification by pragmatic reflection suits explanatory hypotheses and fosters predictions and testing, pragmatism points beyond the usual duo of foundational alternatives: deduction from self-evident truths, or "rationalism"; and induction from experiential phenomena, or "empiricism".
Based on his critique of three modes of argument and different from either foundationalism or coherentism, Peirce's approach seeks to justify claims by a three-phase dynamic of inquiry: abductive genesis of an explanatory hypothesis, deduction of its testable consequences, and inductive testing and evaluation.
Thereby, Peirce devised an approach to inquiry far more solid than the flatter image of inductive generalization "simpliciter", which is a mere re-labeling of phenomenological patterns. Peirce's pragmatism was the first time the scientific method was proposed as an epistemology for philosophical questions.
A theory that succeeds better than its rivals in predicting and controlling our world is said to be nearer the truth. This is an operational notion of truth used by scientists.
Peirce extracted the pragmatic model or theory of inquiry from its raw materials in classical logic and refined it in parallel with the early development of symbolic logic to address problems about the nature of scientific reasoning.
Abduction, deduction, and induction make incomplete sense in isolation from one another but comprise a cycle understandable as a whole insofar as they collaborate toward the common end of inquiry. In the pragmatic way of thinking about conceivable practical implications, every thing has a purpose, and, as possible, its purpose should first be denoted. Abduction hypothesizes an explanation for deduction to clarify into implications to be tested so that induction can evaluate the hypothesis, in the struggle to move from troublesome uncertainty to more secure belief. No matter how traditional and needful it is to study the modes of inference in abstraction from one another, the integrity of inquiry strongly limits the effective modularity of its principal components.
Peirce's outline of the scientific method in §III–IV of "A Neglected Argument" is summarized below (except as otherwise noted). There he also reviewed plausibility and inductive precision (issues of critique of arguments).
Deductive phase (deducing consequences of the explanatory hypothesis conjectured in the abductive phase):
i. Explication. Not clearly premised, but a deductive analysis of the hypothesis so as to render its parts as clear as possible.
ii. Demonstration: Deductive Argumentation, Euclidean in procedure. Explicit deduction of consequences of the hypothesis as predictions about evidence to be found. Corollarial or, if needed, Theorematic.
Inductive phase (evaluating the hypothesis by testing its predictions):
i. Classification. Not clearly premised, but an inductive classing of objects of experience under general ideas.
ii. Probation: direct Inductive Argumentation. Crude or Gradual in procedure. Crude Induction, founded on experience in one mass (CP 2.759), presumes that future experience on a question will not differ utterly from all past experience (CP 2.756). Gradual Induction makes a new estimate of the proportion of truth in the hypothesis after each test, and is Qualitative or Quantitative. Qualitative Gradual Induction depends on estimating the relative evident weights of the various qualities of the subject class under investigation (CP 2.759; see also "Collected Papers of Charles Sanders Peirce", 7.114–120). Quantitative Gradual Induction depends on how often, in a fair sample of instances of "S", "S" is found actually accompanied by "P" that was predicted for "S" (CP 2.758). It depends on measurements, or statistics, or counting (a minimal numeric sketch follows this outline).
iii. Sentential Induction. "...which, by Inductive reasonings, appraises the different Probations singly, then their combinations, then makes self-appraisal of these very appraisals themselves, and passes final judgment on the whole result".
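A minimal numeric sketch of the Quantitative Gradual Induction described in item ii above: estimate, from a fair sample, how often instances of S are found accompanied by the predicted character P. The population, its size, and the underlying proportion are invented for illustration.

```python
import random

random.seed(0)

# Invented population of instances of S; True means "this S is P".
population = [random.random() < 0.7 for _ in range(10_000)]

# A fair (random) sample, as quantitative gradual induction requires.
sample = random.sample(population, 200)

# The inductive estimate is the observed proportion, to be revised as
# further instances are examined.
proportion = sum(sample) / len(sample)
print(f"Estimated proportion of S that are P: {proportion:.2f}")
```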
Against Cartesianism.
Peirce drew on the methodological implications of the four incapacities—no genuine introspection, no intuition in the sense of non-inferential cognition, no thought but in signs, and no conception of the absolutely incognizable—to attack philosophical Cartesianism, of which he said that:
Theory of categories.
On May 14, 1867, the 27-year-old Peirce presented a paper entitled "On a New List of Categories" to the American Academy of Arts and Sciences, which published it the following year. The paper outlined a theory of predication, involving three universal categories that Peirce developed in response to reading Aristotle, Immanuel Kant, and G. W. F. Hegel, categories that Peirce applied throughout his work for the rest of his life. Peirce scholars generally regard the "New List" as foundational or breaking the ground for Peirce's "architectonic", his blueprint for a pragmatic philosophy. In the categories one will discern, concentrated, the pattern that one finds formed by the three grades of clearness in "How to Make Our Ideas Clear" (1878 paper foundational to pragmatism), and in numerous other trichotomies in his work.
"On a New List of Categories" is cast as a Kantian deduction; it is short but dense and difficult to summarize. The following table is compiled from that and later works. In 1893, Peirce restated most of it for a less advanced audience.
Logic, or semiotic.
In 1918, the logician C. I. Lewis wrote, "The contributions of C.S. Peirce to symbolic logic are more numerous and varied than those of any other writer—at least in the nineteenth century."
Relational logic.
Beginning with his first paper on the "Logic of Relatives" (1870), Peirce extended the theory of relations pioneered by Augustus De Morgan. Beginning in 1940, Alfred Tarski and his students rediscovered aspects of Peirce's larger vision of relational logic, developing the perspective of relation algebra.
Relational logic gained applications. In mathematics, it influenced the abstract analysis of E. H. Moore and the lattice theory of Garrett Birkhoff. In computer science, the relational model for databases was developed with Peircean ideas in work of Edgar F. Codd, who was a doctoral student of Arthur W. Burks, a Peirce scholar. In economics, relational logic was used by Frank P. Ramsey, John von Neumann, and Paul Samuelson to study preferences and utility and by Kenneth J. Arrow in "Social Choice and Individual Values", following Arrow's association with Tarski at City College of New York.
Quantifiers.
On Peirce and his contemporaries Ernst Schröder and Gottlob Frege, Hilary Putnam (1982) documented that Frege's work on the logic of quantifiers had little influence on his contemporaries, although it was published four years before the work of Peirce and his student Oscar Howard Mitchell. Putnam found that mathematicians and logicians learned about the logic of quantifiers through the independent work of Peirce and Mitchell, particularly through Peirce's "On the Algebra of Logic: A Contribution to the Philosophy of Notation" (1885), published in the premier American mathematical journal of the day, and cited by Peano and Schröder, among others, who ignored Frege. They also adopted and modified Peirce's notations, typographical variants of those now used. Peirce apparently was ignorant of Frege's work, despite their overlapping achievements in logic, philosophy of language, and the foundations of mathematics.
Peirce's work on formal logic had admirers besides Ernst Schröder:
Philosophy of logic.
A philosophy of logic, grounded in his categories and semiotic, can be extracted from Peirce's writings and, along with Peirce's logical work more generally, is exposited and defended in Hilary Putnam (1982); the Introduction in Nathan Houser "et al." (1997); and Randall Dipert's chapter in Cheryl Misak (2004).
Logic as philosophical.
Peirce regarded logic "per se" as a division of philosophy, as a normative science based on esthetics and ethics, as more basic than metaphysics, and as "the art of devising methods of research". More generally, as inference, "logic is rooted in the social principle", since inference depends on a standpoint that, in a sense, is unlimited. Peirce called (with no sense of deprecation) "mathematics of logic" much of the kind of thing which, in current research and applications, is called simply "logic". He was productive in both (philosophical) logic and logic's mathematics, which were connected deeply in his work and thought.
Peirce argued that logic is formal semiotic: the formal study of signs in the broadest sense, not only signs that are artificial, linguistic, or symbolic, but also signs that are semblances or are indexical such as reactions. Peirce held that "all this universe is perfused with signs, if it is not composed exclusively of signs", along with their representational and inferential relations. He argued that, since all thought takes time, all thought is in signs and sign processes ("semiosis") such as the inquiry process. He divided logic into: (1) speculative grammar, or stechiology, on how signs can be meaningful and, in relation to that, what kinds of signs there are, how they combine, and how some embody or incorporate others; (2) logical critic, or logic proper, on the modes of inference; and (3) speculative or universal rhetoric, or methodeutic, the philosophical theory of inquiry, including pragmatism.
Presuppositions of logic.
In his "F.R.L." [First Rule of Logic] (1899), Peirce states that the first, and "in one sense, the sole", rule of reason is that, "to learn, one needs to desire to learn" and desire it without resting satisfied with that which one is inclined to think. So, the first rule is, "to wonder". Peirce proceeds to a critical theme in research practices and the shaping of theories:
...there follows one corollary which itself deserves to be inscribed upon every wall of the city of philosophy:
Do not block the way of inquiry.
Peirce adds, that method and economy are best in research but no outright sin inheres in trying any theory in the sense that the investigation via its trial adoption can proceed unimpeded and undiscouraged, and that "the one unpardonable offence" is a philosophical barricade against truth's advance, an offense to which "metaphysicians in all ages have shown themselves the most addicted". Peirce in many writings holds that logic precedes metaphysics (ontological, religious, and physical).
Peirce goes on to list four common barriers to inquiry: (1) Assertion of absolute certainty; (2) maintaining that something is absolutely unknowable; (3) maintaining that something is absolutely inexplicable because absolutely basic or ultimate; (4) holding that perfect exactitude is possible, especially such as to quite preclude unusual and anomalous phenomena. To refuse absolute theoretical certainty is the heart of "fallibilism", which Peirce unfolds into refusals to set up any of the listed barriers. Peirce elsewhere argues (1897) that logic's presupposition of fallibilism leads at length to the view that chance and continuity are very real (tychism and synechism).
The First Rule of Logic pertains to the mind's presuppositions in undertaking reason and logic; presuppositions, for instance, that truth and the real do not depend on your or my opinion of them but do depend on representational relation and consist in the destined end in investigation taken far enough (see below). He describes such ideas as, collectively, hopes which, in particular cases, one is unable seriously to doubt.
Four incapacities.
The "Journal of Speculative Philosophy" series (1868–1869), including
Peirce argued that those incapacities imply the reality of the general and of the continuous, the validity of the modes of reasoning, and the falsity of philosophical Cartesianism (see below).
Peirce rejected the conception (usually ascribed to Kant) of the unknowable thing-in-itself and later said that to "dismiss make-believes" is a prerequisite for pragmatism.
Logic as formal semiotic.
Peirce sought, through his wide-ranging studies through the decades, formal philosophical ways to articulate thought's processes, and also to explain the workings of science. These inextricably entangled questions of a dynamics of inquiry rooted in nature and nurture led him to develop his semiotic with very broadened conceptions of signs and inference, and, as its culmination, a theory of inquiry for the task of saying 'how science works' and devising research methods. This would be logic by the medieval definition taught for centuries: art of arts, science of sciences, having the way to the principles of all methods. Influences radiate from points on parallel lines of inquiry in Aristotle's work, in such "loci" as: the basic terminology of psychology in "On the Soul"; the founding description of sign relations in "On Interpretation"; and the differentiation of inference into three modes that are commonly translated into English as "abduction", "deduction", and "induction", in the "Prior Analytics", as well as inference by analogy (called "paradeigma" by Aristotle), which Peirce regarded as involving the other three modes.
Peirce began writing on semiotic in the 1860s, around the time when he devised his system of three categories. He called it both "semiotic" and "semeiotic". Both are current in singular and plural. He based it on the conception of a triadic sign relation, and defined "semiosis" as "action, or influence, which is, or involves, a cooperation of "three" subjects, such as a sign, its object, and its interpretant, this tri-relative influence not being in any way resolvable into actions between pairs". As to signs in thought, Peirce emphasized the reverse: "To say, therefore, that thought cannot happen in an instant, but requires a time, is but another way of saying that every thought must be interpreted in another, or that all thought is in signs."
Peirce held that all thought is in signs, issuing in and from interpretation, where "sign" is the word for the broadest variety of conceivable semblances, diagrams, metaphors, symptoms, signals, designations, symbols, texts, even mental concepts and ideas, all as determinations of a mind or "quasi-mind", that which at least functions like a mind, as in the work of crystals or bees—the focus is on sign action in general rather than on psychology, linguistics, or social studies (fields which he also pursued).
Inquiry is a kind of inference process, a manner of thinking and semiosis. Global divisions of ways for phenomena to stand as signs, and the subsumption of inquiry and thinking within inference as a sign process, enable the study of inquiry on semiotics' three levels:
Peirce uses examples often from common experience, but defines and discusses such things as assertion and interpretation in terms of philosophical logic. In a formal vein, Peirce said:
Signs.
Sign relation.
Peirce's theory of signs is known to be one of the most complex semiotic theories due to its generalistic claim. Anything is a sign—not absolutely as itself, but instead in some relation or other. The "sign relation" is the key. It defines three roles encompassing (1) the sign, (2) the sign's subject matter, called its "object", and (3) the sign's meaning or ramification as formed into a kind of effect called its "interpretant" (a further sign, for example a translation). It is an irreducible "triadic relation", according to Peirce. The roles are distinct even when the things that fill those roles are not. The roles are but three; a sign of an object leads to one or more interpretants, and, as signs, they lead to further interpretants.
"Extension × intension = information." Two traditional approaches to sign relation, necessary though insufficient, are the way of "extension" (a sign's objects, also called breadth, denotation, or application) and the way of "intension" (the objects' characteristics, qualities, attributes referenced by the sign, also called depth, comprehension, significance, or connotation). Peirce adds a third, the way of "information", including change of information, to integrate the other two approaches into a unified whole. For example, because of the equation above, if a term's total amount of information stays the same, then the more that the term 'intends' or signifies about objects, the fewer are the objects to which the term 'extends' or applies.
"Determination." A sign depends on its object in such a way as to represent its object—the object enables and, in a sense, determines the sign. A physically causal sense of this stands out when a sign consists in an indicative reaction. The interpretant depends likewise on both the sign and the object—an object determines a sign to determine an interpretant. But this determination is not a succession of dyadic events, like a row of toppling dominoes; sign determination is triadic. For example, an interpretant does not merely represent something which represented an object; instead an interpretant represents something "as" a sign representing the object. The object (be it a quality or fact or law or even fictional) determines the sign to an interpretant through one's collateral experience with the object, in which the object is found or from which it is recalled, as when a sign consists in a chance semblance of an absent object. Peirce used the word "determine" not in a strictly deterministic sense, but in a sense of "specializes", "bestimmt", involving variable amount, like an influence. Peirce came to define representation and interpretation in terms of (triadic) determination. The object determines the sign to determine another sign—the interpretant—to be related to the object "as the sign is related to the object", hence the interpretant, fulfilling its function as sign of the object, determines a further interpretant sign. The process is logically structured to perpetuate itself, and is definitive of sign, object, and interpretant in general.
Semiotic elements.
Peirce held there are exactly three basic elements in semiosis (sign action): the sign, its object, and its interpretant.
Some of the understanding needed by the mind depends on familiarity with the object. To know what a given sign denotes, the mind needs some experience of that sign's object, experience outside of, and collateral to, that sign or sign system. In that context Peirce speaks of collateral experience, collateral observation, collateral acquaintance, all in much the same terms.
Classes of signs.
Among Peirce's many sign typologies, three stand out, interlocked. The first typology depends on the sign itself, the second on how the sign stands for its denoted object, and the third on how the sign stands for its object to its interpretant. Also, each of the three typologies is a three-way division, a trichotomy, via Peirce's three phenomenological categories: (1) quality of feeling, (2) reaction, resistance, and (3) representation, mediation.
I. "Qualisign, sinsign, legisign" (also called" tone, token, type," and also called "potisign, actisign, famisign"): This typology classifies every sign according to the sign's own phenomenological category—the qualisign is a quality, a possibility, a "First"; the sinsign is a reaction or resistance, a singular object, an actual event or fact, a "Second"; and the legisign is a habit, a rule, a representational relation, a "Third".
II. "Icon, index, symbol": This typology, the best known one, classifies every sign according to the category of the sign's way of denoting its object—the icon (also called semblance or likeness) by a quality of its own, the index by factual connection to its object, and the symbol by a habit or rule for its interpretant.
III. "Rheme, dicisign, argument" (also called "sumisign, dicisign, suadisign," also "seme, pheme, delome," and regarded as very broadened versions of the traditional "term, proposition, argument"): This typology classifies every sign according to the category which the interpretant attributes to the sign's way of denoting its object—the rheme, for example a term, is a sign interpreted to represent its object in respect of quality; the dicisign, for example a proposition, is a sign interpreted to represent its object in respect of fact; and the argument is a sign interpreted to represent its object in respect of habit or law. This is the culminating typology of the three, where the sign is understood as a structural element of inference.
Every sign belongs to one class or another within (I) "and" within (II) "and" within (III). Thus each of the three typologies is a three-valued parameter for every sign. The three parameters are not independent of each other; many co-classifications are absent, for reasons pertaining to the lack of either habit-taking or singular reaction in a quality, and the lack of habit-taking in a singular reaction. The result is not 27 but instead ten classes of signs fully specified at this level of analysis.
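The count of ten can be checked by a short enumeration. The sketch below assumes the standard reading of the constraint described above: the category of the sign's way of denoting its object cannot outrank the category of the sign itself, and the category attributed by the interpretant cannot outrank that of the way of denoting.

```python
from itertools import product

# Category values: 1 = quality, 2 = reaction, 3 = representation.
TYPOLOGY_I   = {1: "qualisign", 2: "sinsign",  3: "legisign"}   # the sign itself
TYPOLOGY_II  = {1: "icon",      2: "index",    3: "symbol"}     # relation to object
TYPOLOGY_III = {1: "rheme",     2: "dicisign", 3: "argument"}   # relation to interpretant

# Co-classification constraint (assumed reading): i >= j >= k.
classes = [
    (TYPOLOGY_III[k], TYPOLOGY_II[j], TYPOLOGY_I[i])
    for i, j, k in product((1, 2, 3), repeat=3)
    if i >= j >= k
]

for c in classes:
    print(" ".join(c))            # e.g. "rheme icon qualisign"
print(len(classes), "classes")    # prints 10, not 27
```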
Modes of inference.
Borrowing a brace of concepts from Aristotle, Peirce examined three basic modes of inference—"abduction", "deduction", and "induction"—in his "critique of arguments" or "logic proper". Peirce also called abduction "retroduction", "presumption", and, earliest of all, "hypothesis". He characterized it as guessing and as inference to an explanatory hypothesis. He sometimes expounded the modes of inference by transformations of the categorical syllogism Barbara (AAA), for example in "Deduction, Induction, and Hypothesis" (1878). He does this by rearranging the "rule" (Barbara's major premise), the "case" (Barbara's minor premise), and the "result" (Barbara's conclusion):
Deduction.
"Rule:" All the beans from this bag are white. <br>
"Case:" These beans are beans from this bag. <br>
formula_1 "Result:" These beans are white.
Induction.
"Case:" These beans are [randomly selected] from this bag.<br>
"Result:" These beans are white.<br>
formula_1 "Rule:" All the beans from this bag are white.
Hypothesis (Abduction).
"Rule:" All the beans from this bag are white.<br>
"Result:" These beans [oddly] are white.<br>
formula_1 "Case:" These beans are from this bag.
In 1883, in "A Theory of Probable Inference" ("Studies in Logic"), Peirce equated hypothetical inference with the induction of characters of objects (as he had done in effect before). Eventually dissatisfied, by 1900 he distinguished them once and for all and also wrote that he now took the syllogistic forms and the doctrine of logical extension and comprehension as being less basic than he had thought. In 1903 he presented the following logical form for abductive inference:
The logical form does not also cover induction, since induction neither depends on surprise nor proposes a new idea for its conclusion. Induction seeks facts to test a hypothesis; abduction seeks a hypothesis to account for facts. "Deduction proves that something "must" be; Induction shows that something "actually is" operative; Abduction merely suggests that something "may be"." Peirce did not remain quite convinced that one logical form covers all abduction. In his methodeutic or theory of inquiry (see below), he portrayed abduction as an economic initiative to further inference and study, and portrayed all three modes as clarified by their coordination in essential roles in inquiry: hypothetical explanation, deductive prediction, inductive testing.
Metaphysics.
Peirce divided metaphysics into (1) ontology or general metaphysics, (2) psychical or religious metaphysics, and (3) physical metaphysics.
Ontology.
On the issue of universals, Peirce was a scholastic realist, declaring the reality of generals as early as 1868. According to Peirce, his "thirdness", the more general facts about the world, are extra-mental realities. Regarding modalities (possibility, necessity, etc.), he came in later years to regard himself as having wavered earlier as to just how positively real the modalities are. In his 1897 "The Logic of Relatives" he wrote:
Peirce retained, as useful for some purposes, the definitions in terms of information states, but insisted that the pragmaticist is committed to a strong modal realism by conceiving of objects in terms of predictive general conditional propositions about how they "would" behave under certain circumstances.
Continua.
Continuity and synechism are central in Peirce's philosophy: "I did not at first suppose that it was, as I gradually came to find it, the master-Key of philosophy".
From a mathematical point of view, he embraced infinitesimals and worked long on the mathematics of continua. He long held that the real numbers constitute a pseudo-continuum; that a true continuum is the real subject matter of "analysis situs" (topology); and that a true continuum of instants exceeds—and within any lapse of time has room for—any Aleph number (any infinite "multitude" as he called it) of instants.
In 1908 Peirce wrote that he found that a true continuum might have or lack such room. Jérôme Havenel (2008): "It is on 26 May 1908, that Peirce finally gave up his idea that in every continuum there is room for whatever collection of any multitude. From now on, there are different kinds of continua, which have different properties."
Psychical or religious metaphysics.
Peirce believed in God, and characterized such belief as founded in an instinct explorable in musing over the worlds of ideas, brute facts, and evolving habits—and it is a belief in God not as an "actual" or "existent" being (in Peirce's sense of those words), but all the same as a "real" being. In "A Neglected Argument for the Reality of God" (1908), Peirce sketches, for God's reality, an argument to a hypothesis of God as the Necessary Being, a hypothesis which he describes in terms of how it would tend to develop and become compelling in musement and inquiry by a normal person who is led, by the hypothesis, to consider as being purposed the features of the worlds of ideas, brute facts, and evolving habits (for example scientific progress), such that the thought of such purposefulness will "stand or fall with the hypothesis"; meanwhile, according to Peirce, the hypothesis, in supposing an "infinitely incomprehensible" being, starts off at odds with its own nature as a purportively true conception, and so, no matter how much the hypothesis grows, it both (A) inevitably regards itself as partly true, partly vague, and as continuing to define itself without limit, and (B) inevitably has God appearing likewise vague but growing, though God as the Necessary Being is not vague or growing; but the hypothesis will hold it to be "more" false to say the opposite, that God is purposeless. Peirce also argued that the will is free and (see Synechism) that there is at least an attenuated kind of immortality.
Physical metaphysics.
Peirce held the view, which he called objective idealism, that "matter is effete mind, inveterate habits becoming physical laws". Peirce observed that "Berkeley's metaphysical theories have at first sight an air of paradox and levity very unbecoming to a bishop".
Peirce asserted the reality of (1) "absolute chance" or randomness (his tychist view), (2) "mechanical necessity" or physical laws (anancist view), and (3) what he called the "law of love" (agapist view), echoing his categories Firstness, Secondness, and Thirdness, respectively. He held that fortuitous variation (which he also called "sporting"), mechanical necessity, and creative love are the three modes of evolution (modes called "tychasm", "anancasm", and "agapasm") of the cosmos and its parts. He found his conception of agapasm embodied in Lamarckian evolution; the overall idea in any case is that of evolution tending toward an end or goal, and it could also be the evolution of a mind or a society; it is the kind of evolution which manifests workings of mind in some general sense. He said that overall he was a synechist, holding with reality of continuity, especially of space, time, and law.
Philosophy of science.
Peirce outlined two fields, "Cenoscopy" and "Science of Review", both of which he called philosophy. Both included philosophy about science. In 1903 he arranged them, from more to less theoretically basic, thus:
Peirce placed, within Science of Review, the work and theory of classifying the sciences (including mathematics and philosophy). His classifications, on which he worked for many years, draw on argument and wide knowledge, and are of interest both as a map for navigating his philosophy and as an accomplished polymath's survey of research in his time.
|
6118
|
20445980
|
https://en.wikipedia.org/wiki?curid=6118
|
Carnot heat engine
|
A Carnot heat engine is a theoretical heat engine that operates on the Carnot cycle. The basic model for this engine was developed by Nicolas Léonard Sadi Carnot in 1824. The Carnot engine model was graphically expanded by Benoît Paul Émile Clapeyron in 1834 and mathematically explored by Rudolf Clausius in 1857, work that led to the fundamental thermodynamic concept of entropy. The Carnot engine is the most efficient heat engine which is theoretically possible. The efficiency depends only upon the absolute temperatures of the hot and cold heat reservoirs between which it operates.
A heat engine acts by transferring energy from a warm region to a cool region of space and, in the process, converting some of that energy to mechanical work. The cycle may also be reversed. The system may be worked upon by an external force, and in the process, it can transfer thermal energy from a cooler system to a warmer one, thereby acting as a refrigerator or heat pump rather than a heat engine.
Every thermodynamic system exists in a particular state. A thermodynamic cycle occurs when a system is taken through a series of different states, and finally returned to its initial state. In the process of going through this cycle, the system may perform work on its surroundings, thereby acting as a heat engine.
The Carnot engine is a theoretical construct, useful for exploring the efficiency limits of other heat engines. An actual Carnot engine, however, would be completely impractical to build.
Carnot's diagram.
In the adjacent diagram, from Carnot's 1824 work, "Reflections on the Motive Power of Fire", there are "two bodies "A" and "B", kept each at a constant temperature, that of "A" being higher than that of "B". These two bodies to which we can give, or from which we can remove the heat without causing their temperatures to vary, exercise the functions of two unlimited reservoirs of caloric. We will call the first the furnace and the second the refrigerator." Carnot then explains how we can obtain motive power, i.e., "work", by carrying a certain quantity of heat from body "A" to body "B".
Run in reverse, the cycle also acts as a cooler, and hence the device can act as a refrigerator.
Modern diagram.
The previous image shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engine. The figure at right shows a block diagram of a generic heat engine, such as the Carnot engine. In the diagram, the "working body" (system), a term introduced by Clausius in 1850, can be any fluid or vapor body through which heat "Q" can be introduced or transmitted to produce work. Carnot had postulated that the fluid body could be any substance capable of expansion, such as vapor of water, vapor of alcohol, vapor of mercury, a permanent gas, air, etc. Although in those early years, engines came in a number of configurations, typically "Q"H was supplied by a boiler, wherein water was boiled over a furnace; "Q"C was typically removed by a stream of cold flowing water in the form of a condenser located on a separate part of the engine. The output work, "W", is transmitted by the movement of the piston as it is used to turn a crank-arm, which in turn was typically used to power a pulley so as to lift water out of flooded salt mines. Carnot defined work as "weight lifted through a height".
Carnot cycle.
The Carnot cycle when acting as a heat engine consists of the following steps (a worked numeric sketch follows the list):
1. Reversible isothermal expansion of the gas at the hot temperature T_H, during which the engine absorbs heat Q_H from the hot reservoir.
2. Reversible adiabatic (isentropic) expansion, during which the gas does work and its temperature falls to T_C.
3. Reversible isothermal compression at the cold temperature T_C, during which the engine rejects heat |Q_C| to the cold reservoir.
4. Reversible adiabatic (isentropic) compression, during which the temperature rises back to T_H and the gas returns to its initial state.
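The following sketch works through one such cycle for an ideal gas; the mole count, temperatures, and volumes are illustrative assumptions, not values from the text.

```python
import math

R = 8.314                       # J/(mol*K), ideal gas constant
n = 1.0                         # mol of working gas (assumed)
T_hot, T_cold = 500.0, 300.0    # reservoir temperatures in kelvin (assumed)
V1, V2 = 0.010, 0.020           # volumes bounding the hot isothermal expansion (assumed)

# Step 1: isothermal expansion at T_hot; heat Q_hot is absorbed.
Q_hot = n * R * T_hot * math.log(V2 / V1)

# Steps 2 and 4 (the adiabats) exchange no heat and force the cold isothermal
# step to have the same volume ratio, so:
# Step 3: isothermal compression at T_cold; heat Q_cold is rejected.
Q_cold = n * R * T_cold * math.log(V2 / V1)

W = Q_hot - Q_cold              # net work per cycle (the adiabatic works cancel)
print(f"W = {W:.0f} J per cycle")
print(f"efficiency W/Q_hot  = {W / Q_hot:.3f}")
print(f"1 - T_cold/T_hot    = {1 - T_cold / T_hot:.3f}")   # same value
```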
Carnot's theorem.
Carnot's theorem is a formal statement of this fact: "No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between the same reservoirs."
η_max = W/Q_H = 1 − T_C/T_H
Explanation.
This maximum efficiency η_max is defined as above, where W is the work done by the system (the engine) per cycle, Q_H is the heat put into the system (absorbed from the hot reservoir), T_C is the absolute temperature of the cold reservoir, and T_H is the absolute temperature of the hot reservoir.
A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient.
It is easily shown that the efficiency is maximum when the entire cyclic process is a reversible process. This means the total entropy of system and surroundings (the entropies of the hot furnace, the "working fluid" of the heat engine, and the cold sink) remains constant when the "working fluid" completes one cycle and returns to its original state. (In the general and more realistic case of an irreversible process, the total entropy of this combined system would increase.)
Since the "working fluid" comes back to the same state after one cycle, and entropy of the system is a state function, the change in entropy of the "working fluid" system is 0. Thus, it implies that the total entropy change of the furnace and sink is zero, for the process to be reversible and the efficiency of the engine to be maximum. This derivation is carried out in the next section.
The coefficient of performance (COP) of the heat engine is the reciprocal of its efficiency.
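A minimal numeric sketch of the formula above; the reservoir temperatures are illustrative assumptions, not values from the text.

```python
def carnot_efficiency(t_hot_kelvin: float, t_cold_kelvin: float) -> float:
    """Maximum possible efficiency of a heat engine between two reservoirs."""
    return 1.0 - t_cold_kelvin / t_hot_kelvin

eta = carnot_efficiency(673.0, 293.0)      # assumed temperatures
print(f"Carnot efficiency: {eta:.2%}")     # about 56%
print(f"Reciprocal (the COP noted above): {1.0 / eta:.2f}")
```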
Efficiency of real heat engines.
For a real heat engine, the total thermodynamic process is generally irreversible. The working fluid is brought back to its initial state after one cycle, and thus the change of entropy of the fluid system is 0, but the sum of the entropy changes in the hot and cold reservoir in this one cyclical process is greater than 0.
The internal energy of the fluid is also a state variable, so its total change in one cycle is 0. So the total work done by the system, W, is equal to the net heat put into the system, the sum of the heat Q_H > 0 taken up and the waste heat Q_C < 0 given off:
W = Q_H + Q_C.
For real engines, stages 1 and 3 of the Carnot cycle, in which heat is absorbed by the "working fluid" from the hot reservoir, and released by it to the cold reservoir, respectively, no longer remain ideally reversible, and there is a temperature differential between the temperature of the reservoir and the temperature of the fluid while heat exchange takes place.
During heat transfer from the hot reservoir at T_H to the fluid, the fluid would have a slightly lower temperature than T_H, and the process for the fluid may not necessarily remain isothermal.
Let ΔS_H be the total entropy change of the fluid in the process of intake of heat,
ΔS_H = ∫ dQ/T,
where T, the temperature of the fluid, is always slightly less than T_H in this process. So one would get
ΔS_H ≥ Q_H/T_H.
Similarly, at the time of heat rejection from the fluid to the cold reservoir one would have, for the total entropy change ΔS_C < 0 of the fluid in the process of expelling heat (recall Q_C < 0),
ΔS_C ≥ Q_C/T_C,
where, during this process of transfer of heat to the cold reservoir, the temperature of the fluid is always slightly greater than T_C.
Since entropy is a state function and the fluid returns to its initial state, the total change of entropy of the fluid over the cyclic process is 0, so we must have
ΔS_H + ΔS_C = 0.
Substituting the two inequalities into this equation gives
Q_H/T_H ≤ ΔS_H = −ΔS_C ≤ −Q_C/T_C,
that is,
Q_H/T_H ≤ −Q_C/T_C.
For the reversible Carnot engine itself, the bound is attained and the two inequalities become equalities. To see this, the two adiabatic processes are needed: they are isentropic, and for an ideal gas working fluid they force the volume ratios of the two isothermal processes to be equal, so that Q_H/T_H = |Q_C|/T_C exactly. Moreover, since the two adiabatic processes exchange no heat and traverse the same temperature interval in opposite directions, their works are equal and opposite to each other, one being work done by the system and the other work done on the system; the thermal efficiency therefore concerns only the work obtained per unit of heat absorbed by the system. Therefore, for the Carnot engine (writing |Q_C| = −Q_C for the magnitude of the rejected heat),
W/Q_H = (Q_H − |Q_C|)/Q_H
= 1 − |Q_C|/Q_H
= 1 − T_C/T_H.
For a real engine, on the other hand, only the inequality
Q_H/T_H ≤ −Q_C/T_C (here Q_C < 0, since heat is released)
is available. Since Q_H, T_C and T_H are positive, this rearranges to
Q_C/Q_H ≤ −T_C/T_H
1 + Q_C/Q_H ≤ 1 − T_C/T_H
(Q_H + Q_C)/Q_H ≤ 1 − T_C/T_H
W/Q_H ≤ 1 − T_C/T_H.
Hence,
η ≤ η_Carnot,
where η = W/Q_H is the efficiency of the real engine, and η_Carnot = 1 − T_C/T_H is the efficiency of the Carnot engine working between the same two reservoirs at the temperatures T_H and T_C. For the Carnot engine, the entire process is "reversible", and the inequality becomes an equality. Hence, the efficiency of the real engine is always less than that of the ideal Carnot engine.
This inequality signifies that the total entropy of system and surroundings (the fluid and the two reservoirs) increases for the real engine, because (in a surroundings-based analysis) the entropy gain of the cold reservoir as |Q_C| flows into it at the fixed temperature T_C is greater than the entropy loss of the hot reservoir as Q_H leaves it at its fixed temperature T_H. The inequality is essentially the statement of the Clausius theorem.
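As a numeric check of the inequality just derived, the sketch below takes made-up heats and reservoir temperatures for an irreversible engine (using the sign convention above, with Q_C < 0) and verifies that the reservoirs' total entropy change is non-negative and that the efficiency stays below the Carnot bound.

```python
# All numeric values are assumed for illustration (kelvins and joules).
T_hot, T_cold = 600.0, 300.0
Q_hot = 1000.0      # heat taken from the hot reservoir (> 0)
Q_cold = -650.0     # heat given off to the cold reservoir (< 0)

W = Q_hot + Q_cold                      # energy balance over one cycle
eta_real = W / Q_hot
eta_carnot = 1.0 - T_cold / T_hot

# Surroundings-based entropy balance: the hot reservoir loses entropy,
# the cold reservoir gains it.
dS_reservoirs = (-Q_hot / T_hot) + (-Q_cold / T_cold)

print(f"eta_real = {eta_real:.3f}, eta_carnot = {eta_carnot:.3f}")
print(f"total entropy change of the reservoirs = {dS_reservoirs:+.3f} J/K")
assert eta_real <= eta_carnot and dS_reservoirs >= 0.0
```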
According to the second theorem, "The efficiency of the Carnot engine is independent of the nature of the working substance".
The Carnot engine and Rudolf Diesel.
In 1892 Rudolf Diesel patented an internal combustion engine inspired by the Carnot engine. Diesel knew a Carnot engine is an ideal that cannot be built, but he thought he had invented a working approximation. His principle was unsound, but in his struggle to implement it he developed a practical Diesel engine.
The conceptual problem was how to achieve isothermal expansion in an internal combustion engine, since burning fuel at the highest temperature of the cycle would only raise the temperature further. Diesel's patented solution was: having achieved the highest temperature just by compressing the air, to add a small amount of fuel at a controlled rate, such that heating caused by burning the fuel would be counteracted by cooling caused by air expansion as the piston moved. Hence all the heat from the fuel would be transformed into work during the isothermal expansion, as required by Carnot's theorem.
For the idea to work a small mass of fuel would have to be burnt in a huge mass of air. Diesel first proposed a working engine that would compress air to 250 atmospheres at , then cycle to one atmosphere at . However, this was well beyond the technological capabilities of the day, since it implied a compression ratio of 60:1. Such an engine, if it could have been built, would have had an efficiency of 73%. (In contrast, the best steam engines of his day achieved 7%.)
Accordingly, Diesel sought to compromise. He calculated that, were he to reduce the peak pressure to a less ambitious 90 atmospheres, he would sacrifice only 5% of the thermal efficiency. Seeking financial support, he published the "Theory and Construction of a Rational Heat Engine to Take the Place of the Steam Engine and All Presently Known Combustion Engines" (1893). Endorsed by scientific opinion, including Lord Kelvin, he won the backing of Krupp and . He clung to the Carnot cycle as a symbol. But years of practical work failed to achieve an isothermal combustion engine, nor could have done, since it requires such an enormous quantity of air that it cannot develop enough power to compress it. Furthermore, controlled fuel injection turned out to be no easy matter.
Even so, the Diesel engine slowly evolved over 25 years to become a practical high-compression air engine, its fuel injected near the end of the compression stroke and ignited by the heat of compression, capable by 1969 of 40% efficiency.
As a macroscopic construct.
The Carnot heat engine is, ultimately, a theoretical construct based on an "idealized" thermodynamic system. On a practical human-scale level the Carnot cycle has proven a valuable model, as in advancing the development of the diesel engine. However, on a macroscopic scale limitations placed by the model's assumptions prove it impractical, and, ultimately, incapable of doing any work. As such, per Carnot's theorem, the Carnot engine may be thought as the theoretical limit of macroscopic scale heat engines rather than any practical device that could ever be built.
For example, for the isothermal expansion part of the Carnot cycle, several "infinitesimal" conditions must be satisfied simultaneously at every step in the expansion: for instance, the temperature of the working gas may differ only infinitesimally from that of the hot reservoir, so that the heat transfer remains reversible.
Such "infinitesimal" requirements as these (and others) cause the Carnot cycle to take an "infinite amount of time", rendering the production of work impossible.
Other practical requirements that make the Carnot cycle impractical to realize include fine control of the gas, and perfect thermal contact with the surroundings (including high and low temperature reservoirs).
|
6119
|
35279836
|
https://en.wikipedia.org/wiki?curid=6119
|
Context-sensitive
|
Context-sensitive is an adjective meaning "depending on context" or "depending on circumstances". It may refer to:
|
6121
|
19404073
|
https://en.wikipedia.org/wiki?curid=6121
|
Central America
|
Central America is a subregion of North America. Its political boundaries are defined as bordering Mexico to the north, Colombia to the southeast, the Caribbean to the east, and the Pacific Ocean to the southwest. Central America is usually defined as consisting of seven countries: Belize, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and Panama. Within Central America is the Mesoamerican biodiversity hotspot, which extends from southern Mexico to southeastern Panama. Due to the presence of several active geologic faults and the Central America Volcanic Arc, there is a high amount of seismic activity in the region, such as volcanic eruptions and earthquakes, which has resulted in death, injury, and property damage.
Most of Central America falls under the Isthmo-Colombian cultural area. Before the Spanish expedition of Christopher Columbus' voyages to the Americas, hundreds of indigenous peoples made their homes in the area. From the year 1502 onwards, Spain began their colonization. From 1609 to 1821, the majority of Central American territories (except for what would become Belize and Panama and including the modern Mexican state of Chiapas) were governed by the viceroyalty of New Spain from Mexico City as the Captaincy General of Guatemala. On 24 August 1821, Spanish Viceroy Juan de O'Donojú signed the Treaty of Córdoba, which established New Spain's independence and autonomy from mainland Spain. On 15 September, the Act of Independence of Central America was enacted to announce Central America's separation from the Spanish Empire. Some of New Spain's provinces in the Central American region were invaded and annexed to the First Mexican Empire; however in 1823 they seceded from Mexico to form the Federal Republic of Central America until 1838.
In 1838, Costa Rica, Guatemala, Honduras, and Nicaragua became the first of Central America's seven states to become independent countries, followed by El Salvador in 1841, Panama in 1903, and Belize in 1981. Despite the dissolution of the Federal Republic of Central America, the five remaining countries, save for Panama and Belize, all preserved and maintained a Central American identity.
The Spanish-speaking countries officially include both North America and South America as a single continent, América, which is split into four subregions: Central America, The Caribbean (a.k.a. the West Indies), North America (Mexico and Northern America), and South America.
Definitions.
"Central America" may mean different things to various people, based upon different contexts:
History.
Central America took its present form more than 3 million years ago, when the Isthmus of Panama rose above sea level, connecting North and South America and separating the waters on either side of the isthmus.
In the Pre-Columbian era, the northern areas of Central America were inhabited by the indigenous peoples of Mesoamerica. Most notable among these were the Mayans, who had built numerous cities throughout the region, and the Aztecs, who had created a vast empire. The pre-Columbian cultures of eastern Honduras, Caribbean Nicaragua, most of Costa Rica and Panama were predominantly speakers of the Chibchan languages at the time of European contact and are considered by some culturally different and grouped in the Isthmo-Colombian Area.
Following the Spanish expedition of Christopher Columbus's voyages to the Americas, the Spanish sent many expeditions to the region, and they began their conquest of Maya territory in 1523. Soon after the conquest of the Aztec Empire, Spanish conquistador Pedro de Alvarado commenced the conquest of northern Central America for the Spanish Empire. Beginning with his arrival in Soconusco in 1523, Alvarado's forces systematically conquered and subjugated most of the major Maya kingdoms, including the K'iche', Tz'utujil, Pipil, and the Kaqchikel. By 1528, the conquest of Guatemala was nearly complete, with only the Petén Basin remaining outside the Spanish sphere of influence. The last independent Maya kingdoms – the Kowoj and the Itza people – were finally defeated in 1697, as part of the Spanish conquest of Petén.
In 1538, Spain established the Real Audiencia of Panama, which had jurisdiction over all land from the Strait of Magellan to the Gulf of Fonseca. This entity was dissolved in 1543, and most of the territory within Central America then fell under the jurisdiction of the "Audiencia Real de Guatemala". This area included the current territories of Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Mexican state of Chiapas, but excluded the lands that would become Belize and Panama. The president of the Audiencia, which had its seat in Antigua Guatemala, was the governor of the entire area. In 1609 the area became a captaincy general and the governor was also granted the title of captain general. The Captaincy General of Guatemala encompassed most of Central America, with the exception of present-day Belize and Panama.
The Captaincy General of Guatemala lasted for more than two centuries, but began to fray after a rebellion in 1811 which began in the Intendancy of San Salvador. The Captaincy General formally ended on 15 September 1821, with the signing of the Act of Independence of Central America. Mexican independence was achieved at virtually the same time with the signing of the Treaty of Córdoba and the Declaration of Independence of the Mexican Empire, and the entire region was finally independent from Spanish authority by 28 September 1821.
Slavery in Central America was a key component of the colonial economies established by Spain from the early 16th century. While Indigenous peoples were the initial targets of forced labor systems such as the encomienda, the catastrophic population decline caused by disease and exploitation led to the increasing importation of enslaved Africans. The transatlantic slave trade brought hundreds of thousands of Africans to the region, particularly to present-day Honduras, Nicaragua, Guatemala, Panama, and Costa Rica, to labor in mining, agriculture, and domestic service.
African slavery in Central America was concentrated in port cities, mining regions, and plantation zones. Panama, with its strategic location as a transit point between the Atlantic and Pacific, became an early hub for African slave importation as early as the 1510s. Enslaved people were used to build infrastructure, carry goods across the isthmus, and work in emerging urban centers. In Honduras, enslaved Africans were brought to support mining operations in Olancho and agriculture along the northern coast, where they mixed with Indigenous and later Garífuna populations (a people of mixed African and Indigenous descent). Guatemala also had a significant enslaved population in its early colonial history, particularly in the sugar-producing areas of Escuintla.
African slavery in Central America left enduring cultural, demographic, and social legacies. By the 18th century, the importation of African slaves had declined, and free Afro-descendant populations grew through manumission, escape (maroon communities), and intermarriage. Slavery was gradually abolished in the 19th century following independence from Spain. Guatemala formally ended slavery in 1824, Costa Rica in 1824, El Salvador in 1825, Honduras in 1824, and Nicaragua in 1824. However, forms of coerced Indigenous labor persisted well beyond formal abolition.
Modern Afro-descendant communities across Central America include Afro-Costa Ricans, Afro-Nicaraguans, Afro-Hondurans, Afro-Panamanians, and Afro-Guatemalans. They are part of the legacy of this complex history of enslavement, resistance, and cultural resilience.
By some estimates, approximately 1.3 million enslaved Africans were taken to Spanish Central America.
From its independence from Spain in 1821 until 1823, the former Captaincy General remained intact as part of the short-lived First Mexican Empire. When the Emperor of Mexico abdicated on 19 March 1823, Central America again became independent. On 1 July 1823, the Congress of Central America peacefully seceded from Mexico and declared absolute independence from all foreign nations, and the region formed the Federal Republic of Central America.
The Federal Republic of Central America, initially known as the United Provinces of Central America, was a sovereign state that existed from 1823 to 1840. It was composed of five states: Guatemala, Honduras, El Salvador, Nicaragua, and Costa Rica. The federation was established after these regions declared independence from Spain in 1821 and briefly joined the Mexican Empire before breaking away to form their own union. The republic adopted a constitution in 1824, which was inspired by the federal system of the United States. It provided for a federal capital, initially located in Guatemala City, and a president for each of the five constituent states. The constitution abolished slavery and maintained the privileges of the Roman Catholic Church, while restricting suffrage to the upper classes.
The territory that now makes up Belize was heavily contested in a dispute that continued for decades after Guatemala achieved independence. Spain, and later Guatemala, considered this land a Guatemalan department. In 1862, Britain formally declared it a British colony and named it British Honduras. It became independent as Belize in 1981.
Panama, situated in the southernmost part of Central America on the Isthmus of Panama, has for most of its history been culturally and politically linked to South America. Panama was part of the Province of Tierra Firme from 1510 until 1538 when it came under the jurisdiction of the newly formed "Audiencia Real de Panama". Beginning in 1543, Panama was administered as part of the Viceroyalty of Peru, along with all other Spanish possessions in South America. Panama remained as part of the Viceroyalty of Peru until 1739, when it was transferred to the Viceroyalty of New Granada, the capital of which was located at Santa Fé de Bogotá. Panama remained as part of the Viceroyalty of New Granada until the disestablishment of that viceroyalty in 1819. A series of military and political struggles took place from that time until 1822, the result of which produced the republic of Gran Colombia. After the dissolution of Gran Colombia in 1830, Panama became part of a successor state, the Republic of New Granada. From 1855 until 1886, Panama existed as Panama State, first within the Republic of New Granada, then within the Granadine Confederation, and finally within the United States of Colombia. The United States of Colombia was replaced by the Republic of Colombia in 1886. As part of the Republic of Colombia, Panama State was abolished and it became the Isthmus Department. Despite the many political reorganizations, Colombia was still deeply plagued by conflict, which eventually led to the secession of Panama on 3 November 1903. Only after that time did some begin to regard Panama as a North or Central American entity.
By the 1930s the United Fruit Company owned vast tracts of land in Central America and the Caribbean and was the single largest landowner in Guatemala. Such holdings gave it great power over the governments of small countries. That was one of the factors that led to the coining of the phrase banana republic.
After more than two hundred years of social unrest, violent conflict, and revolution, Central America today remains in a period of political transformation. Poverty, social injustice, and violence are still widespread. Nicaragua is the second poorest country in the western hemisphere, after Haiti.
Geography.
Central America is a part of North America consisting of a tapering isthmus running from the southern extent of Mexico to the northwestern portion of South America. Central America has the Gulf of Mexico, a body of water within the Atlantic Ocean, to the north; the Caribbean Sea, also part of the Atlantic Ocean, to the northeast; and the Pacific Ocean to the southwest. Some physiographists define the Isthmus of Tehuantepec as the northern geographic border of Central America, while others use the northwestern borders of Belize and Guatemala. From there, the Central American land mass extends southeastward to the Atrato River, where it connects to the Pacific Lowlands in northwestern South America.
Central America has over 70 active volcanoes, 41 of which are located in El Salvador and Guatemala. The most active volcano in Central America is Santa María, which still erupts frequently; its most recent eruptive phase began in 2013 and is ongoing.
Of the many mountain ranges within Central America, the longest are the Sierra Madre de Chiapas, the Cordillera Isabelia and the Cordillera de Talamanca. Volcán Tajumulco is the highest peak in Central America. Other high points of Central America are as listed in the table below:
Between the mountain ranges lie fertile valleys that are suitable for the raising of livestock and for the production of coffee, tobacco, beans and other crops. Most of the population of Honduras, Costa Rica and Guatemala lives in valleys.
Trade winds have a significant effect upon the climate of Central America. Temperatures in Central America are highest just prior to the summer wet season, and are lowest during the winter dry season, when trade winds contribute to a cooler climate. The highest temperatures occur in April, due to higher levels of sunlight, lower cloud cover and a decrease in trade winds.
Central American forests.
Central America is part of the Mesoamerican biodiversity hotspot, boasting 7% of the world's biodiversity. The Pacific Flyway is a major north–south flyway for migratory birds in the Americas, extending from Alaska to Tierra del Fuego. Due to the funnel-like shape of its land mass, migratory birds can be seen in very high concentrations in Central America, especially in the spring and autumn. As a bridge between North America and South America, Central America has many species from the Nearctic and the Neotropical realms. However, the southern countries of the region (Costa Rica and Panama) have more biodiversity than the northern countries (Guatemala and Belize), while the central countries (Honduras, Nicaragua and El Salvador) have the least biodiversity. The table below shows recent statistics:
Over 300 species of the region's flora and fauna are threatened, 107 of which are classified as critically endangered. The underlying problems are deforestation, which is estimated by FAO at 1.2% per year in Central America and Mexico combined, fragmentation of rainforests and the fact that 80% of the vegetation in Central America has already been converted to agriculture.
Efforts to protect fauna and flora in the region are made by creating ecoregions and nature reserves. 36% of Belize's land territory falls under some form of official protected status, giving Belize one of the most extensive systems of terrestrial protected areas in the Americas. In addition, 13% of Belize's marine territory is also protected. A large coral reef extends from Mexico to Honduras: the Mesoamerican Barrier Reef System. The Belize Barrier Reef is part of this. The Belize Barrier Reef is home to a large diversity of plants and animals, and is one of the most diverse ecosystems of the world. It is home to 70 hard coral species, 36 soft coral species, 500 species of fish and hundreds of invertebrate species.
So far only about 10% of the species in the Belize barrier reef have been discovered.
National trees.
From 2001 to 2010, significant forest cover was lost in the region. In 2010 Belize had 63% of remaining forest cover, Costa Rica 46%, Panama 45%, Honduras 41%, Guatemala 37%, Nicaragua 29%, and El Salvador 21%. Most of the loss occurred in the moist forest biome. Woody vegetation loss was partially offset by gains in the coniferous forest biome and in the dry forest biome. Mangroves and deserts contributed only 1% to the loss in forest vegetation. The bulk of the deforestation occurred on the Caribbean slopes of Nicaragua in the period from 2001 to 2010. The most significant regrowth was seen in the coniferous woody vegetation of Honduras.
Montane forests.
The Central American pine-oak forests ecoregion, in the tropical and subtropical coniferous forests biome, is found in Central America and southern Mexico. The Central American pine-oak forests extend along the mountainous spine of Central America, from the Sierra Madre de Chiapas in Mexico's Chiapas state through the highlands of Guatemala, El Salvador, and Honduras to central Nicaragua. The pine-oak forests lie at middle elevations and are surrounded at lower elevations by tropical moist forests and tropical dry forests. Higher elevations are usually covered with Central American montane forests. The Central American pine-oak forests are composed of many species characteristic of temperate North America, including oak, pine, fir, and cypress.
Laurel forest is the most common type of Central American temperate evergreen cloud forest, found in almost all Central American countries, normally at higher elevations above sea level. Tree species include evergreen oaks, members of the laurel family, species of "Weinmannia" and "Magnolia", and "Drimys granadensis". The cloud forest of Sierra de las Minas, Guatemala, is the largest in Central America. In some areas of southeastern Honduras there are cloud forests, the largest located near the border with Nicaragua. In Nicaragua, cloud forests are situated near the border with Honduras, but many were cleared to grow coffee. There are still some temperate evergreen hills in the north. The only cloud forest in the Pacific coastal zone of Central America is on the Mombacho volcano in Nicaragua. In Costa Rica, there are laurel forests in the Cordillera de Tilarán and Volcán Arenal, called Monteverde, also in the Cordillera de Talamanca.
The Central American montane forests are an ecoregion of the tropical and subtropical moist broadleaf forests biome, as defined by the World Wildlife Fund. These forests are of the moist deciduous and the semi-evergreen seasonal subtype of tropical and subtropical moist broadleaf forests and receive high overall rainfall with a warm summer wet season and a cooler winter dry season. Central American montane forests consist of forest patches located at high altitudes on the summits and slopes of the highest mountains in Central America, ranging from southern Mexico, through Guatemala, El Salvador, and Honduras, to northern Nicaragua. The ecoregion has a temperate climate with relatively high precipitation levels.
National birds.
Ecoregions are not only established to protect the forests themselves but also because they are habitats for an incomparably rich and often endemic fauna. Almost half of the bird population of the Talamancan montane forests in Costa Rica and Panama are endemic to this region. Several birds are listed as threatened, most notably the resplendent quetzal ("Pharomacrus mocinno"), three-wattled bellbird ("Procnias tricarunculata"), bare-necked umbrellabird ("Cephalopterus glabricollis"), and black guan ("Chamaepetes unicolor"). Many of the amphibians are endemic and depend on the existence of forest. The golden toad that once inhabited a small region in the Monteverde Reserve, which is part of the Talamancan montane forests, has not been seen alive since 1989 and is listed as extinct by IUCN. The exact causes for its extinction are unknown. Global warming may have played a role, because the development of that frog, which is typical for this area, may have been compromised. Seven small mammals are endemic to the Costa Rica-Chiriqui highlands within the Talamancan montane forest region. Jaguars, cougars, spider monkeys, as well as tapirs, and anteaters live in the woods of Central America. The Central American red brocket is a brocket deer found in Central America's tropical forest.
Geology.
Central America is geologically very active, with volcanic eruptions and earthquakes occurring frequently, and tsunamis occurring occasionally. Many thousands of people have died as a result of these natural disasters.
Most of Central America rests atop the Caribbean Plate. This tectonic plate converges with the Cocos, Nazca, and North American plates to form the Middle America Trench, a major subduction zone. The Middle America Trench is situated off the Pacific coast of Central America and runs roughly parallel to it. Many large earthquakes have occurred as a result of seismic activity at the Middle America Trench. For example, subduction of the Cocos Plate beneath the North American Plate at the Middle America Trench is believed to have caused the 1985 Mexico City earthquake that killed as many as 40,000 people. Seismic activity at the Middle America Trench is also responsible for earthquakes in 1902, 1942, 1956, 1972, 1982, 1992, January 2001, February 2001, 2007, 2012, 2014, and many other earthquakes throughout Central America.
The Middle America Trench is not the only source of seismic activity in Central America. The Motagua Fault is an onshore continuation of the Cayman Trough which forms part of the tectonic boundary between the North American Plate and the Caribbean Plate. This transform fault cuts right across Guatemala and then continues offshore until it merges with the Middle America Trench along the Pacific coast of Mexico, near Acapulco. Seismic activity at the Motagua Fault has been responsible for earthquakes in 1717, 1773, 1902, 1976, 1980, and 2009.
Another onshore continuation of the Cayman Trough is the Chixoy-Polochic Fault, which runs roughly parallel to, and north of, the Motagua Fault. Though less active than the Motagua Fault, seismic activity at the Chixoy-Polochic Fault is still thought to be capable of producing very large earthquakes, such as the 1816 earthquake of Guatemala.
Managua, the capital of Nicaragua, was devastated by earthquakes in 1931 and 1972.
Volcanic eruptions are also common in Central America. In 1968 the Arenal Volcano, in Costa Rica, erupted, killing 87 people as the three villages of Tabacon, Pueblo Nuevo and San Luis were buried under pyroclastic flows and debris. Fertile soils from weathered volcanic lava have made it possible to sustain dense populations in the agriculturally productive highland areas.
Politics.
Integration.
Central America is currently undergoing a process of political, economic and cultural transformation that started in 1907 with the creation of the Central American Court of Justice.
In 1951 the integration process continued with the signature of the San Salvador Treaty, which created the ODECA, the Organization of Central American States. However, the unity of the ODECA was limited by conflicts between several member states.
In 1991, the integration agenda was further advanced by the creation of the Central American Integration System ("Sistema para la Integración Centroamericana", or SICA). SICA provides a clear legal basis to avoid disputes between the member states. SICA membership includes the 7 nations of Central America plus the Dominican Republic, a state that is traditionally considered part of the Caribbean.
On 6 December 2008, SICA announced an agreement to pursue a common currency and common passport for the member nations. No timeline for implementation was discussed.
Central America already has several supranational institutions such as the Central American Parliament, the Central American Bank for Economic Integration and the Central American Common Market.
On 22 July 2011, President Mauricio Funes of El Salvador became the first president "pro tempore" to SICA. El Salvador also became the headquarters of SICA with the inauguration of a new building.
Parliament.
The Central American Parliament (also known as PARLACEN) is a political and parliamentary body of SICA. The parliament started around 1980, and its primary goal was to resolve conflicts in Nicaragua, Guatemala, and El Salvador. Although the group was disbanded in 1986, ideas of unity of Central Americans still remained, so a treaty was signed in 1987 to create the Central American Parliament and other political bodies. Its original members were Guatemala, El Salvador, Nicaragua and Honduras. The parliament is the political organ of Central America, and is part of SICA. New members have since joined, including Panama and the Dominican Republic.
Costa Rica is not a member state of the Central American Parliament, and its accession remains a very unpopular topic at all levels of Costa Rican society because of strong political criticism of the regional parliament, which is regarded by Costa Ricans as a threat to democratic accountability and to the effectiveness of integration efforts. Excessively high salaries for its members, legal immunity of jurisdiction from any member state, corruption, the lack of binding force and effectiveness of the regional parliament's decisions, high operating costs, and the automatic membership granted to Central American presidents once their terms of office end are the most common reasons invoked by Costa Ricans against the Central American Parliament.
Foreign relations.
Until recently, all Central American countries maintained diplomatic relations with Taiwan instead of China. President Óscar Arias of Costa Rica, however, established diplomatic relations with China in 2007, severing formal diplomatic ties with Taiwan. After breaking off relations with the Republic of China in 2017, Panama established diplomatic relations with the People's Republic of China. In August 2018, El Salvador also severed ties with Taiwan to formally recognize the People's Republic of China as the sole China, a move many considered to lack transparency due to its abruptness and reports that the Chinese government wished to invest in the department of La Unión while also promising to fund the ruling party's re-election campaign. The President of El Salvador, Nayib Bukele, broke diplomatic relations with Taiwan and established ties with China. On 9 December 2021, Nicaragua resumed relations with the PRC.
Economy.
Signed in 2004, the Central American Free Trade Agreement (CAFTA) is an agreement between the United States, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Dominican Republic. The treaty is aimed at promoting free trade among its members.
Guatemala has the largest economy in the region. Its main exports are coffee, sugar, bananas, petroleum, clothing, and cardamom. Of its 10.29 billion dollar annual exports, 40.2% go to the United States, 11.1% to neighboring El Salvador, 8% to Honduras, 5.5% to Mexico, 4.7% to Nicaragua, and 4.3% to Costa Rica.
The region is particularly attractive for companies (especially clothing companies) because of its geographical proximity to the United States, very low wages and considerable tax advantages. In addition, the decline in the prices of coffee and other export products and the structural adjustment measures promoted by the international financial institutions have partly ruined agriculture, favouring the emergence of maquiladoras. This sector accounts for 42 per cent of total exports from El Salvador, 55 per cent from Guatemala, and 65 per cent from Honduras. However, its contribution to the economies of these countries is disputed; raw materials are imported, jobs are precarious and low-paid, and tax exemptions weaken public finances.
They are also criticised for the working conditions of employees: insults and physical violence, abusive dismissals (especially of pregnant workers), working hours, non-payment of overtime. According to Lucrecia Bautista, coordinator of the "maquilas" sector of the audit firm Coverco, "labour law regulations are regularly violated in maquilas and there is no political will to enforce their application. In the case of infringements, the labour inspectorate shows remarkable leniency. It is a question of not discouraging investors." Trade unionists are subject to pressure, and sometimes to kidnapping or murder. In some cases, business leaders have used the services of the maras. Finally, black lists containing the names of trade unionists or political activists are circulating in employers' circles.
Economic growth in Central America is projected to slow slightly in 2014–15, as country-specific domestic factors offset the positive effects from stronger economic activity in the United States.
Coasts.
Tourism in Belize has grown considerably in more recent times, and it is now the second largest industry in the nation. Belizean Prime Minister Dean Barrow has stated his intention to use tourism to combat poverty throughout the country. The growth in tourism has positively affected the agricultural, commercial, and finance industries, as well as the construction industry. The results for Belize's tourism-driven economy have been significant, with the nation welcoming almost one million tourists in a calendar year for the first time in its history in 2012. Belize is also the only country in Central America with English as its official language, making this country a comfortable destination for English-speaking tourists.
Costa Rica is the most visited nation in Central America. Tourism in Costa Rica is one of the fastest growing economic sectors of the country, having become the largest source of foreign revenue by 1995. Since 1999, tourism has earned more foreign exchange than bananas, pineapples and coffee exports combined. The tourism boom began in 1987, with the number of visitors up from 329,000 in 1988, through 1.03 million in 1999, to a historical record of 2.43 million foreign visitors and $1.92-billion in revenue in 2013. In 2012 tourism contributed with 12.5% of the country's GDP and it was responsible for 11.7% of direct and indirect employment.
Tourism in Nicaragua has grown considerably recently, and it is now the second largest industry in the nation. Nicaraguan President Daniel Ortega has stated his intention to use tourism to combat poverty throughout the country. The growth in tourism has positively affected the agricultural, commercial, and finance industries, as well as the construction industry. The results for Nicaragua's tourism-driven economy have been significant, with the nation welcoming one million tourists in a calendar year for the first time in its history in 2010.
Transport.
Roads.
The Inter-American Highway is the Central American section of the Pan-American Highway, and spans between Nuevo Laredo, Mexico, and Panama City, Panama. Because of the break in the highway known as the Darién Gap, it is not possible to cross between Central America and South America in an automobile.
Demographics.
Life expectancy.
List of countries by life expectancy at birth for 2023, according to the World Bank Group.
Capital cities.
Estimates of the population of Central America, its land area, and its population density are given below. Human Development Index values are from the estimates for 2017.
Languages.
Spanish is the official language in all Central American countries except Belize, where the official language is English. Mayan languages constitute a language family consisting of about 26 related languages. Guatemala formally recognized 21 of these in 1996. Xinca, Miskito, and Garifuna are also spoken in Central America.
Ethnic groups.
This region of the continent is very rich in terms of ethnic groups. The majority of the population is mestizo, with sizable Mayan and African descendent populations present, along with numerous other indigenous groups such as the Miskito people. The immigration of Arabs, Jews, Chinese, Europeans and others brought additional groups to the area.
Cathedrals.
The predominant religion in Central America is Christianity (95.6%). Beginning with the Spanish colonization of Central America in the 16th century, Catholicism became the most popular religion in the region until the first half of the 20th century. Since the 1960s, there has been an increase in other Christian groups, particularly Protestantism, as well as other religious organizations, and individuals identifying themselves as having no religion.
Source: Jason Mandrik, Operation World Statistics (2020).
Continuous function.
In mathematics, a continuous function is a function such that a small variation of the argument induces a small variation of the value of the function. This implies there are no abrupt changes in value, known as "discontinuities". More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity.
Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology.
A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity.
As an example, the function denoting the height of a growing flower at time "t" would be considered continuous. In contrast, the function denoting the amount of money in a bank account at time "t" would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn.
History.
A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of formula_1 as follows: an infinitely small increment formula_2 of the independent variable "x" always produces an infinitely small change formula_3 of the dependent variable "y" (see e.g. "Cours d'Analyse", p. 34). Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point "c" unless it was defined at and on both sides of "c", but Édouard Goursat allowed the function to be defined only at and on one side of "c", and Camille Jordan allowed it even if the function was defined only at "c". All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854.
Real functions.
Definition.
A real function that is a function from real numbers to real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below.
Continuity of real functions is usually defined in terms of limits. A function "f" with variable "x" is "continuous at" the real number "c", if the limit of formula_4 as "x" tends to "c", is equal to formula_5
There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain.
A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every point of the interval. A function that is continuous on the interval formula_6 (the whole real line) is often called simply a continuous function; one also says that such a function is "continuous everywhere". For example, all polynomial functions are continuous everywhere.
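As a small worked illustration (the particular polynomial is chosen only for this example), the limit laws give, for every real number \(c\),
\[ \lim_{x \to c} \left( x^2 + 3x \right) = \left( \lim_{x \to c} x \right)^2 + 3 \lim_{x \to c} x = c^2 + 3c , \]
which is exactly the value of the polynomial at \(c\); hence \(x \mapsto x^2 + 3x\) is continuous at every point of the real line.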
A function is continuous on a semi-open or a closed interval if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function when the variable tends to the endpoint from the interior of the interval. For example, the function formula_7 is continuous on its whole domain, which is the closed interval formula_8
Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples include the reciprocal function formula_9 and the tangent function formula_10 When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous.
A partial function is "discontinuous" at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions formula_11 and formula_12 are discontinuous at 0, and remain discontinuous whichever value is chosen for defining them at 0. A point where a function is discontinuous is called a "discontinuity".
Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above.
Let formula_13 be a function whose domain formula_14 is contained in formula_15 of real numbers.
Some (but not all) possibilities for formula_14 are:
In the case of an open interval, formula_23 and formula_24 do not belong to formula_14, and the values formula_26 and formula_27 need not be defined; even if they are, they do not matter for continuity on formula_14.
Definition in terms of limits of functions.
The function "f" is "continuous at some point" "c" of its domain if the limit of formula_4, as "x" approaches "c" through the domain of "f", exists and is equal to formula_5 In mathematical notation, this is written as
formula_31
In detail this means three conditions: first, "f" has to be defined at "c" (guaranteed by the requirement that "c" is in the domain of "f"). Second, the limit on the left-hand side of that equation has to exist. Third, the value of this limit must equal formula_5
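For instance (the function below is chosen only to show the third condition failing), let \(g(x) = x^2\) for \(x \neq 1\) and \(g(1) = 3\). Then \(g\) is defined at \(1\) and the limit exists,
\[ \lim_{x \to 1} g(x) = 1 , \]
but it differs from \(g(1) = 3\), so \(g\) is not continuous at \(1\); redefining \(g(1) = 1\) would remove the discontinuity.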
Definition in terms of neighborhoods.
A neighborhood of a point "c" is a set that contains, at least, all points within some fixed distance of "c". Intuitively, a function is continuous at a point "c" if the range of "f" over the neighborhood of "c" shrinks to a single point formula_33 as the width of the neighborhood around "c" shrinks to zero. More precisely, a function "f" is continuous at a point "c" of its domain if, for any neighborhood formula_34 there is a neighborhood formula_35 in its domain such that formula_36 whenever formula_37
As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous.
Definition in terms of limits of sequences.
One can instead require that for any sequence formula_38 of points in the domain which converges to "c", the corresponding sequence formula_39 converges to formula_5 In mathematical notation, formula_41
Weierstrass and Jordan definitions (epsilon–delta) of continuous functions.
Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function formula_42 as above and an element formula_43 of the domain formula_14, formula_45 is said to be continuous at the point formula_43 when the following holds: For any positive real number formula_47 however small, there exists some positive real number formula_48 such that for all formula_49 in the domain of formula_45 with formula_51 the value of formula_52 satisfies
formula_53
Alternatively written, continuity of formula_42 at formula_55 means that for every formula_47 there exists a formula_48 such that for all formula_58:
formula_59
More intuitively, we can say that if we want to get all the formula_52 values to stay in some small neighborhood around formula_61 we need to choose a small enough neighborhood for the formula_49 values around formula_63 If we can do that no matter how small the formula_64 neighborhood is, then formula_45 is continuous at formula_63
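To see the definition at work on a concrete function (again chosen only for illustration), take \(f(x) = 2x + 1\) and any point \(c\). Given \(\varepsilon > 0\), the choice \(\delta = \varepsilon / 2\) suffices, since \(|x - c| < \delta\) implies
\[ |f(x) - f(c)| = |(2x + 1) - (2c + 1)| = 2\,|x - c| < 2\delta = \varepsilon , \]
so \(f\) is continuous at every real number \(c\).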
In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology.
Weierstrass had required that the interval formula_67 be entirely within the domain formula_14, but Jordan removed that restriction.
Definition in terms of control of the remainder.
In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity.
A function formula_69 is called a control function if
A function formula_71 is "C"-continuous at formula_43 if there exists a neighbourhood formula_73 such that
formula_74
A function is continuous in formula_43 if it is "C"-continuous for some control function "C".
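As a concrete instance (assuming the usual requirements that a control function is non-decreasing and tends to zero at zero, and with the constant chosen only for illustration), \(C(\delta) = 3\delta\) is a control function, and \(f(x) = 3x + 1\) is \(C\)-continuous at every point \(x_0\), because
\[ |f(x) - f(x_0)| = 3\,|x - x_0| \le C\big(|x - x_0|\big) . \]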
This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions formula_76, a function is continuous with respect to formula_76 if it is "C"-continuous for some control function "C" in formula_76. For example, the Lipschitz continuous functions, the Hölder continuous functions of exponent α, and the uniformly continuous functions below are defined by the sets of control functions
formula_78
formula_79
formula_80
respectively.
Definition using oscillation.
Continuity can also be defined in terms of oscillation: a function "f" is continuous at a point formula_43 if and only if its oscillation at that point is zero; in symbols, formula_82 A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point.
This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than formula_83 (hence a formula_84 set) – and gives a rapid proof of one direction of the Lebesgue integrability condition.
The oscillation is equivalent to the formula_85 definition by a simple re-arrangement and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given formula_86 there is no formula_87 that satisfies the formula_85 definition, then the oscillation is at least formula_89 and conversely if for every formula_83 there is a desired formula_91 the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space.
Definition using the hyperreals.
Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see "Cours d'analyse", page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows.
A real-valued function "f" is continuous at "x" if its natural extension to the hyperreals has the property that, for every infinitesimal "dx", the difference "f"("x" + "dx") − "f"("x") is infinitesimal (see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity.
Rules for continuity.
Proving the continuity of a function by a direct application of the definition is generally not an easy task. Fortunately, in practice, most functions are built from simpler functions, and their continuity can be deduced immediately from the way they are defined, by applying the following rules:
These rules imply that every polynomial function is continuous everywhere and that a rational function is continuous everywhere where it is defined, if the numerator and the denominator have no common zeros. More generally, the quotient of two continuous functions is continuous outside the zeros of the denominator.
An example of a function for which the above rules are not sufficient is the sinc function, which is defined by sinc("x") = sin("x")/"x" for "x" ≠ 0 and sinc(0) = 1. The above rules show immediately that the function is continuous for "x" ≠ 0, but, for proving the continuity at 0, one has to prove
formula_92
As this is true, one gets that the sinc function is a continuous function on all real numbers.
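One standard way to establish this limit (sketched here; other arguments exist) uses the squeeze theorem: for \(0 < |x| < \pi/2\) one has
\[ \cos x \;\le\; \frac{\sin x}{x} \;\le\; 1 , \]
and since \(\cos x \to 1\) as \(x \to 0\), the quotient \(\sin(x)/x\) is forced to tend to \(1\) as well.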
Examples of discontinuous functions.
An example of a discontinuous function is the Heaviside step function formula_93, defined by
formula_94
Pick for instance formula_95. Then there is no neighborhood around formula_96, i.e. no open interval formula_97 with formula_98, that will force all the formula_99 values to be within the chosen neighborhood of formula_100, i.e. within formula_101. Intuitively, we can think of this type of discontinuity as a sudden jump in function values.
Similarly, the signum or sign function
formula_102
is discontinuous at formula_96 but continuous everywhere else. Yet another example: the function
formula_104
is continuous everywhere apart from formula_96.
Besides plausible continuities and discontinuities like the ones above, there are also functions with behavior that is often called pathological, for example, Thomae's function,
formula_106
is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers,
formula_107
is nowhere continuous.
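Written out explicitly (the names "T" and "D" are used here only to distinguish the two examples), these functions are commonly defined as
\[ T(x) = \begin{cases} \tfrac{1}{q} & \text{if } x = \tfrac{p}{q} \text{ is rational, written in lowest terms with } q > 0, \\ 0 & \text{if } x \text{ is irrational,} \end{cases} \qquad D(x) = \begin{cases} 1 & \text{if } x \text{ is rational,} \\ 0 & \text{if } x \text{ is irrational.} \end{cases} \]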
Properties.
A useful lemma.
Let formula_52 be a function that is continuous at a point formula_109 and let formula_110 be a value such that formula_111 Then formula_112 throughout some neighbourhood of formula_63
"Proof:" By the definition of continuity, take formula_114; then there exists formula_115 such that
formula_116
Suppose there is a point in the neighbourhood formula_117 for which formula_118 then we have the contradiction
formula_119
Intermediate value theorem.
The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states:
If the real-valued function "f" is continuous on the closed interval formula_120 and "k" is some number between formula_26 and formula_122 then there is some number formula_123 such that formula_124
For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m.
As a consequence, if "f" is continuous on formula_125 and formula_26 and formula_27 differ in sign, then, at some point formula_123 formula_33 must equal zero.
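As a numerical illustration (the polynomial is chosen only for this example), the function \(p(x) = x^3 + x - 1\) is continuous everywhere, with \(p(0) = -1 < 0\) and \(p(1) = 1 > 0\), so the intermediate value theorem guarantees some \(c\) in \((0, 1)\) with
\[ p(c) = c^3 + c - 1 = 0 . \]
The theorem asserts only that such a root exists; locating it, for instance by repeatedly bisecting the interval, is a separate task.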
Extreme value theorem.
The extreme value theorem states that if a function "f" is defined on a closed interval formula_125 (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. there exists formula_131 with formula_132 for all formula_133 The same is true of the minimum of "f". These statements are not, in general, true if the function is defined on an open interval formula_134 (or any set that is not both closed and bounded), as, for example, the continuous function formula_135 defined on the open interval (0,1), does not attain a maximum, being unbounded above.
Relation to differentiability and integrability.
Every differentiable function
formula_136
is continuous, as can be shown. The converse does not hold: for example, the absolute value function
formula_137
is everywhere continuous. However, it is not differentiable at formula_96 (but is so everywhere else). Weierstrass's function is also everywhere continuous but nowhere differentiable.
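The failure of differentiability at 0 can be seen directly from the two one-sided difference quotients of the absolute value function, which disagree:
\[ \lim_{h \to 0^+} \frac{|0 + h| - |0|}{h} = 1 , \qquad \lim_{h \to 0^-} \frac{|0 + h| - |0|}{h} = -1 . \]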
The derivative "f′"("x") of a differentiable function "f"("x") need not be continuous. If "f′"("x") is continuous, "f"("x") is said to be "continuously differentiable". The set of such functions is denoted formula_139 More generally, the set of functions
formula_140
(from an open interval (or open subset of formula_15) formula_142 to the reals) such that "f" is formula_143 times differentiable and such that the formula_143-th derivative of "f" is continuous is denoted formula_145 See differentiability class. In the field of computer graphics, properties related (but not identical) to formula_146 are sometimes called formula_147 (continuity of position), formula_148 (continuity of tangency), and formula_149 (continuity of curvature); see Smoothness of curves and surfaces.
Every continuous function
formula_150
is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows.
Pointwise and uniform limits.
Given a sequence
formula_151
of functions such that the limit
formula_152
exists for all formula_153, the resulting function formula_52 is referred to as the pointwise limit of the sequence of functions formula_155 The pointwise limit function need not be continuous, even if all functions formula_156 are continuous, as the animation at the right shows. However, "f" is continuous if all functions formula_156 are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous.
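A standard example of a pointwise limit that destroys continuity (recalled here only as an illustration) is the sequence \(f_n(x) = x^n\) on the interval \([0, 1]\): each \(f_n\) is continuous, but the pointwise limit
\[ f(x) = \lim_{n \to \infty} x^n = \begin{cases} 0 & \text{if } 0 \le x < 1, \\ 1 & \text{if } x = 1, \end{cases} \]
is discontinuous at \(1\), so the convergence cannot be uniform on \([0, 1]\).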
Directional continuity.
Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. Formally, "f" is said to be right-continuous at the point "c" if the following holds: For any number formula_158 however small, there exists some number formula_48 such that for all "x" in the domain with formula_160 the value of formula_52 satisfies
formula_162
This is the same condition as for continuous functions, except it is required to hold only for "x" strictly larger than "c". Requiring formula_163 to hold instead for all "x" with formula_164 yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous.
Semicontinuity.
A function "f" is lower semi-continuous at the point "c" if, roughly, any jumps that might occur only go down, but not up. That is, for any formula_47 there exists some number formula_48 such that for all "x" in the domain with formula_167 the value of formula_52 satisfies
formula_169
The reverse condition is upper semi-continuity.
Continuous functions between metric spaces.
The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set formula_170 equipped with a function (called metric) formula_171 that can be thought of as a measurement of the distance of any two elements in "X". Formally, the metric is a function
formula_172
that satisfies a number of requirements, notably the triangle inequality. Given two metric spaces formula_173 and formula_174 and a function
formula_175
then formula_45 is continuous at the point formula_177 (with respect to the given metrics) if for any positive real number formula_47 there exists a positive real number formula_48 such that all formula_180 satisfying formula_181 will also satisfy formula_182 As in the case of real functions above, this is equivalent to the condition that for every sequence formula_183 in formula_170 with limit formula_185 we have formula_186 The latter condition can be weakened as follows: formula_45 is continuous at the point formula_188 if and only if for every convergent sequence formula_183 in formula_170 with limit formula_188, the sequence formula_192 is a Cauchy sequence, and formula_188 is in the domain of formula_45.
The set of points at which a function between metric spaces is continuous is a formula_84 set – this follows from the formula_85 definition of continuity.
This notion of continuity is applied, for example, in functional analysis. A key statement in this area says that a linear operator
formula_197
between normed vector spaces formula_198 and formula_199 (which are vector spaces equipped with a compatible norm, denoted formula_200) is continuous if and only if it is bounded, that is, there is a constant formula_201 such that
formula_202
for all formula_203
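A concrete example (chosen only to illustrate boundedness) is the integration operator on the space \(C([0,1])\) of continuous functions equipped with the supremum norm, defined by \((Tf)(x) = \int_0^x f(t)\,dt\). It is linear, and
\[ \|Tf\|_\infty = \sup_{x \in [0,1]} \left| \int_0^x f(t)\,dt \right| \le \|f\|_\infty , \]
so it is bounded with constant \(1\) and therefore continuous.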
Uniform, Hölder and Lipschitz continuity.
The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way formula_87 depends on formula_83 and "c" in the definition above. Intuitively, a function "f" as above is uniformly continuous if the formula_87 does not depend on the point "c". More precisely, it is required that for every real number formula_158 there exists formula_48 such that for every formula_209 with formula_210 we have that formula_211 Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space "X" is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces.
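For instance (a standard example, recalled here for concreteness), the function \(f(x) = x^2\) is uniformly continuous on the bounded interval \([0, 1]\), since for \(x, y \in [0, 1]\)
\[ |x^2 - y^2| = |x + y|\,|x - y| \le 2\,|x - y| , \]
so \(\delta = \varepsilon / 2\) works simultaneously for all points. On the whole real line the same function is continuous but not uniformly continuous, because the factor \(|x + y|\) can be made arbitrarily large.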
A function is Hölder continuous with exponent α (a real number) if there is a constant "K" such that for all formula_212 the inequality
formula_213
holds. Any Hölder continuous function is uniformly continuous. The particular case formula_214 is referred to as Lipschitz continuity. That is, a function is Lipschitz continuous if there is a constant "K" such that the inequality
formula_215
holds for any formula_216 The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations.
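As an example, the sine function is Lipschitz continuous on the whole real line with constant \(K = 1\): by the mean value theorem, for any \(x\) and \(y\) there is some \(\xi\) between them with
\[ |\sin x - \sin y| = |\cos \xi|\,|x - y| \le |x - y| . \]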
Continuous functions between topological spaces.
Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set "X" together with a topology on "X", which is a set of subsets of "X" satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of "X" (with respect to the topology).
A function
formula_175
between two topological spaces "X" and "Y" is continuous if for every open set formula_218 the inverse image
formula_219
is an open subset of "X". That is, "f" is a function between the sets "X" and "Y" (not on the elements of the topology formula_220), but the continuity of "f" depends on the topologies used on "X" and "Y".
This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in "Y" are closed in "X".
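As a simple check of this definition against the real case (the particular function is chosen only as an illustration), consider \(f(x) = x^2\) with the usual topology on both domain and codomain. The preimage of the open interval \((1, 4)\) is
\[ f^{-1}\big((1, 4)\big) = (-2, -1) \cup (1, 2) , \]
which is open; the same happens for every open set, so \(f\) is continuous in the topological sense, in agreement with the earlier definitions.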
An extreme example: if a set "X" is given the discrete topology (in which every subset is open), all functions
formula_221
to any topological space "T" are continuous. On the other hand, if "X" is equipped with the indiscrete topology (in which the only open subsets are the empty set and "X") and the space "T" is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous.
Continuity at a point.
The translation in the language of neighborhoods of the formula_222-definition of continuity leads to the following definition of the continuity at a point:
This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images.
Also, as every set that contains a neighborhood is also a neighborhood, and formula_223 is the largest subset of "X" such that formula_224 this definition may be simplified into:
As an open set is a set that is a neighborhood of all its points, a function formula_225 is continuous at every point of "X" if and only if it is a continuous function.
If "X" and "Y" are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at "x" and "f"("x") instead of all neighborhoods. This gives back the above formula_85 definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that "f" is continuous at "a" if and only if the limit of "f" as "x" approaches "a" is "f"("a"). At an isolated point, every function is continuous.
Given formula_227 a map formula_225 is continuous at formula_49 if and only if whenever formula_230 is a filter on formula_170 that converges to formula_49 in formula_233 which is expressed by writing formula_234 then necessarily formula_235 in formula_236
If formula_237 denotes the neighborhood filter at formula_49 then formula_225 is continuous at formula_49 if and only if formula_241 in formula_236 Moreover, this happens if and only if the prefilter formula_243 is a filter base for the neighborhood filter of formula_52 in formula_236
Alternative definitions.
Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function.
Sequences and nets.
In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition.
In detail, a function formula_225 is sequentially continuous if whenever a sequence formula_183 in formula_170 converges to a limit formula_249 the sequence formula_192 converges to formula_251 Thus, sequentially continuous functions "preserve sequential limits." Every continuous function is sequentially continuous. If formula_170 is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if formula_170 is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions.
For instance, consider the case of real-valued functions of one real variable:
"Proof." Assume that formula_254 is continuous at formula_43 (in the sense of formula_256 continuity). Let formula_257 be a sequence converging at formula_43 (such a sequence always exists, for example, formula_259); since formula_45 is continuous at formula_43
formula_262
For any such formula_263 we can find a natural number formula_264 such that for all formula_265
formula_266
since formula_183 converges to formula_43; combining this with formula_269 we obtain
formula_270
Assume on the contrary that formula_45 is sequentially continuous and proceed by contradiction: suppose formula_45 is not continuous at formula_43
formula_274
then we can take formula_275 and call the corresponding point formula_276: in this way we have defined a sequence formula_277 such that
formula_278
by construction formula_279 but formula_280, which contradicts the hypothesis of sequential continuity. formula_281
Closure operator and interior operator definitions.
In terms of the interior and closure operators, we have the following equivalences,
"Proof."i ⇒ ii.
Fix a subset formula_282 of formula_236 Since formula_284 is open.
and formula_45 is continuous, formula_286 is open in formula_287
As formula_288 we have formula_289
By the definition of the interior, formula_290 is the largest open set contained in formula_291 Hence formula_292
ii ⇒ iii.
Fix formula_293 and let formula_294 Suppose to the contrary that formula_295; then we may find some open neighbourhood formula_198 of formula_52 that is disjoint from formula_298. By ii, formula_299 hence formula_223 is open. Then we have found an open neighbourhood of formula_49 that does not intersect formula_302, contradicting the fact that formula_294
Hence formula_304
iii ⇒ i.
Let formula_305 be closed. Let formula_306 be the preimage of formula_307
By iii, we have formula_308
Since formula_309
we have further that formula_310
Thus formula_311
Hence formula_312 is closed and we are done.
If we declare that a point formula_49 is close to a subset formula_314 if formula_315 then this terminology allows for a plain English description of continuity: formula_45 is continuous if and only if for every subset formula_317 formula_45 maps points that are close to formula_319 to points that are close to formula_320 Similarly, formula_45 is continuous at a fixed given point formula_180 if and only if whenever formula_49 is close to a subset formula_317 then formula_52 is close to formula_320
Instead of specifying topological spaces by their open subsets, any topology on formula_170 can alternatively be determined by a closure operator or by an interior operator.
Specifically, the map that sends a subset formula_319 of a topological space formula_170 to its topological closure formula_302 satisfies the Kuratowski closure axioms. Conversely, for any closure operator formula_331 there exists a unique topology formula_332 on formula_170 (specifically, formula_334) such that for every subset formula_317 formula_336 is equal to the topological closure formula_337 of formula_319 in formula_339 If the sets formula_170 and formula_341 are each associated with closure operators (both denoted by formula_342) then a map formula_225 is continuous if and only if formula_344 for every subset formula_345
Similarly, the map that sends a subset formula_319 of formula_170 to its topological interior formula_348 defines an interior operator. Conversely, any interior operator formula_349 induces a unique topology formula_332 on formula_170 (specifically, formula_352) such that for every formula_317 formula_354 is equal to the topological interior formula_355 of formula_319 in formula_339 If the sets formula_170 and formula_341 are each associated with interior operators (both denoted by formula_360) then a map formula_225 is continuous if and only if formula_362 for every subset formula_363
Filters and prefilters.
Continuity can also be characterized in terms of filters. A function formula_225 is continuous if and only if whenever a filter formula_230 on formula_170 converges in formula_170 to a point formula_227 then the prefilter formula_369 converges in formula_341 to formula_251 This characterization remains true if the word "filter" is replaced by "prefilter."
Properties.
If formula_225 and formula_373 are continuous, then so is the composition formula_374 If formula_225 is continuous and its domain is compact, connected, path-connected, Lindelöf, or separable, then its image has the corresponding property.
The possible topologies on a fixed set "X" are partially ordered: a topology formula_376 is said to be coarser than another topology formula_377 (notation: formula_378) if every open subset with respect to formula_376 is also open with respect to formula_380 Then, the identity map
formula_381
is continuous if and only if formula_378 (see also comparison of topologies). More generally, a continuous function
formula_383
stays continuous if the topology formula_384 is replaced by a coarser topology and/or formula_385 is replaced by a finer topology.
Homeomorphisms.
Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map "f" has an inverse function, that inverse is continuous, and if a continuous map "g" has an inverse, that inverse is open. Given a bijective function "f" between two topological spaces, the inverse function formula_386 need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism.
If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism.
Defining topologies via continuous functions.
Given a function
formula_387
where "X" is a topological space and "S" is a set (without a specified topology), the final topology on "S" is defined by letting the open sets of "S" be those subsets "A" of "S" for which formula_388 is open in "X". If "S" has an existing topology, "f" is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on "S". Thus, the final topology is the finest topology on "S" that makes "f" continuous. If "f" is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by "f".
Dually, for a function "f" from a set "S" to a topological space "X", the initial topology on "S" is defined by designating as an open set every subset "A" of "S" such that formula_389 for some open subset "U" of "X". If "S" has an existing topology, "f" is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on "S". Thus, the initial topology is the coarsest topology on "S" that makes "f" continuous. If "f" is injective, this topology is canonically identified with the subspace topology of "S", viewed as a subset of "X".
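For finite sets, the final topology can be enumerated directly from this definition. In the sketch below, the sets, the topology on "X", and the map "f" are illustrative choices only.

```python
# Enumerate the final topology on S induced by f : X -> S,
# i.e. the subsets A of S whose preimage under f is open in X.
from itertools import chain, combinations

def subsets(s):
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

X = {0, 1, 2}
opens_X = {frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})}  # topology on X
S = {'p', 'q'}
f = {0: 'p', 1: 'p', 2: 'q'}

final_topology = {A for A in subsets(S)
                  if frozenset(x for x in X if f[x] in A) in opens_X}
print(sorted(final_topology, key=len))   # empty set, {'p'}, and {'p', 'q'}
```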
A topology on a set "S" is uniquely determined by the class of all continuous functions formula_390 into all topological spaces "X". Dually, a similar idea can be applied to maps formula_391
Related notions.
If formula_392 is a continuous function from some subset formula_393 of a topological space formula_170 then a continuous extension of formula_45 to formula_170 is any continuous function formula_397 such that formula_398 for every formula_399 a condition that is often written as formula_400 In words, it is any continuous function formula_397 that restricts to formula_45 on formula_403 This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem. If formula_392 is not continuous, then it could not possibly have a continuous extension. If formula_341 is a Hausdorff space and formula_393 is a dense subset of formula_170 then a continuous extension of formula_392 to formula_233 if one exists, will be unique. The Blumberg theorem states that if formula_410 is an arbitrary function then there exists a dense subset formula_14 of formula_15 such that the restriction formula_413 is continuous; in other words, every function formula_414 can be restricted to some dense subset on which it is continuous.
Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function formula_225 between particular types of partially ordered sets formula_170 and formula_341 is continuous if for each directed subset formula_319 of formula_233 we have formula_420 Here formula_421 is the supremum with respect to the orderings in formula_170 and formula_423 respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology.
In category theory, a functor
formula_424
between two categories is called continuous if it commutes with small limits. That is to say,
formula_425
for any small (that is, indexed by a set formula_426 as opposed to a class) diagram of objects in formula_427.
A continuity space is a generalization of metric spaces and posets that uses the concept of quantales and can be used to unify the notions of metric spaces and domains.
In measure theory, a function formula_428 defined on a Lebesgue measurable set formula_429 is called approximately continuous at a point formula_430 if the approximate limit of formula_45 at formula_43 exists and equals formula_64. This generalizes the notion of continuity by replacing the ordinary limit with the approximate limit. A fundamental result known as the Stepanov-Denjoy theorem states that a function is measurable if and only if it is approximately continuous almost everywhere.
|
6123
|
49550318
|
https://en.wikipedia.org/wiki?curid=6123
|
Curl (mathematics)
|
In vector calculus, the curl, also known as rotor, is a vector operator that describes the infinitesimal circulation of a vector field in three-dimensional Euclidean space. The curl at a point in the field is represented by a vector whose length and direction denote the magnitude and axis of the maximum circulation. The curl of a field is formally defined as the circulation density at each point of the field.
A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The corresponding form of the fundamental theorem of calculus is Stokes' theorem, which relates the surface integral of the curl of a vector field to the line integral of the vector field around the boundary curve.
The notation curl "F" is more common in North America. In the rest of the world, particularly in 20th-century scientific literature, the alternative notation rot "F" is traditionally used, which comes from the "rate of rotation" that it represents. To avoid confusion, modern authors tend to use the cross product notation with the del (nabla) operator, as in ∇ × "F", which also reveals the relation between the curl (rotor), divergence, and gradient operators.
Unlike the gradient and divergence, curl as formulated in vector calculus does not generalize simply to other dimensions; some generalizations are possible, but only in three dimensions is the geometrically defined curl of a vector field again a vector field. This deficiency is a direct consequence of the limitations of vector calculus; on the other hand, when expressed as an antisymmetric tensor field via the wedge operator of geometric calculus, the curl generalizes to all dimensions. The circumstance is similar to that attending the 3-dimensional cross product, and indeed the connection is reflected in the notation formula_1 for the curl.
The name "curl" was first suggested by James Clerk Maxwell in 1871 but the concept was apparently first used in the construction of an optical field theory by James MacCullagh in 1839.
Definition.
The curl of a vector field "F", denoted by curl "F", or formula_2, or ∇ × "F", is an operator that maps continuously differentiable vector fields to continuous vector fields; more generally, it lowers the order of differentiability by one. It can be defined in several ways, to be mentioned below:
One way to define the curl of a vector field at a point is implicitly through its components along various axes passing through the point: if formula_3 is any unit vector, the component of the curl of the field along the direction formula_3 may be defined to be the limiting value of a closed line integral in a plane perpendicular to formula_3 divided by the area enclosed, as the path of integration is contracted indefinitely around the point.
More specifically, the curl is defined at a point as
formula_6
where the line integral is calculated along the boundary of the area containing the point "p", and the normalizing factor is the magnitude of that area. This equation defines the component of the curl of the field along the direction formula_3. The infinitesimal surfaces bounded by the path of integration have formula_3 as their normal, and the path is oriented via the right-hand rule.
The above formula means that the component of the curl of a vector field along a certain axis is the "infinitesimal area density" of the circulation of the field in a plane perpendicular to that axis. This formula does not "a priori" define a legitimate vector field, for the individual circulation densities with respect to various axes "a priori" need not relate to each other in the same way as the components of a vector do; that they "do" indeed relate to each other in this precise manner must be proven separately.
To this definition fits naturally the Kelvin–Stokes theorem, as a global formula corresponding to the definition. It equates the surface integral of the curl of a vector field to the above line integral taken around the boundary of the surface.
Another way one can define the curl vector of a function at a point is explicitly as the limiting value of a vector-valued surface integral over a shell enclosing the point, divided by the volume enclosed, as the shell is contracted indefinitely around the point.
More specifically, the curl may be defined by the vector formula
formula_9
where the surface integral is calculated over the boundary of the volume in question, the normalizing factor is the magnitude of that volume, and formula_10 points outward from the surface perpendicularly at every point of the boundary.
In this formula, the cross product in the integrand measures the tangential component of the field at each point on the surface, and points along the surface at right angles to the "tangential projection" of the field. Integrating this cross product over the whole surface results in a vector whose magnitude measures the overall circulation of the field around the surface, and whose direction is at right angles to this circulation. The above formula says that the "curl" of a vector field at a point is the "infinitesimal volume density" of this "circulation vector" around the point.
To this definition fits naturally another global formula (similar to the Kelvin-Stokes theorem) which equates the volume integral of the curl of a vector field to the above surface integral taken over the boundary of the volume.
Whereas the above two definitions of the curl are coordinate free, there is another "easy to memorize" definition of the curl in curvilinear orthogonal coordinates, e.g. in Cartesian coordinates, spherical, cylindrical, or even elliptical or parabolic coordinates: formula_11
The equation for each component can be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1 → 2, 2 → 3, and 3 → 1 (where the subscripts represent the relevant indices).
If are the Cartesian coordinates and are the orthogonal coordinates, then
formula_12
is the length of the coordinate vector corresponding to . The remaining two components of curl result from cyclic permutation of indices: 3,1,2 → 1,2,3 → 2,3,1.
Usage.
In practice, the two coordinate-free definitions described above are rarely used because in virtually all cases, the curl operator can be applied using some set of curvilinear coordinates, for which simpler representations have been derived.
The notation formula_13 has its origins in the similarities to the 3-dimensional cross product, and it is useful as a mnemonic in Cartesian coordinates if formula_14 is taken as a vector differential operator del. Such notation involving operators is common in physics and algebra.
Expanded in 3-dimensional Cartesian coordinates (see "Del in cylindrical and spherical coordinates" for spherical and cylindrical coordinate representations), formula_13 is, for formula_16 composed of formula_17 (where the subscripts indicate the components of the vector, not partial derivatives):
formula_18
where i, j, and k are the unit vectors for the "x"-, "y"-, and "z"-axes, respectively. This expands as follows:
formula_19
Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes but the result inverts under reflection.
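As an illustration of the expansion above, the components can be generated symbolically. The sketch below uses SymPy; the component names Fx, Fy, Fz are placeholders chosen here rather than notation from the article.

```python
# Symbolic Cartesian curl: each component is a difference of partial derivatives.
import sympy as sp

x, y, z = sp.symbols('x y z')
Fx = sp.Function('Fx')(x, y, z)
Fy = sp.Function('Fy')(x, y, z)
Fz = sp.Function('Fz')(x, y, z)

curl_F = (
    sp.diff(Fz, y) - sp.diff(Fy, z),   # x-component
    sp.diff(Fx, z) - sp.diff(Fz, x),   # y-component
    sp.diff(Fy, x) - sp.diff(Fx, y),   # z-component
)
print(curl_F)
```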
In a general coordinate system, the curl is given by
formula_20
where ε denotes the Levi-Civita tensor, ∇ the covariant derivative, formula_21 is the determinant of the metric tensor and the Einstein summation convention implies that repeated indices are summed over. Due to the symmetry of the Christoffel symbols participating in the covariant derivative, this expression reduces to the partial derivative:
formula_22
where the remaining factors are the local basis vectors. Equivalently, using the exterior derivative, the curl can be expressed as:
formula_23
Here ♭ and ♯ are the musical isomorphisms, and ★ is the Hodge star operator. This formula shows how to calculate the curl of a vector field in any coordinate system, and how to extend the curl to any oriented three-dimensional Riemannian manifold. Since this depends on a choice of orientation, curl is a chiral operation. In other words, if the orientation is reversed, then the direction of the curl is also reversed.
Examples.
Example 1.
Suppose the vector field describes the velocity field of a fluid flow (such as a large tank of liquid or gas) and a small ball is located within the fluid or gas (the center of the ball being fixed at a certain point). If the ball has a rough surface, the fluid flowing past it will make it rotate. The rotation axis (oriented according to the right hand rule) points in the direction of the curl of the field at the center of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point.
The curl of the vector field at any point is given by the rotation of an infinitesimal area in the "xy"-plane (for "z"-axis component of the curl), "zx"-plane (for "y"-axis component of the curl) and "yz"-plane (for "x"-axis component of the curl vector). This can be seen in the examples below.
Example 2.
The vector field
formula_24
can be decomposed as
formula_25
Upon visual inspection, the field can be described as "rotating". If the vectors of the field were to represent a linear force acting on objects present at that point, and an object were to be placed inside the field, the object would start to rotate clockwise around itself. This is true regardless of where the object is placed.
Calculating the curl:
formula_26
The resulting vector field describing the curl would at all points be pointing in the negative direction. The results of this equation align with what could have been predicted with the right-hand rule in a right-handed coordinate system. Because the curl is a uniform vector field, the object described before would have the same rotational intensity regardless of where it was placed.
Example 3.
For the vector field
formula_27
the curl is not as obvious from the graph. However, taking the object from the previous example and placing it anywhere on a line with positive "x"-coordinate, the force exerted on its right side would be slightly greater than the force exerted on its left, causing it to rotate clockwise. Using the right-hand rule, it can be predicted that the resulting curl would point straight in the negative direction. Inversely, if placed on a line with negative "x"-coordinate, the object would rotate counterclockwise and the right-hand rule would give a positive direction.
Calculating the curl:
formula_28
The curl points in the negative direction when "x" is positive and vice versa. In this field, the intensity of rotation would be greater as the object moves away from the plane where "x" is zero.
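The qualitative behaviour described in both examples can be reproduced with SymPy for fields of the same type. The two expressions below are illustrative stand-ins chosen here; they are not necessarily the exact fields denoted by formula_24 and formula_27.

```python
# Curl of a uniform-rotation-type field and of a field whose strength grows with x.
from sympy.vector import CoordSys3D, curl

N = CoordSys3D('N')
uniform_rotation = N.y*N.i - N.x*N.j   # every point is pushed clockwise about the origin
x_dependent_shear = -N.x**2*N.j        # force along -y, growing with the square of x

print(curl(uniform_rotation))   # constant curl, pointing in the negative z direction
print(curl(x_dependent_shear))  # curl proportional to -x: negative for x > 0, positive for x < 0
```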
Identities.
In general curvilinear coordinates (not only in Cartesian coordinates), the curl of the cross product of two vector fields can be shown to be
formula_29
Interchanging the vector field and the del operator, we arrive at the cross product of a vector field with the curl of a vector field:
formula_30
where the subscripted del denotes the Feynman subscript notation, which considers only the variation due to the indicated vector field (i.e., in this case, the other vector field is treated as being constant in space).
Another example is the curl of a curl of a vector field. It can be shown that in general coordinates
formula_31
and this identity defines the vector Laplacian of the field.
The curl of the gradient of "any" scalar field is always the zero vector field
formula_32
which follows from the antisymmetry in the definition of the curl, and the symmetry of second derivatives.
The divergence of the curl of any vector field is equal to zero:
formula_33
If a scalar-valued function is multiplied by a vector field, then the curl of the product is given by
formula_34
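Both vanishing identities above hold for arbitrary smooth fields and can be confirmed symbolically. In the sketch below the scalar field and the vector-field components are generic undetermined functions introduced only for the check.

```python
# curl(grad f) = 0 and div(curl F) = 0 for generic smooth f and F.
from sympy import Function
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')
f = Function('f')(N.x, N.y, N.z)                     # arbitrary scalar field
F = (Function('P')(N.x, N.y, N.z)*N.i
     + Function('Q')(N.x, N.y, N.z)*N.j
     + Function('R')(N.x, N.y, N.z)*N.k)             # arbitrary vector field

print(curl(gradient(f)))      # zero vector: mixed partial derivatives cancel
print(divergence(curl(F)))    # zero: the same symmetry of second derivatives
```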
Generalizations.
The vector calculus operations of grad, curl, and div are most easily generalized in the context of differential forms, which involves a number of steps. In short, they correspond to the derivatives of 0-forms, 1-forms, and 2-forms, respectively. The geometric interpretation of curl as rotation corresponds to identifying bivectors (2-vectors) in 3 dimensions with the special orthogonal Lie algebra formula_35 of infinitesimal rotations (in coordinates, skew-symmetric 3 × 3 matrices), while representing rotations by vectors corresponds to identifying 1-vectors (equivalently, 2-vectors) and these all being 3-dimensional spaces.
Differential forms.
In 3 dimensions, a differential 0-form is a real-valued function formula_36; a differential 1-form is the following expression, where the coefficients are functions:
formula_37
a differential 2-form is the formal sum, again with function coefficients:
formula_38
and a differential 3-form is defined by a single term with one function as coefficient:
formula_39
The exterior derivative of a "k"-form in R3 is defined as the ("k" + 1)-form from above—and in R"n" if, e.g.,
formula_42
then the exterior derivative leads to
formula_43
The exterior derivative of a 1-form is therefore a 2-form, and that of a 2-form is a 3-form. On the other hand, because of the interchangeability of mixed derivatives,
formula_44
and antisymmetry,
formula_45
the twofold application of the exterior derivative yields formula_46 (the zero formula_47-form).
Thus, denoting the space of "k"-forms by formula_48 and the exterior derivative by "d", one gets a sequence:
formula_49
Here formula_50 is the space of sections of the exterior algebra formula_51 vector bundle over R"n", whose dimension is the binomial coefficient formula_52; note that formula_53 for formula_54 or formula_55. Writing only dimensions, one obtains a row of Pascal's triangle:
formula_56
the 1-dimensional fibers correspond to scalar fields, and the 3-dimensional fibers to vector fields, as described below. Modulo suitable identifications, the three nontrivial occurrences of the exterior derivative correspond to grad, curl, and div.
Differential forms and the differential can be defined on any Euclidean space, or indeed any manifold, without any notion of a Riemannian metric. On a Riemannian manifold, or more generally pseudo-Riemannian manifold, "k"-forms can be identified with "k"-vector fields ("k"-forms are "k"-covector fields, and a pseudo-Riemannian metric gives an isomorphism between vectors and covectors), and on an "oriented" vector space with a nondegenerate form (an isomorphism between vectors and covectors), there is an isomorphism between "k"-vectors and ("n" − "k")-vectors; in particular on (the tangent space of) an oriented pseudo-Riemannian manifold. Thus on an oriented pseudo-Riemannian manifold, one can interchange "k"-forms, "k"-vector fields, ("n" − "k")-forms, and ("n" − "k")-vector fields; this is known as Hodge duality. Concretely, on R3 this is given by the musical isomorphisms and the Hodge star operator.
Thus, identifying 0-forms and 3-forms with scalar fields, and 1-forms and 2-forms with vector fields, the three nontrivial occurrences of the exterior derivative correspond to grad, curl, and div: grad takes a scalar field (0-form) to a vector field (1-form), curl takes a vector field (1-form) to a pseudovector field (2-form), and div takes a pseudovector field (2-form) to a pseudoscalar field (3-form).
On the other hand, the fact that applying the exterior derivative twice yields zero corresponds to the identities
formula_57
for any scalar field , and
formula_58
for any vector field .
Grad and div generalize to all oriented pseudo-Riemannian manifolds, with the same geometric interpretation, because the spaces of 0-forms and "n"-forms at each point are always 1-dimensional and can be identified with scalar fields, while the spaces of 1-forms and ("n" − 1)-forms are always fiberwise "n"-dimensional and can be identified with vector fields.
Curl does not generalize in this way to 4 or more dimensions (or down to 2 or fewer dimensions); in 4 dimensions the dimensions are 1, 4, 6, 4, 1,
so the curl of a 1-vector field (fiberwise 4-dimensional) is a "2-vector field", which at each point belongs to a 6-dimensional vector space, and so one has
formula_59
which yields a sum of six independent terms, and cannot be identified with a 1-vector field. Nor can one meaningfully go from a 1-vector field to a 2-vector field to a 3-vector field (4 → 6 → 4), as taking the differential twice yields zero. Thus there is no curl function from vector fields to vector fields in other dimensions arising in this way.
However, one can define a curl of a vector field as a "2-vector field" in general, as described below.
Curl geometrically.
2-vectors correspond to the second exterior power; in the presence of an inner product, in coordinates these are the skew-symmetric matrices, which are geometrically considered as the special orthogonal Lie algebra of infinitesimal rotations. This has "n"("n" − 1)/2 dimensions, and allows one to interpret the differential of a 1-vector field as its infinitesimal rotations. Only in 3 dimensions (or trivially in 0 dimensions) does "n" equal "n"("n" − 1)/2, which is the most elegant and common case. In 2 dimensions the curl of a vector field is not a vector field but a function, as 2-dimensional rotations are given by an angle (a scalar – an orientation is required to choose whether one counts clockwise or counterclockwise rotations as positive); this is not the div, but is rather perpendicular to it. In 3 dimensions the curl of a vector field is a vector field as is familiar (in 1 and 0 dimensions the curl of a vector field is 0, because there are no non-trivial 2-vectors), while in 4 dimensions the curl of a vector field is, geometrically, at each point an element of the 6-dimensional Lie algebra of infinitesimal rotations.
The curl of a 3-dimensional vector field which only depends on 2 coordinates (say "x" and "y") is simply a vertical vector field (in the "z" direction) whose magnitude is the curl of the 2-dimensional vector field, as in the examples on this page.
Considering curl as a 2-vector field (an antisymmetric 2-tensor) has been used to generalize vector calculus and associated physics to higher dimensions.
Inverse.
In the case where the divergence of a vector field "V" is zero, a vector field "W" exists such that "V" is the curl of "W". This is why the magnetic field, characterized by zero divergence, can be expressed as the curl of a magnetic vector potential.
If "W" is a vector field whose curl is "V", then adding any gradient vector field to "W" will result in another vector field whose curl is still "V". This can be summarized by saying that the inverse curl of a three-dimensional vector field can be obtained up to an unknown irrotational field with the Biot–Savart law.
|
6125
|
7098284
|
https://en.wikipedia.org/wiki?curid=6125
|
Carl Friedrich Gauss
|
Johann Carl Friedrich Gauss (30 April 1777 – 23 February 1855) was a German mathematician, astronomer, geodesist, and physicist, who contributed to many fields in mathematics and science. He was director of the Göttingen Observatory in Germany and professor of astronomy from 1807 until his death in 1855.
While studying at the University of Göttingen, he propounded several mathematical theorems. As an independent scholar, he wrote the masterpieces "Disquisitiones Arithmeticae" and "Theoria motus corporum coelestium". Gauss produced the second and third complete proofs of the fundamental theorem of algebra. In number theory, he made numerous contributions, such as the composition law, the law of quadratic reciprocity and one case of the Fermat polygonal number theorem. He also contributed to the theory of binary and ternary quadratic forms, the construction of the heptadecagon, and the theory of hypergeometric series. Due to Gauss' extensive and fundamental contributions to science and mathematics, more than 100 mathematical and scientific concepts are named after him.
Gauss was instrumental in the identification of Ceres as a dwarf planet. His work on the motion of planetoids disturbed by large planets led to the introduction of the Gaussian gravitational constant and the method of least squares, which he had discovered before Adrien-Marie Legendre published it. Gauss led the geodetic survey of the Kingdom of Hanover together with an arc measurement project from 1820 to 1844; he was one of the founders of geophysics and formulated the fundamental principles of magnetism. His practical work led to the invention of the heliotrope in 1821, a magnetometer in 1833 and – with Wilhelm Eduard Weber – the first electromagnetic telegraph in 1833.
Gauss was the first to discover and study non-Euclidean geometry, which he also named. He developed a fast Fourier transform some 160 years before John Tukey and James Cooley.
Gauss refused to publish incomplete work and left several works to be edited posthumously. He believed that the act of learning, not possession of knowledge, provided the greatest enjoyment. Gauss was not a committed or enthusiastic teacher, generally preferring to focus on his own work. Nevertheless, some of his students, such as Dedekind and Riemann, became well-known and influential mathematicians in their own right.
Biography.
Youth and education.
Gauss was born on 30 April 1777 in Brunswick, in the Duchy of Brunswick-Wolfenbüttel (now in the German state of Lower Saxony). His family was of relatively low social status. His father Gebhard Dietrich Gauss (1744–1808) worked variously as a butcher, bricklayer, gardener, and treasurer of a death-benefit fund. Gauss characterized his father as honourable and respected, but rough and dominating at home. He was experienced in writing and calculating, whereas his second wife Dorothea, Carl Friedrich's mother, was nearly illiterate. He had one elder brother from his father's first marriage.
Gauss was a child prodigy in mathematics. When the elementary teachers noticed his intellectual abilities, they brought him to the attention of the Duke of Brunswick who sent him to the local "Collegium Carolinum", which he attended from 1792 to 1795 with Eberhard August Wilhelm von Zimmermann as one of his teachers. Thereafter the Duke granted him the resources for studies of mathematics, sciences, and classical languages at the University of Göttingen until 1798. His professor in mathematics was Abraham Gotthelf Kästner, whom Gauss called "the leading mathematician among poets, and the leading poet among mathematicians" because of his epigrams. Astronomy was taught by Karl Felix Seyffer, with whom Gauss stayed in correspondence after graduation; Olbers and Gauss mocked him in their correspondence. On the other hand, he thought highly of Georg Christoph Lichtenberg, his teacher of physics, and of Christian Gottlob Heyne, whose lectures in classics Gauss attended with pleasure. Fellow students of this time were Johann Friedrich Benzenberg, Farkas Bolyai, and Heinrich Wilhelm Brandes.
He was likely a self-taught student in mathematics since he independently rediscovered several theorems. He solved a geometrical problem that had occupied mathematicians since the Ancient Greeks when he determined in 1796 which regular polygons can be constructed by compass and straightedge. This discovery ultimately led Gauss to choose mathematics instead of philology as a career. Gauss's mathematical diary, a collection of short remarks about his results from the years 1796 until 1814, shows that many ideas for his mathematical magnum opus Disquisitiones Arithmeticae (1801) date from this time.
As an elementary student, Gauss and his class were tasked by their teacher, J.G. Büttner, to sum the numbers from 1 to 100. Much to Büttner's surprise, Gauss replied with the correct answer of 5050 in a vastly faster time than expected. Gauss had realised that the sum could be rearranged as 50 pairs of 101 (1+100=101, 2+99=101, etc.). Thus, he simply multiplied 50 by 101. Other accounts state that he computed the sum as 100 sets of 101 and divided by 2.
Private scholar.
Gauss graduated as a Doctor of Philosophy in 1799, not in Göttingen, as is sometimes stated, but at the Duke of Brunswick's special request from the University of Helmstedt, the only state university of the duchy. Johann Friedrich Pfaff assessed his doctoral thesis, and Gauss got the degree "in absentia" without further oral examination. The Duke then granted him the cost of living as a private scholar in Brunswick. Gauss subsequently refused calls from the Russian Academy of Sciences in St. Petersburg and Landshut University. Later, the Duke promised him the foundation of an observatory in Brunswick in 1804. Architect Peter Joseph Krahe made preliminary designs, but one of Napoleon's wars cancelled those plans: the Duke was killed in the battle of Jena in 1806. The duchy was abolished in the following year, and Gauss's financial support stopped.
When Gauss was calculating asteroid orbits in the first years of the century, he established contact with the astronomical communities of Bremen and Lilienthal, especially Wilhelm Olbers, Karl Ludwig Harding, and Friedrich Wilhelm Bessel, forming part of the informal group of astronomers known as the Celestial police. One of their aims was the discovery of further planets. They assembled data on asteroids and comets as a basis for Gauss's research on their orbits, which he later published in his astronomical magnum opus "Theoria motus corporum coelestium" (1809).
Professor in Göttingen.
In November 1807, Gauss was hired by the University of Göttingen, then an institution of the newly founded Kingdom of Westphalia under Jérôme Bonaparte, as full professor and director of the astronomical observatory, and kept the chair until his death in 1855. He was soon confronted with the demand for two thousand francs from the Westphalian government as a war contribution, which he could not afford to pay. Both Olbers and Laplace wanted to help him with the payment, but Gauss refused their assistance. Finally, an anonymous person from Frankfurt, later discovered to be Prince-primate Dalberg, paid the sum.
Gauss took on the directorship of the 60-year-old observatory, founded in 1748 by Prince-elector George II and built on a converted fortification tower, with usable, but partly out-of-date instruments. The construction of a new observatory had been approved by Prince-elector George III in principle since 1802, and the Westphalian government continued the planning, but Gauss could not move to his new place of work until September 1816. He got new up-to-date instruments, including two meridian circles from Repsold and Reichenbach, and a heliometer from Fraunhofer.
The scientific activity of Gauss, besides pure mathematics, can be roughly divided into three periods: astronomy was the main focus in the first two decades of the 19th century, geodesy in the third decade, and physics, mainly magnetism, in the fourth decade.
Gauss made no secret of his aversion to giving academic lectures. But from the start of his academic career at Göttingen, he continuously gave lectures until 1854. He often complained about the burdens of teaching, feeling that it was a waste of his time. On the other hand, he occasionally described some students as talented. Most of his lectures dealt with astronomy, geodesy, and applied mathematics, and only three lectures on subjects of pure mathematics. Some of Gauss's students went on to become renowned mathematicians, physicists, and astronomers: Moritz Cantor, Dedekind, Dirksen, Encke, Gould, Heine, Klinkerfues, Kupffer, Listing, Möbius, Nicolai, Riemann, Ritter, Schering, Scherk, Schumacher, von Staudt, Stern, Ursin; as geoscientists Sartorius von Waltershausen, and Wappäus.
Gauss did not write any textbook and disliked the popularization of scientific matters. His only attempts at popularization were his works on the date of Easter (1800/1802) and the essay "Erdmagnetismus und Magnetometer" of 1836. Gauss published his papers and books exclusively in Latin or German. He wrote Latin in a classical style but used some customary modifications set by contemporary mathematicians.
Gauss gave his inaugural lecture at Göttingen University in 1808. He described his approach to astronomy as based on reliable observations and accurate calculations, rather than on belief or empty hypothesizing. At university, he was accompanied by a staff of other lecturers in his disciplines, who completed the educational program; these included the mathematician Thibaut with his lectures, the physicist Mayer, known for his textbooks, his successor Weber since 1831, and in the observatory Harding, who took the main part of lectures in practical astronomy. When the observatory was completed, Gauss occupied the western wing of the new observatory, while Harding took the eastern. They had once been on friendly terms, but over time they became alienated, possibly – as some biographers presume – because Gauss had wished the equal-ranked Harding to be no more than his assistant or observer. Gauss used the new meridian circles nearly exclusively, and kept them away from Harding, except for a few rare joint observations.
Brendel subdivides Gauss's astronomic activity chronologically into seven periods, of which the years since 1820 are taken as a "period of lower astronomical activity". The new, well-equipped observatory did not work as effectively as other ones; Gauss's astronomical research had the character of a one-man enterprise without a long-time observation program, and the university established a place for an assistant only after Harding died in 1834.
Nevertheless, Gauss twice refused the opportunity to solve the problem, turning down offers from Berlin in 1810 and 1825 to become a full member of the Prussian Academy without burdening lecturing duties, as well as from Leipzig University in 1810 and from Vienna University in 1842, perhaps because of the family's difficult situation. Gauss's salary was raised from 1000 Reichsthaler in 1810 to 2500 Reichsthaler in 1824, and in his later years he was one of the best-paid professors of the university.
When Gauss was asked for help by his colleague and friend Friedrich Wilhelm Bessel in 1810, who was in trouble at Königsberg University because of his lack of an academic title, Gauss provided a doctorate "honoris causa" for Bessel from the Philosophy Faculty of Göttingen in March 1811. Gauss gave another recommendation for an honorary degree for Sophie Germain but only shortly before her death, so she never received it. He also gave successful support to the mathematician Gotthold Eisenstein in Berlin.
Gauss was loyal to the House of Hanover. After King William IV died in 1837, the new Hanoverian King Ernest Augustus annulled the 1833 constitution. Seven professors, later known as the "Göttingen Seven", protested against this, among them his friend and collaborator Wilhelm Weber and Gauss's son-in-law Heinrich Ewald. All of them were dismissed, and three of them were expelled, but Ewald and Weber could stay in Göttingen. Gauss was deeply affected by this quarrel but saw no possibility to help them.
Gauss took part in academic administration: three times he was elected as dean of the Faculty of Philosophy. Being entrusted with the widow's pension fund of the university, he dealt with actuarial science and wrote a report on the strategy for stabilizing the benefits. He was appointed director of the Royal Academy of Sciences in Göttingen for nine years.
Gauss remained mentally active into his old age, even while suffering from gout and general unhappiness. On 23 February 1855, he died of a heart attack in Göttingen; and was interred in the Albani Cemetery there. Heinrich Ewald, Gauss's son-in-law, and Wolfgang Sartorius von Waltershausen, Gauss's close friend and biographer, gave eulogies at his funeral.
Gauss was a successful investor and accumulated considerable wealth with stocks and securities, amounting to a value of more than 150,000 Thaler; after his death, about 18,000 Thaler were found hidden in his rooms.
Gauss's brain.
The day after Gauss's death his brain was removed, preserved, and studied by Rudolf Wagner, who found its mass to be slightly above average. Wagner's son Hermann, a geographer, estimated the cerebral area in his doctoral thesis. In 2013, a neurobiologist at the Max Planck Institute for Biophysical Chemistry in Göttingen discovered that Gauss's brain had been mixed up soon after the first investigations, due to mislabelling, with that of the physician Conrad Heinrich Fuchs, who died in Göttingen a few months after Gauss. A further investigation showed no remarkable anomalies in the brains of either person. Thus, all investigations of Gauss's brain until 1998, except the first ones of Rudolf and Hermann Wagner, actually refer to the brain of Fuchs.
Family.
Gauss married Johanna Osthoff on 9 October 1805 in St. Catherine's church in Brunswick. They had two sons and one daughter: Joseph (1806–1873), Wilhelmina (1808–1840), and Louis (1809–1810). Johanna died on 11 October 1809, one month after the birth of Louis, who himself died a few months later. Gauss chose the first names of his children in honour of Giuseppe Piazzi, Wilhelm Olbers, and Karl Ludwig Harding, the discoverers of the first asteroids.
On 4 August 1810, Gauss married Wilhelmine (Minna) Waldeck, a friend of his first wife, with whom he had three more children: Eugen (later Eugene) (1811–1896), Wilhelm (later William) (1813–1879), and Therese (1816–1864). Minna Gauss died on 12 September 1831 after being seriously ill for more than a decade. Therese then took over the household and cared for Gauss for the rest of his life; after her father's death, she married actor Constantin Staufenau. Her sister Wilhelmina married the orientalist Heinrich Ewald. Gauss's mother Dorothea lived in his house from 1817 until she died in 1839.
The eldest son Joseph, while still a schoolboy, helped his father as an assistant during the survey campaign in the summer of 1821. After a short time at university, in 1824 Joseph joined the Hanoverian army and assisted in surveying again in 1829. In the 1830s he was responsible for the enlargement of the survey network into the western parts of the kingdom. With his geodetical qualifications, he left the service and engaged in the construction of the railway network as director of the Royal Hanoverian State Railways. In 1836 he studied the railroad system in the US for some months.
Eugen left Göttingen in September 1830 and emigrated to the United States, where he spent five years with the army. He then worked for the American Fur Company in the Midwest. He later moved to Missouri and became a successful businessman. Wilhelm married a niece of the astronomer Bessel; he then moved to Missouri, started as a farmer and became wealthy in the shoe business in St. Louis in later years. Eugene and William have numerous descendants in America, but the Gauss descendants left in Germany all derive from Joseph, as the daughters had no children.
Personality.
Scholar.
In the first two decades of the 19th century, Gauss was the only important mathematician in Germany comparable to the leading French mathematicians. His "Disquisitiones Arithmeticae" was the first mathematical book from Germany to be translated into the French language.
Gauss was "in front of the new development" with documented research since 1799, his wealth of new ideas, and his rigour of demonstration. In contrast to previous mathematicians like Leonhard Euler, who let their readers take part in their reasoning, including certain erroneous deviations from the correct path, Gauss introduced a new style of direct and complete exposition that did not attempt to show the reader the author's train of thought.
But for himself, he propagated a quite different ideal, given in a letter to Farkas Bolyai as follows:
His posthumous papers, his scientific diary, and short glosses in his own textbooks show that he empirically worked to a great extent. He was a lifelong busy and enthusiastic calculator, working extraordinarily quickly and checking his results through estimation. Nevertheless, his calculations were not always free from mistakes. He coped with the enormous workload by using skillful tools. Gauss used numerous mathematical tables, examined their exactness, and constructed new tables on various matters for personal use. He developed new tools for effective calculation, for example the Gaussian elimination. Gauss's calculations and the tables he prepared were often more precise than practically necessary. Very likely, this method gave him additional material for his theoretical work.
Gauss was only willing to publish work when he considered it complete and above criticism. This perfectionism was in keeping with the motto of his personal seal ("Few, but Ripe"). Many colleagues encouraged him to publicize new ideas and sometimes rebuked him if he hesitated too long, in their opinion. Gauss defended himself by claiming that the initial discovery of ideas was easy, but preparing a presentable elaboration was a demanding matter for him, for either lack of time or "serenity of mind". Nevertheless, he published many short communications of urgent content in various journals, but left a considerable literary estate, too. Gauss referred to mathematics as "the queen of sciences" and arithmetic as "the queen of mathematics", and supposedly once maintained that immediately understanding Euler's identity is a benchmark of a first-class mathematician.
On certain occasions, Gauss claimed that the ideas of another scholar had already been in his possession previously. Thus his concept of priority as "the first to discover, not the first to publish" differed from that of his scientific contemporaries. In contrast to his perfectionism in presenting mathematical ideas, his citations were criticized as negligent. He justified himself with an unusual view of correct citation practice: he would only give complete references, with respect to the previous authors of importance, which no one should ignore, but citing in this way would require knowledge of the history of science and more time than he wished to spend.
Private man.
Soon after Gauss's death, his friend Sartorius published the first biography (1856), written in a rather enthusiastic style. Sartorius saw him as a serene and forward-striving man with childlike modesty, but also of "iron character" with an unshakeable strength of mind. Apart from his closer circle, others regarded him as reserved and unapproachable "like an Olympian sitting enthroned on the summit of science". His close contemporaries agreed that Gauss was a man of difficult character. He often refused to accept compliments. His visitors were occasionally irritated by his grumpy behaviour, but a short time later his mood could change, and he would become a charming, open-minded host. Gauss disliked polemic natures; together with his colleague Hausmann he opposed a call for Justus Liebig to a university chair in Göttingen, "because he was always involved in some polemic."
Gauss's life was overshadowed by severe problems in his family. When his first wife Johanna suddenly died shortly after the birth of their third child, he revealed the grief in a last letter to his dead wife in the style of an ancient threnody, the most personal of his surviving documents. His second wife and his two daughters suffered from tuberculosis. In a letter to Bessel, dated December 1831, Gauss hinted at his distress, describing himself as "the victim of the worst domestic sufferings".
Because of his wife's illness, both younger sons were educated for some years in Celle, far from Göttingen. The military career of his elder son Joseph ended after more than two decades at the poorly paid rank of first lieutenant, although he had acquired a considerable knowledge of geodesy. He needed financial support from his father even after he was married. The second son Eugen shared a good measure of his father's talent in computation and languages but had a lively and sometimes rebellious character. He wanted to study philology, whereas Gauss wanted him to become a lawyer. Having run up debts and caused a scandal in public, Eugen suddenly left Göttingen under dramatic circumstances in September 1830 and emigrated via Bremen to the United States. He wasted the little money he had taken to start, after which his father refused further financial support. The youngest son Wilhelm wanted to qualify for agricultural administration, but had difficulties getting an appropriate education, and eventually emigrated as well. Only Gauss's youngest daughter Therese accompanied him in his last years of life.
In his later years Gauss habitually collected various types of useful or useless numerical data, such as the number of paths from his home to certain places in Göttingen or people's ages in days; he congratulated Humboldt in December 1851 for having reached the same age as Isaac Newton at his death, calculated in days.
Beyond his excellent knowledge of Latin, he was also acquainted with modern languages. Gauss read both classical and modern literature, and English and French works in the original languages. His favorite English author was Walter Scott, his favorite German Jean Paul. At the age of 62, he began to teach himself Russian, very likely to understand scientific writings from Russia, among them those of Lobachevsky on non-Euclidean geometry. Gauss liked singing and went to concerts. He was a busy newspaper reader; in his last years, he would visit an academic press salon of the university every noon. Gauss did not care much for philosophy, and mocked the "splitting hairs of the so-called metaphysicians", by which he meant proponents of the contemporary school of "Naturphilosophie".
Gauss had an "aristocratic and through and through conservative nature", with little respect for people's intelligence and morals, following the motto "mundus vult decipi". He disliked Napoleon and his system and was horrified by violence and revolution of all kinds. Thus he condemned the methods of the Revolutions of 1848, though he agreed with some of their aims, such as that of a unified Germany. He had a low estimation of the constitutional system and he criticized parliamentarians of his time for their perceived ignorance and logical errors.
Some Gauss biographers have speculated on his religious beliefs. He sometimes said "God arithmetizes" and "I succeeded – not on account of my hard efforts, but by the grace of the Lord." Gauss was a member of the Lutheran church, like most of the population in northern Germany, but it seems that he did not believe all Lutheran dogma or understand the Bible fully literally. According to Sartorius, Gauss' religious tolerance, "insatiable thirst for truth" and sense of justice were motivated by his religious convictions.
Mathematics.
Algebra and number theory.
Fundamental theorem of algebra.
In his doctoral thesis from 1799, Gauss proved the fundamental theorem of algebra which states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. Mathematicians including Jean le Rond d'Alembert had produced false proofs before him, and Gauss's dissertation contains a critique of d'Alembert's work. He subsequently produced three other proofs, the last one in 1849 being generally rigorous. His attempts led to considerable clarification of the concept of complex numbers.
"Disquisitiones Arithmeticae".
In the preface to the "Disquisitiones", Gauss dates the beginning of his work on number theory to 1795. By studying the works of previous mathematicians like Fermat, Euler, Lagrange, and Legendre, he realized that these scholars had already found much of what he had independently discovered. The "Disquisitiones Arithmeticae", written in 1798 and published in 1801, consolidated number theory as a discipline and covered both elementary and algebraic number theory. Therein he introduces the triple bar symbol () for congruence and uses it for a clean presentation of modular arithmetic. It deals with the unique factorization theorem and primitive roots modulo n. In the main sections, Gauss presents the first two proofs of the law of quadratic reciprocity and develops the theories of binary and ternary quadratic forms.
The "Disquisitiones" include the Gauss composition law for binary quadratic forms, as well as the enumeration of the number of representations of an integer as the sum of three squares. As an almost immediate corollary of his theorem on three squares, he proves the triangular case of the Fermat polygonal number theorem for "n" = 3. From several analytic results on class numbers that Gauss gives without proof towards the end of the fifth section, it appears that Gauss already knew the class number formula in 1801.
In the last section, Gauss gives proof for the constructibility of a regular heptadecagon (17-sided polygon) with straightedge and compass by reducing this geometrical problem to an algebraic one. He shows that a regular polygon is constructible if the number of its sides is either a power of 2 or the product of a power of 2 and any number of distinct Fermat primes. In the same section, he gives a result on the number of solutions of certain cubic polynomials with coefficients in finite fields, which amounts to counting integral points on an elliptic curve. An unfinished chapter, consisting of work done during 1797–1799, was found among his papers after his death.
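The constructibility criterion lends itself to a direct computational check. The sketch below tests small polygon side counts against the list of Fermat primes known today, which is an assumption of the example rather than part of Gauss's statement.

```python
# Gauss's criterion: an n-gon is constructible iff n is a power of 2 times
# a product of distinct Fermat primes. Only five Fermat primes are known.
KNOWN_FERMAT_PRIMES = [3, 5, 17, 257, 65537]

def constructible(n):
    while n % 2 == 0:          # strip the power-of-two part
        n //= 2
    for p in KNOWN_FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:     # a repeated Fermat prime is not allowed
                return False
    return n == 1

print([n for n in range(3, 30) if constructible(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24]
```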
Further investigations.
One of Gauss's first results was the empirically found conjecture of 1792 – later called the prime number theorem – giving an estimate of the number of prime numbers by using the integral logarithm.
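A rough numerical comparison illustrates the estimate; the sample arguments below are chosen arbitrarily for the illustration.

```python
# Compare the prime-counting function with the integral logarithm (SymPy).
import sympy as sp

for n in [10**3, 10**4, 10**5]:
    print(n, sp.primepi(n), float(sp.li(n)))   # li(n) slightly overshoots the count here
```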
In 1816, Olbers encouraged Gauss to compete for a prize from the French Academy for a proof for Fermat's Last Theorem; he refused, considering the topic uninteresting. However, after his death a short undated paper was found with proofs of the theorem for the cases "n" = 3 and "n" = 5. The particular case of "n" = 3 was proved much earlier by Leonhard Euler, but Gauss developed a more streamlined proof which made use of Eisenstein integers; though more general, the proof was simpler than in the real integers case.
Gauss contributed to solving the Kepler conjecture in 1831 with the proof that the greatest packing density of spheres in three-dimensional space is attained when the centres of the spheres form a face-centred cubic arrangement; he found this while reviewing a book by Ludwig August Seeber on the theory of reduction of positive ternary quadratic forms. Having noticed some gaps in Seeber's proof, he simplified many of his arguments, proved the central conjecture, and remarked that this theorem is equivalent to the Kepler conjecture for regular arrangements.
In two papers on biquadratic residues (1828, 1832) Gauss introduced the ring of Gaussian integers formula_1, showed that it is a unique factorization domain, and generalized some key arithmetic concepts, such as Fermat's little theorem and Gauss's lemma. The main objective of introducing this ring was to formulate the law of biquadratic reciprocity – as Gauss discovered, rings of complex integers are the natural setting for such higher reciprocity laws.
In the second paper, he stated the general law of biquadratic reciprocity and proved several special cases of it. In an earlier publication from 1818 containing his fifth and sixth proofs of quadratic reciprocity, he claimed the techniques of these proofs (Gauss sums) can be applied to prove higher reciprocity laws.
Analysis.
One of Gauss's first discoveries was the notion of the arithmetic-geometric mean (AGM) of two positive real numbers. He discovered its relation to elliptic integrals in the years 1798–1799 through Landen's transformation, and a diary entry recorded the discovery of the connection of Gauss's constant to lemniscatic elliptic functions, a result that Gauss stated "will surely open an entirely new field of analysis". He also made early inroads into the more formal issues of the foundations of complex analysis, and from a letter to Bessel in 1811 it is clear that he knew the "fundamental theorem of complex analysis" – Cauchy's integral theorem – and understood the notion of complex residues when integrating around poles.
Euler's pentagonal numbers theorem, together with other researches on the AGM and lemniscatic functions, led him to plenty of results on Jacobi theta functions, culminating in the discovery in 1808 of the later called Jacobi triple product identity, which includes Euler's theorem as a special case. His works show that he knew modular transformations of order 3, 5, 7 for elliptic functions since 1808.
Several mathematical fragments in his Nachlass indicate that he knew parts of the modern theory of modular forms. In his work on the multivalued AGM of two complex numbers, he discovered a deep connection between the infinitely many values of the AGM and its two "simplest values". In his unpublished writings he recognized and made a sketch of the key concept of fundamental domain for the modular group. One of Gauss's sketches of this kind was a drawing of a tessellation of the unit disk by "equilateral" hyperbolic triangles with all angles equal to formula_2.
An example of Gauss's insight in analysis is the cryptic remark that the principles of circle division by compass and straightedge can also be applied to the division of the lemniscate curve, which inspired Abel's theorem on lemniscate division. Another example is his publication "Summatio quarundam serierum singularium" (1811) on the determination of the sign of quadratic Gauss sums, in which he solved the main problem by introducing q-analogs of binomial coefficients and manipulating them by several original identities that seem to stem from his work on elliptic function theory; however, Gauss cast his argument in a formal way that does not reveal its origin in elliptic function theory, and only the later work of mathematicians such as Jacobi and Hermite has exposed the crux of his argument.
In the "Disquisitiones generales circa series infinitam..." (1813), he provides the first systematic treatment of the general hypergeometric function formula_3, and shows that many of the functions known at the time are special cases of the hypergeometric function. This work is the first exact inquiry into convergence of infinite series in the history of mathematics. Furthermore, it deals with infinite continued fractions arising as ratios of hypergeometric functions, which are now called Gauss continued fractions.
In 1823, Gauss won the prize of the Danish Society with an essay on conformal mappings, which contains several developments that pertain to the field of complex analysis. Gauss stated that angle-preserving mappings in the complex plane must be complex analytic functions, and used the later-named Beltrami equation to prove the existence of isothermal coordinates on analytic surfaces. The essay concludes with examples of conformal mappings into a sphere and an ellipsoid of revolution.
Numerical analysis.
Gauss often deduced theorems inductively from numerical data he had collected empirically. As such, the use of efficient algorithms to facilitate calculations was vital to his research, and he made many contributions to numerical analysis, such as the method of Gaussian quadrature, published in 1816.
In a private letter to Gerling from 1823, he described a solution of a 4x4 system of linear equations with the Gauss-Seidel method – an "indirect" iterative method for the solution of linear systems, and recommended it over the usual method of "direct elimination" for systems of more than two equations.
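The iteration Gauss described can be sketched in a few lines. The small diagonally dominant system below is an illustrative stand-in, not the 4x4 system from the letter to Gerling.

```python
# Gauss-Seidel iteration: sweep through the unknowns, always reusing the
# components that have already been updated in the current sweep.
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 6.0, 2.0])
x = np.zeros_like(b)

for _ in range(25):                              # fixed number of sweeps for brevity
    for i in range(len(b)):
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = (b[i] - s) / A[i, i]

print(x, np.allclose(A @ x, b))                  # converges to [1, 2, 1]
```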
Gauss invented an algorithm for calculating what is now called discrete Fourier transforms when calculating the orbits of Pallas and Juno in 1805, 160 years before Cooley and Tukey found their similar Cooley–Tukey algorithm. He developed it as a trigonometric interpolation method, but the paper "Theoria Interpolationis Methodo Nova Tractata" was published only posthumously in 1876, well after Joseph Fourier's introduction of the subject in 1807.
Geometry.
Differential geometry.
The geodetic survey of Hanover fuelled Gauss's interest in differential geometry and topology, fields of mathematics dealing with curves and surfaces. This led him in 1828 to the publication of a work that marks the birth of modern differential geometry of surfaces, as it departed from the traditional ways of treating surfaces as Cartesian graphs of functions of two variables, and that initiated the exploration of surfaces from the "inner" point of view of a two-dimensional being constrained to move on them. As a result, the Theorema Egregium ("remarkable theorem") established a property of the notion of Gaussian curvature. Informally, the theorem says that the curvature of a surface can be determined entirely by measuring angles and distances on the surface, regardless of the embedding of the surface in three-dimensional or two-dimensional space.
The Theorema Egregium leads to the abstraction of surfaces as doubly-extended manifolds; it clarifies the distinction between the intrinsic properties of the manifold (the metric) and its physical realization in ambient space. A consequence is the impossibility of an isometric transformation between surfaces of different Gaussian curvature. This means practically that a sphere or an ellipsoid cannot be transformed to a plane without distortion, which causes a fundamental problem in designing projections for geographical maps. A portion of this essay is dedicated to a profound study of geodesics. In particular, Gauss proves the local Gauss–Bonnet theorem on geodesic triangles, and generalizes Legendre's theorem on spherical triangles to geodesic triangles on arbitrary surfaces with continuous curvature; he found that the angles of a "sufficiently small" geodesic triangle deviate from those of a planar triangle with the same sides in a way that depends only on the values of the surface curvature at the vertices of the triangle, regardless of the behaviour of the surface in the triangle interior.
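In modern notation, the local Gauss–Bonnet theorem mentioned above can be stated as follows: for a geodesic triangle $T$ with interior angles $\alpha$, $\beta$, $\gamma$ on a surface with Gaussian curvature $K$,

$$\alpha + \beta + \gamma = \pi + \iint_{T} K \, dA,$$

so the angular excess over $\pi$ equals the total curvature enclosed by the triangle.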
Gauss's memoir from 1828 lacks the conception of geodesic curvature. However, in a previously unpublished manuscript, very likely written in 1822–1825, he introduced the term "side curvature" (German: "Seitenkrümmung") and proved its invariance under isometric transformations, a result that was later obtained by Ferdinand Minding and published by him in 1830. This Gauss paper contains not only the core of his lemma on total curvature, but also its generalization, found and proved by Pierre Ossian Bonnet in 1848 and known as the Gauss–Bonnet theorem.
Non-Euclidean geometry.
During Gauss's lifetime, the parallel postulate of Euclidean geometry was heavily discussed. Numerous efforts were made to prove it within the frame of the Euclidean axioms, whereas some mathematicians discussed the possibility of geometrical systems without it. Gauss thought about the basics of geometry from the 1790s on, but only realized in the 1810s that a non-Euclidean geometry without the parallel postulate could solve the problem. In a letter to Franz Taurinus of 1824, he presented a short, comprehensible outline of what he named a "non-Euclidean geometry", but he strongly forbade Taurinus from making any use of it. Gauss is credited with having been the first to discover and study non-Euclidean geometry, and with coining the term itself.
The first publications on non-Euclidean geometry in the history of mathematics were authored by Nikolai Lobachevsky in 1829 and János Bolyai in 1832. In the following years, Gauss wrote down his ideas on the topic but did not publish them, thus avoiding influencing the contemporary scientific discussion. Gauss commended the ideas of János Bolyai in a letter to his father and university friend Farkas Bolyai, claiming that they coincided with his own thoughts of some decades earlier. However, it is not quite clear to what extent he preceded Lobachevsky and Bolyai, as his written remarks are vague and obscure.
Sartorius first mentioned Gauss's work on non-Euclidean geometry in 1856, but only the publication of Gauss's Nachlass in Volume VIII of the Collected Works (1900) showed Gauss's ideas on the matter, at a time when non-Euclidean geometry was still an object of some controversy.
Early topology.
Gauss was also an early pioneer of topology or "Geometria Situs", as it was called in his lifetime. The first proof of the fundamental theorem of algebra in 1799 contained an essentially topological argument; fifty years later, he further developed the topological argument in his fourth proof of this theorem.
Another encounter with topological notions occurred in the course of his astronomical work in 1804, when he determined the limits of the region on the celestial sphere in which comets and asteroids might appear, which he termed the "Zodiacus". He discovered that if the Earth's and the comet's orbits are linked, then for topological reasons the Zodiacus is the entire sphere. In 1848, in the context of the discovery of the asteroid 7 Iris, he published a further qualitative discussion of the Zodiacus.
In his letters of 1820–1830, Gauss thought intensively about topics with close affinity to Geometria Situs, and became gradually conscious of the semantic difficulties in this field. Fragments from this period reveal that he tried to classify "tract figures", closed plane curves with a finite number of transverse self-intersections, which may also be planar projections of knots. To do so he devised a symbolical scheme, the Gauss code, that in a sense captured the characteristic features of tract figures.
In a fragment from 1833, Gauss defined the linking number of two space curves by a certain double integral, and in doing so provided for the first time an analytical formulation of a topological phenomenon. In the same note, he lamented the little progress made in Geometria Situs, and remarked that one of its central problems would be "to count the intertwinings of two closed or infinite curves". His notebooks from that period reveal that he was also thinking about other topological objects such as braids and tangles.
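In modern vector notation, Gauss's double integral for the linking number of two disjoint closed space curves $\gamma_1$ and $\gamma_2$ is usually written

$$\operatorname{Lk}(\gamma_1,\gamma_2) = \frac{1}{4\pi} \oint_{\gamma_1}\oint_{\gamma_2} \frac{(\mathbf{r}_1 - \mathbf{r}_2)\cdot(d\mathbf{r}_1 \times d\mathbf{r}_2)}{|\mathbf{r}_1 - \mathbf{r}_2|^{3}},$$

an integer that is invariant under continuous deformations that keep the two curves disjoint.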
Gauss's influence in later years on the emerging field of topology, which he held in high esteem, came through occasional remarks and oral communications to Möbius and Listing.
Minor mathematical accomplishments.
Gauss applied the concept of complex numbers to solve well-known problems in a new concise way. For example, in a short note from 1836 on geometric aspects of the ternary forms and their application to crystallography, he stated the fundamental theorem of axonometry, which tells how to represent a 3D cube on a 2D plane with complete accuracy, via complex numbers. He described rotations of the sphere as the action of certain linear fractional transformations on the extended complex plane, and gave a proof of the geometric theorem that the altitudes of a triangle always meet in a single orthocenter.
Gauss was concerned with John Napier's "Pentagramma mirificum" – a certain spherical pentagram – for several decades; he approached it from various points of view, and gradually gained a full understanding of its geometric, algebraic, and analytic aspects. In particular, in 1843 he stated and proved several theorems connecting elliptic functions, Napier spherical pentagons, and Poncelet pentagons in the plane.
Furthermore, he contributed a solution to the problem of constructing the largest-area ellipse inside a given quadrilateral, and discovered a surprising result about the computation of the area of pentagons.
Sciences.
Astronomy.
On 1 January 1801, Italian astronomer Giuseppe Piazzi discovered a new celestial object, presumed it to be the long-sought planet between Mars and Jupiter according to the so-called Titius–Bode law, and named it Ceres. He could track it only for a short time until it disappeared behind the glare of the Sun. The mathematical tools of the time were not sufficient to predict the location of its reappearance from the few data available. Gauss tackled the problem and predicted a position for possible rediscovery in December 1801. This turned out to be accurate within a half-degree when Franz Xaver von Zach on 7 and 31 December at Gotha, and independently Heinrich Olbers on 1 and 2 January in Bremen, identified the object near the predicted position.
Gauss's method leads to an equation of the eighth degree, of which one solution, the Earth's orbit, is known. The solution sought is then separated from the remaining six based on physical conditions. In this work, Gauss used comprehensive approximation methods which he created for that purpose.
The discovery of Ceres led Gauss to the theory of the motion of planetoids disturbed by large planets, eventually published in 1809 as "Theoria motus corporum coelestium in sectionibus conicis solem ambientum". It introduced the Gaussian gravitational constant.
After the new asteroids had been discovered, Gauss occupied himself with the perturbations of their orbital elements. First he examined Ceres with analytical methods similar to those of Laplace, but his favorite object was Pallas, because of its great eccentricity and orbital inclination, for which Laplace's method did not work. Gauss used his own tools: the arithmetic–geometric mean, the hypergeometric function, and his method of interpolation. He found an orbital resonance with Jupiter in the proportion 18:7 in 1812; Gauss gave this result in cipher, and gave the explicit meaning only in letters to Olbers and Bessel. After long years of work on the Pallas perturbations, he ended the effort in 1816 without a result that seemed sufficient to him. This marked the end of his activities in theoretical astronomy.
One fruit of Gauss's research on Pallas perturbations was the "Determinatio Attractionis..." (1818) on a method of theoretical astronomy that later became known as the "elliptic ring method". It introduced an averaging conception in which a planet in orbit is replaced by a fictitious ring with mass density proportional to the time the planet takes to follow the corresponding orbital arcs. Gauss presents the method of evaluating the gravitational attraction of such an elliptic ring, which includes several steps; one of them involves a direct application of the arithmetic-geometric mean (AGM) algorithm to calculate an elliptic integral.
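The arithmetic–geometric mean step that Gauss applied in this context can be sketched in a few lines of Python; the relation $K(k) = \pi / \bigl(2\,\operatorname{agm}(1, \sqrt{1-k^2})\bigr)$ for the complete elliptic integral of the first kind is a standard identity, and the function names, tolerance, and test value below are illustrative choices rather than anything from Gauss's paper.

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean: iterate the two means until they agree."""
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return (a + b) / 2.0

def elliptic_k(k):
    """Complete elliptic integral of the first kind K(k) via the AGM."""
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))

print(elliptic_k(0.5))  # approximately 1.68575
```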
Even after Gauss's contributions to theoretical astronomy came to an end, more practical activities in observational astronomy continued and occupied him during his entire career. As early as 1799, Gauss dealt with the determination of longitude by use of the lunar parallax, for which he developed more convenient formulas than those in common use. After his appointment as director of the observatory, he attached importance to the fundamental astronomical constants in correspondence with Bessel. Gauss himself provided tables of nutation and aberration, solar coordinates, and refraction. He made many contributions to spherical geometry, and in this context solved some practical problems about navigation by stars. He published a great number of observations, mainly on minor planets and comets; his last observation was the solar eclipse of 28 July 1851.
Chronology.
Gauss's first publication following his doctoral thesis dealt with the determination of the date of Easter (1800), an elementary mathematical topic. Gauss aimed to present a convenient algorithm for people without any knowledge of ecclesiastical or even astronomical chronology, and thus avoided the usual terms of golden number, epact, solar cycle, dominical letter, and any religious connotations. This choice of topic likely had historical grounds. The replacement of the Julian calendar by the Gregorian calendar had caused confusion in the Holy Roman Empire since the 16th century and was not finished in Germany until 1700, when the difference of eleven days was removed. Even after this, Easter fell on different dates in Protestant and Catholic territories, until this difference was abolished by agreement in 1776. In the Protestant states, such as the Duchy of Brunswick, the Easter of 1777, five weeks before Gauss's birth, was the first one calculated in the new manner.
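A compact Python rendering of the Gregorian Easter rule commonly attributed to Gauss is given below; the variable names and the two exception clauses follow the usual modern presentation of the algorithm rather than the 1800 paper itself.

```python
def gauss_easter(year):
    """Return (month, day) of Gregorian Easter Sunday for the given year."""
    a = year % 19          # position in the 19-year Metonic cycle
    b = year % 4
    c = year % 7
    k = year // 100
    p = (13 + 8 * k) // 25
    q = k // 4
    m = (15 - p + k - q) % 30
    n = (4 + k - q) % 7
    d = (19 * a + m) % 30  # days until the Paschal full moon
    e = (2 * b + 4 * c + 6 * d + n) % 7  # days until the following Sunday
    if d == 29 and e == 6:
        return (4, 19)
    if d == 28 and e == 6 and (11 * m + 11) % 30 < 19:
        return (4, 18)
    day = 22 + d + e
    return (3, day) if day <= 31 else (4, day - 31)

print(gauss_easter(1800))  # (4, 13): Easter fell on 13 April 1800
```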
Error theory.
Gauss likely used the method of least squares to minimize the impact of measurement error when calculating the orbit of Ceres. The method was published first by Adrien-Marie Legendre in 1805, but Gauss claimed in "Theoria motus" (1809) that he had been using it since 1794 or 1795. In the history of statistics, this disagreement is called the "priority dispute over the discovery of the method of least squares". In the two-part paper "Theoria combinationis observationum erroribus minimis obnoxiae" (1823), Gauss proved that the method has the lowest sampling variance within the class of linear unbiased estimators, without requiring an assumption of normally distributed errors (Gauss–Markov theorem).
In the first paper he proved Gauss's inequality (a Chebyshev-type inequality) for unimodal distributions, and stated without proof another inequality for moments of the fourth order (a special case of the Gauss-Winckler inequality). He derived lower and upper bounds for the variance of the sample variance. In the second paper, Gauss described recursive least squares methods. His work on the theory of errors was extended in several directions by the geodesist Friedrich Robert Helmert to the Gauss-Helmert model.
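For readers unfamiliar with the method, a minimal illustration of ordinary least squares, fitting a straight line y = a + b·x to observations, is sketched below; the data are invented for the example and the closed-form estimator shown is the textbook formulation, not Gauss's own derivation or his recursive scheme.

```python
def fit_line(xs, ys):
    """Least-squares estimates of intercept a and slope b for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b = sxy / sxx            # slope minimising the sum of squared residuals
    a = mean_y - b * mean_x  # intercept follows from the fitted means
    return a, b

# Invented noisy observations of a line close to y = 1 + 2x.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
print(fit_line(xs, ys))  # roughly (1.04, 1.99)
```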
Gauss also contributed to problems in probability theory that are not directly concerned with the theory of errors. One example appears as a diary note where he tried to describe the asymptotic distribution of entries in the continued fraction expansion of a random number uniformly distributed in "(0,1)". He derived this distribution, now known as the Gauss-Kuzmin distribution, as a by-product of the discovery of the ergodicity of the Gauss map for continued fractions. Gauss's solution is the first-ever result in the metrical theory of continued fractions.
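In modern notation, the map in question is the Gauss map $T(x) = 1/x - \lfloor 1/x \rfloor$ acting on $(0,1)$, and the limiting behaviour Gauss described can be stated as

$$\lim_{n\to\infty} P\!\left(T^{n}(x) \le t\right) = \log_{2}(1+t), \qquad 0 \le t \le 1,$$

equivalently, the probability that the $n$-th partial quotient equals $k$ tends to $-\log_{2}\!\bigl(1 - \tfrac{1}{(k+1)^{2}}\bigr)$.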
Geodesy.
Gauss had been occupied with geodetic problems since 1799, when he helped Karl Ludwig von Lecoq with calculations during his survey in Westphalia. Beginning in 1804, he taught himself some practical geodesy in Brunswick and Göttingen.
From 1816, Gauss's former student Heinrich Christian Schumacher, then professor in Copenhagen but living in Altona (Holstein) near Hamburg as head of an observatory, carried out a triangulation of the Jutland peninsula from Skagen in the north to Lauenburg in the south. This project was the basis for map production, but it also aimed at determining the geodetic arc between the terminal sites. Data from geodetic arcs were used to determine the dimensions of the Earth's geoid, and long arc distances brought more precise results. Schumacher asked Gauss to continue this work further to the south, into the Kingdom of Hanover; Gauss agreed after a short period of hesitation. Finally, in May 1820, King George IV gave the order to Gauss.
An arc measurement needs a precise astronomical determination of at least two points in the network. Gauss and Schumacher used the coincidence that the two observatories in Göttingen and Altona, the latter in the garden of Schumacher's house, lay nearly on the same longitude. The latitude was measured with both their own instruments and a zenith sector by Ramsden that was transported to both observatories.
Gauss and Schumacher had already determined some angles between Lüneburg, Hamburg, and Lauenburg for the geodetic connection in October 1818. During the summers of 1821 until 1825 Gauss directed the triangulation work personally, from Thuringia in the south to the river Elbe in the north. The triangle between Hoher Hagen, Großer Inselsberg in the Thuringian Forest, and Brocken in the Harz mountains was the largest one Gauss had ever measured with a maximum size of . In the thinly populated Lüneburg Heath without significant natural summits or artificial buildings, he had difficulties finding suitable triangulation points; sometimes cutting lanes through the vegetation was necessary.
For pointing signals, Gauss invented a new instrument with movable mirrors and a small telescope that reflected sunbeams towards the triangulation points, which he named the "heliotrope". Another suitable construction for the same purpose was a sextant with an additional mirror, which he named the "vice heliotrope". Gauss was assisted by soldiers of the Hanoverian army, among them his eldest son Joseph. Gauss took part in the baseline measurement (Braak Base Line) of Schumacher in the village of Braak near Hamburg in 1820, and used the result for the evaluation of the Hanoverian triangulation.
An additional result was a better value for the flattening of the approximative Earth ellipsoid. Gauss developed the universal transverse Mercator projection of the ellipsoidally shaped Earth (what he named "conform projection") for representing geodetic data in plane charts.
When the arc measurement was finished, Gauss began the enlargement of the triangulation to the west to get a survey of the whole Kingdom of Hanover, with a Royal decree from 25 March 1828. The practical work was directed by three army officers, among them Lieutenant Joseph Gauss. The complete data evaluation lay in the hands of Gauss, who applied his mathematical inventions, such as the method of least squares and the elimination method, to it. The project was finished in 1844, and Gauss sent a final report on the project to the government; his method of projection was not published until 1866.
In 1828, when studying differences in latitude, Gauss first defined a physical approximation for the figure of the Earth as the surface everywhere perpendicular to the direction of gravity; later his doctoral student Johann Benedict Listing called this the "geoid".
Magnetism and telegraphy.
Geomagnetism.
Gauss had been interested in magnetism since 1803. After Alexander von Humboldt visited Göttingen in 1826, both scientists began intensive research on geomagnetism, partly independently, partly in productive cooperation. In 1828, Gauss was Humboldt's guest during the conference of the Society of German Natural Scientists and Physicians in Berlin, where he got acquainted with the physicist Wilhelm Weber.
When Weber got the chair for physics in Göttingen as successor of Johann Tobias Mayer on Gauss's recommendation in 1831, the two of them started a fruitful collaboration, leading to new knowledge of magnetism, including a representation of the unit of magnetic intensity in terms of mass, length, and time. They founded the "Magnetic Association" (German: "Magnetischer Verein"), an international working group of several observatories, which carried out measurements of Earth's magnetic field in many regions of the world using equivalent methods at arranged dates in the years 1836 to 1841.
In 1836, Humboldt suggested the establishment of a worldwide net of geomagnetic stations in the British dominions in a letter to the Duke of Sussex, then president of the Royal Society, proposing that magnetic measurements should be taken under standardized conditions using his methods. Together with other instigators, this led to a global program known as the "Magnetical crusade" under the direction of Edward Sabine. The dates, times, and intervals of the observations were determined in advance, and Göttingen mean time was used as the standard. Sixty-one stations on all five continents participated in this global program. Gauss and Weber founded a series for the publication of the results; six volumes were published between 1837 and 1843. Weber's departure to Leipzig in 1843, as a late effect of the Göttingen Seven affair, marked the end of the Magnetic Association's activity.
Following Humboldt's example, Gauss ordered a magnetic observatory to be built in the garden of the observatory, but the two scientists differed over instrumental equipment; Gauss preferred stationary instruments, which he thought gave more precise results, whereas Humboldt was accustomed to movable instruments. Gauss was interested in the temporal and spatial variation of magnetic declination, inclination, and intensity, and, unlike Humboldt, differentiated between "horizontal" and "vertical" intensity. Together with Weber, he developed methods of measuring the components of the intensity of the magnetic field and constructed a suitable magnetometer to measure "absolute values" of the strength of the Earth's magnetic field, rather than relative values that depended on the apparatus. The precision of the magnetometer was about ten times higher than that of previous instruments. With this work, Gauss was the first to derive a non-mechanical quantity from basic mechanical quantities.
Gauss produced a "General Theory of Terrestrial Magnetism" (1839), in which he believed he had described the nature of magnetic force; according to Felix Klein, this work is a presentation of observations by use of spherical harmonics rather than a physical theory. The theory predicted the existence of exactly two magnetic poles on the Earth, thus rendering Hansteen's idea of four magnetic poles obsolete, and the data allowed their locations to be determined with rather good precision.
Gauss influenced the beginning of geophysics in Russia, when Adolph Theodor Kupffer, one of his former students, founded a magnetic observatory in St. Petersburg, following the example of the observatory in Göttingen, and similarly, Ivan Simonov in Kazan.
Electromagnetism.
The discoveries of Hans Christian Ørsted on electromagnetism and Michael Faraday on electromagnetic induction drew Gauss's attention to these matters. Gauss and Weber found rules for branched electric circuits, which were later found independently and first published by Gustav Kirchhoff and named after him as Kirchhoff's circuit laws, and made inquiries into electromagnetism. They constructed the first electromechanical telegraph in 1833, and Weber himself connected the observatory with the institute for physics in the town centre of Göttingen, but they made no further commercial use of this invention.
Gauss's main theoretical interests in electromagnetism were reflected in his attempts to formulate quantitative laws governing electromagnetic induction. In notebooks from these years, he recorded several innovative formulations; he discovered the vector potential function, independently rediscovered by Franz Ernst Neumann in 1845, and in January 1835 he wrote down an "induction law" equivalent to Faraday's, which stated that the electromotive force at a given point in space is equal to the instantaneous rate of change (with respect to time) of this function.
Gauss tried to find a unifying law for long-distance effects of electrostatics, electrodynamics, electromagnetism, and induction, comparable to Newton's law of gravitation, but his attempt ended in a "tragic failure".
Potential theory.
Since Isaac Newton had shown theoretically that the Earth and rotating stars assume non-spherical shapes, the problem of attraction of ellipsoids gained importance in mathematical astronomy. In his first publication on potential theory, the "Theoria attractionis..." (1813), Gauss provided a closed-form expression to the gravitational attraction of a homogeneous triaxial ellipsoid at every point in space. In contrast to previous research of Maclaurin, Laplace and Lagrange, Gauss's new solution treated the attraction more directly in the form of an elliptic integral. In the process, he also proved and applied some special cases of the so-called Gauss's theorem in vector analysis.
In the "General theorems concerning the attractive and repulsive forces acting in reciprocal proportions of quadratic distances" (1840) Gauss gave a basic theory of magnetic potential, based on Lagrange, Laplace, and Poisson; it seems rather unlikely that he knew the previous works of George Green on this subject. However, Gauss could never give any reasons for magnetism, nor a theory of magnetism similar to Newton's work on gravitation, that enabled scientists to predict geomagnetic effects in the future.
Optics.
Gauss's calculations enabled instrument maker Johann Georg Repsold in Hamburg to construct a new achromatic lens system in 1810. A main problem, among other difficulties, was that the refractive index and dispersion of the glass used were not precisely known. In a short article from 1817 Gauss dealt with the problem of removal of chromatic aberration in double lenses, and computed adjustments of the shape and coefficients of refraction required to minimize it. His work was noted by the optician Carl August von Steinheil, who in 1860 introduced the achromatic Steinheil doublet, partly based on Gauss's calculations. Many results in geometrical optics are scattered in Gauss's correspondences and hand notes.
In the "Dioptrical Investigations" (1840), Gauss gave the first systematic analysis of the formation of images under a paraxial approximation (Gaussian optics). He characterized optical systems under a paraxial approximation only by its cardinal points, and he derived the Gaussian lens formula, applicable without restrictions in respect to the thickness of the lenses.
Mechanics.
Gauss's first work in mechanics concerned the Earth's rotation. When his university friend Benzenberg carried out experiments in 1802 to determine the deviation of falling masses from the perpendicular, an effect of what is today known as the Coriolis force, he asked Gauss for a theory-based calculation of the values for comparison with the experimental ones. Gauss elaborated a system of fundamental equations for the motion, and the results corresponded sufficiently well with Benzenberg's data; Benzenberg added Gauss's considerations as an appendix to his book on falling experiments.
After Foucault had publicly demonstrated the Earth's rotation with his pendulum experiment in 1851, Gerling asked Gauss for further explanations. This prompted Gauss to design a new apparatus for demonstration with a much shorter pendulum than Foucault's. The oscillations were observed with a reading telescope, with a vertical scale and a mirror fastened to the pendulum. The apparatus is described in the Gauss–Gerling correspondence, and Weber made some experiments with it in 1853, but no data were published.
Gauss's principle of least constraint of 1829 was established as a general concept to overcome the division of mechanics into statics and dynamics, combining D'Alembert's principle with Lagrange's principle of virtual work, and showing analogies to the method of least squares.
Metrology.
In 1828, Gauss was appointed as head of the board for weights and measures of the Kingdom of Hanover. He created standards for length and measure. Gauss himself took care of the time-consuming measures and gave detailed orders for the mechanical construction. In the correspondence with Schumacher, who was also working on this matter, he described new ideas for high-precision scales. He submitted the final reports on the Hanoverian foot and pound to the government in 1841. This work achieved international importance due to an 1836 law that connected the Hanoverian measures with the English ones.
Honours and awards.
Gauss first became a member of a scientific society, the Russian Academy of Sciences, in 1802. Further memberships (corresponding, foreign or full) were awarded by the Academy of Sciences in Göttingen (1802/1807), the French Academy of Sciences (1804/1820), the Royal Society of London (1804), the Royal Prussian Academy in Berlin (1810), the National Academy of Science in Verona (1810), the Royal Society of Edinburgh (1820), the Bavarian Academy of Sciences of Munich (1820), the Royal Danish Academy in Copenhagen (1821), the Royal Astronomical Society in London (1821), the Royal Swedish Academy of Sciences (1821), the American Academy of Arts and Sciences in Boston (1822), the Royal Bohemian Society of Sciences in Prague (1833), the Royal Academy of Science, Letters and Fine Arts of Belgium (1841/1845), the Royal Society of Sciences in Uppsala (1843), the Royal Irish Academy in Dublin (1843), the Royal Institute of the Netherlands (1845/1851), the Spanish Royal Academy of Sciences in Madrid (1850), the Russian Geographical Society (1851), the Imperial Academy of Sciences in Vienna (1848), the American Philosophical Society (1853), the Cambridge Philosophical Society, and the Royal Hollandish Society of Sciences in Haarlem.
Both the University of Kazan and the Philosophy Faculty of the University of Prague appointed him an honorary member in 1848.
Gauss received the Lalande Prize from the French Academy of Science in 1809 for the theory of planets and the means of determining their orbits from only three observations, the Danish Academy of Science prize in 1823 for his memoir on conformal projection, and the Copley Medal from the Royal Society in 1838 for "his inventions and mathematical researches in magnetism".
Gauss was appointed Knight of the French Legion of Honour in 1837, and became one of the first members of the Prussian Order Pour le Merite (Civil class) when it was established in 1842. He received the Order of the Crown of Westphalia (1810), the Danish Order of the Dannebrog (1817), the Hanoverian Royal Guelphic Order (1815), the Swedish Order of the Polar Star (1844), the Order of Henry the Lion (1849), and the Bavarian Maximilian Order for Science and Art (1853).
The Kings of Hanover awarded him the honorary titles "Hofrath" (1816) and "Geheimer Hofrath" (1845). In 1849, on the occasion of his golden doctoral jubilee, he received honorary citizenship of both Brunswick and Göttingen. Soon after his death a medal was issued by order of King George V of Hanover with the back inscription dedicated "to the Prince of Mathematicians".
The "Gauss-Gesellschaft Göttingen" ("Göttingen Gauss Society") was founded in 1964 for research on the life and work of Carl Friedrich Gauss and related persons. It publishes the "Mitteilungen der Gauss-Gesellschaft" ("Communications of the Gauss Society").
Selected writings.
Correspondence.
The Göttingen Academy of Sciences and Humanities provides a complete collection of the known letters from and to Carl Friedrich Gauss that is accessible online. The literary estate is kept and provided by the Göttingen State and University Library. Written materials from Carl Friedrich Gauss and family members can also be found in the municipal archive of Brunswick.
Cornish language
Cornish (Standard Written Form: or , ) is a Southwestern Brittonic language of the Celtic language family. Along with Welsh and Breton, Cornish descends from Common Brittonic, a language once spoken widely across Great Britain. For much of the medieval period Cornish was the main language of Cornwall, until it was gradually pushed westwards by the spread of English. Cornish remained a common community language in parts of Cornwall until the mid-18th century, and there is some evidence for traditional speakers persisting into the 19th century.
Cornish became extinct as a living community language in Cornwall by the end of the 18th century; knowledge of Cornish persisted within some families and individuals. A revival started in the early 20th century, and in 2010 UNESCO reclassified the language as critically endangered, stating that its former classification of the language as extinct was no longer accurate. The language has a growing number of second-language speakers, and a very small number of families now raise children to speak revived Cornish as a first language.
Cornish is currently recognised under the European Charter for Regional or Minority Languages, and the language is often described as an important part of Cornish identity, culture and heritage. Since the revival of the language, some Cornish textbooks and works of literature have been published, and an increasing number of people are studying the language. Recent developments include Cornish music, independent films, and children's books. A small number of people in Cornwall have been brought up to be bilingual native speakers, and the language is taught in schools and appears on street nameplates. The first Cornish-language day care opened in 2010.
Classification.
Cornish is a Southwestern Brittonic language, a branch of the Insular Celtic section of the Celtic language family, which is a sub-family of the Indo-European language family. Brittonic also includes Welsh, Breton, Cumbric and possibly Pictish, the last two of which are extinct. Scottish Gaelic, Irish and Manx are part of the separate Goidelic branch of Insular Celtic.
Joseph Loth viewed Cornish and Breton as being two dialects of the same language, claiming that "Middle Cornish is without doubt closer to Breton as a whole than the modern Breton dialect of Quiberon [] is to that of Saint-Pol-de-Léon []." Also, Kenneth Jackson argued that it is almost certain that Cornish and Breton would have been mutually intelligible as long as Cornish was a living language, and that Cornish and Breton are especially closely related to each other and less closely related to Welsh.
History.
Cornish evolved from the Common Brittonic spoken throughout Britain south of the Firth of Forth during the British Iron Age and Roman period. As a result of westward Anglo-Saxon expansion, the Britons of the southwest were separated from those in modern-day Wales and Cumbria, a split which Jackson links to the defeat of the Britons at the Battle of Deorham in about 577. The western dialects eventually evolved into modern Welsh and the now extinct Cumbric, while Southwestern Brittonic developed into Cornish and Breton, the latter as a result of emigration to the part of the continent that became known as Brittany over the following centuries.
Old Cornish.
The area controlled by the southwestern Britons was progressively reduced by the expansion of Wessex over the next few centuries. During the Old Cornish () period (800–1200), the Cornish-speaking area was largely coterminous with modern-day Cornwall, after the Saxons had taken over Devon in their south-westward advance, which probably was facilitated by a second migration wave to Brittany that resulted in the partial depopulation of Devon. The maintaining of close links with Breton-speakers in Brittany allowed for a level of mutual intelligibility between Cornish and Breton.
The earliest written record of the Cornish language comes from this period: a 9th-century gloss in a Latin manuscript of by Boethius, which used the words . The phrase may mean "it [the mind] hated the gloomy places", or alternatively, as Andrew Breeze suggests, "she hated the land". Other sources from this period include the "Saints' List", a list of almost fifty Cornish saints, the Bodmin manumissions, which is a list of manumittors and slaves, the latter with mostly Cornish names, and, more substantially, a Latin–Cornish glossary (the or Cottonian Vocabulary), a Cornish translation of Ælfric of Eynsham's Latin–Old English Glossary, which is thematically arranged into several groups, such as the Genesis creation narrative, anatomy, church hierarchy, the family, names for various kinds of artisans and their tools, flora, fauna, and household items. The manuscript was widely thought to be in Old Welsh until the 18th century when it was identified as Cornish by Edward Lhuyd. Some Brittonic glosses in the 9th-century colloquy were once identified as Old Cornish, but they are more likely Old Welsh, possibly influenced by a Cornish scribe. No single phonological feature distinguishes Cornish from both Welsh and Breton until the beginning of the assibilation of dental stops in Cornish, which is not found before the second half of the eleventh century, and it is not always possible to distinguish Old Cornish, Old Breton, and Old Welsh orthographically.
Middle Cornish.
The Cornish language continued to flourish well through the Middle Cornish () period (1200–1600), reaching a peak of about 39,000 speakers in the 13th century, after which the number started to decline. This period provided the bulk of traditional Cornish literature, and was used to reconstruct the language during its revival. Most important is the , a cycle of three mystery plays, , and . Together these provide about 8,734 lines of text. The three plays exhibit a mixture of English and Brittonic influences, and, like other Cornish literature, may have been written at Glasney College near Penryn. From this period also are the hagiographical dramas ("The Life of Meriasek") and ("The Life of Ke"), both of which feature as an antagonist the villainous and tyrannical King Tewdar (or Teudar), a historical medieval king in Armorica and Cornwall, who, in these plays, has been interpreted as a lampoon of either of the Tudor kings Henry VII or Henry VIII.
Others are the "Charter Fragment", the earliest known continuous text in the Cornish language, apparently part of a play about a medieval marriage, and ("The Passion of Our Lord"), a poem probably intended for personal worship, were written during this period, probably in the second half of the 14th century. Another important text, the , was realized to be Cornish in 1949, having previously been incorrectly classified as Welsh. It is the longest text in the traditional Cornish language, consisting of around 30,000 words of continuous prose. This text is a late 16th century translation of twelve of Bishop Bonner's thirteen homilies by a certain John Tregear, tentatively identified as a vicar of St Allen from Crowan, and has an additional catena, Sacrament an Alter, added later by his fellow priest, Thomas Stephyn. In the reign of Henry VIII, an account was given by Andrew Boorde in his 1542 . He states, ""
When Parliament passed the Act of Uniformity 1549, which established the 1549 edition of the English Book of Common Prayer as the sole legal form of worship in England, including Cornwall, people in many areas of Cornwall did not speak or understand English. The passing of this Act was one of the causes of the Prayer Book Rebellion (which may also have been influenced by government repression after the failed Cornish rebellion of 1497), with "the commoners of Devonshyre and Cornwall" producing a manifesto that demanded a return to the old religious services and included an article that concluded, "and so we the Cornyshe men (whereof certen of us understande no Englysh) utterly refuse thys newe Englysh." In response to their articles, the government spokesman (either Philip Nichols or Nicholas Udall) wondered why they did not just ask the king for a version of the liturgy in their own language. Archbishop Thomas Cranmer asked why the Cornishmen should be offended by holding the service in English, when they had before held it in Latin, which even fewer of them could understand. Anthony Fletcher points out that this rebellion was primarily motivated by religious and economic, rather than linguistic, concerns. The rebellion prompted a heavy-handed response from the government, and 5,500 people died during the fighting and the rebellion's aftermath. Government officials then directed troops under the command of Sir Anthony Kingston to carry out pacification operations throughout the West Country. Kingston subsequently ordered the executions of numerous individuals suspected of involvement with the rebellion as part of the post-rebellion reprisals.
The rebellion eventually proved a turning-point for the Cornish language, as the authorities came to associate it with sedition and "backwardness". This proved to be one of the reasons why the Book of Common Prayer was never translated into Cornish (unlike Welsh), as proposals to do so were suppressed in the rebellion's aftermath. The failure to translate the Book of Common Prayer into Cornish led to the language's rapid decline during the 16th and 17th centuries. Peter Berresford Ellis cites the years 1550–1650 as a century of immense damage for the language, and its decline can be traced to this period. In 1680 William Scawen wrote an essay describing 16 reasons for the decline of Cornish, among them the lack of a distinctive Cornish alphabet, the loss of contact between Cornwall and Brittany, the cessation of the miracle plays, loss of records in the Civil War, lack of a Cornish Bible and immigration to Cornwall. Mark Stoyle, however, has argued that the 'glotticide' of the Cornish language was mainly a result of the Cornish gentry adopting English to dissociate themselves from the reputation for disloyalty and rebellion associated with the Cornish language since the 1497 uprising.
Late Cornish.
By the middle of the 17th century, the language had retreated to Penwith and Kerrier, and transmission of the language to new generations had almost entirely ceased. In his "Survey of Cornwall", published in 1602, Richard Carew writes: [M]ost of the inhabitants can speak no word of Cornish, but very few are ignorant of the English; and yet some so affect their own, as to a stranger they will not speak it; for if meeting them by chance, you inquire the way, or any such matter, your answer shall be, "," "I [will] speak no Saxonage."
The Late Cornish () period from 1600 to about 1800 has a less substantial body of literature than the Middle Cornish period, but the sources are more varied in nature, including songs, poems about fishing and curing pilchards, and various translations of verses from the Bible, the Ten Commandments, the Lord's Prayer and the Creed. Edward Lhuyd's "Archaeologia Britannica", which was mainly recorded in the field from native speakers in the early 1700s, and his unpublished field notebook are seen as important sources of Cornish vocabulary, some of which are not found in any other source. "Archaeologia Britannica" also features a complete version of a traditional folk tale, "John of Chyanhor", a short story about a man from St Levan who goes far to the east seeking work, eventually returning home after three years to find that his wife has borne him a child during his absence.
In 1776, William Bodinar, who describes himself as having learned Cornish from old fishermen when he was a boy, wrote a letter to Daines Barrington in Cornish, with an English translation, which was probably the last prose written in the traditional language. In his letter, he describes the sociolinguistics of the Cornish language at the time, stating that there are no more than four or five old people in his village who can still speak Cornish, concluding with the remark that Cornish is no longer known by young people. However, the last recorded traditional Cornish literature may have been the "Cranken Rhyme", a corrupted version of a verse or song published in the late 19th century by John Hobson Matthews, recorded orally by John Davey (or Davy) of Boswednack, of uncertain date but probably originally composed during the last years of the traditional language. Davey had traditional knowledge of at least some Cornish. John Kelynack (1796–1885), a fisherman of Newlyn, was sought by philologists for old Cornish words and technical phrases in the 19th century.
Decline of Cornish speakers between 1300 and 1800.
It is difficult to state with certainty when Cornish ceased to be spoken, because its last speakers were of relatively low social class and because the definition of what constitutes "a living language" is not clear-cut. Peter Pool argues that by 1800 nobody was using Cornish as a daily language and no evidence exists of anyone capable of conversing in the language at that date. However, passive speakers, semi-speakers and rememberers, who retain some competence in the language despite not being fluent nor using the language in daily life, generally survive even longer.
The traditional view that Dolly Pentreath (1692–1777) was the last native speaker of Cornish has been challenged, and in the 18th and 19th centuries there was academic interest in the language and in attempting to find the last speaker of Cornish. It has been suggested that, whereas Pentreath was probably the last "fluent" speaker, the last "native" speaker may have been John Davey of Zennor, who died in 1891. However, although it is clear Davey possessed some traditional knowledge in addition to having read books on Cornish, accounts differ of his competence in the language. Some contemporaries stated he was able to converse on certain topics in Cornish whereas others affirmed they had never heard him claim to be able to do so. Robert Morton Nance, who reworked and translated Davey's Cranken Rhyme, remarked, "There can be no doubt, after the evidence of this rhyme, of what there was to lose by neglecting John Davey."
The search for the last speaker is hampered by a lack of transcriptions or audio recordings, so that it is impossible to tell from this distance whether the language these people were reported to be speaking was Cornish, or English with a heavy Cornish substratum, nor what their level of fluency was. Nevertheless, this academic interest, along with the beginning of the Celtic Revival in the late 19th century, provided the groundwork for a Cornish language revival movement.
Notwithstanding the uncertainty over who was the last speaker of Cornish, researchers have posited the following numbers for the prevalence of the language between 1050 and 1800.
Revived Cornish.
In 1904, the Celtic language scholar and Cornish cultural activist Henry Jenner published "A Handbook of the Cornish Language". The publication of this book is often considered to be the point at which the revival movement started. Jenner wrote about the Cornish language in 1905, "one may fairly say that most of what there was of it has been preserved, and that it has been continuously preserved, for there has never been a time when there were not some Cornishmen who knew some Cornish."
The revival focused on reconstructing and standardising the language, including coining new words for modern concepts, and creating educational material in order to teach Cornish to others. In 1929 Robert Morton Nance published his Unified Cornish () system, based on the Middle Cornish literature while extending the attested vocabulary with neologisms and forms based on Celtic roots also found in Breton and Welsh, publishing a dictionary in 1938. Nance's work became the basis of revived Cornish () for most of the 20th century. During the 1970s, criticism of Nance's system, including its inconsistent orthography and the unpredictable correspondence between spelling and pronunciation, as well as other grounds such as the archaic basis of Unified and a lack of emphasis on the spoken language, resulted in the creation of several rival systems. In the 1980s, Ken George published a new system, ('Common Cornish'), based on a reconstruction of the phonological system of Middle Cornish, but with an approximately morphophonemic orthography. It was subsequently adopted by the Cornish Language Board and was the written form used by a reported 54.5% of all Cornish language users according to a survey in 2008, but was heavily criticised for a variety of reasons by Jon Mills and Nicholas Williams, including making phonological distinctions that they state were not made in the traditional language, failing to make distinctions that they believe "were" made in the traditional language at this time, and the use of an orthography that deviated too far from the traditional texts and Unified Cornish. Also during this period, Richard Gendall created his Modern Cornish system (also known as Revived Late Cornish), which used Late Cornish as a basis, and Nicholas Williams published a revised version of Unified; however, neither of these systems gained the popularity of Unified or Kemmyn.
The revival entered a period of factionalism and public disputes, with each orthography attempting to push the others aside. By the time that Cornish was recognised by the UK government under the European Charter for Regional or Minority Languages in 2002, it had become recognised that the existence of multiple orthographies was unsustainable with regards to using the language in education and public life, as none had achieved a wide consensus. A process of unification was set about which resulted in the creation of the public-body Cornish Language Partnership in 2005 and agreement on a Standard Written Form in 2008. In 2010 a new milestone was reached when UNESCO altered its classification of Cornish, stating that its previous label of "extinct" was no longer accurate.
Geographic distribution and number of speakers.
Speakers of Cornish reside primarily in Cornwall, which has a population of 563,600 (2017 estimate), of whom the vast majority are native speakers of English, complemented by a smaller number of recent immigrants from various countries and their Cornish-born descendants. There are also some speakers living outside Cornwall, particularly in the countries of the Cornish diaspora, as well as in other Celtic nations. Estimates of the number of Cornish speakers vary according to the definition of a speaker, and are difficult to determine accurately due to the individualised nature of language take-up. Nevertheless, there is recognition that the number of Cornish speakers is growing. From before the 1980s to the end of the 20th century there was a sixfold increase in the number of speakers to around 300. One figure for the number of people who know a few basic words, such as knowing that "Kernow" means "Cornwall", was 300,000; the same survey gave the number of people able to have simple conversations as 3,000.
The Cornish Language Strategy project commissioned research to provide quantitative and qualitative evidence for the number of Cornish speakers: due to the success of the revival project it was estimated that 2,000 people were fluent (surveyed in spring 2008), an increase from the estimated 300 people who spoke Cornish fluently suggested in a study by Kenneth MacKinnon in 2000.
Jenefer Lowe of the Cornish Language Partnership said in an interview with the BBC in 2010 that there were around 300 fluent speakers. Bert Biscoe, a councillor and bard, in a statement to the "Western Morning News" in 2014 said there were "several hundred fluent speakers". Cornwall Council estimated in 2015 that there were 300–400 fluent speakers who used the language regularly, with 5,000 people having a basic conversational ability in the language.
A report on the 2011 Census published in 2013 by the Office for National Statistics placed the number of speakers at somewhere between 325 and 625. In 2017 the ONS released data based on the 2011 Census that placed the number of speakers at 557 people in England and Wales who declared Cornish to be their main language, 464 of whom lived in Cornwall. The 2021 census listed the number of Cornish speakers at 563.
A study that appeared in 2018 established the number of people in Cornwall with at least minimal skills in Cornish, such as the use of some words and phrases, to be more than 3,000, including around 500 estimated to be fluent.
The Institute of Cornish Studies at the University of Exeter is working with the Cornish Language Partnership to study the Cornish language revival of the 20th century, including the growth in number of speakers.
Legal status and recognition.
In 2002, Cornish was recognized by the UK government under Part II of the European Charter for Regional or Minority Languages. UNESCO's "Atlas of World Languages" classifies Cornish as "critically endangered". UNESCO has said that a previous classification of 'extinct' "does not reflect the current situation for Cornish" and is "no longer accurate".
Within the UK.
Cornwall Council's policy is to support the language, in line with the European Charter. A motion was passed in November 2009 in which the council promoted the inclusion of Cornish, as appropriate and where possible, in council publications and on signs. This plan has drawn some criticism. In October 2015, the council announced that staff would be encouraged to use "basic words and phrases" in Cornish when dealing with the public. In 2021 Cornwall Council prohibited a marriage ceremony from being conducted in Cornish as the Marriage Act 1949 only allowed for marriage ceremonies in English or Welsh.
In 2014, the Cornish people were recognised by the UK Government as a national minority under the Framework Convention for the Protection of National Minorities. The FCNM provides certain rights and protections to a national minority with regard to their minority language.
In 2016, British government funding for the Cornish language ceased, and responsibility transferred to Cornwall Council.
Orthography.
Old Cornish orthography.
Until around the middle of the 11th century, Old Cornish scribes used a traditional spelling system shared with Old Breton and Old Welsh, based on the pronunciation of British Latin. By the time of the , usually dated to around 1100, Old English spelling conventions, such as the use of thorn (Þ, þ) and eth (Ð, ð) for dental fricatives, and wynn (Ƿ, ƿ) for /w/, had come into use, allowing documents written at this time to be distinguished from Old Welsh, which rarely uses these characters, and Old Breton, which does not use them at all. Old Cornish features include using initial ⟨ch⟩, ⟨c⟩, or ⟨k⟩ for /k/, and, in internal and final position, ⟨p⟩, ⟨t⟩, ⟨c⟩, ⟨b⟩, ⟨d⟩, and ⟨g⟩ are generally used for the phonemes /b/, /d/, /ɡ/, /β/, /ð/, and /ɣ/ respectively, meaning that the results of Brittonic lenition are not usually apparent from the orthography at this time.
Middle Cornish orthography.
Middle Cornish orthography has a significant level of variation, and shows influence from Middle English spelling practices. Yogh (Ȝ ȝ) is used in certain Middle Cornish texts, where it is used to represent a variety of sounds, including the dental fricatives /θ/ and /ð/, a usage which is unique to Middle Cornish and is never found in Middle English. Middle Cornish scribes tend to use ⟨c⟩ for /k/ before back vowels, and ⟨k⟩ for /k/ before front vowels, though this is not always true, and this rule is less consistent in certain texts. Middle Cornish scribes almost universally use ⟨wh⟩ to represent /ʍ/ (or /hw/), as in Middle English. Middle Cornish, especially towards the end of this period, tends to use orthographic ⟨g⟩ and ⟨b⟩ in word-final position in stressed monosyllables, and ⟨k⟩ and ⟨p⟩ in word-final position in unstressed final syllables, to represent the reflexes of late Brittonic /ɡ/ and /b/, respectively.
Late Cornish orthography.
Written sources from this period are often spelled following English spelling conventions, since many of the writers of the time had not been exposed to Middle Cornish texts or the Cornish orthography within them. Around 1700, Edward Lhuyd visited Cornwall, introducing his own partly phonetic orthography that he used in his "Archaeologia Britannica", which was adopted by some local writers, leading to the use of some Lhuydian features such as the use of circumflexes to denote long vowels, ⟨k⟩ before front vowels, word-final ⟨i⟩, and the use of ⟨dh⟩ to represent the voiced dental fricative /ð/.
Revived Cornish orthography.
After the publication of Jenner's "Handbook of the Cornish Language", the earliest revivalists used Jenner's orthography, which was influenced by Lhuyd's system. This system was abandoned following the development by Nance of a "unified spelling", later known as Unified Cornish, a system based on a standardization of the orthography of the early Middle Cornish texts. Nance's system was used by almost all Revived Cornish speakers and writers until the 1970s. Criticism of Nance's system, particularly the relationship of spelling to sounds and the phonological basis of Unified Cornish, resulted in rival orthographies appearing by the early 1980s, including Gendall's Modern Cornish, based on Late Cornish native writers and Lhuyd, and Ken George's Kernewek Kemmyn, a mainly morphophonemic orthography based on George's reconstruction of Middle Cornish, which features a number of orthographic, and phonological, distinctions not found in Unified Cornish. Kernewek Kemmyn is characterised by the use of universal ⟨k⟩ for /k/ (instead of ⟨c⟩ before back vowels as in Unified); ⟨hw⟩ for /hw/, instead of ⟨wh⟩ as in Unified; and ⟨y⟩, ⟨oe⟩, and ⟨eu⟩ to represent the phonemes /ɪ/, /o/, and /œ/ respectively, which are not found in Unified Cornish. Criticism of all of these systems, especially Kernewek Kemmyn, by Nicholas Williams, resulted in the creation of Unified Cornish Revised, a modified version of Nance's orthography, featuring: an additional phoneme not distinguished by Nance, "ö in German ", represented in the UCR orthography by ⟨ue⟩; replacement of ⟨y⟩ with ⟨e⟩ in many words; internal ⟨h⟩ rather than ⟨gh⟩; and use of final ⟨b⟩, ⟨g⟩, and ⟨dh⟩ in stressed monosyllables. A Standard Written Form, intended as a compromise orthography for official and educational purposes, was introduced in 2008, although a number of previous orthographic systems remain in use and, in response to the publication of the SWF, another new orthography, Kernowek Standard, was created, mainly by Nicholas Williams and Michael Everson, which is proposed as an amended version of the Standard Written Form.
Phonology.
The phonological system of Old Cornish, inherited from Proto-Southwestern Brittonic and originally differing little from Old Breton and Old Welsh, underwent various changes during its Middle and Late phases, eventually resulting in several characteristics not found in the other Brittonic languages. The first sound change to distinguish Cornish from both Breton and Welsh, the assibilation of the dental stops and in medial and final position, had begun by the time of the , or earlier. This change, and the subsequent, or perhaps dialectical, palatalization (or occasional rhotacization in a few words) of these sounds, results in orthographic forms such as Middle Cornish 'father', Late Cornish (Welsh ), Middle Cornish 'believe', Late Cornish (Welsh ), and Middle Cornish 'leave', Late Cornish (Welsh ). A further characteristic sound change, pre-occlusion, occurred during the 16th century, resulting in the nasals and being realised as and respectively in stressed syllables, and giving Late Cornish forms such as 'head' (Welsh ) and 'crooked' (Welsh ).
As a revitalised language, the phonology of contemporary spoken Cornish is based on a number of sources, including various reconstructions of the sound system of middle and early modern Cornish based on an analysis of internal evidence such as the orthography and rhyme used in the historical texts, comparison with the other Brittonic languages Breton and Welsh, and the work of the linguist Edward Lhuyd, who visited Cornwall in 1700 and recorded the language in a partly phonetic orthography.
Vocabulary.
Cornish is a Celtic language, and the majority of its vocabulary, when usage frequency is taken into account, at every documented stage of its history is inherited from Proto-Celtic, either directly from the ancestral Proto-Indo-European language or through vocabulary borrowed from unknown substrate language(s) at some point in the development of the Celtic proto-language from PIE. Examples of the PIE > PCelt. development are various terms related to kinship and people, including 'mother', 'aunt, mother's sister', 'sister', 'son', 'man', 'person, human', and 'people', and words for parts of the body, including 'hand' and 'tooth'. Inherited adjectives with an Indo-European etymology include 'new', 'broad, wide', 'red', 'old', 'young', and 'alive, living'.
Several Celtic or Brittonic words cannot be reconstructed to Proto-Indo-European, and are suggested to have been borrowed from unknown substrate language(s) at an early stage, such as Proto-Celtic or Proto-Brittonic. Proposed examples in Cornish include 'beer' and 'badger'.
Other words in Cornish inherited direct from Proto-Celtic include a number of toponyms, for example 'hill', 'fort', and 'land', and a variety of animal names such as 'mouse', 'wether', 'pigs', and 'bull'.
During the Roman occupation of Britain a large number (around 800) of Latin loan words entered the vocabulary of Common Brittonic, which subsequently developed in a similar way to the inherited lexicon. These include 'arm' (from British Latin ), 'net' (from ), and 'cheese' (from ).
A substantial number of loan words from English and to a lesser extent French entered the Cornish language throughout its history. Whereas only 5% of the vocabulary of the Old Cornish Vocabularium Cornicum is thought to be borrowed from English, and only 10% of the lexicon of the early modern Cornish writer William Rowe, around 42% of the vocabulary of the whole Cornish corpus is estimated to be English loan words, without taking frequency into account. (However, when frequency "is" taken into account, this figure for the entire corpus drops to 8%.) The many English loanwords, some of which were sufficiently well assimilated to acquire native Cornish verbal or plural suffixes or be affected by the mutation system, include 'to read', 'to understand', 'way', 'boot' and 'art'.
Many Cornish words, such as mining and fishing terms, are specific to the culture of Cornwall. Examples include 'mine waste' and 'to mend fishing nets'. and are different types of pastries. is a 'traditional Cornish dance get-together' and is a specific kind of ceremonial dance that takes place in Cornwall. Certain Cornish words may have several translation equivalents in English, so for instance may be translated into English as either 'book' or 'volume' and can mean either 'hand' or 'fist'.
Like other Celtic languages, Cornish lacks a number of verbs commonly found in other languages, including modals and psych-verbs; examples are 'have', 'like', 'hate', 'prefer', 'must/have to' and 'make/compel to'. These functions are instead fulfilled by periphrastic constructions involving a verb and various prepositional phrases.
Grammar.
The grammar of Cornish shares with other Celtic languages a number of features which, while not unique, are unusual in an Indo-European context. The grammatical features most unfamiliar to English speakers of the language are the initial consonant mutations, the verb–subject–object word order, inflected prepositions, fronting of emphasised syntactic elements and the use of two different forms for 'to be'.
Morphology.
Mutations.
Cornish has initial consonant mutation: The first sound of a Cornish word may change according to grammatical context. As in Breton, there are four types of mutation in Cornish (compared with three in Welsh, two in Irish and Manx and one in Scottish Gaelic). These changes apply to only certain letters (sounds) in particular grammatical contexts, some of which are given below:
Articles.
Cornish has no indefinite article. can either mean 'harbour' or 'a harbour'. In certain contexts, can be used, with the meaning 'a certain, a particular', e.g. 'a certain harbour'. There is, however, a definite article 'the', which is used for all nouns regardless of their gender or number, e.g. 'the harbour'.
Nouns.
Cornish nouns belong to one of two grammatical genders, masculine and feminine, but are not inflected for case. Nouns may be singular or plural. Plurals can be formed in various ways, depending on the noun:
Some nouns are collective or mass nouns. Singulatives can be formed from collective nouns by the addition of the suffix ⫽-enn⫽ (SWF "-en"):
Verbs.
Verbs are conjugated for person, number, tense and mood. For example, the verbal noun 'see' has derived forms such as 1st person singular present indicative 'I see', 3rd person plural imperfect indicative 'they saw', and 2nd person singular imperative 'see!' Grammatical categories can be indicated either by inflection of the main verb, or by the use of auxiliary verbs such as 'be' or 'do'.
Prepositions.
Cornish uses inflected (or conjugated) prepositions: Prepositions are inflected for person and number. For example, (with, by) has derived forms such as 'with me', 'with him', and 'with you (plural)'.
Syntax.
Word order in Cornish is somewhat fluid and varies depending on several factors such as the intended element to be emphasised and whether a statement is negative or affirmative. In a study on Cornish word order in the play Bewnans Meriasek (), Ken George has argued that the most common word order in main clauses in Middle Cornish was, in affirmative statements, SVO, with the verb in the third person singular:
When affirmative statements are in the less common VSO order, they usually begin with an adverb or other element, followed by an affirmative particle, with the verb inflected for person and tense:
In negative statements, the order was usually VSO, with an initial negative particle and the verb conjugated for person and tense:
A similar structure is used for questions:
Elements can be fronted for emphasis:
Sentences can also be constructed periphrastically using auxiliary verbs such as 'be, exist':
As Cornish lacks verbs such as 'to have', possession can also be indicated in this way:
Enquiring about possession is similar, using a different interrogative form of :
Nouns usually precede the adjective, unlike in English:
Some adjectives usually precede the noun, however:
Culture.
The Celtic Congress and Celtic League are groups that advocate cooperation amongst the Celtic Nations in order to protect and promote Celtic languages and cultures, thus working in the interests of the Cornish language.
There have been films such as , some televised, made entirely, or significantly, in Cornish. Some businesses use Cornish names.
Cornish has significantly and durably affected Cornwall's place-names as well as Cornish surnames and knowledge of the language helps the understanding of these ancient meanings. Cornish names are adopted for children, pets, houses and boats.
There is Cornish literature, including spoken poetry and song, as well as traditional Cornish chants historically performed in marketplaces during religious holidays and public festivals and gatherings.
There are periodicals solely in the language, such as the monthly , and . BBC Radio Cornwall has a news broadcast in Cornish and sometimes has other programmes and features for learners and enthusiasts. Local newspapers such as the "Western Morning News" have articles in Cornish, and newspapers such as "The Packet", "The West Briton", and "The Cornishman" have also been known to have Cornish features. There is an online radio and TV service in Cornish called , publishing a one-hour podcast each week, based on a magazine format. It includes music in Cornish as well as interviews and features.
The language has financial sponsorship from sources including the Millennium Commission. A number of language organisations exist in Cornwall: (Our Language), the Cornish sub-group of the European Bureau for Lesser-Used Languages, , (the Cornish Language Board) and (the Cornish Language Fellowship).
There are ceremonies, some ancient, some modern, that use the language or are entirely in the language.
Cultural events.
Cornwall has had cultural events associated with the language, including the international Celtic Media Festival, hosted in St Ives in 1997. The Old Cornwall Society has promoted the use of the language at events and meetings. Two examples of ceremonies that are performed in both the English and Cornish languages are Crying the Neck and the annual mid-summer bonfires.
Since 1969, there have been three full performances of the "Ordinalia", originally written in the Cornish language, the most recent of which took place at the plen-an-gwary in St Just in September 2021. While significantly adapted from the original, as well as using mostly English-speaking actors, the plays used sizable amounts of Cornish, including a character who spoke only in Cornish and another who spoke both English and Cornish. The event drew thousands over two weeks, also serving as a celebration of Celtic culture. The next production, scheduled for 2024, could, in theory, be entirely in Cornish, without English, if assisted by a professional linguist.
Outside of Cornwall, efforts to revive the Cornish language and culture through community events are occurring in Australia. A biennial festival, Kernewek Lowender, takes place in South Australia, where both cultural displays and language lessons are offered.
Study and teaching.
Cornish is taught in some schools; it was previously taught at degree level at the University of Wales, though the only existing course in the language at university level is as part of a course in Cornish studies at the University of Exeter. In March 2008 a course in the language was started as part of the Celtic Studies curriculum at the University of Vienna, Austria.
The University of Cambridge offers courses in Cornish through its John Trim Resources Centre, which is part of the university's Language Centre. In addition, the Department of Anglo-Saxon, Norse and Celtic (which is part of the Faculty of English) also carries out research into the Cornish language.
In 2015 a university-level course aiming at encouraging and supporting practitioners working with young children to introduce the Cornish language into their settings was launched. The "Cornish Language Practice Project (Early Years)" is a level 4 course approved by Plymouth University and run at Cornwall College. The course is not a Cornish-language course but students will be assessed on their ability to use the Cornish language constructively in their work with young children. The course will cover such topics as "Understanding Bilingualism", "Creating Resources" and "Integrating Language and Play", but the focus of the language provision will be on Cornish. A non-accredited specialist Cornish-language course has been developed to run alongside the level 4 course for those who prefer tutor support to learn the language or develop their skills for use with young children.
Cornwall's first Cornish-language crèche, , was established in 2010 at Cornwall College, Camborne. The nursery teaches children aged between two and five years alongside their parents to ensure the language is also spoken in the home.
A number of dictionaries are available in the various orthographies, including "A Learners' Cornish Dictionary in the Standard Written Form" by Steve Harris (ed.), by Ken George, by Nicholas Williams and "A Practical Dictionary of Modern Cornish" by Richard Gendall. Course books include the three-part series, , and , as well as the more recent and . Several online dictionaries are now available, including one organised by An Akademi Kernewek in SWF.
Classes and conversation groups for adults are available at several locations in Cornwall as well as in London, Cardiff and Bristol. Since the onset of the COVID-19 pandemic a number of conversation groups entitled have been held online, advertised through Facebook and other media. A surge in interest, not just from people in Cornwall but from all over the world, has meant that extra classes have been organised.
Cornish studies.
William Scawen produced a manuscript on the declining Cornish language that continually evolved until he died in 1689, aged 89. He was one of the first to realise the language was dying out and wrote detailed manuscripts which he started working on when he was 78. The only version that was ever published was a short first draft but the final version, which he worked on until his death, is a few hundred pages long. At the same time a group of scholars led by John Keigwin (nephew of William Scawen) of Mousehole tried to preserve and further the Cornish language and chose to write in Cornish. One of their number, Nicholas Boson, tells how he had been discouraged from using Cornish to servants by his mother. This group left behind a large number of translations of parts of the Bible, proverbs and songs. They were contacted by the Welsh linguist Edward Lhuyd, who came to Cornwall to study the language.
Early Modern Cornish was the subject of a study published by Lhuyd in 1707, and differs from the medieval language in having a considerably simpler structure and grammar. Such differences included sound changes and more frequent use of auxiliary verbs. The medieval language also possessed two additional tenses for expressing past events and an extended set of possessive suffixes.
John Whitaker, the Manchester-born rector of Ruan Lanihorne, studied the decline of the Cornish language. In his 1804 work "The Ancient Cathedral of Cornwall" he concluded that: "[T]he English Liturgy, was not desired by the Cornish, but forced upon them by the tyranny of England, at a time when the English language was yet unknown in Cornwall. This act of tyranny was at once gross barbarity to the Cornish people, and a death blow to the Cornish language."
Robert Williams published the first comprehensive Cornish dictionary in 1865, the . As a result of the discovery of additional ancient Cornish manuscripts, 2000 new words were added to the vocabulary by Whitley Stokes in "A Cornish Glossary". William C. Borlase published "Proverbs and Rhymes in Cornish" in 1866 while "A Glossary of Cornish Names" was produced by John Bannister in the same year. Frederick Jago published his "English–Cornish Dictionary" in 1882.
In 2002, the Cornish language gained new recognition because of the European Charter for Regional or Minority Languages. Government provision, however, came with the governmental basis of "New Public Management", which measures quantifiable results as a means of determining effectiveness, and this put enormous pressure on finding a single orthography that could be used in unison. The revival of Cornish required extensive rebuilding, and the reconstructed orthographies may be considered versions of Cornish because they are not traditional sociolinguistic variations. In the middle-to-late twentieth century, the debate over Cornish orthographies angered more people because several language groups received public funding; this caused other groups to sense that favouritism was playing a role in the debate.
A governmental policymaking structure called New Public Management (NPM) has helped the Cornish language by managing the public life of the Cornish language and people. In 2007, the Cornish Language Partnership MAGA represented separate divisions of government, whose purpose was to further enhance the Cornish Language Developmental Plan. MAGA established an Ad-Hoc Group, which resulted in three orthographies being presented. The remit of the Ad-Hoc Group was to obtain consensus among the three orthographies and then develop a "single written form". The result was the creation of a new form of Cornish, which had to be natural for both new learners and skilled speakers.
Literature.
Recent Modern Cornish literature.
In 1981, the Breton library edited (Passion of our lord), a 15th-century Cornish poem. The first complete translation of the Bible into Cornish, translated from English, was published in 2011. Another Bible translation project translating from original languages is underway. The New Testament and Psalms were made available online on YouVersion (Bible.com) and Bibles.org in July 2014 by the Bible Society.
A few small publishers produce books in Cornish which are stocked in some local bookshops, as well as in Cornish branches of Waterstones and WH Smith, although publications are becoming increasingly available on the Internet. Printed copies of these may also be found from Amazon. The Truro Waterstones hosts the annual literary awards, established by to recognise publications relating to Cornwall or in the Cornish language. In recent years, a number of Cornish translations of literature have been published, including "Alice's Adventures in Wonderland" (2009), "Around the World in Eighty Days" (2009), "Treasure Island" (2010), "The Railway Children" (2012), "Hound of the Baskervilles" (2012), "The War of the Worlds" (2012), "The Wind in the Willows" (2013), "Three Men in a Boat" (2013), "Alice in Wonderland and Through the Looking-Glass" (2014), and "A Christmas Carol" (which won the 2012 award for Cornish Language books), as well as original Cornish literature such as "" ("The Lyonesse Stone") by Craig Weatherhill. Literature aimed at children is also available, such as ("Where's Spot?"), ("The Beast of Bodmin Moor"), three "Topsy and Tim" titles, two "Tintin" titles and ("Briallen and the Alien"), which won the 2015 award for Cornish Language books for children. In 2014 , Nicholas Williams's translation of J. R. R. Tolkien's "The Hobbit", was published.
is a monthly magazine published entirely in the Cornish language. Members contribute articles on various subjects. The magazine is produced by Graham Sandercock who has been its editor since 1976.
Media.
In 1983 BBC Radio Cornwall started broadcasting around two minutes of Cornish every week. In 1987, however, they gave over 15 minutes of airtime on Sunday mornings for a programme called ('Holdall'), presented by John King, running until the early 1990s. It was eventually replaced with a five-minute news bulletin called ('The News'). The bulletin was presented every Sunday evening for many years by Rod Lyon, then Elizabeth Stewart, and currently a team presents in rotation. Pirate FM ran short bulletins on Saturday lunchtimes from 1998 to 1999. In 2006, Matthew Clarke who had presented the Pirate FM bulletin, launched a web-streamed news bulletin called ('Weekly News'), which in 2008 was merged into a new weekly magazine podcast (RanG).
Cornish television shows have included a 1982 series by Westward Television, with each episode containing a three-minute lesson in Cornish, and an eight-episode series produced by Television South West, broadcast between June and July 1984, later shown on S4C from May to July 1985, and again as a schools programme in 1986. Also by Television South West were two bilingual programmes on Cornish culture called .
In 2016 Kelly's Ice Cream of Bodmin introduced a light hearted television commercial in the Cornish language and this was repeated in 2017.
The first episode from the third season of the US television program "Deadwood" features a conversation between miners, purportedly in the Cornish language, but really in Irish. One of the miners is then shot by thugs working for businessman George Hearst who justify the murder by saying, "He come at me with his foreign gibberish."
A number of Cornish language films have been made, including "Hwerow Hweg", a 2002 drama film written and directed by Hungarian film-maker Antal Kovacs and "Trengellick Rising", a short film written and directed by Guy Potter.
Screen Cornwall works with Cornwall Council to commission a short film in the Cornish language each year, with their FylmK competition. Their website states "FylmK is an annual contemporary Cornish language short film competition, producing an imaginative and engaging film, in any genre, from distinctive and exciting filmmakers".
A monthly half-hour online TV show began in 2017 called (The Month). It contained news items about cultural events and more mainstream news stories all through Cornish. It also ran a cookery segment called "" ('Esther's Kitchen').
Music.
English composer Peter Warlock wrote a Christmas carol in Cornish (setting words by Henry Jenner). The Cornish electronic musician Aphex Twin has used Cornish names for track titles, most notably on his "Drukqs" album.
Several traditional Cornish folk songs have been collected and can be sung to various tunes. These include ", ", and "".
In 2018, the singer Gwenno Saunders released an album in Cornish, entitled , saying: "I speak Cornish with my son: if you're comfortable expressing yourself in a language, you want to share it."
Place-names and surnames.
The Cornish language features in the toponymy of Cornwall, with a significant contrast between English place-names prevalent in eastern Cornwall and Cornish place-names to the west of the Camel-Fowey river valleys, where English place-names are much less common. Hundreds of Cornish family names have an etymology in the Cornish language, the majority of which are derived from Cornish place-names. Long before the agreement of the Standard Written Form of Cornish in the 21st century, Late Cornish orthography in the Early Modern period usually followed Welsh to English transliteration, phonetically rendering C for K, I for Y, U for W, and Z for S. This meant that place names were adopted into English with spellings such as 'Porthcurno' and 'Penzance'; they are written and in the Standard Written Form of Cornish, agreed upon in 2008. Likewise words such as ('island') can be found spelled as "" as at Ince Castle. These apparent mistransliterations can, however, reveal an insight into how names and places were actually pronounced, explaining, for example, how anglicised is still pronounced [ˈlansǝn] with emphasis on the first element, perhaps from Cornish , though the "Concise Oxford Dictionary of English Place-Names" considers this unlikely.
The following tables present some examples of Cornish place names and surnames and their anglicised versions:
Samples.
From the Universal Declaration of Human Rights:
From , the Cornish anthem:
From the wrestler's oath:
6134 | 43066271 | https://en.wikipedia.org/wiki?curid=6134 | Charybdis
Charybdis (; , ; , ) is a sea monster in Greek mythology. Charybdis, along with the sea monster Scylla, appears as a challenge to epic characters such as Odysseus, Jason, and Aeneas. Scholarship locates her in the Strait of Messina.
The idiom "between Scylla and Charybdis" has come to mean being forced to choose between two similarly dangerous situations.
Description.
The sea monster Charybdis was believed to live under a small rock on one side of a narrow channel. Opposite her was Scylla, another sea monster, who lived inside a much larger rock. The sides of the strait were within an arrow-shot of each other, and sailors attempting to avoid one of them would come in reach of the other. To be "between Scylla and Charybdis" therefore means to be presented with two opposite dangers, the task being to find a route that avoids both. Three times a day, Charybdis swallowed a huge amount of water, before belching it back out again, creating large whirlpools capable of dragging a ship underwater. In some variations of the story, Charybdis was simply a large whirlpool instead of a sea monster.
Through the descriptions of Greek mythical chroniclers and Greek historians such as Thucydides, modern scholars generally agree that Charybdis was said to have been located in the Strait of Messina, off the coast of Sicily and opposite a rock on the mainland identified with Scylla. A whirlpool does exist there, caused by currents meeting, but it is dangerous only to small craft in extreme conditions.
Family.
Another myth makes Charybdis the daughter of Poseidon and Gaia, living as a loyal servant to her father.
Mythology.
Origin.
Charybdis aided her father Poseidon in his feud with her paternal uncle Zeus and, as such, helped him engulf lands and islands in water. Zeus, angry over the land she stole from him, sent her to the bottom of the sea with a thunderbolt; from the sea bed, she drank the water from the sea thrice a day, creating whirlpools. She lingered on a rock with Scylla facing her directly on another rock, making a strait.
In some myths, Charybdis was a voracious woman who stole oxen from Heracles, and was hurled by the thunderbolt of Zeus into the sea, where she retained her voracious nature.
The "Odyssey".
Odysseus faced both Charybdis and Scylla while rowing through a narrow channel. He ordered his men to avoid Charybdis, thus forcing them to pass near Scylla, which resulted in the deaths of six of his men. Later, stranded on a raft, Odysseus was swept back through the strait and passed near Charybdis. His raft was sucked into her maw, but he survived by clinging to a fig tree growing on a rock over her lair. On the next outflow of water, when his raft was expelled, Odysseus recovered it and paddled away safely.
Jason and the Argonauts.
The Argonauts were able to avoid both dangers because Hera ordered the Nereid Thetis to guide them through the perilous passage.
The "Aeneid".
In the "Aeneid", the Trojans are warned by Helenus of Scylla and Charybdis, and are advised to avoid them by sailing around Pachynus point (Cape Passero) rather than risk the strait. Later, however, they find themselves passing Etna, and have to row for their lives to escape Charybdis.
Aesop.
Aristotle mentions in his "Meteorologica" that Aesop once teased a ferryman by telling him a myth concerning Charybdis. With one gulp of the sea, she brought the mountains to view; islands appeared after the next. The third is yet to come and will dry the sea altogether, thus depriving the ferryman of his livelihood.
6136 | 7903804 | https://en.wikipedia.org/wiki?curid=6136 | Carbon monoxide
Carbon monoxide (chemical formula CO) is a poisonous, flammable gas that is colorless, odorless, tasteless, and slightly less dense than air. Carbon monoxide consists of one carbon atom and one oxygen atom connected by a triple bond. It is the simplest carbon oxide. In coordination complexes, the carbon monoxide ligand is called "carbonyl". It is a key ingredient in many processes in industrial chemistry.
The most common source of carbon monoxide is the partial combustion of carbon-containing compounds. Numerous environmental and biological sources generate carbon monoxide. In industry, carbon monoxide is important in the production of many compounds, including drugs, fragrances, and fuels.
Indoors CO is one of the most acutely toxic contaminants affecting indoor air quality. CO may be emitted from tobacco smoke and generated from malfunctioning fuel-burning stoves (wood, kerosene, natural gas, propane) and fuel-burning heating systems (wood, oil, natural gas) and from blocked flues connected to these appliances. Carbon monoxide poisoning is the most common type of fatal air poisoning in many countries.
Carbon monoxide has important biological roles across phylogenetic kingdoms. It is produced by many organisms, including humans. In mammalian physiology, carbon monoxide is a classical example of hormesis where low concentrations serve as an endogenous neurotransmitter (gasotransmitter) and high concentrations are toxic, resulting in carbon monoxide poisoning. It is isoelectronic with both cyanide anion and molecular nitrogen .
Physical and chemical properties.
Carbon monoxide is the simplest oxocarbon and is isoelectronic with other triply bonded diatomic species possessing 10 valence electrons, including the cyanide anion, the nitrosonium cation, boron monofluoride and molecular nitrogen. It has a molar mass of 28.0, which, according to the ideal gas law, makes it slightly less dense than air, whose average molar mass is 28.8.
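As an illustrative sketch, the density comparison follows directly from the ideal gas law, ρ = pM/RT; the following minimal Python example uses the approximate molar masses quoted above (the temperature and pressure are illustrative assumptions):

# Sketch: density of CO versus air from the ideal gas law, rho = p * M / (R * T).
R = 8.314                      # gas constant, J/(mol*K)
p = 101325                     # pressure, Pa (1 atm)
T = 298.15                     # temperature, K (assumed room temperature)
M_CO, M_air = 0.0280, 0.0288   # molar masses in kg/mol (28.0 and 28.8 g/mol)
density = lambda M: p * M / (R * T)
print(round(density(M_CO), 3), round(density(M_air), 3))   # about 1.145 vs 1.177 kg/m^3

The absolute values depend on the chosen temperature and pressure, but the ratio of the two densities is fixed by the ratio of the molar masses.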
The carbon and oxygen are connected by a triple bond that consists of a net two pi bonds and one sigma bond. The bond length between the carbon atom and the oxygen atom is 112.8 pm. This bond length is consistent with a triple bond, as in molecular nitrogen (N2), which has a similar bond length (109.76 pm) and nearly the same molecular mass. Carbon–oxygen double bonds are significantly longer, 120.8 pm in formaldehyde, for example. The boiling point (82 K) and melting point (68 K) are very similar to those of N2 (77 K and 63 K, respectively). The bond-dissociation energy of 1072 kJ/mol is stronger than that of N2 (942 kJ/mol) and represents the strongest chemical bond known.
The ground electronic state of carbon monoxide is a singlet state since there are no unpaired electrons.
Bonding and dipole moment.
The strength of the bond in carbon monoxide is indicated by the high frequency of its vibration, 2143 cm−1. For comparison, organic carbonyls such as ketones and esters absorb at around 1700 cm−1.
Carbon and oxygen together have a total of 10 electrons in the valence shell. Following the octet rule for both carbon and oxygen, the two atoms form a triple bond, with six shared electrons in three bonding molecular orbitals, rather than the usual double bond found in organic carbonyl compounds. Since four of the shared electrons come from the oxygen atom and only two from carbon, one bonding orbital is occupied by two electrons from oxygen, forming a dative or dipolar bond. This causes a C←O polarization of the molecule, with a small negative charge on carbon and a small positive charge on oxygen. The other two bonding orbitals are each occupied by one electron from carbon and one from oxygen, forming (polar) covalent bonds with a reverse C→O polarization since oxygen is more electronegative than carbon. In the free carbon monoxide molecule, a net negative charge δ− remains at the carbon end and the molecule has a small dipole moment of 0.122 D.
The molecule is therefore asymmetric: oxygen is more electron dense than carbon overall, yet it is slightly positively charged compared to carbon, which carries a small negative charge.
Carbon monoxide has a computed fractional bond order of 2.6, indicating that the "third" bond is important but constitutes somewhat less than a full bond. Thus, in valence bond terms, the triple-bonded structure ⁻C≡O⁺ is the most important structure, while :C=O is non-octet, but has a neutral formal charge on each atom and represents the second most important resonance contributor. Because of the lone pair and divalence of carbon in this resonance structure, carbon monoxide is often considered to be an extraordinarily stabilized carbene. Isocyanides are compounds in which the O is replaced by an NR (R = alkyl or aryl) group and have a similar bonding scheme.
If carbon monoxide acts as a ligand, the polarity of the dipole may reverse with a net negative charge on the oxygen end, depending on the structure of the coordination complex.
See also the section "Coordination chemistry" below.
Bond polarity and oxidation state.
Theoretical and experimental studies show that, despite the greater electronegativity of oxygen, the dipole moment points from the more-negative carbon end to the more-positive oxygen end. The three bonds are in fact polar covalent bonds that are strongly polarized. The calculated polarization toward the oxygen atom is 71% for the σ-bond and 77% for both π-bonds.
The oxidation state of carbon in carbon monoxide is +2 in each of these structures. It is calculated by counting all the bonding electrons as belonging to the more electronegative oxygen. Only the two non-bonding electrons on carbon are assigned to carbon. In this count, carbon then has only two valence electrons in the molecule compared to four in the free atom.
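Written out as a worked count (an illustrative restatement of the assignment described above):

\mathrm{ox.\;state\;(C)} = \underbrace{4}_{\text{valence }e^{-}\text{ of free C}} - \underbrace{2}_{\text{non-bonding }e^{-}\text{ left on C}} = +2,
\qquad
\mathrm{ox.\;state\;(O)} = \underbrace{6}_{\text{free O}} - \underbrace{(2+6)}_{\text{lone pair plus all bonding }e^{-}} = -2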
Occurrence.
Carbon monoxide occurs in many environments, usually in trace levels. Photochemical degradation of plant matter, for example, generates an estimated 60 million tons/year. Typical concentrations in parts per million are as follows:
Atmospheric presence.
Carbon monoxide (CO) is present in small amounts (about 80 ppb) in the Earth's atmosphere. Most of the rest comes from chemical reactions with organic compounds emitted by human activities and natural origins due to photochemical reactions in the troposphere that generate about 5 × 10¹² kilograms per year. Other natural sources of CO include volcanoes, forest and bushfires, and other miscellaneous forms of combustion such as fossil fuels. Small amounts are also emitted from the ocean, and from geological activity because carbon monoxide occurs dissolved in molten volcanic rock at high pressures in the Earth's mantle. Because natural sources of carbon monoxide vary from year to year, it is difficult to accurately measure natural emissions of the gas.
Carbon monoxide has an indirect effect on radiative forcing by elevating concentrations of direct greenhouse gases, including methane and tropospheric ozone. CO can react chemically with other atmospheric constituents (primarily the hydroxyl radical, •OH) that would otherwise destroy methane. Through natural processes in the atmosphere, it is oxidized to carbon dioxide and ozone. Carbon monoxide is short-lived in the atmosphere (with an average lifetime of about one to two months), and spatially variable in concentration.
Because its lifetime in the mid-troposphere is long relative to that of many other pollutants, carbon monoxide is also used as a tracer for pollutant plumes.
Astronomy.
Beyond Earth, carbon monoxide is the second-most common diatomic molecule in the interstellar medium, after molecular hydrogen. Because of its asymmetry, this polar molecule produces far brighter spectral lines than the hydrogen molecule, making CO much easier to detect. Interstellar CO was first detected with radio telescopes in 1970. It is now the most commonly used tracer of molecular gas in general in the interstellar medium of galaxies, as molecular hydrogen can only be detected using ultraviolet light, which requires space telescopes. Carbon monoxide observations provide much of the information about the molecular clouds in which most stars form.
Beta Pictoris, the second brightest star in the constellation Pictor, shows an excess of infrared emission compared to normal stars of its type, which is caused by large quantities of dust and gas (including carbon monoxide) near the star.
In the atmosphere of Venus carbon monoxide occurs as a result of the photodissociation of carbon dioxide by electromagnetic radiation of wavelengths shorter than 169 nm. It has also been identified spectroscopically on the surface of Neptune's moon Triton.
Solid carbon monoxide is a component of comets. The volatile or "ice" component of Halley's Comet is about 15% CO. At room temperature and at atmospheric pressure, carbon monoxide is actually only metastable (see Boudouard reaction) and the same is true at low temperatures where CO and CO2 are solid, but nevertheless it can exist for billions of years in comets. There is very little CO in the atmosphere of Pluto, which seems to have been formed from comets. This may be because there is (or was) liquid water inside Pluto.
Carbon monoxide can react with water to form carbon dioxide and hydrogen:
CO + H2O → CO2 + H2
This is called the water-gas shift reaction when occurring in the gas phase, but it can also take place (very slowly) in an aqueous solution.
If the hydrogen partial pressure is high enough (for instance in an underground sea), formic acid will be formed:
These reactions can take place in a few million years even at temperatures such as found on Pluto.
Pollution and health effects.
Urban pollution.
Carbon monoxide is a temporary atmospheric pollutant in some urban areas, chiefly from the exhaust of internal combustion engines (including vehicles, portable and back-up generators, lawnmowers, power washers, etc.), but also from incomplete combustion of various other fuels (including wood, coal, charcoal, oil, paraffin, propane, natural gas, and trash).
Large CO pollution events can be observed from space over cities.
Role in ground level ozone formation.
Carbon monoxide is, along with aldehydes, part of the series of cycles of chemical reactions that form photochemical smog. It reacts with the hydroxyl radical (•OH) to produce a radical intermediate •HOCO, which rapidly transfers its radical hydrogen to O2 to form the peroxy radical (HO2•) and carbon dioxide (CO2). The peroxy radical subsequently reacts with nitrogen oxide (NO) to form nitrogen dioxide (NO2) and the hydroxyl radical. NO2 gives O(3P) via photolysis, thereby forming O3 following reaction with O2.
Since hydroxyl radical is formed during the formation of NO2, the balance of the sequence of chemical reactions starting with carbon monoxide and leading to the formation of ozone is:
CO + 2 O2 + hν → CO2 + O3
Although the creation of NO2 is the critical step leading to low-level ozone formation, it also increases this ozone in another, somewhat mutually exclusive way, by reducing the quantity of NO that is available to react with ozone.
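As an illustrative sketch, the cycle can also be summed mechanically; the step list below is reconstructed from the prose above and is an assumption of the sketch. Cancelling the intermediates recovers the net reaction given above.

from collections import Counter

# Each step is (reactants, products); the photon (hv) is omitted from the bookkeeping.
steps = [
    (Counter({"CO": 1, "OH": 1}),   Counter({"HOCO": 1})),
    (Counter({"HOCO": 1, "O2": 1}), Counter({"HO2": 1, "CO2": 1})),
    (Counter({"HO2": 1, "NO": 1}),  Counter({"NO2": 1, "OH": 1})),
    (Counter({"NO2": 1}),           Counter({"NO": 1, "O": 1})),   # photolysis of NO2
    (Counter({"O": 1, "O2": 1}),    Counter({"O3": 1})),
]
lhs, rhs = Counter(), Counter()
for reactants, products in steps:
    lhs += reactants
    rhs += products
# Counter subtraction keeps only the species consumed (left) or produced (right) overall.
print(dict(lhs - rhs), "->", dict(rhs - lhs))   # net: {'CO': 1, 'O2': 2} -> {'CO2': 1, 'O3': 1}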
Indoor air pollution.
Carbon monoxide is one of the most acutely toxic indoor air contaminants. Carbon monoxide may be emitted from tobacco smoke and generated from malfunctioning fuel burning stoves (wood, kerosene, natural gas, propane) and fuel burning heating systems (wood, oil, natural gas) and from blocked flues connected to these appliances. In developed countries the main sources of indoor CO emission come from cooking and heating devices that burn fossil fuels and are faulty, incorrectly installed or poorly maintained. Appliance malfunction may be due to faulty installation or lack of maintenance and proper use. In low- and middle-income countries the most common sources of CO in homes are burning biomass fuels and cigarette smoke.
Mining.
Miners refer to carbon monoxide as "whitedamp" or the "silent killer". It can be found in confined areas of poor ventilation in both surface mines and underground mines. The most common sources of carbon monoxide in mining operations are the internal combustion engine and explosives; however, in coal mines, carbon monoxide can also be found due to the low-temperature oxidation of coal. The idiom "canary in the coal mine" derives from the use of canaries to give early warning of the presence of carbon monoxide.
Health effects.
Carbon monoxide poisoning is the most common type of fatal air poisoning in many countries. Acute exposure can also lead to long-term neurological effects such as cognitive and behavioural changes. Severe CO poisoning may lead to unconsciousness, coma and death. Chronic exposure to low concentrations of carbon monoxide may lead to lethargy, headaches, nausea, flu-like symptoms and neuropsychological and cardiovascular issues.
Chemistry.
Carbon monoxide has a wide range of functions across all disciplines of chemistry. The four premier categories of reactivity involve metal-carbonyl catalysis, radical chemistry, cation and anion chemistries.
Coordination chemistry.
Most metals form coordination complexes containing covalently attached carbon monoxide. These derivatives, which are called metal carbonyls, tend to be more robust when the metal is in lower oxidation states. For example, iron pentacarbonyl (Fe(CO)5) is an air-stable, distillable liquid. Nickel carbonyl is a metal carbonyl complex that forms by the direct combination of carbon monoxide with the metal:
Ni + 4 CO → Ni(CO)4 (1 bar, 55 °C)
These volatile complexes are often highly toxic. Some metal–CO complexes are prepared by decarbonylation of organic solvents, not from CO. For instance, iridium trichloride and triphenylphosphine react in boiling 2-methoxyethanol or DMF to afford .
As a ligand, CO binds through carbon, forming a kind of triple bond. The lone pair on the carbon atom donates electron density to form an M–CO sigma bond. The two π* orbitals on CO bind to filled metal orbitals. The effect is related to the Dewar-Chatt-Duncanson model. The effect of the quasi-triple M–C bond is reflected in the infrared spectrum of these complexes. Whereas free CO vibrates at 2143 cm−1, its complexes tend to absorb near 1950 cm−1.
Organic and main group chemistry.
In the presence of strong acids, alkenes react with carbon monoxide to give acylium cations. Hydrolysis of this species (an acylium ion) gives the carboxylic acid, a net process known as the Koch–Haaf reaction. In the Gattermann–Koch reaction, arenes are converted to benzaldehyde derivatives in the presence of CO, , and HCl.
A mixture of hydrogen gas and CO reacts with alkenes to give aldehydes. The process requires the presence of metal catalysts.
With main group reagents, CO undergoes several noteworthy reactions. Chlorination of CO is the industrial route to the important compound phosgene. With borane CO forms the adduct , which is isoelectronic with the acylium cation . CO reacts with sodium to give products resulting from C−C coupling such as sodium acetylenediolate . It reacts with molten potassium to give a mixture of an organometallic compound, potassium acetylenediolate , potassium benzenehexolate , and potassium rhodizonate .
The compounds cyclohexanehexone or triquinoyl () and cyclopentanepentone or leuconic acid (), which so far have been obtained only in trace amounts, can be regarded as polymers of carbon monoxide. At pressures exceeding 5 GPa, carbon monoxide converts to polycarbonyl, a solid polymer that is metastable at atmospheric pressure but is explosive.
Laboratory preparation.
Carbon monoxide is conveniently produced in the laboratory by the dehydration of formic acid or oxalic acid, for example with concentrated sulfuric acid. Another method is heating an intimate mixture of powdered zinc metal and calcium carbonate, which releases CO and leaves behind zinc oxide and calcium oxide:
Zn + CaCO3 → ZnO + CaO + CO
Silver nitrate and iodoform also afford carbon monoxide:
Finally, metal oxalate salts release CO upon heating, leaving a carbonate as byproduct:
Production.
Thermal combustion is the most common source of carbon monoxide. Carbon monoxide is produced from the partial oxidation of carbon-containing compounds; it forms when there is not enough oxygen to produce carbon dioxide (CO2), such as when operating a stove or an internal combustion engine in an enclosed space.
A large quantity of CO byproduct is formed during the oxidative processes for the production of chemicals. For this reason, the process off-gases have to be purified.
Many methods have been developed for carbon monoxide production.
Industrial production.
A major industrial source of CO is producer gas, a mixture containing mostly carbon monoxide and nitrogen, formed by combustion of carbon in air at high temperature when there is an excess of carbon. In an oven, air is passed through a bed of coke. The CO2 initially produced equilibrates with the remaining hot carbon to give CO. The reaction of CO2 with carbon to give CO is described as the Boudouard reaction. Above 800 °C, CO is the predominant product:
CO2 + C → 2 CO (ΔHr = 170 kJ/mol)
Another source is "water gas", a mixture of hydrogen and carbon monoxide produced via the endothermic reaction of steam and carbon:
C + H2O → CO + H2 (ΔHr = 131 kJ/mol)
Other similar "synthesis gases" can be obtained from natural gas and other fuels.
Carbon monoxide can also be produced by high-temperature electrolysis of carbon dioxide with solid oxide electrolyzer cells. One method developed at DTU Energy uses a cerium oxide catalyst and does not have any issues of fouling of the catalyst.
Carbon monoxide is also a byproduct of the reduction of metal oxide ores with carbon, shown in a simplified form as follows:
MO + C → M + CO
Carbon monoxide is also produced by the direct oxidation of carbon in a limited supply of oxygen or air.
Since CO is a gas, the reduction process can be driven by heating, exploiting the positive (favorable) entropy of reaction. The Ellingham diagram shows that CO formation is favored over in high temperatures.
Use.
Chemical industry.
Carbon monoxide is an industrial gas that has many applications in bulk chemicals manufacturing. Large quantities of aldehydes are produced by the hydroformylation reaction of alkenes, carbon monoxide, and H2. Hydroformylation is coupled to the Shell higher olefin process to give precursors to detergents.
Phosgene, useful for preparing isocyanates, polycarbonates, and polyurethanes, is produced by passing purified carbon monoxide and chlorine gas through a bed of porous activated carbon, which serves as a catalyst. World production of this compound was estimated to be 2.74 million tonnes in 1989.
Methanol is produced by the hydrogenation of carbon monoxide. In a related reaction, the hydrogenation of carbon monoxide is coupled to C−C bond formation, as in the Fischer–Tropsch process where carbon monoxide is hydrogenated to liquid hydrocarbon fuels. This technology allows coal or biomass to be converted to diesel.
In the Cativa process, carbon monoxide and methanol react in the presence of a homogeneous iridium catalyst and hydroiodic acid to give acetic acid. This process is responsible for most of the industrial production of acetic acid.
Metallurgy.
Carbon monoxide is a strong reducing agent and has been used in pyrometallurgy to reduce metals from ores since ancient times. Carbon monoxide strips oxygen off metal oxides, reducing them to pure metal at high temperatures and forming carbon dioxide in the process. Carbon monoxide is not usually supplied as such, in the gaseous phase, to the reactor; rather, it is formed at high temperature in the presence of an oxygen-carrying ore and a carbonaceous agent such as coke. The blast furnace process is a typical example of the reduction of metal from ore with carbon monoxide.
Likewise, blast furnace gas collected at the top of the blast furnace still contains some 10% to 30% carbon monoxide, and is used as fuel in Cowper stoves and in Siemens-Martin furnaces for open-hearth steelmaking.
Proposed use as a rocket fuel.
Carbon monoxide has been proposed for use as a fuel on Mars by NASA researcher Geoffrey Landis. Carbon monoxide/oxygen engines have been suggested for early surface transportation use as both carbon monoxide and oxygen can be straightforwardly produced from the carbon dioxide atmosphere of Mars by zirconia electrolysis, without using any Martian water resources to obtain hydrogen, which would be needed to make methane or any hydrogen-based fuel.
Landis also proposed manufacturing the fuel from the similar carbon dioxide atmosphere of Venus for a sample return mission, in combination with solar-powered UAVs and rocket balloon ascent.
Electrochemistry.
Carbon monoxide is used in electrochemistry to study the structure of electrodes thanks to its strong affinity to some metals used as electrocatalysts, through a technique known as CO stripping.
Biological and physiological properties.
Physiology.
Carbon monoxide is a bioactive molecule which acts as a gaseous signaling molecule. It is naturally produced by many enzymatic and non-enzymatic pathways, the best understood of which is the catabolic action of heme oxygenase on the heme derived from hemoproteins such as hemoglobin. Following the first report that carbon monoxide is a normal neurotransmitter in 1993, carbon monoxide has received significant clinical attention as a biological regulator.
Because of carbon monoxide's role in the body, abnormalities in its metabolism have been linked to a variety of diseases, including neurodegenerations, hypertension, heart failure, and pathological inflammation. In many tissues, carbon monoxide acts as an anti-inflammatory agent and vasodilator and encourages neovascular growth. In animal model studies, carbon monoxide reduced the severity of experimentally induced bacterial sepsis, pancreatitis, hepatic ischemia/reperfusion injury, colitis, osteoarthritis, lung injury, lung transplantation rejection, and neuropathic pain while promoting skin wound healing. Therefore, there is significant interest in the therapeutic potential of carbon monoxide becoming a pharmaceutical agent and a clinical standard of care.
Medicine.
Studies involving carbon monoxide have been conducted in many laboratories throughout the world for its anti-inflammatory and cytoprotective properties. These properties have the potential to be used to prevent the development of a series of pathological conditions including ischemia reperfusion injury, transplant rejection, atherosclerosis, severe sepsis, severe malaria, or autoimmunity. Many pharmaceutical drug delivery initiatives have developed methods to safely administer carbon monoxide, and subsequent controlled clinical trials have evaluated the therapeutic effect of carbon monoxide.
Microbiology.
Microbiota may also utilize carbon monoxide as a gasotransmitter. Carbon monoxide sensing is a signaling pathway facilitated by proteins such as CooA. The scope of the biological roles for carbon monoxide sensing is still unknown.
The human microbiome produces, consumes, and responds to carbon monoxide. For example, in certain bacteria, carbon monoxide is produced via the reduction of carbon dioxide by the enzyme carbon monoxide dehydrogenase with favorable bioenergetics to power downstream cellular operations. In another example, carbon monoxide is a nutrient for methanogenic archaea which reduce it to methane using hydrogen.
Carbon monoxide has certain antimicrobial properties which have been studied to treat against infectious diseases.
Food science.
Carbon monoxide is used in modified atmosphere packaging systems in the US, mainly with fresh meat products such as beef, pork, and fish to keep them looking fresh. The benefit is two-fold: carbon monoxide protects against microbial spoilage and it enhances the meat color for consumer appeal. The carbon monoxide combines with myoglobin to form carboxymyoglobin, a bright-cherry-red pigment. Carboxymyoglobin is more stable than the oxygenated form of myoglobin, oxymyoglobin, which can become oxidized to the brown pigment metmyoglobin. This stable red color can persist much longer than in normally packaged meat. Typical levels of carbon monoxide used in the facilities that use this process are between 0.4% and 0.5%.
The technology was first given "generally recognized as safe" (GRAS) status by the U.S. Food and Drug Administration (FDA) in 2002 for use as a secondary packaging system, and does not require labeling. In 2004, the FDA approved CO as primary packaging method, declaring that CO does not mask spoilage odor. The process is currently unauthorized in many other countries, including Japan, Singapore, and the European Union.
Weaponization.
In ancient history, Hannibal executed Roman prisoners with coal fumes during the Second Punic War.
Carbon monoxide had been used for genocide during the Holocaust at some extermination camps, the most notable by gas vans in Chełmno, and in the Action T4 "euthanasia" program.
History.
Prehistory.
Humans have maintained a complex relationship with carbon monoxide since first learning to control fire circa 800,000 BC. Early humans probably discovered the toxicity of carbon monoxide poisoning upon introducing fire into their dwellings. The early development of metallurgy and smelting technologies, emerging circa 6,000 BC through the Bronze Age, likewise plagued humankind with carbon monoxide exposure. Apart from the toxicity of carbon monoxide, indigenous Native Americans may have experienced the neuroactive properties of carbon monoxide through shamanistic fireside rituals.
Ancient history.
Early civilizations developed mythological tales to explain the origin of fire, such as Prometheus from Greek mythology who shared fire with humans. Aristotle (384–322 BC) first recorded that burning coals produced toxic fumes. Greek physician Galen (129–199 AD) speculated that there was a change in the composition of the air that caused harm when inhaled, and many others of the era developed a basis of knowledge about carbon monoxide in the context of coal fume toxicity. Cleopatra may have died from carbon monoxide poisoning.
Pre–industrial revolution.
Georg Ernst Stahl mentioned "carbonarii halitus" in 1697 in reference to toxic vapors thought to be carbon monoxide. Friedrich Hoffmann conducted the first modern scientific investigation into carbon monoxide poisoning from coal in 1716. Herman Boerhaave conducted the first scientific experiments on the effect of carbon monoxide (coal fumes) on animals in the 1730s.
Joseph Priestley is considered to have first synthesized carbon monoxide in 1772. Carl Wilhelm Scheele similarly isolated carbon monoxide from charcoal in 1773 and thought it could be the carbonic entity making fumes toxic. Torbern Bergman isolated carbon monoxide from oxalic acid in 1775. Later in 1776, the French chemist de Lassone produced CO by heating zinc oxide with coke, but mistakenly concluded that the gaseous product was hydrogen, as it burned with a blue flame. In the presence of oxygen, including atmospheric concentrations, carbon monoxide burns with a blue flame, producing carbon dioxide. Antoine Lavoisier conducted similarly inconclusive experiments to those of Lassone in 1777. The gas was identified as a compound containing carbon and oxygen by William Cruickshank in 1800.
In 1793, Thomas Beddoes and James Watt recognized that carbon monoxide (as hydrocarbonate) brightens venous blood. Watt suggested coal fumes could act as an antidote to the oxygen in blood, and in 1796 Beddoes and Watt likewise suggested that hydrocarbonate has a greater affinity for animal fiber than oxygen. In 1854, Adrien Chenot similarly suggested that carbon monoxide removes the oxygen from blood and is then oxidized by the body to carbon dioxide. The mechanism of carbon monoxide poisoning is widely credited to Claude Bernard, whose memoirs, begun in 1846 and published in 1857, phrased it as "prevents arterial blood from becoming venous". Felix Hoppe-Seyler independently published similar conclusions the following year.
Advent of industrial chemistry.
Carbon monoxide gained recognition as an essential reagent in the 1900s. Three industrial processes illustrate its evolution in industry. In the Fischer–Tropsch process, coal and related carbon-rich feedstocks are converted into liquid fuels via the intermediacy of CO. Originally developed as part of the German war effort to compensate for their lack of domestic petroleum, this technology continues today. Also in Germany, a mixture of CO and hydrogen was found to combine with olefins to give aldehydes. This process, called hydroformylation, is used to produce many large scale chemicals such as surfactants as well as specialty compounds that are popular fragrances and drugs. For example, CO is used in the production of vitamin A. In a third major process, attributed to researchers at Monsanto, CO combines with methanol to give acetic acid. Most acetic acid is produced by the Cativa process. Hydroformylation and the acetic acid syntheses are two of myriad carbonylation processes.
6138 | 17350134 | https://en.wikipedia.org/wiki?curid=6138 | Conjecture
In mathematics, a conjecture is a proposition that is proffered on a tentative basis without proof. Some conjectures, such as the Riemann hypothesis or Fermat's conjecture (now a theorem, proven in 1995 by Andrew Wiles), have shaped much of mathematical history as new areas of mathematics are developed in order to prove them.
Resolution of conjectures.
Proof.
Formal mathematics is based on "provable" truth. In mathematics, any number of cases supporting a universally quantified conjecture, no matter how large, is insufficient for establishing the conjecture's veracity, since a single counterexample could immediately bring down the conjecture. Mathematical journals sometimes publish the minor results of research teams having extended the search for a counterexample farther than previously done. For instance, the Collatz conjecture, which concerns whether or not certain sequences of integers terminate, has been tested for all integers up to 1.2 × 10¹² (1.2 trillion). However, the failure to find a counterexample after extensive search does not constitute a proof that the conjecture is true—because the conjecture might be false but with a very large minimal counterexample.
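As an illustrative sketch of this kind of finite checking (the bound used here is tiny compared with the searches mentioned above, and passing it proves nothing about the conjecture):

def collatz_reaches_one(n, max_steps=100000):
    # Follow n -> n/2 when n is even, n -> 3n + 1 when n is odd; report whether 1 is reached.
    for _ in range(max_steps):
        if n == 1:
            return True
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return False   # gave up after max_steps; this would only flag a candidate for closer study

# Exhaustively verify every starting value up to a small bound.
print(all(collatz_reaches_one(n) for n in range(1, 10001)))   # expected: True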
Nevertheless, mathematicians often regard a conjecture as strongly supported by evidence even though not yet proved. That evidence may be of various kinds, such as verification of consequences of it or strong interconnections with known results.
A conjecture is considered proven only when it has been shown that it is logically impossible for it to be false. There are various methods of doing so; see methods of mathematical proof for more details.
One method of proof, applicable when there are only a finite number of cases that could lead to counterexamples, is known as "brute force": in this approach, all possible cases are considered and shown not to give counterexamples. On some occasions, the number of cases is quite large, in which case a brute-force proof may require as a practical matter the use of a computer algorithm to check all the cases. For example, the validity of the 1976 and 1997 brute-force proofs of the four color theorem by computer was initially doubted, but was eventually confirmed in 2005 by theorem-proving software.
When a conjecture has been proven, it is no longer a conjecture but a theorem. Many important theorems were once conjectures, such as the Geometrization theorem (which resolved the Poincaré conjecture), Fermat's Last Theorem, and others.
Disproof.
Conjectures disproven through counterexample are sometimes referred to as "false conjectures" (cf. the Pólya conjecture and Euler's sum of powers conjecture). In the case of the latter, the first counterexample found for the n=4 case involved numbers in the millions, although it has been subsequently found that the minimal counterexample is actually smaller.
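A disproof of this kind reduces to exact integer arithmetic that anyone can repeat. The short Python check below verifies the classic fifth-power counterexample of Lander and Parkin to Euler's sum of powers conjecture, together with Frye's smaller fourth-power counterexample mentioned above (both identities are well known; the snippet itself is only illustrative).

```python
# Lander and Parkin (1966): four fifth powers summing to a fifth power.
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5

# Frye's minimal fourth-power counterexample, smaller than the first one found.
assert 95800**4 + 217519**4 + 414560**4 == 422481**4
print("Both counterexamples check out.")
```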
Independent conjectures.
Not every conjecture ends up being proven true or false. The continuum hypothesis, which tries to ascertain the relative cardinality of certain infinite sets, was eventually shown to be independent from the generally accepted set of Zermelo–Fraenkel axioms of set theory. It is therefore possible to adopt this statement, or its negation, as a new axiom in a consistent manner (much as Euclid's parallel postulate can be taken either as true or false in an axiomatic system for geometry).
In this case, if a proof uses this statement, researchers will often look for a new proof that "does not" require the hypothesis (in the same way that it is desirable that statements in Euclidean geometry be proved using only the axioms of neutral geometry, i.e. without the parallel postulate). The one major exception to this in practice is the axiom of choice, as the majority of researchers usually do not worry whether a result requires it—unless they are studying this axiom in particular.
Conditional proofs.
Sometimes, a conjecture is called a "hypothesis" when it is used frequently and repeatedly as an assumption in proofs of other results. For example, the Riemann hypothesis is a conjecture from number theory that — amongst other things — makes predictions about the distribution of prime numbers. Few number theorists doubt that the Riemann hypothesis is true. In fact, in anticipation of its eventual proof, some have even proceeded to develop further proofs which are contingent on the truth of this conjecture. These are called "conditional proofs": the conjectures assumed appear in the hypotheses of the theorem, for the time being.
These "proofs", however, would fall apart if it turned out that the hypothesis was false, so there is considerable interest in verifying the truth or falsity of conjectures of this type.
Important examples.
Fermat's Last Theorem.
In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers formula_1, "formula_2", and "formula_3" can satisfy the equation "formula_4" for any integer value of "formula_5" greater than two.
This theorem was first conjectured by Pierre de Fermat in 1637 in the margin of a copy of "Arithmetica", where he claimed that he had a proof that was too large to fit in the margin. The first successful proof was released in 1994 by Andrew Wiles, and formally published in 1995, after 358 years of effort by mathematicians. The unsolved problem stimulated the development of algebraic number theory in the 19th century, and the proof of the modularity theorem in the 20th century. It is among the most notable theorems in the history of mathematics, and prior to its proof it was in the "Guinness Book of World Records" for "most difficult mathematical problems".
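A brute-force search over small values, as in the hedged Python sketch below (the exponent and size bounds are arbitrary), finds no solutions, which is consistent with the theorem but, as discussed above, would never suffice to prove it.

```python
from itertools import product

def fermat_search(max_n=6, max_val=60):
    """Look for x**n + y**n == z**n with 2 < n <= max_n and small x, y, z."""
    hits = []
    for n in range(3, max_n + 1):
        nth_powers = {v ** n: v for v in range(1, max_val + 1)}
        for x, y in product(range(1, max_val + 1), repeat=2):
            z = nth_powers.get(x ** n + y ** n)
            if z is not None:
                hits.append((x, y, z, n))
    return hits

print(fermat_search())  # prints [] -- no counterexamples in this range
```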
Four color theorem.
In mathematics, the four color theorem, or the four color map theorem, states that given any separation of a plane into contiguous regions, producing a figure called a "map", no more than four colors are required to color the regions of the map—so that no two adjacent regions have the same color. Two regions are called "adjacent" if they share a common boundary that is not a corner, where corners are the points shared by three or more regions. For example, in the map of the United States of America, Utah and Arizona are adjacent, but Utah and New Mexico, which only share a point that also belongs to Arizona and Colorado, are not.
Möbius mentioned the problem in his lectures as early as 1840. The conjecture was first proposed on October 23, 1852 when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. The five color theorem, which has a short elementary proof, states that five colors suffice to color a map and was proven in the late 19th century; however, proving that four colors suffice turned out to be significantly harder. A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852.
The four color theorem was ultimately proven in 1976 by Kenneth Appel and Wolfgang Haken. It was the first major theorem to be proved using a computer. Appel and Haken's approach started by showing that there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample to the four color theorem (i.e., if they did appear, one could make a smaller counterexample). Appel and Haken used a special-purpose computer program to confirm that each of these maps had this property. Additionally, any map that could potentially be a counterexample must have a portion that looks like one of these 1,936 maps. Showing this with hundreds of pages of hand analysis, Appel and Haken concluded that no smallest counterexample exists, because any such counterexample would have to contain, yet could not contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all and that the theorem is therefore true. Initially, their proof was not accepted by all mathematicians, because the computer-assisted proof was infeasible for a human to check by hand. However, the proof has since gained wider acceptance, although doubts still remain.
Hauptvermutung.
The Hauptvermutung (German for main conjecture) of geometric topology is the conjecture that any two triangulations of a triangulable space have a common refinement, a single triangulation that is a subdivision of both of them. It was originally formulated in 1908, by Steinitz and Tietze.
This conjecture is now known to be false. The non-manifold version was disproved by John Milnor in 1961 using Reidemeister torsion.
The manifold version is true in dimensions at most 3. The cases of dimensions 2 and 3 were proved by Tibor Radó and Edwin E. Moise in the 1920s and 1950s, respectively.
Weil conjectures.
In mathematics, the Weil conjectures were some highly influential proposals by André Weil on the generating functions (known as local zeta-functions) derived from counting the number of points on algebraic varieties over finite fields.
A variety "V" over a finite field with "q" elements has a finite number of rational points, as well as points over every finite field with "q""k" elements containing that field. The generating function has coefficients derived from the numbers "N""k" of points over the (essentially unique) field with "q""k" elements.
Weil conjectured that such "zeta-functions" should be rational functions, should satisfy a form of functional equation, and should have their zeroes in restricted places. The last two parts were quite consciously modeled on the Riemann zeta function and Riemann hypothesis. The rationality was proved by Bernard Dwork, the functional equation by Alexander Grothendieck, and the analogue of the Riemann hypothesis was proved by Pierre Deligne.
Poincaré conjecture.
In mathematics, the Poincaré conjecture is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. The conjecture states that every simply connected, closed 3-manifold is homeomorphic to the 3-sphere. An equivalent form of the conjecture involves a coarser form of equivalence than homeomorphism called homotopy equivalence: if a 3-manifold is "homotopy equivalent" to the 3-sphere, then it is necessarily "homeomorphic" to it.
Originally conjectured by Henri Poincaré in 1904, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. An analogous result has been known in higher dimensions for some time.
After nearly a century of effort by mathematicians, Grigori Perelman presented a proof of the conjecture in three papers made available in 2002 and 2003 on arXiv. The proof followed on from the program of Richard S. Hamilton to use the Ricci flow to attempt to solve the problem. Hamilton later introduced a modification of the standard Ricci flow, called "Ricci flow with surgery" to systematically excise singular regions as they develop, in a controlled way, but was unable to prove this method "converged" in three dimensions. Perelman completed this portion of the proof. Several teams of mathematicians have verified that Perelman's proof is correct.
The Poincaré conjecture, before being proven, was one of the most important open questions in topology.
Riemann hypothesis.
In mathematics, the Riemann hypothesis, proposed by Bernhard Riemann in 1859, is a conjecture that the non-trivial zeros of the Riemann zeta function all have real part 1/2. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields.
The Riemann hypothesis implies results about the distribution of prime numbers. Along with suitable generalizations, some mathematicians consider it the most important unresolved problem in pure mathematics. The Riemann hypothesis, along with the Goldbach conjecture, is part of Hilbert's eighth problem in David Hilbert's list of 23 unsolved problems; it is also one of the Clay Mathematics Institute Millennium Prize Problems.
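The non-trivial zeros can be computed numerically. A minimal sketch, assuming the mpmath Python library is available: its zetazero routine locates the n-th zero by searching along the critical line, so the check below is a consistency demonstration of the statement rather than evidence for it.

```python
from mpmath import mp, zetazero

mp.dps = 30  # working precision in decimal places

for n in range(1, 6):
    rho = zetazero(n)   # n-th non-trivial zero of the zeta function
    print(n, rho)       # e.g. the first zero is about 0.5 + 14.1347i
    assert abs(rho.real - mp.mpf("0.5")) < mp.mpf("1e-25")
```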
P versus NP problem.
The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer; it is widely conjectured that the answer is no. It was essentially first mentioned in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether a certain NP-complete problem could be solved in quadratic or linear time. The precise statement of the P=NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" and is considered by many to be the most important open problem in the field. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution.
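The gap between verifying and finding a solution can be made concrete with the NP-complete subset-sum problem. In the hedged Python sketch below (the example numbers are arbitrary), checking a proposed answer takes time polynomial in the input size, whereas the only obvious way to find one examines exponentially many candidate subsets.

```python
from itertools import combinations

def verify(nums, target, certificate):
    """Polynomial-time check of a proposed solution (a set of indices)."""
    idxs = set(certificate)
    return idxs <= set(range(len(nums))) and sum(nums[i] for i in idxs) == target

def solve(nums, target):
    """Brute-force search; examines up to 2**len(nums) subsets."""
    for r in range(len(nums) + 1):
        for idxs in combinations(range(len(nums)), r):
            if sum(nums[i] for i in idxs) == target:
                return list(idxs)
    return None

nums, target = [3, 34, 4, 12, 5, 2], 9
certificate = solve(nums, target)                      # slow in general
print(certificate, verify(nums, target, certificate))  # fast to re-check
```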
In other sciences.
Karl Popper pioneered the use of the term "conjecture" in scientific philosophy. Conjecture is related to hypothesis, which in science refers to a testable conjecture.
|
6139
|
1005449
|
https://en.wikipedia.org/wiki?curid=6139
|
Christoph Ludwig Agricola
|
Christoph Ludwig Agricola (5 November 1665 – 8 August 1724) was a German landscape painter and etcher. He was born and died in Regensburg (Ratisbon).
Life and career.
Christoph Ludwig Agricola was born on 5 November 1665 in Regensburg in Germany. He trained, as many painters of the period did, by studying nature.
He spent a great part of his life in travel visiting England, the Netherlands and France, and residing for a considerable period in Naples, where he may have been influenced by Nicolas Poussin. He also stayed in Venice for several years around 1712, where he painted many works for Zaccaria Sagredo.
He died in Regensburg in 1724.
Work.
Although he primarily worked in gouache and oils, documentary sources show that he also produced a small number of etchings. He was a good draughtsman, used warm lighting and exhibited a warm, masterly brushstroke.
His numerous landscapes, chiefly cabinet pictures, are remarkable for their fidelity to nature, and especially for their skilful representation of varied phases of climate, especially nocturnal scenes and weather phenomena like thunderstorms. In composition, his style shows the influence of Nicolas Poussin: Agricola's work often displays idealistic scenes like Poussin's work. In light and colour Agricola's work resembles that of Claude Lorrain. His compositions often include ruins of ancient buildings in the foreground, but his favourite foreground figures were men dressed in Oriental attire. He also produced a series of etchings of birds.
His pictures can be found in Dresden, Braunschweig, Vienna, Florence, Naples and many other locations in Germany and Italy.
Legacy.
He probably tutored the artist Johann Theile and had a strong influence on him. Art historians have also noted that the work of the landscape painter Christian Johann Bendeler (1699–1728) was influenced by Agricola.
|
6140
|
7903804
|
https://en.wikipedia.org/wiki?curid=6140
|
Claudius
|
Tiberius Claudius Caesar Augustus Germanicus (1 August 10 BC – 13 October AD 54), or Claudius, was a Roman emperor, ruling from AD 41 to 54. A member of the Julio-Claudian dynasty, Claudius was born to Drusus and Antonia Minor at Lugdunum in Roman Gaul, where his father was stationed as a military legate. He was the first Roman emperor to be born outside Italy.
As he had a limp and slight deafness due to an illness he suffered when young, he was ostracized by his family and was excluded from public office until his consulship (which was shared with his nephew, Caligula, in 37). Claudius's infirmity probably saved him from the fate of many other nobles during the purges throughout the reigns of Tiberius and Caligula, as potential enemies did not see him as a serious threat. His survival led to him being declared emperor by the Praetorian Guard after Caligula's assassination, at which point he was the last adult male of his family.
Despite his lack of experience, Claudius was an able and efficient administrator. He expanded the imperial bureaucracy to include freedmen, and helped restore the empire's finances after the excesses of Caligula's reign. He was also an ambitious builder, constructing new roads, aqueducts, and canals across the Empire. During his reign, the Empire started its successful conquest of Britain. Having a personal interest in law, he presided at public trials, and issued edicts daily. He was seen as vulnerable throughout his reign, particularly by elements of the nobility. Claudius was constantly forced to shore up his position, which resulted in the deaths of many senators. Those events damaged his reputation among the ancient writers, though more recent historians have revised that opinion. Many authors contend that he was murdered by his own wife, Agrippina the Younger. After his death at the age of 63, his grandnephew and legally adopted step-son, Nero, succeeded him as emperor.
Name.
As a consequence of Roman customs, society, and personal preference, Claudius' full name varied throughout his life:
Family and youth.
Early life.
Claudius was born on 1 August 10 BC at Lugdunum (modern Lyon, France). He had two older siblings, Germanicus and Livilla. His mother, Antonia Minor, may have had two other children who died young. Claudius's maternal grandparents were Mark Antony and Octavia Minor, Augustus's sister, and he was therefore the great-great-grandnephew of Gaius Julius Caesar. His paternal grandparents were Livia, Augustus's third wife, and Tiberius Claudius Nero. During his reign, Claudius revived the rumour that his father Nero Claudius Drusus was actually the illegitimate son of Augustus, to give the appearance that Augustus was Claudius's paternal grandfather.
In 9 BC, Claudius's father Drusus died on campaign in Germania from a fall from a horse. Claudius was then raised by his mother, who never remarried. When his disability became evident, the relationship with his family turned sour. Antonia referred to him as a monster, and used him as a standard for stupidity. She seems to have passed her son off to his grandmother Livia for a number of years.
Livia was a little kinder, but nevertheless sent Claudius short, angry letters of reproof. He was put under the care of a former mule-driver to keep him disciplined, under the logic that his condition was due to laziness and a lack of willpower. However, by the time he reached his teenage years, his symptoms apparently waned and his family began to take some notice of his scholarly interests. In AD 7, Livy was hired to tutor Claudius in history, with the assistance of Sulpicius Flavus. He spent a lot of his time with the latter, as well as the philosopher Athenodorus. Augustus, according to a letter, was surprised at the clarity of Claudius's oratory.
Public life.
Claudius' work as a historian damaged his prospects for advancement in public life. According to Vincent Scramuzza and others, he began work on a history of the Civil Wars that was either too truthful or too critical of Octavian, then reigning as Caesar Augustus. In either case, it was far too early for such an account, and may have only served to remind Augustus that Claudius was Antony's descendant. His mother and grandmother quickly put a stop to it, and this may have convinced them that Claudius was not fit for public office, since he could not be trusted to toe the existing party line.
When Claudius returned to the narrative later in life, he skipped over the wars of the Second Triumvirate altogether; but the damage was done, and his family pushed him into the background. When the Arch of Pavia was erected to honour the Imperial clan in AD 8, Claudius's name (now Tiberius Claudius Nero Germanicus after his elevation to "pater familias" of the Claudii Nerones on the adoption of his brother) was inscribed on the edge, past the deceased princes, Gaius and Lucius, and Germanicus's children. There is some speculation that the inscription was added by Claudius himself decades later, and that he originally did not appear at all.
When Augustus died in AD 14, Claudius – then aged 23 – appealed to his uncle Tiberius to allow him to begin the "cursus honorum". Tiberius, the new Emperor, responded by granting Claudius consular ornaments. Claudius requested office once more and was snubbed. Since the new emperor was no more generous than the old, Claudius gave up hope of public office and retired to a scholarly, private life.
Despite the disdain of the Imperial family, it seems that from very early on the general public respected Claudius. At Augustus's death, the "equites", or knights, chose Claudius to head their delegation. When his house burned down, the Senate demanded it be rebuilt at public expense. They also requested that Claudius be allowed to debate in the Senate. Tiberius turned down both motions, but the sentiment remained.
During the period immediately after the death of Tiberius's son, Drusus, Claudius was pushed by some quarters as a potential heir to the throne. This again suggests the political nature of his exclusion from public life. However, as this was also the period during which the power and terror of the commander of the Praetorian Guard, Sejanus, was at its peak, Claudius chose to downplay this possibility. After the death of Tiberius, the new emperor Caligula (the son of Claudius's brother Germanicus) recognized Claudius to be of some use. He appointed Claudius his co-consul in 37 to emphasize the memory of Caligula's deceased father Germanicus.
Despite this, Caligula tormented his uncle: playing practical jokes, charging him enormous sums of money, humiliating him before the Senate, and the like. According to Cassius Dio, Claudius became sickly and thin by the end of Caligula's reign, most likely due to stress. A possible surviving portrait of Claudius from this period may support this.
Assassination of Caligula and Declaration of Claudius as Emperor (AD 41).
On 24 January 41, Caligula was assassinated in a conspiracy involving Cassius Chaerea – a military tribune in the Praetorian Guard – and several senators. There is no evidence that Claudius had a direct hand in the assassination, although it has been argued that he knew about the plot – particularly since he left the scene of the crime shortly before his nephew was murdered. However, after the deaths of Caligula's wife and daughter, it became apparent that Cassius intended to go beyond the terms of the conspiracy and wipe out the Imperial family.
In the chaos following the murder, Claudius witnessed the German guard cut down several uninvolved noblemen, including many of his friends. He fled to the palace to hide. According to tradition, a Praetorian named Gratus found him hiding behind a curtain and suddenly proclaimed him "princeps". Claudius was spirited away to the Praetorian camp and put under their protection.
The Senate met and debated a change of government, but this devolved into an argument over which of them would be the new "princeps". When they heard of the Praetorians' claim, they demanded that Claudius be delivered to them for approval, but he refused, sensing the danger that would come with complying. Some historians, particularly Josephus, claim that Claudius was directed in his actions by the Judaean King Herod Agrippa. However, an earlier version of events by the same ancient author downplays Agrippa's role so it remains uncertain. Eventually the Senate was forced to give in. In return, Claudius granted a general amnesty, although he executed a few junior officers involved in the conspiracy. The actual assassins, including Cassius Chaerea and Julius Lupus, the murderer of Caligula's wife and daughter, were put to death to ensure Claudius's own safety and as a future deterrent.
Since Claudius was the first emperor proclaimed on the initiative of the Praetorian Guard instead of the Senate, his repute suffered at the hands of commentators (such as Seneca). Moreover, they accused him of being the first emperor to resort to bribery as a means to secure army loyalty, because he rewarded the soldiers of the Praetorian Guard that had elevated him with 15,000 sesterces, although Tiberius and Augustus had both left gifts to the army and guard in their wills, and upon Caligula's death the same would have been expected, even if no will existed. Claudius remained grateful to the guard, issuing coins with tributes to the Praetorians in the early part of his reign.
Emperor.
Claudius took several steps to legitimize his rule against potential usurpers, most of them emphasizing his place within the Julio-Claudian family. He adopted the name "Caesar" as a cognomen, as the name still carried great weight with the populace. To do so, he dropped the cognomen "Nero", which he had adopted as "pater familias" of the Claudii Nerones when his brother Germanicus was adopted. As Pharaoh of Egypt, Claudius adopted the royal titulary "Tiberios Klaudios, Autokrator Heqaheqau Meryasetptah, Kanakht Djediakhshuemakhet" ("Tiberius Claudius, Emperor and ruler of rulers, beloved of Isis and Ptah, the strong bull of the stable moon on the horizon").
While Claudius had never been formally adopted either by Augustus or his successors, he was nevertheless the grandson of Augustus's sister Octavia, and so he felt that he had the right of family. He also adopted the name "Augustus" as the two previous emperors had done at their accessions. He kept the honorific "Germanicus" to display the connection with his heroic brother. He deified his paternal grandmother Livia to highlight her position as wife of the divine Augustus. Claudius frequently used the term "filius Drusi" (son of Drusus) in his titles, to remind the people of his legendary father and lay claim to his reputation.
Pliny the Elder noted, according to the 1938 Loeb Classical Library translation by Harris Rackham, "... many people do not allow any gems in a signet-ring, and seal with the gold itself; this was a fashion invented when Claudius Cæsar was emperor."
Senate.
Because of the circumstances of his accession, Claudius took great pains to please the Senate. During regular sessions, the Emperor sat among the Senate body, speaking in turn. When introducing a law, he sat on a bench between the consuls in his position as holder of the power of Tribune (the Emperor could not officially serve as a Tribune of the Plebs since he was a patrician, but this was a power taken by previous rulers, which he continued). He refused to accept all his predecessors' titles (including Imperator) at the beginning of his reign, preferring to earn them in due course. He allowed the Senate to issue its own bronze coinage for the first time since Augustus. He also restored the peaceful Imperial provinces of Macedonia and Achaea as senatorial provinces.
Claudius set about remodeling the Senate into a more efficient, representative body. He chided the senators about their reluctance to debate bills introduced by himself, as noted in the fragments of a surviving speech:
In 47, he assumed the office of "censor" with Lucius Vitellius, which had been allowed to lapse for some time. He struck out the names of many senators and "equites" who no longer met qualifications, but showed respect by allowing them to resign in advance. At the same time, he sought to admit to the senate eligible men from the provinces. The Lyon Tablet preserves his speech on the admittance of Gallic senators, in which he addresses the Senate with reverence but also with criticism for their disdain of these men. He even joked about how the Senate had admitted members from beyond Gallia Narbonensis (Lyons), i.e. himself. He also increased the number of patricians by adding new families to the dwindling number of noble lines. Here he followed the precedent of Lucius Junius Brutus and Julius Caesar.
Nevertheless, many in the Senate remained hostile to Claudius, and many plots were made on his life. This hostility carried over into the historical accounts. As a result, Claudius reduced the Senate's power for the sake of efficiency. The administration of Ostia was turned over to an Imperial procurator after construction of the port. Administration of many of the empire's financial concerns was turned over to Imperial appointees and freedmen. This led to further resentment and suggestions that these same freedmen were ruling the Emperor.
Secretariat and centralization of powers.
Claudius was hardly the first emperor to use freedmen to help with the day-to-day running of the Empire. He has, however, become famous for the extent to which he made use of such men in the administration of the government, a practice forced on him by the centralization of the powers of the "princeps" and by his reluctance to have free-born magistrates serve under him as if they were not peers.
The secretariat was divided into bureaus, with each being placed under the leadership of one freedman. Narcissus was the secretary of correspondence. Pallas became the secretary of the treasury. Callistus became secretary of justice. There was a fourth bureau for miscellaneous issues, which was put under Polybius until his execution for treason. The freedmen could also officially speak for the Emperor, as when Narcissus addressed the troops in Claudius's stead before the conquest of Britain.
Since these were important positions, the senators were aghast at their being placed in the hands of former slaves and "well-known eunuchs". If freedmen had total control of money, letters and law, it seemed it would not be hard for them to manipulate the Emperor. This is exactly the accusation put forth by ancient sources. However, these same sources admit that the freedmen were loyal to Claudius.
He had shown himself to be similarly appreciative of their help, giving them due credit for policies which they advised; but punished them with just force if they showed treacherous inclinations, as was the case of Polybius and Pallas's brother, Felix. There is no evidence that the character of Claudius's policies and edicts changed with the rise and fall of the various freedmen, suggesting that he was firmly in control throughout.
Regardless of the extent of their political power, the freedmen did manage to amass wealth through their positions. Pliny the Elder describes several of them as being richer than Crassus, the richest man of the Republican era.
Expansion of the Empire.
Claudius conducted a census in 48 that found 5,984,072 (adult male) Roman citizens (women, children, slaves, and free adult males without Roman citizenship were not counted), an increase of around a million since the census conducted at Augustus's death. He had helped increase this number through the foundation of Roman colonies that were granted blanket citizenship. These colonies were often made out of existing communities, especially those with elites who could rally the populace to the Roman cause. Several colonies were placed in new provinces or on the border of the Empire to secure Roman holdings as quickly as possible.
Additionally under Claudius, the Empire underwent its first major territorial expansion since the reign of Augustus. The provinces of Thrace, Noricum, Lycia, and Judea were annexed (or put under direct rule) under various circumstances during his term. The annexation of Mauretania, begun under Caligula, was completed after the defeat of rebel forces, as well as the official division of the former client kingdom into two Imperial provinces.
The British Campaign.
The most far-reaching conquest, however, was that of Britannia. In 43, Claudius sent Aulus Plautius with four legions to Britain ("Britannia") after an appeal from an ousted tribal ally. Britain was an attractive target for Rome because of its mines and the potential of slave labor, as well as being a haven for Gallic rebels. Claudius himself traveled to the island after the completion of initial offensives, bringing with him reinforcements and elephants. The Roman "colonia" of "Colonia Claudia Victricensis" was established as the provincial capital of the newly established province of Britannia at Camulodunum, where a large temple was dedicated in his honour.
He left Britain after 16 days, but remained in the provinces for some time. The Senate granted him a triumph for his efforts. Only members of the Imperial family were allowed such honours, but Claudius subsequently lifted this restriction for some of his conquering generals. He was granted the honorific "Britannicus" but only accepted it on behalf of his son, never using the title himself. When the British general Caractacus was captured in 50, Claudius granted him clemency. Caractacus lived out his days on land provided by the Roman state, an unusual end for an enemy commander.
Public works.
Claudius embarked on many public works throughout his reign, both in the capital and in the provinces. He built or finished two aqueducts, the Aqua Claudia, begun by Caligula, and the Aqua Anio Novus. These entered the city in 52 and met at the Porta Maggiore. He also restored a third, the Aqua Virgo.
He paid special attention to transportation. Throughout Italy and the provinces he built roads and canals. Among these was a large canal leading from the Rhine to the sea, as well as a road from Italy to Germany – both begun by his father, Drusus. Closer to Rome, he built a navigable canal on the Tiber, leading to Portus, his new port just north of Ostia. This port was constructed in a semicircle with two moles and a lighthouse at its mouth, reducing flooding in Rome.
The port at Ostia was part of Claudius's solution to the constant grain shortages that occurred in winter, after the Roman shipping season. The other part of his solution was to insure the ships of grain merchants who were willing to risk travelling to Egypt in the off-season. He also granted their sailors special privileges, including citizenship and exemption from the Lex Papia Poppaea, a law that regulated marriage. In addition, he repealed the taxes that Caligula had instituted on food, and further reduced taxes on communities suffering drought or famine.
The last part of Claudius's plan to avoid famine was to increase the amount of arable land in Italy. This was to be achieved by draining the Fucine lake, also making the nearby river navigable year-round. A serious famine is mentioned in the book of Acts as taking place during Claudius' reign, and had been prophesied by a Christian called Agabus while visiting Antioch.
A tunnel was dug through the lake bed, but the plan was a failure. The tunnel was crooked and not large enough to carry the water, which caused it to back up when opened. The resultant flood washed out a large gladiatorial exhibition held to commemorate the opening, causing Claudius to run for his life along with the other spectators. The draining of the lake continued to present a problem well into the Middle Ages. It was finally achieved by the Prince Torlonia in the 19th century, producing a large area of new arable land; he expanded the Claudian tunnel to three times its original size.
Religious reforms.
Claudius, as the author of a treatise on Augustus's religious reforms, thought himself to be in a good position to institute some of his own. He had strong opinions about the proper form for state religion. He refused the request of Alexandrian Greeks to dedicate a temple to his divinity, saying that only gods may choose new gods; he restored lost days to festivals and got rid of many extraneous celebrations added by Caligula. He also re-established old observances and archaic language.
Claudius was concerned with the spread of eastern mysteries within the city and searched for more Roman replacements. He emphasized the Eleusinian Mysteries, which had been practiced by so many during the Republic. He expelled foreign astrologers, and at the same time rehabilitated the old Roman soothsayers (known as haruspices) as a replacement. He was especially hard on Druidism, because of its incompatibility with the Roman state religion and its proselytizing activities.
Judicial and legislative affairs.
Claudius personally judged many of the legal cases tried during his reign. Ancient historians have many complaints about this, stating that his judgments were variable and sometimes did not follow the law. He was also easily swayed. Nevertheless, Claudius paid detailed attention to the operation of the judicial system. He extended the summer court session, as well as the winter term, by shortening the traditional breaks. Claudius also made a law requiring plaintiffs to remain in the city while their cases were pending, as defendants had previously been required to do. These measures had the effect of clearing out the docket. The minimum age for jurors was also raised to 25 to ensure a more experienced jury pool.
Claudius also settled disputes in the provinces. He freed the island of Rhodes from Roman rule for their good faith and exempted Ilium (Troy) from taxes. Early in his reign, the Greeks and Jews of Alexandria each sent him embassies after riots broke out between the two communities. This resulted in the famous "Letter to the Alexandrians", which reaffirmed Jewish rights in the city but forbade them to move in more families en masse. According to Josephus, he then reaffirmed the rights and freedoms of all the Jews in the Empire. However, Claudius also expelled Jews from the city of Rome, following disturbances allegedly instigated by Christians. This expulsion is attested to in the Acts of the Apostles (18:2), and by the Roman historians Suetonius and Cassius Dio, along with the fifth-century Christian author Paulus Orosius.
One of Claudius's investigators discovered that many old Roman citizens based in the city of Tridentum (modern Trento) were not in fact citizens. The Emperor issued a declaration, contained in the "Tabula clesiana", that they would be allowed to hold citizenship from then on, since to strip them of their status would cause major problems. However, in individual cases, Claudius punished the false assumption of citizenship harshly, making it a capital offense. Similarly, any freedmen found to be laying false claim to membership of the Roman equestrian order were to have their property confiscated and be sold back into slavery, in the words of Suetonius, "such as were ungrateful and a cause of complaint to their patrons".
Numerous edicts were issued throughout Claudius's reign. These were on a number of topics, everything from medical advice to moral judgments. A famous medical example is one promoting yew juice as a cure for snakebite. Suetonius wrote that he is even said to have thought of an edict allowing public flatulence for good health. One of the more famous edicts concerned the status of sick slaves. Masters had been abandoning ailing slaves at the temple of Aesculapius on Tiber Island to die instead of providing them with medical assistance and care, and then reclaiming them if they lived. Claudius ruled that slaves who were thus abandoned and recovered after such treatment would be free. Furthermore, masters who chose to kill slaves rather than take care of them were liable to be charged with murder.
Public games and entertainments.
According to Suetonius, Claudius was extraordinarily fond of games. He is said to have risen with the crowd after gladiatorial matches and given unrestrained praise to the fighters. Claudius also presided over many new and original events. Soon after coming into power he instituted games to be held in honor of his father on the latter's birthday; annual games were also held in honor of his accession, and took place at the Praetorian camp where Claudius had first been proclaimed Emperor.
Claudius organized a performance of the Secular Games, marking the 800th anniversary of the founding of Rome. Augustus had performed the same games less than a century prior. Augustus's excuse was that the interval for the games was 110 years, not 100, but his date actually did not qualify under either reasoning. Claudius also presented staged naval battles to mark the attempted draining of the Fucine Lake, as well as many other public games and shows.
At Ostia, in front of a crowd of spectators, Claudius fought an orca which was trapped in the harbour. The event was witnessed by Pliny the Elder:
Claudius also restored and adorned many public venues in Rome. At the Circus Maximus, the turning posts and starting stalls were replaced in marble and embellished, and an embankment was probably added to prevent flooding of the track. Claudius also reinforced or extended the seating rules that reserved front seating at the Circus for senators. He rebuilt Pompey's Theatre after it had been destroyed by fire, organising special fights at the re-dedication, which he observed from a special platform in the orchestra box.
Plots and coup attempts.
Several coup attempts were made during Claudius's reign, resulting in the deaths of many senators. Appius Silanus was executed early in Claudius's reign under questionable circumstances. Shortly after this, a large rebellion was undertaken by the Senator Vinicianus and Scribonianus - governor of Dalmatia - and gained quite a few senatorial supporters. It ultimately failed because of the reluctance of Scribonianus' troops, which led to the suicide of the main conspirators.
Many other senators tried different conspiracies and were condemned. Claudius's son-in-law Pompeius Magnus was executed for his part in a conspiracy with his father Crassus Frugi. Another plot involved the consulars Lusius Saturninus, Cornelius Lupus, and Pompeius Pedo.
In 46, Asinius Gallus, grandson of Asinius Pollio, and Titus Statilius Taurus Corvinus were exiled for a plot hatched with several of Claudius's own freedmen. Valerius Asiaticus was executed without public trial for unknown reasons. Ancient sources say the charge was adultery, and that Claudius was tricked into issuing the punishment. However, Claudius singles out Asiaticus for special damnation in his speech on the Gauls, which dates over a year later, suggesting that the charge must have been much more serious.
Asiaticus had been a claimant to the throne in the chaos following Caligula's death and a co-consul with Titus Statilius Taurus Corvinus. Most of these conspiracies took place before Claudius's term as Censor, and may have induced him to review the Senatorial rolls. The conspiracy of Gaius Silius in the year after his Censorship, 48, is detailed in book 11 of Tacitus' Annals. This section of Tacitus' history narrates the alleged conspiracy of Claudius's third wife, Messalina. Suetonius states that a total of 35 senators and 300 knights were executed for offenses during Claudius's reign.
Marriages and personal life.
Suetonius and the other ancient authors accused Claudius of being dominated by women and wives, and of being a womanizer.
Claudius married four times, after two failed betrothals. The first betrothal was to his distant cousin Aemilia Lepida, but was broken for political reasons. The second was to Livia Medullina Camilla, which ended with Medullina's sudden death on their wedding day.
Plautia Urgulanilla.
Plautia Urgulanilla was the granddaughter of Livia's confidant Urgulania. During their marriage she gave birth to a son, Claudius Drusus. Drusus died of asphyxiation in his early teens, shortly after becoming engaged to Junilla, daughter of Sejanus.
Claudius later divorced Urgulanilla for adultery and on suspicion of murdering her sister-in-law Apronia. When Urgulanilla gave birth after the divorce, Claudius repudiated the baby girl, Claudia, as the father was allegedly one of his own freedmen. Later, this action made him the target of criticism by his enemies.
Aelia Paetina.
Soon after, (possibly in 28) Claudius married Aelia Paetina, a relative of Sejanus, if not Sejanus's adoptive sister. During their marriage, Claudius and Paetina had a daughter, Claudia Antonia. He later divorced her after the marriage became a political liability. One version suggests that it may have been due to emotional and mental abuse by Paetina.
Valeria Messalina.
Some years after divorcing Aelia Paetina, in 38 or early 39, Claudius married Valeria Messalina, who was his first cousin once removed (Claudius's grandmother, Octavia the Younger, was Valeria's great-grandmother on both her mother and father's side) and closely allied with Caligula's circle. Shortly thereafter, she gave birth to a daughter, Claudia Octavia. A son, first named Tiberius Claudius Germanicus, and later known as Britannicus, was born just after Claudius's accession.
This marriage ended in tragedy. The ancient historians allege that Messalina was a nymphomaniac who was regularly unfaithful to Claudius (Tacitus states she went so far as to compete with a prostitute to see who could have more sexual partners in a night) and manipulated his policies to amass wealth. In 48, Messalina married her lover Gaius Silius in a public ceremony while Claudius was at Ostia.
Sources disagree as to whether or not she divorced the Emperor first, and whether the intention was to usurp the throne. Under Roman law, the spouse needed to be informed that he or she had been divorced before a new marriage could take place; the sources state that Claudius was in total ignorance until after the marriage. Scramuzza, in his biography, suggests that Silius may have convinced Messalina that Claudius was doomed, and the union was her only hope of retaining her rank and protecting her children. The historian Tacitus suggests that Claudius's ongoing term as Censor may have prevented him from noticing the affair before it reached such a critical point, after which she was executed.
Agrippina the Younger.
Claudius married once more. Ancient sources tell that his freedmen put forward three candidates, Caligula's third wife Lollia Paulina, Claudius's divorced second wife Aelia Paetina and Claudius's niece Agrippina the Younger. According to Suetonius, Agrippina won out through her feminine wiles. She gradually seized power from Claudius and successfully conspired to eliminate his son's rivals, opening the way for her son to become emperor.
The truth is probably more political. The attempted coup d'état by Silius and Messalina probably made Claudius realize the weakness of his position as a member of the Claudian (but not the Julian) family. This weakness was compounded by the fact that he did not yet have an obvious adult heir, Britannicus being just a boy. Agrippina was one of the few remaining descendants of Augustus, and her son Lucius Domitius Ahenobarbus (the future Nero) was one of the last males of the Imperial family. Coup attempts might rally around the pair and Agrippina was already showing such ambition. It has been suggested that the Senate may have pushed for the marriage, an attempt to end the feud between the Julian and Claudian branches. This feud dated back to Agrippina's mother's actions against Tiberius after the death of her husband Germanicus (Claudius's brother), actions that Tiberius had punished.
Another reason was to bring in Lucius Domitius Ahenobarbus as a candidate for the succession. His prestige as the descendant of Augustus and Germanicus made him popular, and marking him as an heir would have helped the survival of Claudius' regime. In any case, Claudius accepted Agrippina and later adopted the mature Ahenobarbus as his son, renaming him 'Nero Claudius Caesar'.
Nero was married to Claudius's daughter Octavia, made joint heir with the underage Britannicus, and promoted; Augustus had similarly named his grandson Postumus Agrippa and his stepson Tiberius as joint heirs, and Tiberius had named Caligula as his joint heir with his grandson Tiberius Gemellus. Adoption of adults or near adults was an old tradition in Rome when a suitable natural adult heir was unavailable, as was the case during Britannicus's minority. Claudius may have previously looked to adopt one of his sons-in-law to protect his own reign.
Faustus Cornelius Sulla Felix, who was married to Claudius's daughter Claudia Antonia, was only descended from Octavia and Antony on one side – not close enough to the Imperial family to ensure his right to be Emperor (although that did not stop others from making him the object of a coup attempt against Nero a few years later), besides being the half-brother of Valeria Messalina, which told against him. Nero was more popular with the general public as both the grandson of Germanicus and the direct descendant of Augustus.
Affliction and personality.
The historian Suetonius describes the physical manifestations of Claudius's condition. His knees were weak and gave way under him and his head shook. He stammered and his speech was confused. He slobbered and his nose ran when he was excited. The Stoic Seneca states in his "Apocolocyntosis" that Claudius's voice belonged to no land animal, and that his hands were weak as well.
However, he showed no physical deformity, as Suetonius notes that when calm and seated he was a tall, well-built figure of "dignitas". When angered or stressed, his symptoms became worse. Historians agree that this condition improved upon his accession to the throne. Claudius himself claimed that he had exaggerated his ailments to save his life.
Modern assessments of his health have changed several times in the past century. Prior to World War II, infantile paralysis (or polio) was widely accepted as the cause. This is the diagnosis used in Robert Graves's Claudius novels, first published in the 1930s. "The New York Times" wrote in 1934 that Claudius suffered from infantile paralysis (which led to his limp state) and measles (which made him deaf) at seven months of age, among several other ailments. Polio does not explain many of the described symptoms, however, and a more recent theory implicates cerebral palsy as the cause. Tourette syndrome has also been considered a possibility.
As a person, ancient historians described Claudius as generous and lowbrow, a man who sometimes lunched with the plebeians. They also paint him as bloodthirsty and cruel, over-fond of gladiatorial combat and executions, and very quick to anger; Claudius himself acknowledged the latter trait, and apologized publicly for his temper. According to the ancient historians he was also excessively trusting, and easily manipulated by his wives and freedmen, but at the same time they portray him as paranoid and apathetic, dull and easily confused.
Scholarly works and their impact.
Claudius wrote copiously throughout his life. Arnaldo Momigliano states that during the reign of Tiberius, which covers the peak of Claudius's literary career, it became impolitic to speak of republican Rome. The trend among the young historians was either to write about the new empire or about obscure antiquarian topics. Claudius was the rare scholar who covered both.
Besides his history of Augustus' reign that caused him so much grief, his major works included "Tyrrhenika", a twenty-book Etruscan history, and "Carchedonica", an eight-volume history of Carthage, as well as an Etruscan dictionary. He also wrote a book on dice-playing. Despite the general avoidance of the topic of the Republican era, he penned a defense of Cicero against the charges of Asinius Gallus. Modern historians have used this to determine the nature of his politics and of the aborted chapters of his civil war history.
He proposed a reform of the Latin alphabet by the addition of three new letters; he officially instituted the change during his censorship but they did not survive his reign. Claudius also tried to revive the old custom of putting dots between successive words (Classical Latin was written with no spacing). Finally, he wrote an eight-volume autobiography that Suetonius describes as lacking in taste. Claudius (like most of the members of his dynasty) harshly criticized his predecessors and relatives in surviving speeches.
None of the works survived, but other sources' reference to him provide material for the surviving histories of the Julio-Claudian dynasty. Suetonius quotes Claudius's autobiography once and must have used it as a source numerous times. Tacitus uses Claudius's arguments for the orthographical innovations mentioned above and may have used him for some of the more antiquarian passages in his annals. Claudius is the source for numerous passages of Pliny's "Natural History".
The influence of historical study on Claudius is obvious. In his speech on Gallic senators, he uses a version of the founding of Rome identical to that of Livy, his tutor in adolescence. Many of the public works instituted in his reign were based on plans first suggested by Julius Caesar. Levick believes this emulation of Caesar may have spread to all aspects of his policies.
His censorship seems to have been based on those of his ancestors, particularly Appius Claudius Caecus, and he used the office to put into place many policies based on those of Republican times. This is when many of his religious reforms took effect; also, his building efforts greatly increased during his tenure. In fact, his assumption of the office of Censor may have been motivated by a desire to see his academic labors bear fruit. For example, he believed (as most Romans did) that Caecus had used the power of the censorship office to introduce the letter "R" and so used his own term to introduce his new letters.
Death.
Ancient historians agree that Claudius was murdered by poison – possibly contained in mushrooms or on a feather (ostensibly put down his throat to induce vomiting) – and died in the early hours of 13 October 54.
Nearly all implicate his final and powerful wife, Agrippina, as the instigator. Agrippina and Claudius had become more combative in the months leading up to his death. This carried on to the point where Claudius openly lamented his bad wives, and began to comment on Britannicus' approaching manhood with an eye towards restoring his status within the imperial family. Agrippina had motive in ensuring the succession of Nero before Britannicus could gain power.
Some implicate either his taster Halotus, his doctor Xenophon, or the infamous poisoner Locusta as the administrator of the fatal substance. Some say he died after prolonged suffering following a single dose at dinner, and some have him recovering only to be poisoned again. Among his contemporary sources, Seneca the Younger ascribed the emperor's death to natural causes, while Josephus only spoke of rumors of his poisoning.
Some historians have cast doubt on whether Claudius was murdered or merely died from illness or old age. Evidence against his murder includes his serious illnesses in his last years, his unhealthy lifestyle and the fact that his taster Halotus continued to serve in the same position under Nero. Claudius had been so ill the year before that Nero vowed games for his recovery, and the year of 54 seems to have been such an unhealthy one that one sitting member of each magistracy died within the span of a few months. He may even have died from eating a naturally poisonous mushroom, possibly "Amanita muscaria". On the other hand, some modern scholars claim the near universality of the accusations in ancient texts lends credence to the crime. Claudius's ashes were interred in the Mausoleum of Augustus on 24 October 54, after a funeral similar to that of his great-uncle Augustus 40 years earlier.
Legacy.
Divine honours.
Already, while alive, he received the widespread private worship of a living "princeps" and was worshipped in Britannia in his own temple in Camulodunum.
Claudius was deified by Nero and the Senate almost immediately.
Views of the new regime.
Agrippina had sent Narcissus away shortly before Claudius's death, and now had the freedman murdered.
The last act of this secretary of letters was to burn all of Claudius's correspondence – most likely so it could not be used against him and others in an already hostile new regime. Thus Claudius's private words about his own policies and motives were lost to history. Just as Claudius had criticized his predecessors in official edicts, Nero often criticized the deceased Emperor, and many Claudian laws and edicts were disregarded under the reasoning that he was too stupid and senile to have meant them.
Seneca's Apocolocyntosis mocks the deification of Claudius and reinforces the view of Claudius as an unpleasant fool; this remained the official view for the duration of Nero's reign. Eventually Nero stopped referring to his deified adoptive father at all. Claudius's temple was left unfinished after only some of the foundation had been laid down. Eventually the site was overtaken by Nero's Golden House.
Flavian and later perspectives.
The Flavians, who had risen to prominence under Claudius, took a different tack. They needed to shore up their legitimacy, but also justify the fall of the Julio-Claudians. They reached back to Claudius in contrast with Nero, to show that they were associated with a good regime. Commemorative coins were issued of Claudius and his son Britannicus, who had been a friend of Emperor Titus (Titus was born in 39, Britannicus was born in 41). When Nero's Golden House was burned, the Temple of Claudius was finally completed on the Caelian Hill.
However, as the Flavians became established, they needed to emphasize their own credentials more, and their references to Claudius ceased. Instead, he was lumped with the other emperors of the fallen dynasty. His state-cult in Rome probably continued until the abolition of all cults of dead Emperors by Maximinus Thrax in 237–238. The "Feriale Duranum", probably identical to the festival calendars of every regular army unit, assigns him a sacrifice of a steer on his birthday, the Kalends of August. And such commemoration (and consequent feasting) probably continued until the Christianization and disintegration of the army in the late 4th century.
Views of ancient historians.
The ancient historians Tacitus, Suetonius (in "The Twelve Caesars"), and Cassius Dio all wrote after the last of the Flavians had gone. All three were senators or "equites". They took the side of the Senate in most conflicts with the Princeps, invariably viewing him as being in the wrong. This resulted in biases, both conscious and unconscious. Suetonius lost access to the official archives shortly after beginning his work. He was forced to rely on second-hand accounts when it came to Claudius (with the exception of Augustus's letters, which had been gathered earlier). Suetonius painted Claudius as a ridiculous figure, belittling many of his acts and crediting his good works to his retinue.
Tacitus wrote a narrative for his fellow senators and fitted each of the emperors into a simple mold of his choosing. He wrote of Claudius as a passive pawn and an idiot in affairs relating to the palace and public life. During his Censorship of 47–48 Tacitus allows the reader a glimpse of a Claudius who is more statesmanlike (XI.23–25), but it is a mere glimpse. Tacitus is usually held to have 'hidden' his use of Claudius's writings and to have omitted Claudius's character from his works. Even his version of Claudius's Lyons tablet speech is edited to be devoid of the emperor's personality. Dio was less biased, but seems to have used Suetonius and Tacitus as sources. Thus, the conception of Claudius as a weak fool, controlled by those he supposedly ruled, was preserved for the ages.
As time passed, Claudius was mostly forgotten outside of the historians' accounts. His books were lost first, as their antiquarian subjects became unfashionable. In the 2nd century, Pertinax, who shared his birthday, became emperor, overshadowing commemoration of Claudius.
In modern media.
In literature, Claudius and his contemporaries appear in the historical novel "The Roman" by Mika Waltari. Canadian-born science fiction writer A. E. van Vogt reimagined Robert Graves's Claudius story, in his two novels, "Empire of the Atom" and "The Wizard of Linn".
The historical novel "Chariot of the Soul" by Linda Proud features Claudius as host and mentor of the young Togidubnus, son of King Verica of the Atrebates, during his ten-year stay in Rome. When Togidubnus returns to Britain in advance of the Roman army, it is with a mission given to him by Claudius.
Cantor set
In mathematics, the Cantor set is a set of points lying on a single line segment that has a number of unintuitive properties. It was discovered in 1874 by Henry John Stephen Smith and mentioned by German mathematician Georg Cantor in 1883.
Through consideration of this set, Cantor and others helped lay the foundations of modern point-set topology. The most common construction is the Cantor ternary set, built by removing the middle third of a line segment and then repeating the process with the remaining shorter segments. Cantor mentioned this ternary construction only in passing, as an example of a perfect set that is nowhere dense.
More generally, in topology, a Cantor space is a topological space homeomorphic to the Cantor ternary set (equipped with its subspace topology). The Cantor set is naturally homeomorphic to the countable product formula_1 of the discrete two-point space formula_2. By a theorem of L. E. J. Brouwer, this is equivalent to being perfect, nonempty, compact, metrizable and zero-dimensional.
Construction and formula of the ternary set.
The Cantor ternary set formula_3 is created by iteratively deleting the "open" middle third from a set of line segments. One starts by deleting the open middle third formula_4 from the interval formula_5, leaving two line segments: formula_6. Next, the open middle third of each of these remaining segments is deleted, leaving four line segments: formula_7.
The Cantor ternary set contains all points in the interval formula_8 that are not deleted at any step in this infinite process. The same construction can be described recursively by setting
formula_9
and
formula_10
for formula_11, so that
formula_12 formula_13 formula_14 for any formula_15.
The first six steps of this process are illustrated below.
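As a numerical illustration of the first few steps (this sketch and the helper name "cantor_intervals" are ours, not part of the original construction), the following Python code lists the closed intervals remaining after each deletion, making the doubling of segments and the shrinking lengths explicit.

```python
from fractions import Fraction

def cantor_intervals(steps):
    """Return the list of closed intervals remaining after `steps` deletions."""
    intervals = [(Fraction(0), Fraction(1))]          # C_0 = [0, 1]
    for _ in range(steps):
        next_intervals = []
        for a, b in intervals:
            third = (b - a) / 3
            # Keep the two outer closed thirds; the open middle third is removed.
            next_intervals.append((a, a + third))
            next_intervals.append((b - third, b))
        intervals = next_intervals
    return intervals

for n in range(4):
    ivs = cantor_intervals(n)
    print(n, len(ivs), [(str(a), str(b)) for a, b in ivs])
# Step n leaves 2**n intervals, each of length 3**(-n).
```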
Using the idea of self-similar transformations, formula_16, formula_17 and formula_18, the explicit closed formulas for the Cantor set are
formula_19
where every middle third is removed as the open interval formula_20 from the closed interval formula_21 surrounding it, or
formula_22
where the middle third formula_23 of the foregoing closed interval formula_24 is removed by intersecting with formula_25
This process of removing middle thirds is a simple example of a finite subdivision rule. The complement of the Cantor ternary set is an example of a fractal string.
In arithmetical terms, the Cantor set consists of all real numbers of the unit interval formula_8 that do not require the digit 1 in order to be expressed as a ternary (base 3) fraction. As the above diagram illustrates, each point in the Cantor set is uniquely located by a path through an infinitely deep binary tree, where the path turns left or right at each level according to which side of a deleted segment the point lies on. Representing each left turn with 0 and each right turn with 2 yields the ternary fraction for a point. "Requiring" the digit 1 is critical: formula_27, which is included in the Cantor set, can be written as formula_28, but also as formula_29, which contains no 1 digits and corresponds to an initial left turn followed by infinitely many right turns in the binary tree.
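The digit criterion suggests a simple membership test for exact rational inputs: repeatedly read off the leading ternary digit and reject the number only if a digit 1 appears that is not the final nonzero digit. The Python sketch below (the name "in_cantor" and the depth cutoff are ours) is an illustration of this criterion, not a decision procedure for arbitrary reals.

```python
from fractions import Fraction

def in_cantor(x, depth=60):
    """Check, up to `depth` ternary digits, whether the rational x admits a 1-free expansion.
    Endpoints such as 1/3 = 0.0222..._3 are accepted: their canonical expansion ends in a
    single final digit 1, which can be rewritten as recurring 2s."""
    x = Fraction(x)
    if not (0 <= x <= 1):
        return False
    if x == 1:
        return True                     # 1 = 0.222..._3
    for _ in range(depth):
        x *= 3
        digit = int(x)                  # leading ternary digit
        x -= digit
        if digit == 1:
            # Harmless only if this 1 is the last nonzero digit (e.g. 1/3 = 0.1_3, 7/9 = 0.21_3).
            return x == 0
        if x == 0:
            return True                 # expansion terminated using only 0s and 2s
    return True                         # no forbidden digit 1 seen up to the chosen depth

print(in_cantor(Fraction(1, 4)))        # True: 0.020202..._3
print(in_cantor(Fraction(1, 3)))        # True: endpoint, 0.0222..._3
print(in_cantor(Fraction(1, 2)))        # False: 0.111..._3 lies in a removed middle third
```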
Mandelbrot's construction by "curdling".
In "The Fractal Geometry of Nature", mathematician Benoit Mandelbrot provides a whimsical thought experiment to assist non-mathematical readers in imagining the construction of formula_3. His narrative begins with imagining a bar, perhaps of lightweight metal, in which the bar's matter "curdles" by iteratively shifting towards its extremities. As the bar's segments become smaller, they become thin, dense slugs that eventually grow too small and faint to see.CURDLING: The construction of the Cantor bar results from the process I call curdling. It begins with a round bar. It is best to think of it as having a very low density. Then matter "curdles" out of this bar's middle third into the end thirds, so that the positions of the latter remain unchanged. Next matter curdles out of the middle third of each end third into its end thirds, and so on ad infinitum until one is left with an infinitely large number of infinitely thin slugs of infinitely high density. These slugs are spaced along the line in the very specific fashion induced by the generating process. In this illustration, curdling (which eventually requires hammering!) stops when both the printer's press and our eye cease to follow; the last line is indistinguishable from the last but one: each of its ultimate parts is seen as a gray slug rather than two parallel black slugs.
Composition.
Since the Cantor set is defined as the set of points not excluded, the proportion (i.e., measure) of the unit interval remaining can be found by computing the total length removed. This total is the geometric progression
formula_31
So that the proportion left is formula_32.
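A quick numerical check of this series (a sketch of ours, not from the source) shows the removed length approaching 1, so the remaining proportion, which equals (2/3) to the power n after n steps, tends to 0.

```python
from fractions import Fraction

removed = Fraction(0)
for n in range(1, 11):
    # At step n, 2**(n-1) open intervals of length 3**(-n) are removed.
    removed += Fraction(2, 3) ** (n - 1) * Fraction(1, 3)
    remaining = 1 - removed              # equals (2/3)**n
    print(n, float(removed), float(remaining))
# removed -> 1 and remaining -> 0, so the Cantor set has Lebesgue measure zero.
```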
This calculation suggests that the Cantor set cannot contain any interval of non-zero length. It may seem surprising that there should be anything left—after all, the sum of the lengths of the removed intervals is equal to the length of the original interval. However, a closer look at the process reveals that there must be something left, since removing the "middle third" of each interval involved removing open sets (sets that do not include their endpoints). So removing the line segment formula_4 from the original interval formula_34 leaves behind the points 1/3 and 2/3. Subsequent steps do not remove these (or other) endpoints, since the intervals removed are always internal to the intervals remaining. So the Cantor set is not empty, and in fact contains an uncountably infinite number of points (as follows from the above description in terms of paths in an infinite binary tree).
It may appear that "only" the endpoints of the construction segments are left, but that is not the case either. The number , for example, has the unique ternary form 0.020202... = . It is in the bottom third, and the top third of that third, and the bottom third of that top third, and so on. Since it is never in one of the middle segments, it is never removed. Yet it is also not an endpoint of any middle segment, because it is not a multiple of any power of .
All endpoints of segments are "terminating" ternary fractions and are contained in the set
formula_35
which is a countably infinite set.
As to cardinality, almost all elements of the Cantor set are not endpoints of intervals, nor rational points like 1/4. The whole Cantor set is in fact not countable.
Properties.
Cardinality.
It can be shown that there are as many points left behind in this process as there were to begin with, and that therefore, the Cantor set is uncountable. To see this, we show that there is a function "f" from the Cantor set formula_3 to the closed interval formula_34 that is surjective (i.e. "f" maps from formula_3 onto formula_34) so that the cardinality of formula_3 is no less than that of formula_34. Since formula_3 is a subset of formula_34, its cardinality is also no greater, so the two cardinalities must in fact be equal, by the Cantor–Bernstein–Schröder theorem.
To construct this function, consider the points in the formula_34 interval in terms of base 3 (or ternary) notation. Recall that proper ternary fractions (more precisely, the elements of formula_45) admit more than one representation in this notation; for example 1/3 can be written as 0.13 but also as 0.0222...3, and 2/3 can be written as 0.23 but also as 0.1222...3.
When we remove the middle third, this contains the numbers with ternary numerals of the form 0.1xxxxx...3 where xxxxx...3 is strictly between 00000...3 and 22222...3. So the numbers remaining after the first step consist of the numbers of the form 0.0xxxxx...3 (the closed interval [0, 1/3]) and the numbers of the form 0.2xxxxx...3 (the closed interval [2/3, 1]).
This can be summarized by saying that those numbers with a ternary representation such that the first digit after the radix point is not 1 are the ones remaining after the first step.
The second step removes numbers of the form 0.01xxxx...3 and 0.21xxxx...3, and (with appropriate care for the endpoints) it can be concluded that the remaining numbers are those with a ternary numeral where neither of the first "two" digits is 1.
Continuing in this way, for a number not to be excluded at step "n", it must have a ternary representation whose "n"th digit is not 1. For a number to be in the Cantor set, it must not be excluded at any step; it must admit a numeral representation consisting entirely of 0s and 2s.
It is worth emphasizing that numbers like 1, 1/3 = 0.13 and 7/9 = 0.213 are in the Cantor set, as they have ternary numerals consisting entirely of 0s and 2s: 1 = 0.222...3, 1/3 = 0.0222...3 and 7/9 = 0.20222...3.
All the latter numbers are "endpoints", and these examples are right limit points of formula_3. The same is true for the left limit points of formula_3, e.g. = 0.1222...3 = 3 = 3 and = 0.21222...3 = 3 = 3. All these endpoints are "proper ternary" fractions (elements of formula_48) of the form , where denominator "q" is a power of 3 when the fraction is in its irreducible form. The ternary representation of these fractions terminates (i.e., is finite) or — recall from above that proper ternary fractions each have 2 representations — is infinite and "ends" in either infinitely many recurring 0s or infinitely many recurring 2s. Such a fraction is a left limit point of formula_3 if its ternary representation contains no 1's and "ends" in infinitely many recurring 0s. Similarly, a proper ternary fraction is a right limit point of formula_3 if it again its ternary expansion contains no 1's and "ends" in infinitely many recurring 2s.
This set of endpoints is dense in formula_3 (but not dense in formula_34) and makes up a countably infinite set. The numbers in formula_3 which are "not" endpoints also have only 0s and 2s in their ternary representation, but they cannot end in an infinite repetition of the digit 0, nor of the digit 2, because then it would be an endpoint.
The function from formula_3 to formula_34 is defined by taking the ternary numerals that do consist entirely of 0s and 2s, replacing all the 2s by 1s, and interpreting the sequence as a binary representation of a real number. In a formula,
formula_56 where formula_57
For any number "y" in formula_34, its binary representation can be translated into a ternary representation of a number "x" in formula_3 by replacing all the 1s by 2s. With this, "f"("x") = "y" so that "y" is in the range of "f". For instance if "y" = = 0.100110011001...2 = , we write "x" = = 0.200220022002...3 = . Consequently, "f" is surjective. However, "f" is "not" injective — the values for which "f"("x") coincides are those at opposing ends of one of the "middle thirds" removed. For instance, take
1/3 = 0.0222...3 (which is a right limit point of formula_3 and a left limit point of the middle third [1/3, 2/3]) and
2/3 = 0.2000...3 (which is a left limit point of formula_3 and a right limit point of the middle third [1/3, 2/3])
so
formula_62
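The map "f" can be sketched directly on ternary digit strings: replace every digit 2 by 1 and read the result in base 2. The Python sketch below (the helper name "f_cantor" and the digit-list input are ours) also shows why the two endpoints of a removed middle third collide.

```python
def f_cantor(ternary_digits):
    """Map a 1-free ternary expansion 0.d1 d2 d3 ... (digits 0 and 2 only) to the real
    number whose binary expansion is obtained by replacing each 2 with 1."""
    value = 0.0
    for k, d in enumerate(ternary_digits, start=1):
        assert d in (0, 2)
        value += (d // 2) / 2 ** k          # ternary digit 2 becomes binary digit 1
    return value

# 1/3 = 0.0222..._3 and 2/3 = 0.2000..._3 are both sent (up to truncation) to 1/2,
# so f maps the Cantor set onto [0, 1] but is not injective.
print(f_cantor([0] + [2] * 50))             # approximately 0.5
print(f_cantor([2] + [0] * 50))             # exactly 0.5
```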
Thus there are as many points in the Cantor set as there are in the interval formula_34 (which has the uncountable cardinality of the continuum). However, the set of endpoints of the removed intervals is countable, so there must be uncountably many numbers in the Cantor set which are not interval endpoints. As noted above, one example of such a number is 1/4, which can be written as 0.020202...3 in ternary notation. In fact, given any formula_64, there exist formula_65 such that formula_66. This was first demonstrated by Steinhaus in 1917, who proved, via a geometric argument, the equivalent assertion that formula_67 for every formula_64. Since this construction provides an injection from formula_69 to formula_70, we have formula_71 as an immediate corollary. Assuming that formula_72 for any infinite set formula_73 (a statement shown to be equivalent to the axiom of choice by Tarski), this provides another demonstration that formula_74.
The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers which are dense in every interval.
It has been conjectured that all algebraic irrational numbers are normal. Since members of the Cantor set are not normal in base 3, this would imply that all members of the Cantor set are either rational or transcendental.
Self-similarity.
The Cantor set is the prototype of a fractal. It is self-similar, because it is equal to two copies of itself, if each copy is shrunk by a factor of 3 and translated. More precisely, the Cantor set is equal to the union of two functions, the left and right self-similarity transformations of itself, formula_75 and formula_17, which leave the Cantor set invariant up to homeomorphism: formula_77
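A minimal sketch of this self-similarity (helper names are ours) applies the two contractions, x mapped to x/3 and x mapped to x/3 + 2/3, repeatedly to the endpoints 0 and 1; after each round the images are exactly the endpoints of the surviving intervals, all of which lie in the Cantor set.

```python
from fractions import Fraction

left  = lambda x: x / 3                       # T_L(x) = x/3
right = lambda x: x / 3 + Fraction(2, 3)      # T_R(x) = x/3 + 2/3

points = {Fraction(0), Fraction(1)}
for _ in range(3):
    # One round of the iterated function system: C = T_L(C) union T_R(C).
    points = {t(p) for p in points for t in (left, right)}

print(sorted(float(p) for p in points))
# Round n produces the 2**(n+1) endpoints of the intervals remaining after n deletion steps.
```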
Repeated iteration of formula_78 and formula_79 can be visualized as an infinite binary tree. That is, at each node of the tree, one may consider the subtree to the left or to the right. Taking the set formula_80 together with function composition forms a monoid, the dyadic monoid.
The automorphisms of the binary tree are its hyperbolic rotations, and are given by the modular group. Thus, the Cantor set is a homogeneous space in the sense that for any two points formula_81 and formula_82 in the Cantor set formula_3, there exists a homeomorphism formula_84 with formula_85. An explicit construction of formula_86 can be described more easily if we see the Cantor set as a product space of countably many copies of the discrete space formula_87. Then the map formula_88 defined by formula_89 is an involutive homeomorphism exchanging formula_81 and formula_82.
Topological and analytical properties.
Although "the" Cantor set typically refers to the original, middle-thirds Cantor set described above, topologists often talk about "a" Cantor set, which means any topological space that is homeomorphic (topologically equivalent) to it.
As the above summation argument shows, the Cantor set is uncountable but has Lebesgue measure 0. Since the Cantor set is the complement of a union of open sets, it itself is a closed subset of the reals, and therefore a complete metric space. Since it is also totally bounded, the Heine–Borel theorem says that it must be compact.
For any point in the Cantor set and any arbitrarily small neighborhood of the point, there is some other number with a ternary numeral of only 0s and 2s, as well as numbers whose ternary numerals contain 1s. Hence, every point in the Cantor set is an accumulation point (also called a cluster point or limit point) of the Cantor set, but none is an interior point. A closed set in which every point is an accumulation point is also called a perfect set in topology, while a closed subset of the interval with no interior points is nowhere dense in the interval.
Every point of the Cantor set is also an accumulation point of the complement of the Cantor set.
For any two points in the Cantor set, there will be some ternary digit where they differ — one will have 0 and the other 2. By splitting the Cantor set into "halves" depending on the value of this digit, one obtains a partition of the Cantor set into two closed sets that separate the original two points. In the relative topology on the Cantor set, the points have been separated by a clopen set. Consequently, the Cantor set is totally disconnected. As a compact totally disconnected Hausdorff space, the Cantor set is an example of a Stone space.
As a topological space, the Cantor set is naturally homeomorphic to the product of countably many copies of the space formula_92, where each copy carries the discrete topology. This is the space of all sequences in two digits
formula_93
which can also be identified with the set of 2-adic integers. The basis for the open sets of the product topology are cylinder sets; the homeomorphism maps these to the subspace topology that the Cantor set inherits from the natural topology on the real line. This characterization of the Cantor space as a product of compact spaces gives a second proof that Cantor space is compact, via Tychonoff's theorem.
From the above characterization, the Cantor set is homeomorphic to the "p"-adic integers, and, if one point is removed from it, to the "p"-adic numbers.
The Cantor set is a subset of the reals, which are a metric space with respect to the ordinary distance metric; therefore the Cantor set itself is a metric space, by using that same metric. Alternatively, one can use the "p"-adic metric on formula_94: given two sequences formula_95, the distance between them is formula_96, where formula_97 is the smallest index such that formula_98; if there is no such index, then the two sequences are the same, and one defines the distance to be zero. These two metrics generate the same topology on the Cantor set.
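As an illustration of this sequence metric (a sketch of ours; the function name is illustrative), the distance between two digit sequences over {0, 2} is 2 to the power minus n, where n is the first index at which they differ.

```python
def sequence_distance(x, y):
    """Metric on sequences over {0, 2}: 2**(-n) for the first differing index n
    (indices starting at 1), and 0 if the sequences agree everywhere."""
    for n, (a, b) in enumerate(zip(x, y), start=1):
        if a != b:
            return 2.0 ** (-n)
    return 0.0

a = [0, 2, 0, 2, 0, 2]
b = [0, 2, 2, 0, 0, 2]
print(sequence_distance(a, b))   # the sequences first differ at index 3, so the distance is 0.125
```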
We have seen above that the Cantor set is a totally disconnected perfect compact metric space. Indeed, in a sense it is the only one: every nonempty totally disconnected perfect compact metric space is homeomorphic to the Cantor set. See Cantor space for more on spaces homeomorphic to the Cantor set.
The Cantor set is sometimes regarded as "universal" in the category of compact metric spaces, since any compact metric space is a continuous image of the Cantor set; however this construction is not unique and so the Cantor set is not universal in the precise categorical sense. The "universal" property has important applications in functional analysis, where it is sometimes known as the "representation theorem for compact metric spaces".
For any integer "q" ≥ 2, the topology on the group G = Z"q"ω (the countable direct sum) is discrete. Although the Pontrjagin dual Γ is also Z"q"ω, the topology of Γ is compact. One can see that Γ is totally disconnected and perfect - thus it is homeomorphic to the Cantor set. It is easiest to write out the homeomorphism explicitly in the case "q" = 2. (See Rudin 1962 p 40.)
Measure and probability.
The Cantor set can be seen as the compact group of binary sequences, and as such, it is endowed with a natural Haar measure. When normalized so that the measure of the set is 1, it is a model of an infinite sequence of coin tosses. Furthermore, one can show that the usual Lebesgue measure on the interval is an image of the Haar measure on the Cantor set, while the natural injection into the ternary set is a canonical example of a singular measure. It can also be shown that the Haar measure is an image of any probability, making the Cantor set a universal probability space in some ways.
In Lebesgue measure theory, the Cantor set is an example of a set which is uncountable and has zero measure. In contrast, the Cantor set has a Hausdorff measure of formula_99 in its dimension of formula_100.
Cantor numbers.
If we define a Cantor number as a member of the Cantor set, then every real number in [0, 2] is the sum of two Cantor numbers (the Steinhaus result noted above), and between any two Cantor numbers there is a number that is not a Cantor number (since the Cantor set contains no interval).
Descriptive set theory.
The Cantor set is a meagre set (or a set of first category) as a subset of formula_34 (although not as a subset of itself, since it is a Baire space). The Cantor set thus demonstrates that notions of "size" in terms of cardinality, measure, and (Baire) category need not coincide. Like the set formula_103, the Cantor set formula_3 is "small" in the sense that it is a null set (a set of measure zero) and it is a meagre subset of formula_34. However, unlike formula_103, which is countable and has a "small" cardinality, formula_107, the cardinality of formula_3 is the same as that of formula_34, the continuum formula_110, and is "large" in the sense of cardinality. In fact, it is also possible to construct a subset of formula_34 that is meagre but of positive measure and a subset that is non-meagre but of measure zero: By taking the countable union of "fat" Cantor sets formula_112 of measure formula_113 (see Smith–Volterra–Cantor set below for the construction), we obtain a set formula_114which has a positive measure (equal to 1) but is meagre in [0,1], since each formula_112 is nowhere dense. Then consider the set formula_116. Since formula_117, formula_118 cannot be meagre, but since formula_119, formula_118 must have measure zero.
Variants.
Smith–Volterra–Cantor set.
Instead of repeatedly removing the middle third of every piece as in the Cantor set, we could also keep removing any other fixed percentage (other than 0% and 100%) from the middle. In the case where the middle 8/10 of the interval is removed, we get a remarkably accessible case — the set consists of all numbers in [0,1] that can be written as a decimal consisting entirely of 0s and 9s. If a fixed percentage is removed at each stage, then the limiting set will have measure zero, since the length of the remainder formula_121 as formula_122 for any formula_123 such that formula_124.
On the other hand, "fat Cantor sets" of positive measure can be generated by removal of smaller fractions of the middle of the segment in each iteration. Thus, one can construct sets homeomorphic to the Cantor set that have positive Lebesgue measure while still being nowhere dense. If an interval of length formula_125 (formula_126) is removed from the middle of each segment at the "n"th iteration, then the total length removed is formula_127, and the limiting set will have a Lebesgue measure of formula_128. Thus, in a sense, the middle-thirds Cantor set is a limiting case with formula_129. If formula_130, then the remainder will have positive measure with formula_131. The case formula_132 is known as the Smith–Volterra–Cantor set, which has a Lebesgue measure of formula_133.
Cantor dust.
Cantor dust is a multi-dimensional version of the Cantor set. It can be formed by taking a finite Cartesian product of the Cantor set with itself, making it a Cantor space. Like the Cantor set, Cantor dust has zero measure.
A different 2D analogue of the Cantor set is the Sierpinski carpet, where a square is divided up into nine smaller squares, and the middle one removed. The remaining squares are then further divided into nine each and the middle removed, and so on ad infinitum. One 3D analogue of this is the Menger sponge.
Historical remarks.
Cantor introduced what we call today the Cantor ternary set formula_134 as an example "of a perfect point-set, which is not everywhere-dense in any interval, however small." Cantor described formula_134 in terms of ternary expansions, as "the set of all real numbers given by the formula: formula_136 where the coefficients formula_137 arbitrarily take the two values 0 and 2, and the series can consist of a finite number or an infinite number of elements."
A topological space formula_138 is perfect if all its points are limit points or, equivalently, if it coincides with its derived set formula_139. Subsets of the real line, like formula_134, can be seen as topological spaces under the induced subspace topology.
Cantor was led to the study of derived sets by his results on uniqueness of trigonometric series. The latter did much to set him on the course for developing an abstract, general theory of infinite sets.
Benoit Mandelbrot wrote much on Cantor dusts and their relation to natural fractals and statistical physics. He further reflected on the puzzling or even upsetting nature of such structures to those in the mathematics and physics community. In "The Fractal Geometry of Nature", he described how "When I started on this topic in 1962, everyone was agreeing that Cantor dusts are at least as monstrous as the Koch and Peano curves," and added that "every self-respecting physicist was automatically turned off by a mention of Cantor, ready to run a mile from anyone claiming formula_134 to be interesting in science."
Cardinal number
In mathematics, a cardinal number, or cardinal for short, is what is commonly called the number of elements of a set. In the case of a finite set, its cardinal number, or cardinality, is therefore a natural number. For dealing with the case of infinite sets, the infinite cardinal numbers have been introduced, which are often denoted with the Hebrew letter formula_1 (aleph) marked with a subscript indicating their rank among the infinite cardinals.
Cardinality is defined in terms of bijective functions. Two sets have the same cardinality if, and only if, there is a one-to-one correspondence (bijection) between the elements of the two sets. In the case of finite sets, this agrees with the intuitive notion of number of elements. In the case of infinite sets, the behavior is more complex. A fundamental theorem due to Georg Cantor shows that it is possible for two infinite sets to have different cardinalities, and in particular the cardinality of the set of real numbers is greater than the cardinality of the set of natural numbers. It is also possible for a proper subset of an infinite set to have the same cardinality as the original set—something that cannot happen with proper subsets of finite sets.
There is a transfinite sequence of cardinal numbers:
formula_2
This sequence starts with the natural numbers including zero (finite cardinals), which are followed by the aleph numbers. The aleph numbers are indexed by ordinal numbers. If the axiom of choice is true, this transfinite sequence includes every cardinal number. If the axiom of choice is not true, there are infinite cardinals that are not aleph numbers.
Cardinality is studied for its own sake as part of set theory. It is also a tool used in branches of mathematics including model theory, combinatorics, abstract algebra and mathematical analysis. In category theory, the cardinal numbers form a skeleton of the category of sets.
History.
The notion of cardinality, as now understood, was formulated by Georg Cantor, the originator of set theory, in 1874–1884. Cardinality can be used to compare an aspect of finite sets. For example, the sets {1,2,3} and {4,5,6} are not "equal", but have the "same cardinality", namely three. This is established by the existence of a bijection (i.e., a one-to-one correspondence) between the two sets, such as the correspondence {1→4, 2→5, 3→6}.
Cantor applied his concept of bijection to infinite sets (for example the set of natural numbers N = {0, 1, 2, 3, ...}). Thus, he called all sets having a bijection with N "denumerable (countably infinite) sets", which all share the same cardinal number. This cardinal number is called formula_3, aleph-null. He called the cardinal numbers of infinite sets transfinite cardinal numbers.
Cantor proved that any unbounded subset of N has the same cardinality as N, even though this might appear to run contrary to intuition. He also proved that the set of all ordered pairs of natural numbers is denumerable; this implies that the set of all rational numbers is also denumerable, since every rational can be represented by a pair of integers. He later proved that the set of all real algebraic numbers is also denumerable. Each real algebraic number "z" may be encoded as a finite sequence of integers, which are the coefficients in the polynomial equation of which it is a solution, i.e. the ordered n-tuple ("a"0, "a"1, ..., "an"), "ai" ∈ Z together with a pair of rationals ("b"0, "b"1) such that "z" is the unique root of the polynomial with coefficients ("a"0, "a"1, ..., "an") that lies in the interval ("b"0, "b"1).
In his 1874 paper "On a Property of the Collection of All Real Algebraic Numbers", Cantor proved that there exist higher-order cardinal numbers, by showing that the set of real numbers has cardinality greater than that of N. His proof used an argument with nested intervals, but in an 1891 paper, he proved the same result using his ingenious and much simpler diagonal argument. The new cardinal number of the set of real numbers is called the cardinality of the continuum and Cantor used the symbol formula_4 for it.
Cantor also developed a large portion of the general theory of cardinal numbers; he proved that there is a smallest transfinite cardinal number (formula_3, aleph-null), and that for every cardinal number there is a next-larger cardinal
formula_6
His continuum hypothesis is the proposition that the cardinality formula_4 of the set of real numbers is the same as formula_8. This hypothesis is independent of the standard axioms of mathematical set theory, that is, it can neither be proved nor disproved from them. This was shown in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.
Motivation.
In informal use, a cardinal number is what is normally referred to as a "counting number", provided that 0 is included: 0, 1, 2, ... They may be identified with the natural numbers beginning with 0. The counting numbers are exactly what can be defined formally as the finite cardinal numbers. Infinite cardinals only occur in higher-level mathematics and logic.
More formally, a non-zero number can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. For finite sets and sequences it is easy to see that these two notions coincide, since for every number describing a position in a sequence we can construct a set that has exactly the right size. For example, 3 describes the position of 'c' in the sequence <'a','b','c','d'...>, and we can construct the set {a,b,c}, which has 3 elements.
However, when dealing with infinite sets, it is essential to distinguish between the two, since the two notions are in fact different for infinite sets. Considering the position aspect leads to ordinal numbers, while the size aspect is generalized by the cardinal numbers described here.
The intuition behind the formal definition of cardinal is the construction of a notion of the relative size or "bigness" of a set, without reference to the kind of members which it has. For finite sets this is easy; one simply counts the number of elements a set has. In order to compare the sizes of larger sets, it is necessary to appeal to more refined notions.
A set "Y" is at least as big as a set "X" if there is an injective mapping from the elements of "X" to the elements of "Y". An injective mapping identifies each element of the set "X" with a unique element of the set "Y". This is most easily understood by an example; suppose we have the sets "X" = {1,2,3} and "Y" = {a,b,c,d}, then using this notion of size, we would observe that there is a mapping:
1 → a
2 → b
3 → c
which is injective, and hence conclude that "Y" has cardinality greater than or equal to "X". The element d has no element mapping to it, but this is permitted as we only require an injective mapping, and not necessarily a bijective mapping. The advantage of this notion is that it can be extended to infinite sets.
We can then extend this to an equality-style relation. Two sets "X" and "Y" are said to have the same "cardinality" if there exists a bijection between "X" and "Y". By the Schroeder–Bernstein theorem, this is equivalent to there being "both" an injective mapping from "X" to "Y", "and" an injective mapping from "Y" to "X". We then write |"X"| = |"Y"|. The cardinal number of "X" itself is often defined as the least ordinal "a" with |"a"| = |"X"|. This is called the von Neumann cardinal assignment; for this definition to make sense, it must be proved that every set has the same cardinality as "some" ordinal; this statement is the well-ordering principle. It is however possible to discuss the relative cardinality of sets without explicitly assigning names to objects.
The classic example used is that of the infinite hotel paradox, also called Hilbert's paradox of the Grand Hotel. Supposing there is an innkeeper at a hotel with an infinite number of rooms. The hotel is full, and then a new guest arrives. It is possible to fit the extra guest in by asking the guest who was in room 1 to move to room 2, the guest in room 2 to move to room 3, and so on, leaving room 1 vacant. We can explicitly write a segment of this mapping:
1 → 2
2 → 3
3 → 4
"n" → "n" + 1
With this assignment, we can see that the set {1,2,3...} has the same cardinality as the set {2,3,4...}, since a bijection between the first and the second has been shown. This motivates the definition of an infinite set being any set that has a proper subset of the same cardinality (i.e., a Dedekind-infinite set); in this case {2,3,4...} is a proper subset of {1,2,3...}.
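The room-shifting argument is just the map that sends n to n + 1; the short Python sketch below (illustrative names only) checks on a finite prefix that it behaves as a bijection between {1, 2, 3, ...} and its proper subset {2, 3, 4, ...}.

```python
def reassign(room):
    """Hilbert's hotel: the guest in room n moves to room n + 1."""
    return room + 1

occupied = range(1, 11)                      # a finite prefix of the full hotel
new_rooms = [reassign(n) for n in occupied]
print(new_rooms)                             # [2, 3, ..., 11]; room 1 is now free
# Distinct old rooms go to distinct new rooms (injective), and every room from 2 upward
# is reached (surjective onto {2, 3, ...}), so the full map is a bijection between
# {1, 2, 3, ...} and its proper subset {2, 3, 4, ...}.
```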
When considering these large objects, one might also want to see if the notion of counting order coincides with that of cardinal defined above for these infinite sets. It happens that it does not; by considering the above example we can see that if some object "one greater than infinity" exists, then it must have the same cardinality as the infinite set we started out with. It is possible to use a different formal notion for number, called ordinals, based on the ideas of counting and considering each number in turn, and we discover that the notions of cardinality and ordinality are divergent once we move out of the finite numbers.
It can be proved that the cardinality of the real numbers is greater than that of the natural numbers just described. This can be visualized using Cantor's diagonal argument; classic questions of cardinality (for instance the continuum hypothesis) are concerned with discovering whether there is some cardinal between some pair of other infinite cardinals. In more recent times, mathematicians have been describing the properties of larger and larger cardinals.
Since cardinality is such a common concept in mathematics, a variety of names are in use. Sameness of cardinality is sometimes referred to as "equipotence", "equipollence", or "equinumerosity". It is thus said that two sets with the same cardinality are, respectively, "equipotent", "equipollent", or "equinumerous".
Formal definition.
Formally, assuming the axiom of choice, the cardinality of a set "X" is the least ordinal number α such that there is a bijection between "X" and α. This definition is known as the von Neumann cardinal assignment. If the axiom of choice is not assumed, then a different approach is needed. The oldest definition of the cardinality of a set "X" (implicit in Cantor and explicit in Frege and "Principia Mathematica") is as the class ["X"] of all sets that are equinumerous with "X". This does not work in ZFC or other related systems of axiomatic set theory because if "X" is non-empty, this collection is too large to be a set. In fact, for "X" ≠ ∅ there is an injection from the universe into ["X"] by mapping a set "m" to {"m"} × "X", and so by the axiom of limitation of size, ["X"] is a proper class. The definition does work however in type theory and in New Foundations and related systems. However, if we restrict from this class to those equinumerous with "X" that have the least rank, then it will work (this is a trick due to Dana Scott: it works because the collection of objects with any given rank is a set).
Von Neumann cardinal assignment implies that the cardinal number of a finite set is the common ordinal number of all possible well-orderings of that set, and cardinal and ordinal arithmetic (addition, multiplication, power, proper subtraction) then give the same answers for finite numbers. However, they differ for infinite numbers. For example, formula_9 in ordinal arithmetic while formula_10 in cardinal arithmetic, although the von Neumann assignment puts formula_11. On the other hand, Scott's trick implies that the cardinal number 0 is formula_12, which is also the ordinal number 1, and this may be confusing. A possible compromise (to take advantage of the alignment in finite arithmetic while avoiding reliance on the axiom of choice and confusion in infinite arithmetic) is to apply von Neumann assignment to the cardinal numbers of finite sets (those which can be well ordered and are not equipotent to proper subsets) and to use Scott's trick for the cardinal numbers of other sets.
Formally, the order among cardinal numbers is defined as follows: |"X"| ≤ |"Y"| means that there exists an injective function from "X" to "Y". The Cantor–Bernstein–Schroeder theorem states that if |"X"| ≤ |"Y"| and |"Y"| ≤ |"X"| then |"X"| = |"Y"|. The axiom of choice is equivalent to the statement that given two sets "X" and "Y", either |"X"| ≤ |"Y"| or |"Y"| ≤ |"X"|.
A set "X" is Dedekind-infinite if there exists a proper subset "Y" of "X" with |"X"| = |"Y"|, and Dedekind-finite if such a subset does not exist. The finite cardinals are just the natural numbers, in the sense that a set "X" is finite if and only if |"X"| = |"n"| = "n" for some natural number "n". Any other set is infinite.
Assuming the axiom of choice, it can be proved that the Dedekind notions correspond to the standard ones. It can also be proved that the cardinal formula_3 (aleph null or aleph-0, where aleph is the first letter in the Hebrew alphabet, represented formula_1) of the set of natural numbers is the smallest infinite cardinal (i.e., any infinite set has a subset of cardinality formula_3). The next larger cardinal is denoted by formula_8, and so on. For every ordinal α, there is a cardinal number formula_17 and this list exhausts all infinite cardinal numbers.
Cardinal arithmetic.
We can define arithmetic operations on cardinal numbers that generalize the ordinary operations for natural numbers. It can be shown that for finite cardinals, these operations coincide with the usual operations for natural numbers. Furthermore, these operations share many properties with ordinary arithmetic.
Successor cardinal.
If the axiom of choice holds, then every cardinal κ has a successor, denoted κ+, where κ+ > κ and there are no cardinals between κ and its successor. (Without the axiom of choice, using Hartogs' theorem, it can be shown that for any cardinal number κ, there is a minimal cardinal κ+ such that formula_18) For finite cardinals, the successor is simply κ + 1. For infinite cardinals, the successor cardinal differs from the successor ordinal.
Cardinal addition.
If "X" and "Y" are disjoint, addition is given by the union of "X" and "Y". If the two sets are not already disjoint, then they can be replaced by disjoint sets of the same cardinality (e.g., replace "X" by "X"×{0} and "Y" by "Y"×{1}).
formula_19
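For finite sets this is just counting a disjoint union; a short sketch (the function name and the tagging by 0 and 1 are ours, mirroring the replacement described above) makes the construction explicit.

```python
def cardinal_sum(X, Y):
    """|X| + |Y| as the cardinality of a disjoint union, tagging elements to force disjointness."""
    disjoint_union = {(x, 0) for x in X} | {(y, 1) for y in Y}
    return len(disjoint_union)

print(cardinal_sum({1, 2, 3}, {2, 3, 4, 5}))   # 3 + 4 = 7, even though the sets overlap
```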
Zero is an additive identity: "κ" + 0 = 0 + "κ" = "κ".
Addition is associative: ("κ" + "μ") + "ν" = "κ" + ("μ" + "ν").
Addition is commutative: "κ" + "μ" = "μ" + "κ".
Addition is non-decreasing in both arguments:
formula_20
Assuming the axiom of choice, addition of infinite cardinal numbers is easy. If either "κ" or "μ" is infinite, then
formula_21
Subtraction.
Assuming the axiom of choice and, given an infinite cardinal "σ" and a cardinal "μ", there exists a cardinal "κ" such that "μ" + "κ" = "σ" if and only if "μ" ≤ "σ". It will be unique (and equal to "σ") if and only if "μ" < "σ".
Cardinal multiplication.
The product of cardinals comes from the Cartesian product.
formula_22
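For finite sets the Cartesian-product definition reduces to ordinary multiplication; a one-function sketch (names are ours) illustrates it.

```python
from itertools import product

def cardinal_product(X, Y):
    """|X| * |Y| as the cardinality of the Cartesian product X x Y."""
    return len(set(product(X, Y)))

print(cardinal_product({1, 2, 3}, {'a', 'b'}))   # 3 * 2 = 6
```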
Zero is a multiplicative absorbing element: "κ"·0 = 0·"κ" = 0.
There are no nontrivial zero divisors: "κ"·"μ" = 0 → ("κ" = 0 or "μ" = 0).
One is a multiplicative identity: "κ"·1 = 1·"κ" = "κ".
Multiplication is associative: ("κ"·"μ")·"ν" = "κ"·("μ"·"ν").
Multiplication is commutative: "κ"·"μ" = "μ"·"κ".
Multiplication is non-decreasing in both arguments:
"κ" ≤ "μ" → ("κ"·"ν" ≤ "μ"·"ν" and "ν"·"κ" ≤ "ν"·"μ").
Multiplication distributes over addition:
"κ"·("μ" + "ν") = "κ"·"μ" + "κ"·"ν" and
("μ" + "ν")·"κ" = "μ"·"κ" + "ν"·"κ".
Assuming the axiom of choice, multiplication of infinite cardinal numbers is also easy. If either "κ" or "μ" is infinite and both are non-zero, then
formula_23
Thus the product of two infinite cardinal numbers is equal to their sum.
Division.
Assuming the axiom of choice and given an infinite cardinal "π" and a non-zero cardinal "μ", there exists a cardinal "κ" such that "μ" · "κ" = "π" if and only if "μ" ≤ "π". It will be unique (and equal to "π") if and only if "μ" < "π".
Cardinal exponentiation.
Exponentiation is given by
formula_24
where "XY" is the set of all functions from "Y" to "X". It is easy to check that the right-hand side depends only on formula_25 and formula_25.
"κ"^0 = 1 (in particular 0^0 = 1), see empty function.
If "μ" ≥ 1, then 0^"μ" = 0.
1^"μ" = 1.
"κ"^1 = "κ".
"κ"^("μ" + "ν") = "κ"^"μ"·"κ"^"ν".
"κ"^("μ" · "ν") = ("κ"^"μ")^"ν".
("κ"·"μ")^"ν" = "κ"^"ν"·"μ"^"ν".
Exponentiation is non-decreasing in both arguments:
(1 ≤ "ν" and "κ" ≤ "μ") → ("ν"^"κ" ≤ "ν"^"μ") and
("κ" ≤ "μ") → ("κ"^"ν" ≤ "μ"^"ν").
2|"X"| is the cardinality of the power set of the set "X" and Cantor's diagonal argument shows that 2|"X"| > |"X"| for any set "X". This proves that no largest cardinal exists (because for any cardinal "κ", we can always find a larger cardinal 2"κ"). In fact, the class of cardinals is a proper class. (This proof fails in some set theories, notably New Foundations.)
All the remaining propositions in this section assume the axiom of choice:
If "κ" and "μ" are both finite and greater than 1, and "ν" is infinite, then "κ""ν" = "μ""ν".
If "κ" is infinite and "μ" is finite and non-zero, then "κ""μ" = "κ".
If 2 ≤ "κ" and 1 ≤ "μ" and at least one of them is infinite, then:
Max ("κ", 2"μ") ≤ "κ""μ" ≤ Max (2"κ", 2"μ").
Using König's theorem, one can prove "κ" < "κ"cf("κ") and "κ" < cf(2"κ") for any infinite cardinal "κ", where cf("κ") is the cofinality of "κ".
Roots.
Assuming the axiom of choice and, given an infinite cardinal "κ" and a finite cardinal "μ" greater than 0, the cardinal "ν" satisfying formula_27 will be formula_28.
Logarithms.
Assuming the axiom of choice and, given an infinite cardinal "κ" and a finite cardinal "μ" greater than 1, there may or may not be a cardinal "λ" satisfying formula_29. However, if such a cardinal exists, it is infinite and less than "κ", and any finite cardinality "ν" greater than 1 will also satisfy formula_30.
The logarithm of an infinite cardinal number "κ" is defined as the least cardinal number "μ" such that "κ" ≤ 2^"μ". Logarithms of infinite cardinals are useful in some fields of mathematics, for example in the study of cardinal invariants of topological spaces, though they lack some of the properties that logarithms of positive real numbers possess.
The continuum hypothesis.
The continuum hypothesis (CH) states that there are no cardinals strictly between formula_3 and formula_32. The latter cardinal number is also often denoted by formula_4; it is the cardinality of the continuum (the set of real numbers). In this case formula_34.
Similarly, the generalized continuum hypothesis (GCH) states that for every infinite cardinal formula_28, there are no cardinals strictly between formula_28 and formula_37. Both the continuum hypothesis and the generalized continuum hypothesis have been proved to be independent of the usual axioms of set theory, the Zermelo–Fraenkel axioms together with the axiom of choice (ZFC).
Indeed, Easton's theorem shows that, for regular cardinals formula_28, the only restrictions ZFC places on the cardinality of formula_37 are that formula_40, and that the exponential function is non-decreasing.
Cardinality
In mathematics, cardinality is an intrinsic property of sets, roughly meaning the number of individual objects they contain, which may be infinite. The cardinal number corresponding to a set formula_1 is written as formula_2, with the set's name between two vertical bars. For finite sets, cardinality coincides with the natural number found by counting the elements. Beginning in the late 19th century, this concept of cardinality was generalized to infinite sets.
Two sets are said to be equinumerous or "have the same cardinality" if there exists a one-to-one correspondence between them. That is, if their objects can be paired such that each object has a pair, and no object is paired more than once (see image). A set is countably infinite if it can be placed in one-to-one correspondence with the set of natural numbers formula_3 For example, the set of even numbers formula_4, the set of prime numbers formula_5, and the set of rational numbers are all countable. A set is uncountable if it is both infinite and cannot be put in correspondence with the set of natural numbers—for example, the set of real numbers or the powerset of the set of natural numbers.
Cardinal numbers extend the natural numbers as representatives of size. Most commonly, the aleph numbers are defined via ordinal numbers, and represent a large class of sets. The question of whether there is a set whose cardinality is greater than the integers but less than that of the real numbers, is known as the continuum hypothesis, which has been shown to be unprovable in standard set theories such as Zermelo–Fraenkel set theory.
Definition and etymology.
Cardinality is an intrinsic property of sets which defines their size, roughly corresponding to the number of individual objects they contain. Fundamentally however, it is different from the concepts of number or counting as the cardinalities of two sets can be compared without referring to their number of elements, or defining number at all. For example, in the image above, a set of apples is compared to a set of oranges such that every fruit is used exactly once which shows these two sets have the same cardinality, even if one doesn't know how many of each there are. Thus, cardinality is measured by putting sets in one-to-one correspondence. If it is possible, the sets are said to have the "same cardinality", and if not, one set is said to be "strictly larger" or "strictly smaller" than the other.
Georg Cantor, the originator of the concept, defined cardinality as "the general concept which, with the aid of our intelligence, results from a set when we abstract from the nature of its various elements and from the order of their being given." This definition was considered to be imprecise, unclear, and purely psychological. Thus, cardinal numbers, a means of measuring cardinality, became the main way of presenting the concept. The distinction between the two is roughly analogous to the difference between an object's mass and its mass "in kilograms". However, somewhat confusingly, the phrases "The cardinality of M" and "The cardinal number of M" are used interchangeably.
In English, the term "cardinality" originates from the post-classical Latin "cardinalis", meaning "principal" or "chief", which derives from "cardo", a noun meaning "hinge". In Latin, "cardo" referred to something central or pivotal, both literally and metaphorically. This concept of centrality passed into medieval Latin and then into English, where "cardinal" came to describe things considered to be, in some sense, fundamental, such as "cardinal virtues", "cardinal sins", "cardinal directions", and (grammatically defined) "cardinal numbers". The last of which referred to numbers used for counting (e.g., "one", "two", "three"), as opposed to "ordinal numbers", which express order (e.g., "first, second, third"), and "nominal numbers" used for labeling without meaning (e.g., jersey numbers and serial numbers). In mathematics, the notion of cardinality was first introduced by Georg Cantor in the late 19th century, wherein he used the used the term "Mächtigkeit", which may be translated as "magnitude" or "power", though Cantor credited the term to a work by Jakob Steiner on projective geometry. The terms "cardinality" and "cardinal number" were eventually adopted from the grammatical sense, and later translations would use these terms.
History.
Ancient history.
From the 6th century BCE, the writings of Greek philosophers, such as Anaximander, show hints of comparing infinite sets or shapes, however, it was generally viewed as paradoxical and imperfect (cf. "Zeno's paradoxes"). Aristotle distinguished between the notions of actual infinity and potential infinity, arguing that Greek mathematicians understood the difference, and that they "do not need the [actual] infinite and do not use it." The Greek notion of number ("αριθμός", "arithmos") was used exclusively for a definite number of definite objects (i.e. finite numbers). This would be codified in Euclid's "Elements", where the fifth common notion states "The whole is greater than the part", often called the "Euclidean principle". This principle would be the dominating philosophy in mathematics until the 19th century.
Around the 4th century BCE, Jaina mathematics would be the first to discuss different sizes of infinity. They defined three major classes of number: enumerable (finite numbers), unenumerable ("asamkhyata", roughly, countably infinite), and infinite ("ananta"). Then they had five classes of infinite numbers: infinite in one direction, infinite in both directions, infinite in area, infinite everywhere, and infinite perpetually.
One of the earliest explicit uses of a one-to-one correspondence is recorded in Aristotle's "Mechanics", known as Aristotle's wheel paradox. The paradox can be briefly described as follows: A wheel is depicted as two concentric circles. The larger, outer circle is tangent to a horizontal line (e.g. a road that it rolls on), while the smaller, inner circle is rigidly affixed to the larger. Assuming the larger circle rolls along the line without slipping (or skidding) for one full revolution, the distances moved by both circles are the same: the circumference of the larger circle. Further, the lines traced by the bottom-most point of each are the same length. Since the smaller wheel does not skip any points, and no point on the smaller wheel is used more than once, there is a one-to-one correspondence between the two circles.
Pre-Cantorian set theory.
Galileo Galilei presented what was later coined Galileo's paradox in his book "Two New Sciences" (1638), where he presents a seeming paradox in infinite sequences of numbers. It goes roughly as follows: for each square number formula_6 1, 4, 9, 16, and so on, there is a unique square root formula_7 1, 2, 3, 4, and so on. Therefore, there are as many square roots as there are squares. However, every number is a square root, since it can be squared, but not every number is a square number. Moreover, the proportion of square numbers diminishes as one passes to larger values, and is eventually smaller than any given fraction. He denied that this was fundamentally contradictory; however, he concluded that this meant we could not compare the sizes of infinite sets, missing the opportunity to discover cardinality.
In "A Treatise of Human Nature" (1739), David Hume is quoted for saying ""When two numbers are so combined, as that the one has always a unit answering to every unit of the other, we pronounce them equal"," now called "Hume's principle", which was used extensively by Gottlob Frege later during the rise of set theory.
Bernard Bolzano's "Paradoxes of the Infinite" ("Paradoxien des Unendlichen", 1851) is often considered the first systematic attempt to introduce the concept of sets into mathematical analysis. In this work, Bolzano defended the notion of actual infinity, presented an early formulation of what would later be recognized as one-to-one correspondence between infinite sets. He discussed examples such as the pairing between the intervals formula_8 and formula_9 by the relation formula_10 and revisited Galileo's paradox. However, he too resisted saying that these sets were, in that sense, the same size. While "Paradoxes of the Infinite" anticipated several ideas central to later set theory, the work had little influence on contemporary mathematics, in part due to its posthumous publication and limited circulation.
Early set theory.
Georg Cantor.
The concept of cardinality emerged nearly fully formed in the work of Georg Cantor during the 1870s and 1880s, in the context of mathematical analysis. In a series of papers beginning with "On a Property of the Collection of All Real Algebraic Numbers" (1874), Cantor introduced the idea of comparing the sizes of infinite sets, through the notion of one-to-one correspondence. He showed that the set of real numbers was, in this sense, strictly larger than the set of natural numbers using a nested intervals argument. This result was later refined into the more widely known diagonal argument of 1891, published in "Über eine elementare Frage der Mannigfaltigkeitslehre," where he also proved the more general result (now called Cantor's Theorem) that the power set of any set is strictly larger than the set itself.
Cantor introduced the notion of cardinal numbers in terms of ordinal numbers. He viewed cardinal numbers as an abstraction of sets, and introduced notation in which, for a given set formula_11, the order type of that set was written formula_12, and its cardinal number was written with a second bar over it, a double abstraction. He also introduced the Aleph sequence for infinite cardinal numbers. These notations appeared in correspondence and were formalized in his later writings, particularly the series "Beiträge zur Begründung der transfiniten Mengenlehre" (1895–1897). In these works, Cantor developed an arithmetic of cardinal numbers, defining addition, multiplication, and exponentiation of cardinal numbers based on set-theoretic constructions. This led to the formulation of the Continuum Hypothesis (CH), the proposition that no set has cardinality strictly between the cardinality of the natural numbers formula_14 and the cardinality of the continuum formula_15, that is, whether formula_16. Cantor was unable to resolve CH and left it as an open problem.
Other contributors.
Parallel to Cantor’s development, Richard Dedekind independently formulated many advanced theorems of set theory, and helped establish set-theoretic foundations of algebra and arithmetic. Dedekind’s "Was sind und was sollen die Zahlen?" (1888) emphasized structural properties over extensional definitions, and supported the bijective formulation of size and number. Dedekind was in correspondence with Cantor during the development of set theory; he supplied Cantor with a proof of the countability of the algebraic numbers, and gave feedback and modifications on Cantor's proofs before publishing.
After Cantor's 1883 proof that all finite-dimensional spaces formula_17 have the same cardinality, in 1890, Giuseppe Peano introduced the Peano curve, which was a more visual proof that the unit interval formula_18 has the same cardinality as the unit square on formula_19. This created a new area of mathematical analysis studying what is now called space-filling curves.
German logician Gottlob Frege attempted to ground the concepts of number and arithmetic in logic using Cantor's theory of cardinality and Hume's principle in "Die Grundlagen der Arithmetik" (1884) and the subsequent "Grundgesetze der Arithmetik" (1893, 1903). Frege defined cardinal numbers as equivalence classes of sets under equinumerosity. However, Frege's approach to set theory was later shown to be flawed. His approach was eventually reformalized by Bertrand Russell and Alfred Whitehead in "Principia Mathematica" (1910–1913, vol. II) using a theory of types, though Russell initially had difficulty understanding Cantor's and Frege's intuitions of cardinality. This definition of cardinal numbers is now referred to as the "Frege–Russell" definition. This definition was eventually superseded by the convention established by John von Neumann in 1928 which uses representatives to define cardinal numbers.
At the Paris conference of the International Congress of Mathematicians in 1900, David Hilbert, one of the most influential mathematicians of the time, gave a speech wherein he presented ten unsolved problems (of a total of 23, later published, now called "Hilbert's problems"). Of these, he placed "Cantor's problem" (now called the Continuum Hypothesis) as the first on the list. This list of problems proved highly influential in 20th-century mathematics and attracted considerable attention from other mathematicians to Cantor's theory of cardinality.
Axiomatic set theory.
In 1908, Ernst Zermelo proposed the first axiomatization of set theory, now called Zermelo set theory, primarily to support his earlier (1904) proof of the Well-ordering theorem, which showed that all cardinal numbers could be represented as Alephs. The proof relied on a controversial principle now known as the Axiom of Choice (AC). Zermelo's system would later be extended by Abraham Fraenkel and Thoralf Skolem in the 1920s to create the standard foundation of set theory, called Zermelo–Fraenkel set theory (ZFC, "C" for the Axiom of Choice). ZFC provided a rigorous foundation through which infinite cardinals could be systematically studied while avoiding the paradoxes of naive set theory.
In 1940, Kurt Gödel showed that CH cannot be "disproved" from the axioms of ZFC. Gödel's proof shows that both CH and AC hold in his constructible universe, an inner model of ZFC in which CH holds. The existence of an inner model of ZFC in which additional axioms hold shows that the additional axioms are (relatively) consistent with ZFC. In 1963, Paul Cohen showed that CH also cannot be "proven" from the ZFC axioms, which, together with Gödel's result, showed that CH is independent of ZFC. To prove his result, Cohen developed the method of forcing, which has become a standard tool in set theory. Essentially, this method begins with Gödel's model of ZFC in which CH holds and constructs another model, containing more sets than the original, in which CH fails. Cohen was awarded the Fields Medal in 1966 for his proof.
Comparing sets.
Introduction.
The basic notions of sets and functions are used to develop the concept of cardinality, and technical terms therein are used throughout this article. A set can be understood as any collection of objects, usually represented with curly braces. For example, formula_20 specifies a set, called formula_21, which contains the numbers 1, 2, and 3. The symbol formula_22 represents set membership, for example formula_23 says "1 is a member of the set formula_21" which is true by the definition of formula_21 above.
A function is an association that maps elements of one set to the elements of another, often represented with an arrow diagram. For example, the adjacent image depicts a function which maps the set of natural numbers to the set of even numbers by multiplying by 2. If a function does not map two elements to the same place, it is called injective. If a function covers every element in the output space, it is called surjective. If a function is both injective and surjective, it is called bijective. (For further clarification, see "Bijection, injection and surjection".)
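To make these three properties concrete, the following Python sketch checks them for a function between small finite sets; the particular sets and the "multiply by 2" map are hypothetical examples chosen only for illustration.

```python
# A minimal sketch: checking injectivity, surjectivity, and bijectivity for a
# function between finite sets, represented as a dictionary.

def is_injective(f: dict) -> bool:
    """No two inputs are sent to the same output."""
    return len(set(f.values())) == len(f)

def is_surjective(f: dict, codomain: set) -> bool:
    """Every element of the codomain is hit by at least one input."""
    return set(f.values()) == set(codomain)

def is_bijective(f: dict, codomain: set) -> bool:
    return is_injective(f) and is_surjective(f, codomain)

# "Multiply by 2" on a small initial segment of the natural numbers:
domain = {0, 1, 2, 3}
evens = {0, 2, 4, 6}
double = {n: 2 * n for n in domain}

print(is_injective(double))                    # True
print(is_bijective(double, evens))             # True: a bijection onto the evens
print(is_surjective(double, set(range(7))))    # False: 1, 3, 5 are never produced
```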
Equinumerosity.
The intuitive property of two sets having the "same size" is that their objects can be paired one-to-one. A one-to-one pairing between two sets defines a bijective function between them by mapping each object to its pair. Similarly, a bijection between two sets defines a pairing of their elements by pairing each object with the one it maps to. Therefore, these notions of "pairing" and "bijection" are intuitively equivalent. In fact, functions are often defined literally as this set of pairings. Thus, the following definition is given:
Two sets formula_1 and formula_27 are said to have the "same cardinality" if their elements can be paired one-to-one. That is, if there exists a function formula_28 which is bijective. This is written as formula_29 formula_30 formula_31 and eventually formula_32 once formula_2 has been defined. Alternatively, these sets, formula_34 and formula_35 may be said to be "equivalent", "similar", "equinumerous", "equipotent", or "equipollent". For example, the set formula_36 of non-negative even numbers has the same cardinality as the set formula_37 of natural numbers, since the function formula_38 is a bijection between them (see picture above).
The intuitive property for finite sets that "the whole is greater than the part" is no longer true for infinite sets, and the existence of an injection or surjection that fails to be a bijection does not show that no bijection exists. For example, the function defined by formula_39 is injective, but not surjective (since 2, for instance, is not mapped to), and the function defined by formula_40 (see: "floor function") is surjective, but not injective (since 0 and 1, for instance, both map to 0). Neither function contradicts formula_41, which was established by the bijection given above.
Equivalence.
A fundamental result necessary in developing a theory of cardinality is relating it to an equivalence relation. A binary relation is an equivalence relation if it satisfies the three basic properties of equality: reflexivity, symmetry, and transitivity.
Since equinumerosity satisfies these three properties, it forms an equivalence relation. This means that cardinality, in some sense, partitions sets into equivalence classes, and one may assign a representative to denote this class. This motivates the notion of a cardinal number. Somewhat more formally, a relation must be a certain set of ordered pairs. Since there is no set of all sets in standard set theory (see "Cantor's paradox" below), equinumerosity is not a relation in the usual sense, but a predicate or a relation over classes.
Inequality.
A set formula_1 is not larger than a set formula_27 if it can be mapped into formula_27 without overlap. That is, the cardinality of formula_1 is less than or equal to the cardinality of formula_27 if there is an injective function from formula_1 to formula_27. This is written formula_78 formula_79 and eventually formula_80 If formula_78 but there is no injection from formula_27 to formula_44 then formula_1 is said to be "strictly" smaller than formula_49 written without the underline as formula_86 or formula_87 For example, if formula_1 has four elements and formula_27 has five, then the following are true: formula_90 formula_78 and formula_92
The basic properties of an inequality are reflexivity (for any formula_93 formula_94), transitivity (if formula_95 and formula_96 then formula_97) and antisymmetry (if formula_95 and formula_99 then formula_100) (See "Inequality § Formal definitions"). Cardinal inequality formula_101 as defined above is reflexive since the identity function is injective, and is transitive by function composition. Antisymmetry is established by the Schröder–Bernstein theorem. The proof roughly goes as follows.
Given sets formula_1 and formula_27, where formula_28 is the function that proves formula_105 and formula_106 proves formula_107, consider the sequences of points given by repeatedly applying formula_50 and then formula_65 to each element. Then one can define a bijection formula_110 as follows: If a sequence forms a cycle, begins with an element formula_111 not mapped to by formula_65, or extends infinitely in both directions, define formula_113 for each formula_114 in those sequences. In the remaining case, where a sequence begins with an element formula_115 not mapped to by formula_50, define formula_117 for each formula_114 in that sequence. Then formula_119 is a bijection.
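For finite sets this construction can be carried out explicitly. The Python sketch below uses a common constructive variant of the argument: it tracks only the chains that start at elements of the first set not hit by the second injection, and uses the inverse of that injection everywhere else. The sets and injections are hypothetical examples; for finite sets of equal size every chain is a cycle, so only the inverse branch is exercised, and the other branch matters for infinite sets.

```python
def schroeder_bernstein(A, B, f, g):
    """Given injections f: A -> B and g: B -> A (as dicts) between finite sets,
    return a bijection h: A -> B.

    Let C be the union of the chains starting at elements of A outside the
    image of g; use f on C and the inverse of g everywhere else."""
    layer = set(A) - set(g.values())          # starting points of the A-side chains
    C = set()
    while layer:                              # follow each chain: a -> f(a) -> g(f(a)) -> ...
        C |= layer
        layer = {g[f[a]] for a in layer} - C
    g_inverse = {a: b for b, a in g.items()}  # well defined because g is injective
    return {a: (f[a] if a in C else g_inverse[a]) for a in A}

A = {1, 2, 3}
B = {'x', 'y', 'z'}
f = {1: 'y', 2: 'z', 3: 'x'}                  # an injection A -> B
g = {'x': 2, 'y': 3, 'z': 1}                  # an injection B -> A
print(schroeder_bernstein(A, B, f, g))        # {1: 'z', 2: 'x', 3: 'y'}
```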
The above shows that cardinal inequality is a partial order. A total order has the additional property that, for any formula_120 and formula_121, either formula_95 or formula_123 This can be established by the well-ordering theorem. Every well-ordered set is isomorphic to a unique ordinal number, called the order type of the set. Then, by comparing their order types, one can show that formula_105 or formula_107. This result is equivalent to the axiom of choice.
Countability.
Countable sets.
A set is called "countable" if it is finite or has a bijection with the set of natural numbers formula_126 in which case it is called "countably infinite". The term "denumerable" is also sometimes used for countably infinite sets. For example, the set of all even natural numbers is countable, and therefore has the same cardinality as the whole set of natural numbers, even though it is a proper subset. Similarly, the set of square numbers is countable, which was considered paradoxical for hundreds of years before modern set theory (see: ""). However, several other examples have historically been considered surprising or initially unintuitive since the rise of set theory.
The rational numbers formula_127 are those which can be expressed as the quotient or fraction of two integers. The rational numbers can be shown to be countable by considering the set of fractions as the set of all ordered pairs of integers, denoted formula_128 which can be visualized as the set of all integer points on a grid. Then, an intuitive function can be described by drawing a line in a repeating pattern, or spiral, which eventually passes through each point in the grid; for example, going through each diagonal of the grid for positive fractions, or through a lattice spiral for all integer pairs. These enumerations technically over-count the rationals, since, for example, the rational number formula_129 is mapped to by all the fractions formula_130 because the grid method treats these as distinct ordered pairs. So this function shows formula_131 not formula_132 This can be corrected by "skipping over" these numbers in the grid, or by designing a function which avoids the repetition naturally, for example using the Calkin–Wilf tree.
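As a sketch of the second approach, the following Python generator uses the Calkin–Wilf recurrence, which visits every positive rational exactly once, so no "skipping over" of repeated fractions is needed.

```python
from fractions import Fraction
from math import floor

def positive_rationals():
    """Enumerate every positive rational exactly once (Calkin–Wilf order)."""
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * floor(q) - q + 1)

gen = positive_rationals()
print([next(gen) for _ in range(8)])
# [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1), Fraction(1, 3),
#  Fraction(3, 2), Fraction(2, 3), Fraction(3, 1), Fraction(1, 4)]
```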
A number is called algebraic if it is a solution of some polynomial equation (with integer coefficients). For example, the square root of two formula_133 is a solution to formula_134 and the rational number formula_135 is the solution to formula_136 Conversely, a number which cannot be the root of any polynomial is called transcendental. Two examples are Euler's number and pi. In general, proving a number transcendental is considered very difficult, and only a few classes of transcendental numbers are known. However, it can be shown that the set of algebraic numbers is countable. Since the set of algebraic numbers is countable while the real numbers are uncountable (shown in the following section), the transcendental numbers must form the vast majority of real numbers, even though they are individually much harder to identify. That is to say, almost all real numbers are transcendental.
Uncountable sets.
A set is called "uncountable" if it is not countable. That is, it is infinite and strictly larger than the set of natural numbers. The usual first example of this is the set of real numbers formula_137, which can be understood as the set of all numbers on the number line. One method of proving that the reals are uncountable is called Cantor's diagonal argument, credited to Cantor for his 1891 proof, though his differs from the more common presentation.
It begins by assuming, for contradiction, that there is some one-to-one mapping between the natural numbers and the set of real numbers between 0 and 1 (the interval formula_18). Then, take the decimal expansions of each real number, which look like formula_139 Considering these real numbers in a column, create a new number such that the first digit of the new number is different from that of the first number in the column, the second digit is different from that of the second number in the column, and so on. The new number must also have a unique decimal representation; that is, it cannot end in repeating nines or repeating zeros. For example, if a digit is not a 7, make the corresponding digit of the new number a 7, and if it is a 7, make it a 3. Then, this new number will differ from each number in the list by at least one digit, and therefore cannot be in the list. This shows that the real numbers cannot be put into a one-to-one correspondence with the naturals, and thus must form a strictly larger set.
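The digit-flipping rule just described can be written out directly. The following sketch takes a finite list of digit strings, standing in for the first few entries of a supposed enumeration of the reals in formula_18, and returns an expansion that disagrees with the nth entry at its nth digit; because it uses only the digits 7 and 3, it cannot end in repeating nines or zeros.

```python
def diagonal_not_in_list(expansions):
    """Build a decimal expansion that differs from the n-th listed expansion
    in its n-th digit, using the 7-or-3 rule described above."""
    digits = ['3' if x[n] == '7' else '7' for n, x in enumerate(expansions)]
    return '0.' + ''.join(digits)

# Hypothetical first four entries of a supposed list of reals in [0, 1]
# (digit strings, with the leading "0." omitted):
sample = ['1415926', '7182818', '4142135', '7320508']
print(diagonal_not_in_list(sample))   # 0.7777: disagrees with the n-th entry at its n-th digit
```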
Another classical example of an uncountable set, established using a related reasoning, is the power set of the natural numbers, denoted formula_140. This is the set of all subsets of formula_141, including the empty set and formula_141 itself. The method is much closer to Cantor's original diagonal argument. Again, assume by contradiction that there exists a one-to-one correspondence formula_50 between formula_141 and formula_140, so that every subset of formula_141 is assigned to some natural number. These subsets are then placed in a column, in the order defined by formula_50 (see image). Now, one may define a subset formula_148 of formula_141 which is not in the list by taking the negation of the "diagonal" of this column as follows:
If formula_150, then formula_151, that is, if 1 is in the first subset of the list, then 1 is "not" in the subset formula_148. Further, if formula_153, then formula_154, that is, if the number 2 is not in the second subset of the column, then 2 "is" in the subset formula_148. In general, for each natural number formula_156, formula_157 if and only if formula_158, meaning formula_156 is put in the subset formula_148 precisely when the nth subset in the column does not contain the number formula_156. Then, for each natural number formula_156, formula_163, meaning that formula_148 is not the nth subset in the list for any number formula_156, and so it cannot appear anywhere in the list defined by formula_50. Since formula_50 was chosen arbitrarily, this shows that every function from formula_141 to formula_140 must miss at least one subset, therefore no such bijection can exist, and so formula_140 must not be countable.
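The same diagonal construction can be made concrete on a finite fragment of a supposed enumeration. In the following sketch the listing is a hypothetical example; the resulting set disagrees with the kth listed subset about whether k is a member, so it cannot equal any of them.

```python
def diagonal_subset(listing):
    """Given a finite fragment of a supposed enumeration of subsets
    (index -> subset), return D = { k : k not in listing[k] }."""
    return {k for k, subset in listing.items() if k not in subset}

listing = {0: {0, 2, 4}, 1: {1, 3}, 2: set(), 3: {0, 1, 2, 3}}
print(diagonal_subset(listing))   # {2}: differs from listing[k] at k, for every k
```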
These two sets, formula_171 and formula_140 can be shown to have the same cardinality (by, for example, assigning each subset to a decimal expansion). Whether there exists a set formula_1 with cardinality between these two sets formula_174 is known as the continuum hypothesis.
Cantor's theorem generalizes the second argument above, showing that every set is strictly smaller than its powerset. The proof roughly goes as follows: Given a set formula_1, assume by contradiction that there is a bijection formula_50 from formula_1 to formula_178. Then, the subset formula_179 given by taking the negation of the "diagonal", formally formula_180, cannot be in the list. Therefore, every such function misses at least one subset, and so formula_181. Further, since formula_178 is itself a set, the argument can be repeated to show formula_183. Taking formula_184, this shows that formula_185 is even larger than formula_186, which was already shown to be uncountable. Repeating this argument shows that there are infinitely many "sizes" of infinity.
Cardinal numbers.
In the above section, "cardinality" of a set was described relationally. In other words, one set could be compared to another, intuitively comparing their "size". Cardinal numbers are a means of measuring this "size" more explicitly. For finite sets, this is simply the natural number found by counting the elements. This number is called the "cardinal number" of that set, or simply "the cardinality" of that set. The cardinal number of a set formula_1 is generally denoted by formula_188 with a vertical bar on each side, though it may also be denoted by formula_1, formula_190 or formula_191
For infinite sets, "cardinal number" is somewhat more difficult to define formally. Cardinal numbers are not usually thought of in terms of their formal definition, but immaterially in terms of their arithmetic/algebraic properties. The assumption that there is "some" cardinal function formula_192 which satisfies formula_193, sometimes called the "axiom of cardinal number" or "Hume's principle", is sufficient for deriving most properties of cardinal numbers.
Commonly in mathematics, if a relation satisfies the properties of an equivalence relation, the objects used to materialize this relation are equivalence classes, which group all of the objects equivalent to one another. For equinumerosity, these classes are called the "Frege–Russell" cardinal numbers. However, this would mean that cardinal numbers are too large to form sets (apart from the cardinal number formula_194 whose only element is the empty set), since, for example, the cardinal number formula_195 would be the set of all sets with one element, and would therefore be a proper class. Thus, following John von Neumann, it is more common to assign representatives of these classes. For example, the set formula_196 would represent the cardinal number formula_195.
Finite sets.
Given a basic sense of natural numbers, a set is said to have cardinality formula_156 if it can be put in one-to-one correspondence with the set formula_199 analogous to counting its elements. For example, the set formula_200 has a natural correspondence with the set formula_201 and therefore is said to have cardinality 4. Other terminologies include "Its cardinality is 4" or "Its cardinal number is 4". In formal contexts, the natural numbers can be understood as some construction of objects satisfying the Peano axioms.
Showing that such a correspondence exists is not always trivial. Combinatorics is the area of mathematics primarily concerned with counting, both as a means and as an end to obtaining results, and with certain properties of finite structures. The notion of cardinality for finite sets is closely tied to many basic combinatorial principles, and provides a set-theoretic foundation for proving them. It can be shown by induction on the possible sizes of sets that finite cardinality corresponds uniquely with the natural numbers.
The addition principle asserts that, given disjoint sets formula_1 and formula_27, formula_204, intuitively meaning that the sizes of disjoint parts add up to the size of the whole. The multiplication principle asserts that, given two sets formula_1 and formula_27, formula_207, intuitively meaning that there are formula_208 ways to pair objects from these sets. Both of these can be proven by a bijective proof, together with induction. The more general result is the inclusion–exclusion principle, which describes how to count the number of elements in overlapping sets.
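A minimal check of both principles on small, arbitrarily chosen example sets:

```python
from itertools import product

A = {'a', 'b', 'c'}
B = {1, 2}

# Addition principle: for disjoint sets, |A ∪ B| = |A| + |B|.
assert len(A | B) == len(A) + len(B)

# Multiplication principle: |A × B| = |A| · |B|.
assert len(set(product(A, B))) == len(A) * len(B)
```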
Naturally, a set is defined to be finite if it can be put in correspondence with the set formula_199 for some natural number formula_210 However, there exist other definitions of "finite" which do not rely on a definition of "number". For example, a set is called Dedekind-finite if it cannot be put in one-to-one correspondence with a proper subset of itself.
Aleph numbers.
The aleph numbers are a sequence of cardinal numbers that denote the size of infinite sets, denoted with an aleph formula_211 the first letter of the Hebrew alphabet. The first aleph number is formula_212 called "aleph-nought", "aleph-zero", or "aleph-null", which represents the cardinality of the set of all natural numbers: formula_213 Then, formula_214 represents the next largest cardinality. The most common way this is formalized in set theory is through Von Neumann ordinals, known as Von Neumann cardinal assignment.
Ordinal numbers generalize the notion of "order" to infinite sets. For example, 2 comes after 1, denoted formula_215 and 3 comes after both, denoted formula_216 Then, one defines a new number, formula_217 which comes after every natural number, denoted formula_218 Further formula_219 and so on. More formally, these ordinal numbers can be defined as follows:
formula_220 the empty set, formula_221 formula_222 formula_223 and so on. Then one can define formula_224 for example, formula_225 therefore formula_226 Defining formula_227 (a limit ordinal) gives formula_228 the desired property of being the smallest ordinal greater than all finite ordinal numbers. Further, formula_229, and so on.
Since formula_230 by the natural correspondence, one may define formula_14 as the set of all finite ordinals. That is, formula_232 Then, formula_214 is the set of all countable ordinals (all ordinals formula_234 with cardinality formula_235), the first uncountable ordinal. Since a set cannot contain itself, formula_214 must have a strictly larger cardinality: formula_237 Furthermore, formula_238 is the set of all ordinals with cardinality less than or equal to formula_239 and in general the successor cardinal formula_240 is the set of all ordinals with cardinality up to formula_241. By the well-ordering theorem, there cannot exist any set with cardinality between formula_14 and formula_239 and every infinite set has some cardinality corresponding to some aleph formula_244 for some ordinal formula_245
Cardinal arithmetic.
Basic arithmetic can be done on cardinal numbers in a very natural way, by extending the finite combinatorial principles above. The intuitive principle is that, if formula_1 and formula_27 are disjoint, then adding their sizes amounts to taking their union, written as formula_204. Thus, if formula_1 and formula_27 are infinite, cardinal addition is defined as formula_251 where formula_252 denotes disjoint union. Similarly, the multiplication of two sets is intuitively the number of ways to pair their elements (as in the multiplication principle), therefore cardinal multiplication is defined as formula_253, where formula_254 denotes the Cartesian product. These definitions can be shown to satisfy the basic properties of standard arithmetic, such as commutativity, associativity, and distributivity.
Cardinal exponentiation formula_260 is defined via set exponentiation, the set of all functions formula_261, that is, formula_262 For finite sets this can be shown to coincide with standard natural number exponentiation, but includes as a corollary that zero to the power of zero is one formula_263 since there is exactly one function from the empty set to itself: the empty function. A combinatorial argument can be used to show formula_264. In general, cardinal exponentiation is not as well-behaved as cardinal addition and multiplication. For example, even though it can be proven that the expression formula_265 does indeed correspond to some aleph, it is unprovable from standard set theories which aleph it corresponds to. Further properties of exponentiation, such as the familiar laws of exponents, can be shown by currying.
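The definition of exponentiation via sets of functions can be checked directly for finite sets. The following sketch, with arbitrarily chosen example sets, enumerates every function from one finite set to another and also exhibits the single empty function.

```python
from itertools import product

def all_functions(domain, codomain):
    """Enumerate every function from `domain` to `codomain` as a dict;
    there are |codomain| ** |domain| of them."""
    domain = list(domain)
    for values in product(codomain, repeat=len(domain)):
        yield dict(zip(domain, values))

A, B = {0, 1, 2}, {'x', 'y'}
print(len(list(all_functions(B, A))))      # 9 = 3 ** 2, i.e. |A| ** |B|
print(list(all_functions(set(), set())))   # [{}]: the empty function, so 0 ** 0 = 1
```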
Cardinality of the continuum.
The number line is a geometric construct of the intuitive notions of "space" and "distance" wherein each point corresponds to a distinct quantity or position along a continuous path. The terms "continuum" and "continuous" refer to the totality of this line, having some space (other points) between any two points on the line (dense and Archimedean) and the absence of any gaps (completeness). This intuitive construct is formalized by the set of real numbers formula_137 which model the continuum as a complete, densely ordered, uncountable set.
The cardinality of the continuum, denoted by "formula_270" (a lowercase fraktur script "c"), remains invariant under various transformations and mappings, many of them surprising. For example, all intervals on the real line, e.g. formula_18 and formula_272, have the same cardinality as the entire set formula_171. First, formula_274 is a bijection from formula_18 to formula_272. Then, the tangent function gives a bijection from the interval formula_277 to the whole real line. A more surprising example is the Cantor set, which is defined as follows: take the interval formula_18 and remove the middle third formula_279, then remove the middle third of each of the two remaining segments, and continue removing middle thirds (see image). The Cantor set is the set of points that survive this process. The set that remains consists of exactly those points whose expansion in ternary (base 3) can be written without the digit 1. Reinterpreting these ternary expansions as binary (e.g. by replacing each digit 2 with a 1) gives a bijection between the Cantor set and the interval formula_18.
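The digit-rewriting map from the Cantor set onto the unit interval can be sketched for points given by finite ternary expansions (the full bijection needs extra care for expansions ending in repeating digits); the example point below is chosen arbitrarily.

```python
def cantor_point_to_unit_interval(ternary_digits: str) -> float:
    """Map a Cantor-set point, given by a ternary expansion using only the
    digits 0 and 2, into [0, 1] by reading each 2 as a binary 1."""
    assert set(ternary_digits) <= {'0', '2'}
    binary = ternary_digits.replace('2', '1')
    return sum(int(b) / 2 ** (i + 1) for i, b in enumerate(binary))

print(cantor_point_to_unit_interval('202'))   # 0.625, i.e. 0.101 in binary
```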
Space-filling curves are continuous surjective maps from the unit interval formula_18 onto the unit square in formula_282, with classical examples such as the Peano curve and the Hilbert curve. Although such maps are not injective, they are surjective, and thus suffice to demonstrate cardinal equivalence. They can be reused in each dimension to show that formula_283 for any dimension formula_284 The infinite Cartesian product formula_285 can also be shown to have cardinality formula_270. This can be established by cardinal exponentiation: formula_287. Thus, the real numbers, all finite-dimensional real spaces, and the countably infinite Cartesian product share the same cardinality.
As shown above, the set of real numbers is strictly larger than the set of natural numbers. Specifically, formula_288. The Continuum Hypothesis (CH) asserts that the real numbers have the next largest cardinality after the natural numbers, that is, formula_16. As shown by Gödel and Cohen, the continuum hypothesis is independent of ZFC, a standard axiomatization of set theory; that is, it is impossible to prove the continuum hypothesis or its negation from ZFC, provided that ZFC is consistent. The Generalized Continuum Hypothesis (GCH) extends this to all infinite cardinals, stating that formula_290 for every ordinal formula_234. Without GCH, the cardinality of formula_171 cannot be written in terms of specific alephs. The Beth numbers provide a concise notation for the cardinalities obtained by repeatedly taking power sets, starting from formula_293, then formula_294, and formula_295, and in general formula_296 and formula_297 if formula_298 is a limit ordinal.
Paradoxes.
During the rise of set theory, several paradoxes arose (see: "Paradoxes of set theory"). These can be divided into two kinds: "real paradoxes" and "apparent paradoxes". Apparent paradoxes are those which follow a series of reasonable steps and arrive at a conclusion which seems impossible or incorrect according to one's intuition, but are not necessarily logically impossible. Two historical examples, "Galileo's Paradox" and "Aristotle's Wheel", were given above. Real paradoxes are those which, through reasonable steps, prove a logical contradiction. The real paradoxes here apply to naive set theory or otherwise informal statements, and have been resolved by restating the problem in terms of a formalized set theory, such as Zermelo–Fraenkel set theory.
Apparent paradoxes.
Hilbert's hotel.
Hilbert's Hotel is a thought experiment devised by the German mathematician David Hilbert to illustrate a counterintuitive property of countably infinite sets: they can have the same cardinality as a proper subset of themselves. The scenario begins by imagining a hotel with an infinite number of rooms, all of which are occupied. But then a new guest walks in asking for a room. The hotel accommodates by moving the occupant of room 1 to room 2, the occupant of room 2 to room 3, room 3 to room 4, and in general, room n to room n+1. Then every guest still has a room, but room 1 opens up for the new guest.
Then, the scenario continues by imagining an infinite bus of new guests seeking a room. The hotel accommodates by moving the person in room 1 to room 2, room 2 to room 4, and in general, room n to room 2n. Thus, all the even-numbered rooms are occupied and all the odd-numbered rooms are vacant, leaving room for the infinite bus of new guests. The scenario continues by assuming an infinite number of these infinite buses arrive at the hotel, and showing that the hotel is still able to accommodate them. Finally, a bus arrives with a seat for every real number, and the hotel is no longer able to accommodate all of its passengers.
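The room reassignments in the first two scenarios are simple arithmetic rules, sketched below on a finite glimpse of the hotel.

```python
def one_new_guest(room):
    """Shift every current guest up one room, freeing room 1."""
    return room + 1

def one_infinite_bus(room):
    """Send every current guest to twice their room number, freeing the odd rooms."""
    return 2 * room

rooms = list(range(1, 11))                    # a finite glimpse of the infinite hotel
print([one_new_guest(r) for r in rooms])      # [2, 3, ..., 11]: room 1 is now free
print([one_infinite_bus(r) for r in rooms])   # [2, 4, ..., 20]: every odd room is now free
```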
Skolem's paradox.
In model theory, a model corresponds to a specific interpretation of a formal language or theory. It consists of a domain (a set of objects) and an interpretation of the symbols and formulas in the language, such that the axioms of the theory are satisfied within this structure. The Löwenheim–Skolem theorem shows that any model of set theory in first-order logic, if it is consistent, has an equivalent model which is countable. This appears contradictory, because Georg Cantor proved that there exist sets which are not countable. Thus the seeming contradiction is that a model that is itself countable, and which therefore contains only countable sets, satisfies the first-order sentence that intuitively states "there are uncountable sets".
A mathematical explanation of the paradox, showing that it is not a true contradiction in mathematics, was first given in 1922 by Thoralf Skolem. He explained that the countability of a set is not absolute, but relative to the model in which the cardinality is measured. Skolem's work was harshly received by Ernst Zermelo, who argued against the limitations of first-order logic and Skolem's notion of "relativity", but the result quickly came to be accepted by the mathematical community.
Real paradoxes.
Cantor's paradox.
Cantor's theorem states that, for any set formula_44, possibly infinite, its powerset formula_178 has a strictly greater cardinality. For example, this means there is no bijection from formula_141 to formula_302 Cantor's paradox is a paradox in naive set theory, which shows that there cannot exist a "set of all sets" or "universe set". It starts by assuming there is some set of all sets, formula_303 then it must be that formula_304 is strictly smaller than formula_305 thus formula_306 But since formula_304 contains all sets, we must have that formula_308 and thus formula_309 Therefore formula_310 contradicting Cantor's theorem. This was one of the original paradoxes that added to the need for a formalized set theory to avoid such contradictions. This paradox is usually resolved in formal set theories by disallowing unrestricted comprehension and the existence of a universe set.
Set of all cardinal numbers.
Similar to Cantor's paradox, the paradox of the set of all cardinal numbers is a result of unrestricted comprehension. It often uses the definition of cardinal numbers via ordinal representatives. It is related to the Burali-Forti paradox. It begins by assuming there is some set formula_311 Then, if there is some largest element formula_312 then the powerset formula_313 is strictly greater, and thus not in formula_314 Conversely, if there is no largest element, then the union formula_315 contains the elements of all elements of formula_316 and is therefore greater than or equal to each element. Since there is no largest element in formula_316 for any element formula_318 there is another element formula_319 such that formula_320 and formula_321 Thus, for any formula_318 formula_323 and so formula_324
Alternative and additional axioms.
Without the axiom of choice.
The Axiom of Choice (AC) is a controversial principle in the foundations of mathematics. Roughly, it states that given any collection of non-empty sets, it is possible to construct a new set by choosing one element from each set, even if the collection is infinite. In many cases, such a choice set can be constructed without invoking the axiom of choice; in general, however, this is not possible, and the axiom must be invoked.
If the Axiom of Choice is rejected, several consequences follow for the theory of cardinality. For example, without it, two cardinalities need not be comparable, since the statement that any two cardinals are comparable is equivalent to the axiom of choice (see above).
Proper classes.
Some set theories allow for proper classes, which are, roughly, collections "too large" to form sets. For example, the Universe of all sets, the class of all cardinal numbers, or the class of all ordinal numbers. Such set theories include Von Neumann–Bernays–Gödel set theory, and Morse–Kelley set theory. In such set theories, it is perfectly fine to define cardinal numbers as classes of equinumerous sets, as done by Frege and Russell, mentioned above. Some authors find this definition more elegant than assigning representatives, as it more accurately describes the concept by "definition by abstraction".
Proper classes, too, can technically be assigned cardinalities. The first to distinguish between sets and classes was John von Neumann, who defined classes as, roughly, collections "too large" to form sets. More precisely, he showed that a collection of objects is a proper class if and only if it can be put in one-to-one correspondence with the whole Universe of sets (cf. "Axiom of limitation of size"). Thus, all proper classes have the same "size". Before von Neumann, Cantor had called this size the "Absolute infinite", denoted with the Greek letter formula_335, and associated the concept with God.
|
6176
|
29463730
|
https://en.wikipedia.org/wiki?curid=6176
|
Cecil B. DeMille
|
Cecil Blount DeMille (; August 12, 1881January 21, 1959) was an American filmmaker and actor. Between 1914 and 1958, he made 70 features, both silent and sound films. He is acknowledged as a founding father of American cinema and the most commercially successful producer-director in film history, with many films dominating the box office three or four at a time. His films were distinguished by their epic scale and by his cinematic showmanship. His silent films included social dramas, comedies, Westerns, farces, morality plays, and historical pageants. He was an active Freemason and member of Prince of Orange Lodge #16 in New York City.
DeMille was born in Ashfield, Massachusetts, where his parents were vacationing for the summer. He grew up in New York City. He began his career as a stage actor in 1900. He later began to write and direct stage plays, a few with his older brother William de Mille, and some with Jesse L. Lasky, who was then a vaudeville producer.
DeMille's first film, "The Squaw Man" (1914), was the first full-length feature film shot in Hollywood. Its interracial love story was commercially successful, and the film marked Hollywood as the new home of the U.S. film industry. It had previously been based in New York and New Jersey. Based on continued film successes, DeMille founded Famous Players Lasky which was later reverse merged into Paramount Pictures with Lasky and Adolph Zukor. His first biblical epic, "The Ten Commandments" (1923), was both a critical and commercial success; it held the Paramount revenue record for 25 years.
DeMille directed "The King of Kings" (1927), a biography of Jesus, which gained approval for its sensitivity and reached more than 800 million viewers. "The Sign of the Cross" (1932) is said to be the first sound film to integrate all aspects of cinematic technique. "Cleopatra" (1934) was his first film to be nominated for the Academy Award for Best Picture.
After more than 30 years in film production, DeMille reached a pinnacle in his career with "Samson and Delilah" (1949), a biblical epic that became the highest-grossing film of 1950. Along with biblical and historical narratives, he also directed films oriented toward "neo-naturalism", which tried to portray the laws of man fighting the forces of nature.
DeMille received his first nomination for the Academy Award for Best Director for his circus drama "The Greatest Show on Earth" (1952), which won both the Academy Award for Best Picture and the Golden Globe Award for Best Motion Picture – Drama. His last and best-known film, "The Ten Commandments" (1956), was also a Best Picture Academy Award nominee and is the eighth-highest-grossing film of all time, adjusted for inflation.
In addition to his Best Picture Awards, DeMille received an Academy Honorary Award for his film contributions, the Palme d'Or (posthumously) for "Union Pacific" (1939), a DGA Award for Lifetime Achievement, and the Irving G. Thalberg Memorial Award. He was the first recipient of the Golden Globe Cecil B. DeMille Award, which was named in his honor. DeMille's reputation had a renaissance in the 2010s, and his work has influenced numerous other films and directors.
Biography.
1881–1899: early years.
Cecil Blount DeMille was of paternal Dutch ancestry. His surname was spelled de Mil before his grandfather William added an "le" for "visual symmetry".
As an adult, Cecil De Mille adopted the spelling "DeMille" because he believed it would look better on a marquee, but continued to use "de Mille" in private life. The family name "de Mille" was used by his children Cecilia, John, Richard, and Katherine. Cecil's brother, William, and his daughters, Margaret and Agnes, as well as DeMille's granddaughter, Cecilia de Mille Presley, also used the "de Mille" spelling.
DeMille was born on August 12, 1881, in a boarding house on Main Street in Ashfield, Massachusetts, where his parents had been vacationing for the summer. On September 1, 1881, the family returned with the newborn DeMille to their flat in New York. DeMille was named after his grandmothers Cecelia Wolff and Margarete Blount. He was the second of three children of Henry Churchill de Mille (September 4, 1853 – February 10, 1893) and his wife, Matilda Beatrice deMille (née Samuel; January 30, 1853 – October 8, 1923), known as Beatrice. His older brother, William C. deMille, was born on July 25, 1878.
Henry de Mille, whose ancestors were of English and Dutch-Belgian descent, was a North Carolina-born dramatist, actor, and lay reader in the Episcopal Church. In New York, Henry also taught English at Columbia College (now Columbia University). He worked as a playwright, administrator, and faculty member during the early years of the American Academy of Dramatic Arts, established in New York City in 1884. Henry de Mille frequently collaborated with David Belasco in playwriting; their best-known collaborations included "The Wife", "Lord Chumley", "The Charity Ball", and "Men and Women".
Cecil B. DeMille's mother, Beatrice, a literary agent and scriptwriter, was the daughter of German Jews. She had emigrated from England with her parents in 1871 when she was 18; the newly arrived family settled in Brooklyn, New York, where they maintained a middle-class, English-speaking household.
DeMille's parents met as members of a music and literary society in New York. Henry was a tall, red-headed student. Beatrice was intelligent, educated, forthright, and strong-willed. They married on July 1, 1876, despite Beatrice's parents' objections because of the young couple's differing religions; Beatrice converted to Episcopalianism.
DeMille was a brave and confident child. He gained his love of theater while watching his father and Belasco rehearse their plays. A lasting memory for DeMille was a lunch with his father and actor Edwin Booth. As a child, DeMille created an alter ego, Champion Driver, a Robin Hood-like character, evidence of his creativity and imagination.
His father and his family had lived in Washington, North Carolina, until Henry built a three-story Victorian-style house for his family in Pompton Lakes, New Jersey; they named this estate "Pamlico". John Philip Sousa was a friend of the family, and DeMille recalled throwing mud balls in the air so neighbor Annie Oakley could practice her shooting. DeMille's sister, Agnes, was born on April 23, 1891; his mother nearly did not survive the birth. Agnes died on February 11, 1894, from spinal meningitis.
DeMille's parents operated a private school in Pompton Lakes and attended Christ Episcopal Church. DeMille recalled that this church was the place where he visualized the story of his 1923 version of "The Ten Commandments".
On January 8, 1893, at age 40, Henry de Mille died suddenly from typhoid fever, leaving Beatrice with three children. To provide for her family, she opened the Henry C. de Mille School for Girls in her home in February 1893. The aim of the school was to teach young women to properly understand and fulfill the women's duty to themselves, their home, and their country. Beatrice had "enthusiastically supported" Henry's theatrical aspirations. She later became the second female play broker on Broadway. On Henry's deathbed, he told his wife that he did not want his sons to become playwrights. DeMille's mother sent him to Pennsylvania Military College (now Widener University) in Chester, Pennsylvania, at age 15. He fled the school to join the Spanish–American War, but failed to meet the age requirement. At the military college, even though his grades were average, he reportedly excelled in personal conduct.
DeMille attended the American Academy of Dramatic Arts (tuition-free due to his father's service to the academy). He graduated in 1900; his graduation performance was in the play "The Arcady Trail". In the audience was Charles Frohman, who cast DeMille in his play "Hearts are Trumps", DeMille's Broadway debut.
1900–1912: theater.
Charles Frohman, Constance Adams, and David Belasco.
Cecil B. DeMille began his career as an actor on stage in 1900 in the theatrical company of Charles Frohman. He debuted on February 21, 1900, in the play "Hearts Are Trumps" at New York's Garden Theater. In 1901, DeMille starred in productions of "A Repentance", "To Have and to Hold", and "Are You a Mason?" At age 21, he married Constance Adams on August 16, 1902, at Adams's father's home in East Orange, New Jersey. The wedding party was small. Beatrice DeMille's family did not attend. Simon Louvish suggests that this was to conceal DeMille's partial Jewish heritage. Adams was 29 years old at the time of the marriage. They had met in a theater in Washington D.C. while they were both acting in "Hearts Are Trumps".
They were sexually incompatible; according to DeMille, Adams was too "pure" to "feel such violent and evil passions" as he. DeMille had more violent sexual preferences and fetishes than his wife. Adams allowed DeMille to have several long-term mistresses during their marriage as an outlet while maintaining an appearance of a faithful marriage. One of DeMille's affairs was with his screenwriter Jeanie MacPherson. Despite his reputation for extramarital affairs, DeMille did not like to have affairs with his stars, as he believed it would cause him to lose control as a director. He once said he maintained his self-control when Gloria Swanson sat on his lap, and refused to touch her.
In 1902, he played a small part in "Hamlet". Publicists wrote that he became an actor in order to learn how to direct and produce, but DeMille admitted that he became an actor in order to pay the bills. From 1904 to 1905, he attempted to make a living as a stock theater actor with his wife, Constance. DeMille made a 1905 reprise in "Hamlet" as Osric. In the summer of 1905, DeMille joined the stock cast at the Elitch Theatre in Denver, Colorado. He appeared in 11 of the 15 plays presented that season, all in minor roles. Maude Fealy was the featured actress in several productions that summer and developed a lasting friendship with DeMille. (He later cast her in "The Ten Commandments".)
His brother, William, was establishing himself as a playwright and sometimes invited DeMille to collaborate. DeMille and William collaborated on "The Genius", "The Royal Mounted", and "After Five". None of these was very successful. William de Mille was most successful when he worked alone.
DeMille and his brother at times worked with the legendary impresario David Belasco, who had been a friend and collaborator of their father. DeMille later adapted Belasco's "The Girl of the Golden West", "Rose of the Rancho", and "The Warrens of Virginia" into films. He was credited with the conception of Belasco's "The Return of Peter Grimm". "The Return of Peter Grimm" sparked controversy, because Belasco had taken DeMille's unnamed screenplay, changed the characters, and named it "The Return of Peter Grimm", producing and presenting it as his own work. DeMille was credited in small print as "based on an idea by Cecil DeMille". The play was successful, and DeMille was distraught that his childhood idol had plagiarized his work.
Losing interest in theater.
DeMille performed on stage with actors he later directed in films: Charlotte Walker, Mary Pickford, and Pedro de Cordoba. He also produced and directed plays. His 1905 performance in "The Prince Chap" as the Earl of Huntington was well received by audiences.
DeMille wrote a few of his own plays in between stage performances, but his playwriting was less successful. His first play was "The Pretender-A Play in a Prologue and 4 Acts" set in 17th-century Russia. Another unperformed play he wrote was "Son of the Winds", a mythological Native American story. Life was difficult for DeMille and his wife as traveling actors, but travel allowed him to experience parts of the United States he had not yet seen. DeMille sometimes worked with the director E. H. Sothern, who influenced DeMille's later perfectionism. In 1907, due to a scandal with one of Beatrice's students, Evelyn Nesbit, the Henry de Mille School lost students. The school closed, and Beatrice filed for bankruptcy. DeMille wrote another play originally called "Sergeant Devil May Care" and renamed "The Royal Mounted". He also toured with the Standard Opera Company, but there are few records of his singing ability.
On November 5, 1908, Constance and DeMille had a daughter, Cecilia, their only biological child. In the 1910s, DeMille began directing and producing other writers' plays.
DeMille was poor and struggled to find work. Consequently, his mother hired him for her agency, The DeMille Play Company, and taught him how to be an agent and a playwright. He became the agency's manager and later a junior partner with his mother. In 1911, DeMille became acquainted with vaudeville producer Jesse Lasky when Lasky was searching for a writer for his new musical. Lasky initially sought out William deMille. William had been a successful playwright, but DeMille was suffering from the failure of his plays "The Royal Mounted" and "The Genius".
Beatrice introduced Lasky to Cecil DeMille instead. The collaboration of DeMille and Lasky produced a successful musical, "California", which opened in New York in January 1912. Another DeMille-Lasky production that opened in January 1912 was "The Antique Girl". In the spring of 1913, DeMille found success producing "Reckless Age" by Lee Wilson, a play about a high-society girl wrongly accused of manslaughter, starring Frederick Burton and Sydney Shields. But changes in the theater rendered DeMille's melodramas obsolete before they were produced, and true theatrical success eluded him. He produced many flops. Having lost interest in working in theater, DeMille found his passion ignited by film when he watched the 1912 French film "Les Amours de la reine Élisabeth".
1913–1914: entering films.
Desiring a change of scene, DeMille, Lasky, Sam Goldfish (later Samuel Goldwyn), and a group of East Coast businessmen created the Jesse L. Lasky Feature Play Company in 1913, of which DeMille became director-general. Lasky and DeMille were said to have sketched out the organization of the company on the back of a restaurant menu. As director-general, DeMille's job was to make the films. In addition to directing, he was the supervisor and consultant for the first year of films the company made. Sometimes, he directed scenes for other directors at the company in order to release films on time. Moreover, he co-authored other Lasky Company scripts and created screen adaptations that others directed.
The Lasky Play Company tried to recruit William de Mille, but he rejected the offer because he did not believe there was any promise in a film career. When William found out that DeMille had begun working in the motion picture industry, he wrote his brother a letter, saying that he was disappointed that Cecil was willing "to throw away [his] future" when he was "born and raised in the finest traditions of the theater".
The Lasky Company wanted to attract high-class audiences to their films, so it began producing films from literary works. The company bought the rights to Edwin Milton Royle's play "The Squaw Man" and cast Dustin Farnum in the lead role. It offered Farnum a choice between a quarter stock in the company or $250 in weekly salary. Farnum chose the salary. Already $15,000 in debt to Royle for the screenplay of "The Squaw Man", Lasky's relatives bought the $5,000 stock to save the Lasky Company from bankruptcy. With no knowledge of filmmaking, DeMille was introduced to observe the process at film studios. He was eventually introduced to Oscar Apfel, a stage director who had been a director with the Edison Company.
On December 12, 1913, DeMille, his cast, and crew boarded a Southern Pacific train bound for Flagstaff via New Orleans. His tentative plan was to shoot a film in Arizona, but he felt that Arizona lacked the Western look they were searching for. They also learned that other filmmakers were successfully shooting in Los Angeles, even in winter. He continued to Los Angeles. Once there, he chose not to shoot in Edendale, where many studios were, but in Hollywood. DeMille rented a barn to function as their film studio. Filming began on December 29, 1913, and lasted three weeks. Apfel filmed most of "The Squaw Man" due to DeMille's inexperience, but DeMille learned quickly and was particularly adept at impromptu screenwriting as necessary. He made his first film run 60 minutes, as long as a short play. "The Squaw Man" (1914), co-directed by Apfel, was a sensation, and it established the Lasky Company. It was the first feature-length film made in Hollywood. There were problems with the perforation of the film stock, and it was discovered that DeMille had brought a cheap British film perforator that punched 65 holes per foot instead of the industry standard of 64. Lasky and DeMille convinced film pioneer Siegmund Lubin of the Lubin Manufacturing Company to have his experienced technicians reperforate the film.
This was the first American feature film, according to its release date. D. W. Griffith's "Judith of Bethulia" was filmed earlier than "The Squaw Man", but released later. This was the only film in which DeMille shared director's credit with Apfel.
"The Squaw Man" was a success, which led to the eventual founding of Paramount Pictures and Hollywood becoming the "film capital of the world". The film grossed more than ten times its budget after its New York premiere in February 1914. DeMille's next project was to aid Apfel in directing "Brewster's Millions", which was wildly successful. In December 1914, Constance Adams brought home John DeMille, a 15-month-old boy, whom the couple legally adopted three years later. Biographer Scott Eyman suggested that she may have decided to adopt after recently having had a miscarriage.
1915–1928: silent era.
Westerns, Paradise, and World War I.
Cecil B. DeMille's second film, credited exclusively to him, was "The Virginian". It is the earliest of DeMille's films available in a quality, color-tinted video format, but that version is actually a 1918 rerelease. The Lasky Company's first few years were spent making films nonstop. DeMille directed 20 films by 1915. The most successful films during this period were "Brewster's Millions" (co-directed by DeMille), "Rose of the Rancho", and "The Ghost Breaker". DeMille adapted Belasco's dramatic lighting techniques to film technology, mimicking moonlight with U.S. cinema's first attempts at "motivated lighting" in "The Warrens of Virginia". This was the first of a few film collaborations with his brother William. They struggled to adapt the play from the stage to the set. After the film was shown, viewers complained that the shadows and lighting prevented the audience from seeing the actors' full faces and said they would pay only half price. Sam Goldwyn suggested that if they called it "Rembrandt" lighting, the audience would pay double the price. Additionally, because of DeMille's cordiality after the "Peter Grimm" incident, DeMille was able to rekindle his partnership with Belasco. He adapted several of Belasco's screenplays into film.
DeMille's most successful film was "The Cheat"; his direction in the film was acclaimed. In 1916, exhausted from three years of nonstop filmmaking, DeMille purchased land in the Angeles National Forest for a ranch that would become his getaway. He called this place "Paradise", declaring it a wildlife sanctuary; no shooting of animals besides snakes was allowed. His wife did not like Paradise, so DeMille often brought his mistresses there with him, including actress Julia Faye. In 1921, DeMille purchased a yacht he called "The Seaward".
While filming "The Captive" in 1915, an extra, Charles Chandler, died on set when another extra failed to heed DeMille's orders to unload all guns for rehearsal. DeMille instructed the guilty man to leave town and never revealed his name. Lasky and DeMille maintained Chandler's widow on the payroll and, according to leading actor House Peters Sr., DeMille refused to stop production for Chandler's funeral. Peters said that he encouraged the cast to attend the funeral with him anyway since DeMille would not be able to shoot the film without him. On July 19, 1916, the Jesse Lasky Feature Play Company merged with Adolph Zukor's Famous Players Film Company, becoming Famous Players–Lasky. Zukor became president, Lasky vice president, DeMille director-general, and Goldwyn chairman of the board. Famous Players–Lasky later fired Goldwyn for frequent clashes with Lasky, DeMille, and Zukor. While on a European vacation in 1921, DeMille contracted rheumatic fever in Paris. He was confined to bed and unable to eat. His poor physical condition upon his return home affected the production of his 1922 film "Manslaughter". According to Richard Birchard, DeMille's weakened state during production may have led to the film being received as uncharacteristically substandard.
During World War I, the Famous Players–Lasky organized a military company underneath the National Guard, the Home Guard, made up of film studio employees, with DeMille as captain. Eventually, the Guard was enlarged to a battalion and recruited soldiers from other film studios. They took time off weekly to practice military drills. Additionally, during the war, DeMille volunteered for the Justice Department's Intelligence Office, investigating friends, neighbors, and others he came in contact with in connection with the Famous Players–Lasky. He also volunteered for the Intelligence Office during World War II. DeMille considered enlisting in World War I, but stayed in the U.S. and made films. He did take a few months to set up a movie theater for the French front. Famous Players–Lasky donated the films. DeMille and Adams adopted Katherine Lester in 1920, whom Adams had found in the orphanage she directed. In 1922, the couple adopted Richard deMille.
Scandalous dramas, Biblical epics, and departure from Paramount.
Film started becoming more sophisticated and the Lasky company's subsequent films were criticized for primitive and unrealistic set design. Consequently, Beatrice deMille introduced the Famous Players–Lasky to Wilfred Buckland, whom DeMille knew from his time at the American Academy of Dramatic Arts, and he became DeMille's art director. William deMille reluctantly became a story editor. William later converted from theater to Hollywood and spent the rest of his career as a film director. DeMille frequently remade his own films. In 1917, he remade "The Squaw Man" (1918), only four years after the original. Despite its quick turnaround, the film was fairly successful. DeMille's second remake at MGM in 1931 was a failure.
After five years and 30 hit films, DeMille became the American film industry's most successful director. In the silent era, he was renowned for "Male and Female" (1919), "Manslaughter" (1922), "The Volga Boatman" (1926), and "The Godless Girl" (1928). His trademark scenes included bathtubs, lion attacks, and Roman orgies. Many of his films featured scenes in two-color Technicolor. In 1923, DeMille released the modern melodrama "The Ten Commandments", a significant change from his previous irreligious films. The film was produced on a budget of $600,000, Paramount's most expensive production. This concerned Paramount executives, but the film was the studio's highest-grossing film. It held the Paramount record for 25 years until DeMille broke the record again.
In the early 1920s, scandal surrounded Paramount; religious groups and the media opposed portrayals of immorality in films. A censorship code for the industry, the Hays Code, was established. DeMille's film "The Affairs of Anatol" came under fire. Furthermore, DeMille argued with Zukor over his extravagant and over-budget production costs. Consequently, DeMille left Paramount in 1924 despite having helped establish it. He joined the Producers Distributing Corporation. His first film in the new production company, DeMille Pictures Corporation, was "The Road to Yesterday" in 1925. He directed and produced four films on his own, working with Producers Distributing Corporation because he found front office supervision too restricting. Aside from "The King of Kings," none of DeMille's films away from Paramount were successful. "The King of Kings" established DeMille as "master of the grandiose and of biblical sagas". Considered at the time the most successful Christian film of the silent era, it was calculated by DeMille to have been viewed over 800 million times around the world. After the release of DeMille's "The Godless Girl", silent films in America became obsolete, and DeMille was forced to shoot a shoddy final reel with the new sound production technique. Although this final reel looked so different from the first 11 reels that it appeared to be from another movie, according to Simon Louvish, the film is one of DeMille's strangest and most "DeMillean" films.
The immense popularity of DeMille's silent films enabled him to branch out into other areas. The Roaring Twenties were the boom years and DeMille took full advantage, opening the Mercury Aviation Company, one of America's first commercial airlines. He was also a real estate speculator, an underwriter of political campaigns, vice president of Bank of America, and vice president of the Commercial National Trust and Savings Bank in Los Angeles, where he approved loans for other filmmakers. In 1916, DeMille purchased a mansion in Hollywood. Charlie Chaplin lived next door for a time, and after he moved, DeMille purchased the other house and combined the estates.
1929–1956: sound era.
MGM and return to Paramount.
When "talking pictures" were invented in 1928, DeMille made a successful transition, offering his own innovations to the painful process; he devised a microphone boom and a soundproof camera blimp. He also popularized the camera crane. His first three sound films, "Dynamite", "Madame Satan", and his 1931 remake of "The Squaw Man", were produced at Metro-Goldwyn-Mayer. These films were critically and financially unsuccessful. He had completely adapted to the production of sound film despite the film's poor dialogue. After his contract ended at MGM, he left, but no production studios would hire him. He attempted to create a guild of a half a dozen directors with the same creative desires called the Director's Guild, but the idea failed due to lack of funding and commitment. Moreover, the Internal Revenue Service audited DeMille due to issues with his production company. This was, according to DeMille, the lowest point of his career. He traveled abroad to find employment until he was offered a deal at Paramount.
In 1932, DeMille returned to Paramount at Lasky's request, bringing with him his own production unit. His first film back at Paramount, "The Sign of the Cross", was also his first success since leaving Paramount besides "The King of Kings". Zukor approved DeMille's return on the condition that DeMille not exceed his production budget of $650,000 for "The Sign of the Cross". Produced in eight weeks without exceeding budget, the film was financially successful. "The Sign of the Cross" was the first film to integrate all cinematic techniques. The film was considered a "masterpiece" and surpassed the quality of other sound films of the time. DeMille followed this epic with two dramas released in 1933 and 1934, "This Day and Age" and "Four Frightened People". These were box-office disappointments, though "Four Frightened People" received good reviews. DeMille stuck to large-budget spectaculars for the rest of his career.
Politics and "Lux Radio Theatre".
DeMille was outspoken about his Episcopalian integrity, but his private life included mistresses and adultery. He was a conservative Republican activist, becoming more conservative as he aged. He was known as anti-union and worked to prevent the unionization of film production studios. According to DeMille himself, however, he was not anti-union and belonged to a few unions; he said he was instead against union leaders such as Walter Reuther and Harry Bridges, whom he compared to dictators. He supported Herbert Hoover, and in 1928 made his largest campaign donation to Hoover. DeMille nonetheless liked Franklin D. Roosevelt, finding him charismatic, tenacious, and intelligent, and agreeing with Roosevelt's abhorrence of Prohibition. DeMille lent Roosevelt a car for his 1932 United States presidential election campaign and voted for him. He never again voted for a Democratic candidate in a presidential election.
From June 1, 1936, until January 22, 1945, DeMille hosted and directed "Lux Radio Theatre", a weekly digest of current feature films. Broadcast on the Columbia Broadcasting System (CBS) from 1935 to 1954, "Lux Radio" was one of the most popular weekly shows in radio history. While DeMille was host, the show had 40 million weekly listeners and DeMille had an annual salary of $100,000. From 1936 to 1945, he produced, hosted, and directed every show, with the occasional exception of a guest director. He resigned from "Lux Radio" because he refused to pay a dollar to the American Federation of Radio Artists (AFRA), on the principle that no organization had the right to "levy a compulsory assessment upon any member".
DeMille sued the union for reinstatement but lost. He appealed to the California Supreme Court and lost again. When the AFRA expanded to television, DeMille was banned from television appearances. Consequently, he formed the DeMille Foundation for Political Freedom to campaign for the right to work, and he gave speeches across the nation for the next few years. DeMille's criticism was primarily aimed at closed shops but later extended to communism and unions in general. The U.S. Supreme Court declined to review his case, but DeMille lobbied for the Taft–Hartley Act, which passed. It prohibited denying anyone the right to work if they refused to pay a political assessment. The law did not apply retroactively, however, so DeMille's television and radio appearance ban lasted the rest of his life, though he was permitted to appear on radio or television to publicize a movie. William Keighley replaced him on "Lux Radio Theatre", and DeMille never worked in radio again.
Adventure films and dramatic spectacles.
In 1939, DeMille's "Union Pacific" was a success, aided by DeMille's collaboration with the Union Pacific Railroad, which gave him access to historical data, early-period trains, and expert crews, adding to the film's authenticity. During pre-production, DeMille was dealing with his first serious health issue: in March 1938, he underwent a major emergency prostatectomy. He suffered a post-surgery infection from which he nearly did not recover, citing streptomycin as his saving grace. The surgery caused him to suffer from sexual dysfunction for the rest of his life, according to some family members. After his surgery and the success of "Union Pacific", DeMille first used three-strip Technicolor in 1940, in "North West Mounted Police". DeMille wanted to film in Canada, but due to budget constraints, the film was instead shot in Oregon and Hollywood. Critics were impressed with the visuals but found the script dull, calling it DeMille's "poorest Western". Despite the criticism, it was Paramount's highest-grossing film of the year, and because audiences liked its highly saturated color, DeMille made no further black-and-white features. DeMille was anti-communist and in 1940 abandoned a project to film Ernest Hemingway's "For Whom the Bell Tolls" because of its communist themes, even though he had already paid $100,000 for the rights to the novel; he had been so eager to produce the film that he had not yet read it. He claimed he abandoned the project in order to complete a different one, but it was actually to preserve his reputation and avoid appearing reactionary. While continuing to make films, he served during World War II, at the age of 60, as his neighborhood's air-raid warden.
In 1942, DeMille worked with Jeanie MacPherson and William deMille to produce a film, "Queen of Queens", that was intended to be about Mary, mother of Jesus. After reading the screenplay, Daniel A. Lord warned DeMille that Catholics would find the film too irreverent while non-Catholics would consider it Catholic propaganda. Consequently, the film was never made. MacPherson worked as a scriptwriter on many of DeMille's films. In 1938, DeMille supervised the film compilation "Land of Liberty" as the American film industry's contribution to the 1939 New York World's Fair. He used clips from his own films in it. "Land of Liberty" was not high-grossing, but it was well-received, and DeMille was asked to shorten its running time to allow for more showings per day. MGM distributed the film in 1941 and donated profits to World War II relief charities.
In 1942, DeMille released Paramount's most successful film, "Reap the Wild Wind". It had a large budget and many special effects, including an electronically operated giant squid. Afterward, DeMille was the master of ceremonies at a rally organized by David O. Selznick in the Los Angeles Coliseum in support of the Dewey–Bricker presidential ticket as well as Governor Earl Warren of California. DeMille's 1947 film "Unconquered" had the longest running time (146 minutes), longest filming schedule (102 days), and largest budget ($5 million) of his films to that point. Its sets and effects were so realistic that 30 extras needed to be hospitalized after a scene with fireballs and flaming arrows. It was commercially very successful.
DeMille's next film, "Samson and Delilah" (1949), was Paramount's highest-grossing film up to that time. A Biblical epic infused with sex, it was a characteristic DeMille film. 1952's "The Greatest Show on Earth" then became Paramount's highest-grossing film and won the Academy Award for Best Picture and the Academy Award for Best Story. It began production in 1949; Ringling Brothers–Barnum and Bailey was paid $250,000 for use of the title and facilities, and DeMille toured with the circus while helping write the script. Noisy and bright, the film was not well liked by critics but was an audience favorite. In 1953, DeMille signed a contract with Prentice Hall to publish an autobiography. He reminisced into a voice recorder, the recording was transcribed, and the information was organized by topic; Art Arthur also interviewed people for the autobiography. DeMille did not like the autobiography's first draft, saying he thought the person portrayed in it was an egotistical "SOB". In the early 1950s, Allen Dulles and Frank Wisner recruited DeMille to serve on the board of the anti-communist National Committee for a Free Europe, the public face of the organization that oversaw Radio Free Europe. In 1954, Secretary of the Air Force Harold E. Talbott asked DeMille for help designing the cadet uniforms at the newly established United States Air Force Academy. DeMille's designs, most notably that of the cadet parade uniform, were praised by Air Force and Academy leadership, adopted, and are still worn.
Final works and unrealized projects.
In 1952, DeMille sought approval for a lavish remake of his 1923 silent film "The Ten Commandments". He went before the Paramount board of directors, which was mostly Jewish-American. The board rejected his proposal, even though his last two films, "Samson and Delilah" and "The Greatest Show on Earth", had been record-breaking hits. Adolph Zukor convinced the board to change its mind on the grounds of morality. DeMille did not have an exact budget proposal for the project, and it promised to be the most costly in U.S. film history; still, the board unanimously approved it. "The Ten Commandments", released in 1956, was DeMille's final film. It was the longest (3 hours, 39 minutes) and most expensive ($13 million) film in Paramount history. Production began in October 1954. The Exodus scene was filmed on site in Egypt with four Technicolor-VistaVision cameras filming 12,000 people. Filming continued in 1955 in Paris and Hollywood on 30 different sound stages, and production even expanded to RKO sound studios. Post-production lasted a year, and the film premiered in Salt Lake City. Nominated for an Academy Award for Best Picture, it grossed over $80 million, surpassing the gross of "The Greatest Show on Earth" and every other film in history except "Gone with the Wind". DeMille offered ten percent of his profit to the crew, a practice unique at the time.
On November 7, 1954, while in Egypt filming the Exodus sequence for "The Ten Commandments", DeMille, then 73, climbed a ladder to the top of the set and had a serious heart attack. Despite the urging of his associate producer, DeMille wanted to return to the set right away, and he developed a plan with his doctor that allowed him to continue directing while reducing his physical stress. DeMille completed the film, but his health was diminished by several more heart attacks; his daughter Cecilia took over as director while DeMille sat behind the camera with Loyal Griggs as the cinematographer.
Due to his frequent heart attacks, DeMille asked his son-in-law, actor Anthony Quinn, to direct a remake of his 1938 film "The Buccaneer". DeMille served as executive producer, overseeing producer Henry Wilcoxon. Despite a cast led by Charlton Heston and Yul Brynner, the 1958 film "The Buccaneer" was a disappointment. DeMille attended its Santa Barbara premiere in December 1958. He was unable to attend its Los Angeles premiere. In the months before his death, DeMille was researching a film biography of Robert Baden-Powell, the founder of the Scout Movement. DeMille asked David Niven to star in the film, but it was never made. DeMille also was planning a film about the space race and a biblical epic based on the Book of Revelation. His autobiography was mostly complete when he died, and was published in November 1959.
Death.
DeMille suffered a series of heart attacks from June 1958 to January 1959, and died on January 21, 1959, following an attack. His funeral was held on January 23 at St. Stephen's Episcopal Church. He was entombed at the Hollywood Memorial Cemetery (now known as Hollywood Forever). After his death, news outlets such as "The New York Times", the "Los Angeles Times", and "The Guardian" called DeMille a "pioneer of movies", "the greatest creator and showman of our industry", and "the founder of Hollywood". DeMille left his multi-million dollar estate in Los Feliz, Los Angeles, in Laughlin Park to his daughter Cecilia because his wife had dementia and was unable to care for an estate; his wife died a year later. His personal will drew a line between Cecilia and his three adopted children, with Cecilia receiving a majority of DeMille's inheritance and estate. The other three children were surprised by this, as DeMille had not treated them differently in life. Cecilia lived in the house until her death in 1984. The house was then auctioned by his granddaughter Cecilia DeMille Presley, who also lived there in the late 1980s.
Filmmaking.
Influences.
DeMille believed his first influences to be his parents, Henry and Beatrice DeMille. His playwright father introduced him to the theater at a young age. Henry was heavily influenced by the work of Charles Kingsley, whose ideas trickled down to DeMille. DeMille noted that his mother had a "high sense of the dramatic" and was determined to continue the artistic legacy of her husband after he died. Beatrice became a play broker and author's agent, influencing DeMille's early life and career. DeMille's father worked with David Belasco, a theatrical producer, impresario, and playwright. Belasco was known for adding realistic elements to his plays, such as real flowers, food, and aromas, that could transport his audiences into the scenes. While working in theatre, DeMille used real fruit trees in his play "California", influenced by Belasco. Like Belasco, DeMille's theatre revolved around entertainment rather than artistry. Generally, Belasco's influence on DeMille's career can be seen in DeMille's showmanship and narration. E. H. Sothern's early influence on DeMille's work can be seen in DeMille's perfectionism. DeMille recalled that one of the most influential plays he saw was "Hamlet", directed by Sothern.
Method.
DeMille's filmmaking process always began with extensive research. Next, he would work with writers to develop the story he envisioned, then help them construct a script. Finally, he would leave the script with artists and allow them to create artistic depictions and renderings of each scene. Plot and dialogue were not strong points of DeMille's films, so he focused his efforts on his films' visuals. He worked with visual technicians, editors, art directors, costume designers, cinematographers, and set carpenters in order to perfect the visual aspects of his films. With his editor, Anne Bauchens, DeMille used editing techniques that allowed the visual images, rather than dialogue, to bring the plot to its climax. DeMille held large and frequent office conferences to discuss and examine all aspects of the working film, including storyboards, props, and special effects.
DeMille rarely gave direction to actors; he preferred to "office-direct", where he would work with actors in his office, going over characters and reading through scripts. Any problems on the set were often fixed by writers in the office rather than on the set. DeMille did not believe a large movie set was the place to discuss minor character or line issues. DeMille was particularly adept at directing and managing large crowds in his films. Martin Scorsese recalled that DeMille had the skill to maintain control of not only the lead actors in a frame but the many extras in the frame as well. DeMille was adept at directing "thousands of extras", and many of his pictures include spectacular set pieces: the toppling of the pagan temple in "Samson and Delilah"; train wrecks in "The Road to Yesterday", "Union Pacific" and "The Greatest Show on Earth"; the destruction of an airship in "Madam Satan"; and the parting of the Red Sea in both versions of "The Ten Commandments".
In his early films, DeMille experimented with photographic light and shade, which created dramatic shadows instead of glare. His specific use of lighting, influenced by his mentor David Belasco, was for the purpose of creating "striking images" and heightening "dramatic situations". DeMille was unique in using this technique. In addition to his use of volatile and abrupt film editing, his lighting and composition were innovative for the time period as filmmakers were primarily concerned with a clear, realistic image. Another important aspect of DeMille's editing technique was to put the film away for a week or two after an initial edit in order to re-edit the picture with a fresh mind. This allowed for the rapid production of his films in the early years of the Lasky Company. The cuts were sometimes rough, but the movies were always interesting.
DeMille often edited in a manner that favored psychological space rather than physical space through his cuts. In this way, the characters' thoughts and desires are the visual focus rather than the circumstances of the physical scene. As DeMille's career progressed, he increasingly relied on artist Dan Sayre Groesbeck's concept, costume, and storyboard art. Groesbeck's art was circulated on set to give actors and crew members a better understanding of DeMille's vision, and it was even shown at Paramount meetings when pitching new films. DeMille adored Groesbeck's art, even hanging it above his fireplace, but film staff found it difficult to convert the art into three-dimensional sets. As DeMille continued to rely on Groesbeck, the nervous energy of his early films gave way to the steadier compositions of his later films, which, while visually appealing, made them appear more old-fashioned.
Composer Elmer Bernstein described DeMille as "sparing no effort" when filmmaking. Bernstein recalled that DeMille would scream, yell, or flatter, doing whatever it took to achieve the perfection he required in his films. DeMille was painstakingly attentive to details on set and was as critical of himself as he was of his crew. Costume designer Dorothy Jeakins, who worked with DeMille on "The Ten Commandments" (1956), said that he was skilled in humiliating people. Jeakins admitted that she received quality training from him, but that it was necessary to become a perfectionist on a DeMille set to avoid being fired. DeMille had an authoritarian persona on set; he required absolute attention from the cast and crew. He had a band of assistants who catered to his needs. He would speak to the entire set, sometimes enormous, with countless crew members and extras, via a microphone to maintain control. He was disliked by many inside and outside of the film industry for his cold and controlling reputation.
DeMille was known for autocratic behavior on the set, singling out and berating extras who were not paying attention. Many of these displays were thought to be staged, however, as an exercise in discipline. He despised actors who were unwilling to take physical risks, especially when he had first demonstrated that the required stunt would not harm them. This occurred with Victor Mature in "Samson and Delilah": Mature refused to wrestle Jackie the Lion, even though DeMille had just tussled with the lion to prove that it was tame, and DeMille told the actor that he was "one hundred percent yellow". Paulette Goddard's refusal to risk personal injury in a scene involving fire in "Unconquered" cost her DeMille's favor and a role in "The Greatest Show on Earth". DeMille did receive help in his films, notably from Alvin Wyckoff, who shot forty-three of DeMille's films; brother William deMille, who would occasionally serve as his screenwriter; Jeanie Macpherson, who served as DeMille's exclusive screenwriter for fifteen years; and Eddie Salven, DeMille's favorite assistant director.
DeMille made stars of unknown actors: Gloria Swanson, Bebe Daniels, Rod La Rocque, William Boyd, Claudette Colbert, and Charlton Heston. He also cast established stars such as Gary Cooper, Robert Preston, Paulette Goddard and Fredric March in multiple pictures. DeMille cast some of his performers repeatedly, including Henry Wilcoxon, Julia Faye, Joseph Schildkraut, Ian Keith, Charles Bickford, Theodore Roberts, Akim Tamiroff, and William Boyd. DeMille was credited by actor Edward G. Robinson with saving his career following his eclipse in the Hollywood blacklist.
Style and themes.
Cecil B. DeMille's film production career evolved from critically significant silent films to financially significant sound films. He began his career with reserved yet brilliant melodramas; from there, his style developed into marital comedies with outrageously melodramatic plots. In order to attract a high-class audience, DeMille based many of his early films on stage melodramas, novels, and short stories. He began producing epics early in his career, and they came to solidify his reputation in the 1920s. By 1930, DeMille had perfected his film style of mass-interest spectacle films with Western, Roman, or Biblical themes. DeMille was often criticized for making his spectacles too colorful and for being too occupied with entertaining the audience rather than exploring the artistic and auteur possibilities that film could provide. However, others interpreted DeMille's work as visually impressive, thrilling, and nostalgic. Along the same lines, critics of DeMille often judge him by his later spectacles and fail to consider the several decades of ingenuity and energy that defined him during his generation. Throughout his career, he did not alter his films to better adhere to contemporary or popular styles. Actor Charlton Heston admitted DeMille was "terribly unfashionable", and Sidney Lumet called DeMille "the cheap version of D. W. Griffith", adding that DeMille "[didn't have]...an original thought in his head", though Heston added that DeMille was much more than that.
According to Scott Eyman, DeMille's films were at the same time masculine and feminine due to his thematic adventurousness and his eye for the extravagant. DeMille's distinctive style can be seen through camera and lighting effects as early as "The Squaw Man" with the use of daydream images; moonlight and sunset on a mountain; and side-lighting through a tent flap. In the early age of cinema, DeMille differentiated the Lasky Company from other production companies due to the use of dramatic, low-key lighting they called "Lasky lighting" and marketed as "Rembrandt lighting" to appeal to the public. DeMille achieved international recognition for his unique use of lighting and color tint in his film "The Cheat". DeMille's 1956 version of "The Ten Commandments", according to director Martin Scorsese, is renowned for its level of production and the care and detail that went into creating the film. He stated that "The Ten Commandments" was the final culmination of DeMille's style.
DeMille was interested in art, and his favorite artist was Gustave Doré; DeMille based some of his most well-known scenes on the work of Doré. DeMille was the first director to connect art to filmmaking; he created the title of "art director" on the film set. DeMille was also known for creating special effects without digital technology. Notably, DeMille had cinematographer John P. Fulton create the parting of the Red Sea scene in his 1956 film "The Ten Commandments", which was one of the most expensive special effects in film history, and has been called by Steven Spielberg "the greatest special effect in film history". The actual parting of the sea was created by releasing 360,000 gallons of water into a huge water tank split by a U-shaped trough, overlaying it with footage of a giant waterfall built on the Paramount backlot, and playing the clip backward.
Aside from his Biblical and historical epics, which are concerned with how man relates to God, some of DeMille's films contained themes of "neo-naturalism", which portray the conflict between the laws of man and the laws of nature. Although he is known for his later "spectacular" films, his early films are held in high regard by critics and film historians. DeMille discovered the possibilities of the "bathroom" or "boudoir" in film without being "vulgar" or "cheap". DeMille's films "Male and Female", "Why Change Your Wife?", and "The Affairs of Anatol" can be retrospectively described as high camp and are categorized as "early DeMille films" due to their particular style of production and costume and set design. However, his earlier films "The Captive", "Kindling", "Carmen", and "The Whispering Chorus" are more serious works. It is difficult to typify DeMille's films into one specific genre. His first three films were Westerns, and he filmed many more throughout his career; he also made comedies, period and contemporary romances, dramas, fantasies, propaganda films, Biblical spectacles, musical comedies, suspense films, and war films. At least one DeMille film can represent each film genre. DeMille produced the majority of his films before the 1930s, and by the time sound films were invented, film critics saw DeMille as antiquated, with his best filmmaking years behind him.
DeMille's films contained many similar themes throughout his career, though the films of his silent era were often thematically different from the films of his sound era. His silent-era films often included the "battle of the sexes" theme, reflecting the era of women's suffrage and the enlarging role of women in society. Moreover, before his religious-themed films, many of his silent-era films revolved around "husband-and-wife-divorce-and-remarry satires", considerably more adult-themed. According to Simon Louvish, these films reflected DeMille's inner thoughts and opinions about marriage and human sexuality. Religion was a theme that DeMille returned to throughout his career. Of his seventy films, five revolved around stories of the Bible and the New Testament; however, many others, while not direct retellings of Biblical stories, had themes of faith and religious fanaticism, as in "The Crusades" and "The Road to Yesterday". The West and frontier America were also subjects that DeMille returned to throughout his career: his first several films were Westerns, and he produced a string of Westerns during the sound era. Instead of portraying the danger and anarchy of the West, he portrayed the opportunity and redemption found in Western America. Another common theme in DeMille's films is the reversal of fortune and the portrayal of the rich and the poor, including the war of the classes and man-versus-society conflicts, as in "The Golden Chance" and "The Cheat". In relation to his own interests and sexual preferences, sadomasochism was a minor theme present in some of his films. Train crashes are another minor recurring element, appearing in several of his films.
Legacy.
Known as the father of the Hollywood motion picture industry, Cecil B. DeMille made 70 films including several box-office hits. DeMille is one of the more commercially successful film directors in history, with his films before the release of "The Ten Commandments" estimated to have grossed $650 million worldwide. Adjusted for inflation, DeMille's remake of "The Ten Commandments" is the eighth highest-grossing film in the world.
According to Sam Goldwyn, critics did not like DeMille's films, but the audiences did, and "they have the final word". Similarly, scholar David Blanke argued that DeMille had lost the respect of his colleagues and film critics by his late career, but his final films showed that he still commanded the respect of audiences. Five of DeMille's films were the highest-grossing films in the year of their release, with only Spielberg topping him with six of his films as the highest-grossing films of the year. DeMille's highest-grossing films include: "The Sign of the Cross" (1932), "Unconquered" (1947), "Samson and Delilah" (1949), "The Greatest Show on Earth" (1952), and "The Ten Commandments" (1956). Director Ridley Scott has been called "the Cecil B. DeMille of the digital era" due to his classical and medieval epics.
Despite his box-office success, awards, and artistic achievements, DeMille has been dismissed and ignored by critics both during his life and posthumously. He was consistently criticized for producing shallow films without talent or artistic care. Compared to other directors, few film scholars have taken the time to academically analyze his films and style. During the French New Wave, critics began to categorize certain filmmakers as auteurs, such as Howard Hawks, John Ford, and Raoul Walsh. DeMille was omitted from the list, thought to be too unsophisticated and antiquated to be considered an auteur. However, Simon Louvish wrote "he was the complete master and auteur of his films", and Anton Kozlovic called him the "unsung American auteur". Andrew Sarris, a leading proponent of the auteur theory, ranked DeMille highly as an auteur in the "Far Side of Paradise", just below the "Pantheon". Sarris added that despite the influence of the styles of contemporary directors throughout his career, DeMille's style remained unchanged. Robert Birchard wrote that one could argue the auteurship of DeMille on the basis that DeMille's thematic and visual style remained consistent throughout his career. However, Birchard acknowledged that Sarris's point was more likely that DeMille's style lagged behind the development of film as an art form. Meanwhile, Sumiko Higashi sees DeMille as "not only a figure who was shaped and influenced by the forces of his era but as a filmmaker who left his own signature on the culture industry." The critic Camille Paglia has called "The Ten Commandments" one of the ten greatest films of all time.
DeMille was one of the first directors to become a celebrity in his own right. He cultivated the image of the omnipotent director, complete with megaphone, riding crop, and jodhpurs. He was known for his unique working wardrobe, which included riding boots, riding pants, and soft, open-necked shirts. Joseph Henabery recalled that DeMille looked like "a king on a throne surrounded by his court" while directing films on a camera platform.
DeMille was liked by some of his fellow directors and disliked by others, though his films were usually dismissed by his peers as vapid spectacle. Director John Huston intensely disliked both DeMille and his films. "He was a thoroughly bad director", Huston said. "A dreadful showoff. Terrible. To diseased proportions." Said fellow director William Wellman: "Directorially, I think his pictures were the most horrible things I've ever seen in my life. But he put on pictures that made a fortune. In that respect, he was better than any of us." Producer David O. Selznick wrote: "There has appeared only one Cecil B. DeMille. He is one of the most extraordinarily able showmen of modern times. However much I may dislike some of his pictures, it would be very silly of me, as a producer of commercial motion pictures, to demean for an instant his unparalleled skill as a maker of mass entertainment." Salvador Dalí wrote that DeMille, Walt Disney, and the Marx Brothers were "the three great American Surrealists". DeMille appeared as himself in numerous films, including the MGM comedy "Free and Easy". He often appeared in his coming-attraction trailers and narrated many of his later films, even stepping on screen to introduce "The Ten Commandments". DeMille was immortalized in Billy Wilder's "Sunset Boulevard", in which he plays himself, when Gloria Swanson spoke the line: "All right, Mr. DeMille. I'm ready for my close-up." DeMille's reputation had a renaissance in the 2010s.
As a filmmaker, DeMille was the aesthetic inspiration of many directors and films due to his early influence during the crucial development of the film industry. DeMille's early silent comedies influenced the comedies of Ernst Lubitsch and Charlie Chaplin's "A Woman of Paris". Additionally, DeMille's epics such as "The Crusades" influenced Sergei Eisenstein's "Alexander Nevsky", and his epics inspired directors such as Howard Hawks, Nicholas Ray, Joseph L. Mankiewicz, and George Stevens to try producing epics of their own. DeMille has also influenced the work of many well-known modern directors. Alfred Hitchcock cited DeMille's 1921 film "Forbidden Fruit" as an influence on his work and one of his top ten favorite films. Martin Scorsese cited "Unconquered", "Samson and Delilah", and "The Greatest Show on Earth" as DeMille films that left lasting memories on him, and said he had viewed "The Ten Commandments" forty or fifty times. Steven Spielberg stated that DeMille's "The Greatest Show on Earth" was one of the films that influenced him to become a filmmaker, and DeMille influenced about half of Spielberg's films, including "War of the Worlds". "The Ten Commandments" inspired DreamWorks Animation's later film about Moses, "The Prince of Egypt". As one of the establishing members of Paramount Pictures and a co-founder of Hollywood, DeMille had a role in the development of the film industry; consequently, the name "DeMille" has become synonymous with filmmaking.
Publicly Episcopalian, DeMille drew on his Christian and Jewish ancestors to convey a message of tolerance. DeMille received more than a dozen awards from Christian and Jewish religious and cultural groups, including B'nai B'rith. However, not everyone received DeMille's religious films favorably. DeMille was accused of antisemitism after the release of "The King of Kings", and director John Ford despised DeMille for what he saw as "hollow" biblical epics meant to promote DeMille's reputation during the politically turbulent 1950s. In response to the claims, DeMille donated some of the profits from "The King of Kings" to charity. In the 2012 "Sight & Sound" poll, both DeMille's "Samson and Delilah" and his 1923 version of "The Ten Commandments" received votes, but neither made the top 100 films. Although many of DeMille's films are available on DVD and Blu-ray, only 20 of his silent films are commercially available on DVD.
Commemoration and tributes.
The original Lasky-DeMille Barn in which "The Squaw Man" was filmed was converted into a museum named the "Hollywood Heritage Museum". It opened on December 13, 1985, and features some of DeMille's personal artifacts. The Lasky-DeMille Barn was dedicated as a California historical landmark in a ceremony on December 27, 1956; DeMille was the keynote speaker. It was listed on the National Register of Historic Places in 2014. The Dunes Center in Guadalupe, California, contains an exhibition of artifacts uncovered in the desert near Guadalupe from DeMille's set of his 1923 version of "The Ten Commandments", known as the "Lost City of Cecil B. DeMille". Donated by the Cecil B. DeMille Foundation in 2004, the moving image collection of Cecil B. DeMille is held at the Academy Film Archive and includes home movies, outtakes, and never-before-seen test footage.
In summer 2019, The Friends of the Pompton Lakes Library hosted a Cecil B. DeMille film festival to celebrate DeMille's achievements and connection to Pompton Lakes. They screened four of his films at Christ Church, where DeMille and his family attended church when they lived there. Two schools have been named after him: Cecil B. DeMille Middle School, in Long Beach, California, which was closed and demolished in 2010 to make way for a new high school; and Cecil B. DeMille Elementary School in Midway City, California. The former film building at Chapman University in Orange, California, is named in honor of DeMille. During the Apollo 11 mission, Buzz Aldrin referred to himself in one instance as "Cecil B. DeAldrin", as a humorous nod to DeMille. The title of the 2000 John Waters film "Cecil B. Demented" alludes to DeMille.
DeMille's legacy is maintained by his granddaughter Cecilia DeMille Presley who serves as the president of the Cecil B. DeMille Foundation, which strives to support higher education, child welfare, and film in Southern California. In 1963, the Cecil B. DeMille Foundation donated the "Paradise" ranch to the Hathaway Foundation, which cares for emotionally disturbed and abused children. A large collection of DeMille's materials including scripts, storyboards, and films resides at Brigham Young University in L. Tom Perry Special Collections.
Awards and recognition.
Cecil B. DeMille received many awards and honors, especially later in his career.
In August 1941, DeMille was honored with a block in the forecourt of Grauman's Chinese Theatre.
The American Academy of Dramatic Arts honored DeMille with an Alumni Achievement Award in 1958.
In 1957, DeMille gave the commencement address at the graduation ceremony of Brigham Young University, where he received an honorary Doctorate of Letters degree. Additionally, in 1958, he received an honorary Doctorate of Law degree from Temple University.
From the film industry, DeMille received the Irving G. Thalberg Memorial Award at the Academy Awards in 1953, and a Lifetime Achievement Award from the Directors Guild of America the same year. In the same ceremony, DeMille received a Directors Guild of America Award nomination for Outstanding Directorial Achievement in Motion Pictures for "The Greatest Show on Earth". In 1952, DeMille was awarded the first Cecil B. DeMille Award at the Golden Globes; the annual Cecil B. DeMille Award recognizes lifetime achievement in the film industry. For his contribution to the motion picture and radio industries, DeMille has two stars on the Hollywood Walk of Fame. The first, for radio contributions, is located at 6240 Hollywood Blvd.; the second star is located at 1725 Vine Street.
DeMille received two Academy Awards: an Honorary Award for "37 years of brilliant showmanship" in 1950 and a Best Picture award in 1953 for "The Greatest Show on Earth". DeMille received a Golden Globe Award for Best Director and was additionally nominated for the Best Director category at the 1953 Academy Awards for the same film. He was further nominated in the Best Picture category for "The Ten Commandments" at the 1957 Academy Awards. DeMille's "Union Pacific" received a Palme d'Or in retrospect at the 2002 Cannes Film Festival.
Two of DeMille's films have been selected for preservation in the National Film Registry by the United States Library of Congress: "The Cheat" (1915) and "The Ten Commandments" (1956).
Filmography.
DeMille made 70 features, 52 of which are silent. The first 24 of his silent films were produced during the first three years of his career (1913–1916). Eight of his films were "epics", with five classified as "Biblical". Six of DeMille's films — "The Arab", "The Wild Goose Chase", "The Dream Girl", "The Devil-Stone", "We Can't Have Everything", and "The Squaw Man" (1918) — were destroyed by nitrate decomposition and are considered lost. "The Ten Commandments" is broadcast in the United States every year at Passover, on a Saturday, on the ABC Television Network.
Directed features.
Filmography obtained from "Fifty Hollywood Directors".
Silent films
Sound films
Directing or producing credit.
These are films which DeMille produced or assisted in directing, credited or uncredited.
Acting and cameos.
DeMille frequently made cameos as himself in other Paramount films. Additionally, he often starred in prologues and special trailers that he created for his films, having an opportunity to personally address the audience.
|
6181
|
8766034
|
https://en.wikipedia.org/wiki?curid=6181
|
Chinese Islamic cuisine
|
Chinese Islamic cuisine consists of variations of regionally popular foods that are typical of Han Chinese cuisine, in particular to make them halal. Dishes borrow ingredients from Middle Eastern, Turkic, Iranian and South Asian cuisines, notably mutton and spices. Much like other northern Chinese cuisines, Chinese Islamic cuisine uses wheat noodles as the staple, rather than rice. Chinese Islamic dishes include clear-broth beef noodle soup and "chuanr".
The Hui (ethnic Chinese Muslims), Bonan, Dongxiang, Salar and Uyghurs of China, as well as the Dungans of Central Asia and the Panthays of Burma, collectively contribute to Chinese Islamic cuisine.
History.
Due to the large Muslim population in Western China, many Chinese restaurants cater to, or are run by, Muslims. Northern Chinese Islamic cuisine originated in China proper. It is heavily influenced by Beijing cuisine, with nearly all cooking methods identical; it differs only in ingredients because of religious restrictions. As a result, northern Islamic cuisine is often included in home-style Beijing cuisine, though seldom in east coast restaurants.
During the Yuan dynasty, halal and kosher methods of slaughtering animals and preparing food were banned by the Mongol emperors, starting with Genghis Khan, who prohibited Muslims and Jews from slaughtering their animals in their own way and made them follow the Mongol method.
Among all the [subject] alien peoples only the Hui-hui say "we do not eat Mongol food." [Cinggis Qa'an replied:] "By the aid of heaven we have pacified you; you are our slaves. Yet you do not eat our food or drink. How can this be right?" He thereupon made them eat. "If you slaughter sheep, you will be considered guilty of a crime." He issued a regulation to that effect ... [In 1279/1280 under Qubilai] all the Muslims say: "if someone else slaughters [the animal] we do not eat." Because the poor people are upset by this, from now on, Musuluman [Muslim] Huihui and Zhuhu [Jewish] Huihui, no matter who kills [the animal] will eat [it] and must cease slaughtering sheep themselves, and cease the rite of circumcision.
Traditionally, there is a distinction between Northern and Southern Chinese Islamic cuisine despite both using lamb and mutton. Northern Chinese Islamic cuisine relies heavily on beef but rarely uses duck, goose, shrimp or seafood, while southern Islamic cuisine is the reverse. The reason for this difference is the availability of ingredients. Oxen have long been used for farming, and Chinese governments have frequently and strictly prohibited the slaughter of oxen for food. However, because the northern part of China is geographically close to minority-dominated regions that were not subject to such restrictions, beef could be easily purchased and transported to Northern China. At the same time, ducks, geese and shrimp are rare in comparison to Southern China because of Northern China's arid climate.
A Chinese Islamic restaurant () can be similar to a Mandarin restaurant with the exception that there is no pork on the menu and the dishes are primarily noodle/soup based.
In most major eastern cities in China, Islamic/halal restaurants are limited. They are typically run by migrants from Western China (e.g., Uyghurs) and primarily offer inexpensive noodle soups. These restaurants are typically decorated with Islamic motifs such as Islamic writing.
Another difference is that lamb and mutton dishes are more commonly available than in other Chinese restaurants, due to the greater prevalence of these meats in the cuisine of Western Chinese regions.
Other Muslim ethnic minorities like the Bonan, Dongxiang, Salar and Tibetan Muslims have their own cuisines as well. Dongxiang people operate their own restaurants serving their cuisine.
Many cafeterias (canteens) at Chinese universities have separate sections or dining areas for Muslim students (Hui or Western Chinese minorities), typically labeled "qingzhen". Student ID cards sometimes indicate whether a student is Muslim, granting access to these dining areas or permitting access on special occasions such as the Eid feast following Ramadan.
Several Hui restaurants serving Chinese Islamic cuisine exist in Los Angeles. San Francisco, despite its huge number of Chinese restaurants, appears to have only one whose cuisine would qualify as halal.
Many Chinese Hui Muslims who moved from Yunnan to Burma (Myanmar), known as Panthays, operate restaurants and stalls serving Chinese Islamic cuisine such as noodles. Chinese Hui Muslims from Yunnan who moved to Thailand, known as Chin Haw, also own restaurants and stalls serving Chinese Islamic food.
In Central Asia, Dungan people, descendants of the Hui, operate restaurants serving Chinese Islamic cuisine, which is referred to there as "Dungan cuisine". These restaurants cater to Chinese businessmen, and the Dungans use chopsticks. Dungan cuisine resembles northwestern Chinese cuisine.
Most Chinese regard Hui halal food as cleaner than food made by non-Muslims, so Hui restaurants are popular in China. Hui who migrated to Northeast China (Manchuria) after the Chuang Guandong opened many new inns and restaurants to cater to travelers, and these were regarded as clean.
The Hui who migrated to Taiwan operate Qingzhen restaurants and stalls serving Chinese Islamic cuisine in Taipei and other big cities.
The Thai Department of Export Promotion claims that "China's halal food producers are small-scale entrepreneurs whose products have little value added and lack branding and technology to push their goods to international standards" to encourage Thai private sector halal producers to market their products in China.
Dong Lai Shun in Hankou, a franchise serving Muslim food, has operated since 1903.
In Ningxia, Gansu and Shaanxi, restaurants serving beef noodles that belong to Hui Muslims keep a distance of at least 400 meters from other restaurants of the same type, as the Hui maintain a pact with one another.
Halal restaurants are inspected by clerics from mosques.
Halal food manufacture has been sanctioned by the government of the Ningxia Autonomous Region.
Famous dishes.
Lamian.
Lamian (, Dungan: Ламян) is a Chinese dish of hand-made noodles, usually served in a beef or mutton-flavored soup (湯麪, даңмян, tāngmiàn), but sometimes stir-fried (炒麪, Чаомян, chǎomiàn) and served with a tomato-based sauce. Literally, 拉, ла (lā) means to pull or stretch, while 麪, мян (miàn) means noodle. The hand-making process involves taking a lump of dough and repeatedly stretching it to produce a single very long noodle. There exists a local variant in Lanzhou, the Lanzhou beef noodles, also known as Lanzhou lamian.
Words that begin with "l" are not native to Turkic; as stated by the Uyghur linguist Abdlikim, läghmän is a loanword of Chinese derivation and not originally Uyghur.
Beef noodle soup.
Beef noodle soup is a noodle soup dish composed of stewed beef, beef broth, vegetables and wheat noodles. It exists in various forms throughout East and Southeast Asia. It was created by the Hui people during the Qing dynasty of China.
In the West, this food may be served in a small portion as a soup. In China, a large bowl of it is often taken as a whole meal, with or without a side dish.
Chuanr.
Chuanr (Chinese: 串儿, Dungan: Чўанр, Pinyin: chuànr (shortened from "chuan er"), "kebab") originated in the Xinjiang (新疆) region of China and in recent years has spread throughout the rest of the country, most notably to Beijing. It is a product of the Chinese Islamic cuisine of the Uyghur (维吾尔) people and other Chinese Muslims. Yang rou chuan, or lamb kebab, is particularly popular.
Suan cai.
Suan cai is a traditional fermented vegetable dish, similar to Korean kimchi and German sauerkraut, used in a variety of ways. It consists of pickled Chinese cabbage. Suan cai is a distinct form of pao cai owing to the ingredients used and the method of production. Although "suan cai" is not exclusive to Chinese Islamic cuisine, it is used in Chinese Islamic cuisine to top off noodle soups, especially beef noodle soup.
Nang.
"Nang" (Chinese: 馕, Dungan: Нәң) is a type of round unleavened bread, topped with sesame. It is similar to South and Central Asia naan.
|
6182
|
22604547
|
https://en.wikipedia.org/wiki?curid=6182
|
Cantonese cuisine
|
Cantonese or Guangdong cuisine, also known as Yue cuisine ( or ), is the cuisine of Cantonese people, associated with the Guangdong province of China, particularly the provincial capital Guangzhou, and the surrounding regions in the Pearl River Delta including Hong Kong and Macau. Strictly speaking, Cantonese cuisine is the cuisine of Guangzhou or of Cantonese speakers, but it often includes the cooking styles of all the speakers of Yue Chinese languages in Guangdong.
The Teochew cuisine and Hakka cuisine of Guangdong are considered their own styles. However, scholars may categorize Guangdong cuisine into three major groups based on the region's dialects: Cantonese, Hakka and Chaozhou cuisines. Neighboring Guangxi's cuisine is also considered separate, despite eastern Guangxi being culturally Cantonese, because of the ethnic Zhuang influences in the rest of the province.
Cantonese cuisine is one of the Eight Great Traditions of Chinese cuisine. Its prominence outside China is due to the large number of Cantonese emigrants. Chefs trained in Cantonese cuisine are highly sought after throughout China. Until the late 20th century, most Chinese restaurants in the West served largely Cantonese dishes.
Background.
Guangzhou (Canton) City, the provincial capital of Guangdong and the centre of Cantonese culture, has long been a trading hub and many imported foods and ingredients are used in Cantonese cuisine. Besides pork, beef and chicken, Cantonese cuisine incorporates almost all edible meats, including offal, chicken feet, duck's tongue, frog legs, snakes and snails. However, lamb and goat are less commonly used than in the cuisines of northern or western China. Many cooking methods are used, with steaming and stir-frying being the most favoured due to their convenience and rapidity. Other techniques include shallow frying, double steaming, braising and deep frying.
Compared to other Chinese regional cuisines, the flavours of most traditional Cantonese dishes should be well-balanced and not greasy. Apart from that, spices should be used in modest amounts to avoid overwhelming the flavours of the primary ingredients, and these ingredients in turn should be at the peak of their freshness and quality. There is no widespread use of fresh herbs in Cantonese cooking, in contrast with their liberal use in other cuisines such as Sichuanese, Vietnamese, Lao, Thai and European. Garlic chives and coriander leaves are notable exceptions, although the former are often used as a vegetable and the latter are usually used as mere garnish in most dishes.
Foods.
Sauces and condiments.
In Cantonese cuisine, ingredients such as sugar, salt, soy sauce, rice wine, corn starch, vinegar, scallion and sesame oil suffice to enhance flavour, although garlic is heavily used in some dishes, especially those in which internal organs, such as entrails, may emit unpleasant odours. Ginger, chili peppers, five-spice powder, powdered black pepper, star anise and a few other spices are also used, but often sparingly.
Dried and preserved ingredients.
Although Cantonese cooks pay much attention to the freshness of their primary ingredients, Cantonese cuisine also uses a long list of preserved food items to add flavour to a dish. This may be influenced by Hakka cuisine, since the Hakkas were once a dominant group occupying imperial Hong Kong and other southern territories.
Some items gain very intense flavours during the drying/preservation/oxidation process and some foods are preserved to increase their shelf life. Some chefs combine both dried and fresh varieties of the same items in a dish. Dried items are usually soaked in water to rehydrate before cooking. These ingredients are generally not served "a la carte", but rather with vegetables or other Cantonese dishes.
Traditional dishes.
A number of dishes have been part of Cantonese cuisine since the earliest territorial establishments of Guangdong. While many of these are on the menus of typical Cantonese restaurants, some simpler ones are more commonly found in Cantonese homes. Home-made Cantonese dishes are usually served with plain white rice.
Deep fried dishes.
There are a small number of deep-fried dishes in Cantonese cuisine, which can often be found as street food. They have been extensively documented in colonial Hong Kong records of the 19th and 20th centuries. A few are synonymous with Cantonese breakfast and lunch, even though these are also part of other cuisines.
Soups.
Old fire soup, or "lou fo tong" (), is a clear broth prepared by simmering meat and other ingredients over a low heat for several hours. Chinese herbs are often used as ingredients. There are two basic ways to make old fire soup: put the ingredients and water in a pot and heat it directly over the fire, which is called "bou tong" (); or put the ingredients in a small stew pot, place it inside a larger pot filled with water, and heat the larger pot directly over the fire, which is called "dun tong" (). The latter method best preserves the original taste of the soup.
Soup chain stores and delivery outlets in cities with significant Cantonese populations, such as Hong Kong, serve this dish because of the long preparation time slow-simmered soup requires.
Seafood.
Due to Guangdong's location along the South China Sea coast, fresh seafood is prominent in Cantonese cuisine, and many Cantonese restaurants keep aquariums or seafood tanks on the premises. In Cantonese cuisine, as in cuisines from other parts of Asia, if seafood has a repugnant odour, strong spices and marinating juices are added; the freshest seafood is odourless and, in Cantonese culinary arts, is best cooked by steaming. For instance, in some recipes, only a small amount of soy sauce, ginger and spring onion is added to steamed fish. In Cantonese cuisine, the light seasoning is used only to bring out the natural sweetness of the seafood. As a rule of thumb, the spiciness of a dish is usually negatively correlated to the freshness of the ingredients.
Noodle dishes.
Noodles are served either in soup broth or fried. These are available as home-cooked meals, on dim sum side menus, or as street food at dai pai dongs, where they can be served with a variety of toppings such as fish balls, beef balls, or fish slices.
Siu mei.
"Siu mei" () is essentially the Chinese rotisserie style of cooking. Unlike most other Cantonese dishes, "siu mei" solely consists of meat, with no vegetables.
All Cantonese-style cooked meats, including siu mei, lou mei and preserved meat can be classified as siu laap ().
Lou mei.
Lou mei () is the name given to dishes made from internal organs, entrails and other left-over parts of animals. It is widely available in southern Chinese regions.
Meat and rice plates.
A portion of meat, such as char siu, served on a bed of steamed white rice. A typical variant consists of half-and-half portions of two types of siu mei and lou mei (or sometimes more than two). A steamed vegetable (such as choy sum) is frequently, but not always included.
Little pot rice.
Little pot rice () are dishes cooked and served in a flat-bottomed pot (as opposed to a round-bottomed wok). Usually this is a saucepan or braising pan (see clay pot cooking). Such dishes are cooked by covering and steaming, making the rice and ingredients very hot and soft. Usually the ingredients are layered on top of the rice with little or no mixing in between. Many standard combinations exist.
Banquet/dinner dishes.
A number of dishes are traditionally served in Cantonese restaurants only at dinner time. Dim sum restaurants stop serving bamboo-basket dishes after the yum cha period (equivalent to afternoon tea) and begin offering an entirely different menu in the evening. Some dishes are standard while others are regional. Some are customised for special purposes such as Chinese marriages or banquets. Salt-and-pepper dishes are among the few spicy offerings.
Dessert.
After the evening meal, most Cantonese restaurants offer "tong sui" (), a sweet soup. Many varieties of "tong sui" are also found in other Chinese cuisines. Some desserts are traditional, while others are recent innovations. The more expensive restaurants usually offer their specialty desserts. "Sugar water" is the general name for dessert in Guangdong Province; it is made by cooking other ingredients in water with sugar.
Delicacies.
Certain Cantonese delicacies consist of parts taken from rare or endangered animals, which raises controversy over animal rights and environmental issues. This is often due to alleged health benefits of certain animal products. For example, the continued spreading of the idea that shark cartilage can cure cancer has led to decreased shark populations even though scientific research has found no evidence to support the credibility of shark cartilage as a cancer cure.
|
6183
|
49397919
|
https://en.wikipedia.org/wiki?curid=6183
|
Teochew cuisine
|
Teochew cuisine, also known as Chiuchow cuisine, Chaozhou cuisine or Teo-swa cuisine, originated from the Chaoshan region in the eastern part of China's Guangdong Province, which includes the cities of Chaozhou, Shantou and Jieyang. Teochew cuisine bears more similarities to that of Fujian cuisine, particularly Southern Min cuisine, due to the similarity of Teochew's and Fujian's culture, language, and their geographic proximity to each other. However, Teochew cuisine is also influenced by Cantonese cuisine in its style and technique.
Background.
Teochew cuisine is well known for its seafood and vegetarian dishes. Its use of flavouring is much less heavy-handed than most other Chinese cuisines and depends much on the freshness and quality of the ingredients for taste and flavour. As a delicate cuisine, oil is not often used in large quantities and there is a relatively heavy emphasis on poaching, steaming and braising, as well as the common Chinese method of stir-frying. Teochew cuisine is also known for serving congee (; or "mue"), in addition to steamed rice or noodles with meals. The Teochew "mue" is rather different from the Cantonese counterpart, being very watery with the rice sitting loosely at the bottom of the bowl, while the Cantonese dish is more of a thin gruel.
Authentic Teochew restaurants serve very strong oolong tea called Tieguanyin in very tiny cups before and after the meal. Presented as "gongfu" tea, the tea has a thickly bittersweet taste, colloquially known as "gam gam" ().
A condiment that is popular in Fujian and Taiwanese cuisine and commonly associated with cuisine of certain Teochew groups is shacha sauce (). It is made from soybean oil, garlic, shallots, chilies, brill fish and dried shrimp. The paste has a savoury and slightly spicy taste. As an ingredient, it has multiple uses: as a base for soups, as a rub for barbecued meats, as a seasoning for stir-fried dishes, or as a component for dipping sauces.
In addition to soy sauce (widely used in all Chinese cuisines), Teochew people also use fish sauce in their cooking.
Teochew chefs often use a special stock called "siang teng" (), which literally translates from the Teochew dialect as "superior broth". This stock remains on the stove and is continuously replenished. As portrayed in popular media, some Hong Kong chefs allegedly use the same superior broth, preserved for decades. The stock can also be seen on Chaozhou TV's cooking programmes.
There is a notable feast in Teochew cuisine called "" (). A myriad of dishes are often served, which include shark fin soup, bird's nest soup, lobster, steamed fish, roasted suckling pig and braised goose.
Teochew chefs take pride in their skills of vegetable carving, and carved vegetables are used as garnishes on cold dishes and on the banquet table.
Teochew cuisine is also known for a late night meal known as "meh siao" () or "daa laang" () among the Cantonese. Teochew people enjoy eating out close to midnight in restaurants or at roadside food stalls. Some dai pai dong-like eateries stay open till dawn.
Unlike the typical menu selections of many other Chinese cuisines, Teochew restaurant menus often have a dessert section.
Many people of Teochew origin, also known as Teochiu or Teochew people, have settled in Hong Kong and places in Southeast Asia like Malaysia, Singapore, Cambodia and Thailand. Influences they bring can be noted in Singaporean cuisine and that of other settlements. A large number of Teochew people have also settled in Taiwan, evident in Taiwanese cuisine. Other notable Teochew diaspora communities are in Vietnam, Cambodia and France. A popular noodle soup in both Vietnam and Cambodia, known as hu tieu, originated with the Teochew. There is also a large diaspora of Teochew people (most of whom came from Southeast Asia) in the United States, particularly in California. There is a Teochew Chinese Association in Paris called L'Amicale des Teochews en France.
|
6184
|
18872885
|
https://en.wikipedia.org/wiki?curid=6184
|
Co-NP
|
In computational complexity theory, co-NP is a complexity class. A decision problem X is a member of co-NP if and only if its complement is in the complexity class NP. The class can be defined as follows: a decision problem is in co-NP if and only if for every "no"-instance we have a polynomial-length "certificate" and there is a polynomial-time algorithm that can be used to verify any purported certificate.
That is, co-NP is the set of decision problems where there exists a polynomial "p" and a polynomial-time bounded Turing machine "M" such that for every instance "x", "x" is a "no"-instance if and only if: for some possible certificate "c" of length bounded by "p"(|"x"|), the Turing machine "M" accepts the pair ("x", "c").
Complementary problems.
While an NP problem asks whether a given instance is a "yes"-instance, its "complement" asks whether an instance is a "no"-instance, which means the complement is in co-NP. Any "yes"-instance for the original NP problem becomes a "no"-instance for its complement, and vice versa.
Unsatisfiability.
An example of an NP-complete problem is the Boolean satisfiability problem: given a Boolean formula, is it "satisfiable" (is there a possible input for which the formula outputs true)? The complementary problem asks: "given a Boolean formula, is it "unsatisfiable" (do all possible inputs to the formula output false)?". Since this is the "complement" of the satisfiability problem, a certificate for a "no"-instance is the same as for a "yes"-instance from the original NP problem: a set of Boolean variable assignments which make the formula true. On the other hand, a certificate of a "yes"-instance for the complementary problem (whatever form it might take) would be equally as complex as for the "no"-instance of the original NP satisfiability problem.
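The verification step can be made concrete. Below is a minimal sketch in Python (the clause-list encoding of CNF formulas and the function name are illustrative assumptions, not part of the problem's standard presentation): it checks a proposed satisfying assignment, i.e. a certificate that a formula is a "no"-instance of unsatisfiability, in time linear in the size of the formula.

```python
# Illustrative sketch: verifying a certificate for the complement of UNSAT.
# A CNF formula is assumed to be a list of clauses; each clause is a list of
# non-zero integers, where k stands for variable k and -k for "not k".

def verify_satisfying_assignment(cnf, assignment):
    """Return True if `assignment` (a dict mapping variable -> bool) makes
    every clause of `cnf` true. The check is linear in the formula size, so a
    satisfying assignment is a polynomial-time-checkable certificate that the
    formula is a "no"-instance of unsatisfiability."""
    for clause in cnf:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is falsified; reject the certificate
    return True

# Example: (x1 or not x2) and (x2 or x3)
cnf = [[1, -2], [2, 3]]
print(verify_satisfying_assignment(cnf, {1: True, 2: False, 3: True}))    # True
print(verify_satisfying_assignment(cnf, {1: False, 2: False, 3: False}))  # False
```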
co-NP-completeness.
A problem "L" is co-NP-complete if and only if "L" is in co-NP and for any problem in co-NP, there exists a polynomial-time reduction from that problem to "L".
Tautology reduction.
Determining whether a formula in propositional logic is a tautology, that is, whether it evaluates to true under every possible assignment to its variables, is co-NP-complete.
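A brute-force tautology check must, in the worst case, try all 2^"n" assignments of "n" variables, whereas a single falsifying assignment is a short, quickly checkable certificate that a formula is "not" a tautology; this asymmetry is what places the problem in co-NP rather than, as far as is known, in NP. A minimal sketch, assuming for brevity that the formula is given as a Python function:

```python
# Illustrative sketch: exhaustive tautology checking. Representing a formula
# as a Python callable is an assumption made purely for brevity.
from itertools import product

def is_tautology(formula, n):
    """Try all 2**n assignments (exponential time). A falsifying assignment,
    when one exists, acts as a polynomial-size "no"-certificate."""
    for bits in product([False, True], repeat=n):
        if not formula(*bits):
            return False, bits  # certificate that the formula is not a tautology
    return True, None

print(is_tautology(lambda x: x or not x, 1))   # (True, None)
print(is_tautology(lambda x, y: x and y, 2))   # (False, (False, False))
```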
Relationship to other classes.
P, the class of polynomial time solvable problems, is a subset of both NP and co-NP. P is thought to be a strict subset in both cases. Because P is closed under complementation, and NP and co-NP are complementary, it cannot be strict in one case and not strict in the other: if P equals NP, it must also equal co-NP, and vice versa.
NP and co-NP are also thought to be unequal, and their equality would imply the collapse of the polynomial hierarchy PH to NP. If they are unequal, then no NP-complete problem can be in co-NP and no co-NP-complete problem can be in NP. This can be shown as follows. Suppose for the sake of contradiction that there exists an NP-complete problem "X" that is in co-NP. Since all problems in NP can be reduced to "X", it follows that for every problem in NP, we can construct a non-deterministic Turing machine that decides its complement in polynomial time; i.e., NP ⊆ co-NP. From this, it follows that the set of complements of the problems in NP is a subset of the set of complements of the problems in co-NP; i.e., co-NP ⊆ NP. Thus NP = co-NP, contradicting the assumption. The proof that no co-NP-complete problem can be in NP if NP ≠ co-NP is symmetrical.
co-NP is a subset of PH, which itself is a subset of PSPACE.
Integer factorization.
An example of a problem that is known to belong to both NP and co-NP (but not known to be in P) is integer factorization: given positive integers "m" and "n", determine if "m" has a factor less than "n" and greater than one. Membership in NP is clear; if "m" does have such a factor, then the factor itself is a certificate. Membership in co-NP is also straightforward: one can just list the prime factors of "m", all greater than or equal to "n", which the verifier can confirm to be valid by multiplication and the AKS primality test. It is presently not known whether there is a polynomial-time algorithm for factorization, equivalently whether integer factorization is in P, and hence this example is interesting as one of the most natural problems known to be in both NP and co-NP but not known to be in P.
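Both certificates can be checked mechanically. The sketch below is illustrative only: it uses naive trial-division primality testing in place of the AKS test to keep the code short, and the function names are assumptions rather than standard terminology.

```python
# Illustrative sketch: the two certificates for "does m have a factor d with
# 1 < d < n?" described above.

def is_prime(p):
    """Naive trial division, standing in for a real primality test (e.g. AKS)."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def verify_yes(m, n, factor):
    """NP-style certificate: a single divisor of m strictly between 1 and n."""
    return 1 < factor < n and m % factor == 0

def verify_no(m, n, prime_factors):
    """co-NP-style certificate: the complete prime factorization of m with
    every prime factor >= n; verified by multiplying the factors back together
    and checking that each one is prime."""
    total = 1
    for p in prime_factors:
        if p < n or not is_prime(p):
            return False
        total *= p
    return total == m

print(verify_yes(91, 10, 7))         # True: 7 divides 91 and 1 < 7 < 10
print(verify_no(143, 10, [11, 13]))  # True: 143 = 11 * 13, both factors >= 10
```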
|
6185
|
47518519
|
https://en.wikipedia.org/wiki?curid=6185
|
Chuck Yeager
|
Brigadier General Charles Elwood Yeager ( , February 13, 1923December 7, 2020) was a United States Air Force officer, flying ace, and record-setting test pilot who in October 1947 became the first pilot in history confirmed to have exceeded the speed of sound in level flight.
Yeager was raised in Hamlin, West Virginia. His career began in World War II as a private in the United States Army, assigned to the Army Air Forces in 1941. After serving as an aircraft mechanic, in September 1942, he entered enlisted pilot training and upon graduation was promoted to the rank of flight officer (the World War II Army Air Force version of the Army's warrant officer), later achieving most of his aerial victories as a P-51 Mustang fighter pilot on the Western Front, where he was credited with shooting down 11.5 enemy aircraft. The half credit is from a second pilot assisting him in a single shootdown. On October 12, 1944, he attained "ace in a day" status, shooting down five enemy aircraft in one mission.
After the war, Yeager became a test pilot and flew many types of aircraft, including experimental rocket-powered aircraft for the National Advisory Committee for Aeronautics (NACA). Through the NACA program, he became the first human to officially break the sound barrier on October 14, 1947, when he flew the experimental Bell X-1 at Mach 1.05 at an altitude of , for which he won both the Collier and Mackay trophies in 1948. He broke several other speed and altitude records in the following years. In 1962, he became the first commandant of the USAF Aerospace Research Pilot School, which trained and produced astronauts for NASA and the Air Force.
Yeager later commanded fighter squadrons and wings in Germany, as well as in Southeast Asia during the Vietnam War. In recognition of his achievements and the outstanding performance ratings of those units, he was promoted to brigadier general in 1969 and inducted into the National Aviation Hall of Fame in 1973. He retired on March 1, 1975, a date reportedly chosen for its colloquial similarity to "Mach 1". His three-war active-duty flying career spanned more than 30 years and took him to many parts of the world, including the Korean War zone and the Soviet Union during the height of the Cold War.
Yeager is referred to by many as one of the greatest pilots of all time, and was ranked fifth on "Flying" magazine's list of the 51 Heroes of Aviation in 2013. He flew more than 360 different types of aircraft over a 70-year period, and continued to fly for two decades after retirement as a consultant pilot for the United States Air Force. He died in a Los Angeles-area hospital in 2020, at the age of 97.
Early life and education.
Yeager was born February 13, 1923, in Myra, West Virginia, to farming parents Albert Hal Yeager (1896–1963) and Susie Mae Yeager (; 1898–1987). When he was five years old, his family moved to Hamlin, West Virginia. Yeager had two brothers, Roy and Hal Jr., and two sisters, Doris Ann (accidentally killed at age two by four-year-old Roy playing with a firearm) and Pansy Lee.
He attended Hamlin High School, where he played basketball and football, receiving his best grades in geometry and typing. He graduated from high school in June 1941.
His first experience with the military was as a teen at the Citizens Military Training Camp at Fort Benjamin Harrison, Indianapolis, Indiana, during the summers of 1939 and 1940. On February 26, 1945, Yeager married Glennis Dickhouse. The couple had four children. Glennis Yeager died in 1990, predeceasing her husband by 30 years.
His cousin, Steve Yeager, was a professional baseball catcher.
Career.
World War II.
On September 12, 1941, Yeager enlisted as a private in the U.S. Army Air Forces (USAAF), and became an aircraft mechanic at George Air Force Base, Victorville, California. At enlistment, Yeager was not eligible for flight training because of his age and educational background, but the entry of the U.S. into World War II less than three months later prompted the USAAF to alter its recruiting standards. Yeager had unusually sharp vision, a visual acuity rated 20/10, which once enabled him to shoot a deer at .
At the time of his flight training acceptance, he was a crew chief on an AT-11. He received his pilot wings and a promotion to flight officer at Luke Field, Arizona, where he graduated from Class 43C on March 10, 1943. Assigned to the 357th Fighter Group at Tonopah, Nevada, he initially trained as a fighter pilot, flying Bell P-39 Airacobras (being grounded for seven days for clipping a farmer's tree during a training flight), and shipped overseas with the group on November 23, 1943.
Stationed in the United Kingdom at RAF Leiston, Yeager flew P-51 Mustangs in combat with the 363d Fighter Squadron. He named his aircraft "Glamorous Glen" after his girlfriend, Glennis Faye Dickhouse, who became his wife in February 1945. Yeager had gained one victory before he was shot down over France in his first aircraft (P-51B-5-NA s/n 43-6763) on March 5, 1944, on his eighth mission. He escaped to Spain on March 30, 1944, with the help of the "Maquis" (French Resistance) and returned to England on May 15, 1944. During his stay with the "Maquis", Yeager assisted the guerrillas in duties that did not involve direct combat; he helped construct bombs for the group, a skill that he had learned from his father. He was awarded the Bronze Star for helping a navigator, Omar M. "Pat" Patterson Jr., to cross the Pyrenees.
Despite a regulation prohibiting "evaders" (escaped pilots) from flying over enemy territory again, the purpose of which was to prevent resistance groups from being compromised by giving the enemy a second chance to possibly capture him, Yeager was reinstated to flying combat. He had joined another evader, fellow P-51 pilot 1st Lt Fred Glover, in speaking directly to the Supreme Allied Commander, General Dwight D. Eisenhower, on June 12, 1944. "I raised so much hell that General Eisenhower finally let me go back to my squadron" Yeager said. "He cleared me for combat after D Day, because all the free Frenchmen – Maquis and people like that – had surfaced". Eisenhower, after gaining permission from the War Department to decide the requests, concurred with Yeager and Glover. In the meantime, Yeager shot down his second enemy aircraft, a German Junkers Ju 88 bomber, over the English Channel.
Yeager demonstrated outstanding flying skills and combat leadership. On October 12, 1944, he became the first pilot in his group to make "ace in a day," downing five enemy aircraft in a single mission. Two of these victories were scored without firing a single shot: when he flew into firing position against a Messerschmitt Bf 109, the pilot of the aircraft panicked, breaking to port and colliding with his wingman. Yeager said both pilots bailed out. He finished the war with 11.5 official victories, including one of the first air-to-air victories over a jet fighter, a German Messerschmitt Me 262 that he shot down as it was on final approach for landing.
Yeager's official statement of the 12 October mission states:
In his 1986 memoirs, Yeager recalled with disgust that "atrocities were committed by both sides", and said he went on a mission with orders from the Eighth Air Force to "strafe anything that moved". During the mission briefing, he whispered to Major Donald H. Bochkay, "If we are going to do things like this, we sure as hell better make sure we are on the winning side". Yeager said, "I'm certainly not proud of that particular strafing mission against civilians. But it is there, on the record and in my memory". He also expressed bitterness at his treatment in England during World War II, describing the British as "arrogant" and "nasty" on Twitter.
Yeager was commissioned a second lieutenant while at Leiston, and was promoted to captain before the end of his tour. He flew his 61st and final mission on January 15, 1945, and returned to the United States in early February 1945. As an evader, he received his choice of assignments and, because his new wife was pregnant, chose Wright Field to be near his home in West Virginia. His high number of flight hours and maintenance experience qualified him to become a functional test pilot of repaired aircraft, which brought him under the command of Colonel Albert Boyd, head of the Aeronautical Systems Flight Test Division.
Post-World War II.
Test pilot – breaking the sound barrier.
After the war, Yeager remained in the U.S. Army Air Forces. Upon graduating from Air Materiel Command Flight Performance School (Class 46C), Yeager became a test pilot at Muroc Army Air Field (now Edwards Air Force Base). After Bell Aircraft test pilot Chalmers "Slick" Goodlin demanded US$150,000 to break the sound barrier, the USAAF selected the 24-year-old Yeager to fly the rocket-powered Bell XS-1 in a NACA program to research high-speed flight. Under the National Security Act of 1947, the USAAF became the United States Air Force (USAF) on September 18.
Yeager's flight was scheduled for October 14. Two nights before his flight, Yeager went horseback riding with his wife, fell, and broke two ribs under his right arm. Worried the injury would remove him from the mission, Yeager had a civilian doctor in nearby Rosamond tape his ribs.
To seal the hatch of the XS-1, the pilot needed to hold the hatch in position and use their right arm to slam down a heavy lever. Yeager would not be able to seal the hatch with his broken ribs, so Yeager secretly asked his friend and fellow project pilot Jack Ridley for a solution. Ridley sawed off the end of a broom handle for Yeager to use as a lever to seal the hatch.
Yeager broke the sound barrier on October 14, 1947, in level flight while piloting the X-1 "Glamorous Glennis" at Mach 1.05 at an altitude of about 45,000 ft (13,700 m) over the Rogers Dry Lake of the Mojave Desert in California. The success of the mission was not announced to the public for nearly eight months, until June 10, 1948. Yeager was awarded the Mackay Trophy and the Collier Trophy in 1948 for the flight, and the Harmon International Trophy in 1954. The X-1 he flew that day was later put on permanent display at the Smithsonian Institution's National Air and Space Museum. During 1952, he attended the Air Command and Staff College.
Yeager continued to break many speed and altitude records. He was one of the first American pilots to fly a Mikoyan-Gurevich MiG-15, after its pilot, No Kum-sok, defected to South Korea. Returning to Muroc, during the latter half of 1953, Yeager was involved with the USAF team that was working on the X-1A, an aircraft designed to surpass Mach 2 in level flight. That year, he flew a chase aircraft for the civilian pilot Jackie Cochran as she became the first woman to fly faster than sound.
On November 20, 1953, the U.S. Navy program involving the Douglas D-558-II Skyrocket and its pilot, Scott Crossfield, became the first team to reach twice the speed of sound. After they were bested, Ridley and Yeager decided to beat rival Crossfield's speed record in a series of test flights that they dubbed "Operation NACA Weep". Not only did they beat Crossfield by setting a new record at Mach 2.44 on December 12, 1953, but they did it in time to spoil a celebration planned for the 50th anniversary of flight in which Crossfield was to be called "the fastest man alive".
The new record flight, however, did not entirely go to plan, since shortly after reaching Mach 2.44, Yeager lost control of the X-1A at about due to inertia coupling, a phenomenon largely unknown at the time. With the aircraft simultaneously rolling, pitching, and yawing out of control, Yeager dropped in less than a minute before regaining control at around . He then managed to land without further incident. For this feat, Yeager was awarded the Distinguished Service Medal (DSM) in 1954.
Military command.
Yeager was foremost a fighter pilot and held several squadron and wing commands. From 1954 to 1957, he commanded the F-86H Sabre-equipped 417th Fighter-Bomber Squadron (50th Fighter-Bomber Wing) at Hahn AB, West Germany, and Toul-Rosieres Air Base, France; and from 1957 to 1960 the F-100D Super Sabre-equipped 1st Fighter Day Squadron at George Air Force Base, California, and Morón Air Base, Spain.
He was a full colonel in 1962, after completion of a year's studies and final thesis on STOL aircraft at the Air War College. He became the first commandant of the USAF Aerospace Research Pilot School, which produced astronauts for NASA and the USAF, after its redesignation from the USAF Flight Test Pilot School. He had only a high school education, so he was not eligible to become an astronaut like those he trained. In April 1962, Yeager made his only flight with Neil Armstrong. Their job, flying a T-33, was to evaluate Smith Ranch Dry Lake in Nevada for use as an emergency landing site for the North American X-15.
In his autobiography, he wrote that he knew the lake bed was unsuitable for landings after recent rains, but Armstrong insisted on flying out anyway. As Armstrong suggested that they do a touch-and-go, Yeager advised against it, telling him "You may touch, but you ain't gonna go!" When Armstrong did touch down, the wheels became stuck in the mud, bringing the plane to a sudden stop and provoking Yeager to fits of laughter. They had to wait for rescue.
Yeager's participation in the test pilot training program for NASA included controversial behavior. Yeager reportedly did not believe that Ed Dwight, the first African American pilot admitted into the program, should be a part of it. In the 2019 documentary series "Chasing the Moon", the filmmakers made the claim that Yeager instructed staff and participants at the school that "Washington is trying to cram the nigger down our throats. [President] Kennedy is using this to make 'racial equality,' so do not speak to him, do not socialize with him, do not drink with him, do not invite him over to your house, and in six months he'll be gone." In his autobiography, Dwight details how Yeager's leadership led to discriminatory treatment throughout his training at Edwards Air Force Base.
Between December 1963 and January 1964, Yeager completed five flights in the NASA M2-F1 lifting body. An accident during a December 1963 test flight in one of the school's NF-104s resulted in serious injuries. After climbing to a near-record altitude, the plane's controls became ineffective, and it entered a flat spin. After several turns, and an altitude loss of approximately 95,000 feet, Yeager ejected from the plane. During the ejection, the seat straps released normally, but the seat base slammed into Yeager, with the still-hot rocket motor breaking his helmet's plastic faceplate and causing his emergency oxygen supply to catch fire. The resulting burns to his face required extensive and agonizing medical care. This was Yeager's last attempt at setting test-flying records due to his apparent inability to fly the required flight profiles for optimum climb performance.
In 1966, Yeager took command of the 405th Tactical Fighter Wing at Clark Air Base, the Philippines, whose squadrons were deployed on rotational temporary duty (TDY) in South Vietnam and elsewhere in Southeast Asia. There he flew 127 missions. In February 1968, Yeager was assigned command of the 4th Tactical Fighter Wing at Seymour Johnson Air Force Base, North Carolina, and led the McDonnell Douglas F-4 Phantom II wing in South Korea during the "Pueblo" crisis.
Yeager was promoted to brigadier general and was assigned in July 1969 as the vice-commander of the Seventeenth Air Force.
From 1971 to 1973, at the behest of Ambassador Joseph Farland, Yeager was assigned as the Air Attache in Pakistan to advise the Pakistan Air Force which was led by Abdur Rahim Khan (the first Pakistani to break the sound barrier). He arrived in Pakistan at a time when tensions with India were at a high level. One of Yeager's jobs during this time was to assist Pakistani technicians in installing AIM-9 Sidewinders on PAF's Shenyang F-6 fighters. He also had a keen interest in interacting with PAF personnel from various Pakistani Squadrons and helping them develop combat tactics.
In one instance in 1972, while visiting the No. 15 Squadron "Cobras" at Peshawar Airbase, the Squadron's OC Wing Commander Najeeb Khan escorted him to K2 in a pair of F-86Fs after Yeager requested a visit to the second highest mountain on Earth. After hostilities broke out in 1971, he decided to stay in West Pakistan and continued overseeing the PAF's operations. Yeager recalled "the Pakistanis whipped the Indians' asses in the sky... the Pakistanis scored a three-to-one kill ratio, knocking out 102 Russian-made Indian jets and losing 34 airplanes of their own".
During the war, he flew around the western front in a helicopter documenting wreckages of Indian aircraft of Soviet origin, which included Sukhoi Su-7s and MiG-21s. These aircraft were transported to the United States after the war for analysis. Yeager also flew around in his Beechcraft Queen Air, a small passenger aircraft that was assigned to him by the Pentagon, picking up shot-down Indian fighter pilots. The Beechcraft was later destroyed during an air raid by the IAF at a Pakistani airbase when Yeager was not present. Edward C. Ingraham, a U.S. diplomat who had served as political counselor to Ambassador Farland in Islamabad, recalled this incident in the "Washington Monthly" of October 1985: "After Yeager's Beechcraft was destroyed during an Indian air raid, he raged to his cowering colleagues that the Indian pilot had been specifically instructed by Indira Gandhi to blast his plane. 'It was', he later wrote, 'the Indian way of giving Uncle Sam the finger'". Yeager was incensed over the incident and demanded U.S. retaliation.
Post-retirement and in popular culture.
On March 1, 1975, Yeager retired from the Air Force at Norton Air Force Base, California.
Yeager made a cameo appearance in the movie "The Right Stuff" (1983). He played "Fred", a bartender at "Pancho's Place", which was most appropriate, because he said, "if all the hours were ever totaled, I reckon I spent more time at her place than in a cockpit over those years". Sam Shepard portrayed Yeager in the film, which chronicles in part his famous 1947 record-breaking flight.
Yeager has been referenced several times in the shared "Star Trek" universe, including having a namesake fictional type of starship, a dangerous starship formation-maneuver named after him called the "Yeager Loop" (most notably mentioned in the ' episode "The First Duty"), and appearing in archival footage within the opening title sequence for the series ' (2001–2005). For "Enterprise", executive producer Rick Berman said that he envisaged the lead character, Captain Jonathan Archer, as being "halfway between Chuck Yeager and Han Solo".
For several years in the 1980s, Yeager was connected to General Motors, publicizing ACDelco, the company's automotive parts division. In 1986, he was invited to drive the Chevrolet Corvette pace car for the 70th running of the Indianapolis 500. In 1988, Yeager was again invited to drive the pace car, this time at the wheel of an Oldsmobile Cutlass Supreme. In 1986, President Reagan appointed Yeager to the Rogers Commission that investigated the explosion of the Space Shuttle "Challenger".
During this time, Yeager also served as a technical adviser for three Electronic Arts flight simulator video games. The games include "Chuck Yeager's Advanced Flight Trainer", "Chuck Yeager's Advanced Flight Trainer 2.0", and "Chuck Yeager's Air Combat". The game manuals feature quotes and anecdotes from Yeager and were well received by players. Missions feature several of Yeager's accomplishments and let players challenge his records. "Chuck Yeager's Advanced Flight Trainer" was Electronic Arts' top-selling game for 1987.
In 2009, Yeager participated in the documentary "The Legend of Pancho Barnes and the Happy Bottom Riding Club", a profile of his friend Pancho Barnes. The documentary was screened at film festivals, aired on public television in the United States, and won an Emmy Award.
On October 14, 1997, on the 50th anniversary of his historic flight past Mach 1, he flew a new "Glamorous Glennis III", an F-15D Eagle, past Mach 1. The chase plane for the flight was an F-16 Fighting Falcon piloted by Bob Hoover, a longtime test, fighter, and aerobatic pilot who had been Yeager's wingman for the first supersonic flight. At the end of his speech to the crowd in 1997, Yeager concluded, "All that I am ... I owe to the Air Force". Later that month, he was the recipient of the Tony Jannus Award for his achievements.
On October 14, 2012, on the 65th anniversary of breaking the sound barrier, Yeager did it again at the age of 89, flying as co-pilot in a McDonnell Douglas F-15 Eagle piloted by Captain David Vincent out of Nellis Air Force Base.
In October 2016, Yeager reached international headlines when a Twitter argument with an Irish teenager led to him lashing out at the British and Irish, calling Irish people British and labelling all British people as "nasty" and "arrogant". This was one of the last major public controversies in a life that was no stranger to them.
Awards and decorations.
In 1973, Yeager was inducted into the National Aviation Hall of Fame, arguably aviation's highest honor. In 1974, Yeager received the Golden Plate Award of the American Academy of Achievement. In December 1975, the U.S. Congress awarded Yeager a silver medal "equivalent to a noncombat Medal of Honor ... for contributing immeasurably to aerospace science by risking his life in piloting the X-1 research airplane faster than the speed of sound on October 14, 1947". President Gerald Ford presented the medal to Yeager in a ceremony at the White House on December 8, 1976.
Yeager never attended college and was often modest about his background, but is considered by many, including "Flying Magazine", the California Hall of Fame, the State of West Virginia, the National Aviation Hall of Fame, a few U.S. presidents, and the United States Army Air Force, to be one of the greatest pilots of all time. "Air & Space/Smithsonian" magazine ranked him the fifth greatest pilot of all time in 2003. Despite his lack of higher education, West Virginia's Marshall University named its highest academic scholarship, the Society of Yeager Scholars, in his honor. He was the chairman of the Experimental Aircraft Association (EAA)'s Young Eagle Program from 1994 to 2004, and was named the program's chairman emeritus.
In 1966, Yeager was inducted into the International Air & Space Hall of Fame. He was inducted into the International Space Hall of Fame in 1981. He was inducted into the Aerospace Walk of Honor 1990 inaugural class.
Yeager Airport in Charleston, West Virginia, is named in his honor. The Interstate 64/Interstate 77 bridge over the Kanawha River in Charleston is also named in his honor; Yeager once flew directly under this bridge, which West Virginia named the Chuck E. Yeager Bridge. On October 19, 2006, the state of West Virginia also honored Yeager with a marker along Corridor G (part of U.S. Highway 119) in his home Lincoln County, and also renamed part of it the "Yeager Highway".
Yeager was an honorary board member of the humanitarian organization Wings of Hope. On August 25, 2009, Governor Arnold Schwarzenegger and Maria Shriver announced that Yeager would be one of 13 California Hall of Fame inductees in The California Museum's yearlong exhibit. The induction ceremony was on December 1, 2009, in Sacramento, California. "Flying Magazine" ranked Yeager number 5 on its 2013 list of The 51 Heroes of Aviation; for many years, he was the highest-ranked living person on the list.
The Civil Air Patrol, the volunteer auxiliary of the USAF, awards the Charles E. "Chuck" Yeager Award to its senior members as part of its Aerospace Education program.
Personal life.
Yeager named his plane after his wife, Glennis, as a good-luck charm: "You're my good-luck charm, hon. Any airplane I name after you always brings me home." Yeager and Glennis moved to Grass Valley, California, after his retirement from the Air Force in 1975. The couple prospered as a result of Yeager's best-selling autobiography, speaking engagements, and commercial ventures. Glennis Yeager died of ovarian cancer in 1990. They had four children (Susan, Don, Mickey, and Sharon). Yeager's son Mickey (Michael) died unexpectedly in Oregon, on March 26, 2011.
Yeager appeared in a Texas advertisement for George H. W. Bush's 1988 presidential campaign.
In 2000, Yeager met actress Victoria Scott D'Angelo on a hiking trail in Nevada County. The pair started dating shortly thereafter, and married in August 2003. A bitter dispute arose between Yeager, his children, and D'Angelo. The children contended that she, at least 35 years Yeager's junior, had married him for his fortune. Yeager and D'Angelo both denied the charge. Litigation ensued, in which his children accused D'Angelo of "undue influence" on Yeager, and Yeager accused his children of diverting millions of dollars from his assets. In August 2008, the California Court of Appeal ruled for Yeager, finding that his daughter Susan had breached her duty as trustee.
Yeager lived in Grass Valley, Northern California and died in the afternoon of December 7, 2020 (National Pearl Harbor Remembrance Day), at age 97, in a Los Angeles hospital. Following his death, President Donald Trump issued a statement of condolences stating Yeager "was one of the greatest pilots in history, a proud West Virginian, and an American original who relentlessly pushed the boundaries of human achievement".
|
6186
|
16416757
|
https://en.wikipedia.org/wiki?curid=6186
|
Cajun cuisine
|
Cajun cuisine ( , ) is a subset of Louisiana cooking developed by the Cajuns, itself a Louisianan development incorporating elements of Native American, West African, French, and Spanish cuisine.
Cajun cuisine is often referred to as a "rustic" cuisine, meaning that it is based on locally available ingredients and that preparation is simple. Cajuns historically cooked their dishes, gumbo for example, in one pot.
Crawfish, shrimp, and andouille sausage are staple meats used in a variety of dishes. The aromatic vegetables green bell pepper (), onion, and celery are called "the trinity" by chefs in Cajun and Louisiana Creole cuisines. Roughly diced and combined in cooking, the method is similar to the use of the "mirepoix" in traditional French cuisine which blends roughly diced carrot, onion, and celery. Additional characteristic aromatics for both the Creole and Cajun versions may include parsley, bay leaf, thyme, green onions, ground cayenne pepper, and ground black pepper. Cayenne and Louisiana-style hot sauce are the primary sources of spice in Cajun cuisine, which usually tends towards a moderate, well-balanced heat, despite the national "Cajun hot" craze of the 1980s and 1990s.
History.
The Acadians were a group of French colonists who lived in Acadia, what is today Eastern Canada. In the mid-18th century, they were deported from Acadia by the British during the French and Indian War in what they termed "le Grand Dérangement", and many of them ended up settling in southern Louisiana.
Due to the extreme change in climate from that of Acadia, Acadians were unable to cook their original dishes. Soon, their former culinary traditions were adapted and, in time, incorporated not only Native American traditions, but also African-American traditions—as is exemplified in the classic Cajun dish "gumbo", which takes its name from the word for its principal ingredient, okra, in the West African Bambara language. In Louisiana, the Acadian settlers replaced the whole wheat bread they were accustomed to with cornbread, which by the beginning of the 19th century they were eating with cane syrup. Between 1790 and 1810 most Louisiana Acadians bought one to three enslaved black persons, many of whom had come from the West Indies and from whom they learned the use of new ingredients, including okra, to incorporate in their cuisine. The ragu sauces that the Cajuns developed are very similar to sauces used in French West Africa, possibly introduced by enslaved cooks.
Many other meals developed along these lines, adapted in no small part from Haiti, to become what is now considered classic Cajun cuisine traditions (not to be confused with the more modern concept associated with Prudhomme's style).
Up through the 20th century, the meals were not elaborate but rather basic. The public's false perception of "Cajun" cuisine was based on Prudhomme's style of Cajun cooking, which was spicy, flavorful, and not true to the classic form of the cuisine.
Cajun and Creole cuisine have mistakenly been considered the same, but the origins of Creole cooking are in New Orleans, and Cajun cooking arose 40 years after its establishment. Today, most restaurants serve dishes that consist of Cajun styles, which Paul Prudhomme dubbed "Louisiana cooking". In Cajun home-cooking, these individual styles are still kept separate. However, there are fewer and fewer people cooking the classic Cajun dishes that would have been eaten by the original settlers.
Cultural aspects.
According to political scientist Kevin V. Mulcahy writing on cultural identity, Cajun cuisine today is different from that of the 19th and early 20th centuries, but still defines Cajun culture for many people within and outside Acadiana. Its heritage reflects French, Spanish, American Indian, German, and Afro-Caribbean influences. Cajun food is the result of this assimilation or "cultural blending". Rural Cajun cuisine is distinct from the urban Creole cuisine, having arisen by economic necessity among the Acadian immigrants who came to Louisiana in the 18th century. These settlers lived off the land and survived on foods they could obtain by hunting, fishing, ranching, foraging, or growing crops.
Although there is a large variety of dishes within the regions that make up Cajun country in Louisiana, rural Cajuns generally prefer strong dark roast coffee, highly seasoned foods, hot peppers, vegetables smothered in brown gravy, and one-pot dishes served with rice. Each region has its own specialties, such as "andouille" sausage on the west bank of the Mississippi River above New Orleans, formerly known as the German Coast; barbecued shrimp in Terrebonne Parish; tasso ham made from hog's shoulder in the area around Opelousas; and crawfish all across the parishes of southern Louisiana, where they are abundant in the fresh water wetlands and waterways.
Many Cajun recipes are based on rice and the "holy trinity" of onions, celery, and green pepper, and use locally caught shell fish such as shrimp and crawfish. Much of Cajun cookery starts with a "roux" made of wheat flour cooked and slowly stirred with a fat such as oil, butter or lard, known especially as the base for étouffée, gumbo and sauce piquante. Cajun cooks in south Louisiana historically have cooked meals in single pots, and still cook meats by braising. Almost all Cajun households had gardens up until the latter years of the 20th century, and lifted regional culinary standards by adding the fresh vegetables they grew to their dishes.
There was continuity in cuisines between the southern Bayou Teche area and the northern boundary of Cajun country in Avoyelles Parish. Fresh sausage, pork, and the use of salt and pepper as the main seasonings were universal in the region's foodway traditions, north and south. The role of seafood in the cuisine of the southern parishes distinguished it from that of the prairies, where more wild game was consumed instead.
Anthropologist Charlotte Paige Gutierrez has written extensively on Louisiana's traditional foodways. She writes: "The term foodways, as it is now used by writers in various disciplines, has a broad definition. The study of foodways may include the production, distribution, preparation, preservation, serving, and eating of food, as well as the social, symbolic, psychological, and behavioral aspects of food." Modern conveniences influenced Louisiana's culinary traditions: with the introduction of electricity and refrigerators, consuming freshly butchered meat immediately was not imperative as in the past, thus community events such as hog-killings ("boucheries") occurred less frequently. Improved transportation and increased incomes made food stores more accessible and buying produce became more affordable for working families. Cajuns now bought their bread at a grocery store rather than baking their own. According to Gutierrez, when the economy of southern Louisiana boomed with the expansion of oil industry operations in the 1970s, Cajuns gained a renewed pride in their ethnicity.
Only those Cajuns who live near the coast are able to regularly harvest seafood such as crabs, oysters, shrimp, and saltwater fish directly from their habitats. Shrimping, crabbing, fishing, frog-gigging, and gardening have been practiced in Terrebonne and Lafourche Parishes as subsistence and commercial pursuits for many generations. Before the introduction of modern transportation and refrigeration, Cajuns who lived in the southwestern prairie parishes away from the coast had little opportunity to incorporate seafood into their diets. Today, fresh seafood is available all across Acadiana, so that now it is regarded as a regional food rather than one available only to coastal residents.
The cooking traditions of the western prairies and those of the Bayou country in southeastern Louisiana overlap in the lower and middle Bayou Teche region. The complicated network of lakes, streams, bayous, and the flood plains with their rich soil characterize the terrain of Iberia, St. Martin, and St. Mary parishes. The traditional cuisine uses those resources available in the area: pork from hog farms on the plains and seafood from the lowlands.
Seasoning practices in the Teche country occupy a middle place between the salt and black pepper-based approach to spices in the Bayou country and the prevalent use of cayenne pepper in the Cajun prairies. People along the lower and middle Teche use cayenne more often than in the Laforche area. Hot pepper sauce has a more dominant role in the Teche country cuisine than in other Cajun regions.
In the upper Teche region, wild game, freshwater fish, and pork are important in the local diet, with rabbit, duck, and venison being eaten more often than among their neighbors to the south. Avoyelles Parish, along the northern edge of Cajun country where cultural influences converge, shares some of these dietary features, although local cooking traditions are somewhat different than in the Teche country. Natives of the parish make fresh sausage, but cling to certain European customs, notably the preparation of "cochon de lait rôti", or roasted suckling pig. After the young pigs are slaughtered, they are suspended vertically by a rope tied to a tree limb and hung over a hardwood fire. For even cooking, the pig is rotated with a stick, and halfway through the roasting the carcass is turned end for end to assure even heating of the meat. Local cooks have constructed improvised rotisseries, some fitting theirs with small motors for mechanized rotation.
The upper prairie, historically an area of small farms, ranches, and rice fields, has its own distinctive cuisine, well known for its smoked meats and "boudin blanc", white sausage made of pork, rice, and seasonings. Local hardwoods such as oak, pecan, and hickory are used to smoke sausages and tasso. Smoked meats are comparatively rare, however, in other Cajun communities.
Cajun cooking methods.
Deep frying of turkeys or oven-roasted turduckens entered southern Louisiana cuisine more recently. Also, blackening of fish or chicken and barbecuing of shrimp in the shell are excluded because they were not prepared in traditional Cajun cuisine. Blackening was actually an invention by chef Paul Prudhomme in the 1970s, becoming associated with Cajun cooking, and presented as such by him, but is not a true historical or traditional Cajun cooking process.
Ingredients.
In the late 18th century, about the same time that Acadian musicians embraced the Spanish guitar, spices from the Iberian Peninsula were adopted in the Acadian cuisine. With the cross-cultural borrowing that took place between them and their neighbors in southern Louisiana, Acadians were eating African okra and American Indian corn by the time of the Louisiana Purchase (1803) in such dishes as "gumbo", "pain de maïs", and "soupe de maïs", which did not closely resemble the African and Indian versions.
The following is a partial list of ingredients used in Cajun cuisine and some of the staple ingredients of the Acadian food culture.
Meat and seafood.
Cajun foodways include many ways of preserving meat, some of which are waning due to the availability of refrigeration and mass-produced meat at the grocer. Smoking of meats remains a fairly common practice, but once-common preparations such as turkey or duck confit (preserved in poultry fat, with spices) are now seen even by Acadians as quaint rarities.
Game (and hunting) are still uniformly popular in Acadiana.
The recent increase of catfish farming in the Mississippi Delta has brought about an increase in its usage in Cajun cuisine in place of the more traditional wild-caught speckled trout.
Beef and dairy.
Though parts of Acadiana are well suited to cattle or dairy farming, beef is not often used in a pre-processed or uniquely Cajun form. It is usually prepared fairly simply as chops, stews, or steaks, taking a cue from Texas to the west. Ground beef is used as is traditional throughout the US, although seasoned differently.
Dairy farming is not as prevalent as in the past, but there are still some farms in the business. There are no unique dairy items prepared in Cajun cuisine. Traditional Cajun and New Orleans Creole-influenced desserts are common.
Seasonings.
Thyme, sage, mint, marjoram, savory, and basil are considered sweet herbs. In colonial times, herbes de Provence would be several sweet herbs tied up in muslin.
Cajun seasonings consist of a blend of salt with a variety of spices, most common being cayenne pepper and garlic. The spicy heat comes from the cayenne pepper, while other flavors come from bell pepper, paprika, green onions, parsley and more.
Preparation of a dark roux is probably the most involved or complicated procedure in Cajun cuisine, involving heating fat and flour very carefully, constantly stirring for about 15–45 minutes (depending on the color of the desired product), until the mixture has darkened in color and developed a nutty flavor. The temperature should not be too high, as a burnt roux renders a dish unpalatable.
A light roux, on the other hand, is better suited for strictly seafood dishes and unsuitable for meat gumbos for the reason that it does not support the heavier meat flavor as well. Pairing roux with protein follows the same orthodox philosophy as pairing wine with protein.
Cajun dishes.
Primary favorites.
Boudin—a type of sausage made from pork, pork liver, rice, garlic, green onions and other spices. It is widely available by the link or pound from butcher shops. "Boudin" is typically stuffed in a natural casing and has a softer consistency than other, better-known sausage varieties. It is usually served with side dishes such as rice dressing, "maque choux" or bread. "Boudin" balls are commonly served in southern Louisiana restaurants and are made by taking the "boudin" out of the casing and frying it in spherical form.
Gumbo—High on the list of favorites of Cajun cooking are the soups called gumbos. Contrary to non-Cajun or Continental beliefs, gumbo does not mean simply "everything in the pot". Gumbo exemplifies the influence of French, Spanish, African and Native American food cultures on Cajun cuisine.
The origins of the word "gumbo" are in West Africa. Kellersberger Vass lists "kingumbo" and "tshingombo" as the Bantu words for okra, while John Laudon of the University of Louisiana says the word "gombo" is a French word that came to the Western Hemisphere from West Africa, where okra was known as "(ki) ngombo" along much of the region's coast.
Both filé and okra can be used as thickening agents in gumbo. Historically, large amounts of filé were added directly to the pot when okra was out of season. While a distinction between filé gumbo and okra gumbo is still held by some, many people enjoy putting filé in okra gumbo simply as a flavoring. Regardless of which is the dominant thickener, filé is also provided at the table and added to taste.
Many claim that gumbo is a Cajun dish, but gumbo was established long before the Acadian arrival.
Its early existence came via the early French Creole culture in New Orleans, Louisiana, where French, Spanish and Africans frequented and also influenced by later waves of Italian, German and Irish settlers.
The backbone of a gumbo is roux, as described above. Cajun gumbo typically favors darker roux, often approaching the color of chocolate or coffee beans. Since the starches in the flour break down more with longer cooking time, a dark roux has less thickening power than a lighter one. While the stovetop method is traditional, flour may also be dry-toasted in an oven for a fat-free roux, or a regular roux may be prepared in a microwave oven for a hands-off method. If the roux is for immediate use, the "trinity" may be sauteed in it, which stops the cooking process.
A classic gumbo is made with chicken and andouille, especially in the colder months, but the ingredients vary according to what is available. Seafood gumbos are also very popular in Cajun country.
Jambalaya—The only certain thing that can be said about jambalaya is that it contains rice, some sort of meat (often chicken, ham, sausage, or a combination), seafood (such as shrimp or crawfish), plus other items that may be available. Usually, it will include green peppers, onions, celery, tomatoes and hot chili peppers. This is also a great pre-Acadian dish, established by the Spanish in Louisiana. Jambalaya may be a tomato-rich New Orleans-style "red" jambalaya of Spanish Creole roots, or a Cajun-style "brown" jambalaya which draws its color and flavor from browned meat and caramelized onions. Historically, tomatoes were not as widely available in Acadiana as the area around New Orleans, but in modern times, both styles are popular across the state. Brown is the style served at the annual World Jambalaya Festival in Gonzales.
Rice and gravy—Rice and gravy dishes are a staple of Cajun cuisine, usually consisting of a brown gravy based on pan drippings, which are deglazed and simmered with extra seasonings and served over steamed or boiled rice.
The dish is traditionally made from cheaper cuts of meat and cooked in a cast-iron pot, typically for an extended time period to let the tough cuts of meat become tender. Beef, pork, chicken or any of a large variety of game meats are used for its preparation. Popular local varieties include hamburger steak, smothered rabbit, turkey necks, and chicken fricassee.
Food as an event.
Crawfish boil.
The crawfish boil is a celebratory event where Cajuns boil crawfish, potatoes, onions and corn in large pots over propane cookers. Lemons and small muslin bags containing a mixture of bay leaves, mustard seeds, cayenne pepper, and other spices, commonly known as "crab boil" or "crawfish boil" are added to the water for seasoning.
The results are then dumped onto large, newspaper-draped tables and in some areas covered in Creole/Cajun spice blends, such as REX, Zatarain's, Louisiana Fish Fry, or Tony Chachere's. Also, cocktail sauce, mayonnaise, and hot sauce are sometimes used. The seafood is scooped onto large trays or plates and eaten by hand.
During times when crawfish are not abundant, shrimp and crabs are prepared and served in the same manner.
Attendees are encouraged to "suck the head" of a crawfish by separating the head from the abdomen of the crustacean and sucking out the fat and juices from the head.
Often, newcomers to the crawfish boil or those unfamiliar with the traditions are jokingly warned "not to eat the dead ones." This comes from the common belief that when live crawfish are boiled, their tails curl beneath themselves, but when dead crawfish are boiled, their tails are straight and limp.
Seafood boils with crabs and shrimp are also popular.
Family.
The traditional Cajun outdoor food event, the "boucherie" (hog slaughter), is hosted by a farmer in the rural areas of Acadiana. Family and friends of the farmer gather to socialize, play games, dance, drink, and share a copious meal of pork and other dishes. The men have the task of slaughtering the hog, cutting it into usable parts, and cooking the main pork dishes, while the women have the task of making boudin.
Similar to a family "boucherie", the "cochon de lait" is a food event that revolves around pork but does not need to be hosted by a farmer. Traditionally, a suckling pig was purchased for the event, but in modern "cochons de lait", adult pigs are used.
Unlike the family "boucherie", a hog is not butchered by the hosts, and there are generally not as many guests or activities. The host and male guests have the task of roasting the pig (see pig roast), while female guests bring side dishes.
Rural Mardi Gras.
The traditional Cajun Mardi Gras (see: "Courir de Mardi Gras") is a Mardi Gras celebration in rural Cajun Parishes. The tradition originated in the 18th century with the Cajuns of Louisiana, but it was abandoned in the early 20th century because of unwelcome violence associated with the event. In the early 1950s the tradition was revived in Mamou in Evangeline Parish.
The event revolves around male maskers on horseback who ride into the countryside to collect food ingredients for the party later on. They entertain householders with Cajun music, dancing, and festive antics in return for the ingredients. The preferred ingredient is fresh chicken: the householder throws a live chicken to the maskers, allowing them to chase it down (symbolizing a hunt); other ingredients include rice, sausage, vegetables, or a frozen chicken if a live one is not available.
Unlike at other Cajun events, men take no part in cooking the main course for the party; the women prepare the chicken and other ingredients for the gumbo. Once the festivities begin, the Cajun community members eat and dance to Cajun music until midnight, after which Lent begins.
|
6187
|
1300080179
|
https://en.wikipedia.org/wiki?curid=6187
|
Cologne
|
Cologne ( ; ; ) is the largest city of the German state of North Rhine-Westphalia and the fourth-most populous city of Germany with nearly 1.1 million inhabitants in the city proper and over 3.1 million people in the Cologne Bonn urban region. Cologne is also part of the Rhine-Ruhr metropolitan region, the second biggest metropolitan region by GDP in the European Union. Centered on the left (west) bank of the Rhine, Cologne is located on the River Rhine (Lower Rhine), about 35 km (22 mi) southeast of the North Rhine-Westphalia state capital Düsseldorf and 22 km (14 mi) northwest of Bonn, the former capital of West Germany.
The city's medieval Cologne Cathedral ("Kölner Dom") was the tallest building in the world from 1880 to 1890 and is today the third-tallest church and tallest cathedral in the world. It was constructed to house the Shrine of the Three Kings and is a globally recognized landmark and one of the most visited sights and pilgrimage destinations in Europe. The cityscape is further shaped by the Twelve Romanesque churches of Cologne. Cologne is famous for Eau de Cologne, which has been produced in the city since 1709; "cologne" has since come to be a generic term.
Cologne was founded and established in Germanic Ubii territory in the 1st century AD as the Roman Colonia Agrippina, hence its name. Agrippina was later dropped (except in Latin), and Colonia became the name of the city in its own right, which developed into modern German as Köln. Cologne, the French version of the city's name, has become standard in English as well. Cologne functioned as the capital of the Roman province of Germania Inferior and as the headquarters of the Roman military in the region until occupied by the Franks in 462. During the Middle Ages the city flourished, being located on one of the most important major trade routes between eastern and western Europe (including the Brabant Road, Via Regia and Publica). Cologne was a free imperial city of the Holy Roman Empire and one of the major members of the Hanseatic League trading confederation. It was one of the largest European cities in medieval and renaissance times.
Prior to World War II, the city had undergone occupations by the French (1794–1815) and the British (1918–1926), and was part of Prussia beginning in 1815. Cologne was one of the most heavily bombed cities in Germany during World War II. The bombing reduced the population by 93% mainly due to evacuation, and destroyed around 80% of the millennia-old city center. The post-war rebuilding has resulted in a mixed cityscape, restoring most major historic landmarks like city gates and churches (31 of them being Romanesque). The city nowadays consists of around 25% pre World War II buildings and boasts around 9,000 historic buildings.
Cologne is a major cultural center for the Rhineland; it hosts more than 30 museums and hundreds of galleries. There are many institutions of higher education, most notably the University of Cologne, one of Europe's oldest and largest universities; the Technical University of Cologne, Germany's largest university of applied sciences; and the German Sport University Cologne. It hosts three Max Planck science institutes and is a major research hub for the aerospace industry, hosting the headquarters of the German Aerospace Center and the European Astronaut Centre. Lufthansa, Europe's largest airline, has its main corporate headquarters in Cologne. The city also has a significant chemical and automobile industry. Cologne Bonn Airport is a regional hub, the main airport for the region being Düsseldorf Airport. The Cologne Trade Fair hosts a number of trade shows.
History.
Roman Cologne.
The first urban settlement on the grounds of modern-day Cologne was "Oppidum Ubiorum", founded in 38 BC by the Ubii, a Cisrhenian Germanic tribe. In AD 50, the Romans founded Colonia Claudia Ara Agrippinensium (Cologne) on the river Rhine, a colonia named after Emperor Claudius and his wife Agrippina the Younger, who was born there. In 85, the city became the provincial capital of Germania Inferior. Considerable Roman remains can be found in present-day Cologne, especially near the wharf area, where a 1,900-year-old Roman boat was discovered in late 2007. From 260 to 271, Cologne was the capital of the Gallic Empire under Postumus, Marius, and Victorinus. In 310, under emperor Constantine I, a bridge was built over the Rhine at Cologne. Roman imperial governors resided in the city and it became one of the most important trade and production centers in the Roman Empire north of the Alps. Cologne is shown on the 4th century Peutinger Map.
Maternus, who was elected as bishop in 313, was the first known bishop of Cologne. The city was the capital of a Roman province until it was occupied by the Ripuarian Franks in 462. Parts of the original Roman sewers are preserved underneath the city, with the new sewerage system having opened in 1890.
After the destruction of the Second Temple in the Siege of Jerusalem and the associated dispersion (diaspora) of the Jews, there is evidence of a Jewish community in Cologne. In 321, Emperor Constantine approved the settlement of a Jewish community with all the freedoms of Roman citizens. It is assumed that it was located near the Marspforte within the city wall. The Edict of Constantine to the Jews is the oldest documented evidence of a Jewish community in Germany.
Middle Ages.
Early medieval Cologne was part of Austrasia within the Frankish Empire. Cunibert, made bishop of Cologne in 623, was an important advisor to the Merovingian King Dagobert I and served with domesticus Pepin of Landen as tutor to the king's son and heir Siegebert III, the future king of Austrasia. In 716, Charles Martel commanded an army for the first time and suffered the only defeat of his life when Chilperic II, King of Neustria, invaded Austrasia and the city fell to him in the Battle of Cologne. Charles fled to the Eifel mountains, rallied supporters and took the city back that same year after defeating Chilperic in the Battle of Amblève. Cologne had been the seat of a bishop since the Roman period; under Charlemagne, in 795, bishop Hildebold was promoted to archbishop. In the 843 Treaty of Verdun Cologne fell into the dominion of Lothair I's Middle Francia – later called Lotharingia (Lower Lorraine).
In 953, the archbishops of Cologne first gained noteworthy secular power when bishop Bruno was appointed as duke by his brother Otto I, King of Germany. In order to weaken the secular nobility, who threatened his power, Otto endowed Bruno and his archiepiscopal successors with the prerogatives of secular princes, thus establishing the Electorate of Cologne, formed by the temporal possessions of the archbishopric, which eventually included a strip of territory along the left bank of the Rhine east of Jülich, as well as the Duchy of Westphalia on the other side of the Rhine, beyond Berg and Mark. By the end of the 12th century, the Archbishop of Cologne was one of the seven electors of the Holy Roman Emperor. Besides being prince elector, he was Archchancellor of Italy as well, technically from 1238 and permanently from 1263 until 1803.
Following the Battle of Worringen in 1288, Cologne gained its independence from the archbishops and became a Free City. Archbishop Sigfried II von Westerburg was forced to reside in Bonn. The archbishop nevertheless preserved the right of capital punishment. Thus the municipal council (though in strict political opposition towards the archbishop) depended upon him in all matters concerning criminal justice. This included torture, the sentence for which was only allowed to be handed down by the episcopal judge known as the greve. This legal situation lasted until the French conquest of Cologne.
Besides its economic and political significance Cologne also became an important centre of medieval pilgrimage, when Cologne's archbishop, Rainald of Dassel, gave the relics of the Three Wise Men to Cologne's cathedral in 1164 (after they had been taken from Milan). Besides the three magi Cologne preserves the relics of Saint Ursula and Albertus Magnus.
Cologne's location on the river Rhine placed it at the intersection of the major trade routes between east and west as well as the main south–north Western Europe trade route, Venice to Netherlands; even by the mid-10th century, merchants in the town were already known for their prosperity and luxurious standard of living due to the availability of trade opportunities. The intersection of these trade routes was the basis of Cologne's growth. By the end of the 12th century, Archbishop Phillip von Heinsberg enclosed the entire city with walls. By 1300 the city population was 50,000–55,000. Cologne was a member of the Hanseatic League in 1475, when Frederick III confirmed the city's imperial immediacy. Cologne was so influential in regional commerce that its systems of weights and measurements (e.g. the Cologne mark) were used throughout Europe.
Early modern history.
The economic structures of medieval and early modern Cologne were characterised by the city's status as a major harbour and transport hub on the Rhine. Craftsmanship was organised by self-administering guilds, some of which were exclusive to women.
As a free imperial city, Cologne was a self-ruling state within the Holy Roman Empire, an imperial estate with seat and vote at the Imperial Diet, and as such had the right (and obligation) to contribute to the defense of the Empire and maintain its own military force. As they wore a red uniform, these troops were known as the "Rote Funken" (red sparks). These soldiers were part of the Army of the Holy Roman Empire ("Reichskontingent"). They fought in the wars of the 17th and 18th century, including the wars against revolutionary France in which the small force was almost completely wiped out in combat. The tradition of these troops is preserved as a military persiflage by Cologne's most outstanding carnival society, the "Rote Funken".
The Free Imperial City of Cologne must not be confused with the Electorate of Cologne, which was a state of its own within the Holy Roman Empire. Since the second half of the 16th century, the majority of archbishops were drawn from the Bavarian Wittelsbach dynasty. Due to the free status of Cologne, the archbishops were usually not allowed to enter the city. Thus they took up residence in Bonn and later in Brühl on the Rhine. As members of an influential and powerful family, and supported by their outstanding status as electors, the archbishops of Cologne repeatedly challenged and threatened the free status of Cologne during the 17th and 18th centuries, resulting in complicated affairs, which were handled by diplomatic means and propaganda as well as by the supreme courts of the Holy Roman Empire.
From the 19th century until World War I.
Cologne lost its status as a free city during the French period. According to the Treaty of Lunéville (1801) all the territories of the Holy Roman Empire on the left bank of the Rhine were officially incorporated into the French Republic (which had already occupied Cologne in 1794). Thus this region later became part of Napoleon's Empire. Cologne was part of the French Département Roer (named after the river Roer, German: Rur) with Aachen (French: Aix-la-Chapelle) as its capital. The French modernised public life, for example by introducing the Napoleonic code and removing the old elites from power. The Napoleonic code remained in use on the left bank of the Rhine until 1900, when a unified civil code (the "Bürgerliches Gesetzbuch") was introduced in the German Empire. In 1815 at the Congress of Vienna, Cologne was made part of the Kingdom of Prussia, first in the Province of Jülich-Cleves-Berg and then the Rhine Province.
The permanent tensions between the Catholic Rhineland and the overwhelmingly Protestant Prussian state repeatedly escalated with Cologne being in the focus of the conflict. In 1837 the archbishop of Cologne, Clemens August von Droste-Vischering, was arrested and imprisoned for two years after a dispute over the legal status of marriages between Catholics and Protestants ("Mischehenstreit"). In 1874, during the Kulturkampf, Archbishop Paul Melchers was imprisoned before taking asylum in the Netherlands. These conflicts alienated the Catholic population from Berlin and contributed to a deeply felt anti-Prussian resentment, which was still significant after World War II, when the former mayor of Cologne, Konrad Adenauer, became the first West German chancellor.
During the 19th and 20th centuries, Cologne absorbed numerous surrounding towns, and by World War I had already grown to 700,000 inhabitants. Industrialisation changed the city and spurred its growth. Vehicle and engine manufacturing was especially successful, though the heavy industry was less ubiquitous than in the Ruhr area. The cathedral, started in 1248 but abandoned around 1560, was eventually finished in 1880 not just as a place of worship but also as a German national monument celebrating the newly founded German empire and the continuity of the German nation since the Middle Ages. Some of this urban growth occurred at the expense of the city's historic heritage with much being demolished (for example, the city walls or the area around the cathedral) and sometimes replaced by contemporary buildings.
Cologne was designated as one of the Fortresses of the German Confederation. It was turned into a heavily armed fortress (opposing the French and Belgian fortresses of Verdun and Liège) with two fortified belts surrounding the city, the remains of which can be seen to this day. The military demands on what became Germany's largest fortress presented a significant obstacle to urban development, with forts, bunkers, and wide defensive dugouts completely encircling the city and preventing expansion; this resulted in a very densely built-up area within the city itself.
During World War I Cologne was the target of several minor air raids but suffered no significant damage. Cologne was occupied by the British Army of the Rhine until 1926, under the terms of the Armistice and the subsequent Versailles Peace Treaty. In contrast with the harsh behaviour of the French occupation troops in Germany, the British forces were more lenient to the local population. Konrad Adenauer, the mayor of Cologne from 1917 until 1933 and later a West German chancellor, acknowledged the political impact of this approach, especially since Britain had opposed French demands for a permanent Allied occupation of the entire Rhineland.
As part of the demilitarisation of the Rhineland, the city's fortifications had to be dismantled. This was an opportunity to create two green belts ("Grüngürtel") around the city by converting the fortifications and their fields of fire into large public parks. This was not completed until 1933. In 1919 the University of Cologne, closed by the French in 1798, was reopened. This was considered to be a replacement for the loss of the University of Strasbourg on the west bank of the Rhine, which reverted to France with the rest of Alsace. Cologne prospered during the Weimar Republic (1919–33), and progress was made especially in public governance, city planning, housing and social affairs. Social housing projects were considered exemplary and were copied by other German cities. Cologne competed to host the Olympics, and a modern sports stadium was erected at Müngersdorf. When the British occupation ended, the prohibition of civil aviation was lifted and Cologne Butzweilerhof Airport soon became a hub for national and international air traffic, second in Germany only to Berlin Tempelhof Airport.
The democratic parties lost the local elections in Cologne in March 1933 to the Nazi Party and other extreme-right parties. The Nazis then arrested the Communist and Social Democratic members of the city assembly, and Mayor Adenauer was dismissed. Compared to some other major cities, however, the Nazis never gained decisive support in Cologne. (Significantly, the number of votes cast for the Nazi Party in Reichstag elections had always been below the national average.) By 1939, the population had risen to 772,221 inhabitants.
World War II.
During World War II, Cologne was a Military Area Command Headquarters () for Wehrkreis VI (headquartered at Münster). Cologne was under the command of Lieutenant-General Freiherr Roeder von Diersburg, who was responsible for military operations in Bonn, Siegburg, Aachen, Jülich, Düren, and Monschau. Cologne was home to the 211th Infantry Regiment and the 26th Artillery Regiment.
The Allies dropped 44,923.2 tons of bombs on the city during World War II, destroying 61% of its built-up area. During the Bombing of Cologne in World War II, Cologne endured 262 air raids by the Western Allies, which caused approximately 20,000 civilian casualties and almost completely wiped out the central part of the city. During the night of 31 May 1942, Cologne was the target of "Operation Millennium", the first 1,000 bomber raid by the Royal Air Force in World War II. 1,046 heavy bombers attacked their target with 1,455 tons of explosives, approximately two-thirds of which were incendiary. This raid lasted about 75 minutes, destroyed around 600 acres (243 ha) of built-up area (61%), killed 486 civilians and made 59,000 people homeless. The devastation was recorded by Hermann Claasen from 1942 until the end of the war, and presented in his exhibition and book of 1947 "Singing in the furnace. Cologne – Remains of an old city".
Cologne was taken by the American First Army in early March 1945 during the Invasion of Germany after a battle. By the end of the war, the population of Cologne had been reduced by 95%. This loss was mainly caused by a massive evacuation of the people to more rural areas. The same happened in many other German cities in the last two years of war. By the end of 1945, however, the population had already recovered to approximately 450,000. By the end of the war, essentially all of Cologne's pre-war Jewish population of 11,000 had been deported or killed by the Nazis. The six synagogues of the city were destroyed. The synagogue on Roonstraße was rebuilt in 1959.
Post-war and Cold War eras.
Despite Cologne's status as the largest city in the region, nearby Düsseldorf was chosen as the political capital of the federated state of North Rhine-Westphalia. With Bonn being chosen as the provisional federal capital ("provisorische Bundeshauptstadt") and seat of the government of the Federal Republic of Germany (then informally West Germany), Cologne benefited by being sandwiched between two important political centres. The city became–and still is–home to a number of federal agencies and organizations. After reunification in 1990, Berlin was made the capital of Germany.
In 1945 architect and urban planner Rudolf Schwarz called Cologne the "world's greatest heap of rubble". Schwarz designed the master plan for reconstruction in 1947, which included the construction of several new thoroughfares through the city centre, especially the "Nord-Süd-Fahrt" ("North-South-Drive"). The master plan took into consideration the fact that even shortly after the war a large increase in automobile traffic could be anticipated. Plans for new roads had already, to a certain degree, evolved under the Nazi administration, but the actual construction became easier when most of the city centre was in ruins.
The destruction of 95% of the city centre, including the famous Twelve Romanesque churches such as St. Gereon, Great St. Martin, St. Maria im Kapitol and several other monuments in World War II, meant a tremendous loss of cultural treasures. The rebuilding of those churches and other landmarks such as the Gürzenich event hall was not undisputed among leading architects and art historians at that time, but in most cases, civil intention prevailed. The reconstruction lasted until the 1990s, when the Romanesque church of St. Kunibert was finished.
In 1959, the city's population reached pre-war numbers again. It then grew steadily, exceeding 1 million for about one year from 1975. It remained just below that until mid-2010, when it exceeded 1 million again.
Post-reunification.
In the 1980s and 1990s Cologne's economy prospered for two main reasons. The first was the growth in the number of media companies, both in the private and public sectors; they are especially catered for in the newly developed Media Park, which creates a strong visual focal point in Cologne's city centre and includes the "KölnTurm", one of Cologne's most prominent high-rise buildings. The second was the permanent improvement of the diverse traffic infrastructure, which made Cologne one of the most easily accessible metropolitan areas in Central Europe.
Due to the economic success of the Cologne Trade Fair, the city arranged a large extension to the fair site in 2005. At the same time the original buildings, which date back to the 1920s, were rented out to RTL, Germany's largest private broadcaster, as their new corporate headquarters.
Cologne was the focus of the 2015-16 New Year's Eve sexual assaults in Germany, with over 500 women reporting that they were sexually assaulted by persons of African and Arab appearance.
Geography.
The metropolitan area encompasses over , extending around a central point that lies at 50° 56′ 33″ N latitude and 6° 57′ 32″ E longitude. The city's highest point is above sea level (the Monte Troodelöh) and its lowest point is above sea level (the Worringer Bruch). The city of Cologne lies within the larger area of the Cologne Lowland, a cone-shaped area of the central Rhineland that lies between Bonn, Aachen and Düsseldorf.
Districts.
Cologne is divided into 9 boroughs ("Stadtbezirke") and 85 districts ("Stadtteile"):
Climate.
Located in the Rhine-Ruhr area, Cologne is one of the warmest cities in Germany. It has a temperate–oceanic climate (Köppen: "Cfb") with cool winters and warm summers. It is also one of the cloudiest cities in Germany, with just 1,567.5 hours of sun a year. Its average annual temperature is : during the day and at night. In January, the mean temperature is , while the mean temperature in July is . The record high temperature was recorded on 25 July 2019 during the July 2019 European heat wave, in which Cologne saw three consecutive days over . The inner urban neighbourhoods in particular experience a greater number of hot days, as well as significantly higher temperatures at night, compared to the surrounding area (including the airport, where temperatures are classified). Still, temperatures can vary noticeably over the course of a month, with warmer and colder spells. Precipitation is spread evenly throughout the year with a light peak in summer due to showers and thunderstorms.
Progressing climate change is apparent when these figures are compared with the climate data of previous decades, which show lower mean temperatures.
Flood protection.
Cologne is regularly affected by flooding from the Rhine and is considered the most flood-prone European city. A city agency ("Stadtentwässerungsbetriebe Köln", "Cologne Urban Drainage Operations") manages an extensive flood control system which includes both permanent and mobile flood walls, protection from rising waters for buildings close to the river banks, monitoring and forecasting systems, pumping stations and programmes to create or protect floodplains, and river embankments. The system was redesigned after a 1993 flood, which resulted in heavy damage.
Demographics.
In the Roman Empire, the city was large and rich with a population of 40,000 in 100–200 AD. The city was home to around 20,000 people in 1000 AD, growing to 50,000 in 1200 AD. The Rhineland metropolis still had 50,000 residents in 1300 AD.
Cologne is the fourth-largest city by population in Germany after Berlin, Hamburg and Munich. As of 31 December 2021, there were 1,079,301 people registered as living in Cologne in an area of , which makes Cologne the third-largest German city by area. The population density was . Cologne first reached a population of 1,000,000 in 1975 due to the incorporation of Wesseling; however, this was reversed after public opposition. In 2009 Cologne's population again reached 1,000,000, and it became one of the four cities in Germany with a population exceeding one million. The metropolitan area of the Cologne Bonn Region is home to 3,573,500 people. It is part of the polycentric megacity region Rhine-Ruhr, which has a population of over 11,000,000.
There were 551,528 women and 527,773 men in Cologne. In 2021, there were 11,127 births in Cologne; 5,844 marriages and 1,808 divorces, and 10,536 deaths. In the city, the population was spread out, with 16.3% under the age of 18, and 17.8% were 65 years of age or older. 203 people in Cologne were over the age of 100.
According to the Statistical Office of the City of Cologne, the number of people with a migrant background is at 40.5% (436,660). 2,254 people acquired German citizenship in 2021. In 2021, there were 559,854 households, of which 18.4% had children under the age of 18; 51% of all households were made up of singles. 8% of all households were single-parent households. The average household size was 1.88.
Residents with foreign citizenship.
Cologne residents with a foreign citizenship as of 31 December 2021 is as follows:
Turkish community.
Cologne is home to 90,000 people of Turkish origin and has the second-largest Turkish population of any German city, after Berlin. Cologne has a Little Istanbul in Keupstraße with many Turkish restaurants and markets. Famous Turkish-German people like rapper Eko Fresh and TV presenter Nazan Eckes were born in Cologne.
Language.
Colognian or Kölsch () (natively "Kölsch Platt") is a small set of very closely related dialects, or variants, of the Ripuarian Central German group of languages. These dialects are spoken in the area covered by the Archdiocese and former Electorate of Cologne reaching from Neuss in the north to just south of Bonn, west to Düren and east to Olpe in the North-West of Germany. Kölsch is one of the very few city dialects in Germany, which also include the dialect spoken in Berlin, for example.
Religion.
As of 2015, 35.5% of the population belonged to the Catholic Church, the largest religious body, and 15.5% to the Protestant Church. Irenaeus of Lyons claimed that Christianity was brought to Cologne by Roman soldiers and traders at an unknown early date. It is known that in the early second century it was a bishop's seat. The first historical Bishop of Cologne was Saint Maternus. Thomas Aquinas studied in Cologne in 1244 under Albertus Magnus. Cologne is the seat of the Archdiocese of Cologne.
According to the 2011 census, 2.1% of the population was Eastern Orthodox, 0.5% belonged to an Evangelical Free Church and 4.2% belonged to further religious communities officially recognized by the state of North Rhine-Westphalia (such as Jehovah's Witnesses).
There are several mosques, including the Cologne Central Mosque run by the Turkish-Islamic Union for Religious Affairs. In 2011, about 11.2% of the population was Muslim.
Cologne also has one of the oldest and largest Jewish communities in Germany. In 2011, 0.3% of Cologne's population was Jewish.
On 11 October 2021, the Mayor of Cologne, Henriette Reker, announced that all of Cologne's 35 mosques would be allowed to broadcast the Adhan (prayer call) for up to five minutes on Fridays between noon and 3 p.m. She commented that the move "shows that diversity is appreciated and loved in Cologne".
Government and politics.
The city's administration is headed by the mayor and the three deputy mayors.
Political traditions and developments.
The long tradition of a free imperial city, which long dominated an exclusively Catholic population and the age-old conflict between the church and the bourgeoisie (and within it between the patricians and craftsmen) have created its own political climate in Cologne. Various interest groups often form networks beyond party boundaries. The resulting web of relationships, with political, economic, and cultural links with each other in a system of mutual favours, obligations and dependencies, is called the 'Cologne coterie'. This has often led to an unusual proportional distribution in the city government and degenerated at times into corruption: in 1999, a "waste scandal" over kickbacks and illegal campaign contributions came to light, which led not only to the imprisonment of the entrepreneur Hellmut Trienekens, but also to the downfall of almost the entire leadership of the ruling Social Democrats.
Mayor.
The incumbent Lord Mayor of Cologne is Henriette Reker. She received 52.66% of the vote at the municipal election on 18 October 2015, running as an independent with the support of the CDU, FDP, and Greens. She took office on 15 December 2015. Reker was re-elected to a second term in a runoff election on 27 September 2020, in which she received 59.27% of the vote.
The most recent mayoral election was held on 13 September 2020, with a runoff held on 27 September, and the results were as follows:
                              First round           Second round
                              Votes       %         Votes       %
Valid votes                   415,933     98.7      294,016     99.1
Invalid votes                 5,633       1.3       2,727       0.9
Total                         421,566     100.0     296,743     100.0
Electorate / voter turnout    820,527     51.4      818,731     36.2
City council.
The Cologne city council ("Kölner Stadtrat") governs the city alongside the Mayor. It serves a term of five years. The most recent city council election was held on 13 September 2020, and the results were as follows:
                              Votes       %
Valid votes                   417,227     98.9
Invalid votes                 4,596       1.1
Total                         421,823     100.0
Seats                         90 (±0)
Electorate / voter turnout    820,526     51.4 (+1.8)
State Landtag.
In the Landtag of North Rhine-Westphalia, Cologne is divided among seven constituencies. After the 2022 North Rhine-Westphalia state election, the composition and representation of each was as follows:
Federal parliament.
In the Bundestag, Cologne is divided among four constituencies. In the 20th Bundestag, elected 26 September 2021, the composition and representation of each was as follows:
Cityscape.
The inner city of Cologne was largely destroyed during World War II. The reconstruction of the city followed the style of the 1950s, while respecting the old layout and naming of the streets. Thus, the city centre today is characterized by modern architecture, with a few interspersed pre-war buildings which were reconstructed due to their historical importance. Some buildings of the "Wiederaufbauzeit" (era of reconstruction), for example, the opera house by Wilhelm Riphahn, are nowadays regarded as classics of modern architecture. Nevertheless, the uncompromising style of the Cologne Opera house and other modern buildings has remained controversial.
The districts outside the city center consist mostly of 19th and 20th century buildings. Around 25% of Cologne was built before 1945.
Green areas account for over a quarter of Cologne, which is approximately of public green space for every inhabitant.
Wildlife.
The dominant wildlife of Cologne is insects, small rodents, and several species of birds. Pigeons are the most often seen animals in Cologne, although the number of birds is augmented each year by a growing population of feral exotics, most visibly parrots such as the rose-ringed parakeet. The sheltered climate in southeast Northrhine-Westphalia allows these birds to survive through the winter, and in some cases, they are displacing native species. The plumage of Cologne's green parrots is highly visible even from a distance, and contrasts starkly with the otherwise muted colours of the cityscape.
Hedgehogs, rabbits and squirrels are common in parks and the greener parts of town. In the outer suburbs foxes and wild boar can be seen, even during the day.
Tourism.
Cologne had 5.8 million overnight stays booked and 3.35 million arrivals in 2016.
Landmarks.
Medieval houses.
The Cologne City Hall ("Kölner Rathaus"), founded in the 12th century, is the oldest city hall in Germany still in use. The Renaissance-style loggia and tower were added in the 15th century. Other famous buildings include the Gürzenich, Haus Saaleck and the Overstolzenhaus.
Medieval city gates.
Of the twelve medieval city gates that once existed, only the Eigelsteintorburg at Ebertplatz, the Hahnentor at Rudolfplatz and the Severinstorburg at Chlodwigplatz still stand today.
Bridges.
Several bridges cross the Rhine in Cologne. They are (from south to north): the Rodenkirchen Bridge, South Bridge (railway), Severin Bridge, Deutz Bridge, Hohenzollern Bridge (railway), Zoo Bridge ("Zoobrücke") and Mülheim Bridge. In particular the iron tied arch Hohenzollern Bridge ("Hohenzollernbrücke") is a dominant landmark along the river embankment. A Rhine crossing of a special kind is provided by the Cologne Cable Car (German: "Kölner Seilbahn"), a cableway that runs across the river between the Cologne Zoological Garden in Riehl and the Rheinpark in Deutz.
High-rise structures.
Cologne's tallest structure is the Colonius telecommunication tower at . The observation deck has been closed since 1992. A selection of the tallest buildings in Cologne is listed below. Other tall structures include the Hansahochhaus (designed by architect Jacob Koerfer and completed in 1925 – it was at one time Europe's tallest office building), the Kranhaus buildings at Rheinauhafen, and the Messeturm Köln ("trade fair tower").
Culture.
Cologne has numerous museums. The famous Roman-Germanic Museum features art and architecture from the city's distant past; the Museum Ludwig houses one of the most important collections of modern art in Europe, including a Picasso collection matched only by the museums in Barcelona and Paris. The Museum Schnütgen of religious art is partly housed in St. Cecilia, one of Cologne's Twelve Romanesque churches. Many art galleries in Cologne enjoy a worldwide reputation, such as Galerie Karsten Greve, one of the leading galleries for postwar and contemporary art.
Cologne has more than 60 music venues and the third-highest density of music venues of Germany's four largest cities, after Munich and Hamburg and ahead of Berlin.
Several orchestras are active in the city, among them the Gürzenich Orchestra, which is also the orchestra of the Cologne Opera and the WDR Symphony Orchestra Cologne ("German State Radio Orchestra"), both based at the Cologne Philharmonic Orchestra Building (Kölner Philharmonie). Other orchestras are the Musica Antiqua Köln, the WDR Rundfunkorchester Köln and WDR Big Band, and several choirs, including the WDR Rundfunkchor Köln. Cologne was also an important hotbed for electronic music in the 1950s (Studio für elektronische Musik, Karlheinz Stockhausen) and again from the 1990s onward. The public radio and TV station WDR was involved in promoting musical movements such as Krautrock in the 1970s; the influential Can was formed there in 1968. There are several centres of nightlife, among them the "Kwartier Latäng" (the student quarter around the Zülpicher Straße) and the nightclub-studded areas around Hohenzollernring, Friesenplatz and Rudolfplatz.
The large annual literary festival lit.COLOGNE features regional and international authors. The main literary figure connected with Cologne is the writer Heinrich Böll, winner of the Nobel Prize for Literature. Since 2012, there has also been an annual international festival of philosophy, phil.cologne.
The city also has the most pubs per capita in Germany. Cologne is well known for its beer, called Kölsch. Kölsch is also the name of the local dialect. This has led to the common joke of Kölsch being the only language one can drink.
Cologne is also famous for Eau de Cologne (German: "Kölnisch Wasser"; lit: "Water of Cologne"), a perfume created by Italian expatriate Johann Maria Farina at the beginning of the 18th century. During the 18th century, this perfume became increasingly popular, was exported all over Europe by the Farina family, and "Farina" became a household name for "Eau de Cologne". In 1803 Wilhelm Mülhens entered into a contract with an unrelated person from Italy named Carlo Francesco Farina, who granted him the right to use his family name, and Mülhens opened a small factory at Cologne's Glockengasse. In later years, and after various court battles, his grandson Ferdinand Mülhens was forced to abandon the name "Farina" for the company and their product. He decided to use the house number given to the factory at Glockengasse during the French occupation in the early 19th century, 4711. Today, original Eau de Cologne is still produced in Cologne by both the Farina family, in the eighth generation, and by Mäurer & Wirtz, who bought the 4711 brand in 2006.
Carnival.
The Cologne carnival is one of the largest street festivals in Europe. In Cologne, the carnival season officially starts on 11 November at 11 minutes past 11 a.m. with the proclamation of the new Carnival Season, and continues until Ash Wednesday. However, the so-called "Tolle Tage" (crazy days) do not start until "Weiberfastnacht" (Women's Carnival) or, in dialect, "Wieverfastelovend", the Thursday before Ash Wednesday, which is the beginning of the street carnival. Zülpicher Strasse and its surroundings, Neumarkt square, Heumarkt and all bars and pubs in the city are crowded with people in costumes dancing and drinking in the streets. Hundreds of thousands of visitors flock to Cologne during this time. Generally, around a million people celebrate in the streets on the Thursday before Ash Wednesday.
Rivalry with Düsseldorf.
Cologne and Düsseldorf have a "fierce regional rivalry", which includes carnival parades, ice hockey, football, and beer. People in Cologne prefer Kölsch while people in Düsseldorf prefer Altbier ("Alt"). Waiters and patrons will "scorn" and make a "mockery" of people who order Alt beer in Cologne or Kölsch in Düsseldorf. The rivalry has been described as a "love–hate relationship". The Köln Guild of Brewers was established in 1396. The Kölsch beer style first appeared in the 1800s and in 1986 the breweries established an appellation under which only breweries in the city are allowed to use the term Kölsch.
Music fairs and festivals.
The city was home to the internationally famous Ringfest, and now to the C/o pop festival.
In addition, Cologne enjoys a thriving Christmas Market ("Weihnachtsmarkt") presence with several locations in the city.
Economy.
As the largest city in the Rhine-Ruhr metropolitan region, Cologne benefits from a large market structure. In competition with Düsseldorf, the economy of Cologne is primarily based on insurance and media industries, while the city is also an important cultural and research centre and home to a number of corporate headquarters.
Among the largest media companies based in Cologne are Westdeutscher Rundfunk, RTL Television (with subsidiaries), n-tv, Deutschlandradio, Brainpool TV and publishing houses like J. P. Bachem, Taschen, Tandem Verlag, and M. DuMont Schauberg. Several clusters of media, arts and communications agencies, TV production studios, and state agencies work partly with private and government-funded cultural institutions. Among the insurance companies based in Cologne are Central, DEVK, DKV, Generali Deutschland, Gen Re, Gothaer, HDI Gerling and national headquarters of Axa Insurance, Mitsui Sumitomo Insurance Group and Zurich Financial Services.
The German flag carrier Lufthansa and its subsidiary Lufthansa CityLine have their main corporate headquarters in Cologne. The largest employer in Cologne is Ford Europe, which has its European headquarters and a factory in Niehl (Ford-Werke GmbH). Toyota Motorsport GmbH (TMG), Toyota's official motorsports team, responsible for Toyota rally cars, and then Formula One cars, has its headquarters and workshops in Cologne. Other large companies based in Cologne include the REWE Group, TÜV Rheinland, Deutz AG and a number of Kölsch breweries. The largest three Kölsch breweries of Cologne are Reissdorf, Gaffel, and Früh.
Historically, Cologne has always been an important trade city, with land, air, and sea connections. The city's five Rhine ports together form the second-largest inland port in Germany and one of the largest in Europe. Cologne Bonn Airport is the second-largest freight terminal in Germany. Today, the Cologne trade fair ("Koelnmesse") ranks as a major European trade fair location with over 50 trade fairs and other large cultural and sports events. In 2008 Cologne had 4.31 million overnight stays booked and 2.38 million arrivals. Cologne's largest daily newspaper is the "Kölner Stadt-Anzeiger".
Cologne shows a significant increase in startup companies, especially when considering digital business.
Cologne has also become the first German city with a population of more than a million people to declare a climate emergency.
Transport.
Roads.
Road building had been a major issue in the 1920s under the leadership of mayor Konrad Adenauer. The first German limited-access road was constructed after 1929 between Cologne and Bonn. Today, this is the Bundesautobahn 555. In 1965, Cologne became the first German city to be fully encircled by a motorway ring road. Roughly at the same time, a city centre bypass ("Stadtautobahn") was planned, but only partially put into effect, due to opposition by environmental groups. The completed section became "Bundesstraße ("Federal Road") B 55a", which begins at the "Zoobrücke" ("Zoo Bridge") and meets with A 4 and A 3 at the interchange Cologne East. Nevertheless, it is referred to as "Stadtautobahn" by most locals. In contrast to this, the "Nord-Süd-Fahrt" ("North-South-Drive") was actually completed, a new four/six-lane city centre through-route, which had already been anticipated by planners such as Fritz Schumacher in the 1920s. The last section south of "Ebertplatz" was completed in 1972.
In 2005, the first stretch of an eight-lane motorway in North Rhine-Westphalia was opened to traffic on Bundesautobahn 3, part of the eastern section of the Cologne Beltway between the interchanges Cologne East and Heumar.
Cycling.
Compared to other German cities, Cologne has a traffic layout that is not very bicycle-friendly. It has repeatedly ranked among the worst in an independent evaluation conducted by the Allgemeiner Deutscher Fahrrad-Club. In 2014, it ranked 36th out of 39 German cities with a population greater than 200,000.
Railway.
Cologne has a railway service with InterCity and ICE-trains stopping at Köln Hauptbahnhof (Cologne Main Station), Köln Messe/Deutz and Cologne/Bonn Airport. ICE and TGV Thalys high-speed trains link Cologne with Amsterdam, Brussels (in 1h47, 9 departures/day) and Paris (in 3h14, 6 departures/day). There are frequent ICE trains to other German cities, including Frankfurt am Main and Berlin. ICE trains to London via the Channel Tunnel were planned for 2013.
The Cologne Stadtbahn operated by Kölner Verkehrsbetriebe (KVB) is an extensive light rail system that is partially underground and serves Cologne and a number of neighbouring cities. It evolved from the tram system. Nearby Bonn is linked by both the Stadtbahn and main line railway trains, with occasional recreational boats on the Rhine. Düsseldorf is also linked by S-Bahn trains.
The Rhine-Ruhr S-Bahn has 5 lines which cross Cologne. The S13/S19 runs 24/7 between Cologne Hbf and Cologne/Bonn airport.
Buses.
There are frequent buses covering most of the city and surrounding suburbs, and Eurolines coaches to London via Brussels.
Water.
Häfen und Güterverkehr Köln (Ports and Goods traffic Cologne, HGK) is one of the largest operators of inland ports in Germany. Ports include Köln-Deutz, Köln-Godorf and Köln-Niehl I and II.
Air.
Cologne's international airport is Cologne/Bonn Airport (CGN). It is also called "Konrad Adenauer Airport" after Germany's first post-war Chancellor Konrad Adenauer, who was born in the city and was mayor of Cologne from 1917 until 1933. The airport is shared with the neighbouring city of Bonn. Cologne is headquarters to the European Aviation Safety Agency (EASA).
Education.
Cologne is home to numerous universities and colleges, and hosts some 72,000 students. Its oldest university, the University of Cologne (founded in 1388), is the largest university in Germany, while the Cologne University of Applied Sciences is the largest university of applied sciences in the country. The Cologne University of Music and Dance is the largest conservatory in Europe. Foreigners can take German lessons at the VHS (Adult Education Centre).
Lauder Morijah School, a Jewish school in Cologne, had previously closed. After Russian immigration increased the Jewish population, the school reopened in 2002.
Media.
Within Germany, Cologne is known as an important media centre. Several radio and television stations, including Westdeutscher Rundfunk (WDR), RTL and VOX, have their headquarters in the city. Film and TV production is also important. The city is "Germany's capital of TV crime stories". A third of all German TV productions are made in the Cologne region. Furthermore, the city hosts the Cologne Comedy Festival, which is considered to be the largest comedy festival in mainland Europe.
Sports.
Cologne hosts the football club 1. FC Köln, who currently play in the 2. Bundesliga (second division). They play their home matches in the RheinEnergieStadion, which also hosted five matches of the 2006 FIFA World Cup. The International Olympic Committee and the International Association of Sports and Leisure Facilities gave the RheinEnergieStadion a bronze medal for "being one of the best sporting venues in the world". The city also hosts the two football clubs FC Viktoria Köln and SC Fortuna Köln, who play in the 3. Liga (third division) and the Regionalliga West (fourth division) respectively. Cologne's oldest football club, 1. FSV Köln 1899, fields its amateur team in the Verbandsliga (sixth division).
Cologne is also home to the ice hockey team Kölner Haie, which plays in the highest ice hockey league in Germany, the Deutsche Eishockey Liga. They are based at Lanxess Arena.
Several horse races per year have been held at Cologne-Weidenpesch Racecourse since 1897, the annual Cologne Marathon was started in 1997, and the classic cycling race Rund um Köln has been organised in Cologne since 1908. The city also has a long tradition in rowing, being home to some of Germany's oldest regatta courses and boat clubs, such as the Kölner Rudergesellschaft 1891 and the Kölner Ruderverein von 1877 in the Rodenkirchen district.
Japanese automotive manufacturer Toyota has its major motorsport facility, known as Toyota Motorsport GmbH, in the Marsdorf district; it is responsible for Toyota's major motorsport development and operations, which in the past included the FIA Formula One World Championship, the FIA World Rally Championship and the Le Mans Series. The facility runs Toyota's Toyota Gazoo Racing team, which competes in the FIA World Endurance Championship.
Cologne is considered "the secret golf capital of Germany". The first golf club in North Rhine-Westphalia was founded in Cologne in 1906. The city offers the most options and top events in Germany.
The city has hosted several athletic events, including the 2005 FIFA Confederations Cup, 2006 FIFA World Cup, 2007 World Men's Handball Championship, 2010 and 2017 Ice Hockey World Championships, UEFA Euro 2024 and the 2010 Gay Games.
Since 2014, the city has hosted ESL One Cologne, one of the biggest annual esports tournaments, held each July/August at Lanxess Arena.
Furthermore, Cologne is home of the Sport-Club Colonia 1906, Germany's oldest boxing club, and the Kölner Athleten-Club 1882, the world's oldest active weightlifting club.
Twin towns – sister cities.
Cologne is twinned with:
|
6188
|
33372873
|
https://en.wikipedia.org/wiki?curid=6188
|
Buddhist cuisine
|
Buddhist cuisine is an Asian cuisine that is followed by monks and many believers from areas historically influenced by Mahayana Buddhism. It is vegetarian or vegan, and it is based on the Dharmic concept of ahimsa (non-violence). Vegetarianism is common in other Dharmic faiths such as Hinduism, Jainism and Sikhism, as well as East Asian religions like Taoism. While monks, nuns and a minority of believers are vegetarian year-round, many believers follow the Buddhist vegetarian diet for celebrations.
In Buddhism, cooking is often seen as a spiritual practice that produces the nourishment which the body needs to work hard and meditate. The origin of "Buddhist food" as a distinct sub-style of cuisine is tied to monasteries, where one member of the community would have the duty of being the head cook and supplying meals that paid respect to the strictures of Buddhist precepts. Temples that were open to visitors from the general public might also serve meals to them and a few temples effectively run functioning restaurants on the premises. In Japan, this culinary custom, recognized as "shōjin ryōri" (精進料理) or devotion cuisine, is commonly offered at numerous temples, notably in Kyoto. This centuries-old culinary tradition, primarily associated with religious contexts, is seldom encountered beyond places like temples, religious festivals, and funerals. A more recent version, more Chinese in style, is prepared by the Ōbaku school of Zen and known as "fucha ryōri" (普茶料理); this is served at the head temple of Manpuku-ji, as well as various subtemples. In modern times, commercial restaurants have also latched on to the style, catering both to practicing and non-practicing lay people.
Philosophies governing food.
Vegetarianism.
Most of the dishes considered to be uniquely Buddhist are vegetarian, but not all Buddhist traditions require vegetarianism of lay followers or clergy. Vegetarian eating is primarily associated with the East and Southeast Asian tradition in China, Vietnam, Japan, and Korea where it is commonly practiced by clergy and may be observed by laity on holidays or as a devotional practice.
In the Mahayana tradition, several sutras of the Mahayana canon contain explicit prohibitions against consuming meat, including sections of the Lankavatara Sutra and Surangama Sutra. The monastic community in Chinese Buddhism, Vietnamese Buddhism and most of Korean Buddhism strictly adhere to vegetarianism.
Theravada Buddhist monks and nuns consume food by gathering alms themselves, and generally must eat whatever foods are offered to them, including meat. The exception to this alms rule is when monks and nuns have seen, heard or known that animal(s) have been specifically killed to feed the alms-seeker, in which case consumption of such meat would be karmically negative, as well as meat from certain animals, such as dogs and snakes, that were regarded as impure in ancient India. The same restriction is also followed by some lay Buddhists and is known as the consumption of "Threefold Pure Meat" (三净肉). The Pāli scriptures also indicate that the Buddha refused a proposal by his treacherous disciple Devadatta to mandate vegetarianism in the monastic precepts.
Tibetan Buddhism has long accepted that the practical difficulties in obtaining vegetables and grains within most of Tibet make it impossible to insist upon vegetarianism; however, many leading Tibetan Buddhist teachers agree upon the great worth of practicing vegetarianism whenever and wherever possible, such as Chatral Rinpoche, a lifelong advocate of vegetarianism who famously released large numbers of fish caught for food back into the ocean once a year, and who wrote about the practice of saving lives.
Both Mahayana and Theravada Buddhists consider that one may practice vegetarianism as part of cultivating the Bodhisattva paramitas.
Other restrictions.
In addition to garlic, practically all Mahayana monastics in China, Korea, Vietnam and Japan specifically avoid eating other strong-smelling plants, traditionally asafoetida, shallot, mountain leek and Chinese onion, which together with garlic are referred to as "wǔ hūn" (五葷, or 'Five Acrid and Strong-smelling Vegetables') or "wǔ xīn" (五辛 or 'Five Spices') as they tend to excite the senses. This is based on teachings found in the Brahmajala Sutra, the Surangama Sutra and the Lankavatara Sutra. In modern times this rule is often interpreted to include other vegetables of the onion genus, as well as coriander. The origin of this additional restriction is from the Indic region and can still be found among some believers of Hinduism and Jainism.
The consumption of non-vegetarian food by strict Buddhists is also subject to various restrictions. As well as the aforementioned "triply clean meat" rule followed by Theravada monks, nuns, and some lay Buddhists, many Chinese Buddhists avoid the consumption of beef, large animals, and exotic species. Some Buddhists abstain from eating offal (organ meat), known as "xiàshui" (下水).
Alcohol and other drugs are also avoided by many Buddhists because of their effects on the mind and "mindfulness". It is part of the Five Precepts which dictate that one is "not to take any substance that will cloud the mind." Caffeinated drinks may sometimes be included under this restriction.
Simple and natural.
In theory and practice, many regional styles of cooking may be adapted to be "Buddhist" as long as the cook, with the above restrictions in mind, prepares the food, generally in simple preparations, with expert attention to its quality, wholesomeness and flavor. Often working on a tight budget, the monastery cook would have to make the most of whatever ingredients were available.
In "Tenzo kyokun" ("Instructions for the Zen Cook"), Soto Zen founder Eihei Dogen wrote:
In preparing food, it is essential to be sincere and to respect each ingredient regardless of how coarse or fine it is. (...) A rich buttery soup is not better as such than a broth of wild herbs. In handling and preparing wild herbs, do so as you would the ingredients for a rich feast, wholeheartedly, sincerely, clearly. When you serve the monastic assembly, they and you should taste only the flavour of the Ocean of Reality, the Ocean of unobscured Awake Awareness, not whether or not the soup is creamy or made only of wild herbs. In nourishing the seeds of living in the Way, rich food and wild grass are not separate.
Ingredients.
Following its dominant status in most parts of East Asia where Buddhism is most practiced, rice features heavily as a staple in the Buddhist meal, especially in the form of rice porridge or congee as the usual morning meal. Noodles and other grains may often be served as well. Vegetables of all sorts are generally either stir-fried or cooked in vegetarian broth with seasonings and may be eaten with various sauces. Onions and garlic are usually avoided as consumption of these is thought to increase undesirable emotions such as anger or sexual desire. Traditionally, eggs and dairy are not permitted. Seasonings will be informed by whatever is common in the local region; for example, soy sauce and vegan dashi figure strongly in Japanese monastery food while Thai curry and tương (as a vegetarian replacement for fish sauce) may be prominent in Southeast Asia. Sweets and desserts are not often consumed, but are permitted in moderation and may be served at special occasions, such as in the context of a tea ceremony in the Zen tradition.
Buddhist vegetarian chefs have become extremely creative in imitating meat using prepared wheat gluten, also known as seitan, kao fu (烤麸) or wheat meat, soy (such as tofu or tempeh), agar, konnyaku and other plant products. Some of their recipes are the oldest and most-refined meat analogues in the world. Soy and wheat gluten are very versatile materials, because they can be manufactured into various shapes and textures, and they absorb flavorings (including, but not limited to, meat-like flavorings), while having very little flavor of their own. With the proper seasonings, they can mimic various kinds of meat quite closely.
Some of these Buddhist vegetarian chefs are in the many monasteries and temples which serve allium-free and mock-meat (also known as 'meat analogues') dishes to the monks and visitors (including non-Buddhists who often stay for a few hours or days, to Buddhists who are not monks, but staying overnight for anywhere up to weeks or months). Many Buddhist restaurants also serve vegetarian, vegan, non-alcoholic or allium-free dishes.
Some Buddhists eat vegetarian on the 1st and 15th of the lunar calendar (lenten days), on Chinese New Year eve, and on saint and ancestral holy days. To cater to this type of customer, as well as full-time vegetarians, the menu of a Buddhist vegetarian restaurant usually shows no difference from a typical Chinese or East Asian restaurant, except that in recipes originally made to contain meat, a soy chicken substitute might be served instead.
Variations by sect or region.
According to cookbooks published in English, formal monastery meals in the Zen tradition generally follow a pattern of "three bowls" in descending size. The first and largest bowl is a grain-based dish such as rice, noodles or congee; the second contains the protein dish which is often some form of stew or soup; the third and smallest bowl is a vegetable dish or a salad.
History.
The earliest surviving written accounts of Buddhism are the Edicts written by King Ashoka, a well-known Buddhist king who propagated Buddhism throughout Asia, and is honored by both Theravada and Mahayana schools of Buddhism. The authority of the Edicts of Ashoka as a historical record is suggested by the mention of numerous topics omitted as well as corroboration of numerous accounts found in the Theravada and Mahayana Tripitakas written down centuries later.
Ashoka's Rock Edict 1, dated to c. 257 BCE, mentions the prohibition of animal sacrifices in Ashoka's Maurya Empire as well as his commitment to vegetarianism; however, whether the Sangha was vegetarian in part or in whole is unclear from these edicts. Ashoka's personal commitment to, and advocacy of, vegetarianism suggests that Early Buddhism (at the very least for the layperson) most likely already had a vegetarian tradition; the details of what that entailed, beyond not killing animals and eating their flesh, are not mentioned and are therefore unknown.
|
6193
|
49805032
|
https://en.wikipedia.org/wiki?curid=6193
|
Constantin von Tischendorf
|
Lobegott Friedrich Constantin (von) Tischendorf (18 January 1815 – 7 December 1874) was a German biblical scholar. In 1844, he discovered the world's oldest and most complete Bible, dated to around the mid-4th century and called Codex Sinaiticus after Saint Catherine's Monastery at Mount Sinai.
Tischendorf was made an honorary doctor by the University of Oxford on 16 March 1865, and by the University of Cambridge on 9 March 1865 following his discovery. While a student gaining his academic degree in the 1840s, he earned international recognition when he deciphered the "Codex Ephraemi Rescriptus", a 5th-century Greek manuscript of the New Testament.
Early life and education.
Tischendorf was born in Lengenfeld, Saxony, the son of a forensic physician. After attending primary school in Lengenfeld, he went to grammar school in nearby Plauen. From Easter 1834, having achieved excellent marks at school, he studied theology and philosophy at the University of Leipzig.
At Leipzig he was mainly influenced by J. G. B. Winer, and he began to take special interest in New Testament criticism. Winer's influence gave him the desire to use the oldest manuscripts in order to compile the text of the New Testament as close to the original as possible. Despite his father's death in 1835 and his mother's just a year later, he was still able to achieve his doctorate in 1838, before accepting a tutoring job in the home of Reverend Ferdinand Leberecht Zehme in Grossstadeln, where he met and fell in love with the clergyman's daughter Angelika. He published a volume of poetry, "Maiknospen" (Buds of May), in 1838, and "Der junge Mystiker" (The Young Mystic) was published under a pseudonym in 1839. At this time he also began his first critical edition of the New Testament in Greek, which was to become his life's work.
After a journey through southern Germany and Switzerland, and a visit to Strassburg, he returned to Leipzig to begin work on a critical study of the New Testament text.
Career.
In 1840, he qualified as university lecturer in theology with a dissertation on the recensions of the New Testament text, the main part of which reappeared the following year in the prolegomena to his first edition of the Greek New Testament. These early textual studies convinced him of the absolute necessity of new and more exact collations of manuscripts.
From October 1840 until January 1843 he was in Paris, busy with the treasures of the Bibliothèque Nationale, eking out his scanty means by making collations for other scholars, and producing for the publisher, Firmin Didot, several editions of the Greek New Testament, one of them exhibiting the form of the text corresponding most closely to the Vulgate. His second edition retracted the more precarious readings of the first, and included a statement of critical principles that is a landmark for evolving critical studies of Biblical texts.
In 1845, before he left Paris, the decipherment of the palimpsest "Codex Ephraemi Syri Rescriptus", including its Old Testament portion, was completed. His success in dealing with a manuscript that, having been over-written with other works of Ephrem the Syrian, had been mostly illegible to earlier collators, made him better known and gained support for more extended critical expeditions. He now became "professor extraordinarius" at Leipzig, where he was married in 1845. He also began to publish "Reise in den Orient", an account of his travels in the east (in 2 volumes, 1845–46, translated as "Travels in the East" in 1847). Even though he was an expert in reading the text of a palimpsest (a document from which the original writing has been removed and new writing added), he was not able to identify the value or meaning of the "Archimedes Palimpsest", a torn leaf of which he held and which was sold after his death to the Cambridge University Library.
Tischendorf briefly visited the Netherlands in 1841 and England in 1842. In 1843 he visited Italy for thirteen months, before continuing on to Egypt, Sinai, and the Levant, returning via Vienna and Munich.
Discovery of the Codex Sinaiticus Bible manuscripts.
In 1844 Tischendorf travelled for the first time to Saint Catherine's Monastery at the foot of Mount Sinai in Egypt, where he found a portion of what would later be hailed as the oldest known complete New Testament.
Of the many pages contained in an old wicker basket (the kind the monastery used to haul up its visitors, as was customary in unsafe territories), he was given 43 pages containing a part of the Old Testament as a present. He donated those 43 pages to King Frederick Augustus II of Saxony (reigned 1836–1854), to honour him and to recognise his patronage as the funder of Tischendorf's journey. (Tischendorf held a position as Theological Professor at Leipzig University, also under the patronage of Frederick Augustus II.) Leipzig University put two of the leaves on display in 2011.
Tischendorf reported in his 1865 book "Wann wurden unsere Evangelien verfasst", translated into English in 1866 as "When Were Our Gospels Written", in the section "The Discovery of the Sinaitic Manuscript", that he found, in a trash basket, forty-three sheets of parchment of an ancient copy of the Greek Old Testament, and that the monks were using the contents of the basket to start fires. Horrified, Tischendorf asked if he could have them. He deposited them at the University of Leipzig, under the title of the "Codex Friderico-Augustanus", a name given in honour of his patron, Frederick Augustus II, King of Saxony. The fragments were published in 1846, although Tischendorf kept the place of discovery a secret.
Many have expressed skepticism at the historical accuracy of this report of saving a 1500-year-old parchment from the flames. J. Rendel Harris referred to the story as a "myth". The Tischendorf Lesebuch (see References) records that the librarian Kyrillos mentioned to Tischendorf that the contents of the basket had already twice been submitted to the fire; the contents were damaged scriptures, apparently the basket's third filling, as cited by Tischendorf himself [see Tischendorf Lesebuch, Tischendorf's own account].
In 1853 Tischendorf made a second trip to the monastery at Sinai but made no new discoveries. He returned a third time in January 1859 under the patronage of Tsar Alexander II of Russia, with the active aid of the Russian government, to find more of the "Codex Friderico-Augustanus" or similar ancient Biblical texts. On 4 February, the last day of his visit, he was shown a text which he recognized as significant – the "Codex Sinaiticus" – a Greek manuscript of the complete New Testament and parts of the Old Testament dating to the 4th century.
Tischendorf persuaded the monks to present the manuscript to Tsar Alexander II of Russia, at whose cost it was published in 1862 (in four folio volumes). Those ignorant of the details of his discovery of the "Codex Sinaiticus" accused Tischendorf of buying manuscripts from ignorant monastery librarians at low prices. Indeed, he was never rich, but he staunchly defended the rights of the monks at Saint Catherine's Monastery when he eventually persuaded them to send the manuscript to the Tsar. This took approximately ten years, because the abbot of St Catherine's had to be re-elected and confirmed in office in Cairo and in Jerusalem, and during those years no one in the monastery had the authority to hand over any documents. The documents were handed over in due course following a signed and sealed deed of gift (Schenkungsurkunde) addressed to Tsar Alexander II. Even so, the monks of Mt. Sinai still display a receipt-letter from Tischendorf promising to return the manuscript to them should the donation not be completed; this token letter was to be destroyed once the Schenkungsurkunde was finally issued. The deed of gift regulated the exchange of the Codex with the Tsar in return for 9,000 rubles and protection of the monastery's Romanian estates; the Tsar was seen as the protector of Greek-Orthodox Christians. Thought lost since the Russian Revolution, the Schenkungsurkunde resurfaced in St Petersburg in 2003, having long before been commented upon by scholars such as Kurt Aland. The monastery had disputed the existence of the gift deed ever since the British Library was named as the new owner of the Codex, but now that the National Russian Library has located the document, its existence can no longer be seriously disputed.
In 1869 the Tsar awarded Tischendorf the style of "von" Tischendorf as a Russian noble. To celebrate the 1000th anniversary of the traditional foundation of the Rus' state in 862 with the publication of this remarkable find, 327 facsimile editions of the Codex were printed in Leipzig for the Tsar; instead of a salary for his three years of work, the Tsar gave Tischendorf 100 copies for reselling. Producing the facsimile, which used specially made print characters for each of the four scribes of the Codex Sinaiticus, required months of exhausting shift work, often at night, and contributed to Tischendorf's early demise. Thus the Codex found its way to the Imperial Library at St. Petersburg.
When the four-volume luxury edition of the Sinai Bible was completed in 1862, Tischendorf presented the original ancient manuscript to Emperor Alexander II. Meanwhile, the question of transferring the manuscript to the full possession of the Russian sovereign remained unresolved for some years. In 1869, the new Archbishop of Sinai, Callistratus, and the monastic community signed the official certificate presenting the manuscript to the Tsar. The Russian government, in turn, granted the monastery 9,000 rubles and decorated the Archbishop and some of the brethren with orders. In 1933 the Soviet government sold the Codex Sinaiticus for 100,000 pounds to the British Museum in London, England. The official certificate, with signatures in its Russian, French, and Greek sections, has been rediscovered in St Petersburg.
Novum Testamentum Graece – publication with 21 editions.
In the winter of 1849 appeared the first edition of his great work, now titled "Novum Testamentum Graece. Ad antiquos testes recensuit. Apparatum criticum multis modis" (translated as "Greek New Testament. The ancient witnesses reviewed. Preparations critical in many ways"), containing canons of criticism together with examples of their application that remain applicable to students today:
Basic rule: "The text is only to be sought from ancient evidence, and especially from Greek manuscripts, but without neglecting the testimonies of versions and fathers."
These were partly the result of the tireless travels he had begun in 1839 in search of unread manuscripts of the New Testament, "to clear up in this way," he wrote, "the history of the sacred text, and to recover if possible the genuine apostolic text which is the foundation of our faith."
In 1850 appeared his edition of the "Codex Amiatinus" (corrected in 1854) and of the Septuagint version of the Old Testament (7th edition, 1887); in 1852, amongst other works, his edition of the "Codex Claromontanus". In 1859, he was named "professor ordinarius" of theology and of Biblical paleography, this latter professorship being specially created for him; and another book of travel, "Aus dem heiligen Lande", appeared in 1862. Tischendorf's Eastern journeys were rich enough in other discoveries to merit the highest praise.
Besides his fame as a scholar, he was a friend of both Robert Schumann, with whom he corresponded, and Felix Mendelssohn, who dedicated a song to him. His colleague Samuel Prideaux Tregelles wrote warmly of their mutual interest in textual scholarship. His personal library, purchased after his death, eventually came to the University of Glasgow, where a commemorative exhibition of books from it was held in 1974; the collection remains accessible to the public.
Death.
Lobegott Friedrich Constantin (von) Tischendorf died in Leipzig on 7 December 1874, aged 59.
Codex Sinaiticus.
The "Codex Sinaiticus" contains a 4th-century manuscript of New Testament texts. Two other Bibles of similar age exist, though they are less complete: Codex Vaticanus in the Vatican Library and Codex Alexandrinus, currently owned by the British Library. The Codex Sinaiticus is deemed by some to be the most important surviving New Testament manuscript, as no older manuscript is as nearly complete as the Codex. The codex can be viewed in the British Library in London, or as a digitized version on the Internet.
Tischendorf's motivation.
Throughout his life, Tischendorf sought old biblical manuscripts, as he saw it as his task to give theology a Greek New Testament that was based on the oldest possible manuscripts. He intended to be as close as possible to the sources. Tischendorf's greatest discovery was in the monastery of Saint Catherine on the Sinai Peninsula, which he visited in May 1844, and again in 1853 and 1859 (as Russian envoy).
In 1862, Tischendorf published the text of the Codex Sinaiticus for the 1000th anniversary of the Russian monarchy in both a lavish four-volume facsimile edition and a less costly text edition, the latter to enable as many scholars as possible to have access to the contents of the Codex.
Tischendorf pursued a constant course of editorial labors, mainly on the New Testament, until he was broken down by overwork in 1873. A professor of theology at Leipzig University explained in a publication on Tischendorf's letters that he was motivated to prove scientifically that the words of the Bible had been trustworthily transmitted over the centuries.
Works.
His "magnum opus" was the "Critical Edition of the New Testament."
The great edition, of which the text and apparatus appeared in 1869 and 1872, was called by himself "editio viii"; but this number is raised to twenty or twenty-one, if mere reprints from stereotype plates and the minor editions of his great critical texts are included; posthumous prints bring the total to forty-one. Four main recensions of Tischendorf's text may be distinguished, dating respectively from his editions of 1841, 1849, 1859 (ed. vii), and 1869–72 (ed. viii). The edition of 1849 may be regarded as historically the most important, from the mass of new critical material it used; that of 1859 is distinguished from Tischendorf's other editions by coming nearer to the received text; in the eighth edition, the testimony of the Sinaitic manuscript received great (probably too great) weight. The readings of the Vatican manuscript were given with more exactness and certainty than had been possible in the earlier editions, and the editor had also the advantage of using the published labours of his colleague and friend Samuel Prideaux Tregelles.
Of relatively lesser importance was Tischendorf's work on the Greek Old Testament. His edition of the Roman text, with the variants of the Alexandrian manuscript, the "Codex Ephraemi", and the "Friderico-Augustanus", was of service when it appeared in 1850, but, being stereotyped, was not greatly improved in subsequent issues. Its imperfections, even within the limited field it covers, may be judged by the aid of Eberhard Nestle's appendix to the 6th issue (1880).
Besides this may be mentioned editions of the New Testament apocrypha, "De Evangeliorum apocryphorum origine et usu" (1851); "Acta Apostolorum apocrypha" (1851); "Evangelia apocrypha" (1853; 2nd edition, 1876); "Apocalypses apocryphae" (1866), and various minor writings, partly of an apologetic character, such as "Wann wurden unsere Evangelien verfasst?" ("When Were Our Gospels Written?"; 1865; 4th edition, 1866, digitized by Google and available for e-readers), "Haben wir den echten Schrifttext der Evangelisten und Apostel?" (1873), and "Synopsis evangelica" (7th edition, 1898).
|
6195
|
35012415
|
https://en.wikipedia.org/wiki?curid=6195
|
Calvin Coolidge
|
Calvin Coolidge (born John Calvin Coolidge Jr.; July 4, 1872 – January 5, 1933) was the 30th president of the United States, serving from 1923 to 1929. A Republican lawyer from Massachusetts, he previously served as the 29th vice president from 1921 to 1923 under President Warren G. Harding, and as the 48th governor of Massachusetts from 1919 to 1921. Coolidge gained a reputation as a small-government conservative with a taciturn personality and dry sense of humor that earned him the nickname "Silent Cal".
Coolidge began his career as a member of the Massachusetts House of Representatives. He rose up the ranks of Massachusetts politics and was elected governor in 1918. As governor, Coolidge ran on a record of fiscal conservatism, strong support for women's suffrage, and vague opposition to Prohibition. His prompt and effective response to the Boston police strike of 1919 thrust him into the national spotlight as a man of decisive action. The following year, the Republican Party nominated Coolidge as the running mate to Senator Warren G. Harding in the 1920 presidential election, which they won in a landslide. Coolidge served as vice president until Harding's death in 1923, after which he assumed the presidency.
During his presidency, Coolidge restored public confidence in the White House after the Harding administration's many scandals. He signed into law the Indian Citizenship Act of 1924, which granted U.S. citizenship to all Native Americans, and oversaw a period of rapid and expansive economic growth known as the "Roaring Twenties", leaving office with considerable popularity. Coolidge was known for his hands-off governing approach and pro-business stance; biographer Claude Fuess wrote: "He embodied the spirit and hopes of the middle class, could interpret their longings and express their opinions. That he did represent the genius of the average is the most convincing proof of his strength." Coolidge chose not to run again in 1928, remarking that ten years as president would be "longer than any other man has had it—too long!"
Coolidge is widely admired for his stalwart support of racial equality during a period of heightened racial tension, and is highly regarded by advocates of smaller government and "laissez-faire" economics; supporters of an active central government generally view him far less favorably. His critics argue that he failed to use the country's economic boom to help struggling farmers and workers in other flailing industries, and there is still much debate among historians about the extent to which Coolidge's economic policies contributed to the onset of the Great Depression, which began shortly after he left office. Scholars have ranked Coolidge in the lower half of U.S. presidents.
Early life and family history.
John Calvin Coolidge Jr. was born on July 4, 1872, in Plymouth Notch, Vermont—the only U.S. president to be born on Independence Day. He was the elder of the two children of John Calvin Coolidge Sr. (1845–1926) and Victoria Josephine Moor (1846–1885). Although named for his father, from early childhood Coolidge was addressed by his middle name. The name Calvin was used in multiple generations of the Coolidge family, apparently selected in honor of John Calvin, the Protestant Reformer.
Coolidge Senior engaged in many occupations and developed a statewide reputation as a prosperous farmer, storekeeper, and public servant. He held various local offices, including justice of the peace and tax collector, and served in both houses of the Vermont General Assembly. When Coolidge was 12 years old, his chronically ill mother died at the age of 39, perhaps from tuberculosis. His younger sister, Abigail Grace Coolidge (1875–1890), died at the age of 15, probably of appendicitis, when Coolidge was 18. Coolidge's father married a Plymouth schoolteacher in 1891, and lived to the age of 80.
Coolidge's earliest American ancestor, John Coolidge, emigrated from Cottenham, Cambridgeshire, England, around 1630 and settled in Watertown, Massachusetts. Coolidge also descended from Samuel Appleton, who settled in Ipswich and led the Massachusetts Bay Colony during King Philip's War. Coolidge's great-great-grandfather, another John Coolidge, was an American military officer in the Revolutionary War and one of the first selectmen of the town of Plymouth.
His grandfather Calvin Galusha Coolidge served in the Vermont House of Representatives. His cousin Park Pollard was a businessman in Cavendish, Vermont, and the longtime chair of the Vermont Democratic Party. Coolidge's mother was the daughter of Hiram Dunlap Moor, a Plymouth Notch farmer, and Abigail Franklin.
Early career and marriage.
Education and law practice.
Coolidge attended the Black River Academy and then St. Johnsbury Academy before enrolling at Amherst College, where he distinguished himself in the debating class. As a senior, he joined the Phi Gamma Delta fraternity and graduated "cum laude". While at Amherst, Coolidge was profoundly influenced by philosophy professor Charles Edward Garman, a Congregational mystic who had a neo-Hegelian philosophy. Coolidge would explain Garman's ethics forty years later.
At his father's urging after graduation, Coolidge moved to Northampton, Massachusetts, to become a lawyer. Coolidge followed the common practice of apprenticing with a local law firm, Hammond & Field, and reading law with them. John C. Hammond and Henry P. Field, both Amherst graduates, introduced Coolidge to practicing law in the county seat of Hampshire County, Massachusetts. In 1897, Coolidge was admitted to the Massachusetts bar, becoming a country lawyer. With his savings and a small inheritance from his grandfather, Coolidge opened his own law office in Northampton in 1898. He practiced commercial law, believing that he served his clients best by staying out of court. As his reputation as a hard-working and diligent attorney grew, local banks and other businesses began to retain his services.
Marriage and family.
In 1903, Coolidge met Grace Goodhue, a graduate of the University of Vermont and a teacher at Northampton's Clarke School for the Deaf. They married on October 4, 1905, at 2:30 p.m. in a small ceremony which took place in the parlor of Grace's family's house, having overcome her mother's objections to the marriage. The newlyweds went on a honeymoon trip to Montreal, originally planned for two weeks but cut short by a week at Coolidge's request. After 25 years he wrote of Grace, "for almost a quarter of a century she has borne with my infirmities and I have rejoiced in her graces".
The Coolidges had two sons: John (1906–2000) and Calvin Jr. (1908–1924). On June 30, 1924, Calvin Jr. played tennis with his brother on the White House tennis courts without putting on socks and developed a blister on one of his toes. The blister subsequently degenerated into sepsis. He died a little over a week later at the age of 16.
Coolidge never forgave himself for Calvin Jr.'s death. His elder son John said it "hurt [Coolidge] terribly", and psychiatric biographer Robert E. Gilbert, author of "The Tormented President: Calvin Coolidge, Death, and Clinical Depression", said that Coolidge "ceased to function as President after the death of his sixteen-year-old son". Gilbert writes that after Calvin Jr.'s death Coolidge displayed all ten of the symptoms the American Psychiatric Association lists as evidence of major depressive disorder. John later became a railroad executive, helped start the Coolidge Foundation, and was instrumental in creating the President Calvin Coolidge State Historic Site.
Coolidge was frugal, and when it came to securing a home, he insisted upon renting. He and his wife attended Northampton's Edwards Congregational Church before and after his presidency.
Local political office (1898–1915).
City offices.
The Republican Party was dominant in New England at the time, and Coolidge followed the example of Hammond and Field by becoming active in local politics. In 1896, Coolidge campaigned for Republican presidential candidate William McKinley, and was selected to be a member of the Republican City Committee the next year. In 1898, he won election to the City Council of Northampton, placing second in a ward where the top three candidates were elected. The position offered no salary but provided Coolidge with valuable political experience.
In 1899, the city council made Coolidge city solicitor. He was elected to a one-year term in 1900 and reelected in 1901. This position gave Coolidge more experience as a lawyer and paid a salary of $600. In 1902, the city council selected a Democrat for city solicitor, and Coolidge returned to private practice. Soon thereafter, the clerk of courts for the county died, and Coolidge was chosen to replace him. The position paid well, but it barred him from practicing law, so he remained at the job for only a year.
In 1904, Coolidge suffered his sole defeat at the ballot box, losing an election to the Northampton school board. When told that some of his neighbors voted against him because he had no children in the schools he would govern, the recently married Coolidge replied, "Might give me time!"
Massachusetts state legislator and mayor.
In 1906, the local Republican committee nominated Coolidge for election to the Massachusetts House of Representatives. He won a close victory over the incumbent Democrat, and reported to Boston for the 1907 session of the Massachusetts General Court. In his freshman term, Coolidge served on minor committees and, although he usually voted with the party, was known as a Progressive Republican, voting in favor of such measures as women's suffrage and the direct election of Senators.
While in Boston, Coolidge became an ally, and then a liegeman, of then U.S. Senator Winthrop Murray Crane, who controlled the Massachusetts Republican Party's western faction; Crane's party rival in eastern Massachusetts was U.S. Senator Henry Cabot Lodge. Coolidge forged another key strategic alliance with Guy Currier, who had served in both state houses and had the social distinction, wealth, personal charm, and broad circle of friends that Coolidge lacked; the alliance had a lasting impact on Coolidge's political career. In 1907, Coolidge was reelected. In the 1908 session he was more outspoken, though not in a leadership position.
Instead of vying for another term in the State House, Coolidge returned home to his growing family and ran for mayor of Northampton when the incumbent Democrat retired. He was well liked in the town, and defeated his challenger by a vote of 1,597 to 1,409. During his first term from 1910 to 1911, he increased teachers' salaries and retired some of the city's debt while still managing to effect a slight tax decrease. In 1911, he was renominated and defeated the same opponent by a slightly larger margin.
In 1911, the State Senator for the Hampshire County area retired and successfully encouraged Coolidge to run for his seat for the 1912 session. Coolidge defeated his Democratic opponent by a large margin. At the start of that term, he became chairman of a committee to arbitrate the "Bread and Roses" strike by the workers of the American Woolen Company in Lawrence, Massachusetts. After two tense months, the company agreed to the workers' demands, in a settlement proposed by the committee.
A major issue affecting Massachusetts Republicans in 1912 was the party split between the progressive wing, which favored Theodore Roosevelt, and the conservative wing, which favored William Howard Taft. Although he favored some progressive measures, Coolidge refused to leave the Republican party. When the new Progressive Party declined to run a candidate in his state senate district, Coolidge won reelection against his Democratic opponent by an increased margin.
In the 1913 session, Coolidge earned renown for arduously steering to passage the Western Trolley Act, which connected Northampton with a dozen similar industrial communities in western Massachusetts.
Coolidge intended to retire after his second term, as was customary, but when the president of the state senate, Levi H. Greenwood, considered running for lieutenant governor, Coolidge decided to run for the Senate again in hopes of being elected its presiding officer. Greenwood later decided to run for reelection to the Senate, and was defeated primarily due to his opposition to women's suffrage.
Coolidge was in favor of the women's vote, and was reelected. With Crane's help, Coolidge assumed the presidency of a closely divided Senate. After his election in January 1914, Coolidge delivered a published and frequently quoted speech, "Have Faith in Massachusetts", which summarized his philosophy of government.
Coolidge's speech was well received, and he attracted some admirers on its account. Towards the end of the term, many of them were proposing Coolidge's name for nomination to lieutenant governor. After winning reelection to the Senate by an increased margin in the 1914 elections, Coolidge was reelected unanimously to be President of the Senate. Coolidge's supporters, led by fellow Amherst alumnus Frank Stearns, encouraged him again to run for lieutenant governor. Stearns, an executive with the Boston department store R. H. Stearns, became another key ally, and began a publicity campaign on Coolidge's behalf before he announced his candidacy at the end of the 1915 legislative session.
Lieutenant Governor and Governor of Massachusetts (1916–1921).
Coolidge entered the primary election for lieutenant governor and was nominated to run alongside gubernatorial candidate Samuel W. McCall. Coolidge was the leading vote-getter in the Republican primary, and balanced the Republican ticket by adding a western presence to McCall's eastern base of support. McCall and Coolidge won the 1915 election to their respective one-year terms, with Coolidge defeating his opponent by more than 50,000 votes.
In Massachusetts, the lieutenant governor does not preside over the state Senate, as is the case in many other states; nevertheless, as lieutenant governor, Coolidge was a deputy governor functioning as an administrative inspector and was a member of the governor's council. He was also chairman of the finance committee and the pardons committee. As a full-time elected official, Coolidge discontinued his law practice in 1916, though his family continued to live in Northampton. McCall and Coolidge were both reelected in 1916 and in 1917. When McCall decided that he would not stand for a fourth term, Coolidge announced his intention to run for governor.
1918 election.
Coolidge was unopposed for the Republican nomination for Governor of Massachusetts in 1918. He and his running mate, Channing Cox, a Boston lawyer and Speaker of the Massachusetts House of Representatives, ran on the previous administration's record: fiscal conservatism, a vague opposition to Prohibition, support for women's suffrage, and support for American involvement in World War I. The issue of the war proved divisive, especially among Irish and German Americans. Coolidge was elected by a margin of 16,773 votes over his opponent, Richard H. Long, in the smallest margin of victory of any of his statewide campaigns.
Boston police strike.
In 1919, in reaction to a plan of the policemen of the Boston Police Department to register with a union, Police Commissioner Edwin U. Curtis announced that such an act would not be tolerated. In August of that year, the American Federation of Labor issued a charter to the Boston Police Union. Curtis declared the union's leaders were guilty of insubordination and would be relieved of duty, but indicated he would cancel their suspension if the union was dissolved by September 4. The mayor of Boston, Andrew Peters, convinced Curtis to delay his action for a few days, but with no results, and Curtis suspended the union leaders on September 8. The following day, about three-quarters of the policemen in Boston went on strike.
Tacitly but fully in support of Curtis's position, Coolidge closely monitored the situation but initially deferred to the local authorities. He anticipated that only a resulting measure of lawlessness could sufficiently prompt the public to understand and appreciate the controlling principle: that a policeman does not strike. That night and the next, there was sporadic violence and rioting in the city. Concerned about sympathy strikes by the firemen and others, Peters called up some units of the Massachusetts National Guard stationed in the Boston area pursuant to an old and obscure legal authority and relieved Curtis of duty.
Coolidge, sensing that the severity of the circumstances now called for his intervention, conferred with Crane's operative, William Butler, and then acted. He called up more units of the National Guard, restored Curtis to office, and took personal control of the police force. Curtis proclaimed that all of the strikers were fired from their jobs, and Coolidge called for a new police force to be recruited.
That night Coolidge received a telegram from AFL leader Samuel Gompers. "Whatever disorder has occurred", Gompers wrote, "is due to Curtis's order in which the right of the policemen has been denied".
Coolidge publicly answered Gompers's telegram, denying any justification whatsoever for the strike—and his response launched him into the national consciousness. Newspapers nationwide picked up on Coolidge's statement and he became the strike's opponents' newest hero. Amid the First Red Scare, many Americans were terrified of the spread of communist revolutions like those in Russia, Hungary, and Germany. Coolidge had lost some friends among organized labor, but conservatives saw a rising star. Although he usually acted with deliberation, the Boston police strike gave Coolidge a national reputation as a decisive leader and strict enforcer of law and order.
1919 election.
In 1919, Coolidge and Cox were renominated for their respective offices. By this time Coolidge's supporters, especially Stearns, had publicized his actions in the Police Strike around the state and the nation, and some of Coolidge's speeches were published in book form. He faced the same opponent as in 1918, Richard Long, but this time Coolidge defeated him by 125,101 votes, more than seven times his margin of victory from a year earlier. His actions in the police strike, combined with the massive electoral victory, led to suggestions that Coolidge run for president in 1920.
Legislation and vetoes as governor.
By the time Coolidge was inaugurated on January 2, 1919, the First World War had ended, and Coolidge pushed the legislature to give a $100 bonus to Massachusetts veterans. He signed a bill reducing the work week for women and children from 54 hours to 48, saying, "We must humanize the industry, or the system will break down." He passed a budget that kept the tax rates the same, while trimming $4 million from expenditures, allowing the state to retire some of its debt.
Coolidge wielded the veto pen as governor. His most publicized veto prevented an increase in legislators' pay by 50%. Although he was personally opposed to Prohibition, he vetoed a bill in May 1920 that would have allowed the sale of beer or wine of 2.75% alcohol or less, in Massachusetts in violation of the Eighteenth Amendment to the United States Constitution. "Opinions and instructions do not outmatch the Constitution," he said in his veto message. "Against it, they are void."
Vice presidency (1921–1923).
1920 election.
At the 1920 Republican National Convention, most of the delegates were selected by state party caucuses, not primaries. As such, the field was divided among many local favorites. Coolidge was one such candidate, and while he placed as high as sixth in the voting, the powerful party bosses running the convention, primarily the party's U.S. Senators, never considered him seriously. After ten ballots, the bosses and then the delegates settled on Senator Warren G. Harding of Ohio as their nominee for president.
When the time came to select a vice-presidential nominee, the bosses also announced their choice, Senator Irvine Lenroot of Wisconsin, and then departed after his name was put forth, relying on the rank and file to confirm their decision. A delegate from Oregon, Wallace McCamant, had read "Have Faith in Massachusetts" and proposed Coolidge for vice president instead. The suggestion caught on quickly, with the masses craving an act of independence from the absent bosses, and Coolidge was unexpectedly nominated.
The Democrats nominated another Ohioan, James M. Cox, for president and the Assistant Secretary of the Navy, Franklin D. Roosevelt, for vice president. The question of the United States joining the League of Nations was a major issue in the campaign, as was the unfinished legacy of Progressivism. Harding ran a "front-porch" campaign from his home in Marion, Ohio, but Coolidge took to the campaign trail in the Upper South, New York, and New England; his audiences were carefully limited to those familiar with Coolidge and those placing a premium upon concise and short speeches. On November 2, 1920, Harding and Coolidge were victorious in a landslide, winning more than 60 percent of the popular vote, including every state outside the South. They won in Tennessee, the first time a Republican ticket had won a Southern state since Reconstruction.
"Silent Cal".
The vice presidency did not carry many official duties, but Harding invited Coolidge to attend cabinet meetings, making him the first vice president to do so. He gave a number of unremarkable speeches around the country.
As vice president, Coolidge and his vivacious wife Grace were invited to quite a few parties, where the legend of "Silent Cal" was born. It is from this time that most of the jokes and anecdotes involving Coolidge originate, such as Coolidge being "silent in five languages". Although Coolidge was known to be a skilled and effective public speaker, in private he was a man of few words and was commonly referred to as "Silent Cal".
An apocryphal story has it that a person seated next to Coolidge at a dinner told him, "I made a bet today that I could get more than two words out of you", to which Coolidge replied, "You lose". On April 22, 1924, Coolidge said that the "You lose" incident never occurred. The story was related by Frank B. Noyes, President of the Associated Press, to its membership at its annual luncheon at the Waldorf Astoria Hotel, when toasting and introducing Coolidge, the invited speaker. After the introduction and before his prepared remarks, Coolidge told the membership, "Your President [Noyes] has given you a perfect example of one of those rumors now current in Washington which is without any foundation."
Coolidge often seemed uncomfortable among fashionable Washington society. When asked why he continued to attend so many of their dinner parties, he replied, "Got to eat somewhere." Alice Roosevelt Longworth, a leading Republican wit, underscored Coolidge's silence and his dour personality: "When he wished he were elsewhere, he pursed his lips, folded his arms, and said nothing. He looked then precisely as though he had been weaned on a pickle." Coolidge and his wife, Grace, who was a great baseball fan, once attended a Washington Senators game and sat through all nine innings without saying a word, except once when he asked her the time.
As president, Coolidge's reputation as a quiet man continued. "The words of a President have an enormous weight," he later wrote, "and ought not to be used indiscriminately." Coolidge was aware of his stiff reputation, and cultivated it. "I think the American people want a solemn ass as a President," he once told Ethel Barrymore, "and I think I will go along with them." Some historians suggest that Coolidge's image was created deliberately as a campaign tactic. Others believe his withdrawn and quiet behavior was natural, deepening after the death of his son in 1924. Dorothy Parker, upon learning that Coolidge had died, reportedly remarked, "How can they tell?"
Presidency (1923–1929).
On August 2, 1923, President Harding died unexpectedly from a heart attack in San Francisco while on a speaking tour of the western United States. Vice President Coolidge was in Vermont visiting his family home, which had neither electricity nor a telephone, when he received word by messenger of Harding's death. Coolidge dressed, said a prayer, and came downstairs to greet the reporters who had assembled. His father, a notary public and justice of the peace, administered the oath of office in the family's parlor by the light of a kerosene lamp at 2:47 a.m. on August 3, 1923, whereupon the new President of the United States returned to bed.
Coolidge returned to Washington the next day, and was sworn in again by Justice Adolph A. Hoehling Jr. of the Supreme Court of the District of Columbia, to forestall any questions about the authority of a state official to administer a federal oath. This second oath-taking remained a secret until it was revealed by Harry M. Daugherty in 1932, and confirmed by Hoehling. When Hoehling confirmed Daugherty's story, he indicated that Daugherty, then serving as United States Attorney General, asked him to administer the oath without fanfare at the Willard Hotel. According to Hoehling, he did not question Daugherty's reason for requesting a second oath-taking but assumed it was to resolve any doubt about whether the first swearing-in was valid.
The nation initially did not know what to make of Coolidge, who had maintained a low profile in the Harding administration. Many had even expected him to be replaced on the ballot in 1924. Coolidge believed that those of Harding's men under suspicion were entitled to every presumption of innocence, taking a methodical approach to the scandals, principally the Teapot Dome scandal, while others clamored for rapid punishment of those they presumed guilty.
Coolidge thought the Senate investigations of the scandals would suffice. The resulting resignations of those involved affirmed this. He personally intervened in demanding the resignation of Attorney General Harry M. Daugherty after Daugherty refused to cooperate with the investigations. He then set about to confirm that no loose ends remained in the administration, arranging for a full briefing on the wrongdoing. Harry A. Slattery reviewed the facts with him, Harlan F. Stone analyzed the legal aspects for him, and Senator William E. Borah assessed and presented the political factors.
On December 6, 1923, Coolidge addressed Congress when it reconvened, giving a speech that supported many of Harding's policies, including Harding's formal budgeting process, the enforcement of immigration restrictions, and the arbitration of coal strikes ongoing in Pennsylvania.
The address to Congress was the first presidential speech to be broadcast over the radio. The Washington Naval Treaty was proclaimed one month into Coolidge's term, and was generally well received nationally. In May 1924, Congress passed the World War I veterans' World War Adjusted Compensation Act ("Bonus Bill"), overriding Coolidge's veto. Later that year, Coolidge signed the Immigration Act, which was aimed at restricting southern and eastern European immigration, but appended a signing statement expressing his unhappiness with the bill's specific exclusion of Japanese immigrants.
Just before the Republican Convention began, Coolidge signed into law the Revenue Act of 1924, which reduced the top marginal tax rate from 58% to 46%, cut personal income tax rates across the board, increased the estate tax, and bolstered it with a new gift tax.
On June 2, 1924, Coolidge signed the act granting citizenship to all Native Americans born in the United States. By that time, two-thirds of them were already citizens, having gained it through marriage, military service (veterans of World War I were granted citizenship in 1919), or land allotments.
1924 election.
The Republican Convention was held from June 10 to 12, 1924, in Cleveland, Ohio. Coolidge was nominated on the first ballot. The convention nominated Frank Lowden of Illinois for vice president on the second ballot, but he declined. Former Brigadier General Charles G. Dawes was nominated on the third ballot and accepted.
The Democrats held their convention the next month in New York City. The convention soon deadlocked, and after 103 ballots, the delegates agreed upon a compromise candidate, John W. Davis, with Charles W. Bryan nominated for vice president. The Democrats' hopes were buoyed when Robert M. La Follette, a Republican senator from Wisconsin, split from the GOP to form a new Progressive Party. Many believed that the split in the Republican Party, like the one in 1912, would allow a Democrat to win the presidency.
After the conventions and the death of his younger son Calvin, Coolidge became withdrawn. He later said that "when he [the son] died, the power and glory of the Presidency went with him." Even as he mourned, Coolidge ran his standard campaign, not mentioning his opponents by name or maligning them, and delivering speeches on his theory of government, including several that were broadcast over the radio.
It was the most subdued campaign since 1896, partly because of Coolidge's grief, but also because of his naturally non-confrontational style. The other candidates campaigned in a more modern fashion, but despite the split in the Republican party, the results were similar to those of 1920. Coolidge won every state outside the South except Wisconsin, La Follette's home state. He won the election with 382 electoral votes and the popular vote by 2.5 million votes.
Industry and trade.
During Coolidge's presidency, the United States experienced a period of rapid economic growth known as the "Roaring Twenties". He left the administration's industrial policy in the hands of his activist Secretary of Commerce, Herbert Hoover, who energetically used government auspices to promote business efficiency and develop airlines and radio.
Coolidge disdained regulation and appointed men to the Federal Trade Commission and the Interstate Commerce Commission who did little to restrict the activities of businesses under their jurisdiction. The regulatory state under Coolidge was, as one biographer called it, "thin to the point of invisibility".
Historian Robert Sobel offers some context for Coolidge's "laissez-faire" ideology, based on the prevailing understanding of federalism during his presidency: "As Governor of Massachusetts, Coolidge supported wages and hours legislation, opposed child labor, imposed economic controls during World War I, favored safety measures in factories, and even worker representation on corporate boards. Did he support these measures while president? No, because in the 1920s, such matters were considered the responsibilities of state and local governments."
Coolidge signed the Radio Act of 1927, which established the Federal Radio Commission and the equal-time rule for radio broadcasters and restricted radio broadcasting licenses to stations that demonstrated they served "the public interest, convenience, or necessity".
Taxation and government spending.
Coolidge adopted the taxation policies of his Secretary of the Treasury, Andrew Mellon, who advocated "scientific taxation"—the notion that lowering taxes will increase, rather than decrease, government receipts. Congress agreed, and tax rates were reduced in Coolidge's term.
In addition to federal tax cuts, Coolidge proposed reductions in federal expenditures and retiring the federal debt. His ideas were shared by the Republicans in Congress, and in 1924, Congress passed the Revenue Act of 1924, which reduced income tax rates and eliminated all income taxation for two million people. Congress reduced taxes again by passing the Revenue Acts of 1926 and 1928, while keeping spending down to reduce the overall federal debt. By 1927, only the wealthiest 2% of taxpayers paid federal income tax. Federal spending remained flat during Coolidge's administration, allowing one-fourth of the federal debt to be retired.
State and local governments saw considerable growth, surpassing the federal budget in 1927. In 1929, after Coolidge's series of tax rate reductions had cut the tax rate to 24% on those making over $100,000, the federal government collected more than $1 billion in income taxes, of which 65% was from those making over $100,000. In 1921, when the tax rate on those making over $100,000 a year was 73%, the federal government collected a little over $700 million in income taxes, of which 30% was from those making over $100,000.
Opposition to farm subsidies.
Perhaps the most contentious issue of Coolidge's presidency was relief for farmers. Some in Congress proposed a bill designed to fight falling agricultural prices by allowing the federal government to purchase crops to sell abroad at lower prices. Agriculture Secretary Henry C. Wallace and other administration officials favored the bill when it was introduced in 1924, but rising prices convinced many in Congress that the bill was unnecessary, and it was defeated just before the 1924 elections.
In 1926, with farm prices falling once more, Senator Charles L. McNary and Representative Gilbert N. Haugen—both Republicans—proposed the McNary–Haugen Farm Relief Bill. The bill proposed a federal farm board that would purchase surplus production in high-yield years, and hold it, when feasible, for later sale or sell it abroad.
Coolidge opposed McNary-Haugen, saying that agriculture must stand "on an independent business basis" and that "government control cannot be divorced from political control". Instead of manipulating prices, he favored Herbert Hoover's proposal to increase profitability by modernizing agriculture. Secretary Mellon wrote a letter denouncing McNary-Haugen as unsound and likely to cause inflation, and it was defeated.
After McNary-Haugen's defeat, Coolidge supported a less radical measure, the Curtis-Crisp Act, which would have created a federal board to lend money to farm cooperatives in times of surplus. The bill did not pass. In February 1927, Congress took up McNary-Haugen again, this time narrowly passing it, and Coolidge vetoed it.
In his veto message, he expressed the belief that the bill would do nothing to help farmers, benefiting only exporters and expanding the federal bureaucracy. Congress did not override the veto. In May 1928, Congress passed the bill again by an increased majority, and Coolidge vetoed it again. "Farmers never have made much money," he said. "I do not believe we can do much about it."
Flood control.
Coolidge has often been criticized for his actions during the Great Mississippi Flood of 1927, the worst natural disaster to hit the Gulf Coast until Hurricane Katrina in 2005. Although he eventually named Hoover to a commission in charge of flood relief, scholars argue that, overall, Coolidge showed lack of interest in federal flood control.
Coolidge believed that visiting the region after the floods would accomplish nothing and be seen as political grandstanding. He also did not want to incur the federal spending that flood control would require. He believed that property owners should bear much of the cost. Congress wanted a bill that would place the federal government completely in charge of flood mitigation. When Congress passed a compromise measure in 1928, Coolidge declined to take credit for it and signed the bill in private on May 15.
Civil rights.
According to one biographer, Coolidge was "devoid of racial prejudice", but he rarely took the lead on civil rights. Coolidge disliked the Ku Klux Klan and no Klansman is known to have received an appointment from him. In the 1924 presidential election, his opponents, Robert La Follette and John Davis, and his running mate, Charles Dawes, often attacked the Klan, but Coolidge avoided the subject.
Due to Coolidge's failure to condemn the Klan, some African-American leaders such as former assistant attorney general William Henry Lewis endorsed Davis. Davis got little of the black vote outside Indiana, where Klan control of the Indiana Republican Party caused many blacks to vote Democratic. It is estimated that over 90% of non-Indiana blacks voted for Coolidge.
Secretary of Commerce Herbert Hoover was accused of running forced labor camps for African Americans during the Great Mississippi Flood of 1927, which led more African Americans to vote Democratic when Hoover was the Republican presidential nominee in 1928 and 1932. During Coolidge's administration, lynchings of African-Americans decreased and millions of people left the Ku Klux Klan.
Coolidge spoke in favor of African Americans' civil rights, saying in his first State of the Union address that their rights were "just as sacred as those of any other citizen" under the U.S. Constitution and that it was a "public and a private duty to protect those rights".
Coolidge repeatedly called for laws to make lynching a federal crime. It was already a state crime, though not always enforced. Congress refused to pass any such legislation. On June 2, 1924, Coolidge signed the Indian Citizenship Act, which granted U.S. citizenship to all Native Americans living on reservations. Those off reservations had long been citizens.
On June 6, 1924, Coolidge delivered a commencement address at historically black, non-segregated Howard University, in which he thanked and commended African Americans for their rapid advances in education and contributions to U.S. society over the years, as well as their eagerness to render their services as soldiers in the World War, all while faced with discrimination and prejudice at home.
In an October 1924 speech, Coolidge stressed tolerance of differences as an American value and thanked immigrants for their contributions to U.S. society, saying that they had "contributed much to making our country what it is". He said that although the diversity of peoples was a source of conflict and tension in Europe, it was a peculiarly "harmonious" benefit for the U.S. Coolidge added that the U.S. should assist and help immigrants and urged immigrants to reject "race hatreds" and "prejudices".
Foreign policy.
Coolidge was neither well versed nor very interested in world affairs. His focus was mainly on U.S. business, especially pertaining to trade, and on maintaining the status quo. Although not an isolationist, he was reluctant to enter into European involvements. Coolidge believed strongly in a non-interventionist foreign policy and supported American exceptionalism. He considered the 1920 Republican victory a rejection of the Wilsonian position that the U.S. should join the League of Nations.
Coolidge did not believe the League served U.S. interests. But he spoke in favor of joining the Permanent Court of International Justice (World Court), provided that the nation would not be bound by advisory decisions. In 1926, the Senate approved joining the Court, with reservations. The League of Nations accepted the reservations, but suggested some modifications of its own. The Senate failed to act, and so the U.S. did not join the World Court.
In 1924, the Coolidge administration nominated Charles Dawes to head the multinational committee that produced the Dawes Plan. It set fixed annual amounts for Germany's World War I reparations payments and authorized a large loan, mostly from U.S. banks, to help stabilize and stimulate the German economy. Coolidge attempted to pursue further curbs on naval strength after the successes of Harding's Washington Naval Conference by sponsoring the Geneva Naval Conference in 1927, which failed owing to a French and Italian boycott and the inability of Great Britain and the U.S. to agree on cruiser tonnages. Congress eventually authorized increased American naval spending in 1928.
The Kellogg–Briand Pact of 1928, named for U.S. Secretary of State Frank B. Kellogg and French Foreign Minister Aristide Briand, was a key peacekeeping initiative. Ratified in 1929, the treaty committed signatories—the U.S., the United Kingdom, France, Italy, Germany, and Japan—to "renounce war, as an instrument of national policy in their relations with one another". The treaty did not achieve its intended result—to outlaw war—but it did provide the founding principle for international law after World War II. Coolidge continued the Harding administration's policy of withholding recognition of the Soviet Union.
Efforts were made to normalize ties with post-Revolution Mexico. Coolidge recognized Mexico's new governments under Álvaro Obregón and Plutarco Elías Calles, and continued U.S. support for the elected Mexican government against the National League for the Defense of Religious Liberty during the Cristero War, lifting the arms embargo on Mexico. He appointed Dwight Morrow as Ambassador to Mexico, and Morrow succeeded in his objective of avoiding further conflict with Mexico.
Coolidge's administration saw continuity in the occupation of Nicaragua and Haiti. In 1924, Coolidge ended the U.S. occupation of the Dominican Republic as a result of withdrawal agreements finalized during Harding's administration. In 1925, Coolidge ordered the withdrawal of Marines stationed in Nicaragua following perceived stability after the 1924 Nicaraguan general election. In January 1927, he redeployed them there after failed attempts to peacefully resolve the rapid deterioration of political stability and avert the ensuing Constitutionalist War. He later sent Henry L. Stimson to mediate a peace deal that ended the civil war and extended the U.S. military presence in Nicaragua beyond Coolidge's presidency.
In January 1928, to extend an olive branch to Latin American leaders embittered over U.S. interventionist policies in Central America and the Caribbean, Coolidge led the U.S. delegation to the Sixth International Conference of American States in Havana, Cuba, the only international trip Coolidge made during his presidency. He was the last sitting U.S. president to visit Cuba until Barack Obama in 2016. For Canada, Coolidge authorized the St. Lawrence Seaway, a system of locks and canals that provided large vessels passage between the Atlantic Ocean and the Great Lakes.
Cabinet.
Although some of Harding's cabinet appointees were scandal-tarred, Coolidge initially retained all of them out of conviction that as successor to a deceased elected president, he was obligated to retain Harding's counselors and policies until the next election. He kept Harding's speechwriter Judson T. Welliver. Stuart Crawford replaced Welliver in November 1925. Coolidge appointed C. Bascom Slemp, a Virginia Congressman and experienced federal politician, to work jointly with Edward T. Clark, a Massachusetts Republican organizer whom he retained from his vice-presidential staff, as Secretaries to the President, a position equivalent to the modern White House Chief of Staff.
Perhaps the most powerful person in Coolidge's cabinet was Secretary of the Treasury Andrew Mellon, who controlled the administration's financial policies and was regarded by many, including House Minority Leader John Nance Garner, as more powerful than Coolidge himself. Commerce Secretary Herbert Hoover also held a prominent place in the cabinet, in part because Coolidge found value in Hoover's ability to win positive publicity with his pro-business proposals.
Secretary of State Charles Evans Hughes directed Coolidge's foreign policy until he resigned in 1925 following Coolidge's reelection. He was replaced by Frank B. Kellogg, who had previously served as a senator and ambassador to Great Britain. Coolidge made two other appointments after his reelection: William M. Jardine as Secretary of Agriculture and John G. Sargent as Attorney General. Coolidge had no vice president during his first term. Charles Dawes became vice president during Coolidge's second term, and Dawes and Coolidge clashed over farm policy and other issues.
Judicial appointments.
In 1925, Coolidge appointed one justice to the Supreme Court of the United States, Harlan F. Stone. Stone was Coolidge's fellow Amherst alumnus, a Wall Street lawyer and conservative Republican. In 1924, Stone was serving as the dean of Columbia Law School when Coolidge appointed him attorney general to restore the reputation of the Justice Department, which had been tarnished by Harding's attorney general, Harry M. Daugherty.
It does not appear that Coolidge considered appointing anyone other than Stone, although Stone urged him to appoint Benjamin N. Cardozo. Stone proved to be a firm believer in judicial restraint and was regarded as one of the court's three liberal justices who often voted to uphold New Deal legislation. President Franklin D. Roosevelt later appointed Stone chief justice.
Coolidge nominated 17 judges to the United States Courts of Appeals and 61 to the United States district courts. He appointed judges to various specialty courts, including Genevieve R. Cline, who became the first woman named to the federal judiciary when Coolidge placed her on the United States Customs Court in 1928. Coolidge signed the Judiciary Act of 1925 into law, allowing the Supreme Court more discretion over its workload.
1928 election.
In the summer of 1927, Coolidge vacationed in the Black Hills of South Dakota. While on vacation, he issued a terse statement that he would not seek a second full term as president: "I do not choose to run for President in 1928." After allowing the reporters to take that in, Coolidge elaborated. "If I take another term, I will be in the White House till 1933 … Ten years in Washington is longer than any other man has had it—too long!"
In his memoirs, Coolidge explained his decision not to run: "The Presidential office takes a heavy toll of those who occupy it and those who are dear to them. While we should not refuse to spend and be spent in the service of our country, it is hazardous to attempt what we feel is beyond our strength to accomplish."
After leaving office, he and Grace returned to Northampton, where he wrote his memoirs. The Republicans retained the White House in 1928 when Herbert Hoover was elected in a landslide. Coolidge was reluctant to endorse Hoover. On one occasion he remarked, "for six years that man has given me unsolicited advice—all of it bad." But Coolidge had no desire to split the party by publicly opposing Hoover's nomination.
Post-presidency (1929–1933).
After his presidency, Coolidge retired to a spacious home in Northampton, "The Beeches". He kept a Hacker runabout boat on the Connecticut River, and local boating enthusiasts often observed him on the water. During this time, he chaired the Non-Partisan Railroad Commission, an entity several banks and corporations created to survey the country's long-term transportation needs and make recommendations for improvements. He was an honorary president of the American Foundation for the Blind, a director of New York Life Insurance Company, president of the American Antiquarian Society, and a trustee of Amherst College.
Coolidge published his autobiography in 1929 and wrote a syndicated newspaper column, "Calvin Coolidge Says", from 1930 to 1931. Faced with looming defeat in 1932, some Republicans spoke of rejecting Herbert Hoover as their party's nominee, and instead drafting Coolidge to run, but the former President made it clear that he was not interested in running again, and that he would publicly repudiate any effort to draft him, should it come about. Hoover was renominated, and Coolidge made several radio addresses in support of him. Hoover lost the general election to Franklin D. Roosevelt in a landslide.
Death.
Coolidge died suddenly of coronary thrombosis at The Beeches on January 5, 1933, at 12:45 p.m., aged 60. Shortly before his death, he told an old friend, "I feel I no longer fit in with these times." Coolidge is buried in Plymouth Notch Cemetery in Plymouth Notch, Vermont. The nearby family home is maintained as one of the original buildings on the Calvin Coolidge Homestead District site. In July 1972, the State of Vermont dedicated a new visitors' center nearby to mark Coolidge's 100th birthday.
Legacy.
Despite being one of the most popular U.S. presidents while in office, Coolidge is generally rated below average by modern historians. David Greenberg, a scholar from Rutgers University, said, "although the public liked and admired Calvin Coolidge during his tenure, the Great Depression that began in 1929 seriously eroded his reputation and changed public opinion about his policies".
Historians have criticized Coolidge for his lack of assertiveness and have called him a "do nothing president" who enjoyed high public approval only because he was in office when things were going well around the world. Some historians have scrutinized Coolidge for signing laws that broadened federal regulatory authority and say it paved the way for corruption in future presidential administrations.
In a 1982 "Chicago Tribune" survey of 49 historians, Coolidge was ranked the eighth-worst U.S. president. In 2006, British journalist William Shawcross said he believed Coolidge was the worst president of the 20th century. In a 2021 C-SPAN survey, historians ranked Coolidge 24th out of 44 presidents. They gave him high ratings for "moral authority" and "administrative skills" but poor ratings for "setting an agenda" and "pursuing equal justice".
Although historians generally view Coolidge unfavorably, his hands-off government approach continues to resonate with modern conservatives and Republican politicians. In 1981, President Ronald Reagan publicly praised Coolidge's "laissez-faire" policy. Later, Reagan had a portrait of Thomas Jefferson in the White House Cabinet Room replaced by one of Coolidge.
Radio, film, and commemorations.
Despite his reputation as a quiet and even reclusive politician, Coolidge made use of the new medium of radio and made radio history several times while president. He made himself available to reporters, giving 520 press conferences, meeting with reporters more regularly than any president before or since. Coolidge's second inauguration was the first presidential inauguration broadcast on radio. On December 6, 1923, his speech to Congress was broadcast on radio, the first presidential radio address.
Coolidge signed the Radio Act of 1927, which assigned regulation of radio to the newly created Federal Radio Commission. On August 11, 1924, Theodore W. Case, using the Phonofilm sound-on-film process he developed for Lee de Forest, filmed Coolidge on the White House lawn, making him the first president to appear in a sound film, "President Coolidge, Taken on the White House Grounds". When Charles Lindbergh arrived in Washington on a U.S. Navy ship after his celebrated 1927 trans-Atlantic flight, Coolidge welcomed him back to the U.S. and presented him with the Distinguished Flying Cross, and the event was filmed.
|
6198
|
7903804
|
https://en.wikipedia.org/wiki?curid=6198
|
Convention on Biological Diversity
|
The Convention on Biological Diversity (CBD), known informally as the Biodiversity Convention, is a multilateral treaty. The Convention has three main goals: the conservation of biological diversity (or biodiversity); the sustainable use of its components; and the fair and equitable sharing of benefits arising from genetic resources. Its objective is to develop national strategies for the conservation and sustainable use of biological diversity, and it is often seen as the key document regarding sustainable development.
The Convention was opened for signature at the Earth Summit in Rio de Janeiro on 5 June 1992 and entered into force on 29 December 1993. The United States is the only UN member state which has not ratified the Convention. It has two supplementary agreements, the Cartagena Protocol and Nagoya Protocol.
The Cartagena Protocol on Biosafety to the Convention on Biological Diversity is an international treaty governing the movements of living modified organisms (LMOs) resulting from modern biotechnology from one country to another. It was adopted on 29 January 2000 as a supplementary agreement to the CBD and entered into force on 11 September 2003.
The Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization (ABS) to the Convention on Biological Diversity is another supplementary agreement to the CBD. It provides a transparent legal framework for the effective implementation of one of the three objectives of the CBD: the fair and equitable sharing of benefits arising out of the utilization of genetic resources. The Nagoya Protocol was adopted on 29 October 2010 in Nagoya, Japan, and entered into force on 12 October 2014.
2010 was also the International Year of Biodiversity, and the Secretariat of the CBD was its focal point. Following a recommendation of CBD signatories at Nagoya, the UN declared 2011 to 2020 as the United Nations Decade on Biodiversity in December 2010. The Convention's "Strategic Plan for Biodiversity 2011–2020", created in 2010, includes the Aichi Biodiversity Targets.
The meetings of the Parties to the Convention are known as Conferences of the Parties (COP), with the first one (COP 1) held in Nassau, Bahamas, in 1994 and the most recent one (COP 16) in 2024 in Cali, Colombia.
In the area of marine and coastal biodiversity, the CBD's present focus is to identify Ecologically or Biologically Significant Marine Areas (EBSAs) in specific ocean locations based on scientific criteria. The aim is to create an international legally binding instrument (ILBI) involving area-based planning and decision-making under UNCLOS to support the conservation and sustainable use of marine biological diversity beyond areas of national jurisdiction (BBNJ treaty or High Seas Treaty).
Origin and scope.
The notion of an international convention on biodiversity was conceived at a United Nations Environment Programme (UNEP) Ad Hoc Working Group of Experts on Biological Diversity in November 1988. The following year, the Ad Hoc Working Group of Technical and Legal Experts was established to draft a legal text addressing the conservation and sustainable use of biological diversity, as well as the sharing with sovereign states and local communities of the benefits arising from its utilization. In 1991, an intergovernmental negotiating committee was established, tasked with finalizing the Convention's text.
A Conference for the Adoption of the Agreed Text of the Convention on Biological Diversity was held in Nairobi, Kenya, in 1992, and its conclusions were distilled in the Nairobi Final Act. The Convention's text was opened for signature on 5 June 1992 at the United Nations Conference on Environment and Development (the Rio "Earth Summit"). By its closing date, 4 June 1993, the Convention had received 168 signatures. It entered into force on 29 December 1993.
The Convention recognized for the first time in international law that the conservation of biodiversity is "a common concern of humankind" and is an integral part of the development process. The agreement covers all ecosystems, species, and genetic resources. It links traditional conservation efforts to the economic goal of using biological resources sustainably. It sets principles for the fair and equitable sharing of the benefits arising from the use of genetic resources, notably those destined for commercial use. It also covers the rapidly expanding field of biotechnology through its Cartagena Protocol on Biosafety, addressing technology development and transfer, benefit-sharing and biosafety issues. Importantly, the Convention is legally binding; countries that join it ('Parties') are obliged to implement its provisions.
The Convention reminds decision-makers of the finite status of natural resources and sets out a philosophy of sustainable use. While past conservation efforts were aimed at protecting particular species and habitats, the Convention recognizes that ecosystems, species and genes must be used for the benefit of humans. However, this should be done in a way and at a rate that does not lead to the long-term decline of biological diversity.
The Convention also offers decision-makers guidance based on the precautionary principle which demands that where there is a threat of significant reduction or loss of biological diversity, lack of full scientific certainty should not be used as a reason for postponing measures to avoid or minimize such a threat. The Convention acknowledges that substantial investments are required to conserve biological diversity. It argues, however, that conservation will bring us significant environmental, economic and social benefits in return.
In 2010, the Parties to the Convention on Biological Diversity adopted a moratorium on some forms of geoengineering.
Executive secretary.
As of April 2024, the acting executive secretary is Astrid Schomaker.
The previous executive secretaries were: David Cooper (2023–2024), Elizabeth Maruma Mrema (2020–2023), Cristiana Pașca Palmer (2017–2019), Braulio Ferreira de Souza Dias (2012–2017), Ahmed Djoghlaf (2006–2012), Hamdallah Zedan (1998–2005), Calestous Juma (1995–1998), and Angela Cropper (1993–1995).
Issues.
Some of the many issues dealt with under the Convention include:
International bodies established.
Conference of the Parties (COP).
The Convention's governing body is the Conference of the Parties (COP), consisting of all governments (and regional economic integration organizations) that have ratified the treaty. This ultimate authority reviews progress under the Convention, identifies new priorities, and sets work plans for members. The COP can also make amendments to the Convention, create expert advisory bodies, review progress reports by member nations, and collaborate with other international organizations and agreements.
The Conference of the Parties uses expertise and support from several other bodies that are established by the Convention. In addition to committees or mechanisms established on an ad hoc basis, the main organs are:
CBD Secretariat.
The CBD Secretariat, based in Montreal, Quebec, Canada, operates under UNEP, the United Nations Environment Programme. Its main functions are to organize meetings, draft documents, assist member governments in the implementation of the programme of work, coordinate with other international organizations, and collect and disseminate information.
Subsidiary Body for Scientific, Technical and Technological Advice (SBSTTA).
The SBSTTA is a committee composed of experts from member governments competent in relevant fields. It plays a key role in making recommendations to the COP on scientific and technical issues. It provides assessments of the status of biological diversity and of various measures taken in accordance with the Convention, and also gives recommendations to the Conference of the Parties, which may be endorsed in whole, in part or in modified form by the COP. SBSTTA has met 26 times, with the 26th meeting taking place in Nairobi, Kenya, in 2024.
Subsidiary Body on Implementation.
In 2014, the Conference of the Parties to the Convention on Biological Diversity established the Subsidiary Body on Implementation (SBI) to replace the Ad Hoc Open-ended Working Group on Review of Implementation of the Convention. The four functions and core areas of work of SBI are: (a) review of progress in implementation; (b) strategic actions to enhance implementation; (c) strengthening means of implementation; and (d) operations of the Convention and the Protocols. The first meeting of the SBI was held on 2–6 May 2016 and the second meeting was held on 9–13 July 2018, both in Montreal, Canada. The latest (fifth) meeting of the SBI was held in October 2024 in Cali, Colombia. The Bureau of the Conference of the Parties serves as the Bureau of the SBI. The current chair of the SBI is Ms. Clarissa Souza Della Nina of Brazil.
Parties.
As of 2016, the Convention has 196 Parties, which includes 195 states and the European Union. All UN member states—with the exception of the United States—have ratified the treaty. Non-UN member states that have ratified are the Cook Islands, Niue, and the State of Palestine. The Holy See and the states with limited recognition are non-Parties. The US has signed but not ratified the treaty, because ratification requires a two-thirds majority in the Senate and is blocked by Republican Party senators.
The European Union created the Cartagena Protocol (see below) in 2000 to enhance biosafety regulation and propagate the "precautionary principle" over the "sound science principle" defended by the United States. Whereas the impact of the Cartagena Protocol on domestic regulations has been substantial, its impact on international trade law remains uncertain. In 2006, the World Trade Organization (WTO) ruled that the European Union had violated international trade law between 1999 and 2003 by imposing a moratorium on the approval of genetically modified organisms (GMO) imports. Disappointing the United States, the panel nevertheless "decided not to decide" by not invalidating the stringent European biosafety regulations.
Implementation by the Parties to the Convention is achieved using two means:
National Biodiversity Strategies and Action Plans (NBSAP).
National Biodiversity Strategies and Action Plans (NBSAP) are the principal instruments for implementing the Convention at the national level. The Convention requires countries to prepare a national biodiversity strategy and to ensure that this strategy is included in planning for activities in all sectors where diversity may be impacted. As of early 2012, 173 Parties had developed NBSAPs.
The United Kingdom, New Zealand and Tanzania carried out elaborate responses to conserve individual species and specific habitats. The United States of America, a signatory that had not yet ratified the treaty by 2010, produced one of the most thorough implementation programs through species recovery programs and other mechanisms long in place in the U.S. for species conservation.
Singapore established a detailed "National Biodiversity Strategy and Action Plan". The "National Biodiversity Centre" of Singapore represents Singapore in the Convention for Biological Diversity.
National Reports.
In accordance with Article 26 of the Convention, Parties prepare national reports on the status of implementation of the Convention.
Protocols and plans developed by CBD.
Cartagena Protocol (2000).
The Cartagena Protocol on Biosafety, also known as the Biosafety Protocol, was adopted in January 2000, after a CBD Open-ended Ad Hoc Working Group on Biosafety had met six times between July 1996 and February 1999. The Working Group submitted a draft text of the Protocol for consideration by Conference of the Parties at its first extraordinary meeting, which was convened for the express purpose of adopting a protocol on biosafety to the Convention on Biological Diversity. After a few delays, the Cartagena Protocol was eventually adopted on 29 January 2000. The Biosafety Protocol seeks to protect biological diversity from the potential risks posed by living modified organisms resulting from modern biotechnology.
The Biosafety Protocol makes clear that products from new technologies must be based on the precautionary principle, and allows developing nations to balance public health against economic benefits. It lets countries, for example, ban imports of a genetically modified organism if they feel there is not enough scientific evidence that the product is safe, and it requires exporters to label shipments containing genetically modified commodities such as corn or cotton.
The required number of 50 instruments of ratification/accession/approval/acceptance by countries was reached in May 2003. In accordance with the provisions of its Article 37, the Protocol entered into force on 11 September 2003.
Global Strategy for Plant Conservation (2002).
In April 2002, the Parties of the UN CBD adopted the recommendations of the Gran Canaria Declaration Calling for a Global Plant Conservation Strategy, and adopted a 16-point plan aiming to slow the rate of plant extinctions around the world by 2010.
Nagoya Protocol (2010).
The Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity was adopted on 29 October 2010 in Nagoya, Aichi Prefecture, Japan, at the tenth meeting of the Conference of the Parties, and entered into force on 12 October 2014. The protocol is a supplementary agreement to the Convention on Biological Diversity, and provides a transparent legal framework for the effective implementation of one of the three objectives of the CBD: the fair and equitable sharing of benefits arising out of the utilization of genetic resources. It thereby contributes to the conservation and sustainable use of biodiversity.
Strategic Plan for Biodiversity 2011–2020.
Also at the tenth meeting of the Conference of the Parties, held from 18 to 29 October 2010 in Nagoya, a revised and updated "Strategic Plan for Biodiversity, 2011–2020" was agreed and published. This document included the "Aichi Biodiversity Targets", comprising 20 targets that address each of the five strategic goals defined in the plan.
Upon the launch of Agenda 2030, CBD released a technical note mapping and identifying synergies between the 17 Sustainable Development Goals (SDGs) and the 20 Aichi Biodiversity Targets. This helps to understand the contributions of biodiversity to achieving the SDGs.
Post-2020 Global Biodiversity Framework.
A new plan, known as the post-2020 Global Biodiversity Framework (GBF) was developed to guide action through 2030. A first draft of this framework was released in July 2021, and its final content was discussed and negotiated as part of the COP 15 meetings. Reducing agricultural pollution and sharing the benefits of digital sequence information arose as key points of contention among Parties during development of the framework. A final version was adopted by the Convention on 19 December 2022. The framework includes a number of ambitious goals, including a commitment to designate at least 30 percent of global land and sea as protected areas (known as the "30 by 30" initiative).
Marine and coastal biodiversity.
The CBD has a significant focus on marine and coastal biodiversity. A series of expert workshops have been held (2018–2022) to identify options for modifying the description of Ecologically or Biologically Significant Marine Areas (EBSAs) and describing new areas. These have focused on the North-East, North-West and South-Eastern Atlantic Ocean, Baltic Sea, Caspian Sea, Black Sea, Seas of East Asia, North-West Indian Ocean and Adjacent Gulf Areas, Southern and North-East Indian Ocean, Mediterranean Sea, North and South Pacific, Eastern Tropical and Temperate Pacific, Wider Caribbean and Western Mid-Atlantic. The workshop meetings have followed the EBSA process based on internationally agreed scientific criteria. This is aimed at creating an international legally binding instrument (ILBI) under UNCLOS to support the conservation and sustainable use of marine biological diversity beyond areas of national jurisdiction (BBNJ or High Seas Treaty). The central mechanism is area-based planning and decision-making. It integrates EBSAs, Vulnerable Marine Ecosystems (VMEs) and high seas marine protected areas with Blue Growth scenarios. There is also linkage with the EU Marine Strategy Framework Directive.
Criticism.
The CBD has been criticized on the grounds that its implementation has been weakened by the resistance of Western countries to the Convention's pro-South provisions. The CBD is also regarded as a case of a hard treaty gone soft over its implementation trajectory. The argument for enforcing the treaty as a legally binding multilateral instrument, with the Conference of the Parties reviewing infractions and non-compliance, is also gaining strength.
Although the Convention explicitly states that all forms of life are covered by its provisions, examination of reports and of national biodiversity strategies and action plans submitted by participating countries shows that in practice this is not happening. The fifth report of the European Union, for example, makes frequent reference to animals (particularly fish) and plants, but does not mention bacteria, fungi or protists at all. The International Society for Fungal Conservation has assessed more than 100 of these CBD documents for their coverage of fungi using defined criteria to place each in one of six categories. No documents were assessed as good or adequate, less than 10% as nearly adequate or poor, and the rest as deficient, seriously deficient or totally deficient.
Scientists working with biodiversity and medical research are expressing fears that the Nagoya Protocol is counterproductive, and will hamper disease prevention and conservation efforts, and that the threat of imprisonment of scientists will have a chilling effect on research. Non-commercial researchers and institutions such as natural history museums fear maintaining biological reference collections and exchanging material between institutions will become difficult, and medical researchers have expressed alarm at plans to expand the protocol to make it illegal to publicly share genetic information, e.g. via GenBank.
William Yancey Brown, when with the Brookings Institution, suggested that the Convention on Biological Diversity should include the preservation of intact genomes and viable cells for every known species and for new species as they are discovered.
Meetings of the Parties.
A Conference of the Parties (COP) was held annually for the first three years, beginning in 1994, and thereafter biennially in even-numbered years.
1994 COP 1.
The first ordinary meeting of the Parties to the Convention took place in November and December 1994, in Nassau, Bahamas. The International Coral Reef Initiative (ICRI) was launched at this first COP for the Convention on Biological Diversity.
1995 COP 2.
The second ordinary meeting of the Parties to the Convention took place in November 1995, in Jakarta, Indonesia.
1996 COP 3.
The third ordinary meeting of the Parties to the Convention took place in November 1996, in Buenos Aires, Argentina.
1998 COP 4.
The fourth ordinary meeting of the Parties to the Convention took place in May 1998, in Bratislava, Slovakia.
1999 EX-COP 1 (Cartagena).
The First Extraordinary Meeting of the Conference of the Parties took place in February 1999, in Cartagena, Colombia. A series of meetings led to the adoption of the Cartagena Protocol on Biosafety in January 2000, effective from 2003.
2000 COP 5.
The fifth ordinary meeting of the Parties to the Convention took place in May 2000, in Nairobi, Kenya.
2002 COP 6.
The sixth ordinary meeting of the Parties to the Convention took place in April 2002, in The Hague, Netherlands.
2004 COP 7.
The seventh ordinary meeting of the Parties to the Convention took place in February 2004, in Kuala Lumpur, Malaysia.
2006 COP 8.
The eighth ordinary meeting of the Parties to the Convention took place in March 2006, in Curitiba, Brazil.
2008 COP 9.
The ninth ordinary meeting of the Parties to the Convention took place in May 2008, in Bonn, Germany.
2010 COP 10 (Nagoya).
The tenth ordinary meeting of the Parties to the Convention took place in October 2010, in Nagoya, Japan. It was at this meeting that the Nagoya Protocol was adopted.
2010 was the International Year of Biodiversity, which resulted in 110 reports on the loss of biodiversity in different countries, but little or no progress toward the goal of "significant reduction" in the problem. Following a recommendation of CBD signatories, the UN declared 2011 to 2020 as the United Nations Decade on Biodiversity.
2012 COP 11.
Leading up to the Conference of the Parties (COP 11) meeting on biodiversity in Hyderabad, India, in 2012, preparations for a World Wide Views on Biodiversity had begun, involving old and new partners and building on the experiences from the World Wide Views on Global Warming.
2014 COP 12.
Under the theme, "Biodiversity for Sustainable Development", thousands of representatives of governments, NGOs, indigenous peoples, scientists and the private sector gathered in Pyeongchang, Republic of Korea in October 2014 for the 12th meeting of the Conference of the Parties to the Convention on Biological Diversity (COP 12).
From 6–17 October 2014, Parties discussed the implementation of the Strategic Plan for Biodiversity 2011–2020 and its Aichi Biodiversity Targets, which are to be achieved by the end of the decade. The results of Global Biodiversity Outlook 4, the flagship assessment report of the CBD, informed the discussions.
The conference gave a mid-term evaluation of the UN Decade on Biodiversity (2011–2020) initiative, which aims to promote the conservation and sustainable use of nature. The meeting adopted a total of 35 decisions, including one on "Mainstreaming gender considerations", which incorporates a gender perspective into the analysis of biodiversity.
At the end of the meeting, the Parties adopted the "Pyeongchang Road Map", which addresses ways to achieve biodiversity goals through technology cooperation, funding and strengthening the capacity of developing countries.
2016 COP 13.
The thirteenth ordinary meeting of the Parties to the Convention took place between 2 and 17 December 2016 in Cancún, Mexico.
2018 COP 14.
The 14th ordinary meeting of the Parties to the Convention took place on 17–29 November 2018, in Sharm El-Sheikh, Egypt. The 2018 UN Biodiversity Conference closed on 29 November 2018 with broad international agreement on reversing the global destruction of nature and the biodiversity loss threatening all forms of life on Earth. Parties adopted the Voluntary Guidelines for the design and effective implementation of ecosystem-based approaches to climate change adaptation and disaster risk reduction. Governments also agreed to accelerate action through 2020 to achieve the Aichi Biodiversity Targets agreed in 2010. Work to achieve these targets would take place at the global, regional, national and subnational levels.
2021/2022 COP 15.
The 15th meeting of the Parties was originally scheduled to take place in Kunming, China in 2020, but was postponed several times due to the COVID-19 pandemic. After the start date was delayed for a third time, the Convention was split into two sessions. A mostly online event took place in October 2021, where over 100 nations signed the Kunming declaration on biodiversity. The theme of the declaration was "Ecological Civilization: Building a Shared Future for All Life on Earth". Twenty-one action-oriented draft targets were provisionally agreed in the October meeting, to be further discussed in the second session: an in-person event that was originally scheduled to start in April 2022, but was rescheduled to occur later in 2022. The second part of COP 15 ultimately took place in Montreal, Canada, from 5–17 December 2022. At the meeting, the Parties to the Convention adopted a new action plan, the Kunming-Montreal Global Biodiversity Framework.
2024 COP 16.
The 16th meeting of the Parties was held in Cali, Colombia, in 2024. Turkey was originally going to host the meeting but had to withdraw after a series of earthquakes in February 2023.
|
6199
|
1295542808
|
https://en.wikipedia.org/wiki?curid=6199
|
Convention on Fishing and Conservation of the Living Resources of the High Seas
|
The Convention on Fishing and Conservation of Living Resources of the High Seas is an agreement designed to solve, through international cooperation, the problems involved in the conservation of the living resources of the high seas, considering that the development of modern technology has placed some of these resources in danger of overexploitation. The convention opened for signature on 29 April 1958 and entered into force on 20 March 1966.
Participation.
"Parties" – (39): Australia, Belgium, Bosnia and Herzegovina, Burkina Faso, Cambodia, Colombia, Republic of the Congo, Denmark, Dominican Republic, Fiji, Finland, France, Haiti, Jamaica, Kenya, Lesotho, Madagascar, Malawi, Malaysia, Mauritius, Mexico, Montenegro, Netherlands, Nigeria, Portugal, Senegal, Serbia, Sierra Leone, Solomon Islands, South Africa, Spain, Switzerland, Thailand, Tonga, Trinidad and Tobago, Uganda, United Kingdom, United States, Venezuela.
"Countries that have signed, but not yet ratified" – (21): Afghanistan, Argentina, Bolivia, Canada, Costa Rica, Cuba, Ghana, Iceland, Indonesia, Iran, Ireland, Israel, Lebanon, Liberia, Nepal, New Zealand, Pakistan, Panama, Sri Lanka, Tunisia, Uruguay.
|
6200
|
9784415
|
https://en.wikipedia.org/wiki?curid=6200
|
Convention on Long-Range Transboundary Air Pollution
|
The Convention on Long-Range Transboundary Air Pollution, often abbreviated as Air Convention or CLRTAP, is intended to protect the human environment against air pollution and to gradually reduce and prevent air pollution, including long-range transboundary air pollution. It is implemented by the European Monitoring and Evaluation Programme (EMEP), directed by the United Nations Economic Commission for Europe (UNECE).
The convention opened for signature on 13 November 1979 and entered into force on 16 March 1983.
Secretariat.
The Convention, which now has 51 Parties, identifies the Executive Secretary of the United Nations Economic Commission for Europe (UNECE) as its secretariat. The current parties to the Convention are shown on the map.
The Convention is implemented by the European Monitoring and Evaluation Programme (EMEP) (short for "Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe"). Results of the EMEP programme are published on the EMEP website, www.emep.int.
Procedure.
The aim of the Convention is that Parties shall endeavour to limit and, as far as possible, gradually reduce and prevent air pollution including long-range transboundary air pollution. Parties develop policies and strategies to combat the discharge of air pollutants through exchanges of information, consultation, research and monitoring.
The Parties meet annually at sessions of the Executive Body to review ongoing work and plan future activities including a workplan for the coming year. The three main subsidiary bodies – the Working Group on Effects, the Steering Body to EMEP and the Working Group on Strategies and Review – as well as the Convention's Implementation Committee, report to the Executive Body each year.
Currently, the Convention's priority activities include review and possible revision of its most recent protocols, implementation of the Convention and its protocols across the entire UNECE region (with special focus on Eastern Europe, the Caucasus and Central Asia and South-East Europe) and sharing its knowledge and information with other regions of the world.
Protocols.
Since 1979 the Convention on Long-range Transboundary Air Pollution has addressed some of the major environmental problems of the UNECE region through scientific collaboration and policy negotiation. The Convention has been extended by eight protocols that identify specific measures to be taken by Parties to cut their emissions of air pollutants.
|
6201
|
19339494
|
https://en.wikipedia.org/wiki?curid=6201
|
CITES
|
CITES (shorter acronym for the Convention on International Trade in Endangered Species of Wild Fauna and Flora, also known as the Washington Convention) is a multilateral treaty to protect endangered plants and animals from the threats of international trade. It was drafted as a result of a resolution adopted in 1963 at a meeting of members of the International Union for Conservation of Nature (IUCN). The convention was opened for signature in 1973 and CITES entered into force on 1 July 1975.
Its aim is to ensure that international trade (import/export) in specimens of animals and plants included under CITES does not threaten the survival of the species in the wild. This is achieved via a system of permits and certificates. CITES affords varying degrees of protection to more than 40,900 species.
The current Secretary-General of CITES is Ivonne Higuero, who has held the position since 2018.
Background.
CITES is one of the largest and oldest conservation and sustainable use agreements in existence. There are three working languages of the convention (English, French and Spanish) in which all documents are made available. Participation is voluntary and countries that have agreed to be bound by the convention are known as Parties. Although CITES is legally binding on the Parties, it does not take the place of national laws. Rather it provides a framework respected by each Party, which must adopt their own domestic legislation to implement CITES at the national level.
Originally, CITES addressed depletion resulting from demand for luxury goods such as furs in Western countries, but with the rising wealth of Asia, particularly in China, the focus changed to products demanded there, particularly those used for luxury goods such as elephant ivory or rhinoceros horn. As of 2022, CITES has expanded to include thousands of species previously considered unremarkable and in no danger of extinction such as manta rays.
Ratifications.
The text of the convention was finalized at a meeting of representatives of 80 countries in Washington, D.C., United States, on 3 March 1973. It was then open for signature until 31 December 1974. It entered into force after the 10th ratification by a signatory country, on 1 July 1975. Countries that signed the Convention become Parties by ratifying, accepting or approving it. By the end of 2003, all signatory countries had become Parties. States that were not signatories may become Parties by acceding to the convention. The convention currently has 185 parties, including 184 states and the European Union.
The CITES Convention includes provisions and rules for trade with non-Parties. All member states of the United Nations are party to the treaty, with the exception of North Korea, Federated States of Micronesia, Haiti, Kiribati, Marshall Islands, Nauru, South Sudan, East Timor, and Tuvalu. UN observer the Holy See is also not a member. The Faroe Islands, an autonomous region in the Kingdom of Denmark, is also treated as a non-Party to CITES (both the Danish mainland and Greenland are part of CITES).
An amendment to the text of the convention, known as the Gaborone Amendment allows regional economic integration organizations (REIO), such as the European Union, to have the status of a member state and to be a Party to the convention. The REIO can vote at CITES meetings with the number of votes representing the number of members in the REIO, but it does not have an additional vote.
In accordance with Article XVII, paragraph 3, of the CITES Convention, the Gaborone Amendment entered into force on 29 November 2013, 60 days after 54 (two-thirds) of the 80 States that were party to CITES on 30 April 1983 deposited their instrument of acceptance of the amendment. At that time it entered into force only for those States that had accepted the amendment. The amended text of the convention will apply automatically to any State that becomes a Party after 29 November 2013. For States that became party to the convention before that date and have not accepted the amendment, it will enter into force 60 days after they accept it.
Governing structure of CITES.
CITES operates to support the member Parties. This support consists of input from three committees (the Standing, Animals and Plants Committees), which are overseen by the Secretary-General. The secretariat position has been held by a variety of people from different nations.
Timeline of CITES Secretaries-General.
1978–1981: Peter H. Sand
He was born in Bavaria, Germany, and was educated in international law in Germany, France and Canada. He became a professor and an author focusing on environmental law, and held other positions such as Director-General of the IUCN and legal advisor for environmental affairs to the World Bank.
1982–1990: Eugene Lapointe
A Canadian native, Lapointe served in the military for many years and acted as a diplomat before heading CITES. He is currently an author and President of the IWMC World Conservation Trust, a non-profit organization that promotes wildlife conservation with an emphasis on a human-centered approach to natural resources.
1991–1998: Izgrev Topkov
Born and raised in Bulgaria, Topkov was a diplomat before managing CITES, and was removed from the position following the misuse of permits in violation of CITES guidelines.
1999–2010: Willem Wijnstekers
A native of the Netherlands and a graduate of the University of Amsterdam, Wijnstekers held the position of Secretary-General for the longest period and is now an author.
2010–2018: John E. Scanlon
An Australian who studied environmental law, Scanlon was active in combating the illegal animal trade and currently works to protect elephants in Africa as CEO of the Elephant Protection Initiative Foundation (EPIF).
2018–present: Ivonne Higuero
The first woman to hold the position, Higuero was educated in environmental economics and is from Panama.
Regulation of trade.
CITES works by subjecting international trade in specimens of listed taxa to controls as they move across international borders. CITES specimens can include a wide range of items including the whole animal/plant (whether alive or dead), or a product that contains a part or derivative of the listed taxa such as cosmetics or traditional medicines.
Four types of trade are recognized by CITES: import, export, re-export (export of any specimen that has previously been imported) and introduction from the sea (transportation into a state of specimens of any species which were taken in the marine environment not under the jurisdiction of any state). The CITES definition of "trade" does not require a financial transaction to occur. All trade in specimens of species covered by CITES must be authorized through a system of permits and certificates prior to the trade taking place. CITES permits and certificates are issued by one or more Management Authorities in charge of administering the CITES system in each country. Management Authorities are advised by one or more Scientific Authorities on the effects of trade of the specimen on the status of CITES-listed species. CITES permits and certificates must be presented to relevant border authorities in each country in order to authorize the trade.
Each party must enact their own domestic legislation to bring the provisions of CITES into effect in their territories. Parties may choose to take stricter domestic measures than CITES provides (for example by requiring permits/certificates in cases where they would not normally be needed or by prohibiting trade in some specimens).
Appendices.
Over 40,900 species, subspecies and populations are protected under CITES. Each protected taxa or population is included in one of three lists called Appendices. The Appendix that lists a taxon or population reflects the level of the threat posed by international trade and the CITES controls that apply.
Taxa may be split-listed meaning that some populations of a species are on one Appendix, while some are on another. The African bush elephant ("Loxodonta africana") is currently split-listed, with all populations except those of Botswana, Namibia, South Africa and Zimbabwe listed in Appendix I. Those of Botswana, Namibia, South Africa and Zimbabwe are listed in Appendix II. There are also species that have only some populations listed in an Appendix. One example is the pronghorn ("Antilocapra americana"), a ruminant native to North America. Its Mexican population is listed in Appendix I, but its U.S. and Canadian populations are not listed (though certain U.S. populations in Arizona are nonetheless protected under other domestic legislation, in this case the Endangered Species Act).
Taxa are proposed for inclusion, amendment or deletion in Appendices I and II at meetings of the Conference of the Parties (CoP), which are held approximately once every three years. Amendments to listing in Appendix III may be made unilaterally by individual parties.
Appendix I.
Appendix I taxa are those that are threatened with extinction and to which the highest level of CITES protection is afforded. Commercial trade in wild-sourced specimens of these taxa is not permitted and non-commercial trade is strictly controlled by requiring an import permit and export permit to be granted by the relevant Management Authorities in each country before the trade occurs.
Notable taxa listed in Appendix I include the red panda ("Ailurus fulgens"), western gorilla ("Gorilla gorilla"), the chimpanzee species ("Pan spp."), tigers ("Panthera tigris" sp.), Asian elephant ("Elephas maximus"), snow leopard ("Panthera uncia"), red-shanked douc ("Pygathrix nemaeus"), some populations of African bush elephant ("Loxodonta africana"), and the monkey puzzle tree ("Araucaria araucana").
Appendix II.
Appendix II taxa are those that are not necessarily threatened with extinction, but trade must be controlled in order to avoid utilization incompatible with their survival. Appendix II taxa may also include species similar in appearance to species already listed in the Appendices. The vast majority of taxa listed under CITES are listed in Appendix II. Any trade in Appendix II taxa standardly requires a CITES export permit or re-export certificate to be granted by the Management Authority of the exporting country before the trade occurs.
Examples of taxa listed on Appendix II are the great white shark ("Carcharodon carcharias"), the American black bear ("Ursus americanus"), Hartmann's mountain zebra ("Equus zebra hartmannae"), green iguana ("Iguana iguana"), queen conch ("Strombus gigas"), emperor scorpion ("Pandinus imperator"), Mertens' water monitor ("Varanus mertensi"), bigleaf mahogany ("Swietenia macrophylla"), lignum vitae ("Guaiacum officinale"), the chambered nautilus ("Nautilus pompilius"), all stony corals ("Scleractinia" spp.), Jungle cat ("Felis chaus") and American ginseng ("Panax quinquefolius").
Appendix III.
Appendix III species are those that are protected in at least one country, and that country has asked other CITES Parties for assistance in controlling the trade.
Any trade in Appendix III species standardly requires a CITES export permit (if sourced from the country that listed the species) or a certificate of origin (from any other country) to be granted before the trade occurs.
Examples of species listed on Appendix III and the countries that listed them are the Hoffmann's two-toed sloth ("Choloepus hoffmanni") by Costa Rica, sitatunga ("Tragelaphus spekii") by Ghana and African civet ("Civettictis civetta") by Botswana.
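The standard document requirements described above for Appendices I–III amount to a small set of rules keyed by the Appendix, by whether the trade is commercial, and (for Appendix III) by whether the specimen comes from the listing country. The following Python sketch is an illustration only, not an official CITES tool: the function name and structure are invented for this example, and it deliberately ignores Article VII exemptions, split-listings and stricter domestic measures.

# Minimal sketch of the standard CITES document requirements summarized above.
# Assumptions: simplified rules only; real determinations involve Management and
# Scientific Authorities, exemptions and national legislation.
def standard_cites_documents(appendix, commercial, from_listing_country=True):
    """Return the documents standardly required before the trade occurs."""
    if appendix == "I":
        if commercial:
            return ["commercial trade in wild-sourced specimens is not permitted"]
        return ["import permit", "export permit"]
    if appendix == "II":
        return ["export permit or re-export certificate"]
    if appendix == "III":
        # Export permit if sourced from the listing country, otherwise a certificate of origin.
        return ["export permit"] if from_listing_country else ["certificate of origin"]
    return []  # unlisted taxa fall outside CITES controls

# Example: an Appendix II taxon, such as the great white shark, traded commercially.
print(standard_cites_documents("II", commercial=True))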
Exemptions and special procedures.
Under Article VII, the Convention allows for certain exceptions to the general trade requirements described above.
Pre-Convention specimens.
CITES provides for a special process for specimens that were acquired before the provisions of the Convention applied to that specimen. These are known as "pre-Convention" specimens and must be granted a CITES pre-Convention certificate before the trade occurs. Only specimens legally acquired before the date on which the species concerned was first included in the Appendices qualify for this exemption.
Personal and household effects.
CITES provides that the standard permit/certificate requirements for trade in CITES specimens do not generally apply if a specimen is a personal or household effect. However, there are a number of situations where permits/certificates for personal or household effects are required and some countries choose to take stricter domestic measures by requiring permits/certificates for some or all personal or household effects.
Captive bred or artificially propagated specimens.
CITES allows trade in specimens to follow special procedures if Management Authorities are satisfied that they are sourced from captive bred animals or artificially propagated plants.
In the case of commercial trade of Appendix I taxa, captive bred or artificially propagated specimens may be traded as if they were Appendix II. This reduces the permit requirements from two permits (import/export) to one (export only).
In the case of non-commercial trade, specimens may be traded with a certificate of captive breeding/artificial propagation issued by the Management Authority of the state of export in lieu of standard permits.
Scientific exchange.
Standard CITES permit and certificates are not required for the non-commercial loan, donation or exchange between scientific or forensic institutions that have been registered by a Management Authority of their State. Consignments containing the specimens must carry a label issued or approved by that Management Authority (in some cases Customs Declaration labels may be used). Specimens that may be included under this provision include museum, herbarium, diagnostic and forensic research specimens. Registered institutions are listed on the CITES website.
Amendments and reservations.
Amendments to the Convention must be supported by a two-thirds majority who are "present and voting" and can be made during an extraordinary meeting of the COP if one-third of the Parties are interested in such a meeting. The Gaborone Amendment (1983) allows regional economic blocs to accede to the treaty. Trade with non-Party states is allowed, although permits and certificates are recommended to be issued by exporters and sought by importers.
Species in the Appendices may be proposed for addition, change of Appendix, or de-listing (i.e., deletion) by any Party, whether or not it is a range State, and changes may be made despite objections by range States if there is sufficient (two-thirds majority) support for the listing. Species listings are made at the Conference of the Parties.
Upon acceding to the convention or within 90 days of a species listing being amended, Parties may make reservations. In these cases, the party is treated as being a state that is not a Party to CITES with respect to trade in the species concerned. Notable reservations include those by Iceland, Japan, and Norway on various baleen whale species and those on Falconiformes by Saudi Arabia.
Shortcomings and concerns.
Implementation.
As of 2002, 50% of Parties lacked one or more of the four major CITES requirements: designation of Management and Scientific Authorities; laws prohibiting the trade in violation of CITES; penalties for such trade; and laws providing for the confiscation of specimens.
Although the Convention itself does not provide for arbitration or dispute in the case of noncompliance, 36 years of CITES in practice has resulted in several strategies to deal with infractions by Parties. The Secretariat, when informed of an infraction by a Party, will notify all other parties. The Secretariat will give the Party time to respond to the allegations and may provide technical assistance to prevent further infractions. Other actions, which the Convention itself does not provide for but which derive from subsequent COP resolutions, may be taken against the offending Party.
Bilateral sanctions have been imposed on the basis of national legislation (e.g. the USA used certification under the Pelly Amendment to get Japan to revoke its reservation to hawksbill turtle products in 1991, thus reducing the volume of its exports).
Infractions may include negligence with respect to permit issuing, excessive trade, lax enforcement, and failing to produce annual reports (the most common).
Approach to biodiversity conservation.
General limitations about the structure and philosophy of CITES include: by design and intent it focuses on trade at the species level and does not address habitat loss, ecosystem approaches to conservation, or poverty; it seeks to prevent unsustainable use rather than promote sustainable use (which generally conflicts with the Convention on Biological Diversity), although this has been changing (see Nile crocodile, African elephant, South African white rhino case studies in Hutton and Dickinson 2000). It does not explicitly address market demand. In fact, CITES listings have been demonstrated to increase financial speculation in certain markets for high value species. Funding does not provide for increased on-the-ground enforcement (it must apply for bilateral aid for most projects of this nature).
There has been increasing willingness within the Parties to allow for trade in products from well-managed populations. For instance, sales of the South African white rhino have generated revenues that helped pay for protection. Listing the species on Appendix I increased the price of rhino horn (which fueled more poaching), but the species survived wherever there was adequate on-the-ground protection. Thus field protection may be the primary mechanism that saved the population, but it is likely that field protection would not have been increased without CITES protection.
In another instance, the United States initially stopped exports of bobcat and lynx hides in 1977, when it first implemented CITES, for lack of data to support no-detriment findings. However, in a Federal Register notice issued by William Yancey Brown, the U.S. Endangered Species Scientific Authority (ESSA) established a framework of no-detriment findings for each state and the Navajo Nation and indicated that approval would be forthcoming if the states and the Navajo Nation provided evidence that their furbearer management programs assured the species would be conserved. Management programs for these species expanded rapidly, including tagging for export, and are currently recognized in program approvals under regulations of the U.S. Fish and Wildlife Service.
Drafting.
By design, CITES regulates and monitors trade in the manner of a "negative list" such that trade in all species is permitted and unregulated "unless" the species in question appears on the Appendices or looks very much like one of those taxa. Then and only then, trade is regulated or constrained. Because the remit of the Convention covers millions of species of plants and animals, and tens of thousands of these taxa are potentially of economic value, in practice this negative list approach effectively forces CITES signatories to expend limited resources on just a select few, leaving many species to be traded with neither constraint nor review. For example, several bird species classified as threatened with extinction have recently appeared in the legal wild bird trade because the CITES process never considered their status. If a "positive list" approach were taken, only species evaluated and approved for the positive list would be permitted in trade, thus lightening the review burden for member states and the Secretariat, and also preventing inadvertent legal trade threats to poorly known species.
Specific weaknesses in the text include: it does not stipulate guidelines for the 'non-detriment' finding required of national Scientific Authorities; non-detriment findings require copious amounts of information; the 'household effects' clause is often not rigid or specific enough to prevent CITES violations by means of this Article (VII); non-reporting from Parties means Secretariat monitoring is incomplete; and it has no capacity to address domestic trade in listed species.
In order to ensure that the General Agreement on Tariffs and Trade (GATT) was not violated, the Secretariat of GATT was consulted during the drafting process.
Animal sourced pathogens.
During the coronavirus pandemic in 2020, CITES Secretary-General Ivonne Higuero noted that illegal wildlife trade not only helps to destroy habitats, but that intact habitats also create a safety barrier for humans that can prevent animal pathogens from passing to people.
Reform suggestions.
Suggestions for improvement in the operation of CITES include: more regular missions by the Secretariat (not reserved just for high-profile species); improvement of national legislation and enforcement; better reporting by Parties, and the consolidation of information from all sources, including NGOs, TRAFFIC (the wildlife trade monitoring network), and Parties; more emphasis on enforcement, including a technical committee enforcement officer; and the development of CITES Action Plans (akin to the Biodiversity Action Plans related to the Convention on Biological Diversity), including the designation of Scientific and Management Authorities and national enforcement strategies, incentives for reporting, and timelines for both Action Plans and reporting. CITES would benefit from access to Global Environment Facility (GEF) funds, although this is difficult given the GEF's more ecosystem-oriented approach, or from other more regular funds. Development of a future mechanism similar to that of the Montreal Protocol (developed nations contribute to a fund for developing nations) could allow more funds for non-Secretariat activities.
TRAFFIC Data.
From 2005 to 2009 the legal trade corresponded with these numbers:
In the 1990s the annual trade in legal animal products was worth $160 billion. In 2009 the estimated value almost doubled to $300 billion.
TRAFFIC released a report in December 2024 outlining the illegal trade in animal products occurring in Vietnam.
Additional information about the documented trade can be extracted through queries on the CITES website.
Meetings.
The Conference of the Parties (CoP) is held once every three years. The location of the next CoP is chosen at the close of each CoP by a secret ballot vote.
The CITES Committees (Animals Committee, Plants Committee and Standing Committee) hold meetings during each year that does not have a CoP, while the Standing Committee also meets in years with a CoP. The Committee meetings take place in Geneva, Switzerland (where the Secretariat of the CITES Convention is located), unless another country offers to host the meeting. The Secretariat is administered by UNEP. The Animals and Plants Committees have sometimes held joint meetings. The previous joint meeting was held in March 2012 in Dublin, Ireland, and the latest one was held in Veracruz, Mexico, in May 2014.
A current list of upcoming meetings appears on the CITES calendar.
At the seventeenth Conference of the Parties (CoP 17), Namibia and Zimbabwe introduced proposals to amend their listing of elephant populations in Appendix II. Instead, they wished to establish controlled trade in all elephant specimens, including ivory. They argued that revenue from regulated trade could be used for elephant conservation and rural communities' development. However, both proposals were opposed by the US and other countries.
|
6203
|
28481209
|
https://en.wikipedia.org/wiki?curid=6203
|
Environmental Modification Convention
|
The Environmental Modification Convention (ENMOD), formally the Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques, is an international treaty prohibiting the military or other hostile use of environmental modification techniques having widespread, long-lasting or severe effects. It opened for signature on 18 May 1977 in Geneva and entered into force on 5 October 1978.
The Convention bans weather warfare, which is the use of weather modification techniques for the purposes of inducing damage or destruction. A 2010 decision under the Convention on Biological Diversity would also ban some forms of weather modification or geoengineering.
Many states do not regard this as a complete ban on the use of herbicides in warfare, such as Agent Orange, but it does require case-by-case consideration.
Parties.
The convention was signed by 48 states; 16 of the signatories have not ratified. As of 2022 the convention has 78 state parties.
History.
The problem of artificial modification of the environment for military or other hostile purposes was brought to the international agenda in the early 1970s. Following the US decision of July 1972 to renounce the use of climate modification techniques for hostile purposes, the 1973 resolution by the US Senate calling for an international agreement "prohibiting the use of any environmental or geophysical modification activity as a weapon of war", and an in-depth review by the Department of Defense of the military aspects of weather and other environmental modification techniques, the US decided to seek agreement with the Soviet Union to explore the possibilities of an international agreement.
In July 1974, the US and the USSR agreed to hold bilateral discussions on measures to overcome the danger of the use of environmental modification techniques for military purposes, and three subsequent rounds of discussions followed in 1974 and 1975. In August 1975, the US and the USSR tabled identical draft texts of a convention at the Conference of the Committee on Disarmament (CCD), where intensive negotiations resulted in a modified text and understandings regarding four articles of this Convention in 1976.
The convention was approved by Resolution 31/72 of the General Assembly of the United Nations on 10 December 1976, by 96 to 8 votes with 30 abstentions.
Environmental Modification Technique.
Environmental Modification Technique includes any technique for changing – through the deliberate manipulation of natural processes – the dynamics, composition or structure of the earth, including its biota, lithosphere, hydrosphere and atmosphere, or of outer space.
Structure of ENMOD.
The Convention contains ten articles and one Annex on the Consultative Committee of Experts. The Understandings relating to Articles I, II, III and VIII are also an integral part of the convention. These Understandings are not incorporated into the convention itself but are part of the negotiating record and were included in the report transmitted by the Conference of the Committee on Disarmament to the United Nations General Assembly in September 1976 (Report of the Conference of the Committee on Disarmament, Volume I, General Assembly Official Records: Thirty-first Session, Supplement No. 27 (A/31/27), New York, United Nations, 1976, pp. 91–92).
Anthropogenic climate change.
ENMOD treaty members are responsible for 83% of carbon dioxide emissions since the treaty entered into force in 1978. The ENMOD treaty could potentially be used by ENMOD member states seeking climate-change loss and damage compensation from other ENMOD member states at the International Court of Justice. With the knowledge that carbon dioxide emissions can enhance extreme weather events, the continued unmitigated greenhouse gas emissions from some ENMOD member states could be viewed as ‘reckless’ in the context of deliberately declining emissions from other ENMOD member states. It is unclear whether the International Court of Justice will consider the ENMOD treaty when it issues a legal opinion on international climate change obligations requested by the United Nations General Assembly on 29 March 2023.
|
6205
|
49789177
|
https://en.wikipedia.org/wiki?curid=6205
|
Chaitin's constant
|
In the computer science subfield of algorithmic information theory, a Chaitin constant (Chaitin omega number) or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt. These numbers are formed from a construction due to Gregory Chaitin.
Although there are infinitely many halting probabilities, one for each (universal, see below) method of encoding programs, it is common to use the letter Ω to refer to them as if there were only one. Because Ω depends on the program encoding used, it is sometimes called Chaitin's construction when not referring to any specific encoding.
Each halting probability is a normal and transcendental real number that is not computable, which means that there is no algorithm to compute its digits. Each halting probability is Martin-Löf random, meaning there is not even any algorithm which can reliably guess its digits.
Background.
The definition of a halting probability relies on the existence of a prefix-free universal computable function. Such a function, intuitively, represents a program in a programming language with the property that no valid program can be obtained as a proper extension of another valid program.
Suppose that F is a partial function that takes one argument, a finite binary string, and possibly returns a single binary string as output. The function F is called computable if there is a Turing machine that computes it, in the sense that for any finite binary strings x and y, F(x) = y if and only if the Turing machine halts with y on its tape when given the input x.
The function F is called universal if for every computable function f of a single variable there is a string w such that for all x, F(w x) = f(x); here w x represents the concatenation of the two strings w and x. This means that F can be used to simulate any computable function of one variable. Informally, w represents a "script" for the computable function f, and F represents an "interpreter" that parses the script as a prefix of its input and then executes it on the remainder of input.
The domain of F is the set of all inputs on which it is defined. For F that are universal, such a string can generally be seen both as the concatenation of a program part and a data part, and as a single program for the function F.
The function F is called prefix-free if there are no two elements p, p′ in its domain such that p′ is a proper extension of p. This can be rephrased as: the domain of F is a prefix-free code (instantaneous code) on the set of finite binary strings. A simple way to enforce prefix-free-ness is to use machines whose means of input is a binary stream from which bits can be read one at a time. There is no end-of-stream marker; the end of input is determined by when the universal machine decides to stop reading more bits, and the remaining bits are not considered part of the accepted string. Here, the difference between the two notions of program mentioned in the last paragraph becomes clear: one is easily recognized by some grammar, while the other requires arbitrary computation to recognize.
The domain of any universal computable function is a computably enumerable set but never a computable set. The domain is always Turing equivalent to the halting problem.
Definition.
Let P_F be the domain of a prefix-free universal computable function F. The constant Ω_F is then defined as
formula_1
where |p| denotes the length of a string p. This is an infinite sum which has one summand for every p in the domain of F. The requirement that the domain be prefix-free, together with Kraft's inequality, ensures that this sum converges to a real number between 0 and 1. If F is clear from context then Ω_F may be denoted simply Ω, although different prefix-free universal computable functions lead to different values of Ω.
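The sum can be approached from below by dovetailing: run every program for more and more steps and add 2^-|p| whenever a program p is seen to halt. The following Python sketch illustrates this lower-approximation idea; the dictionary TOY_MACHINE is a made-up, finite stand-in for a prefix-free machine's halting behaviour (a real Ω requires a genuine universal prefix-free machine), so the numbers it produces are only illustrative.
```python
from fractions import Fraction

# Hypothetical stand-in for a prefix-free machine: program -> halting step,
# or None if the program never halts.  This finite table is an assumption
# made purely for illustration; it is not a universal machine.
TOY_MACHINE = {
    "0": 3,       # halts after 3 steps
    "10": None,   # never halts
    "110": 7,     # halts after 7 steps
    "111": 2,     # halts after 2 steps
}

def omega_lower_bound(step_budget):
    """Run every program for `step_budget` steps and sum 2**-len(p) over the
    programs observed to halt so far: a lower bound that grows toward Omega."""
    total = Fraction(0)
    for program, halts_at in TOY_MACHINE.items():
        if halts_at is not None and halts_at <= step_budget:
            total += Fraction(1, 2 ** len(program))
    return total

for budget in (1, 3, 7):
    print(budget, omega_lower_bound(budget))   # 0, then 5/8, then 3/4
```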
Relationship to the halting problem.
Knowing the first N bits of Ω, one could calculate the halting problem for all programs of a size up to N. Let the program for which the halting problem is to be solved be N bits long. In dovetailing fashion, all programs of all lengths are run, until enough have halted to jointly contribute enough probability to match these first N bits. If the program has not halted yet, then it never will, since its contribution to the halting probability would affect the first N bits. Thus, the halting problem would be solved for the program.
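A minimal sketch of this argument, reusing the same kind of made-up toy machine as in the previous sketch (the table, and hence its Ω, is an assumption for illustration only): dovetail all programs until the accumulated lower bound reaches the N-bit truncation of Ω; any program of length at most N that has not halted by then never will.
```python
from fractions import Fraction

# Hypothetical toy machine: program -> halting step, or None (never halts).
TOY_MACHINE = {"0": 3, "10": None, "110": 7, "111": 2}

def toy_omega():
    # Only computable here because the toy table is finite and fully known.
    return sum(Fraction(1, 2 ** len(p))
               for p, h in TOY_MACHINE.items() if h is not None)

def first_bits(x, n):
    """The first n binary digits of x in [0, 1), as a Fraction (rounded down)."""
    return Fraction(int(x * 2 ** n), 2 ** n)

def halts(program, n_bits):
    """Decide halting for a program of length <= n_bits from n_bits of Omega."""
    omega_n = first_bits(toy_omega(), n_bits)
    lower, budget, halted = Fraction(0), 0, set()
    while lower < omega_n:                      # dovetail until enough mass has halted
        budget += 1
        for p, h in TOY_MACHINE.items():
            if p not in halted and h is not None and h <= budget:
                halted.add(p)
                lower += Fraction(1, 2 ** len(p))
    # A later halt of a program of length <= n_bits would add at least
    # 2**-n_bits, pushing Omega to at least omega_n + 2**-n_bits: impossible.
    return program in halted

print(halts("10", 3), halts("110", 3))          # False True
```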
Because many outstanding problems in number theory, such as Goldbach's conjecture, are equivalent to solving the halting problem for special programs (which would basically search for counter-examples and halt if one is found), knowing enough bits of Chaitin's constant would also imply knowing the answer to these problems. But as the halting problem is not generally solvable, calculating any but the first few bits of Chaitin's constant is not possible for a universal language. This reduces hard problems to impossible ones, much like trying to build an oracle machine for the halting problem would be.
Interpretation as a probability.
The Cantor space is the collection of all infinite sequences of 0s and 1s. A halting probability can be interpreted as the measure of a certain subset of Cantor space under the usual probability measure on Cantor space. It is from this interpretation that halting probabilities take their name.
The probability measure on Cantor space, sometimes called the fair-coin measure, is defined so that for any binary string x the set of sequences that begin with x has measure 2^(-|x|). This implies that for each natural number n, the set of sequences f in Cantor space such that f(n) = 1 has measure 1/2, and the set of sequences whose nth element is 0 also has measure 1/2.
Let F be a prefix-free universal computable function. The domain of F consists of an infinite set of binary strings
formula_2
Each of these strings determines a subset of Cantor space: the subset contains all sequences in Cantor space that begin with that string. These sets are disjoint because the domain is a prefix-free set. The sum
formula_3
represents the measure of the set
formula_4
In this way, Ω represents the probability that a randomly selected infinite sequence of 0s and 1s begins with a bit string (of some finite length) that is in the domain of F. It is for this reason that Ω is called a halting probability.
Properties.
Each Chaitin constant has the following properties:
Not every set that is Turing equivalent to the halting problem is a halting probability. A finer equivalence relation, Solovay equivalence, can be used to characterize the halting probabilities among the left-c.e. reals. One can show that a real number in [0, 1] is a Chaitin constant (i.e. the halting probability of some prefix-free universal computable function) if and only if it is left-c.e. and algorithmically random. Ω is among the few definable algorithmically random numbers and is the best-known algorithmically random number, but it is not at all typical of all algorithmically random numbers.
Uncomputability.
A real number is called computable if there is an algorithm which, given n, returns the first n digits of the number. This is equivalent to the existence of a program that enumerates the digits of the real number.
No halting probability is computable. The proof of this fact relies on an algorithm which, given the first n digits of Ω, solves Turing's halting problem for programs of length up to n. Since the halting problem is undecidable, Ω cannot be computed.
The algorithm proceeds as follows. Given the first digits of and a , the algorithm enumerates the domain of until enough elements of the domain have been found so that the probability they represent is within of . After this point, no additional program of length can be in the domain, because each of these would add to the measure, which is impossible. Thus the set of strings of length in the domain is exactly the set of such strings already enumerated.
Algorithmic randomness.
A real number is random if the binary sequence representing the real number is an algorithmically random sequence. Calude, Hertling, Khoussainov, and Wang showed that a recursively enumerable real number is an algorithmically random sequence if and only if it is a Chaitin Ω number.
Incompleteness theorem for halting probabilities.
For each specific consistent effectively represented axiomatic system for the natural numbers, such as Peano arithmetic, there exists a constant N such that no bit of Ω after the Nth can be proven to be 1 or 0 within that system. The constant N depends on how the formal system is effectively represented, and thus does not directly reflect the complexity of the axiomatic system. This incompleteness result is similar to Gödel's incompleteness theorem in that it shows that no consistent formal theory for arithmetic can be complete.
Super Omega.
The first n bits of Gregory Chaitin's constant Ω are random or incompressible in the sense that they cannot be computed by a halting algorithm with fewer than n bits. However, consider the short but never halting algorithm which systematically lists and runs all possible programs; whenever one of them halts, its probability gets added to the output (initialized by zero). After finite time the first n bits of the output will never change any more (it does not matter that this time itself is not computable by a halting program). So there is a short non-halting algorithm whose output converges (after finite time) onto the first n bits of Ω. In other words, the enumerable first n bits of Ω are highly compressible in the sense that they are limit-computable by a very short algorithm; they are not random with respect to the set of enumerating algorithms. Jürgen Schmidhuber constructed a limit-computable "Super Ω" which in a sense is much more random than the original limit-computable Ω, as one cannot significantly compress the Super Ω by any enumerating non-halting algorithm.
For an alternative "Super ", the universality probability of a prefix-free universal Turing machine (UTM) namely, the probability that it remains universal even when every input of it (as a binary string) is prefixed by a random binary string can be seen as the non-halting probability of a machine with oracle the third iteration of the halting problem (i.e., using Turing jump notation).
|
6206
|
17350134
|
https://en.wikipedia.org/wiki?curid=6206
|
Computable number
|
In mathematics, computable numbers are the real numbers that can be computed to within any desired precision by a finite, terminating algorithm. They are also known as the recursive numbers, effective numbers, computable reals, or recursive reals. The concept of a computable real number was introduced by Émile Borel in 1912, using the intuitive notion of computability available at the time.
Equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus as the formal representation of algorithms. The computable numbers form a real closed field and can be used in the place of real numbers for many, but not all, mathematical purposes.
Informal definition.
In the following, Marvin Minsky defines the numbers to be computed in a manner similar to those defined by Alan Turing in 1936; i.e., as "sequences of digits interpreted as decimal fractions" between 0 and 1:
The key notions in the definition are (1) that some "n" is specified at the start, (2) for any "n" the computation only takes a finite number of steps, after which the machine produces the desired output and terminates.
An alternate form of (2) – the machine successively prints all "n" of the digits on its tape, halting after printing the "n"th – emphasizes Minsky's observation: (3) That by use of a Turing machine, a "finite" definition – in the form of the machine's state table – is being used to define what is a potentially "infinite" string of decimal digits.
This is however not the modern definition which only requires the result be accurate to within any given accuracy. The informal definition above is subject to a rounding problem called the table-maker's dilemma whereas the modern definition is not.
Formal definition.
A real number "a" is computable if it can be approximated by some computable function formula_1 in the following manner: given any positive integer "n", the function produces an integer "f"("n") such that:
formula_2
A complex number is called computable if its real and imaginary parts are computable.
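As a concrete illustration of such an approximation function (a sketch only; the helper name and the choice of the square root of 2 are not from the article), the following Python code returns, for each positive integer n, an integer k such that k/n is within 1/n of sqrt(2), which is one common way to phrase the requirement:
```python
def sqrt2_approx(n):
    """Return an integer k with k/n <= sqrt(2) < (k+1)/n, so that the rational
    k/n approximates sqrt(2) to within 1/n, using exact integer arithmetic."""
    k = 0
    while (k + 1) ** 2 <= 2 * n * n:   # largest k with (k/n)**2 <= 2
        k += 1
    return k

for n in (10, 1000, 100000):
    print(sqrt2_approx(n) / n)          # 1.4, 1.414, 1.41421
```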
Equivalent definitions.
There are two similar definitions that are equivalent:
There is another equivalent definition of computable numbers via computable Dedekind cuts. A computable Dedekind cut is a computable function formula_8 which when provided with a rational number formula_9 as input returns formula_10 or formula_11, satisfying the following conditions:
formula_12
formula_13
formula_14
An example is given by a program "D" that defines the cube root of 3. Assuming formula_15 this is defined by:
formula_16
formula_17
A real number is computable if and only if there is a computable Dedekind cut "D" corresponding to it. The function "D" is unique for each computable number (although of course two different programs may provide the same function).
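A sketch of such a cut in Python follows. Since the article's exact formulas for "D" are not reproduced here, the comparison used (p^3 < 3q^3 for a rational p/q with q > 0) should be read as an assumption, though it is the obvious test for the cube root of 3; it needs only exact integer arithmetic, so the program always answers correctly.
```python
from fractions import Fraction

def cube_root_3_cut(r):
    """Dedekind-cut test for the cube root of 3: True if the rational r lies
    below 3**(1/3), False otherwise (exact, via integer comparison)."""
    r = Fraction(r)
    p, q = r.numerator, r.denominator    # Fraction guarantees q > 0
    return p ** 3 < 3 * q ** 3

print(cube_root_3_cut(Fraction(7, 5)))   # True:  7/5 = 1.4 < 1.442...
print(cube_root_3_cut(Fraction(3, 2)))   # False: 3/2 = 1.5 > 1.442...
```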
Properties.
Not computably enumerable.
Assigning a Gödel number to each Turing machine definition produces a subset formula_18 of the natural numbers corresponding to the computable numbers and identifies a surjection from formula_18 to the computable numbers. There are only countably many Turing machines, showing that the computable numbers are subcountable. The set formula_18 of these Gödel numbers, however, is not computably enumerable (and consequently, neither are subsets of formula_18 that are defined in terms of it). This is because there is no algorithm to determine which Gödel numbers correspond to Turing machines that produce computable reals. In order to produce a computable real, a Turing machine must compute a total function, but the corresponding decision problem is in Turing degree 0′′. Consequently, there is no surjective computable function from the natural numbers to the set formula_18 of machines representing computable reals, and Cantor's diagonal argument cannot be used constructively to demonstrate uncountably many of them.
While the set of real numbers is uncountable, the set of computable numbers is classically countable and thus almost all real numbers are not computable. Here, for any given computable number formula_23 the well ordering principle provides that there is a minimal element in formula_18 which corresponds to formula_25, and therefore there exists a subset consisting of the minimal elements, on which the map is a bijection. The inverse of this bijection is an injection into the natural numbers of the computable numbers, proving that they are countable. But, again, this subset is not computable, even though the computable reals are themselves ordered.
Properties as a field.
The arithmetical operations on computable numbers are themselves computable in the sense that whenever real numbers "a" and "b" are computable then the following real numbers are also computable: "a" + "b", "a" - "b", "ab", and "a"/"b" if "b" is nonzero.
These operations are actually "uniformly computable"; for example, there is a Turing machine which on input ("A","B",formula_26) produces output "r", where "A" is the description of a Turing machine approximating "a", "B" is the description of a Turing machine approximating "b", and "r" is an formula_26 approximation of "a" + "b".
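A minimal sketch of uniform addition: if a computable real is represented by a rule that maps a rational tolerance eps to a rational within eps of the number, then an eps/2-approximation of each summand yields an eps-approximation of the sum. The approximators for sqrt(2) and e below are illustrative stand-ins, not constructions from the article.
```python
from fractions import Fraction
from math import ceil

def approx_sqrt2(eps):
    """Rational within eps of sqrt(2), found by integer bracketing."""
    n = ceil(1 / Fraction(eps))
    k = 0
    while (k + 1) ** 2 <= 2 * n * n:   # largest k with (k/n)**2 <= 2
        k += 1
    return Fraction(k, n)              # k/n <= sqrt(2) < (k+1)/n

def approx_e(eps):
    """Rational within eps of e, by summing 1/k! until the terms are < eps."""
    eps = Fraction(eps)
    total, term, k = Fraction(1), Fraction(1), 1
    while term >= eps:                 # remaining tail is smaller than the last term
        term /= k
        total += term
        k += 1
    return total

def add(x, y):
    """Uniform addition: an eps/2-approximation of each summand gives an
    eps-approximation of the sum."""
    return lambda eps: x(Fraction(eps) / 2) + y(Fraction(eps) / 2)

sqrt2_plus_e = add(approx_sqrt2, approx_e)
print(float(sqrt2_plus_e(Fraction(1, 1000))))   # within 1/1000 of sqrt(2) + e = 4.1325...
```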
The fact that computable real numbers form a field was first proved by Henry Gordon Rice in 1954.
Computable reals however do not form a computable field, because the definition of a computable field requires effective equality.
Non-computability of the ordering.
The order relation on the computable numbers is not computable. Let "A" be the description of a Turing machine approximating the number formula_6. Then there is no Turing machine which on input "A" outputs "YES" if formula_29 and "NO" if formula_30. To see why, suppose the machine described by "A" keeps outputting 0 as formula_26 approximations. It is not clear how long to wait before deciding that the machine will "never" output an approximation which forces "a" to be positive. Thus the machine will eventually have to guess that the number will equal 0, in order to produce an output; the sequence may later become different from 0. This idea can be used to show that the machine is incorrect on some sequences if it computes a total function. A similar problem occurs when the computable reals are represented as Dedekind cuts. The same holds for the equality relation: the equality test is not computable.
While the full order relation is not computable, the restriction of it to pairs of unequal numbers is computable. That is, there is a program that takes as input two Turing machines "A" and "B" approximating numbers formula_32 and formula_33, where formula_34, and outputs whether formula_35 or formula_36. It is sufficient to use formula_26-approximations where formula_38; so by taking increasingly small formula_26 (approaching 0), one eventually can decide whether formula_35 or formula_36.
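The following sketch implements that idea for the eps-approximation representation used in the previous sketch: shrink eps until the two approximation intervals no longer overlap, then read off the order. The helper approximators are again illustrative assumptions.
```python
from fractions import Fraction
from math import ceil

def approx_sqrt2(eps):
    """Rational within eps of sqrt(2) (same bracketing idea as above)."""
    n = ceil(1 / Fraction(eps))
    k = 0
    while (k + 1) ** 2 <= 2 * n * n:
        k += 1
    return Fraction(k, n)

def approx_four_thirds(eps):
    """A rational number is trivially its own approximation at every eps."""
    return Fraction(4, 3)

def order_of_unequal(x, y):
    """Given eps-approximators for two reals known to be unequal, decide their
    order by shrinking eps until the approximation intervals separate.
    (If the numbers were equal, this loop would never terminate, which is why
    only the restriction to unequal pairs is computable.)"""
    eps = Fraction(1, 2)
    while True:
        a, b = x(eps), y(eps)
        if abs(a - b) > 2 * eps:       # intervals a +/- eps and b +/- eps are disjoint
            return "first < second" if a < b else "second < first"
        eps /= 2

print(order_of_unequal(approx_sqrt2, approx_four_thirds))   # "second < first"
```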
Other properties.
The computable real numbers do not share all the properties of the real numbers used in analysis. For example, the least upper bound of a bounded increasing computable sequence of computable real numbers need not be a computable real number. A sequence with this property is known as a Specker sequence, as the first construction is due to Ernst Specker in 1949. Despite the existence of counterexamples such as these, parts of calculus and real analysis can be developed in the field of computable numbers, leading to the study of computable analysis.
The set of computable real numbers (as well as every countable, densely ordered subset of computable reals without ends) is order-isomorphic to the set of rational numbers.
Non-computable numbers.
Every computable number is arithmetically definable, but not vice versa. There are many arithmetically definable, non-computable real numbers, including: a number that encodes the solution of the halting problem (as a binary expansion, under some chosen encoding of programs), and Chaitin's constant Ω.
Both of these examples in fact define an infinite set of definable, uncomputable numbers, one for each universal Turing machine.
A real number is computable if and only if the set of natural numbers it represents (when written in binary and viewed as a characteristic function) is computable.
Digit strings and the Cantor and Baire spaces.
Turing's original paper defined computable numbers as follows:
Turing was aware that this definition is equivalent to the formula_26-approximation definition given above. The argument proceeds as follows: if a number is computable in the Turing sense, then it is also computable in the formula_26 sense: if formula_45, then the first "n" digits of the decimal expansion for "a" provide an formula_26 approximation of "a". For the converse, we pick an formula_26 computable real number "a" and generate increasingly precise approximations until the "n"th digit after the decimal point is certain. This always generates a decimal expansion equal to "a" but it may improperly end in an infinite sequence of 9's in which case it must have a finite (and thus computable) proper decimal expansion.
Unless certain topological properties of the real numbers are relevant, it is often more convenient to deal with elements of formula_48 (total 0,1 valued functions) instead of real numbers in formula_49. The members of formula_48 can be identified with binary decimal expansions, but since the decimal expansions formula_51 and formula_52 denote the same real number, the interval formula_49 can only be bijectively (and homeomorphically under the subset topology) identified with the subset of formula_48 not ending in all 1's.
Note that this property of decimal expansions means that it is impossible to effectively identify the computable real numbers defined in terms of a decimal expansion and those defined in the formula_26 approximation sense. Hirst has shown that there is no algorithm which takes as input the description of a Turing machine which produces formula_26 approximations for the computable number "a", and produces as output a Turing machine which enumerates the digits of "a" in the sense of Turing's definition. Similarly, it means that the arithmetic operations on the computable reals are not effective on their decimal representations as when adding decimal numbers. In order to produce one digit, it may be necessary to look arbitrarily far to the right to determine if there is a carry to the current location. This lack of uniformity is one reason why the contemporary definition of computable numbers uses formula_26 approximations rather than decimal expansions.
However, from a computability theoretic or measure theoretic perspective, the two structures formula_48 and formula_49 are essentially identical. Thus, computability theorists often refer to members of formula_48 as reals. While formula_48 is totally disconnected, for questions about formula_62 classes or randomness it is easier to work in formula_48.
Elements of formula_64 are sometimes called reals as well and though containing a homeomorphic image of formula_65, formula_64 isn't even locally compact (in addition to being totally disconnected). This leads to genuine differences in the computational properties. For instance the formula_67 satisfying formula_68, with formula_69 quantifier free, must be computable while the unique formula_70 satisfying a universal formula may have an arbitrarily high position in the hyperarithmetic hierarchy.
Use in place of the reals.
The computable numbers include the specific real numbers which appear in practice, including all real algebraic numbers, as well as "e", "π", and many other transcendental numbers. Though the computable reals exhaust those reals we can calculate or approximate, the assumption that all reals are computable leads to substantially different conclusions about the real numbers. The question naturally arises of whether it is possible to dispose of the full set of reals and use computable numbers for all of mathematics. This idea is appealing from a constructivist point of view, and has been pursued by the Russian school of constructive mathematics.
To actually develop analysis over computable numbers, some care must be taken. For example, if one uses the classical definition of a sequence, the set of computable numbers is not closed under the basic operation of taking the supremum of a bounded sequence (for example, consider a Specker sequence, see the section above). This difficulty is addressed by considering only sequences which have a computable modulus of convergence. The resulting mathematical theory is called computable analysis.
Implementations of exact arithmetic.
Computer packages representing real numbers as programs computing approximations have been proposed as early as 1985, under the name "exact arithmetic". Modern examples include the CoRN library (Coq), and the RealLib package (C++). A related line of work is based on taking a real RAM program and running it with rational or floating-point numbers of sufficient precision, such as the package.
|
6207
|
910180
|
https://en.wikipedia.org/wiki?curid=6207
|
Electric current
|
An electric current is a flow of charged particles, such as electrons or ions, moving through an electrical conductor or space. It is defined as the net rate of flow of electric charge through a surface. The moving particles are called charge carriers, which may be one of several types of particles, depending on the conductor. In electric circuits the charge carriers are often electrons moving through a wire. In semiconductors they can be electrons or holes. In an electrolyte the charge carriers are ions, while in plasma, an ionized gas, they are ions and electrons.
In the International System of Units (SI), electric current is expressed in units of ampere (sometimes called an "amp", symbol A), which is equivalent to one coulomb per second. The ampere is an SI base unit and electric current is a base quantity in the International System of Quantities (ISQ). Electric current is also known as amperage and is measured using a device called an "ammeter".
Electric currents create magnetic fields, which are used in motors, generators, inductors, and transformers. In ordinary conductors, they cause Joule heating, which creates light in incandescent light bulbs. Time-varying currents emit electromagnetic waves, which are used in telecommunications to broadcast information.
Symbol.
The conventional symbol for current is "I", which originates from the French phrase "intensité du courant" (current intensity). Current intensity is often referred to simply as "current". The "I" symbol was used by André-Marie Ampère, after whom the unit of electric current is named, in formulating Ampère's force law (1820). The notation travelled from France to Great Britain, where it became standard, although at least one journal did not change from using "C" to "I" until 1896.
Conventions.
The conventional direction of current, also known as "conventional current", is arbitrarily defined as the direction in which positive charges flow. In a conductive material, the moving charged particles that constitute the electric current are called charge carriers. In metals, which make up the wires and other conductors in most electrical circuits, the positively charged atomic nuclei of the atoms are held in a fixed position, and the negatively charged electrons are the charge carriers, free to move about in the metal. In other materials, notably the semiconductors, the charge carriers can be positive "or" negative, depending on the dopant used. Positive and negative charge carriers may even be present at the same time, as happens in an electrolyte in an electrochemical cell.
A flow of positive charges gives the same electric current, and has the same effect in a circuit, as an equal flow of negative charges in the opposite direction. Since current can be the flow of either positive or negative charges, or both, a convention is needed for the direction of current that is independent of the type of charge carriers. Negatively charged carriers, such as the electrons (the charge carriers in metal wires and many other electronic circuit components), therefore flow in the opposite direction of conventional current flow in an electrical circuit.
Reference direction.
A current in a wire or circuit element can flow in either of two directions. When defining a variable formula_1 to represent the current, the direction representing positive current must be specified, usually by an arrow on the circuit schematic diagram. This is called the "reference direction" of the current formula_1. When analyzing electrical circuits, the actual direction of current through a specific circuit element is usually unknown until the analysis is completed. Consequently, the reference directions of currents are often assigned arbitrarily. When the circuit is solved, a negative value for the current implies the actual direction of current through that circuit element is opposite that of the chosen reference direction.
Ohm's law.
Ohm's law states that the current through a conductor between two points is directly proportional to the potential difference across the two points. Introducing the constant of proportionality, the resistance, one arrives at the usual mathematical equation that describes this relationship:
formula_3
where "I" is the current through the conductor in units of amperes, "V" is the potential difference measured "across" the conductor in units of volts, and "R" is the resistance of the conductor in units of ohms. More specifically, Ohm's law states that the "R" in this relation is constant, independent of the current.
Alternating and direct current.
In alternating current (AC) systems, the movement of electric charge periodically reverses direction. AC is the form of electric power most commonly delivered to businesses and residences. The usual waveform of an AC power circuit is a sine wave, though certain applications use alternative waveforms, such as triangular or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. An important goal in these applications is recovery of information encoded (or "modulated") onto the AC signal.
In contrast, direct current (DC) refers to a system in which electric charge moves in only one direction (sometimes called unidirectional flow). Direct current is produced by sources such as batteries, thermocouples, solar cells, and commutator-type electric machines of the dynamo type. Alternating current can also be converted to direct current through use of a rectifier. Direct current may flow in a conductor such as a wire, but can also flow through semiconductors, insulators, or even through a vacuum as in electron or ion beams. An old name for direct current was "galvanic current".
Occurrences.
Natural observable examples of electric current include lightning, static electric discharge, and the solar wind, the source of the polar auroras.
Man-made occurrences of electric current include the flow of conduction electrons in metal wires such as the overhead power lines that deliver electrical energy across long distances and the smaller wires within electrical and electronic equipment. Eddy currents are electric currents that occur in conductors exposed to changing magnetic fields. Similarly, electric currents occur, particularly near the surface, in conductors exposed to electromagnetic waves. When oscillating electric currents flow at the correct voltages within radio antennas, radio waves are generated.
In electronics, other forms of electric current include the flow of electrons through resistors or through the vacuum in a vacuum tube, the flow of ions inside a battery, and the flow of holes within metals and semiconductors.
A biological example of current is the flow of ions in neurons and nerves, responsible for both thought and sensory perception.
Measurement.
Current can be measured using an ammeter.
Electric current can be directly measured with a galvanometer, but this method involves breaking the electrical circuit, which is sometimes inconvenient.
Current can also be measured without breaking the circuit by detecting the magnetic field associated with the current.
Devices, at the circuit level, use various techniques to measure current:
Resistive heating.
Joule heating, also known as "ohmic heating" and "resistive heating", is the process of power dissipation by which the passage of an electric current through a conductor increases the internal energy of the conductor, converting thermodynamic work into heat. The phenomenon was first studied by James Prescott Joule in 1841. Joule immersed a length of wire in a fixed mass of water and measured the temperature rise due to a known current through the wire for a 30 minute period. By varying the current and the length of the wire he deduced that the heat produced was proportional to the square of the current multiplied by the electrical resistance of the wire.
formula_4
This relationship is known as Joule's Law. The SI unit of energy was subsequently named the joule and given the symbol "J". The commonly known SI unit of power, the watt (symbol: W), is equivalent to one joule per second.
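As a worked example (a sketch with illustrative numbers, not figures from Joule's experiment), passing 2 A through a 10 Ω element for 60 s dissipates 2² × 10 × 60 = 2400 J:
```python
def joule_heat(current_amperes, resistance_ohms, time_seconds):
    """Energy dissipated as heat by Joule's law: Q = I**2 * R * t, in joules."""
    return current_amperes ** 2 * resistance_ohms * time_seconds

print(joule_heat(2.0, 10.0, 60.0))   # 2400.0 J
```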
Electromagnetism.
Electromagnet.
In an electromagnet a coil of wires behaves like a magnet when an electric current flows through it. When the current is switched off, the coil loses its magnetism immediately.
Electric current produces a magnetic field. The magnetic field can be visualized as a pattern of circular field lines surrounding the wire that persists as long as there is current.
Electromagnetic induction.
Magnetic fields can also be used to make electric currents. When a changing magnetic field is applied to a conductor, an electromotive force (EMF) is induced, which starts an electric current, when there is a suitable path.
Radio waves.
When an electric current flows in a suitably shaped conductor at radio frequencies, radio waves can be generated. These travel at the speed of light and can cause electric currents in distant conductors.
Conduction mechanisms in various media.
In metallic solids, electric charge flows by means of electrons, from lower to higher electrical potential. In other media, any stream of charged objects (ions, for example) may constitute an electric current. To provide a definition of current independent of the type of charge carriers, "conventional current" is defined as moving in the same direction as the positive charge flow. So, in metals where the charge carriers (electrons) are negative, conventional current is in the opposite direction to the overall electron movement. In conductors where the charge carriers are positive, conventional current is in the same direction as the charge carriers.
In a vacuum, a beam of ions or electrons may be formed. In other conductive materials, the electric current is due to the flow of both positively and negatively charged particles at the same time. In still others, the current is entirely due to positive charge flow. For example, the electric currents in electrolytes are flows of positively and negatively charged ions. In a common lead-acid electrochemical cell, electric currents are composed of positive hydronium ions flowing in one direction, and negative sulfate ions flowing in the other. Electric currents in sparks or plasma are flows of electrons as well as positive and negative ions. In ice and in certain solid electrolytes, the electric current is entirely composed of flowing ions.
Metals.
In a metal, some of the outer electrons in each atom are not bound to the individual molecules as they are in molecular solids, or in full bands as they are in insulating materials, but are free to move within the metal lattice. These conduction electrons serve as charge carriers that can flow through the conductor as an electric current when an electric field is present. Metals are particularly conductive because there are many of these free electrons. With no external electric field applied, these electrons move about randomly due to thermal energy but, on average, there is zero net current within the metal. At room temperature, the average speed of these random motions is 10⁶ metres per second. Given a surface through which a metal wire passes, electrons move in both directions across the surface at an equal rate. As George Gamow wrote in his popular science book, "One, Two, Three...Infinity" (1947), "The metallic substances differ from all other materials by the fact that the outer shells of their atoms are bound rather loosely, and often let one of their electrons go free. Thus the interior of a metal is filled up with a large number of unattached electrons that travel aimlessly around like a crowd of displaced persons. When a metal wire is subjected to electric force applied on its opposite ends, these free electrons rush in the direction of the force, thus forming what we call an electric current."
When a metal wire is connected across the two terminals of a DC voltage source such as a battery, the source places an electric field across the conductor. The moment contact is made, the free electrons of the conductor are forced to drift toward the positive terminal under the influence of this field. The free electrons are therefore the charge carrier in a typical solid conductor.
For a steady flow of charge through a surface, the current "I" (in amperes) can be calculated with the following equation:
formula_5
where "Q" is the electric charge transferred through the surface over a time "t". If "Q" and "t" are measured in coulombs and seconds respectively, "I" is in amperes.
More generally, electric current can be represented as the rate at which charge flows through a given surface as:
formula_6
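For instance, 3 C of charge crossing a surface in 0.5 s corresponds to an average current of 6 A, and the general rate-of-flow form can be approximated numerically. The short sketch below is illustrative only:
```python
def average_current(charge_coulombs, time_seconds):
    """Steady-flow form: I = Q / t, in amperes."""
    return charge_coulombs / time_seconds

def instantaneous_current(q_of_t, t, dt=1e-6):
    """General form I = dQ/dt, approximated here by a finite difference."""
    return (q_of_t(t + dt) - q_of_t(t)) / dt

print(average_current(3.0, 0.5))                        # 6.0 A
print(instantaneous_current(lambda t: 2.0 * t, 1.0))    # ~2.0 A for Q(t) = 2t
```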
Electrolytes.
Electric currents in electrolytes are flows of electrically charged particles (ions). For example, if an electric field is placed across a solution of Na+ and Cl− (and conditions are right) the sodium ions move towards the negative electrode (cathode), while the chloride ions move towards the positive electrode (anode). Reactions take place at both electrode surfaces, neutralizing each ion.
Water-ice and certain solid electrolytes called "proton conductors" contain positive hydrogen ions ("protons") that are mobile. In these materials, electric currents are composed of moving protons, as opposed to the moving electrons in metals.
In certain electrolyte mixtures, brightly coloured ions are the moving electric charges. The slow progress of the colour makes the current visible.
Gases and plasmas.
In air and other ordinary gases below the breakdown field, the dominant source of electrical conduction is via relatively few mobile ions produced by radioactive gases, ultraviolet light, or cosmic rays. Since the electrical conductivity is low, gases are dielectrics or insulators. However, once the applied electric field approaches the breakdown value, free electrons become sufficiently accelerated by the electric field to create additional free electrons by colliding, and ionizing, neutral gas atoms or molecules in a process called avalanche breakdown. The breakdown process forms a plasma that contains enough mobile electrons and positive ions to make it an electrical conductor. In the process, it forms a light emitting conductive path, such as a spark, arc or lightning.
Plasma is the state of matter where some of the electrons in a gas are stripped or "ionized" from their molecules or atoms. A plasma can be formed by high temperature, or by application of a high electric or alternating magnetic field as noted above. Due to their lower mass, the electrons in a plasma accelerate more quickly in response to an electric field than the heavier positive ions, and hence carry the bulk of the current. The free ions recombine to create new chemical compounds (for example, breaking atmospheric oxygen into single oxygen [O2 → 2O], which then recombine creating ozone [O3]).
Vacuum.
Since a "perfect vacuum" contains no charged particles, it normally behaves as a perfect insulator. However, metal electrode surfaces can cause a region of the vacuum to become conductive by injecting free electrons or ions through either field electron emission or thermionic emission. Thermionic emission occurs when the thermal energy exceeds the metal's work function, while field electron emission occurs when the electric field at the surface of the metal is high enough to cause tunneling, which results in the ejection of free electrons from the metal into the vacuum. Externally heated electrodes are often used to generate an electron cloud as in the filament or indirectly heated cathode of vacuum tubes. Cold electrodes can also spontaneously produce electron clouds via thermionic emission when small incandescent regions (called "cathode spots" or "anode spots") are formed. These are incandescent regions of the electrode surface that are created by a localized high current. These regions may be initiated by field electron emission, but are then sustained by localized thermionic emission once a vacuum arc forms. These small electron-emitting regions can form quite rapidly, even explosively, on a metal surface subjected to a high electrical field. Vacuum tubes and sprytrons are some of the electronic switching and amplifying devices based on vacuum conductivity.
Superconductivity.
Superconductivity is a phenomenon of exactly zero electrical resistance and expulsion of magnetic fields occurring in certain materials when cooled below a characteristic critical temperature. It was discovered by Heike Kamerlingh Onnes on April 8, 1911 in Leiden. Like ferromagnetism and atomic spectral lines, superconductivity is a quantum mechanical phenomenon. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor as it transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of "perfect conductivity" in classical physics.
Semiconductor.
In a semiconductor it is sometimes useful to think of the current as due to the flow of positive "holes" (the mobile positive charge carriers that are places where the semiconductor crystal is missing a valence electron). This is the case in a p-type semiconductor. A semiconductor has electrical conductivity intermediate in magnitude between that of a conductor and an insulator. This means a conductivity roughly in the range of 10⁻² to 10⁴ siemens per centimeter (S⋅cm⁻¹).
In the classic crystalline semiconductors, electrons can have energies only within certain bands (i.e. ranges of levels of energy). Energetically, these bands are located between the energy of the ground state, the state in which electrons are tightly bound to the atomic nuclei of the material, and the free electron energy, the latter describing the energy required for an electron to escape entirely from the material. The energy bands each correspond to many discrete quantum states of the electrons, and most of the states with low energy (closer to the nucleus) are occupied, up to a particular band called the "valence band". Semiconductors and insulators are distinguished from metals because the valence band in any given metal is nearly filled with electrons under usual operating conditions, while very few (semiconductor) or virtually none (insulator) of them are available in the "conduction band", the band immediately above the valence band.
The ease of exciting electrons in the semiconductor from the valence band to the conduction band depends on the band gap between the bands. The size of this energy band gap serves as an arbitrary dividing line (roughly 4 eV) between semiconductors and insulators.
With covalent bonds, an electron moves by hopping to a neighboring bond. The Pauli exclusion principle requires that the electron be lifted into the higher anti-bonding state of that bond. For delocalized states, for example in one dimension, that is, in a nanowire, for every energy there is a state with electrons flowing in one direction and another state with the electrons flowing in the other. For a net current to flow, more states for one direction than for the other direction must be occupied. For this to occur, energy is required, as in the semiconductor the next higher states lie above the band gap. Often this is stated as: full bands do not contribute to the electrical conductivity. However, as a semiconductor's temperature rises above absolute zero, there is more energy in the semiconductor to spend on lattice vibration and on exciting electrons into the conduction band. The current-carrying electrons in the conduction band are known as "free electrons", though they are often simply called "electrons" if that is clear in context.
Current density and Ohm's law.
Current density is the rate at which charge passes through a chosen unit area. It is defined as a vector whose magnitude is the current per unit cross-sectional area. As discussed in Reference direction, the direction is arbitrary. Conventionally, if the moving charges are positive, then the current density has the same sign as the velocity of the charges. For negative charges, the sign of the current density is opposite to the velocity of the charges. In SI units, current density (symbol: j) is expressed in the SI base units of amperes per square metre.
In linear materials such as metals, and under low frequencies, the current density across the conductor surface is uniform. In such conditions, Ohm's law states that the current is directly proportional to the potential difference between two ends (across) of that metal (ideal) resistor (or other ohmic device):
formula_7
where formula_1 is the current, measured in amperes; formula_9 is the potential difference, measured in volts; and formula_10 is the resistance, measured in ohms. For alternating currents, especially at higher frequencies, skin effect causes the current to spread unevenly across the conductor cross-section, with higher density near the surface, thus increasing the apparent resistance.
Drift speed.
The mobile charged particles within a conductor move constantly in random directions, like the particles of a gas. (More accurately, a Fermi gas.) To create a net flow of charge, the particles must also move together with an average drift rate. Electrons are the charge carriers in most metals and they follow an erratic path, bouncing from atom to atom, but generally drifting in the opposite direction of the electric field. The speed they drift at can be calculated from the equation:
formula_11
where
Typically, electric charges in solids flow slowly. For example, in a copper wire of cross-section 0.5 mm2, carrying a current of 5 A, the drift velocity of the electrons is on the order of a millimetre per second. To take a different example, in the near-vacuum inside a cathode-ray tube, the electrons travel in near-straight lines at about a tenth of the speed of light.
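The figure quoted above can be reproduced from the usual relation I = n·A·q·v, so v = I/(n·A·q); the free-electron density of copper (about 8.5 × 10²⁸ per cubic metre) and the elementary charge used below are assumed typical values rather than figures from the article:
```python
def drift_speed(current_a, carrier_density_per_m3, area_m2, carrier_charge_c):
    """Drift speed v = I / (n * A * q)."""
    return current_a / (carrier_density_per_m3 * area_m2 * carrier_charge_c)

# Copper wire of 0.5 mm^2 cross-section carrying 5 A (the example in the text).
v = drift_speed(5.0, 8.5e28, 0.5e-6, 1.602e-19)
print(f"{v * 1000:.2f} mm/s")   # about 0.73 mm/s, i.e. of order a millimetre per second
```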
Any accelerating electric charge, and therefore any changing electric current, gives rise to an electromagnetic wave that propagates at very high speed outside the surface of the conductor. This speed is usually a significant fraction of the speed of light, as can be deduced from Maxwell's equations, and is therefore many times faster than the drift velocity of the electrons. For example, in AC power lines, the waves of electromagnetic energy propagate through the space between the wires, moving from a source to a distant load, even though the electrons in the wires only move back and forth over a tiny distance.
The ratio of the speed of the electromagnetic wave to the speed of light in free space is called the velocity factor, and depends on the electromagnetic properties of the conductor and the insulating materials surrounding it, and on their shape and size.
The magnitudes (not the natures) of these three velocities can be illustrated by an analogy with the three similar velocities associated with gases. (See also hydraulic analogy.)
|
6208
|
1544984
|
https://en.wikipedia.org/wiki?curid=6208
|
Charles Ancillon
|
Charles Ancillon (28 July 1659 – 5 July 1715) was a French jurist and diplomat.
Life.
Ancillon was born in Metz into a distinguished family of Huguenots. His father, David Ancillon (1617–1692), was obliged to leave France on the revocation of the Edict of Nantes, and became pastor of the French Protestant community in Berlin.
Ancillon studied law at Marburg, Geneva and Paris, where he was called to the bar. At the request of the Huguenots at Metz, he pleaded the city's cause at the court of King Louis XIV, urging that it should be excepted from the revocation of the Edict of Nantes, but his efforts were unsuccessful, and he joined his father in Berlin. He was at once appointed by Elector Frederick III "juge et directeur de colonie de Berlin" (judge and director of the Berlin colony). He also became the first headmaster of Französisches Gymnasium Berlin. Before this, he had published several works on the revocation of the Edict of Nantes and its consequences, but his literary capacity was mediocre, his style stiff and cold, and it was his personal character rather than his reputation as a writer that earned him the confidence of the elector.
In 1687 Ancillon was appointed head of the so-called "Académie des nobles," the principal educational establishment of the state; later on, as councillor of embassy, he took part in the negotiations which led to the assumption of the title of "King in Prussia" by the elector. In 1699 he succeeded Samuel Pufendorf as historiographer to the elector, and the same year replaced his uncle Joseph Ancillon as judge of all the French refugees in the Margraviate of Brandenburg.
Ancillon is mainly remembered for what he did for education in Brandenburg-Prussia, and the share he took, in co-operation with Gottfried Leibniz, in founding the Academy of Berlin. Of his fairly numerous works the most valued is the "Histoire de l'établissement des Français réfugiés dans les états de Brandebourg" published in Berlin in 1690.
|
6210
|
3022076
|
https://en.wikipedia.org/wiki?curid=6210
|
Clark Ashton Smith
|
Clark Ashton Smith (January 13, 1893 – August 14, 1961) was an influential American writer of fantasy, horror, and science fiction stories and poetry, and an artist. He achieved early recognition in California (largely through the enthusiasm of George Sterling) for traditional verse in the vein of Swinburne. As a poet, Smith is grouped with the West Coast Romantics alongside Joaquin Miller, Sterling, and Nora May French and remembered as "The Last of the Great Romantics" and "The Bard of Auburn". Smith's work was praised by his contemporaries. H. P. Lovecraft stated that "in sheer daemonic strangeness and fertility of conception, Clark Ashton Smith is perhaps unexcelled", and Ray Bradbury said that Smith "filled my mind with incredible worlds, impossibly beautiful cities, and still more fantastic creatures". Additional writers influenced by Smith include Leigh Brackett, Harlan Ellison, Stephen King, Fritz Leiber, George R. R. Martin, and Donald Sidney-Fryer.
Smith was one of "the big three of "Weird Tales", with Robert E. Howard and H. P. Lovecraft", though some readers objected to his morbidness and violation of pulp traditions. The fantasy writer and critic L. Sprague de Camp said of him that "nobody since Poe has so loved a well-rotted corpse". Smith was a member of the Lovecraft circle, and his literary friendship with Lovecraft lasted from 1922 until Lovecraft's death in 1937. His work is marked by an extraordinarily rich and ornate vocabulary, a cosmic perspective and a vein of sardonic and sometimes ribald humor.
Of his writing style, Smith stated: "My own conscious ideal has been to delude the reader into accepting an impossibility, or series of impossibilities, by means of a sort of verbal black magic, in the achievement of which I make use of prose-rhythm, metaphor, simile, tone-color, counter-point, and other stylistic resources, like a sort of incantation."
Biography.
Early life and education.
Smith was born January 13, 1893, in Long Valley, Placer County, California, into a family of English and New England heritage. He spent most of his life in the small town of Auburn, California, living in a cabin built by his parents, Fanny and Timeus Smith. Smith professed to hate the town's provincialism but rarely left it until he married late in life.
His formal education was limited: he suffered from psychological disorders including intense agoraphobia, and although he was accepted to high school after attending eight years of grammar school, his parents decided it was better for him to be taught at home. An insatiable reader with an extraordinary eidetic memory, Smith appeared to retain most or all of whatever he read. After leaving formal education, he embarked upon a self-directed course of literature, including "Robinson Crusoe", "Gulliver's Travels", the fairy tales of Hans Christian Andersen and Madame d'Aulnoy, the "Arabian Nights" and the poems of Edgar Allan Poe. He read an unabridged dictionary word for word, studying not only the definitions of the words but also their etymology.
The other main course in Smith's self-education was to read the complete 11th edition of the "Encyclopædia Britannica" at least twice. Smith later taught himself French and Spanish to translate verse out of those languages, including works by Gérard de Nerval, Paul Verlaine, Amado Nervo, Gustavo Adolfo Bécquer and all but 6 of Charles Baudelaire's 157 poems in "The Flowers of Evil".
Early writing.
His first literary efforts, at the age of 11, took the form of fairy tales and imitations of the Arabian Nights. Later, he wrote long adventure novels dealing with Oriental life. By 14 he had already written a short adventure novel called "The Black Diamonds" which was lost for years until published in 2002. Another juvenile novel was written in his teenaged years: "The Sword of Zagan" (unpublished until 2004). Like "The Black Diamonds", it uses a medieval, "Arabian Nights"-like setting, and the "Arabian Nights", like the fairy tales of the Brothers Grimm and the works of Edgar Allan Poe, are known to have strongly influenced Smith's early writing, as did William Beckford's "Vathek".
When he was 15, Smith read George Sterling's fantasy-horror poem "A Wine of Wizardry" in a national magazine (which he later described as "In the ruck of magazine verse it was like finding a fire-opal of the Titans in a potato bin") and decided he wanted to become a poet. At age 17, he sold several tales to "The Black Cat", a magazine which specialized in unusual tales. He also published some tales in the "Overland Monthly" in this brief foray into fiction which preceded his poetic career.
However, it was primarily poetry that motivated the young Smith and he confined his efforts to poetry for more than a decade. In his later youth, Smith met Sterling through a member of the local Auburn Monday Night Club, where Smith read several of his poems with considerable success. On a month-long visit to Sterling in Carmel, California, Smith was introduced by Sterling to the poetry of Charles Baudelaire.
He became Sterling's protégé and Sterling helped him to publish his first volume of poems, "The Star-Treader and Other Poems", at the age of 19. Smith received international acclaim for the collection. "The Star-Treader" was received very favorably by American critics, one of whom named Smith "the Keats of the Pacific". Smith briefly moved among the circle that included Ambrose Bierce and Jack London, but his early fame soon faded away.
Health breakdown period.
A little later, Smith's health broke down and for eight years his literary production was intermittent, though he produced his best poetry during this period. A small volume, "Odes and Sonnets", was brought out in 1918. Smith came into contact with literary figures who would later form part of H.P. Lovecraft's circle of correspondents; Smith knew them far earlier than Lovecraft. These figures include poet Samuel Loveman and bookman George Kirk. It was Smith who in fact later introduced Donald Wandrei to Lovecraft. For this reason, it has been suggested that Lovecraft might as well be referred to as a member of a "Smith" circle as Smith was a member of a Lovecraft one.
In 1920 Smith composed a celebrated long poem in blank verse, "The Hashish Eater, or The Apocalypse of Evil", published in "Ebony and Crystal" (1922). This was followed by a fan letter from H. P. Lovecraft, which was the beginning of 15 years of friendship and correspondence. With studied playfulness, Smith and Lovecraft borrowed each other's coinages of place names and the names of strange gods for their stories, though so different is Smith's treatment of the Lovecraft theme that it has been dubbed the "Clark Ashton Smythos."
In 1925 Smith published "Sandalwood", which was partly funded by a gift of $50 from Donald Wandrei. He wrote little fiction in this period with the exception of some imaginative vignettes or prose poems. Smith was poor for most of his life and often did hard manual jobs such as fruit picking and woodcutting to support himself and his parents. He was an able cook and made many kinds of wine. He also did well-digging, typing and journalism, as well as contributing a column to "The Auburn Journal" and sometimes working as its night editor.
One of Smith's artistic patrons and frequent correspondents was San Francisco businessman Albert Bender.
Prolific fiction-writing period.
At the beginning of the Depression in 1929, with his aged parents' health weakening, Smith resumed fiction writing and turned out more than a hundred short stories between 1929 and 1934, nearly all of which can be classed as weird horror or science fiction. Like Lovecraft, he drew upon the nightmares that had plagued him during youthful spells of sickness. Brian Stableford has written that the stories written during this brief phase of hectic productivity "constitute one of the most remarkable oeuvres in imaginative literature".
He published at his own expense a volume containing six of his best stories, "The Double Shadow and Other Fantasies", in an edition of 1000 copies printed by the "Auburn Journal". The theme of much of his work is egotism and its supernatural punishment; his weird fiction is generally macabre in subject matter, gloatingly preoccupied with images of death, decay and abnormality.
Most of Smith's weird fiction falls into four series set variously in Hyperborea, Poseidonis, Averoigne and Zothique. Hyperborea, which is a lost continent of the Miocene period, and Poseidonis, which is a remnant of Atlantis, are much the same, with a magical culture characterized by bizarreness, cruelty, death and postmortem horrors. Averoigne is Smith's version of pre-modern France, comparable to James Branch Cabell's Poictesme. Zothique exists millions of years in the future. It is "the last continent of earth, when the sun is dim and tarnished". These tales have been compared to the "Dying Earth" sequence of Jack Vance.
In 1933 Smith began corresponding with Robert E. Howard, the Texan creator of Conan the Barbarian. From 1933 to 1936, Smith, Howard and Lovecraft were the leaders of the Weird Tales school of fiction and corresponded frequently, although they never met. The writer of oriental fantasies E. Hoffmann Price is the only man known to have met all three in the flesh.
Critic Steve Behrends has suggested that the frequent theme of 'loss' in Smith's fiction (many of his characters attempt to recapture a long-vanished youth, early love, or picturesque past) may reflect Smith's own feeling that his career had suffered a "fall from grace":
Mid-late career: return to poetry and sculpture.
In September 1935, Smith's mother Fanny died. Smith spent the next two years nursing his father through his last illness. Timeus died in December 1937. Aged 44, Smith now virtually ceased writing fiction. He had been severely affected by several tragedies occurring in a short period of time: Robert E. Howard's death by suicide (1936), Lovecraft's death from cancer (1937) and the deaths of his parents, which left him exhausted. As a result, he withdrew from the scene, marking the end of the Golden Age of "Weird Tales". He began sculpting and resumed the writing of poetry. However, Smith was visited by many writers at his cabin, including Fritz Leiber, Rah Hoffman, Francis T. Laney and others.
In 1942, three years after August Derleth founded Arkham House for the purpose of preserving the work of H.P. Lovecraft, Derleth published the first of several major collections of Smith's fiction, "Out of Space and Time" (1942). This was followed by "Lost Worlds" (1944). The books sold slowly, went out of print and became costly rarities. Derleth published five more volumes of Smith's prose and two of his verse, and at his death in 1971 had a large volume of Smith's poems in press.
Later life, marriage and death.
In 1953, Smith suffered a coronary attack. Aged 61, he married Carol(yn) Jones Dorman on November 10, 1954. Dorman had much experience in Hollywood and radio public relations. After honeymooning at the Smith cabin, they moved to Pacific Grove, California, where he set up a household including her three children from a previous marriage. For several years he alternated between the house on Indian Ridge and their house in Pacific Grove. Smith had sold most of his father's tract, and in 1957 the old house burned down – the Smiths believed by arson, others said by accident.
Smith now reluctantly did gardening for other residents at Pacific Grove, and grew a goatee. He spent much time shopping and walking near the seafront but despite Derleth's badgering, resisted the writing of more fiction. In 1961 he suffered a series of strokes and in August 1961 he quietly died in his sleep, aged 68. After Smith's death, Carol remarried (becoming Carolyn Wakefield) and subsequently died of cancer.
The poet's ashes were buried beside, or beneath, a boulder to the immediate west of where his childhood home (destroyed by fire in 1957) stood; some were also scattered in a stand of blue oaks near the boulder. There was no marker. Plaques recognizing Smith have been erected at the Auburn Placer County Library in 1985 and in Bicentennial Park in Auburn in 2003.
Bookseller Roy A. Squires was appointed Smith's "west coast executor", with Jack L. Chalker as his "east coast executor". Squires published many letterpress editions of individual Smith poems.
Smith's literary estate is represented by his stepson, Prof William Dorman, director of CASiana Literary Enterprises. Arkham House owns the copyright to many Smith stories, though some are now in the public domain.
For 'posthumous collaborations' of Smith (stories completed by Lin Carter), see the entry on Lin Carter.
Artistic periods.
While Smith was always an artist who worked in several very different media, it is possible to identify three distinct periods in which one form of art had precedence over the others.
Poetry: until 1925.
Smith published most of his volumes of poetry in this period, including the aforementioned "The Star-Treader and Other Poems", as well as "Odes and Sonnets" (1918), "Ebony and Crystal" (1922) and "Sandalwood" (1925). His long poem "The Hashish-Eater; Or, the Apocalypse of Evil" was written in 1920.
Weird fiction: 1926–1935.
Smith wrote most of his weird fiction and Cthulhu Mythos stories, inspired by H. P. Lovecraft. Creatures of his invention include Aforgomon, Rlim-Shaikorth, Mordiggian, Tsathoggua, the wizard Eibon, and various others. In an homage to his friend, Lovecraft referred in "The Whisperer in Darkness" and "The Battle That Ended the Century" (written in collaboration with R. H. Barlow) to an Atlantean high-priest, "Klarkash-Ton".
Smith's weird stories form several cycles, called after the lands in which they are set: Averoigne, Hyperborea, Mars, Poseidonis, Zothique. To some extent Smith was influenced in his vision of such lost worlds by the teachings of Theosophy and the writings of Helena Blavatsky. Stories set in Zothique belong to the Dying Earth subgenre. Amongst Smith's science fiction tales are stories set on Mars and the invented planet of Xiccarph.
His short stories originally appeared in the magazines "Weird Tales", "Strange Tales", "Astounding Stories", "Stirring Science Stories" and "Wonder Stories".
Clark Ashton Smith was the third member of the great triumvirate of "Weird Tales", with Lovecraft and Robert E. Howard.
Many of Smith's stories were published in six hardcover volumes by August Derleth under his Arkham House imprint. For a full bibliography to 1978, see Sidney-Fryer, "Emperor of Dreams" (cited below). S. T. Joshi is working with other scholars to produce an updated bibliography of Smith's work.
A selection of Smith's best-known tales includes:
Visual art: 1935–1961.
By this time his interest in writing fiction began to lessen and he turned to creating sculptures from soft rock such as soapstone. Smith also made hundreds of fantastic paintings and drawings.
Bibliography.
The authoritative bibliography on Smith's work is S. T. Joshi, David E. Schultz, and Scott Conners' "Clark Ashton Smith: A Comprehensive Bibliography." NY: Hippocampus Press, 2020. The first Smith bibliography, which focused on his short fiction, was "The Tales Of Clark Ashton Smith," published by Thomas G L Cockcroft in New Zealand in 1951.
Other (essays, letters, etc.).
Schultz and Joshi are preparing a volume of Smith's letters to miscellaneous correspondents.
|
6211
|
1253959429
|
https://en.wikipedia.org/wiki?curid=6211
|
Context-sensitive grammar
|
A context-sensitive grammar (CSG) is a formal grammar in which the left-hand sides and right-hand sides of any production rules may be surrounded by a context of terminal and nonterminal symbols. Context-sensitive grammars are more general than context-free grammars, in the sense that there are languages that can be described by a CSG but not by a context-free grammar. Context-sensitive grammars are less general (in the same sense) than unrestricted grammars. Thus, CSGs are positioned between context-free and unrestricted grammars in the Chomsky hierarchy.
A formal language that can be described by a context-sensitive grammar, or, equivalently, by a noncontracting grammar or a linear bounded automaton, is called a context-sensitive language. Some textbooks actually define CSGs as non-contracting, although this is not how Noam Chomsky defined them in 1959. This choice of definition makes no difference in terms of the languages generated (i.e. the two definitions are weakly equivalent), but it does make a difference in terms of what grammars are structurally considered context-sensitive; the latter issue was analyzed by Chomsky in 1963.
Chomsky introduced context-sensitive grammars as a way to describe the syntax of natural language where it is often the case that a word may or may not be appropriate in a certain place depending on the context. Walter Savitch has criticized the terminology "context-sensitive" as misleading and proposed "non-erasing" as better explaining the distinction between a CSG and an unrestricted grammar.
Although it is well known that certain features of languages (e.g. cross-serial dependency) are not context-free, it is an open question how much of CSGs' expressive power is needed to capture the context sensitivity found in natural languages. Subsequent research in this area has focused on the more computationally tractable mildly context-sensitive languages. The syntaxes of some visual programming languages can be described by context-sensitive graph grammars.
Formal definition.
Formal grammar.
Let us notate a formal grammar as formula_1, with formula_2 a set of nonterminal symbols, formula_3 a set of terminal symbols, formula_4 a set of production rules, and formula_5 the start symbol.
A string formula_6 "directly yields", or "directly derives to", a string formula_7, denoted as formula_8, if "v" can be obtained from "u" by an application of some production rule in "P", that is, if formula_9 and formula_10, where formula_11 is a production rule, and formula_12 is the unaffected left and right part of the string, respectively.
More generally, "u" is said to "yield", or "derive to", "v", denoted as formula_13, if "v" can be obtained from "u" by repeated application of production rules, that is, if formula_14 for some "n" ≥ 0 and some strings formula_15. In other words, the relation formula_16 is the reflexive transitive closure of the relation formula_17.
The language of the grammar "G" is the set of all terminal-symbol strings derivable from its start symbol, formally: formula_18.
Derivations that do not end in a string composed of terminal symbols only are possible, but do not contribute to "L"("G").
Context-sensitive grammar.
A formal grammar is context-sensitive if each rule in "P" is either of the form formula_19 where formula_20 is the empty string, or of the form
α"A"β → αγβ
with "A" ∈ "N", formula_21, and formula_22.
The name "context-sensitive" is explained by the α and β that form the context of "A" and determine whether "A" can be replaced with γ or not.
By contrast, in a context-free grammar, no context is present: the left hand side of every production rule is just a nonterminal.
The string γ is not allowed to be empty. Without this restriction, the resulting grammars become equal in power to unrestricted grammars.
(Weakly) equivalent definitions.
A noncontracting grammar is a grammar in which, for any production rule of the form "u" → "v", the length of "u" is less than or equal to the length of "v".
Every context-sensitive grammar is noncontracting, while every noncontracting grammar can be converted into an equivalent context-sensitive grammar; the two classes are weakly equivalent.
Some authors use the term "context-sensitive grammar" to refer to noncontracting grammars in general.
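The two definitions can be contrasted with a small sketch; the toy rules below are made up for illustration, and the check for the α"A"β → αγβ form simply tries every possible choice of the rewritten nonterminal:

```python
# Sketch: the structural context-sensitive rule format versus the purely
# length-based noncontracting condition.  The rules used are toy examples.
def is_cs_rule(lhs, rhs, nonterminals):
    """True if lhs -> rhs can be written as alpha A beta -> alpha gamma beta,
    with A a single nonterminal and gamma nonempty."""
    for i, sym in enumerate(lhs):
        if sym not in nonterminals:
            continue
        alpha, beta = lhs[:i], lhs[i + 1:]
        gamma_len = len(rhs) - len(alpha) - len(beta)
        if gamma_len >= 1 and rhs.startswith(alpha) and rhs.endswith(beta):
            return True
    return False

def is_noncontracting(rules):
    """True if every rule u -> v satisfies |u| <= |v|."""
    return all(len(lhs) <= len(rhs) for lhs, rhs in rules)

N = {"S", "B", "C"}
print(is_cs_rule("aB", "ab", N))   # True:  alpha="a", A="B", gamma="b"
print(is_cs_rule("CB", "BC", N))   # False: no single-nonterminal rewrite fits
print(is_noncontracting([("CB", "BC"), ("S", "aSBC")]))  # True nonetheless
```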
The left-context- and right-context-sensitive grammars are defined by restricting the rules to just the form α"A" → αγ and to just "A"β → γβ, respectively. The languages generated by these grammars are also the full class of context-sensitive languages. The equivalence was established by Penttonen normal form.
Examples.
"anbncn".
The following context-sensitive grammar, with start symbol "S", generates the canonical non-context-free language { a^n b^n c^n | n ≥ 1 }:
Rules 1 and 2 allow for blowing up "S" to a^n BC(BC)^(n−1); rules 3 to 6 allow for successively exchanging each "CB" to "BC" (four rules are needed for that since a rule "CB" → "BC" wouldn't fit into the scheme α"A"β → αγβ); rules 7–10 allow replacing a non-terminal "B" or "C" with its corresponding terminal "b" or "c", respectively, provided it is in the right place.
A generation chain for "aaabbbccc" starts from "S" and applies the rules in the order 2, 2, 1, then the exchange rules 3, 4, 5, 6 three times in succession, and finally 7, 8, 8, 9, 10, 10.
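The production list itself is not reproduced above, so the sketch below uses one standard reconstruction of such a grammar; the auxiliary nonterminals, named "Z" and "W" here, are assumed rather than taken from the text. It replays the rule-application order of the generation chain, always rewriting the leftmost occurrence of a rule's left-hand side:

```python
# One standard context-sensitive grammar for { a^n b^n c^n | n >= 1 }.
# The auxiliary nonterminals Z and W are assumed names; the article's own
# production list is not reproduced in the text above.
RULES = {
    1: ("S", "aBC"),
    2: ("S", "aSBC"),
    3: ("CB", "CZ"),   # rules 3-6 together rewrite CB into BC
    4: ("CZ", "WZ"),
    5: ("WZ", "WC"),
    6: ("WC", "BC"),
    7: ("aB", "ab"),
    8: ("bB", "bb"),
    9: ("bC", "bc"),
    10: ("cC", "cc"),
}

def apply_leftmost(form: str, rule_no: int) -> str:
    """Rewrite the leftmost occurrence of the rule's left-hand side."""
    lhs, rhs = RULES[rule_no]
    i = form.index(lhs)               # raises ValueError if the rule is not applicable
    return form[:i] + rhs + form[i + len(lhs):]

# Rule order of the generation chain: 2, 2, 1, then 3-6 three times, then 7, 8, 8, 9, 10, 10.
order = [2, 2, 1] + [3, 4, 5, 6] * 3 + [7, 8, 8, 9, 10, 10]
form = "S"
for r in order:
    form = apply_leftmost(form, r)
print(form)   # aaabbbccc
```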
"anbncndn", etc..
More complicated grammars can be used to parse { "anbncndn" | "n" ≥ 1 }, and other languages with even more letters. Here we show a simpler approach using non-contracting grammars:
Start with a kernel of regular productions generating the sentential forms
formula_23 and then include the non-contracting productions
formula_24,
formula_25,
formula_26,
formula_27,
formula_28,
formula_29,
formula_30,
formula_31,
formula_32,
formula_33.
"ambncmdn".
A non contracting grammar (for which there is an equivalent CSG) for the language formula_34 is defined by
formula_35,
formula_36,
formula_37,
formula_38,
formula_39,
formula_40,
formula_41, and
formula_42.
With these definitions, a derivation for formula_43 is:
formula_44.
"a"2"i".
A noncontracting grammar for the language { "a"2"i" | "i" ≥ 1 } is constructed in Example 9.5 (p. 224) of (Hopcroft, Ullman, 1979):
Kuroda normal form.
Every context-sensitive grammar which does not generate the empty string can be transformed into a weakly equivalent one in Kuroda normal form. "Weakly equivalent" here means that the two grammars generate the same language. The normal form will not in general be context-sensitive, but will be a noncontracting grammar.
The Kuroda normal form is an actual normal form for non-contracting grammars.
Properties and uses.
Equivalence to linear bounded automaton.
A formal language can be described by a context-sensitive grammar if and only if it is accepted by some linear bounded automaton (LBA). In some textbooks this result is attributed solely to Landweber and Kuroda. Others call it the Myhill–Landweber–Kuroda theorem. (Myhill introduced the concept of deterministic LBA in 1960. Peter S. Landweber published in 1963 that the language accepted by a deterministic LBA is context sensitive. Kuroda introduced the notion of non-deterministic LBA and the equivalence between LBA and CSGs in 1964.)
It is still an open question whether every context-sensitive language can be accepted by a "deterministic" LBA.
Closure properties.
Context-sensitive languages are closed under complement. This 1988 result is known as the Immerman–Szelepcsényi theorem.
Moreover, they are closed under union, intersection, concatenation, substitution, inverse homomorphism, and Kleene plus.
Every recursively enumerable language "L" can be written as "h"("L′") for some context-sensitive language "L′" and some string homomorphism "h".
Computational problems.
The decision problem that asks whether a certain string "s" belongs to the language of a given context-sensitive grammar "G", is PSPACE-complete. Moreover, there are context-sensitive grammars whose languages are PSPACE-complete. In other words, there is a context-sensitive grammar "G" such that deciding whether a certain string "s" belongs to the language of "G" is PSPACE-complete (so "G" is fixed and only "s" is part of the input of the problem).
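Membership is nevertheless decidable: because the productions of a noncontracting grammar never shrink a sentential form, only forms no longer than the input need to be explored. A minimal brute-force sketch (it demonstrates decidability only, not the space-efficient PSPACE procedure; the toy rule set is made up):

```python
# Brute-force membership test for a noncontracting grammar: no rule shrinks
# a string, so only sentential forms of length <= |s| have to be explored.
from collections import deque

def member(s, rules, start="S"):
    seen, queue = {start}, deque([start])
    while queue:
        form = queue.popleft()
        if form == s:
            return True
        for lhs, rhs in rules:
            i = form.find(lhs)
            while i != -1:
                nxt = form[:i] + rhs + form[i + len(lhs):]
                if len(nxt) <= len(s) and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
                i = form.find(lhs, i + 1)
    return False

# Toy noncontracting rules for { a^n b^n | n >= 1 }, made up for illustration.
toy = [("S", "aSb"), ("S", "ab")]
print(member("aaabbb", toy), member("aab", toy))   # True False
```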
The emptiness problem for context-sensitive grammars (given a context-sensitive grammar "G", is "L"("G")=∅ ?) is undecidable.
As model of natural languages.
Savitch has proven the following theoretical result, on which he bases his criticism of CSGs as a basis for natural language: for any recursively enumerable set "R", there exists a context-sensitive language/grammar "G" which can be used as a sort of proxy to test membership in "R" in the following way: given a string "s", "s" is in "R" if and only if there exists a positive integer "n" for which "sc"^"n" is in the language of "G", where "c" is an arbitrary symbol not part of "R".
It has been shown that nearly all natural languages may in general be characterized by context-sensitive grammars, but the whole class of CSGs seems to be much bigger than natural languages. Worse yet, since the aforementioned decision problem for CSGs is PSPACE-complete, that makes them totally unworkable for practical use, as a polynomial-time algorithm for a PSPACE-complete problem would imply P=NP.
It was proven that some natural languages are not context-free, based on identifying so-called cross-serial dependencies and unbounded scrambling phenomena. However, this does not necessarily imply that the class of CSGs is necessary to capture "context sensitivity" in the colloquial sense of these terms in natural languages. For example, linear context-free rewriting systems (LCFRSs) are strictly weaker than CSGs but can account for the phenomenon of cross-serial dependencies; one can write an LCFRS grammar for { a^n b^n c^n d^n | n ≥ 1 }, for example.
Ongoing research on computational linguistics has focused on formulating other classes of languages that are "mildly context-sensitive" whose decision problems are feasible, such as tree-adjoining grammars, combinatory categorial grammars, coupled context-free languages, and linear context-free rewriting systems. The languages generated by these formalisms properly lie between the context-free and context-sensitive languages.
More recently, the class PTIME has been identified with range concatenation grammars, which are now considered to be the most expressive of the mildly context-sensitive language classes.
|
6212
|
7611264
|
https://en.wikipedia.org/wiki?curid=6212
|
Context-sensitive language
|
In formal language theory, a context-sensitive language is a formal language that can be defined by a context-sensitive grammar, where the applicability of a production rule may depend on the surrounding context of symbols. Unlike context-free grammars, which can apply rules regardless of context, context-sensitive grammars allow rules to be applied only when specific neighboring symbols are present, enabling them to express dependencies and agreements between distant parts of a string.
These languages correspond to type-1 languages in the Chomsky hierarchy and are equivalently defined by noncontracting grammars (grammars where production rules never decrease the total length of a string). Context-sensitive languages can model natural language phenomena such as subject-verb agreement, cross-serial dependencies, and other complex syntactic relationships that cannot be captured by simpler grammar types, making them important for computational linguistics and natural language processing.
Computational properties.
Computationally, a context-sensitive language is equivalent to a linear bounded nondeterministic Turing machine, also called a linear bounded automaton. That is a non-deterministic Turing machine with a tape of only formula_1 cells, where formula_2 is the size of the input and formula_3 is a constant associated with the machine. This means that every formal language that can be decided by such a machine is a context-sensitive language, and every context-sensitive language can be decided by such a machine.
This set of languages is also known as NLINSPACE or NSPACE("O"("n")), because they can be accepted using linear space on a non-deterministic Turing machine. The class LINSPACE (or DSPACE("O"("n"))) is defined the same, except using a deterministic Turing machine. Clearly LINSPACE is a subset of NLINSPACE, but it is not known whether LINSPACE = NLINSPACE.
Examples.
One of the simplest context-sensitive but not context-free languages is formula_4: the language of all strings consisting of "n" occurrences of the symbol "a", then "n" occurrences of "b", then "n" occurrences of "c" ("abc", "aabbcc", "aaabbbccc", etc.). A superset of this language, called the Bach language, is defined as the set of all strings where "a", "b" and "c" (or any other set of three symbols) occur equally often ("abc", "bca", "abcabc", etc.) and is also context-sensitive.
This language can be shown to be context-sensitive by constructing a linear bounded automaton which accepts it. The language can easily be shown to be neither regular nor context-free by applying the respective pumping lemmas for each of the language classes to it.
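A rough sketch of the marking strategy such an automaton can use, working on a single copy of the input and hence in linear space (the details are illustrative, not a transcription of any particular construction):

```python
# Linear-space "marking" decider for { a^n b^n c^n | n >= 1 }: check that the
# symbols appear in the order a...b...c, then repeatedly cross off one a,
# one b and one c until the tape is exhausted.
def accepts(s: str) -> bool:
    tape = list(s)
    if not tape or any(ch not in "abc" for ch in tape):
        return False
    # the blocks must appear in the order a...b...c
    collapsed = "".join(ch for i, ch in enumerate(tape) if i == 0 or ch != tape[i - 1])
    if collapsed != "abc":
        return False
    # cross off one a, one b and one c per pass; the counts must match exactly
    while any(ch != "X" for ch in tape):
        for wanted in "abc":
            try:
                tape[tape.index(wanted)] = "X"
            except ValueError:
                return False
    return True

print(accepts("aaabbbccc"), accepts("aaabbbcc"), accepts("abcabc"))  # True False False
```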
Similarly:
formula_5 is another context-sensitive language; the corresponding context-sensitive grammar can be easily projected starting with two context-free grammars generating sentential forms in the formats
formula_6
and
formula_7
and then supplementing them with a permutation production like
formula_8, a new starting symbol and standard syntactic sugar.
formula_9 is another context-sensitive language (the "3" in the name of this language is intended to mean a ternary alphabet); that is, the "product" operation defines a context-sensitive language (but the "sum" defines only a context-free language as the grammars formula_10 and formula_11 show). Because of the commutative property of the product, the most intuitive grammar for formula_12 is ambiguous. This problem can be avoided by considering a somewhat more restrictive definition of the language, e.g. formula_13. This can be specialized to
formula_14 and, from this, to formula_15, formula_16, etc.
formula_17 is a context-sensitive language. The corresponding context-sensitive grammar can be obtained as a generalization of the context-sensitive grammars for formula_18, formula_19, etc.
formula_20 is a context-sensitive language.
formula_21 is a context-sensitive language (the "2" in the name of this language is intended to mean a binary alphabet). This was proved by Hartmanis using pumping lemmas for regular and context-free languages over a binary alphabet and, after that, sketching a linear bounded multitape automaton accepting formula_22.
formula_23 is a context-sensitive language (the "1" in the name of this language is intended to mean a unary alphabet). This was credited by A. Salomaa to Matti Soittola by means of a linear bounded automaton over a unary alphabet (pages 213–214, exercise 6.8) and also to Martti Penttonen by means of a context-sensitive grammar also over a unary alphabet (See: Formal Languages by A. Salomaa, page 14, Example 2.5).
An example of recursive language that is not context-sensitive is any recursive language whose decision is an EXPSPACE-hard problem, say, the set of pairs of equivalent regular expressions with exponentiation.
|
6216
|
84755
|
https://en.wikipedia.org/wiki?curid=6216
|
Chinese room
|
The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness, regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle entitled "Minds, Brains, and Programs" and published in the journal "Behavioral and Brain Sciences". Before Searle, similar arguments had been presented by figures including Gottfried Wilhelm Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since. The centerpiece of Searle's argument is a thought experiment known as the Chinese room.
In the thought experiment, Searle imagines a person who does not understand Chinese isolated in a room with a book containing detailed instructions for manipulating Chinese symbols. When Chinese text is passed into the room, the person follows the book's instructions to produce Chinese symbols that, to fluent Chinese speakers outside the room, appear to be appropriate responses. According to Searle, the person is just following "syntactic" rules without "semantic" comprehension, and neither the human nor the room as a whole understands Chinese. He contends that when computers execute programs, they are similarly just applying syntactic rules without any real understanding or thinking.
The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence. Specifically, the argument is intended to refute a position Searle calls the strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."
Although its proponents originally presented the argument in reaction to statements of artificial intelligence (AI) researchers, it is not an argument against the goals of mainstream AI research because it does not show a limit in the amount of intelligent behavior a machine can display. The argument applies only to digital computers running programs and does not apply to machines in general. While widely discussed, the argument has been subject to significant criticism and remains controversial among philosophers of mind and AI researchers.
Searle's thought experiment.
Suppose that artificial intelligence research has succeeded in programming a computer to behave as if it understands Chinese. The machine accepts Chinese characters as input, carries out each instruction of the program step by step, and then produces Chinese characters as output. The machine does this so perfectly that no one can tell that they are communicating with a machine and not a hidden Chinese speaker.
The questions at issue are these: does the machine actually understand the conversation, or is it just simulating the ability to understand the conversation? Does the machine have a mind in exactly the same sense that people do, or is it just acting as if it had a mind?
Now suppose that Searle is in a room with an English version of the program, along with sufficient pencils, paper, erasers and filing cabinets. Chinese characters are slipped in under the door; he follows the program step-by-step, which eventually instructs him to slide other Chinese characters back out under the door. If the computer had passed the Turing test this way, it follows that Searle would do so as well, simply by running the program by hand.
Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing behavior that makes them appear to understand. However, Searle would not be able to understand the conversation. Therefore, he argues, it follows that the computer would not be able to understand the conversation either.
Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in the normal sense of the word. Therefore, he concludes that the strong AI hypothesis is false: a computer running a program that simulates a mind would not have a mind in the same sense that human beings have a mind.
History.
Gottfried Leibniz made a similar argument in 1714 against mechanism (the idea that everything that makes up a human being could, in principle, be explained in mechanical terms. In other words, that a person, including their mind, is merely a very complex machine). Leibniz used the thought experiment of expanding the brain until it was the size of a mill. Leibniz found it difficult to imagine that a "mind" capable of "perception" could be constructed using only mechanical processes.
Peter Winch made the same point in his book "The Idea of a Social Science and its Relation to Philosophy" (1958), where he provides an argument to show that "a man who understands Chinese is not a man who has a firm grasp of the statistical probabilities for the occurrence of the various words in the Chinese language" (p. 108).
Soviet cyberneticist Anatoly Dneprov made an essentially identical argument in 1961, in the form of the short story "The Game". In it, a stadium of people act as switches and memory cells implementing a program to translate a sentence of Portuguese, a language that none of them know. The game was organized by a "Professor Zarubin" to answer the question "Can mathematical machines think?" Speaking through Zarubin, Dneprov writes "the only way to prove that machines can think is to turn yourself into a machine and examine your thinking process" and he concludes, as Searle does, "We've proven that even the most perfect simulation of machine thinking is not the thinking process itself."
In 1974, Lawrence H. Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called the China brain, also the "Chinese Nation" or the "Chinese Gym".
Searle's version appeared in his 1980 paper "Minds, Brains, and Programs", published in "Behavioral and Brain Sciences". It eventually became the journal's "most influential target article", generating an enormous number of commentaries and responses in the ensuing decades, and Searle has continued to defend and refine the argument in many papers, popular articles and books. David Cole writes that "the Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear in the past 25 years".
Most of the discussion consists of attempts to refute it. "The overwhelming majority", notes "Behavioral and Brain Sciences" editor Stevan Harnad, "still think that the Chinese Room Argument is dead wrong". The sheer volume of the literature that has grown up around it inspired Pat Hayes to comment that the field of cognitive science ought to be redefined as "the ongoing research program of showing Searle's Chinese Room Argument to be false".
Searle's argument has become "something of a classic in cognitive science", according to Harnad. Varol Akman agrees, and has described the original paper as "an exemplar of philosophical clarity and purity".
Philosophy.
Although the Chinese Room argument was originally presented in reaction to the statements of artificial intelligence researchers, philosophers have come to consider it as an important part of the philosophy of mind. It is a challenge to functionalism and the computational theory of mind, and is related to such questions as the mind–body problem, the problem of other minds, the symbol grounding problem, and the hard problem of consciousness.
Strong AI.
Searle identified a philosophical position he calls "strong AI":
The definition depends on the distinction between simulating a mind and actually having one. Searle writes that "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind."
The claim is implicit in some of the statements of early AI researchers and analysts. For example, in 1955, AI founder Herbert A. Simon declared that "there are now in the world machines that think, that learn and create". Simon, together with Allen Newell and Cliff Shaw, after having completed the first program that could do formal reasoning (the Logic Theorist), claimed that they had "solved the venerable mind–body problem, explaining how a system composed of matter can have the properties of mind." John Haugeland wrote that "AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves."
Searle also ascribes the following claims to advocates of strong AI:
Strong AI as computationalism or functionalism.
In more recent presentations of the Chinese room argument, Searle has identified "strong AI" as "computer functionalism" (a term he attributes to Daniel Dennett). Functionalism is a position in modern philosophy of mind that holds that we can define mental phenomena (such as beliefs, desires, and perceptions) by describing their functions in relation to each other and to the outside world. Because a computer program can accurately represent functional relationships as relationships between symbols, a computer can have mental phenomena if it runs the right program, according to functionalism.
Stevan Harnad argues that Searle's depictions of strong AI can be reformulated as "recognizable tenets of computationalism, a position (unlike "strong AI") that is actually held by many thinkers, and hence one worth refuting." Computationalism is the position in the philosophy of mind which argues that the mind can be accurately described as an information-processing system.
Each of the following, according to Harnad, is a "tenet" of computationalism:
Recent philosophical discussions have revisited the implications of computationalism for artificial intelligence. Goldstein and Levinstein explore whether large language models (LLMs) like ChatGPT can possess minds, focusing on their ability to exhibit folk psychology, including beliefs, desires, and intentions. The authors argue that LLMs satisfy several philosophical theories of mental representation, such as informational, causal, and structural theories, by demonstrating robust internal representations of the world. However, they highlight that the evidence for LLMs having action dispositions necessary for belief-desire psychology remains inconclusive. Additionally, they refute common skeptical challenges, such as the "stochastic parrots" argument and concerns over memorization, asserting that LLMs exhibit structured internal representations that align with these philosophical criteria.
David Chalmers suggests that while current LLMs lack features like recurrent processing and unified agency, advancements in AI could address these limitations within the next decade, potentially enabling systems to achieve consciousness. This perspective challenges Searle's original claim that purely "syntactic" processing cannot yield understanding or consciousness, arguing instead that such systems could have authentic mental states.
Strong AI vs. biological naturalism.
Searle holds a philosophical position he calls "biological naturalism": that consciousness and understanding require specific biological machinery that is found in brains. He writes "brains cause minds" and that "actual human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains". Searle argues that this machinery (known in neuroscience as the "neural correlates of consciousness") must have some causal powers that permit the human experience of consciousness. Searle's belief in the existence of these powers has been criticized.
Searle does not disagree with the notion that machines can have consciousness and understanding, because, as he writes, "we are precisely such machines". Searle holds that the brain is, in fact, a machine, but that the brain gives rise to consciousness and understanding using specific machinery. If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding. However, without the specific machinery required, Searle does not believe that consciousness can occur.
Biological naturalism implies that one cannot determine if the experience of consciousness is occurring merely by examining how a system functions, because the specific machinery of the brain is essential. Thus, biological naturalism is directly opposed to both behaviorism and functionalism (including "computer functionalism" or "strong AI"). Biological naturalism is similar to identity theory (the position that mental states are "identical to" or "composed of" neurological events); however, Searle has specific technical objections to identity theory. Searle's biological naturalism and strong AI are both opposed to Cartesian dualism, the classical idea that the brain and mind are made of different "substances". Indeed, Searle accuses strong AI of dualism, writing that "strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter".
Consciousness.
Searle's original presentation emphasized understanding—that is, mental states with intentionality—and did not directly address other closely related ideas such as "consciousness". However, in more recent presentations, Searle has included consciousness as the real target of the argument.
David Chalmers writes, "it is fairly clear that consciousness is at the root of the matter" of the Chinese room.
Colin McGinn argues that the Chinese room provides strong evidence that the hard problem of consciousness is fundamentally insoluble. The argument, to be clear, is not about whether a machine can be conscious, but about whether it (or anything else for that matter) can be shown to be conscious. It is plain that any other method of probing the occupant of a Chinese room has the same difficulties in principle as exchanging questions and answers in Chinese. It is simply not possible to divine whether a conscious agency or some clever simulation inhabits the room.
Searle argues that this is only true for an observer outside of the room. The whole point of the thought experiment is to put someone inside the room, where they can directly observe the operations of consciousness. Searle claims that from his vantage point within the room there is nothing he can see that could imaginably give rise to consciousness, other than himself, and clearly he does not have a mind that can speak Chinese. In Searle's words, "the computer has nothing more than I have in the case where I understand nothing".
Applied ethics.
Patrick Hew used the Chinese Room argument to deduce requirements from military command and control systems if they are to preserve a commander's moral agency. He drew an analogy between a commander in their command center and the person in the Chinese Room, and analyzed it under a reading of Aristotle's notions of "compulsory" and "ignorance". Information could be "down converted" from meaning to symbols, and manipulated symbolically, but moral agency could be undermined if there was inadequate 'up conversion' into meaning. Hew cited examples from the USS "Vincennes" incident.
Computer science.
The Chinese room argument is primarily an argument in the philosophy of mind, and both major computer scientists and artificial intelligence researchers consider it irrelevant to their fields. However, several concepts developed by computer scientists are essential to understanding the argument, including symbol processing, Turing machines, Turing completeness, and the Turing test.
Strong AI vs. AI research.
Searle's arguments are not usually considered an issue for AI research. The primary mission of artificial intelligence research is only to create useful systems that act intelligently and it does not matter if the intelligence is "merely" a simulation. AI researchers Stuart J. Russell and Peter Norvig wrote in 2021: "We are interested in programs that behave intelligently. Individual aspects of consciousness—awareness, self-awareness, attention—can be programmed and can be part of an intelligent machine. The additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."
Searle does not disagree that AI research can create machines that are capable of highly intelligent behavior. The Chinese room argument leaves open the possibility that a digital machine could be built that acts more intelligently than a person, but does not have a mind or intentionality in the same way that brains do.
Searle's "strong AI hypothesis" should not be confused with "strong AI" as defined by Ray Kurzweil and other futurists, who use the term to describe machine intelligence that rivals or exceeds human intelligence—that is, artificial general intelligence, human level AI or superintelligence. Kurzweil is referring primarily to the amount of intelligence displayed by the machine, whereas Searle's argument sets no limit on this. Searle argues that a superintelligent machine would not necessarily have a mind and consciousness.
Turing test.
The Chinese room implements a version of the Turing test. Alan Turing introduced the test in 1950 to help answer the question "can machines think?" In the standard version, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.
Turing then considered each possible objection to the proposal "machines can think", and found that there are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for the test to measure for the presence of "consciousness" or "understanding". He did not believe this was relevant to the issues that he was addressing. He wrote:
To Searle, as a philosopher investigating the nature of mind and consciousness, these are the relevant mysteries. The Chinese room is designed to show that the Turing test is insufficient to detect the presence of consciousness, even if the room can behave or function as a conscious mind would.
Symbol processing.
Computers manipulate physical objects in order to carry out calculations and do simulations. AI researchers Allen Newell and Herbert A. Simon called this kind of machine a physical symbol system. It is also equivalent to the formal systems used in the field of mathematical logic.
Searle emphasizes the fact that this kind of symbol manipulation is syntactic (borrowing a term from the study of grammar). The computer manipulates the symbols using a form of syntax, without any knowledge of the symbol's semantics (that is, their meaning).
Newell and Simon had conjectured that a physical symbol system (such as a digital computer) had all the necessary machinery for "general intelligent action", or, as it is known today, artificial general intelligence. They framed this as a philosophical position, the physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action." The Chinese room argument does not refute this, because it is framed in terms of "intelligent action", i.e. the external behavior of the machine, rather than the presence or absence of understanding, consciousness and mind.
Twenty-first century AI programs (such as "deep learning") do mathematical operations on huge matrices of unidentified numbers and bear little resemblance to the symbolic processing used by AI programs at the time Searle wrote his critique in 1980. Nils Nilsson describes systems like these as "dynamic" rather than "symbolic". Nilsson notes that these are essentially digitized representations of dynamic systems—the individual numbers do not have a specific semantics, but are instead samples or data points from a dynamic signal, and it is the signal being approximated which would have semantics. Nilsson argues it is not reasonable to consider these signals as "symbol processing" in the same sense as the physical symbol systems hypothesis.
Chinese room and Turing completeness.
The Chinese room has a design analogous to that of a modern computer. It has a Von Neumann architecture, which consists of a program (the book of instructions), some memory (the papers and file cabinets), a machine that follows the instructions (the man), and a means to write symbols in memory (the pencil and eraser). A machine with this design is known in theoretical computer science as "Turing complete", because it has the necessary machinery to carry out any computation that a Turing machine can do, and therefore it is capable of doing a step-by-step simulation of any other digital machine, given enough memory and time. Turing writes, "all digital computers are in a sense equivalent." The widely accepted Church–Turing thesis holds that any function computable by an effective procedure is computable by a Turing machine.
The Turing completeness of the Chinese room implies that it can do whatever any other digital computer can do (albeit much, much more slowly). Thus, if the Chinese room does not or can not contain a Chinese-speaking mind, then no other digital computer can contain a mind. Some replies to Searle begin by arguing that the room, as described, cannot have a Chinese-speaking mind. Arguments of this form, according to Stevan Harnad, are "no refutation (but rather an affirmation)" of the Chinese room argument, because these arguments actually imply that no digital computers can have a mind.
There are some critics, such as Hanoch Ben-Yami, who argue that the Chinese room cannot simulate all the abilities of a digital computer, such as being able to determine the current time.
Complete argument.
Searle has produced a more formal version of the argument of which the Chinese Room forms a part. He presented the first version in 1984. The version given below is from 1990. The Chinese room thought experiment is intended to prove point A3.
He begins with three axioms:
(A1) "Programs are formal (syntactic)."
A program uses syntax to manipulate symbols and pays no attention to the semantics of the symbols. It knows where to put the symbols and how to move them around, but it does not know what they stand for or what they mean. For the program, the symbols are just physical objects like any others.
(A2) "Minds have mental contents (semantics)."
Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what it is they represent.
(A3) "Syntax by itself is neither constitutive of nor sufficient for semantics."
This is what the Chinese room thought experiment is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, there is no one or nothing in the room that understands what the symbols mean). Therefore, having syntax is not enough to generate semantics.
Searle posits that these lead directly to this conclusion:
(C1) Programs are neither constitutive of nor sufficient for minds.
This should follow without controversy from the first three: Programs don't have semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore no programs are minds.
This much of the argument is intended to show that artificial intelligence can never produce a machine with a mind by writing programs that manipulate symbols. The remainder of the argument addresses a different issue. Is the human brain running a program? In other words, is the computational theory of mind correct? He begins with an axiom that is intended to express the basic modern scientific consensus about brains and minds:
(A4) Brains cause minds.
Searle claims that we can derive "immediately" and "trivially" that:
(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.
Brains must have something that causes a mind to exist. Science has yet to determine exactly what it is, but it must exist, because minds exist. Searle calls it "causal powers". "Causal powers" is whatever the brain uses to create a mind. If anything else can cause a mind to exist, it must have "equivalent causal powers". "Equivalent causal powers" is whatever else that could be used to make a mind.
And from this he derives the further conclusions:
(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.
This follows from C1 and C2: Since no program can produce a mind, and "equivalent causal powers" produce minds, it follows that programs do not have "equivalent causal powers."
(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.
Since programs do not have "equivalent causal powers", "equivalent causal powers" produce minds, and brains produce minds, it follows that brains do not use programs to produce minds.
Refutations of Searle's argument take many different forms (see below). Computationalists and functionalists reject A3, arguing that "syntax" (as Searle describes it) can have "semantics" if the syntax has the right functional structure. Eliminative materialists reject A2, arguing that minds don't actually have "semantics"—that thoughts and other mental phenomena are inherently meaningless but nevertheless function as if they had meaning.
Replies.
Replies to Searle's argument may be classified according to what they claim to show:
Some of the arguments (robot and brain simulation, for example) fall into multiple categories.
Systems and virtual mind replies: finding the mind.
These replies attempt to answer the question: since the man in the room does not speak Chinese, where is the mind that does? These replies address the key ontological issues of mind versus body and simulation vs. reality. All of the replies that identify the mind in the room are versions of "the system reply".
System reply.
The basic version of the system reply argues that it is the "whole system" that understands Chinese. While the man understands only English, when he is combined with the program, scratch paper, pencils and file cabinets, they form a system that can understand Chinese. "Here, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part" Searle explains.
Searle notes that (in this simple version of the reply) the "system" is nothing more than a collection of ordinary physical objects; it grants the power of understanding and consciousness to "the conjunction of that person and bits of paper" without making any effort to explain how this pile of objects has become a conscious, thinking being. Searle argues that no reasonable person should be satisfied with the reply, unless they are "under the grip of an ideology". In order for this reply to be remotely plausible, one must take it for granted that consciousness can be the product of an information processing "system", and does not require anything resembling the actual biology of the brain.
Searle then responds by simplifying this list of physical objects: he asks what happens if the man memorizes the rules and keeps track of everything in his head. Then the whole system consists of just one object: the man himself. Searle argues that if the man does not understand Chinese then the system does not understand Chinese either, because now "the system" and "the man" both describe exactly the same object.
Critics of Searle's response argue that the program has allowed the man to have two minds in one head. If we assume a "mind" is a form of information processing, then the theory of computation can account for two computations occurring at once, namely (1) the computation for universal programmability (which is the function instantiated by the person and note-taking materials independently from any particular program contents) and (2) the computation of the Turing machine that is described by the program (which is instantiated by everything including the specific program). The theory of computation thus formally explains the open possibility that the second computation in the Chinese Room could entail a human-equivalent semantic understanding of the Chinese inputs. The focus belongs on the program's Turing machine rather than on the person's. However, from Searle's perspective, this argument is circular. The question at issue is whether consciousness is a form of information processing, and this reply requires that we make that assumption.
More sophisticated versions of the systems reply try to identify more precisely what "the system" is and they differ in exactly how they describe it. According to these replies, the "mind that speaks Chinese" could be such things as: the "software", a "program", a "running program", a simulation of the "neural correlates of consciousness", the "functional system", a "simulated mind", an "emergent property", or "a virtual mind".
Virtual mind reply.
Marvin Minsky suggested a version of the system reply known as the "virtual mind reply". The term "virtual" is used in computer science to describe an object that appears to exist "in" a computer (or computer network) only because software makes it appear to exist. The objects "inside" computers (including files, folders, and so on) are all "virtual", except for the computer's electronic components. Similarly, Minsky proposes that a computer may contain a "mind" that is virtual in the same sense as virtual machines, virtual communities and virtual reality.
To clarify the distinction between the simple systems reply given above and virtual mind reply, David Cole notes that two simulations could be running on one system at the same time: one speaking Chinese and one speaking Korean. While there is only one system, there can be multiple "virtual minds," thus the "system" cannot be the "mind".
Searle responds that such a mind is at best a simulation, and writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched." Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don't complain that it isn't really a calculator, because the physical attributes of the device do not matter." The question is, is the human mind like the pocket calculator, essentially composed of information, where a perfect simulation of the thing just is the thing? Or is the mind like the rainstorm, a thing in the world that is more than just its simulation, and not realizable in full by a computer simulation? For decades, this question of simulation has led AI researchers and philosophers to consider whether the term "synthetic intelligence" is more appropriate than the common description of such intelligences as "artificial."
These replies provide an explanation of exactly who it is that understands Chinese. If there is something "besides" the man in the room that can understand Chinese, Searle cannot argue that (1) the man does not understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument fails to prove that "strong AI" is false.
These replies, by themselves, do not provide any evidence that strong AI is true, however. They do not show that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it passes the Turing test. Searle argues that, if we are to consider Strong AI remotely plausible, the Chinese Room is an example that requires explanation, and it is difficult or impossible to explain how consciousness might "emerge" from the room or how the system would have consciousness. As Searle writes "the systems reply simply begs the question by insisting that the system must understand Chinese" and thus is dodging the question or hopelessly circular.
Robot and semantics replies: finding the meaning.
As far as the person in the room is concerned, the symbols are just meaningless "squiggles." But if the Chinese room really "understands" what it is saying, then the symbols must get their meaning from somewhere. These arguments attempt to connect the symbols to the things they symbolize. These replies address Searle's concerns about intentionality, symbol grounding and syntax vs. semantics.
Robot reply.
Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. This would allow a "causal connection" between the symbols and things they represent. Hans Moravec comments: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."
Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean. Searle writes "he doesn't see what comes into the robot's eyes."
Derived meaning.
Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is "talking" to and through the programmers who designed the knowledge base in his file cabinet. The symbols Searle manipulates are already meaningful, they are just not meaningful to him.
Searle says that the symbols only have a "derived" meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, like a book, has no understanding of its own.
Contextualist reply.
Some have argued that the meanings of the symbols would come from a vast "background" of commonsense knowledge encoded in the program and the filing cabinets. This would provide a "context" that would give the symbols their meaning.
Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert Dreyfus has also criticized the idea that the "background" can be represented symbolically.
To each of these suggestions, Searle's response is the same: no matter how much knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics."
However, for those who accept that Searle's actions simulate a mind, separate from his own, the important question is not what the symbols mean to Searle, what is important is what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that roboticists can supply.
Brain simulation and connectionist replies: redesigning the room.
These arguments are all versions of the systems reply that identify a particular kind of system as being important; they identify some special technology that would create conscious understanding in a machine. (The "robot" and "commonsense knowledge" replies above also specify a certain kind of system as being important.)
Brain simulator reply.
Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker. This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain.
Searle replies that such a simulation does not reproduce the important features of the brain—its causal and intentional states. He is adamant that "human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains." Moreover, he argues:
China brain.
What if we ask each citizen of China to simulate one neuron, using the telephone system, to simulate the connections between axons and dendrites? In this version, it seems obvious that no individual would have any understanding of what the brain might be saying. It is also obvious that this system would be functionally equivalent to a brain, so if consciousness is a function, this system would be conscious.
Brain replacement scenario.
In this, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle's critics argue that there would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins. (See Ship of Theseus for a similar thought experiment.)
Connectionist reply.
Closely related to the brain simulator reply, this claims that a massively parallel connectionist architecture would be capable of understanding. Modern deep learning is massively parallel and has successfully displayed intelligent behavior in many domains. Nils Nilsson argues that modern AI is using digitized "dynamic signals" rather than symbols of the kind used by AI in 1980. Here it is the sampled signal which would have the semantics, not the individual numbers manipulated by the program. This is a different kind of machine than the one that Searle visualized.
Combination reply.
This response combines the robot reply with the brain simulation reply, arguing that a brain simulation connected to the world through a robot body could have a mind.
Many mansions / wait till next year reply.
Better technology in the future will allow computers to understand. Searle agrees that this is possible, but considers the point irrelevant; he accepts that there may be other hardware besides brains that has conscious understanding.
These arguments (and the robot or common-sense knowledge replies) identify some special technology that would help create conscious understanding in a machine. They may be interpreted in two ways: either they claim (1) this technology is required for consciousness, the Chinese room does not or cannot implement this technology, and therefore the Chinese room cannot pass the Turing test or (even if it did) it would not have conscious understanding. Or they may be claiming that (2) it is easier to see that the Chinese room has a mind if we visualize this technology as being used to create it.
In the first case, where features like a robot body or a connectionist architecture are required, Searle claims that strong AI (as he understands it) has been abandoned. The Chinese room has all the elements of a Turing complete machine, and thus is capable of simulating any digital computation whatsoever. If Searle's room cannot pass the Turing test then there is no other digital technology that could pass the Turing test. If Searle's room could pass the Turing test, but still does not have a mind, then the Turing test is not sufficient to determine if the room has a "mind". Either way, it denies one or the other of the positions Searle thinks of as "strong AI", proving his argument.
The brain arguments in particular deny strong AI if they assume that there is no simpler way to describe the mind than to create a program that is just as mysterious as the brain was. He writes "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works." If computation does not provide an explanation of the human mind, then strong AI has failed, according to Searle.
Other critics hold that the room as Searle described it does, in fact, have a mind, however they argue that it is difficult to see—Searle's description is correct, but misleading. By redesigning the room more realistically they hope to make this more obvious. In this case, these arguments are being used as appeals to intuition (see next section).
In fact, the room can just as easily be redesigned to weaken our intuitions. Ned Block's Blockhead argument suggests that the program could, in theory, be rewritten into a simple lookup table of rules of the form "if the user writes "S", reply with "P" and goto X". At least in principle, any program can be rewritten (or "refactored") into this form, even a brain simulation. In the blockhead scenario, the entire mental state is hidden in the letter X, which represents a memory address—a number associated with the next rule. It is hard to visualize that an instant of one's conscious experience can be captured in a single large number, yet this is exactly what "strong AI" claims. On the other hand, such a lookup table would be ridiculously large (to the point of being physically impossible), and the states could therefore be overly specific.
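As an illustration only (the rule table, state numbering and messages below are invented for this sketch and are not from Block or Searle), a lookup-table conversation program of the kind the Blockhead argument imagines can be written in a few lines of Python; the entire intermediate "mental state" is the single number X carried from one lookup to the next.

# Illustrative sketch of a "blockhead"-style lookup-table program: every rule
# maps (current state X, input) to (reply, next state X). The toy table below
# stands in for the astronomically large table the argument imagines.
RULES = {
    (0, "ni hao"): ("ni hao", 1),
    (1, "ni hui shuo zhongwen ma?"): ("hui, yi dian dian", 2),
    (2, "zaijian"): ("zaijian", 0),
}

def step(state, message):
    # Pure table lookup: the program "understands" nothing about the symbols.
    return RULES.get((state, message), ("...", state))

state = 0
for msg in ["ni hao", "ni hui shuo zhongwen ma?", "zaijian"]:
    reply, state = step(state, msg)
    print(msg, "->", reply, "(state X =", state, ")")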
Searle argues that however the program is written or however the machine is connected to the world, the mind is being simulated by a simple step-by-step digital machine (or machines). These machines are always just like the man in the room: they understand nothing and do not speak Chinese. They are merely manipulating symbols without knowing what they mean. Searle writes: "I can have any formal program you like, but I still understand nothing."
Speed and complexity: appeals to intuition.
The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese speaking mind could exist in Searle's room, or how the symbols he manipulates could become meaningful. However, by raising doubts about Searle's intuitions they support other positions, such as the system and robot replies. These arguments, if accepted, prevent Searle from claiming that his conclusion is obvious by undermining the intuitions that his certainty requires.
Several critics believe that Searle's argument relies entirely on intuitions. Block writes "Searle's argument depends for its force on intuitions that certain entities do not think." Daniel Dennett describes the Chinese room argument as a misleading "intuition pump" and writes "Searle's thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the obvious conclusion from it."
Some of the arguments above also function as appeals to intuition, especially those that are intended to make it seem more plausible that the Chinese room contains a mind, which can include the robot, commonsense knowledge, brain simulation and connectionist replies. Several of the replies above also address the specific issue of complexity. The connectionist reply emphasizes that a working artificial intelligence system would have to be as complex and as interconnected as the human brain. The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be "an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge", as Daniel Dennett explains.
Speed and complexity replies.
Many of these critiques emphasize speed and complexity of the human brain, which processes information at 100 billion operations per second (by some estimates). Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
An especially vivid version of the speed and complexity reply is from Paul and Patricia Churchland. They propose this analogous thought experiment: "Consider a dark room containing a man holding a bar magnet or charged object. If the man pumps the magnet up and down, then, according to Maxwell's theory of artificial luminance (AL), it will initiate a spreading circle of electromagnetic waves and will thus be luminous. But as all of us who have toyed with magnets or charged balls well know, their forces (or any other forces for that matter), even when set in motion produce no luminance at all. It is inconceivable that you might constitute real luminance just by moving forces around!" The Churchlands' point is that the man would have to wave the magnet up and down something like 450 trillion times per second in order to see anything.
Stevan Harnad is critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes "Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental. It should be clear that is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of 'complexity.')"
Searle argues that his critics are also relying on intuitions, however his opponents' intuitions have no empirical basis. He writes that, in order to consider the "system reply" as remotely plausible, a person must be "under the grip of an ideology". The system reply only makes sense (to Searle) if one assumes that any "system" can have consciousness, just by virtue of being a system with the right behavior and functional parts. This assumption, he argues, is not tenable given our experience of consciousness.
Other minds and zombies: meaninglessness.
Several replies argue that Searle's argument is irrelevant because his assumptions about the mind and consciousness are faulty. Searle believes that human beings directly experience their consciousness, intentionality and the nature of the mind every day, and that this experience of consciousness is not open to question. He writes that we must "presuppose the reality and knowability of the mental." The replies below question whether Searle is justified in using his own experience of consciousness to determine that it is more than mechanical symbol processing. In particular, the other minds reply argues that we cannot use our experience of consciousness to answer questions about other minds (even the mind of a computer), the epiphenomena replies question whether we can make any argument at all about something like consciousness which cannot, by definition, be detected by any experiment, and the eliminative materialist reply argues that Searle's own personal consciousness does not "exist" in the sense that Searle thinks it does.
Other minds reply.
The "Other Minds Reply" points out that Searle's argument is a version of the problem of other minds, applied to machines. There is no way we can determine if other people's subjective experience is the same as our own. We can only study their behavior (i.e., by giving them our own Turing test). Critics of Searle argue that he is holding the Chinese room to a higher standard than we would hold an ordinary person.
Nils Nilsson writes "If a program behaves as if it were multiplying, most of us would say that it is, in fact, multiplying. For all I know, Searle may only be behaving as if he were thinking deeply about these matters. But, even though I disagree with him, his simulation is pretty good, so I'm willing to credit him with real thought."
Turing anticipated Searle's line of argument (which he called "The Argument from Consciousness") in 1950 and makes the other minds reply. He noted that people never consider the problem of other minds when dealing with each other. He writes that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks." The Turing test simply extends this "polite convention" to machines. He does not intend to solve the problem of other minds (for machines or people) and he does not think we need to.
Replies considering that Searle's "consciousness" is undetectable.
If we accept Searle's description of intentionality, consciousness, and the mind, we are forced to accept that consciousness is epiphenomenal: that it "casts no shadow" i.e. is undetectable in the outside world. Searle's "causal properties" cannot be detected by anyone outside the mind, otherwise the Chinese Room could not pass the Turing test—the people outside would be able to tell there was not a Chinese speaker in the room by detecting their causal properties. Since they cannot detect causal properties, they cannot detect the existence of the mental. Thus, Searle's "causal properties" and consciousness itself is undetectable, and anything that cannot be detected either does not exist or does not matter.
Mike Alder calls this the "Newton's Flaming Laser Sword Reply". He argues that the entire argument is frivolous, because it is non-verificationist: not only is the distinction between simulating a mind and having a mind ill-defined, but it is also irrelevant because no experiments were, or even can be, proposed to distinguish between the two.
Daniel Dennett provides this illustration: suppose that, by some mutation, a human being is born that does not have Searle's "causal properties" but nevertheless acts exactly like a human being. This is a philosophical zombie, as formulated in the philosophy of mind. This new animal would reproduce just as any other human and eventually there would be more of these zombies. Natural selection would favor the zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out. So therefore, if Searle is right, it is most likely that human beings (as we see them today) are actually "zombies", who nevertheless insist they are conscious. It is impossible to know whether we are all zombies or not. Even if we are all zombies, we would still believe that we are not.
Eliminative materialist reply.
Several philosophers argue that consciousness, as Searle describes it, does not exist. Daniel Dennett describes consciousness as a "user illusion".
This position is sometimes referred to as eliminative materialism: the view that consciousness is not a concept that can "enjoy reduction" to a strictly mechanical description, but rather is a concept that will be simply "eliminated" once the way the "material" brain works is fully understood, in just the same way as the concept of a demon has already been eliminated from science rather than enjoying reduction to a strictly mechanical description. Other mental properties, such as original intentionality (also called "meaning", "content", and "semantic character"), are also commonly regarded as special properties related to beliefs and other propositional attitudes. Eliminative materialism maintains that propositional attitudes such as beliefs and desires, among other intentional mental states that have content, do not exist. If eliminative materialism is the correct scientific account of human cognition then the assumption of the Chinese room argument that "minds have mental contents (semantics)" must be rejected.
Searle disagrees with this analysis and argues that "the study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't ... what we wanted to know is what distinguishes the mind from thermostats and livers." He takes it as obvious that we can detect the presence of consciousness and dismisses these replies as being off the point.
Other replies.
Margaret Boden argued in her paper "Escaping from the Chinese Room" that even if the person in the room does not understand the Chinese, it does not mean there is no understanding in the room. The person in the room at least understands the rule book used to provide output responses. She then points out that the same applies to machine languages: a natural language sentence is understood by the programming language code that instantiates it, which in turn is understood by the lower-level compiler code, and so on. This implies that the distinction between syntax and semantics is not fixed, as Searle presupposes, but relative: the semantics of natural language is realized in the syntax of the programming language; the semantics of the programming language is in turn realized in the syntax of the compiler code. On this view, Searle's mistake is to assume a binary notion of understanding (a system either understands or it does not) rather than a graded one, in which each level of the system understands less than the one above it.
Carbon chauvinism.
Searle's conclusion that "human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains" has sometimes been described as a form of "carbon chauvinism". Steven Pinker suggested that a response to that conclusion would be to make a counter thought experiment to the Chinese Room, where the incredulity goes the other way. He brings as an example the short story "They're Made Out of Meat" which depicts an alien race composed of some electronic beings, who upon finding Earth express disbelief that the meat brains of humans can experience consciousness and thought.
However, Searle himself denied being carbon chauvinist. He said "I have not tried to show that only biological based systems like our brains can think ... I regard this issue as up for grabs". He said that even silicon machines could theoretically have human-like consciousness and thought, if the actual physical–chemical properties of silicon could be used in a way that produces consciousness and thought, but "until we know how the brain does it we are not in a position to try to do it artificially".
|
6217
|
4842600
|
https://en.wikipedia.org/wiki?curid=6217
|
Charon (disambiguation)
|
Charon, in Greek mythology, is the ferryman who carried the souls of the dead to the underworld.
Charon may also refer to:
|
6220
|
7903804
|
https://en.wikipedia.org/wiki?curid=6220
|
Circle
|
A circle is a shape consisting of all points in a plane that are at a given distance from a given point, the centre. The distance between any point of the circle and the centre is called the radius. The length of a line segment connecting two points on the circle and passing through the centre is called the diameter. A circle bounds a region of the plane called a disc.
The circle has been known since before the beginning of recorded history. Natural circles are common, such as the full moon or a slice of round fruit. The circle is the basis for the wheel, which, with related inventions such as gears, makes much of modern machinery possible. In mathematics, the study of the circle has helped inspire the development of geometry, astronomy and calculus.
Terminology.
All of the specified regions may be considered as "open", that is, not containing their boundaries, or as "closed", including their respective boundaries.
Etymology.
The word "circle" derives from the Greek κίρκος/κύκλος ("kirkos/kuklos"), itself a metathesis of the Homeric Greek κρίκος ("krikos"), meaning "hoop" or "ring". The origins of the words "circus" and "circuit" are closely related.
History.
Prehistoric people made stone circles and timber circles, and circular elements are common in petroglyphs and cave paintings. Disc-shaped prehistoric artifacts include the Nebra sky disc and jade discs called Bi.
The Egyptian Rhind papyrus, dated to 1700 BCE, gives a method to find the area of a circle. The result corresponds to 256/81 (3.16049...) as an approximate value of π.
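A quick numerical check of the papyrus rule, as it is usually read (taking the circle's area to be that of a square on 8/9 of its diameter; the variable names below are illustrative):

from fractions import Fraction

# Rhind papyrus rule (usual reading): the area of a circle of diameter d is
# taken to be the area of a square of side (8/9) * d.
d = Fraction(9)                          # any diameter works; 9 keeps it exact
area_rhind = (Fraction(8, 9) * d) ** 2
implied_pi = area_rhind / (d / 2) ** 2   # compare with area = pi * r**2
print(implied_pi, float(implied_pi))     # 256/81 ≈ 3.16049...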
Book 3 of Euclid's "Elements" deals with the properties of circles and gives Euclid's definition of a circle.
In Plato's Seventh Letter there is a detailed definition and explanation of the circle. Plato explains the perfect circle, and how it is different from any drawing, words, definition or explanation. Early science, particularly geometry and astrology and astronomy, was connected to the divine for most medieval scholars, and many believed that there was something intrinsically "divine" or "perfect" that could be found in circles.
In 1882, Ferdinand von Lindemann proved that π is transcendental, showing that the millennia-old problem of squaring the circle cannot be performed with straightedge and compass.
With the advent of abstract art in the early 20th century, geometric objects became an artistic subject in their own right. Wassily Kandinsky in particular often used circles as an element of his compositions.
Symbolism and religious use.
From the time of the earliest known civilisations – such as the Assyrians and ancient Egyptians, those in the Indus Valley and along the Yellow River in China, and the Western civilisations of ancient Greece and Rome during classical Antiquity – the circle has been used directly or indirectly in visual art to convey the artist's message and to express certain ideas.
However, differences in worldview (beliefs and culture) had a great impact on artists' perceptions. While some emphasised the circle's perimeter to demonstrate their democratic manifestation, others focused on its centre to symbolise the concept of cosmic unity. In mystical doctrines, the circle mainly symbolises the infinite and cyclical nature of existence, but in religious traditions it represents heavenly bodies and divine spirits.
The circle signifies many sacred and spiritual concepts, including unity, infinity, wholeness, the universe, divinity, balance, stability and perfection, among others. Such concepts have been conveyed in cultures worldwide through the use of symbols, for example, a compass, a halo, the vesica piscis and its derivatives (fish, eye, aureole, mandorla, etc.), the ouroboros, the Dharma wheel, a rainbow, mandalas, rose windows and so forth. Magic circles are part of some traditions of Western esotericism.
Analytic results.
Circumference.
The ratio of a circle's circumference to its diameter is π (pi), an irrational constant approximately equal to 3.141592654. The ratio of a circle's circumference to its radius is 2π. Thus the circumference "C" is related to the radius "r" and diameter "d" by:
C = 2πr = πd.
Area enclosed.
As proved by Archimedes, in his Measurement of a Circle, the area enclosed by a circle is equal to that of a triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, which comes to π multiplied by the radius squared:
A = πr².
Equivalently, denoting diameter by "d",
A = πd²/4 ≈ 0.7854 d²,
that is, approximately 79% of the circumscribing square (whose side is of length "d").
The circle is the plane curve enclosing the maximum area for a given arc length. This relates the circle to a problem in the calculus of variations, namely the isoperimetric inequality.
Radian.
If a circle of radius "r" is centred at the vertex of an angle, and that angle intercepts an arc of the circle with an arc length of "s", then the radian measure of the angle is the ratio of the arc length to the radius:
θ = s/r.
The circular arc is said to subtend the angle, known as the central angle, at the centre of the circle. One radian is the measure of the central angle subtended by a circular arc whose length is equal to its radius. The angle subtended by a complete circle at its centre is a complete angle, which measures 2π radians, 360 degrees, or one turn.
Using radians, the formula for the arc length "L" of a circular arc of radius "r" and subtending a central angle of measure "θ" is
L = rθ,
and the formula for the area "A" of a circular sector of radius "r" and with central angle of measure "θ" is
A = r²θ/2.
In the special case "θ" = 2π, these formulae yield the circumference of a complete circle and the area of a complete disc, respectively.
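These two formulae translate directly into code; the short sketch below (function and variable names are illustrative, not standard) checks the special case θ = 2π against the circumference and disc-area formulae given earlier.

import math

def arc_length(r, theta):
    # Length of a circular arc of radius r subtending the angle theta (radians).
    return r * theta

def sector_area(r, theta):
    # Area of a circular sector of radius r with central angle theta (radians).
    return r * r * theta / 2

r = 3.0
print(arc_length(r, 2 * math.pi), 2 * math.pi * r)    # both 18.84955...
print(sector_area(r, 2 * math.pi), math.pi * r * r)   # both 28.27433...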
Equations.
Cartesian coordinates.
Equation of a circle.
In an "x"–"y" Cartesian coordinate system, the circle with centre coordinates ("a", "b") and radius "r" is the set of all points ("x", "y") such that
(x − a)² + (y − b)² = r².
This equation, known as the "equation of the circle", follows from the Pythagorean theorem applied to any point on the circle: as shown in the adjacent diagram, the radius is the hypotenuse of a right-angled triangle whose other sides are of length |"x" − "a"| and |"y" − "b"|. If the circle is centred at the origin (0, 0), then the equation simplifies to
x² + y² = r².
One coordinate as a function of the other.
The circle of radius "r" centred at ("x"0, "y"0) in the "x"–"y" plane can be broken into two semicircles, each of which is the graph of a function, "y"+("x") and "y"−("x"), respectively:
y±(x) = y₀ ± √(r² − (x − x₀)²),
for values of "x" ranging from "x"0 − "r" to "x"0 + "r".
Parametric form.
The equation can be written in parametric form using the trigonometric functions sine and cosine as
x = a + r cos t,  y = b + r sin t,
where "t" is a parametric variable in the range 0 to 2, interpreted geometrically as the angle that the ray from ("a", "b") to ("x", "y") makes with the positive "x" axis.
An alternative parametrisation of the circle is
x = a + r(1 − t²)/(1 + t²),  y = b + 2rt/(1 + t²).
In this parameterisation, the ratio of "t" to "r" can be interpreted geometrically as the stereographic projection of the line passing through the centre parallel to the "x" axis (see Tangent half-angle substitution). However, this parameterisation works only if "t" is made to range not only through all reals but also to a point at infinity; otherwise, the leftmost point of the circle would be omitted.
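Both parametrisations can be checked numerically; in the sketch below (the function names and the sample circle are illustrative), every generated point satisfies the implicit equation (x − a)² + (y − b)² = r².

import math

def point_standard(a, b, r, t):
    # Standard parametrisation: t is the angle from the positive x axis.
    return a + r * math.cos(t), b + r * math.sin(t)

def point_rational(a, b, r, t):
    # Tangent half-angle parametrisation; it misses one point of the circle
    # unless t is allowed to tend to infinity.
    return a + r * (1 - t * t) / (1 + t * t), b + 2 * r * t / (1 + t * t)

a, b, r = 2.0, -1.0, 5.0
for x, y in (point_standard(a, b, r, 0.7), point_rational(a, b, r, 3.0)):
    print(round((x - a) ** 2 + (y - b) ** 2 - r * r, 12))   # 0.0 both times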
3-point form.
The equation of the circle determined by three points ("x"1, "y"1), ("x"2, "y"2), ("x"3, "y"3) not on a line is obtained by a conversion of the "3-point form of a circle equation":
formula_15
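The same circle can also be recovered numerically by solving the two perpendicular-bisector conditions for the centre; the helper below is an illustrative sketch (its name and the example points are arbitrary), not the 3-point formula itself.

def circle_through(p1, p2, p3):
    # Centre and radius of the circle through three non-collinear points, from
    # |P - p1|^2 = |P - p2|^2 and |P - p1|^2 = |P - p3|^2 (a 2x2 linear system).
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12, b1 = 2 * (x2 - x1), 2 * (y2 - y1), x2**2 - x1**2 + y2**2 - y1**2
    a21, a22, b2 = 2 * (x3 - x1), 2 * (y3 - y1), x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("the three points are collinear")
    cx = (b1 * a22 - b2 * a12) / det
    cy = (a11 * b2 - a21 * b1) / det
    return (cx, cy), ((cx - x1) ** 2 + (cy - y1) ** 2) ** 0.5

print(circle_through((0, 0), (2, 0), (0, 2)))   # centre (1.0, 1.0), radius 1.414...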
Homogeneous form.
In homogeneous coordinates, each conic section with the equation of a circle has the form
x² + y² − 2axz − 2byz + cz² = 0.
It can be proven that a conic section is a circle exactly when it contains (when extended to the complex projective plane) the points "I"(1: "i": 0) and "J"(1: −"i": 0). These points are called the circular points at infinity.
Polar coordinates.
In polar coordinates, the equation of a circle is
r² − 2rr₀ cos(θ − φ) + r₀² = a²,
where "a" is the radius of the circle, formula_18 are the polar coordinates of a generic point on the circle, and formula_19 are the polar coordinates of the centre of the circle (i.e., "r"0 is the distance from the origin to the centre of the circle, and "φ" is the anticlockwise angle from the positive "x" axis to the line connecting the origin to the centre of the circle). For a circle centred on the origin, i.e. , this reduces to . When , or when the origin lies on the circle, the equation becomes
r = 2a cos(θ − φ).
In the general case, the equation can be solved for "r", giving
r = r₀ cos(θ − φ) ± √(a² − r₀² sin²(θ − φ)).
Without the ± sign, the equation would in some cases describe only half a circle.
Complex plane.
In the complex plane, a circle with a centre at "c" and radius "r" has the equation
|z − c| = r.
In parametric form, this can be written as
z = re^(it) + c.
The slightly generalised equation
p z z̄ + g z + ḡ z̄ = q
for real "p", "q" and complex "g" is sometimes called a generalised circle. This becomes the above equation for a circle with formula_25, since formula_26. Not all generalised circles are actually circles: a generalised circle is either a (true) circle or a line.
Tangent lines.
The tangent line through a point "P" on the circle is perpendicular to the diameter passing through "P". If "P" = ("x"1, "y"1) and the circle has centre ("a", "b") and radius "r", then the tangent line is perpendicular to the line from ("a", "b") to ("x"1, "y"1), so it has the form ("x"1 − "a")"x" + ("y"1 − "b")"y" = "c". Evaluating at ("x"1, "y"1) determines the value of "c", and the result is that the equation of the tangent is
(x₁ − a)(x − a) + (y₁ − b)(y − b) = r²,
or
(x₁ − a)x + (y₁ − b)y = (x₁ − a)x₁ + (y₁ − b)y₁.
If "y"1 ≠ "b", then the slope of this line is
dy/dx = −(x₁ − a)/(y₁ − b).
This can also be found using implicit differentiation.
When the centre of the circle is at the origin, then the equation of the tangent line becomes
x₁x + y₁y = r²,
and its slope is
dy/dx = −x₁/y₁.
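A small numerical check of the tangent-slope formula (the circle and point below are arbitrary examples chosen for this sketch): the product of the tangent's slope and the radius' slope is −1, confirming that the tangent is perpendicular to the radius.

def tangent_slope(a, b, x1, y1):
    # Slope of the tangent at (x1, y1) on the circle centred at (a, b); valid for y1 != b.
    return -(x1 - a) / (y1 - b)

def radius_slope(a, b, x1, y1):
    # Slope of the radius from the centre (a, b) to (x1, y1); valid for x1 != a.
    return (y1 - b) / (x1 - a)

a, b = 1.0, 2.0
x1, y1 = 4.0, 6.0                      # on the circle of radius 5: 3**2 + 4**2 = 25
print(tangent_slope(a, b, x1, y1) * radius_slope(a, b, x1, y1))   # -1.0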
Properties.
Inscribed angles.
An inscribed angle (examples are the blue and green angles in the figure) is exactly half the corresponding central angle (red). Hence, all inscribed angles that subtend the same arc (pink) are equal. Angles inscribed on the arc (brown) are supplementary. In particular, every inscribed angle that subtends a diameter is a right angle (since the central angle is 180°).
Sagitta.
The sagitta (also known as the versine) is a line segment drawn perpendicular to a chord, between the midpoint of that chord and the arc of the circle.
Given the length "y" of a chord and the length "x" of the sagitta, the Pythagorean theorem can be used to calculate the radius of the unique circle that will fit around the two lines:
r = y²/(8x) + x/2.
Another proof of this result, which relies only on two chord properties given above, is as follows. Given a chord of length "y" and with sagitta of length "x", since the sagitta intersects the midpoint of the chord, we know that it is a part of a diameter of the circle. Since the diameter is twice the radius, the "missing" part of the diameter is (2"r" − "x") in length. Using the fact that one part of one chord times the other part is equal to the same product taken along a chord intersecting the first chord, we find that (2"r" − "x")"x" = ("y"/2)². Solving for "r", we find the required result.
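The sagitta formula is easy to apply directly; in the sketch below (an illustrative example, with hypothetical function and variable names), a unit circle cut by a chord of length 1.6 has a sagitta of 0.4.

def radius_from_sagitta(y, x):
    # Radius of the circle with chord length y and sagitta x (both positive).
    return y * y / (8 * x) + x / 2

print(radius_from_sagitta(1.6, 0.4))   # 1.0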
Compass and straightedge constructions.
There are many compass-and-straightedge constructions resulting in circles.
The simplest and most basic is the construction given the centre of the circle and a point on the circle. Place the fixed leg of the compass on the centre point, the movable leg on the point on the circle and rotate the compass.
Circle of Apollonius.
Apollonius of Perga showed that a circle may also be defined as the set of points in a plane having a constant "ratio" (other than 1) of distances to two fixed foci, "A" and "B". (The set of points where the distances are equal is the perpendicular bisector of segment "AB", a line.) That circle is sometimes said to be drawn "about" two points.
The proof is in two parts. First, one must prove that, given two foci "A" and "B" and a ratio of distances, any point "P" satisfying the ratio of distances must fall on a particular circle. Let "C" be another point, also satisfying the ratio and lying on segment "AB". By the angle bisector theorem the line segment "PC" will bisect the interior angle "APB", since the segments are similar:
AC/BC = AP/BP.
Analogously, a line segment "PD" through some point "D" on "AB" extended bisects the corresponding exterior angle "BPQ" where "Q" is on "AP" extended. Since the interior and exterior angles sum to 180 degrees, the angle "CPD" is exactly 90 degrees; that is, a right angle. The set of points "P" such that angle "CPD" is a right angle forms a circle, of which "CD" is a diameter.
Second, see for a proof that every point on the indicated circle satisfies the given ratio.
Cross-ratios.
A closely related property of circles involves the geometry of the cross-ratio of points in the complex plane. If "A", "B", and "C" are as above, then the circle of Apollonius for these three points is the collection of points "P" for which the absolute value of the cross-ratio is equal to one:
|[A, B; C, P]| = 1.
Stated another way, "P" is a point on the circle of Apollonius if and only if the cross-ratio is on the unit circle in the complex plane.
Generalised circles.
If "C" is the midpoint of the segment "AB", then the collection of points "P" satisfying the Apollonius condition
|AP| / |BP| = |AC| / |BC|
is not a circle, but rather a line.
Thus, if "A", "B", and "C" are given distinct points in the plane, then the locus of points "P" satisfying the above equation is called a "generalised circle." It may either be a true circle or a line. In this sense a line is a generalised circle of infinite radius.
Inscription in or circumscription about other figures.
In every triangle a unique circle, called the incircle, can be inscribed such that it is tangent to each of the three sides of the triangle.
About every triangle a unique circle, called the circumcircle, can be circumscribed such that it goes through each of the triangle's three vertices.
A tangential polygon, such as a tangential quadrilateral, is any convex polygon within which a circle can be inscribed that is tangent to each side of the polygon. Every regular polygon and every triangle is a tangential polygon.
A cyclic polygon is any convex polygon about which a circle can be circumscribed, passing through each vertex. A well-studied example is the cyclic quadrilateral. Every regular polygon and every triangle is a cyclic polygon. A polygon that is both cyclic and tangential is called a bicentric polygon.
A hypocycloid is a curve that is inscribed in a given circle by tracing a fixed point on a smaller circle that rolls within and tangent to the given circle.
Limiting case of other figures.
The circle can be viewed as a limiting case of various other figures:
Locus of constant sum.
Consider a finite set of "n" points in the plane. The locus of points such that the sum of the squares of the distances to the given points is constant is a circle, whose centre is at the centroid of the given points.
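This can be verified numerically; in the sketch below (the point set, radius and sampled angles are arbitrary choices for illustration), the sum of squared distances is the same for every sampled point of a circle centred at the centroid.

import math

pts = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]
cx = sum(p[0] for p in pts) / len(pts)   # centroid x
cy = sum(p[1] for p in pts) / len(pts)   # centroid y

def sum_of_squared_distances(x, y):
    return sum((x - px) ** 2 + (y - py) ** 2 for px, py in pts)

r = 2.5
print([round(sum_of_squared_distances(cx + r * math.cos(t), cy + r * math.sin(t)), 9)
       for t in (0.0, 1.0, 2.0, 4.0)])   # four equal values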
A generalisation for higher powers of distances is obtained if, instead of formula_40 points, the vertices of the regular polygon formula_42 are taken. The locus of points such that the sum of the formula_43-th power of distances formula_44 to the vertices of a given regular polygon with circumradius formula_45 is constant is a circle, if
formula_46
whose centre is the centroid of the formula_42.
In the case of the equilateral triangle, the loci of the constant sums of the second and fourth powers are circles, whereas for the square, the loci are circles for the constant sums of the second, fourth, and sixth powers. For the regular pentagon the constant sum of the eighth powers of the distances will be added and so forth.
Squaring the circle.
Squaring the circle is the problem, proposed by ancient geometers, of constructing a square with the same area as a given circle by using only a finite number of steps with compass and straightedge.
In 1882, the task was proven to be impossible, as a consequence of the Lindemann–Weierstrass theorem, which proves that pi () is a transcendental number, rather than an algebraic irrational number; that is, it is not the root of any polynomial with rational coefficients. Despite the impossibility, this topic continues to be of interest for pseudomath enthusiasts.
Generalisations.
In other "p"-norms.
Defining a circle as the set of points with a fixed distance from a point, different shapes can be considered circles under different definitions of distance. In "p"-norm, distance is determined by
‖x‖_p = (|x₁|^p + |x₂|^p + … + |xₙ|^p)^(1/p).
In Euclidean geometry, "p" = 2, giving the familiar
‖x‖₂ = √(x₁² + x₂² + … + xₙ²).
In taxicab geometry, "p" = 1. Taxicab circles are squares with sides oriented at a 45° angle to the coordinate axes. While each side would have length √2·"r" using a Euclidean metric, where "r" is the circle's radius, its length in taxicab geometry is 2"r", so a circle's circumference is 8"r". Thus, the value of a geometric analog to π is 4 in this geometry. The formula for the unit circle in taxicab geometry is |"x"| + |"y"| = 1 in Cartesian coordinates and
r = 1/(|sin θ| + |cos θ|)
in polar coordinates.
A circle of radius 1 (using this distance) is the von Neumann neighborhood of its centre.
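The value 4 (and the familiar π for "p" = 2) can be recovered numerically by approximating the unit "p"-norm circle with a fine polygon and measuring each segment in the same "p"-norm; the sketch below is illustrative and its function names are not standard.

import math

def unit_pnorm_circle_perimeter(p, n=200000):
    # Polygonal approximation of the perimeter of the unit circle of the p-norm,
    # with each segment length also measured in the p-norm.
    def on_circle(t):
        c, s = math.cos(t), math.sin(t)
        scale = (abs(c) ** p + abs(s) ** p) ** (1.0 / p)
        return c / scale, s / scale
    total = 0.0
    x0, y0 = on_circle(0.0)
    for k in range(1, n + 1):
        x1, y1 = on_circle(2.0 * math.pi * k / n)
        total += (abs(x1 - x0) ** p + abs(y1 - y0) ** p) ** (1.0 / p)
        x0, y0 = x1, y1
    return total

print(unit_pnorm_circle_perimeter(1) / 2)   # about 4, the taxicab analogue of pi
print(unit_pnorm_circle_perimeter(2) / 2)   # about 3.14159, the familiar value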
A circle of radius "r" for the Chebyshev distance ("L"∞ metric) on a plane is also a square with side length 2"r" parallel to the coordinate axes, so planar Chebyshev distance can be viewed as equivalent by rotation and scaling to planar taxicab distance. However, this equivalence between "L"1 and "L"∞ metrics does not generalise to higher dimensions.
Topological definition.
The circle is the one-dimensional hypersphere (the 1-sphere).
In topology, a circle is not limited to the geometric concept, but includes all shapes homeomorphic to it. Two topological circles are equivalent if one can be transformed into the other via a deformation of R3 upon itself (known as an ambient isotopy).
|
6221
|
7903804
|
https://en.wikipedia.org/wiki?curid=6221
|
Cardinal (Catholic Church)
|
A cardinal is a senior member of the clergy of the Catholic Church. As titular members of the clergy of the Diocese of Rome, they serve as advisors to the pope, who is the bishop of Rome and the visible head of the worldwide Catholic Church. Cardinals are chosen and formally created by the pope, and typically hold the title for life. Collectively, they constitute the College of Cardinals. The most solemn responsibility of the cardinals is to elect a new pope in a conclave, almost always from among themselves, with a few historical exceptions, when the Holy See is vacant.
During the period between a pope's death or resignation and the election of his successor, the day-to-day governance of the Holy See is in the hands of the College of Cardinals. The right to participate in a conclave is limited to cardinals who have not reached the age of 80 years by the day the vacancy occurs. With the pope, cardinals collectively participate in papal consistories, in which matters of importance to the Church are considered and new cardinals may be created. Cardinals of working age are also often appointed to roles overseeing dicasteries (departments) of the Roman Curia, the central administration of the Catholic Church.
Cardinals are drawn from a variety of backgrounds, being appointed as cardinals in addition to their existing roles within the Church. Most cardinals are bishops and archbishops leading dioceses and archdioceses around the world – often the most prominent diocese or archdiocese in their country. Others are titular bishops who are current or former officials within the Roman Curia, generally the heads of dicasteries and other bodies linked to the Curia. A very small number are priests recognised by the pope for their service to the Church. Canon law generally requires them to be consecrated as bishops before they are made cardinals, but some are granted a papal dispensation. There are no strict criteria for elevation to the College of Cardinals. Since 1917, a potential cardinal must already be at least a priest, but laymen have been cardinals in the past. The selection is entirely up to the pope, and tradition is his only guide.
there are serving cardinals, of whom are eligible to vote in a conclave to elect a new pope.
History.
There is general disagreement about the origin of the term, but a chief consensus is that the Latin "cardinalis" comes from the term "cardo" (meaning 'pivot' or 'hinge'). It was first used in late antiquity to designate a bishop or priest who was incorporated into a church for which he had not originally been ordained. In Rome the first persons to be called cardinals were the deacons of the seven regions of the city at the beginning of the 6th century, when the word began to mean 'principal', 'eminent', or 'superior'.
The name was also given to the senior priest in each of the "title" churches (the parish churches) of Rome and to the bishops of the seven sees surrounding the city. By the 8th century the Roman cardinals constituted a privileged class among the Roman clergy. They took part in the administration of the Church of Rome and in the papal liturgy. By decree of a synod of 769, only a cardinal was eligible to become Bishop of Rome. Cardinals were granted the privilege of wearing the red hat by Pope Innocent IV in 1244.
In cities other than Rome, the name cardinal began to be applied to certain churchmen as a mark of honour. The earliest example of this occurs in a letter sent by Pope Zacharias in 747 to Pippin the Younger, ruler of the Franks, in which Zacharias applied the title to the priests of Paris to distinguish them from country clergy. This meaning of the word spread rapidly, and from the 9th century various episcopal cities had a special class among the clergy known as cardinals. The use of the title was reserved for the cardinals of Rome in 1567 by Pius V.
In 1059, five years after the East-West Schism, the right of electing the pope was reserved to the principal clergy of Rome and the bishops of the seven suburbicarian sees. In the 12th century the practice of appointing ecclesiastics from outside Rome as cardinals began, with each of them assigned a church in Rome as his titular church or linked with one of the suburbicarian dioceses, while still being incardinated in a diocese other than that of Rome.
The term "cardinal" at one time applied to any priest permanently assigned or incardinated to a church, or specifically to the senior priest of an important church, based on the Latin ('hinge'), meaning 'pivotal' as in "principal" or "chief". The term was applied in this sense as early as the 9th century to the priests of the (parishes) of the diocese of Rome.
In the year 1563, the Ecumenical Council of Trent, headed by Pope Pius IV, wrote about the importance of selecting good cardinals: "nothing is more necessary to the Church of God than that the holy Roman pontiff apply that solicitude which by the duty of his office he owes the universal Church in a very special way by associating with himself as cardinals the most select persons only, and appoint to each church most eminently upright and competent shepherds; and this the more so, because our Lord Jesus Christ will require at his hands the blood of the sheep of Christ that perish through the evil government of shepherds who are negligent and forgetful of their office."
The earlier influence of temporal rulers, notably the kings of France, reasserted itself through the influence of cardinals of certain nationalities or politically significant movements. Traditions even developed entitling certain monarchs, including those of Austria, Spain, and France, to nominate one of their trusted clerical subjects to be created cardinal, a so-called "crown-cardinal".
In early modern times, cardinals often had important roles in secular affairs. In some cases, they took on powerful positions in government. In Henry VIII's England, his chief minister was for some time Cardinal Wolsey. Cardinal Richelieu's power was so great that he was for many years effectively the ruler of France. Richelieu's successor was also a cardinal, Jules Mazarin. Guillaume Dubois and André-Hercule de Fleury complete the list of the four great cardinals to have ruled France. In Portugal, due to a succession crisis, one cardinal, Henry of Portugal, was crowned king, the only example of a cardinal-king (although John II Casimir Vasa was a cardinal from 1646 until he resigned in 1647, later being elected and crowned King of Poland, in 1648 and 1649, respectively).
While the incumbents of some sees are regularly made cardinals, and some countries are entitled to at least one cardinal by concordat (usually earning either its primate or the metropolitan of the capital city the cardinal's hat), almost no see carries an actual right to the cardinalate, not even if its bishop is a patriarch: the notable exception is the Patriarch of Lisbon who, by a 1737 bull of Pope Clement XII, is accorded the right to be elevated to the rank of cardinal in the consistory following his appointment.
Papal elections.
In 1059, Pope Nicholas II gave cardinals the right to elect the Bishop of Rome in the papal bull "In nomine Domini". For a time this power was assigned exclusively to the cardinal bishops, but in 1179 the Third Lateran Council restored the right to the whole body of cardinals.
Numbers.
In 1586, Pope Sixtus V limited the number of cardinals to 70: six cardinal bishops, 50 cardinal priests, and 14 cardinal deacons. The number of seventy was in reference to the Sanhedrin and to the seventy disciples. Pope John XXIII exceeded that limit, citing the need to staff church offices. In November 1970, in "Ingravescentem aetatem", Pope Paul VI established that electors would be under the age of 80 years. When it took effect on 1 January 1971, it deprived 25 cardinals of the right to participate in a conclave. In October 1975, in "Romano Pontifici eligendo", he set the maximum number of electors at 120, while establishing no limit on the overall size of the college.
Popes can set aside church laws and they have regularly brought the number of cardinals under the age of 80 to more than 120, reaching as high as 140 with Pope Francis' consistory of December 2024. No more than 120 electors participated in a conclave until the conclave following the death of Pope Francis, in which 133 cardinals participated.
Pope Paul VI also increased the number of cardinal bishops by assigning that rank, in 1965, to patriarchs of the Eastern Catholic Churches when named cardinals. In 2018, Pope Francis expanded the order of cardinal bishops by elevating cardinals holding Roman titles, because the order had not grown despite recent decades' expansion of the two lower orders of cardinals, and because all six existing cardinal bishops were over the age limit for a conclave.
Titular churches.
Each cardinal is assigned a titular church upon his creation, which is always a church in the city of Rome. Through the process of opting, a cardinal can rise through the ranks from cardinal deacon to cardinal priest, and from cardinal priest to cardinal bishop – in which case he obtains one of the suburbicarian sees located around the city of Rome. The only exception is for patriarchs of the Eastern Catholic Churches.
Nevertheless, cardinals possess no power of governance nor are they to intervene in any way in matters which pertain to the administration of goods, discipline, or the service of their titular churches. They are allowed to celebrate Mass and hear confessions and lead visits and pilgrimages to their titular churches, in coordination with the staff of the church. They often support their churches monetarily, and many cardinals do keep in contact with the pastoral staffs of their titular churches.
The Dean of the College of Cardinals, in addition to such a titular church, also receives the titular bishopric of Ostia, the primary suburbicarian see. Cardinals governing a particular church retain that church.
Title and reference style.
In 1630, Pope Urban VIII decreed their title to be "Eminence" (previously, it had been "illustrissimo" and "reverendissimo") and decreed that their secular rank would equate to prince, making them second only to the pope and crowned monarchs.
In accordance with tradition, they sign by placing the title "Cardinal" (abbreviated "Card.") after their personal name and before their surname as, for instance, "John Card(inal) Doe" or, in Latin, "Ioannes Card(inalis) Doe". Some writers, such as James-Charles Noonan, hold that, in the case of cardinals, the form used for signatures should be used also when referring to them in English.
Official sources, such as the Catholic News Service, say that the correct form for referring to a cardinal in English is normally as "Cardinal [First name] [Surname]". This is also the rule given in stylebooks not associated with the church. This style is generally followed on the websites of the Holy See and episcopal conferences as well. Oriental patriarchs who are created cardinals customarily use "Sanctae Ecclesiae Cardinalis" as their full title, probably because they do not belong to the Roman clergy.
The "[First name] Cardinal [Surname]" order is used in the Latin proclamation of the election of a new pope by the cardinal protodeacon, if the new pope is a cardinal, as has been the case since 1389.
The title "Prince of the Church" has historically been applied to cardinals of the Catholic Church, and sometimes more broadly to senior members of the church hierarchy. It has been rejected by Pope Francis, who stated to a group of newly created cardinals "He (Jesus) does not call you to become 'princes' of the Church, to 'sit on his right or on his left.' He calls you to serve like Him and with Him." The title is still applied contemporarily, both officially and other times in criticism of the perceived attitudes of some cardinals.
Orders and their chief offices.
Cardinal bishops.
Cardinal bishops (cardinals of the episcopal order; "cardinales episcopi") are the senior order of cardinals. Though in modern times the vast majority of cardinals are also bishops or archbishops, few are "cardinal bishops". Until 1150, there were seven cardinal bishops, each presiding over one of the seven suburbicarian sees around Rome: Ostia, Albano, Porto and Santa Rufina, Palestrina, Sabina and Mentana, Frascati, and Velletri. Of these seven, Velletri was united with Ostia from 1150 until 1914, when Pope Pius X separated them again, but decreed that whichever cardinal bishop became Dean of the College of Cardinals would keep the suburbicarian see he already held, adding to it that of Ostia, with the result that there continued to be only six cardinal bishops. The actual number of cardinal bishops for the majority of the second millennium was thus six. Since 1962, the cardinal bishops have only a titular relationship with the suburbicarian sees, each of which is governed by a separate ordinary.
Until 1961, membership in the order of cardinal bishops was achieved through precedence in the College of Cardinals. When a suburbicarian see fell vacant, the most senior cardinal by precedence could exercise his option to claim the see and be promoted to the order of cardinal bishops. Pope John XXIII abolished that privilege on 10 March 1961 and made the right to promote someone to the order of cardinal bishops the sole prerogative of the pope.
In 1965, Pope Paul VI decreed in his motu proprio Ad purpuratorum Patrum Collegium that patriarchs of the Eastern Catholic Churches who were named cardinals (i.e. "cardinal patriarchs") would also be cardinal bishops, ranking after the six Latin Church cardinal bishops of the suburbicarian sees. Latin Church patriarchs who become cardinals are cardinal priests, not cardinal bishops: for example Angelo Scola was made the Patriarch of Venice in 2002 and cardinal priest of Santi XII Apostoli in 2003. Those of cardinal patriarch rank continue to hold their patriarchal see and are not assigned any Roman title (suburbicarian see, title or deaconry).
At the June 2018 consistory, Pope Francis increased the number of Latin Church cardinal bishops to match the expansion in cardinal priests and cardinal deacons in recent decades. He elevated four cardinals to this rank by temporarily granting their titular churches and deaconries suburbicarian rank, making them equivalent to suburbicarian see titles. At the time of the announcement, all six cardinal bishops of suburbicarian see titles, as well as two of the three cardinal patriarchs, were non-electors as they had reached the age of 80. Pope Francis created another cardinal bishop in the same way on 1 May 2020, bringing the number of Latin Church cardinal bishops to 11.
The Dean of the College of Cardinals, the highest ranking cardinal, was formerly the longest serving cardinal bishop, but since 1965 is elected by the Latin Church cardinal bishops from among their number, subject to papal approval. Likewise the Vice-Dean, formerly the second longest serving, is also elected. Seniority of the remaining Latin Church cardinal bishops is still by date of appointment to the rank. The current Dean is Giovanni Battista Re and the Vice-Dean is Leonardo Sandri.
Cardinal priests.
Cardinal priests ("cardinales presbyteri") are the most numerous of the three orders of cardinals in the Catholic Church, ranking above the cardinal deacons and below the cardinal bishops. Those who are named cardinal priests today are generally also bishops of important dioceses throughout the world, though some hold Curial positions.
In modern times, the term "cardinal priest" is interpreted as meaning a cardinal who is of the order of priests. Originally this referred to certain key priests of important churches of the Diocese of Rome, who were recognized as the cardinal priests – the important priests chosen by the pope to advise him in his duties as Bishop of Rome. Certain clerics in many dioceses at the time, not just that of Rome, were said to be the key personnel – the term gradually became exclusive to Rome to indicate those entrusted with electing the Bishop of Rome, the pope.
While the cardinalate has long been expanded beyond the Roman pastoral clergy and Roman Curia, every cardinal priest has a titular church in Rome, though they may be bishops or archbishops elsewhere, just as cardinal bishops were given one of the suburbicarian dioceses around Rome. Pope Paul VI abolished all administrative rights cardinals had with regard to their titular churches, though the cardinal's name and coat of arms are still posted in the church, and they are expected to celebrate Mass and preach there if convenient when they are in Rome.
While the number of cardinals was small from the times of the Roman Empire to the Renaissance, and frequently smaller than the number of recognized churches entitled to a cardinal priest, in the 16th century the college expanded markedly. In 1587, Pope Sixtus V sought to arrest this growth by fixing the maximum size of the college at 70, including 50 cardinal priests, about twice the historical number. This limit was respected until 1958, and the list of titular churches was modified only on rare occasions, generally when a building fell into disrepair. When Pope John XXIII abolished the limit, he began to add new churches to the list, which Popes Paul VI and John Paul II continued to do. Today there are close to 150 titular churches, out of over 300 churches in Rome.
The cardinal who is the longest-serving member of the order of cardinal priests is titled "cardinal protopriest". He had certain ceremonial duties in the conclave that have effectively ceased because he would generally have already reached age 80, at which cardinals are barred from the conclave. The current cardinal protopriest is Michael Michai Kitbunchu of Thailand.
Cardinal deacons.
The cardinal deacons ("cardinales diaconi") are the lowest-ranking cardinals. Cardinals elevated to the diaconal order are either officials of the Roman Curia or priests elevated after their 80th birthday, the latter chosen mainly as an honor, since those over 80 are not able to vote in a conclave. Bishops with diocesan responsibilities, by contrast, are generally created cardinal priests rather than cardinal deacons.
Cardinal deacons derive originally from the seven deacons in the Papal Household who supervised the church's works in the 14 districts of Rome during the early Middle Ages, when church administration was effectively the government of Rome and provided all social services. They came to be called "cardinal deacons" by the late eighth century, and they were granted active rights in papal elections and made eligible for election as pope by the Lateran Council of 769.
Cardinals elevated to the diaconal order are mainly officials of the Roman Curia holding various posts in the church administration. Their number and influence have varied through the years. While historically predominantly Italian, the group has become much more internationally diverse in recent years: in 1939 about half were Italian, and by 1994 approximately one third. Their influence in the election of the pope has been considered important, since they are better informed and better connected than cardinals based away from Rome, but their degree of unity has varied.
Under the 1587 decree of Pope Sixtus V, which fixed the maximum size of the College of Cardinals, there were 14 cardinal deacons. Later the number increased. As late as 1939 almost half of the cardinals were members of the Curia. Pius XII reduced this percentage to 24 percent. John XXIII brought it back up to 37 percent but Paul VI brought it down to 27 percent. John Paul II maintained this ratio.
As of 2005, there were over 50 churches recognized as cardinalatial deaconries, though there were only 30 cardinals of the order of deacons. Cardinal deacons have long enjoyed the right to "opt for the order of cardinal priests" () after they have been cardinal deacons for 10 years. They may on such elevation take a vacant "title" (a church allotted to a cardinal priest as the church in Rome with which he is associated) or their diaconal church may be temporarily elevated to a cardinal priest's "title" for that occasion. When elevated to cardinal priests, they take their precedence according to the day they were first made cardinal deacons, thus ranking above cardinal priests who were elevated to the college after them, regardless of order.
When not celebrating Mass but still serving a liturgical function, such as the semiannual papal blessing, some Papal Masses and some events at Ecumenical Councils, cardinal deacons can be recognized by the dalmatics they wear with the simple white mitre (the so-called mitra simplex).
Cardinal protodeacon.
The cardinal protodeacon is the senior cardinal deacon in order of appointment to the College of Cardinals. If he is a cardinal elector and participates in a conclave, he announces a new pope's election and name from the central balcony of St. Peter's Basilica in Vatican City. The protodeacon also bestows the pallium on the new pope and crowns him with the papal tiara, although the crowning has not been celebrated since Pope John Paul I opted for a simpler papal inauguration ceremony in 1978. The current cardinal protodeacon is Dominique Mamberti.
Special types of cardinals.
Camerlengo.
The Cardinal Camerlengo of the Holy Roman Church, assisted by the Vice-Camerlengo and the other prelates of the office known as the Apostolic Camera, has functions that in essence are limited to the period of a vacancy of the papacy (sede vacante). He is to collate information about the financial situation of all administrations dependent on the Holy See and present the results to the College of Cardinals, as they gather for the papal conclave.
Cardinals who are not bishops.
Until 1918, any cleric, even one only in minor orders, could be created a cardinal (see "lay cardinals", below), but enrolled only in the order of cardinal deacons. For example, in the 16th century, Reginald Pole was a cardinal for 18 years before he was ordained a priest. The 1917 Code of Canon Law mandated that all cardinals, even cardinal deacons, had to be priests, and, in 1962, Pope John XXIII set the norm that all cardinals be consecrated as bishops, even if they are only priests at the time of appointment.
As a consequence of these two changes, canon 351 of the 1983 Code of Canon Law requires that a cardinal be at least in the order of priesthood at his appointment, and that those who are not already bishops must receive episcopal consecration. Several cardinals near to or over the age of 80 when appointed have obtained dispensation from the rule of having to be a bishop. These were all appointed cardinal-deacons, but Roberto Tucci and Albert Vanhoye lived long enough to exercise the right of option and be promoted to the rank of cardinal-priest. Since the 1962 rule change, Timothy Radcliffe has been the only cardinal who was not a bishop to take part in a papal election, doing so at the 2025 conclave.
A cardinal who is not a bishop is entitled to wear and use the episcopal vestments and other pontificalia, the episcopal regalia being the mitre, crozier, zucchetto, pectoral cross, and ring. He has both actual and honorary precedence over archbishops and bishops who are not cardinals. However, he cannot perform the sacrament of ordination or other rites reserved solely to bishops.
"Lay cardinals".
At various times, there have been cardinals who had only received first tonsure and minor orders but not yet been ordained as deacons or priests. Though clerics, they were inaccurately called "lay cardinals". Teodolfo Mertel was among the last of these cardinals. When he died in 1899 he was the last surviving cardinal who was not at least ordained a priest. With the revision of the Code of Canon Law promulgated in 1917 by Pope Benedict XV, only those who are already priests or bishops may be appointed cardinals. Since the time of Pope John XXIII, a priest who is appointed a cardinal must be consecrated a bishop, unless they receive a papal dispensation from this requirement.
Cardinals in pectore or secret cardinals.
In addition to the named cardinals, the pope may name secret cardinals or cardinals in pectore (Latin for 'in the breast'). During the Western Schism, many cardinals were created by the contending popes. Beginning with the reign of Pope Martin V, cardinals were created without publishing their names until later, a practice termed creation in pectore. A cardinal named in pectore is known only to the pope. In the modern era, popes have named cardinals in pectore to protect them or their congregations from political persecution or other danger.
If conditions change in respect of that persecution, the pope may make the appointment public. The cardinal in question then ranks in terms of precedence with those who were made cardinals at the time of the in pectore appointment. If a pope dies before revealing the identity of an in pectore cardinal, the person's status as cardinal expires. The last pope known to have named a cardinal in pectore is Pope John Paul II, who named four, including one whose identity was never revealed.
Vesture and privileges.
When in choir dress, a Latin Church cardinal wears scarlet garments—the blood-like red symbolizes a cardinal's willingness to die for his faith. Excluding the rochet—which is always white—the scarlet garments include the cassock, mozzetta, and biretta, over the usual scarlet zucchetto. The biretta of a cardinal is distinctive not merely for its scarlet color, but also for the fact that it does not have a pompom or tassel on the top as do the birettas of other prelates.
Until the 1460s, it was customary for cardinals to wear a violet or blue cape unless granted the privilege of wearing red when acting on papal business. A cardinal's normal-wear cassock is black but has scarlet piping and a scarlet fascia (sash). Occasionally, a cardinal wears a scarlet ferraiolo, which is a cape worn over the shoulders, tied at the neck in a bow by narrow strips of cloth in the front, without any 'trim' or piping on it. It is because of the scarlet color of cardinals' vesture that the bird of the same name has become known as such.
Eastern Catholic cardinals continue to wear the normal dress appropriate to their liturgical tradition, though some may line their cassocks with scarlet and wear scarlet fascias, or in some cases, wear Eastern-style cassocks entirely of scarlet.
In previous times, at the consistory at which the pope named a new cardinal, he would bestow upon him a distinctive wide-brimmed hat called a galero. This custom was discontinued in 1969 and the investiture now takes place with the scarlet biretta. In ecclesiastical heraldry, the scarlet galero is still displayed on the cardinal's coat of arms. Cardinals had the right to display the galero in their cathedral, and when a cardinal died, it would be suspended from the ceiling above his tomb. Some cardinals will still have a galero made, even though it is not officially part of their apparel.
To symbolize their bond with the papacy, the pope gives each newly appointed cardinal a gold ring, which is traditionally kissed by Catholics when greeting a cardinal, as with a bishop's episcopal ring. Before the new uniformity imposed by John Paul II, each cardinal was given a ring, the central piece of which was a gem, usually a sapphire, with the pope's stemma engraved on the inside. There is now no gemstone, and the pope chooses the image on the outside: under Pope Benedict XVI it was a modern depiction of the crucifixion of Jesus, with Mary and John to each side. The ring includes the pope's coat of arms on the inside.
Cardinals have in canon law a "privilege of forum", i.e., exemption from being judged by ecclesiastical tribunals of ordinary rank. Only the pope is competent to judge them in matters subject to ecclesiastical jurisdiction, cases that refer to matters that are spiritual or linked with the spiritual, or with regard to infringement of ecclesiastical laws and whatever contains an element of sin, where culpability must be determined and the appropriate ecclesiastical penalty imposed. The pope either decides the case himself or delegates the decision to a tribunal, usually one of the tribunals or congregations of the Roman Curia. Without such delegation, no ecclesiastical court, even the Roman Rota, is competent to judge a canon law case against a cardinal.
Additionally, canon law gives cardinals the faculty (ability) to hear confessions validly and licitly everywhere; while bishops too have this global confession-hearing faculty, they can be restricted in their use of it in a particular area by the local bishop.
List of canonized or otherwise venerated cardinals.
Many cardinals have been canonized (made saints) or are otherwise venerated ("raised to the altars") by the Catholic Church.
Saints
Blesseds
Declared Blessed by popular acclaim
Venerables
Servants of God
Cantigas de Santa Maria
https://en.wikipedia.org/wiki?curid=6225
The Cantigas de Santa Maria ("Canticles of Holy Mary") are 420 poems with musical notation, written in the medieval Galician-Portuguese language during the reign of Alfonso X of Castile "El Sabio" (1221–1284). Traditionally, they are all attributed to Alfonso, though scholars have since established that the musicians and poets of his court were responsible for most of them, with Alfonso being credited with a few as well.
It is one of the largest collections of monophonic (solo) songs from the Middle Ages and is characterized by the mention of the Virgin Mary in every song, while every tenth song is a hymn.
The "Cantigas" have survived in four manuscript codices: two at El Escorial, one at Madrid's National Library, and one in Florence, Italy. The E codex from El Escorial is illuminated with colored miniatures showing pairs of musicians playing a wide variety of instruments. The "Códice Rico" (T) from El Escorial and the one in the Biblioteca Nazionale Centrale of Florence (F) are richly illuminated with narrative vignettes.
Description.
The Cantigas are written in the early medieval Galician variety of Galician-Portuguese, using Galician spelling; this was because Galician-Portuguese was fashionable as a lyric language in Castile at the time, and because Alfonso X had spent part of his early years in Galicia and so was probably a fluent speaker from childhood.
The Cantigas are a collection of 420 poems, 356 of which are in a narrative format relating Marian miracles; the rest, apart from an introduction and two prologues, are songs of praise or songs for Marian festivities. The Cantigas depict the Virgin Mary in a very humanized way, often having her play a role in earthly episodes.
The authors are unknown, although several studies have suggested that the Galician poet Airas Nunes might have been the author of a large number of the Cantiga poems. King Alfonso X, named as Affonso in the Cantigas, is also believed to be the author of some of them, as he refers to himself in the first person; support for this theory can be found in the prologue of the Cantigas. Many sources also credit Alfonso on account of his influence on other works within the poetic tradition, including his introduction of religious song. Although King Alfonso X's authorship is debatable, his influence is not. While the other major works that came out of Alfonso's workshops, including histories and other prose texts, were in Castilian, the Cantigas are in Galician-Portuguese, and reflect the popularity in the Castilian court of other poetic corpuses such as the "cantigas d'amigo" and "cantigas d'amor".
The metrics are extraordinarily diverse: 280 different formats for the 420 Cantigas. The most common are the "virelai" and the "rondeau". The length of the lines varies between two and 24 syllables. The narrative voice in many of the songs describes an erotic relationship, in the troubadour fashion, with the Divine.
The music is written in notation which is similar to that used for chant, but also contains some information about the length of the notes. Several transcriptions exist. The Cantigas are frequently recorded and performed by early music groups, and quite a few CDs featuring music from the Cantigas are available.
Codices.
The Cantigas are preserved in four manuscripts, conventionally referred to by the sigla "To", "E", "T", and "F".
"E" contains the largest number of songs (406 Cantigas, plus the Introduction and the Prologue); it contains 41 carefully detailed miniatures and many illuminated letters. "To" is the earliest collection and contains 129 songs. Although not illustrated, it is richly decorated with pen flourished initials, and great care has been taken over its construction. The "T" and "F" manuscripts are sister volumes. "T" contains 195 surviving cantigas (8 are missing due to loss of folios) which roughly correspond in order to the first two hundred in "E", each song being illustrated with either 6 or 12 miniatures that depict scenes from the cantiga. "F" follows the same format but has only 111 cantigas, of which 7 have no text, only miniatures. These are basically a subset of those found in the second half of "E", but are presented here in a radically different order. "F" was never finished, and so no music was ever added. Only the empty staves display the intention to add musical notation to the codex at a later date. It is generally thought that the codices were constructed during Alfonso's lifetime, "To" perhaps in the 1270s, and "T"/"F" and "E" in the early 1280s up until the time of his death in 1284.
The music.
The musical forms within the Cantigas, and there are many, are still being studied. There have been many false leads, and there is little beyond pitch value that is very reliable. Mensuration is a particular problem in the Cantigas, and most attempts at determining meaningful rhythmic schemes have tended, with some exceptions, to be unsatisfactory. This remains a lively topic of debate and study. Progress, while on-going, has nevertheless been significant over the course of the last 20 years.
Claudio Monteverdi
https://en.wikipedia.org/wiki?curid=6226
Claudio Giovanni Antonio Monteverdi (baptized 15 May 1567 – 29 November 1643) was an Italian composer, choirmaster and string player. A composer of both secular and sacred music, and a pioneer in the development of opera, he is considered a crucial transitional figure between the Renaissance and Baroque periods of music history.
Born in Cremona, where he undertook his first musical studies and compositions, Monteverdi developed his career first at the court of Mantua () and then until his death in the Republic of Venice where he was "maestro di cappella" at the basilica of San Marco. His surviving letters give insight into the life of a professional musician in Italy of the period, including problems of income, patronage and politics.
Much of Monteverdi's output, including many stage works, has been lost. His surviving music includes nine books of madrigals, large-scale religious works, such as his "Vespro della Beata Vergine" ("Vespers for the Blessed Virgin") of 1610, and three complete operas. His opera "L'Orfeo" (1607) is the earliest of the genre still widely performed; towards the end of his life he wrote works for Venice, including "Il ritorno d'Ulisse in patria" and "L'incoronazione di Poppea".
While he worked extensively in the tradition of earlier Renaissance polyphony, as evidenced in his madrigals, he undertook great developments in form and melody, and began to employ the basso continuo technique, distinctive of the Baroque. No stranger to controversy, he defended his sometimes novel techniques as elements of a "seconda pratica", contrasting with the more orthodox earlier style which he termed the "prima pratica". Largely forgotten during the eighteenth and much of the nineteenth centuries, his works enjoyed a rediscovery around the beginning of the twentieth century. He is now established both as a significant influence in European musical history and as a composer whose works are regularly performed and recorded.
Life.
Cremona: 1567–1591.
Monteverdi was baptised in the church of SS Nazaro e Celso, Cremona, on 15 May 1567. The register records his name as "Claudio Zuan Antonio" the son of "Messer Baldasar Mondeverdo". He was the first child of the apothecary Baldassare Monteverdi and his first wife Maddalena (née Zignani); they had married early the previous year. Claudio's brother Giulio Cesare Monteverdi (b. 1573) was also to become a musician; there were two other brothers and two sisters from Baldassare's marriage to Maddalena and his subsequent marriage in 1576 or 1577. Cremona was close to the border of the Republic of Venice, and not far from the lands controlled by the Duchy of Mantua, in both of which states Monteverdi was later to establish his career.
There is no clear record of Monteverdi's early musical training, or evidence that (as is sometimes claimed) he was a member of the Cathedral choir or studied at Cremona University. Monteverdi's first published work, a set of motets, "Sacrae cantiunculae" ("Sacred Little Songs") for three voices, was issued in Venice in 1582, when he was only fifteen years old. In this, and his other initial publications, he describes himself as the pupil of Marc'Antonio Ingegneri, who was from 1581 (and possibly from 1576) to 1592 the "maestro di cappella" at Cremona Cathedral. The musicologist Tim Carter deduces that Ingegneri "gave him a solid grounding in counterpoint and composition", and that Monteverdi would also have studied playing instruments of the viol family and singing.
Monteverdi's first publications also give evidence of his connections beyond Cremona, even in his early years. His second published work, "Madrigali spirituali" (Spiritual Madrigals, 1583), was printed at Brescia. His next works (his first published secular compositions) were sets of five-part madrigals, according to his biographer Paolo Fabbri: "the inevitable proving ground for any composer of the second half of the sixteenth century ... the secular genre "par excellence"". The first book of madrigals (Venice, 1587) was dedicated to Count Marco Verità of Verona; the second book of madrigals (Venice, 1590) was dedicated to the President of the Senate of Milan, Giacomo Ricardi, for whom he had played the viola da braccio in 1587.
Mantua: 1591–1613.
Court musician.
In the dedication of his second book of madrigals, Monteverdi had described himself as a player of the "vivuola" (which could mean either viola da gamba or viola da braccio). In 1590 or 1591 he entered the service of Duke Vincenzo I Gonzaga of Mantua; he recalled in his dedication to the Duke of his third book of madrigals (Venice, 1592) that "the most noble exercise of the "vivuola" opened to me the fortunate way into your service." In the same dedication he compares his instrumental playing to "flowers" and his compositions as "fruit" which as it matures "can more worthily and more perfectly serve you", indicating his intentions to establish himself as a composer.
Duke Vincenzo was keen to establish his court as a musical centre, and sought to recruit leading musicians. When Monteverdi arrived in Mantua, the "maestro di capella" at the court was the Flemish musician Giaches de Wert. Other notable musicians at the court during this period included the composer and violinist Salomone Rossi, Rossi's sister, the singer Madama Europa, and Francesco Rasi. Monteverdi married the court singer Claudia de Cattaneis in 1599; they were to have three children, two sons (Francesco, b. 1601 and Massimiliano, b. 1604), and a daughter who died soon after birth in 1603. Monteverdi's brother Giulio Cesare joined the court musicians in 1602.
When Wert died in 1596, his post was given to Benedetto Pallavicino, but Monteverdi was clearly highly regarded by Vincenzo and accompanied him on his military campaigns in Hungary (1595) and also on a visit to Flanders in 1599. Here at the town of Spa he is reported by his brother Giulio Cesare as encountering, and bringing back to Italy, the "canto alla francese". (The meaning of this, literally "song in the French style", is debatable, but may refer to the French-influenced poetry of Gabriello Chiabrera, some of which was set by Monteverdi in his "Scherzi musicali", and which departs from the traditional Italian style of lines of 9 or 11 syllables). Monteverdi may possibly have been a member of Vincenzo's entourage at Florence in 1600 for the marriage of Maria de' Medici and Henry IV of France, at which celebrations Jacopo Peri's opera "Euridice" (the earliest surviving opera) was premiered. On the death of Pallavicino in 1601, Monteverdi was confirmed as the new "maestro di capella".
Artusi controversy and "seconda pratica".
At the turn of the 17th century, Monteverdi found himself the target of musical controversy. The influential Bolognese theorist Giovanni Maria Artusi attacked Monteverdi's music (without naming the composer) in his work "L'Artusi, overo Delle imperfettioni della moderna musica (Artusi, or On the imperfections of modern music)" of 1600, followed by a sequel in 1603. Artusi cited extracts from Monteverdi's works not yet published (they later formed parts of his fourth and fifth books of madrigals of 1603 and 1605), condemning their use of harmony and their innovations in use of musical modes, compared to orthodox polyphonic practice of the sixteenth century. Artusi attempted to correspond with Monteverdi on these issues; the composer refused to respond, but found a champion in a pseudonymous supporter, "L'Ottuso Academico" ("The Obtuse Academic"). Eventually Monteverdi replied in the preface to the fifth book of madrigals that his duties at court prevented him from a detailed reply; but in a note to "the studious reader", he claimed that he would shortly publish a response, "Seconda Pratica, overo Perfettione della Moderna Musica (The Second Style, or Perfection of Modern Music)". This work never appeared, but a later publication by Claudio's brother Giulio Cesare made it clear that the "seconda pratica" which Monteverdi defended was not seen by him as a radical change or his own invention, but was an evolution from previous styles ("prima pratica") which was complementary to them.
This debate seems in any case to have raised the composer's profile, leading to reprints of his earlier books of madrigals. Some of his madrigals were published in Copenhagen in 1605 and 1606, and the poet Tommaso Stigliani (1573–1651) published a eulogy of him in his 1605 poem "O sirene de' fiumi". The composer of madrigal comedies and theorist Adriano Banchieri wrote in 1609: "I must not neglect to mention the most noble of composers, Monteverdi ... his expressive qualities are truly deserving of the highest commendation, and we find in them countless examples of matchless declamation ... enhanced by comparable harmonies." The modern music historian Massimo Ossi has placed the Artusi issue in the context of Monteverdi's artistic development: "If the controversy seems to define Monteverdi's historical position, it also seems to have been about stylistic developments that by 1600 Monteverdi had already outgrown".
The non-appearance of Monteverdi's promised explanatory treatise may have been a deliberate ploy, since by 1608, by Monteverdi's reckoning, Artusi had become fully reconciled to modern trends in music, and the "seconda pratica" was by then well established; Monteverdi had no need to revisit the issue. On the other hand, letters to Giovanni Battista Doni of 1632 show that Monteverdi was still preparing a defence of the "seconda pratica", in a treatise entitled "Melodia"; he may still have been working on this at the time of his death ten years later.
Opera, conflict and departure.
In 1606 Vincenzo's heir Francesco commissioned from Monteverdi the opera "L'Orfeo", to a libretto by Alessandro Striggio, for the Carnival season of 1607. It was given two performances in February and March 1607; the singers included, in the title role, Rasi, who had sung in the first performance of "Euridice" witnessed by Vincenzo in 1600. This was followed in 1608 by the opera "L'Arianna" (libretto by Ottavio Rinuccini), intended for the celebration of the marriage of Francesco to Margherita of Savoy. All the music for this opera is lost apart from "Ariadne's Lament", which became extremely popular. To this period also belongs the ballet entertainment "Il ballo delle ingrate".
The strain of the hard work Monteverdi had been putting into these and other compositions was exacerbated by personal tragedies. His wife died in September 1607 and the young singer Caterina Martinelli, intended for the title role of "Arianna", died of smallpox in March 1608. Monteverdi also resented his increasingly poor financial treatment by the Gonzagas. He retired to Cremona in 1608 to convalesce, and wrote a bitter letter to Vincenzo's minister Annibale Chieppio in November of that year seeking (unsuccessfully) "an honourable dismissal". Although the Duke increased Monteverdi's salary and pension, and Monteverdi returned to continue his work at the court, he began to seek patronage elsewhere. After publishing his Vespers in 1610, which were dedicated to Pope Paul V, he visited Rome, ostensibly hoping to place his son Francesco at a seminary, but apparently also seeking alternative employment. In the same year he may also have visited Venice, where a large collection of his church music was being printed, with a similar intention.
Duke Vincenzo died on 18 February 1612. When Francesco succeeded him, court intrigues and cost-cutting led to the dismissal of Monteverdi and his brother Giulio Cesare, who both returned, almost penniless, to Cremona. Despite Francesco's own death from smallpox in December 1612, Monteverdi was unable to return to favour with his successor, his brother Cardinal Ferdinando Gonzaga. In 1613, following the death of Giulio Cesare Martinengo, Monteverdi auditioned for his post as "maestro" at the basilica of San Marco in Venice, for which he submitted music for a Mass. He was appointed in August 1613, and given 50 ducats for his expenses (of which he was robbed, together with his other belongings, by highwaymen at Sanguinetto on his return to Cremona).
Venice: 1613–1643.
Maturity: 1613–1630.
Martinengo had been ill for some time before his death and had left the music of San Marco in a fragile state. The choir had been neglected and the administration overlooked. When Monteverdi arrived to take up his post, his principal responsibility was to recruit, train, discipline and manage the musicians of San Marco (the "capella"), who amounted to about 30 singers and six instrumentalists; the numbers could be increased for major events. Among the recruits to the choir was Francesco Cavalli, who joined in 1616 at the age of 14; he remained connected with San Marco throughout his life, and developed a close association with Monteverdi. Monteverdi also sought to expand the repertory, including not only the traditional "a cappella" repertoire of Roman and Flemish composers, but also examples of the modern style which he favoured, including the use of continuo and other instruments. Apart from this he was of course expected to compose music for all the major feasts of the church. This included a new mass each year for Holy Cross Day and Christmas Eve, cantatas in honour of the Venetian Doge, and numerous other works (many of which are lost). Monteverdi was also free to obtain income by providing music for other Venetian churches and for other patrons, and was frequently commissioned to provide music for state banquets. The Procurators of San Marco, to whom Monteverdi was directly responsible, showed their satisfaction with his work in 1616 by raising his annual salary from 300 ducats to 400.
The relative freedom which the Republic of Venice afforded him, compared to the problems of court politics in Mantua, is reflected in Monteverdi's letters to Striggio, particularly his letter of 13 March 1620, when he rejects an invitation to return to Mantua, extolling his present position and finances in Venice, and referring to the pension which Mantua still owes him. Nonetheless, remaining a Mantuan citizen, he accepted commissions from the new Duke Ferdinando, who had formally renounced his position as Cardinal in 1616 to take on the duties of state. These included the "balli" "Tirsi e Clori" (1616) and "Apollo" (1620), an opera "Andromeda" (1620) and an "intermedio", "Le nozze di Tetide", for the marriage of Ferdinando with Caterina de' Medici (1617). Most of these compositions were extensively delayed in creation – partly, as shown by surviving correspondence, through the composer's unwillingness to prioritise them, and partly because of constant changes in the court's requirements. They are now lost, apart from "Tirsi e Clori", which was included in the seventh book of madrigals (published 1619) and dedicated to the Duchess Caterina, for which the composer received a pearl necklace from the Duchess. A subsequent major commission, the opera "La finta pazza Licori", to a libretto by Giulio Strozzi, was completed for Ferdinando's successor Vincenzo II, who succeeded to the dukedom in 1626. Because of the latter's illness (he died in 1627), it was never performed, and it is now also lost.
Monteverdi also received commissions from other Italian states and from their communities in Venice. These included, for the Milanese community in 1620, music for the Feast of St. Charles Borromeo, and for the Florentine community a Requiem Mass for Cosimo II de' Medici (1621). Monteverdi acted on behalf of Paolo Giordano II, Duke of Bracciano, to arrange publication of works by the Cremona musician Francesco Petratti. Among Monteverdi's private Venetian patrons was the nobleman Girolamo Mocenigo, at whose home was premiered in 1624 the dramatic entertainment "Il combattimento di Tancredi e Clorinda" based on an episode from Torquato Tasso's "La Gerusalemme liberata". In 1627 Monteverdi received a major commission from Odoardo Farnese, Duke of Parma, for a series of works, and gained leave from the Procurators to spend time there during 1627 and 1628.
Monteverdi's musical direction received the attention of foreign visitors. The Dutch diplomat and musician Constantijn Huygens, attending a Vespers service at the church of SS. Giovanni e Paolo, wrote that he "heard the most perfect music I had ever heard in my life. It was directed by the most famous Claudio Monteverdi ... who was also the composer and was accompanied by four theorbos, two cornettos, two bassoons, one "basso de viola" of huge size, organs and other instruments ...". Monteverdi wrote a mass, and provided other musical entertainment, for the visit to Venice in 1625 of the Crown Prince Władysław of Poland, who may have sought to revive attempts made a few years previously to lure Monteverdi to Warsaw. He also provided chamber music for Wolfgang Wilhelm, Count Palatine of Neuburg, when the latter was paying an incognito visit to Venice in July 1625.
Correspondence of Monteverdi in 1625 and 1626 with the Mantuan courtier Ercole Marigliani reveals an interest in alchemy, which apparently Monteverdi had taken up as a hobby. He discusses experiments to transform lead into gold, the problems of obtaining mercury, and mentions commissioning special vessels for his experiments from the glassworks at Murano.
Despite his generally satisfactory situation in Venice, Monteverdi experienced personal problems from time to time. He was on one occasion – probably because of his wide network of contacts – the subject of an anonymous denunciation to the Venetian authorities alleging that he supported the Habsburgs. He was also subject to anxieties about his children. His son Francesco, while a student of law at Padua in 1619, was spending in Monteverdi's opinion too much time with music, and he, therefore, moved him to the University of Bologna. This change did not have the desired result, and it seems that Monteverdi resigned himself to Francesco having a musical career – he joined the choir of San Marco in 1623. His other son Massimiliano, who graduated in medicine, was arrested by the Inquisition in Mantua in 1627 for reading forbidden literature. Monteverdi was obliged to sell the necklace he had received from Duchess Caterina to pay for his son's (eventually successful) defence. Monteverdi wrote at the time to Striggio seeking his help, and fearing that Massimiliano might be subject to torture; it seems that Striggio's intervention was helpful. Money worries at this time also led Monteverdi to visit Cremona to secure for himself a church canonry.
Pause and priesthood: 1630–1637.
A series of disturbing events troubled Monteverdi's world in the period around 1630. Mantua was invaded by Habsburg armies in 1630, who besieged the plague-stricken town, and after its fall in July looted its treasures, and dispersed the artistic community. The plague was carried to Mantua's ally Venice by an embassy led by Monteverdi's confidante Striggio, and over a period of 16 months led to over 45,000 deaths, leaving Venice's population in 1633 at just above 100,000, the lowest level for about 150 years. Among the plague victims was Monteverdi's assistant at San Marco, and a notable composer in his own right, Alessandro Grandi. The plague and the after-effects of war had an inevitable deleterious effect on the economy and artistic life of Venice. Monteverdi's younger brother Giulio Cesare also died at this time, probably from the plague.
By this time Monteverdi was in his sixties, and his rate of composition seems to have slowed down. He had written a setting of Strozzi's "Proserpina rapita (The Abduction of Proserpina)", now lost except for one vocal trio, for a Mocenigo wedding in 1630, and produced a Mass for deliverance from the plague for San Marco which was performed in November 1631. His set of "Scherzi musicali" was published in Venice in 1632. In 1631, Monteverdi was admitted to the tonsure, and was ordained deacon, and later priest, in 1632. Although these ceremonies took place in Venice, he was nominated as a member of the clergy of the Diocese of Cremona; this may imply that he intended to retire there.
Late flowering: 1637–1643.
The opening of the opera house of San Cassiano in 1637, the first public opera house in Europe, stimulated the city's musical life and coincided with a new burst of the composer's activity. The year 1638 saw the publication of Monteverdi's eighth book of madrigals and a revision of the "Ballo delle ingrate". The eighth book contains a "ballo", "Volgendo il ciel", which may have been composed for the Holy Roman Emperor, Ferdinand III, to whom the book is dedicated. The years 1640–1641 saw the publication of the extensive collection of church music, "Selva morale e spirituale". Among other commissions, Monteverdi wrote music in 1637 and 1638 for Strozzi's "Accademia degli Unisoni" in Venice, and in 1641 a ballet, "La vittoria d'Amore", for the court of Piacenza.
Monteverdi was still not entirely free from his responsibilities for the musicians at San Marco. He wrote to complain about one of his singers to the Procurators, on 9 June 1637: "I, Claudio Monteverdi ... come humbly ... to set forth to you how Domenicato Aldegati ... a bass, yesterday morning ... at the time of the greatest concourse of people ... spoke these exact words ... 'The Director of Music comes from a brood of cut-throat bastards, a thieving, fucking, he-goat ... and I shit on him and whoever protects him ...'"
Monteverdi's contribution to opera at this period is notable. He revised his earlier opera "L'Arianna" in 1640 and wrote three new works for the commercial stage, "Il ritorno d'Ulisse in patria" ("The Return of Ulysses to his Homeland", 1640, first performed in Bologna with Venetian singers), "Le nozze d'Enea e Lavinia" ("The Marriage of Aeneas and Lavinia", 1641, music now lost), and "L'incoronazione di Poppea" ("The Coronation of Poppea", 1643). The introduction to the printed scenario of "Le nozze d'Enea", by an unknown author, acknowledges that Monteverdi is to be credited for the rebirth of theatrical music and that "he will be sighed for in later ages, for his compositions will surely outlive the ravages of time."
In his last surviving letter (20 August 1643), Monteverdi, already ill, was still hoping for the settlement of the long-disputed pension from Mantua, and asked the Doge of Venice to intervene on his behalf. He died in Venice on 29 November 1643, after paying a brief visit to Cremona, and is buried in the Church of the Frari. He was survived by his sons: Massimiliano died in 1661, Francesco after 1677.
Music.
Background: Renaissance to Baroque.
There is a consensus among music historians that a period extending from the mid-15th century to around 1625, characterised in Lewis Lockwood's phrase by "substantial unity of outlook and language", should be identified as the period of "Renaissance music". Musical literature has also defined the succeeding period (covering music from approximately 1580 to 1750) as the era of "Baroque music". It is in the late-16th to early-17th-century overlap of these periods that much of Monteverdi's creativity flourished; he stands as a transitional figure between the Renaissance and the Baroque.
In the Renaissance era, music had developed as a formal discipline, a "pure science of relationships" in the words of Lockwood. In the Baroque era it became a form of aesthetic expression, increasingly used to adorn religious, social and festive celebrations in which, in accordance with Plato's ideal, the music was subordinated to the text. Solo singing with instrumental accompaniment, or monody, acquired greater significance towards the end of the 16th century, replacing polyphony as the principal means of dramatic music expression. This was the changing world in which Monteverdi was active. Percy Scholes in his "Oxford Companion to Music" describes the "new music" thus: "[Composers] discarded the choral polyphony of the madrigal style as barbaric, and set dialogue or soliloquy for single voices, imitating more or less the inflexions of speech and accompanying the voice by playing mere supporting chords. Short choruses were interspersed, but they too were homophonic rather than polyphonic."
Novice years: Madrigal books 1 and 2.
Marc'Antonio Ingegneri, Monteverdi's first tutor, was a master of the "musica reservata" vocal style, which involved the use of chromatic progressions and word-painting; Monteverdi's early compositions were grounded in this style. Ingegneri was a traditional Renaissance composer, "something of an anachronism", according to Arnold, but Monteverdi also studied the work of more "modern" composers such as Luca Marenzio, Luzzasco Luzzaschi, and a little later, Giaches de Wert, from whom he would learn the art of expressing passion. He was a precocious and productive student, as indicated by his youthful publications of 1582–83. Mark Ringer writes that "these teenaged efforts reveal palpable ambition matched with a convincing mastery of contemporary style", but at this stage they display their creator's competence rather than any striking originality. Geoffrey Chew classifies them as "not in the most modern vein for the period", acceptable but out-of-date. Chew rates the "Canzonette" collection of 1584 much more highly than the earlier juvenilia: "These brief three-voice pieces draw on the airy, modern style of the villanellas of Marenzio, [drawing on] a substantial vocabulary of text-related madrigalisms".
The canzonetta form was much used by composers of the day as a technical exercise, and is a prominent element in Monteverdi's first book of madrigals published in 1587. In this book, the playful, pastoral settings again reflect the style of Marenzio, while Luzzaschi's influence is evident in Monteverdi's use of dissonance. The second book (1590) begins with a setting modelled on Marenzio of a modern verse, Torquato Tasso's "Non si levav' ancor", and concludes with a text from 50 years earlier: Pietro Bembo's "Cantai un tempo". Monteverdi set the latter to music in an archaic style reminiscent of the long-dead Cipriano de Rore. Between them is "Ecco mormorar l'onde", strongly influenced by de Wert and hailed by Chew as the great masterpiece of the second book.
A thread common throughout these early works is Monteverdi's use of the technique of "imitatio", a general practice among composers of the period whereby material from earlier or contemporary composers was used as models for their own work. Monteverdi continued to use this procedure well beyond his apprentice years, a factor that in some critics' eyes has compromised his reputation for originality.
Madrigals 1590–1605: books 3, 4, 5.
Monteverdi's first fifteen years of service in Mantua are bracketed by his publications of the third book of madrigals in 1592 and the fourth and fifth books in 1603 and 1605. Between 1592 and 1603 he made minor contributions to other anthologies. How much he composed in this period is a matter of conjecture; his many duties in the Mantuan court may have limited his opportunities, but several of the madrigals that he published in the fourth and fifth books were written and performed during the 1590s, some figuring prominently in the Artusi controversy.
The third book shows strongly the increased influence of Wert, by that time Monteverdi's direct superior as "maestro de capella" at Mantua. Two poets dominate the collection: Tasso, whose lyrical poetry had figured prominently in the second book but is here represented through the more epic, heroic verses from "Gerusalemme liberata", and Giovanni Battista Guarini, whose verses had appeared sporadically in Monteverdi's earlier publications, but form around half of the contents of the third book. Wert's influence is reflected in Monteverdi's forthrightly modern approach, and his expressive and chromatic settings of Tasso's verses. Of the Guarini settings, Chew writes: "The epigrammatic style ... closely matches a poetic and musical ideal of the period ... [and] often depends on strong, final cadential progressions, with or without the intensification provided by chains of suspended dissonances". Chew cites the setting of "Stracciami pur il core" as "a prime example of Monteverdi's irregular dissonance practice". Tasso and Guarini were both regular visitors to the Mantuan court; Monteverdi's association with them and his absorption of their ideas may have helped lay the foundations of his own approach to the musical dramas that he would create a decade later.
As the 1590s progressed, Monteverdi moved closer towards the form that he would identify in due course as the "seconda pratica". Claude V. Palisca quotes the madrigal "Ohimè, se tanto amate", published in the fourth book but written before 1600 – it is among the works attacked by Artusi – as a typical example of the composer's developing powers of invention. In this madrigal Monteverdi again departs from the established practice in the use of dissonance, by means of a vocal ornament Palisca describes as "échappé". Monteverdi's daring use of this device is, says Palisca, "like a forbidden pleasure". In this and in other settings the poet's images were supreme, even at the expense of musical consistency.
The fourth book includes madrigals to which Artusi objected on the grounds of their "modernism". However, Ossi describes it as "an anthology of disparate works firmly rooted in the 16th century", closer in nature to the third book than to the fifth. Besides Tasso and Guarini, Monteverdi set to music verses by Rinuccini, Maurizio Moro ("Sì ch'io vorrei morire") and Ridolfo Arlotti ("Luci serene e chiare"). There is evidence of the composer's familiarity with the works of Carlo Gesualdo, and with composers of the school of Ferrara such as Luzzaschi; the book was dedicated to a Ferrarese musical society, the "Accademici Intrepidi".
The fifth book looks more to the future; for example, Monteverdi employs the "concertato" style with basso continuo (a device that was to become a typical feature in the emergent Baroque era), and includes a "sinfonia" (instrumental interlude) in the final piece. He presents his music through complex counterpoint and daring harmonies, although at times combining the expressive possibilities of the new music with traditional polyphony.
Aquilino Coppini drew much of the music for his sacred contrafacta of 1608 from Monteverdi's 3rd, 4th and 5th books of madrigals. In writing to a friend in 1609 Coppini commented that Monteverdi's pieces "require, during their performance, more flexible rests and bars that are not strictly regular, now pressing forward or abandoning themselves to slowing down [...] In them there is a truly wondrous capacity for moving the affections".
Opera and sacred music: 1607–1612.
In Monteverdi's final five years' service in Mantua he completed the operas "L'Orfeo" (1607) and "L'Arianna" (1608), and wrote quantities of sacred music, including the "Messa in illo tempore" (1610) and also the collection known as "Vespro della Beata Vergine", which is often referred to as "Monteverdi's "Vespers"" (1610). He also published "Scherzi musicali a tre voci" (1607), settings of verses composed since 1599 and dedicated to the Gonzaga heir, Francesco. The vocal trio in the "Scherzi" comprises two sopranos and a bass, accompanied by simple instrumental ritornellos. According to Bowers the music "reflected the modesty of the prince's resources; it was, nevertheless, the earliest publication to associate voices and instruments in this particular way".
"L'Orfeo".
The opera opens with a brief trumpet toccata. The prologue of La musica (a figure representing music) is introduced with a ritornello by the strings, repeated often to represent the "power of music" – one of the earliest examples of an operatic leitmotif. Act 1 presents a pastoral idyll, the buoyant mood of which continues into Act 2. The confusion and grief which follow the news of Euridice's death are musically reflected by harsh dissonances and the juxtaposition of keys. The music remains in this vein until the act ends with the consoling sounds of the ritornello.
Act 3 is dominated by Orfeo's aria "Possente spirto e formidabil nume" by which he attempts to persuade Caronte to allow him to enter Hades. Monteverdi's vocal embellishments and virtuoso accompaniment provide what Tim Carter has described as "one of the most compelling visual and aural representations" in early opera. In Act 4 the warmth of Proserpina's singing on behalf of Orfeo is retained until Orfeo fatally "looks back". The brief final act, which sees Orfeo's rescue and metamorphosis, is framed by the final appearance of the ritornello and by a lively moresca that brings the audience back to their everyday world.
Throughout the opera Monteverdi makes innovative use of polyphony, extending the rules beyond the conventions which composers normally observed in fidelity to Palestrina. He combines elements of the traditional 16th-century madrigal with the new monodic style where the text dominates the music and sinfonias and instrumental ritornellos illustrate the action.
"L'Arianna".
The music for this opera is lost except for the "Lamento d'Arianna", which was published in the sixth book in 1614 as a five-voice madrigal; a separate monodic version was published in 1623. In its operatic context the lament depicts Arianna's various emotional reactions to her abandonment: sorrow, anger, fear, self-pity, desolation and a sense of futility. Throughout, indignation and anger are punctuated by tenderness, until a descending line brings the piece to a quiet conclusion.
The musicologist Suzanne Cusick writes that Monteverdi "creat[ed] the lament as a recognizable genre of vocal chamber music and as a standard scene in opera ... that would become crucial, almost genre-defining, to the full-scale public operas of 17th-century Venice". Cusick observes how Monteverdi is able to match in music the "rhetorical and syntactical gestures" in the text of Ottavio Rinuccini. The opening repeated words "Lasciatemi morire" (Let me die) are accompanied by a dominant seventh chord which Ringer describes as "an unforgettable chromatic stab of pain". Ringer suggests that the lament defines Monteverdi's innovative creativity in a manner similar to that in which the Prelude and the Liebestod in "Tristan und Isolde" announced Wagner's discovery of new expressive frontiers.
Rinuccini's full libretto, which has survived, was set in modern times by Alexander Goehr ("Arianna", 1995), including a version of Monteverdi's "Lament".
Vespers.
The "Vespro della Beata Vergine", Monteverdi's first published sacred music since the "Madrigali spirituali" of 1583, consists of 14 components: an introductory versicle and response, five psalms interspersed with five "sacred concertos" (Monteverdi's term), a hymn, and two Magnificat settings. Collectively these pieces fulfil the requirements for a Vespers service on any feast day of the Virgin. Monteverdi employs many musical styles; the more traditional features, such as cantus firmus, falsobordone and Venetian canzone, are mixed with the latest madrigal style, including echo effects and chains of dissonances. Some of the musical features used are reminiscent of "L'Orfeo", written slightly earlier for similar instrumental and vocal forces.
In this work the "sacred concertos" fulfil the role of the antiphons which divide the psalms in regular Vespers services. Their non-liturgical character has led writers to question whether they should be within the service, or indeed whether this was Monteverdi's intention. In some versions of Monteverdi's "Vespers" (for example, those of Denis Stevens) the concertos are replaced with antiphons associated with the Virgin, although John Whenham in his analysis of the work argues that the collection as a whole should be regarded as a single liturgical and artistic entity.
All the psalms, and the Magnificat, are based on melodically limited and repetitious Gregorian chant psalm tones, around which Monteverdi builds a range of innovative textures. This concertato style challenges the traditional cantus firmus, and is most evident in the "Sonata sopra Sancta Maria", written for eight string and wind instruments plus basso continuo, and a single soprano voice. Monteverdi uses modern rhythms, frequent metre changes and constantly varying textures; yet, according to John Eliot Gardiner, "for all the virtuosity of its instrumental writing and the evident care which has gone into the combinations of timbre", Monteverdi's chief concern was resolving the proper combination of words and music.
The actual musical ingredients of the Vespers were not novel to Mantua – concertato had been used by Lodovico Grossi da Viadana, a former choirmaster at the cathedral of Mantua, while the "Sonata sopra" had been anticipated by Archangelo Crotti in his "Sancta Maria" published in 1608. It is, writes Denis Arnold, Monteverdi's mixture of the various elements that makes the music unique. Arnold adds that the Vespers achieved fame and popularity only after their 20th-century rediscovery; they were not particularly regarded in Monteverdi's time.
Madrigals 1614–1638: books 6, 7 and 8.
Sixth book.
During his years in Venice Monteverdi published his sixth (1614), seventh (1619) and eighth (1638) books of madrigals. The sixth book consists of works written before the composer's departure from Mantua. Hans Redlich sees it as a transitional work, containing Monteverdi's last madrigal compositions in the manner of the "prima pratica", together with music which is typical of the new style of expression which Monteverdi had displayed in the dramatic works of 1607–08. The central theme of the collection is loss; the best-known work is the five-voice version of the "Lamento d'Arianna", which, says Massimo Ossi, gives "an object lesson in the close relationship between monodic recitative and counterpoint". The book contains Monteverdi's first settings of verses by Giambattista Marino, and two settings of Petrarch which Ossi considers the most extraordinary pieces in the volume, providing some "stunning musical moments".
Seventh book.
While Monteverdi had looked backwards in the sixth book, he moved forward in the seventh book from the traditional concept of the madrigal, and from monody, in favour of chamber duets. There are exceptions, such as the two solo "lettere amorose" (love letters) "Se i languidi miei sguardi" and "Se pur destina e vole", written to be performed "genere rapresentativo" – acted as well as sung. Of the duets which are the main features of the volume, Chew highlights "Ohimé, dov'è il mio ben, dov'è il mio core", a romanesca in which two high voices express dissonances above a repetitive bass pattern. The book also contains large-scale ensemble works, and the ballet "Tirsi e Clori". This was the height of Monteverdi's "Marino period"; six of the pieces in the book are settings of the poet's verses. As Carter puts it, Monteverdi "embraced Marino's madrigalian kisses and love-bites with ... the enthusiasm typical of the period". Some commentators have opined that the composer should have had better poetic taste.
Eighth book.
The eighth book, subtitled "Madrigali guerrieri, et amorosi ..." ("Madrigals of war and love") is structured in two symmetrical halves, one for "war" and one for "love". Each half begins with a six-voice setting, followed by an equally large-scale Petrarch setting, then a series of duets mainly for tenor voices, and concludes with a theatrical number and a final ballet. The "war" half contains several items written as tributes to the emperor Ferdinand III, who had succeeded to the Habsburg throne in 1637. Many of Monteverdi's familiar poets – Strozzi, Rinuccini, Tasso, Marino, Guarini – are represented in the settings.
It is difficult to gauge when many of the pieces were composed, although the ballet "Mascherata dell' ingrate" that ends the book dates back to 1608 and the celebration of the Gonzaga-Savoy marriage. The "Combattimento di Tancredi e Clorinda", centrepiece of the "war" settings, had been written and performed in Venice in 1624; on its publication in the eighth book, Monteverdi explicitly linked it to his concept of "concitato genere" (otherwise "stile concitato" – "aroused style") that would "fittingly imitate the utterance and the accents of a brave man who is engaged in warfare", and implied that since he had originated this style, others had begun to copy it. The work employed for the first time instructions for the use of pizzicato string chords, and also evocations of fanfares and other sounds of combat.
The critic Andrew Clements describes the eighth book as "a statement of artistic principles and compositional authority", in which Monteverdi "shaped and expanded the madrigal form to accommodate what he wanted to do ... the pieces collected in Book Eight make up a treasury of what music in the first half of the 17th century could possibly express."
Other Venetian music: 1614–1638.
During this period of his Venetian residency, Monteverdi composed quantities of sacred music. Numerous motets and other short works were included in anthologies by local publishers such as Giulio Cesare Bianchi (a former student of Monteverdi) and Lorenzo Calvi, and others were published elsewhere in Italy and Austria. The range of styles in the motets is broad, from simple strophic arias with string accompaniment to full-scale declamations with an alleluia finale.
Monteverdi retained emotional and political attachments to the Mantuan court and wrote for it, or undertook to write, large amounts of stage music including at least four operas. The ballet "Tirsi e Clori" survives through its inclusion in the seventh book, but the rest of the Mantuan dramatic music is lost. Many of the missing manuscripts may have disappeared in the wars that overcame Mantua in 1630. The most significant aspect of their loss, according to Carter, is the extent to which they might have provided musical links between Monteverdi's early Mantuan operas and those he wrote in Venice after 1638: "Without these links ... it is hard to produce a coherent account of his development as a composer for the stage". Likewise, Janet Beat regrets that the 30-year gap hampers the study of how opera orchestration developed during those critical early years.
Apart from the madrigal books, Monteverdi's only published collection during this period was the volume of "Scherzi musicali" in 1632. For unknown reasons, the composer's name does not appear on the inscription, the dedication being signed by the Venetian printer Bartolomeo Magni; Carter surmises that the recently ordained Monteverdi may have wished to keep his distance from this secular collection. It mixes strophic continuo songs for solo voice with more complex works which employ continuous variation over repeated bass patterns. Chew selects the chaconne for two tenors, "Zefiro torna e di soavi accenti", as the outstanding item in the collection: "[T]he greater part of this piece consists of repetitions of a bass pattern which ensures tonal unity of a simple kind, owing to its being framed as a simple cadence in a G major tonal type: over these repetitions, inventive variations unfold in virtuoso passage-work".
Late operas and final works.
"Main articles": "Il ritorno d'Ulisse in patria"; "L'incoronazione di Poppea"; "Selva morale e spirituale"
The last years of Monteverdi's life were much occupied with opera for the Venetian stage. Richard Taruskin, in his "Oxford History of Western Music", gave his chapter on this topic the title "Opera from Monteverdi to Monteverdi." This wording, originally proposed humorously by the Italian music historian Nino Pirrotta, is interpreted seriously by Taruskin as indicating that Monteverdi is significantly responsible for the transformation of the opera genre from a private entertainment of the nobility (as with "Orfeo" in 1607), to what became a major commercial genre, as exemplified by his opera "L'incoronazione di Poppea" (1643). His two surviving operatic works of this period, "Il ritorno d'Ulisse in patria" and "L'incoronazione" are held by Arnold to be the first "modern" operas; "Il ritorno" is the first Venetian opera to depart from what Ellen Rosand terms "the mythological pastoral". However, David Johnson in the "North American Review" warns audiences not to expect immediate affinity with Mozart, Verdi or Puccini: "You have to submit yourself to a much slower pace, to a much more chaste conception of melody, to a vocal style that is at first merely like dry declamation and only on repeated hearings begins to assume an extraordinary eloquence."
"Il ritorno", says Carter, is clearly influenced by Monteverdi's earlier works. Penelope's lament in Act I is close in character to the lament from "L'Arianna", while the martial episodes recall "Il combattimento". "Stile concitato" is prominent in the fight scenes and in the slaying of Penelope's suitors. In "L'incoronazione", Monteverdi represents moods and situations by specific musical devices: triple metre stands for the language of love; arpeggios demonstrate conflict; "stile concitato" represents rage. There is continuing debate about how much of the extant "L'incoronazione" music is Monteverdi's original, and how much is the work of others (there are, for instance, traces of music by Francesco Cavalli).
The "Selva morale e spirituale" of 1641, and the posthumous "Messa et salmi" published in 1650 (which was edited by Cavalli), are selections of the sacred music that Monteverdi wrote for San Marco during his 30-year tenure – much else was likely written but not published. The "Selva morale" volume opens with a series of madrigal settings on moral texts, dwelling on themes such as "the transitory nature of love, earthly rank and achievement, even existence itself". They are followed by a Mass in conservative style ("stile antico"), the high point of which is an extended seven-voice "Gloria". Scholars believe that this might have been written to celebrate the end of the 1631 plague. The rest of the volume is made up of numerous psalm settings, two Magnificats and three "Salve Reginas". The "Messa et salmi" volume includes a "stile antico" Mass for four voices, a polyphonic setting of the psalm "Laetatus Sum", and a version of the Litany of Lareto that Monteverdi had originally published in 1620.
The posthumous ninth book of madrigals was published in 1651, a miscellany dating back to the early 1630s, some items being repeats of previously published pieces, such as the popular duet "O sia tranquillo il mare" from 1638. The book includes a trio for three sopranos, "Come dolce oggi l'auretta", which is the only surviving music from the 1630 lost opera "Proserpina rapita".
Historical perspective.
In his lifetime Monteverdi enjoyed considerable status among musicians and the public. This is evidenced by the scale of his funeral rites: "[W]ith truly royal pomp a catafalque was erected in the Chiesa de Padrini Minori de Frari, decorated all in mourning, but surrounded with so many candles that the church resembled a night sky luminous with stars". This glorification was transitory; Carter writes that in Monteverdi's day, music rarely survived beyond the circumstances of its initial performance and was quickly forgotten along with its creator. In this regard Monteverdi fared better than most. His operatic works were revived in several cities in the decade following his death; according to Severo Bonini, writing in 1651, every musical household in Italy possessed a copy of the "Lamento d'Arianna".
The German composer Heinrich Schütz, who had studied in Venice under Giovanni Gabrieli shortly before Monteverdi's arrival there, possessed a copy of "Il combattimento" and himself took up elements of the "stile concitato". On his second visit to Venice in 1628–1629, Arnold believes, Schütz absorbed the concepts of "basso continuo" and expressiveness of word-setting, but he opines that Schütz was more directly influenced by the style of the younger generation of Venetian composers, including Grandi and Giovanni Rovetta (the eventual successor to Monteverdi at San Marco). Schütz published a first book of "Symphoniae sacrae", settings of biblical texts in the style of "seconda pratica", in Venice in 1629. "Es steh Gott auf", from his "Symphoniae sacrae II", published in Dresden in 1647, contains specific quotations from Monteverdi.
After the 1650s, Monteverdi's name quickly disappears from contemporary accounts, his music generally forgotten except for the "Lamento", the prototype of a genre that would endure well into the 18th century.
Interest in Monteverdi revived in the late 18th and early 19th centuries among music scholars in Germany and Italy, although he was still regarded as essentially a historical curiosity. Wider interest in the music itself began in 1881, when Robert Eitner published a shortened version of the "Orfeo" score. Around this time Kurt Vogel scored the madrigals from the original manuscripts, but more critical interest was shown in the operas, following the discovery of the "L'incoronazione" manuscript in 1888 and that of "Il ritorno" in 1904. Largely through the efforts of Vincent d'Indy, all three operas were staged in one form or another, during the first quarter of the 20th century: "L'Orfeo" in May 1911, "L'incoronazione" in February 1913 and "Il ritorno" in May 1925.
The Italian nationalist poet Gabriele D'Annunzio lauded Monteverdi and in his novel "Il fuoco" (1900) wrote of ""il divino Claudio" ... what a heroic soul, purely Italian in its essence!" His vision of Monteverdi as the true founder of Italian musical lyricism was adopted by musicians who worked with the regime of Benito Mussolini (1922–1945), including Gian Francesco Malipiero, Luigi Dallapiccola, and , who contrasted Monteverdi with the decadence of the music of Richard Strauss, Claude Debussy and Igor Stravinsky.
In the years after the Second World War the operas began to be performed in the major opera houses, and eventually were established in the general repertory. The resuscitation of Monteverdi's sacred music took longer; he did not benefit from the Catholic Church's 19th-century revival of Renaissance music in the way that Palestrina did, perhaps, as Carter suggests, because Monteverdi was viewed chiefly as a secular composer. It was not until 1932 that the 1610 "Vespers" were published in a modern edition, followed by Redlich's revision two years later. Modern editions of the "Selva morale" and "Missa e Salmi" volumes were published respectively in 1940 and 1942.
The revival of public interest in Monteverdi's music gathered pace in the second half of the 20th century, reaching full spate in the general early-music revival of the 1970s, during which time the emphasis turned increasingly towards "authentic" performance using historical instruments. The magazine "Gramophone" notes over 30 recordings of the "Vespers" between 1976 and 2011, and 27 of "Il combattimento di Tancredi e Clorinda" between 1971 and 2013. Monteverdi's surviving operas are today regularly performed; the website Operabase notes 555 performances of the operas in 149 productions worldwide in the seasons 2011–2016, ranking Monteverdi 30th among all composers and 8th among Italian opera composers. In 1985, Manfred H. Stattkus published an index to Monteverdi's works, the Stattkus-Verzeichnis (revised in 2006), giving each composition an "SV" number, to be used for cataloguing and references.
Monteverdi is lauded by modern critics as "the most significant composer in late Renaissance and early Baroque Italy"; "one of the principal composers in the history of Western music"; and, routinely, as the first great opera composer. These assessments reflect a contemporary perspective, since his music was largely unknown to the composers who followed him during an extensive period, spanning more than two centuries after his death. It is, as Redlich and others have pointed out, the composers of the 20th and 21st century who have rediscovered Monteverdi and sought to make his music a basis for their own. Possibly, as Chew suggests, they are attracted by Monteverdi's reputation as "a Modern, a breaker of rules, against the Ancients, those who deferred to ancient authority" – although the composer was, essentially, a pragmatist, "showing what can only be described as an opportunistic and eclectic willingness to use whatever lay to hand for the purpose". In a letter dated 16 October 1633, Monteverdi appears to endorse the view of himself as a "modern": "I would rather be moderately praised for the new style than greatly praised for the ordinary". However, Chew, in his final summation, sees the composer historically as facing both ways, willing to use modern techniques but while at the same time protective of his status as a competent composer in the "stile antico". Thus, says Chew, "his achievement was both retrospective and progressive". Monteverdi represents the late Renaissance era while simultaneously summing up much of the early Baroque. "And in one respect in particular, his achievement was enduring: the effective projection of human emotions in music, in a way adequate for theatre as well as for chamber music."
|
6229
|
7903804
|
https://en.wikipedia.org/wiki?curid=6229
|
Colossus computer
|
Colossus was a set of computers developed by British codebreakers in the years 1943–1945 to help in the cryptanalysis of the Lorenz cipher. Colossus used thermionic valves (vacuum tubes) to perform Boolean and counting operations. Colossus is thus regarded as the world's first programmable, electronic, digital computer, although it was programmed by switches and plugs and not by a stored program.
Colossus was designed by General Post Office (GPO) research telephone engineer Tommy Flowers based on plans developed by mathematician Max Newman at the Government Code and Cypher School at Bletchley Park.
Alan Turing's use of probability in cryptanalysis (see Banburismus) contributed to its design. It has sometimes been erroneously stated that Turing designed Colossus to aid the cryptanalysis of the Enigma. (Turing's machine that helped decode Enigma was the electromechanical Bombe, not Colossus.)
The prototype, Colossus Mark 1, was shown to be working in December 1943 and was in use at Bletchley Park by early 1944. An improved Colossus Mark 2 that used shift registers to run five times faster first worked on 1 June 1944, just in time for the Normandy landings on D-Day. Ten Colossi were in use by the end of the war and an eleventh was being commissioned. Bletchley Park's use of these machines allowed the Allies to obtain a vast amount of high-level military intelligence from intercepted radiotelegraphy messages between the German High Command ("OKW") and their army commands throughout occupied Europe.
The existence of the Colossus machines was kept secret until the mid-1970s. All but two machines were dismantled into such small parts that their use could not be inferred. The two retained machines were eventually dismantled in the 1960s. In January 2024, new photos were released by GCHQ that showed a re-engineered Colossus in a very different environment from the Bletchley Park buildings, presumably at GCHQ Cheltenham. A functioning reconstruction of a Mark 2 Colossus was completed in 2008 by Tony Sale and a team of volunteers; it is on display in The National Museum of Computing at Bletchley Park.
Purpose and origins.
The Colossus computers were used to help decipher intercepted radio teleprinter messages that had been encrypted using an unknown device. Intelligence information revealed that the Germans called the wireless teleprinter transmission systems "Sägefisch" (sawfish). This led the British to call encrypted German teleprinter traffic "Fish", and the unknown machine and its intercepted messages "Tunny" (tunafish).
Before the Germans increased the security of their operating procedures, British cryptanalysts diagnosed how the unseen machine functioned and built an imitation of it called "British Tunny".
It was deduced that the machine had twelve wheels and used a Vernam ciphering technique on message characters in the standard 5-bit ITA2 telegraph code. It did this by combining the plaintext characters with a stream of key characters using the XOR Boolean function to produce the ciphertext.
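The XOR combination described above can be pictured with a minimal sketch (Python is used purely for illustration; the character values and the helper name are assumptions, not part of the historical machine). Each plaintext and key character is a 5-bit value, and because XOR is self-inverse the same operation both enciphers and deciphers.

```python
# Minimal sketch of the Vernam principle: combine 5-bit teleprinter
# characters of the plaintext with 5-bit key characters by bitwise XOR.
# Applying the same key stream again recovers the plaintext.

def vernam(chars, key):
    """XOR two equal-length streams of 5-bit characters (values 0-31)."""
    return [c ^ k for c, k in zip(chars, key)]

plaintext = [0b01010, 0b11001, 0b00111]   # illustrative character values
keystream = [0b10110, 0b00101, 0b11100]   # illustrative key characters

ciphertext = vernam(plaintext, keystream)
assert vernam(ciphertext, keystream) == plaintext   # XOR is self-inverse
```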
In August 1941, a blunder by German operators led to the transmission of two versions of the same message with identical machine settings. These were intercepted and worked on at Bletchley Park. First, John Tiltman, a very talented GC&CS cryptanalyst, derived a keystream of almost 4000 characters. Then Bill Tutte, a newly arrived member of the Research Section, used this keystream to work out the logical structure of the Lorenz machine. He deduced that the twelve wheels consisted of two groups of five, which he named the χ ("chi") and ψ ("psi") wheels; the remaining two he called μ ("mu") or "motor" wheels. The "chi" wheels stepped regularly with each letter that was encrypted, while the "psi" wheels stepped irregularly, under the control of the motor wheels.
With a sufficiently random keystream, a Vernam cipher removes the natural language property of a plaintext message of having an uneven frequency distribution of the different characters, to produce a uniform distribution in the ciphertext. The Tunny machine did this well. However, the cryptanalysts worked out that by examining the frequency distribution of the character-to-character changes in the ciphertext, instead of the plain characters, there was a departure from uniformity which provided a way into the system. This was achieved by "differencing" in which each bit or character was XOR-ed with its successor. After Germany surrendered, allied forces captured a Tunny machine and discovered that it was the electromechanical Lorenz SZ ("Schlüsselzusatzgerät", cipher attachment) in-line cipher machine.
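The differencing operation can likewise be sketched briefly (an illustrative Python fragment; the names and data are assumptions): each character of a stream is XOR-ed with its successor, and the cryptanalytic statistics are then taken on this differenced ("delta") stream rather than on the raw ciphertext.

```python
# Differencing ("delta-ing"): each character is XOR-ed with its successor,
# so D[i] = Z[i] XOR Z[i+1], and the result is one element shorter.

def delta(stream):
    """Return the differenced stream of 5-bit characters."""
    return [a ^ b for a, b in zip(stream, stream[1:])]

Z = [0b01010, 0b01010, 0b11001, 0b00111]   # illustrative ciphertext characters
print(delta(Z))                            # [0, 19, 30]
```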
In order to decrypt the transmitted messages, two tasks had to be performed. The first was "wheel breaking", which was the discovery of the cam patterns for all the wheels. These patterns were set up on the Lorenz machine and then used for a fixed period of time for a succession of different messages. Each transmission, which often contained more than one message, was enciphered with a different start position of the wheels. Alan Turing invented a method of wheel-breaking that became known as Turingery. Turing's technique was further developed into "Rectangling", for which Colossus could produce tables for manual analysis. Colossi 2, 4, 6, 7 and 9 had a "gadget" to aid this process.
The second task was "wheel setting", which worked out the start positions of the wheels for a particular message and could only be attempted once the cam patterns were known. It was this task for which Colossus was initially designed. To discover the start position of the "chi" wheels for a message, Colossus compared two character streams, counting statistics from the evaluation of programmable Boolean functions. The two streams were the ciphertext, which was read at high speed from a paper tape, and the keystream, which was generated internally, in a simulation of the unknown German machine. After a succession of different Colossus runs to discover the likely "chi"-wheel settings, they were checked by examining the frequency distribution of the characters in the processed ciphertext. Colossus produced these frequency counts.
Decryption processes.
By using differencing and knowing that the "psi" wheels did not advance with each character, Tutte worked out that trying just two differenced bits (impulses) of the "chi"-stream against the differenced ciphertext would produce a statistic that was non-random. This became known as Tutte's "1+2 break in". It involved calculating the following Boolean function:
formula_1
and counting the number of times it yielded "false" (zero). If this number exceeded a pre-defined threshold value known as the "set total", it was printed out. The cryptanalyst would examine the printout to determine which of the putative start positions was most likely to be the correct one for the "chi"-1 and "chi"-2 wheels.
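The Boolean function itself is represented in the text only as formula_1. As a hedged illustration, the sketch below assumes the commonly described form of Tutte's 1+2 condition, in which the differenced first and second impulses of the ciphertext are XOR-ed with those of a trial "chi"-stream and the zero ("false") results are counted against the set total; the exact function form, the variable names and the threshold value are assumptions made for illustration only.

```python
# Illustrative scoring run for one trial chi-wheel start position.
# Assumed form of the 1+2 condition: dZ1 XOR dZ2 XOR dChi1 XOR dChi2 == 0,
# where each d-stream is a differenced impulse (bit) stream.

def delta(bits):
    return [a ^ b for a, b in zip(bits, bits[1:])]

def count_false(dz1, dz2, dchi1, dchi2):
    """Count positions where the assumed 1+2 condition yields false (zero)."""
    return sum((a ^ b ^ c ^ d) == 0
               for a, b, c, d in zip(dz1, dz2, dchi1, dchi2))

# Illustrative impulse streams; in practice these came from the message tape
# and from the electronic chi-wheel simulator respectively.
z1, z2     = [1, 0, 1, 1, 0, 1, 0, 0], [0, 0, 1, 0, 1, 1, 1, 0]
chi1, chi2 = [1, 1, 0, 1, 0, 0, 1, 1], [0, 1, 1, 1, 0, 1, 0, 0]

SET_TOTAL = 4                               # illustrative threshold
score = count_false(delta(z1), delta(z2), delta(chi1), delta(chi2))
if score > SET_TOTAL:
    print("candidate start position printed, score =", score)
```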
This technique would then be applied to other pairs of, or single, impulses to determine the likely start position of all five "chi" wheels. From this, the de-"chi" (D) of a ciphertext could be obtained, from which the "psi" component could be removed by manual methods. If the frequency distribution of characters in the de-"chi" version of the ciphertext was within certain bounds, "wheel setting" of the "chi" wheels was considered to have been achieved, and the message settings and de-"chi" were passed to the "Testery". This was the section at Bletchley Park led by Major Ralph Tester where the bulk of the decrypting work was done by manual and linguistic methods.
Colossus could also derive the start position of the "psi" and motor wheels. The feasibility of utilizing this additional capability regularly was made possible in the last few months of the war when there were plenty of Colossi available and the number of Tunny messages had declined.
Design and construction.
Colossus was developed for the "Newmanry", the section headed by the mathematician Max Newman that was responsible for machine methods against the twelve-rotor Lorenz SZ40/42 on-line teleprinter cipher machine (code-named Tunny, for tunafish). The Colossus design arose out of a parallel project that produced a less-ambitious counting machine dubbed "Heath Robinson". Although the Heath Robinson machine proved the concept of machine analysis for this part of the process, it had serious limitations. The electro-mechanical parts were relatively slow and it was difficult to synchronise two looped paper tapes, one containing the enciphered message, and the other representing part of the keystream of the Lorenz machine. Also the tapes tended to stretch and break when being read at up to 2000 characters per second.
Tommy Flowers MBE was a senior electrical engineer and Head of the Switching Group at the Post Office Research Station at Dollis Hill. Prior to his work on Colossus, he had been involved with GC&CS at Bletchley Park from February 1941 in an attempt to improve the Bombes that were used in the cryptanalysis of the German Enigma cipher machine. He was recommended to Max Newman by Alan Turing, who had been impressed by his work on the Bombes. The main components of the Heath Robinson machine were as follows.
Flowers had been brought in to design the Heath Robinson's combining unit. He was not impressed by the system of a key tape that had to be kept synchronised with the message tape and, on his own initiative, he designed an electronic machine which eliminated the need for the key tape by having an electronic analogue of the Lorenz (Tunny) machine. He presented this design to Max Newman in February 1943, but the idea that the one to two thousand thermionic valves (vacuum tubes and thyratrons) proposed could work together reliably was greeted with great scepticism, so more Robinsons were ordered from Dollis Hill. Flowers, however, knew from his pre-war work that most thermionic valve failures occurred as a result of the thermal stresses at power-up, so not powering a machine down reduced failure rates to very low levels. Additionally, if the heaters were started at a low voltage then slowly brought up to full voltage, thermal stress was reduced. The valves themselves could be soldered in to avoid problems with plug-in bases, which could be unreliable. Flowers persisted with the idea and obtained support from the Director of the Research Station, W Gordon Radley.
Flowers and his team of some fifty people in the switching group spent eleven months from early February 1943 designing and building a machine that dispensed with the second tape of the Heath Robinson, by generating the wheel patterns electronically. Flowers used some of his own money for the project. This prototype, Mark 1 Colossus, contained 1,600 thermionic valves (tubes). It performed satisfactorily at Dollis Hill on 8 December 1943 and was dismantled and shipped to Bletchley Park, where it was delivered on 18 January and re-assembled by Harry Fensom and Don Horwood. It was operational in January and it successfully attacked its first message on 5 February 1944. It was a large structure and was dubbed 'Colossus'. A memo held in the National Archives written by Max Newman on 18 January 1944 records that "Colossus arrives today".
During the development of the prototype, an improved design had been developed – the Mark 2 Colossus. Four of these were ordered in March 1944 and by the end of April the number on order had been increased to twelve. Dollis Hill was put under pressure to have the first of these working by 1 June. Allen Coombs took over leadership of the production Mark 2 Colossi, the first of which – containing 2,400 valves – became operational at 08:00 on 1 June 1944, just in time for the Allied Invasion of Normandy on D-Day. Subsequently, Colossi were delivered at the rate of about one a month. By the time of V-E Day there were ten Colossi working at Bletchley Park and a start had been made on assembling an eleventh. Seven of the Colossi were used for 'wheel setting' and three for 'wheel breaking'.
The main units of the Mark 2 design were as follows.
Most of the design of the electronics was the work of Tommy Flowers, assisted by William Chandler, Sidney Broadhurst and Allen Coombs; with Erie Speight and Arnold Lynch developing the photoelectric reading mechanism. Coombs remembered Flowers, having produced a rough draft of his design, tearing it into pieces that he handed out to his colleagues for them to do the detailed design and get their team to manufacture it. The Mark 2 Colossi were both five times faster and simpler to operate than the prototype.
Data input to Colossus was by photoelectric reading of a paper tape transcription of the enciphered intercepted message. This was arranged in a continuous loop so that it could be read and re-read multiple times – there being no internal storage for the data. The design overcame the problem of synchronizing the electronics with the speed of the message tape by generating a clock signal from reading its sprocket holes. The speed of operation was thus limited by the mechanics of reading the tape. During development, the tape reader was tested up to 9700 characters per second (53 mph) before the tape disintegrated. So 5000 characters/second was settled on as the speed for regular use. Flowers designed a 6-character shift register, which was used both for computing the delta function (ΔZ) and for testing five different possible starting points of Tunny's wheels in the five processors. This five-way parallelism enabled five simultaneous tests and counts to be performed giving an effective processing speed of 25,000 characters per second. The computation used algorithms devised by W. T. Tutte and colleagues to decrypt a Tunny message.
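The five-way parallelism can be pictured with a rough sketch (an illustrative simplification, not a model of the actual circuitry: it scores whole characters rather than separate impulses, and all names and data are assumptions). Holding a few successive tape characters in a short shift register lets five neighbouring trial start positions of the simulated wheel pattern be scored in a single pass of the tape, which is where the factor-of-five gain in effective speed comes from.

```python
# Rough sketch of scoring five trial start positions (offsets 0-4) of a
# simulated wheel pattern against the differenced tape in one pass.

def delta(stream):
    return [a ^ b for a, b in zip(stream, stream[1:])]

def score_five_offsets(d_tape, d_wheel):
    """Score offsets 0-4 of the wheel pattern in a single pass of the tape."""
    counts = [0] * 5
    for i, z in enumerate(d_tape):
        for offset in range(5):              # the five parallel processors
            if (z ^ d_wheel[(i + offset) % len(d_wheel)]) == 0:
                counts[offset] += 1
    return counts

tape  = [5, 9, 20, 9, 5, 17, 30, 2, 8, 11]   # illustrative tape characters
wheel = [3, 12, 9, 30, 5, 21, 9, 17, 4]      # illustrative wheel pattern

print(score_five_offsets(delta(tape), delta(wheel)))
```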
Operation.
The Newmanry was staffed by cryptanalysts, operators from the Women's Royal Naval Service (WRNS) – known as "Wrens" – and engineers who were permanently on hand for maintenance and repair. By the end of the war the staffing was 272 Wrens and 27 men.
The first job in operating Colossus for a new message was to prepare the paper tape loop. This was performed by the Wrens who stuck the two ends together using Bostik glue, ensuring that there was a 150-character length of blank tape between the end and the start of the message. Using a special hand punch they inserted a start hole between the third and fourth channels sprocket holes from the end of the blank section, and a stop hole between the fourth and fifth channels sprocket holes from the end of the characters of the message. These were read by specially positioned photocells and indicated when the message was about to start and when it ended. The operator would then thread the paper tape through the gate and around the pulleys of the bedstead and adjust the tension. The two-tape bedstead design had been carried on from Heath Robinson so that one tape could be loaded whilst the previous one was being run. A switch on the Selection Panel specified the "near" or the "far" tape.
After performing various resetting and zeroizing tasks, the Wren operators would, under instruction from the cryptanalyst, operate the "set total" decade switches and the K2 panel switches to set the desired algorithm. They would then start the bedstead tape motor and lamp and, when the tape was up to speed, operate the master start switch.
Programming.
Howard Campaigne, a mathematician and cryptanalyst from the US Navy's OP-20-G, wrote the following in a foreword to Flowers' 1983 paper "The Design of Colossus".
Colossus was not a stored-program computer. The input data for the five parallel processors was read from the looped message paper tape and the electronic pattern generators for the "chi", "psi" and motor wheels. The programs for the processors were set and held on the switches and jack panel connections. Each processor could evaluate a Boolean function and count and display the number of times it yielded the specified value of "false" (0) or "true" (1) for each pass of the message tape.
Input to the processors came from two sources, the shift registers from tape reading and the thyratron rings that emulated the wheels of the Tunny machine. The characters on the paper tape were called Z and the characters from the Tunny emulator were referred to by the Greek letters that Bill Tutte had given them when working out the logical structure of the machine. On the selection panel, switches specified either Z or ΔZ, either formula_2 or Δformula_2 and either formula_4 or Δformula_4 for the data to be passed to the jack field and 'K2 switch panel'. These signals from the wheel simulators could be specified as stepping on with each new pass of the message tape or not.
The K2 switch panel had a group of switches on the left-hand side to specify the algorithm. The switches on the right-hand side selected the counter to which the result was fed. The plugboard allowed less specialized conditions to be imposed. Overall the K2 switch panel switches and the plugboard allowed about five billion different combinations of the selected variables.
As an example: a set of runs for a message tape might initially involve two "chi" wheels, as in Tutte's 1+2 algorithm. Such a two-wheel run was called a long run, taking on average eight minutes unless the parallelism was utilised to cut the time by a factor of five. The subsequent runs might only involve setting one "chi" wheel, giving a short run taking about two minutes. At first, after the initial long run, the choice of the next algorithm to be tried was specified by the cryptanalyst. Experience showed, however, that decision trees for this iterative process could be produced for use by the Wren operators in a proportion of cases.
Influence and fate.
Although the Colossus was the first of the electronic digital machines with programmability, albeit limited by modern standards, it was not a general-purpose machine, being designed for a range of cryptanalytic tasks, most involving counting the results of evaluating Boolean algorithms.
A Colossus computer was thus not a fully Turing complete machine. However, University of San Francisco professor Benjamin Wells has shown that if all ten Colossus machines made were rearranged in a specific cluster, then the entire set of computers could have simulated a universal Turing machine, and thus be Turing complete.
Colossus and the reasons for its construction were highly secret and remained so for 30 years after the War. Consequently, it was not included in the history of computing hardware for many years, and Flowers and his associates were deprived of the recognition they were due. All but two of the Colossi were dismantled after the war and parts returned to the Post Office. Some parts, sanitised as to their original purpose, were taken to Max Newman's Royal Society Computing Machine Laboratory at Manchester University. Two Colossi, along with two Tunny machines, were retained and moved to GCHQ's new headquarters at Eastcote in April 1946, and then to Cheltenham between 1952 and 1954. One of the Colossi, known as "Colossus Blue", was dismantled in 1959; the other in the 1960s. Tommy Flowers was ordered to destroy all documentation. He duly burnt them in a furnace and later said of that order:
The Colossi were adapted for other purposes, with varying degrees of success; in their later years they were used for training. Jack Good related how he was the first to use Colossus after the war, persuading the US National Security Agency that it could be used to perform a function for which they were planning to build a special-purpose machine. Colossus was also used to perform character counts on one-time pad tape to test for non-randomness.
A small number of people who were associated with Colossus—and knew that large-scale, reliable, high-speed electronic digital computing devices were feasible—played significant roles in early computer work in the UK and probably in the US. However, being so secret, it had little direct influence on the development of later computers; it was EDVAC that was the seminal computer architecture of the time. In 1972, Herman Goldstine, who was unaware of Colossus and its legacy to the projects of people such as Alan Turing (ACE), Max Newman (Manchester computers) and Harry Huskey (Bendix G-15), wrote that,
Professor Brian Randell, who unearthed information about Colossus in the 1970s, commented on this, saying that:
Randell's efforts started to bear fruit in the mid-1970s. The secrecy about Bletchley Park had been broken when Group Captain Winterbotham published his book "The Ultra Secret" in 1974. Randell was researching the history of computer science in Britain for a conference on the history of computing held at the Los Alamos Scientific Laboratory, New Mexico on 10–15 June 1976, and got permission to present a paper on wartime development of the COLOSSI at the Post Office Research Station, Dollis Hill (in October 1975 the British Government had released a series of captioned photographs from the Public Record Office). The interest in the "revelations" in his paper resulted in a special evening meeting when Randell and Coombs answered further questions. Coombs later wrote that "no member of our team could ever forget the fellowship, the sense of purpose and, above all, the breathless excitement of those days". In 1977 Randell published an article "The First Electronic Computer" in several journals.
In October 2000, a 500-page technical report on the Tunny cipher and its cryptanalysis—entitled "General Report on Tunny"—was released by GCHQ to the national Public Record Office, and it contains a fascinating paean to Colossus by the cryptographers who worked with it:
Reconstruction.
A team led by Tony Sale built a fully functional reconstruction of a Colossus Mark 2 between 1993 and 2008. In spite of the blueprints and hardware being destroyed, a surprising amount of material had survived, mainly in engineers' notebooks, but a considerable amount of it in the U.S. The optical tape reader might have posed the biggest problem, but Dr. Arnold Lynch, its original designer, was able to redesign it to his own original specification. The reconstruction is on display, in the historically correct place for Colossus No. 9, at The National Museum of Computing, in H Block, Bletchley Park, in Milton Keynes, Buckinghamshire.
In November 2007, to celebrate the project completion and to mark the start of a fundraising initiative for The National Museum of Computing, a Cipher Challenge pitted the rebuilt Colossus against radio amateurs worldwide in being first to receive and decode three messages enciphered using the Lorenz SZ42 and transmitted from radio station DL0HNF in the "Heinz Nixdorf MuseumsForum" computer museum. The challenge was easily won by radio amateur Joachim Schüth, who had carefully prepared for the event and developed his own signal processing and code-breaking code using Ada. The Colossus team were hampered by their wish to use World War II radio equipment, delaying them by a day because of poor reception conditions. Nevertheless, the victor's 1.4 GHz laptop, running his own code, took less than a minute to find the settings for all 12 wheels. The German codebreaker said: "My laptop digested ciphertext at a speed of 1.2 million characters per second—240 times faster than Colossus. If you scale the CPU frequency by that factor, you get an equivalent clock of 5.8 MHz for Colossus. That is a remarkable speed for a computer built in 1944."
The Cipher Challenge verified the successful completion of the rebuilding project. "On the strength of today's performance Colossus is as good as it was six decades ago", commented Tony Sale. "We are delighted to have produced a fitting tribute to the people who worked at Bletchley Park and whose brainpower devised these fantastic machines which broke these ciphers and shortened the war by many months."
Other meanings.
There was a fictional computer named "Colossus" in the 1970 film "Colossus: The Forbin Project", which was based on the 1966 novel "Colossus" by D. F. Jones. This was a coincidence as it pre-dates the public release of information about Colossus, or even its name.
Neal Stephenson's novel "Cryptonomicon" (1999) also contains a fictional treatment of the historical role played by Turing and Bletchley Park.
|
6230
|
29366802
|
https://en.wikipedia.org/wiki?curid=6230
|
Canadian Shield
|
The Canadian Shield ( ), also called the Laurentian Shield or the Laurentian Plateau, is a geologic shield, a large area of exposed Precambrian igneous and high-grade metamorphic rocks. It forms the North American Craton (or Laurentia), the ancient geologic core of the North American continent. Glaciation has left the area with only a thin layer of soil, through which exposures of igneous bedrock resulting from its long volcanic history are frequently visible. As a deep, common, joined bedrock region in eastern and central Canada, the shield stretches north from the Great Lakes to the Arctic Ocean, covering over half of Canada and most of Greenland; it also extends south into the northern reaches of the continental United States.
Geographical extent.
The Canadian Shield is a physiographic division comprising four smaller physiographic provinces: the Laurentian Upland, Kazan Region, Davis and James. The shield extends into the United States as the Adirondack Mountains (connected by the Frontenac Axis) and the Superior Upland. The Canadian Shield is a U-shaped subsection of the Laurentia craton signifying the area of greatest glacial impact (scraping down to bare rock) creating the thin soils. The age of the Canadian Shield is estimated to be 4.28 Ga (4.28 billion years). The Canadian Shield once had jagged peaks, higher than any of today's mountains, but millions of years of erosion have transformed these mountains to rolling hills.
The Canadian Shield is a collage of Archean plates and accreted juvenile arc terranes and sedimentary basins of the Proterozoic Eon that were progressively amalgamated during the interval 2.45–1.24 Ga, with the most substantial growth period occurring during the Trans-Hudson orogeny, between c. 1.90–1.80 Ga. The Canadian Shield was the first part of North America to be permanently elevated above sea level and has remained almost wholly untouched by successive encroachments of the sea upon the continent. It is the Earth's greatest area of exposed Archean rock. The metamorphic base rocks are mostly from the Precambrian (between 4.5 Ga and 540 Ma) and have been repeatedly uplifted and eroded. Today it consists largely of an area of low relief above sea level with a few monadnocks and low mountain ranges (including the Laurentian Mountains) probably eroded from the plateau during the Cenozoic Era. During the Pleistocene Epoch, continental ice sheets depressed the land surface (creating Hudson Bay) but also tilted up its northeastern "rim" (the Torngat), scooped out thousands of lake basins, and carried away much of the region's soil. The northeastern portion, however, became tilted up so that, in northern Labrador and Baffin Island, the land rises to more than 1,500 metres (5,000 feet) above sea level.
When the Greenland section is included, the Canadian Shield is approximately circular, bounded on the northeast by the northeast edge of Greenland, with Hudson Bay in the middle. It covers much of Greenland, all of Labrador and the Great Northern Peninsula of Newfoundland, most of Quebec north of the St. Lawrence River, much of Ontario including northern sections of the Ontario Peninsula, the Adirondack Mountains of New York, the northernmost part of Lower Michigan and all of Upper Michigan, northern Wisconsin, northeastern Minnesota, the central and northern portions of Manitoba, northern Saskatchewan, a small portion of northeastern Alberta, mainland Northwest Territories to the east of a line extended north from the Saskatchewan-Alberta border, most of Nunavut's mainland and, of its Arctic Archipelago, Baffin Island and significant bands through Somerset, Southampton, Devon and Ellesmere islands. In total, the exposed area of the shield covers approximately . The true extent of the shield is greater still and stretches from the Western Cordillera in the west to the Appalachians in the east and as far south as Texas, but these regions are overlaid with much younger rocks and sediment.
Geology.
The Canadian Shield is among the oldest geologic areas on Earth, with regions dating from 2.5 to 4.2 billion years. The multitude of rivers and lakes in the region is a classic example of a deranged drainage system, caused by the watersheds of the area being disturbed by glaciation and the effect of post-glacial rebound. The shield was originally an area of very large, very tall mountains (about ) with much volcanic activity, but the area was eroded to nearly its current topographic appearance of relatively low relief over 500 Ma. Erosion has exposed the roots of the mountains, which take the form of greenstone belts in which belts of volcanic rock that have been altered by metamorphism are surrounded by granitic rock. These belts range in age from 3.6 to 2.7 Ga. Much of the granitic rock belongs to the distinctive tonalite–trondhjemite–granodiorite family of rocks, which are characteristic of Archean continental crust. Many of Canada's major ore deposits are associated with greenstone belts.
The Sturgeon Lake Caldera in Kenora District, Ontario, is one of the world's best preserved mineralized Neoarchean caldera complexes, which is 2.7 Ga. The Canadian Shield also contains the Mackenzie dike swarm, which is the largest dike swarm known on Earth. The North American craton is the bedrock forming the heart of the North American continent, and the Canadian Shield is the largest exposed part of the craton's bedrock. The Canadian Shield is part of an ancient continent called Arctica, which was formed about 2.5 Ga during the Neoarchean era.
Mountains have deep roots and float on the denser mantle much like an iceberg at sea. As mountains erode, their roots rise and are eroded in turn. The rocks that now form the surface of the shield were once far below the Earth's surface. The high pressures and temperatures at those depths provided ideal conditions for mineralization. Although these mountains are now heavily eroded, many large mountains still exist in Canada's far north, in a range called the Arctic Cordillera. This is a vast, deeply dissected mountain range, stretching from northernmost Ellesmere Island to the northernmost tip of Labrador. The range's highest peak is Nunavut's Barbeau Peak at above sea level. Precambrian rock is the major component of the bedrock.
Ecology.
The current surface expression of the shield is one of very thin soil lying on top of the bedrock, with many bare outcrops. This arrangement was caused by severe glaciation during the ice ages that covered the shield and scraped the rock clean. The lowlands of the Canadian Shield have a very dense soil that is not suitable for forestation; it also contains many marshes and bogs (muskegs). The rest of the region has coarse soil that does not retain moisture well and is frozen with permafrost throughout the year. Forests are not as dense in the north.
The shield is covered in parts by vast boreal forests in the south that support natural ecosystems as well as a major logging industry. The boreal forest area gives way to the Eastern Canadian Shield taiga that covers northern Quebec and most of Labrador. The Midwestern Canadian Shield forests that run westwards from Northwestern Ontario have boreal forests that give way to taiga in the most northerly parts of Manitoba and Saskatchewan. Hydrologic drainage is generally poor, the soil compacting effects of glaciation being one of the many causes. Tundra typically prevails in the northern regions.
Many mammals such as beaver, caribou, white-tailed deer, moose, wolves, wolverines, weasels, mink, otters, grizzly bears, polar bears and black bears are present. In the case of polar bears ("Ursus maritimus"), the shield area contains many of their denning locations, such as Wapusk National Park.
The many lakes and rivers on the shield contain plentiful sport fish species, including walleye, northern pike, lake trout, yellow perch, whitefish, brook trout, Arctic grayling, and many types of baitfish. The water surfaces are also home to many waterfowl, most notably Canada geese, loons and gulls. The vast forests support myriad other birds, including ravens and crows, predatory birds and many songbirds.
Mining and economics.
The Canadian Shield is one of the world's richest areas for mineral ores. It is filled with substantial deposits of nickel, gold, silver, and copper. There are many mining towns extracting these minerals. The largest, and one of the best known, is Sudbury, Ontario. Sudbury is an exception to the normal process of forming minerals in the shield since the Sudbury Basin is an ancient meteorite impact crater. Ejecta from the meteorite impact was found in the Rove Formation in May 2007. The nearby but less-known Temagami Magnetic Anomaly has striking similarities to the Sudbury Basin. This suggests it could be a second metal-rich impact crater. In northeastern Quebec, the giant Manicouagan Reservoir is the site of an extensive hydroelectric project (Manic-cinq, or Manic-5). This is one of the largest-known meteorite impact craters on Earth, though not as large as the Sudbury crater.
The Flin Flon greenstone belt in central Manitoba and east-central Saskatchewan "is one of the largest Paleoproterozoic volcanic-hosted massive sulfide (VMS) districts in the world, containing 27 copper-zinc-(gold) deposits from which more than 183 million tonnes of sulfide have been mined." The portion in the Northwest Territories has recently been the site of several major diamond discoveries. The kimberlite pipes in which the diamonds are found are closely associated with cratons, which provide the deep lithospheric mantle required to stabilize diamond as a mineral. The kimberlite eruptions then bring the diamonds from over depth to the surface. The Diavik mine is actively mining kimberlite diamonds.
|
6231
|
7903804
|
https://en.wikipedia.org/wiki?curid=6231
|
Comic book
|
A comic book, comic-magazine, or simply comic is a publication that consists of comics art in the form of sequential panels that represent individual scenes. Panels are often accompanied by descriptive prose and written narrative, usually dialogue contained in word balloons, which are emblematic of the comics art form.
"Comic Cuts" was a British comic published from 1890 to 1953. It was preceded by "Ally Sloper's Half Holiday" (1884), which is notable for its use of sequential cartoons to unfold narrative. These British comics existed alongside the popular lurid "penny dreadfuls" (such as "Spring-heeled Jack"), boys' "story papers" and the humorous "Punch" magazine, which was the first to use the term "cartoon" in its modern sense of a humorous drawing.
The first modern American-style comic book, "", was released in the US in 1933 and was a reprinting of earlier newspaper humor comic strips, which had established many of the story-telling devices used in comics. The term "comic book" derives from American comic books once being a compilation of comic strips of a humorous tone; however, this practice was replaced by featuring stories of all genres, usually not humorous in tone.
The largest comic book market is Japan. By 1995, the manga market in Japan was valued at (), with annual sales of 1.9 billion manga books ( volumes and manga magazines) in Japan, equivalent to 15 issues per person. In 2020, the manga market in Japan reached a new record value of due to a fast growth of digital manga sales as well as an increase in print sales. The comic book market in the United States and Canada was valued at in 2016. , the largest comic book publisher in the United States is manga distributor Viz Media, followed by DC Comics and Marvel Comics featuring superhero comics franchises such as Superman, Batman, Wonder Woman, Spider-Man, the Incredible Hulk, and the X-Men. The best-selling comic book categories in the US are juvenile children's fiction at 41%, manga at 28% and superhero comics at 10% of the market. Another major comic book market is France, where Franco-Belgian comics and Japanese manga each represent 40% of the market, followed by American comics at 10% market share.
Structure.
Comic books rely heavily on their organization and visual presentation. Authors dedicate significant attention to aspects like page layout, size, orientation, and the positioning of panels. These characteristics are crucial for effectively conveying the content and messages within the comic book. Key components of comic books include panels, speech bubbles (also known as balloons), text lines, and characters. Speech balloons generally take the form of convex containers that hold character dialogue and are connected to the character via a tail element; the tail comprises an origin, path, tip, and directional point. The creation of comic books involves several essential steps: writing, drawing, and coloring, and various technological tools and methods are employed in the process, incorporating concepts such as directions, axes, data, and metrics. In the United States, the term "comic book" is generally used for comics periodicals and trade paperbacks, while "graphic novel" is the term used for standalone books.
American comic books.
Comics as a print medium have existed in the United States since the printing of "The Adventures of Mr. Obadiah Oldbuck" in 1842 in hardcover, making it the first known American prototype comic book. Proto-comics periodicals began appearing early in the 20th century, with the first standard-sized comic being "Funnies on Parade", which established the size, duration, and format of the modern comic book. This was followed by Dell Publishing's 36-page "" as the first true newsstand American comic book; Goulart, for example, calls it "the cornerstone for one of the most lucrative branches of magazine publishing". Earlier, in 1905, G.W. Dillingham Company published 24 select strips by the cartoonist Gustave Verbeek in an anthology book called "The Incredible Upside-Downs of Little Lady Lovekins and Old Man Muffaroo". The introduction of Jerry Siegel and Joe Shuster's Superman in 1938 turned comic books into a major industry and ushered in the Golden Age of Comic Books. The Golden Age originated the archetype of the superhero. According to historian Michael A. Amundson, appealing comic-book characters helped ease young readers' fear of nuclear war and neutralize anxiety about the questions posed by atomic power.
Historians generally divide the timeline of the American comic book into eras. The Golden Age of Comic Books began in 1938, with the debut of Superman in Action Comics #1, published by Detective Comics (predecessor of DC Comics), which is generally considered the beginning of the modern comic book as it is known today. The Silver Age of Comic Books is generally considered to date from the first successful revival of the then-dormant superhero form, with the debut of the Flash in "Showcase" #4 (Oct. 1956). The Silver Age lasted through the late 1960s or early 1970s, during which time Marvel Comics revolutionized the medium with such naturalistic superheroes as Stan Lee and Jack Kirby's Fantastic Four and Lee and Steve Ditko's Spider-Man. The demarcation between the Silver Age and the following era, the Bronze Age of Comic Books, is less well-defined, with the Bronze Age running from the very early 1970s through the mid-1980s. The Modern Age of Comic Books runs from the mid-1980s to the present day.
A significant event in the timeline of American comic books occurred when psychiatrist Fredric Wertham voiced his criticisms of the medium through his book "Seduction of the Innocent" (1954). This critique led to the involvement of the American Senate Subcommittee on Juvenile Delinquency, which launched an investigation into comic books. Wertham argued that comic books were accountable for a surge in juvenile delinquency and posed a potential impact on a child's sexuality and moral values. In response to attention from the government and from the media, the US comic book industry set up the Comics Magazine Association of America. The CMAA instituted the Comics Code Authority in 1954 and drafted the self-censorship Comics Code that year, which required all comic books to go through a process of approval. It was not until the 1970s that comic books could be published without passing through the inspection of the CMAA. The Code was made formally defunct in November 2011.
Underground comic books.
In the late 1960s and early 1970s, a surge of creativity emerged in what became known as underground comix. Published and distributed independently of the established comics industry, most of such comics reflected the youth counterculture and drug culture of the time. Underground comix "reflected and commented on the social divisions and tensions of American society". Many had an uninhibited, often irreverent style; their frank depictions of nudity, sex, profanity, and politics had no parallel outside their precursors, the pornographic and even more obscure "Tijuana bibles". Underground comics were almost never sold at newsstands, but rather in such youth-oriented outlets as head shops and record stores, as well as by mail order. The underground comics encouraged creators to publish their work independently so that they would have full ownership rights to their characters.
Frank Stack's "The Adventures of Jesus", published under the name Foolbert Sturgeon, has been credited as the first underground comix, while R. Crumb and the crew of cartoonists who worked on "Zap Comix" popularized the form.
Alternative comics.
The rise of comic book specialty stores in the late 1970s created and paralleled a dedicated market for "independent" or "alternative comics" in the US. The first such comics included the anthology series "Star Reach", published by comic book writer Mike Friedrich from 1974 to 1979, and Harvey Pekar's "American Splendor", which continued sporadic publication into the 21st century and which Shari Springer Berman and Robert Pulcini adapted into a 2003 film. Some independent comics continued in the tradition of underground comics, though their content generally remained less explicit; others resembled the output of mainstream publishers in format and genre but were published by smaller artist-owned companies or by single artists. A few (notably "RAW") represented experimental attempts to bring comics closer to the status of fine art.
During the 1970s the "small press" culture grew and diversified. By the 1980s, several independent publishers – such as Pacific, Eclipse, First, and Fantagraphics – had started releasing a wide range of styles and formats—from color-superhero, detective, and science-fiction comic books to black-and-white magazine-format stories of Latin American magical realism.
A number of small publishers in the 1990s changed the format and distribution of their comics to more closely resemble non-comics publishing. The "minicomics" form, an extremely informal version of self-publishing, arose in the 1980s and became increasingly popular among artists in the 1990s, despite reaching an even more limited audience than the small press.
Small publishers regularly releasing titles include Avatar Press, Hyperwerks, Raytoons, and Terminal Press, buoyed by such advances in printing technology as digital print-on-demand.
Graphic novels.
In 1964, Richard Kyle coined the term "graphic novel".
Precursors of the form existed by the 1920s, which saw a revival of the medieval woodcut tradition by Belgian Frans Masereel, American Lynd Ward, and others.
In 1947, Fawcett Publications published "Comics Novel No. 1" as the first in an intended series of these "comics novels". The story in the first issue was "Anarcho, Dictator of Death", a five-chapter spy-genre tale written by Otto Binder and drawn by Al Carreno. It is readable online in the Digital Comic Museum. The magazine never reached a second issue.
In 1950, St. John Publications produced the digest-sized, adult-oriented "picture novel" "It Rhymes with Lust", a 128-page digest by pseudonymous writer "Drake Waller" (Arnold Drake and Leslie Waller), penciler Matt Baker and inker Ray Osrin, touted as "an original full-length novel" on its cover. "It Rhymes with Lust" is also available to read online in the Digital Comic Museum.
In 1971, writer-artist Gil Kane and collaborators applied a paperback format to their "comics novel" "Blackmark". Will Eisner popularized the term "graphic novel" when he used it on the cover of the paperback edition of his work "A Contract with God, and Other Tenement Stories" in 1978 and, subsequently, the usage of the term began to increase.
Market size.
In 2017, the comic book market size for North America was just over $1 billion with digital sales being flat, book stores having a 1% decline, and comic book stores having a 10% decline over 2016. The global comic book market saw a substantial 12% growth in 2020, reaching a total worth of US$8.49 billion. This positive trajectory continued in 2021, with the market's annual valuation surging to US$9.21 billion. The rising popularity of comic books can be attributed to heightened global interest, driven significantly by collaborative efforts among diverse brands. These collaborations are geared towards producing more engaging and appealing comic content, contributing to the industry's continued growth.
Comic book collecting.
The 1970s saw the advent of specialty comic book stores. Initially, comic books were marketed by publishers to children because comic books were perceived as children's entertainment. However, with increasing recognition of comics as an art form and the growing pop culture presence of comic book conventions, they are now embraced by many adults.
Comic book collectors often exhibit a lifelong passion for the stories within comics, often focusing on specific superheroes and striving to gather a complete collection of a particular series. Comics are assigned sequential numbers, and the initial issue of a long-lasting comic book series tends to be both the scarcest and the most coveted among collectors. The introduction of a new character might occur within an existing title. For instance, the first appearance of Spider-Man took place in "Amazing Fantasy" #15. New characters were frequently introduced in this manner, waiting for an established audience before launching their own titles. Consequently, comics featuring the debut appearance of a significant character can sometimes be even more challenging to locate than the inaugural issue of that character's standalone series.
Some rare comic books include copies of the unreleased "Motion Picture Funnies Weekly" #1 from 1939. Eight copies, plus one without a cover, emerged in the estate of the deceased publisher in 1974. The "Pay Copy" of this book sold for $43,125 in a 2005 Heritage auction.
The most valuable American comics have combined rarity and quality with the first appearances of popular and enduring characters. Four comic books have sold for over US$1 million, including two examples of "Action Comics" #1, the first appearance of Superman, both sold privately through online dealer ComicConnect.com in 2010, and "Detective Comics" #27, the first appearance of Batman, via public auction.
The highest sale on record for "Action Comics" #1, the first appearance of Superman, is $3.2 million, for a copy graded 9.0.
Misprints, promotional comic-dealer incentive printings, and issues with exceptionally low distribution tend to possess scarcity value in the comic book market. The rarest modern comic books include the original press run of "The League of Extraordinary Gentlemen" #5, which DC executive Paul Levitz recalled and pulped due to the appearance of a vintage Victorian era advertisement for "Marvel Douche", which the publisher considered offensive; only 100 copies exist, most of which have been CGC graded. (See Recalled comics for more pulped, recalled, and erroneous comics.)
In 2000, a company named Comics Guaranty (CGC) initiated the practice of "slabbing" comics, which involves encasing them within thick plastic cases and assigning them a numerical grade. This approach inspired the emergence of Comic Book Certification Service. Given the significance of condition in determining the value of rare comics, the concept of grading by an impartial company, one that does not engage in buying or selling comics, seemed promising. Nevertheless, there is an ongoing debate regarding whether the relatively high cost of this grading service is justified and whether it serves the interests of collectors or mainly caters to speculators seeking rapid profits, akin to trading in stocks or fine art. Comic grading has played a role in establishing standards for valuation, which online price guides such as GoCollect and GPAnalysis utilize to provide real-time market value information.
Collectors also seek out the original artwork pages from comic books, which are perhaps the most rarefied items in the realm of comic book collecting. These pages hold unparalleled scarcity due to the fact that there exists only one unique page of artwork for every page that was printed and published.
The creation of these original artwork pages involves a collaborative effort: a writer crafts the story, a pencil artist designs the sequential panels on the page, an ink artist goes over the pencil with pen and ink, a letterer provides the dialogue and narration through hand-lettering, and finally, a colorist adds color as the final touch before the pages are sent to the printer.
When the printer returns the original artwork pages, they are typically returned to the artists themselves. These artists sometimes opt to sell these pages at comic book conventions, in galleries, and at art shows centered around comic book art. The original pages from DC and Marvel, featuring the debut appearances of iconic characters such as Superman, Batman, Wonder Woman, the Flash, Captain Marvel, Spider-Man, the Incredible Hulk, Iron Man, Captain America and the Mighty Thor are regarded as priceless treasures within the comic book world.
History of race in American comic books.
Many early iterations of black characters in comics "became variations on the 'single stereotypical image of Sambo'." Sambo was closely related to the coon stereotype but had some subtle differences; both are derogatory ways of portraying black characters. "The name itself, an abbreviation of raccoon, is dehumanizing. As with Sambo, the coon was portrayed as a lazy, easily frightened, chronically idle, inarticulate buffoon." This portrayal "was of course another attempt to solidify the intellectual inferiority of the black race through popular culture." However, in the 1940s there was a change in the portrayal of black characters. "A cursory glance...might give the impression that situations had improved for African Americans in comics." In many comics being produced at this time there was a major push for tolerance between races. "These equality minded heroes began to spring to action just as African Americans were being asked to participate in the war effort."
During this time, a government-run program, the Writers' War Board, became heavily involved in what would be published in comics. "The Writers' War Board used comic books to shape popular perceptions of race and ethnicity..." Not only were they using comic books as a means of recruiting all Americans, they were also using them as propaganda to "[construct] a justification for race-based hatred of America's foreign enemies." The Writers' War Board created comic books that were meant to "[promote] domestic racial harmony". However, "these pro-tolerance narratives struggled to overcome the popular and widely understood negative tropes used for decades in American mass culture...", and the board did not pursue this agenda within all of its comics.
In the comic series "Captain Marvel Adventures", there was a character named Steamboat who embodied a collection of highly negative stereotypes prevalent during that period. The Writers' War Board did not request any alterations to this character despite the problematic portrayal. The removal of Steamboat from the series only came about due to the persistent advocacy of a black youth group based in New York City. Originally their request was refused by individuals working on the comic, who stated, ""Captain Marvel Adventures" included many kinds of caricatures 'for the sake of humor'." The black youth group responded with "this is not the Negro race, but your one-and-a-half million readers will think it so." Afterwards, Steamboat disappeared from the comics altogether. There was a comic created about the 99th Squadron, also known as the Tuskegee Airmen, an all-black air force unit. Instead of telling their story, however, the comic was about Hop Harrigan, a white pilot who captures a Nazi, shows him videos of the 99th Squadron defeating his men, and then reveals that the men who defeated them were African Americans, which infuriates the Nazi, who sees them as an inferior race and cannot believe they bested his men. "The Tuskegee Airmen, and images of black aviators appear in just three of the fifty three panels... the pilots of the 99th Squadron have no dialogue and interact with neither Hop Harrigan nor his Nazi captive." During this time, black characters were also used in comic books as a means to invalidate the militant black groups that were fighting for equality within the U.S. "Spider-Man 'made it clear that militant black power was not the remedy for racial injustice'." "The Falcon openly criticized black behavior stating 'maybe it's important us to cool things down-so we can protect the rights we been fightin' for'." This portrayal and character development of black characters can be partially blamed on the fact that, during this time, "there had rarely been a black artist or writer allowed in a major comics company."
Asian characters within comic books encountered similar prejudiced treatment as black characters did. They were subjected to dehumanizing depictions, with narratives often portraying them as "incompetent and subhuman." In a 1944 edition of the publication "United States Marines", there was a story titled "The Smell of the Monkeymen". This narrative portrayed Japanese soldiers as brutish simians, and it depicted their concealed positions being betrayed by their repugnant body odor. Chinese characters received the same treatment. "By the time the United States entered WWII, negative perceptions of Chinese were an established part of mass culture..." However, concerned that the Japanese could use America's anti-Chinese material as propaganda, publishers began "to present a more positive image of America's Chinese allies..." Just as they tried to show better representation for Black people in comics, they did the same for Asian people. However, "Japanese and Filipino characters were visually indistinguishable. Both groups have grotesque buckteeth, tattered clothing, and bright yellow skin." "Publishers depicted America's Asian allies through derogatory images and language honed over the preceding decades." Asian characters were previously portrayed as "ghastly yellow demons". During WWII, "[every] major superhero worth his spandex devoted himself to the eradication of Asian invaders." There was "a constant relay race in which one Asian culture merely handed off the baton of hatred to another with no perceptible changes in the manner in which the characters would be portrayed."
"The only specific depiction of a Hispanic superhero did not end well. In 1975, Marvel gave us Hector Ayala (a.k.a. The White Tiger)." "Although he fought for several years alongside the likes of much more popular heroes such as Spider-Man and Daredevil, he only lasted six years before sales of comics featuring him got so bad that Marvel had him retire. The most famous Hispanic character is Bane, a villain from Batman."
The Native American representation in comic books "can be summed up in the noble savage stereotype"; "a recurring theme...urged American Indians to abandon their traditional hostility towards the United States. They were the ones painted as intolerant and disrespectful of the dominant concerns of white America".
East Asian comics.
Japanese manga.
Manga () are comic books or graphic novels originating from Japan. Most manga conform to a style developed in Japan in the late 19th century, though the art form has a long prehistory in earlier Japanese art. The term "manga" is used in Japan to refer to both comics and cartooning in general. Outside Japan, the word is typically used to refer to comics originally published in the country.
Dōjinshi.
Dōjinshi, fan-made Japanese comics, operate in a far larger market in Japan than the American "underground comix" market; the largest dōjinshi fair, Comiket, attracts 500,000 visitors twice a year.
Korean manhwa.
Manhwa () are comic books or graphic novels originating from Korea. The term "manhwa" is used in Korea to refer to both comics and cartooning in general. Outside Korea, the term usually refers to comics originally published in Korea. Manhwa is greatly influenced by Japanese manga, though it differs from manga and manhua with its own distinct features.
Webtoons.
Webtoons have become popular in South Korea as a new way to read comics, thanks in part to different censorship rules, color and unique visual effects, and optimization for easier reading on smartphones and computers. Many manhwa have made the switch from traditional print to online webtoons, which offer better pay and more freedom than traditional print publishing. The webtoon format has also expanded to other countries outside of Korea, such as China, Japan, Southeast Asia, and Western countries. Major webtoon distributors include Lezhin, Naver, and Kakao.
European comics.
Franco-Belgian comics.
France and Belgium have a long tradition in comics and comic books, often called "BDs" (an abbreviation of "bandes dessinées", meaning literally "drawn strips") in French, and "strips" in Dutch or Flemish. Belgian comic books originally written in Dutch show the influence of the Francophone "Franco-Belgian" comics but have their own distinct style.
British comics.
Although "Ally Sloper's Half Holiday" (1884) was aimed at an adult market, publishers quickly targeted a younger demographic, which has led to most publications being for children and has created an association in the public's mind of comics as somewhat juvenile. "The Guardian" refers to Ally Sloper as "one of the world's first iconic cartoon characters", and "as famous in Victorian Britain as Dennis the Menace would be a century later." British comics in the early 20th century typically evolved from illustrated penny dreadfuls of the Victorian era (featuring Sweeney Todd, Dick Turpin and "Varney the Vampire"). First published in the 1830s, penny dreadfuls were "Britain's first taste of mass-produced popular culture for the young."
The two most popular British comic books, "The Beano" and "The Dandy", were first published by DC Thomson in the 1930s. By 1950 the weekly circulation of both reached 2 million. Explaining the enormous popularity of comics in the UK during this period, Anita O'Brien, director curator at London's Cartoon Museum, states: "When comics like the Beano and Dandy were invented back in the 1930s – and through really to the 1950s and 60s – these comics were almost the only entertainment available to children." "Dennis the Menace" was created in the 1950s, which saw sales for "The Beano" soar. He features on the cover of "The Beano", with the BBC referring to him as the "definitive naughty boy of the comic world."
In 1954, "Tiger" comics introduced "Roy of the Rovers", the hugely popular football based strip recounting the life of Roy Race and the team he played for, Melchester Rovers. The stock media phrase "real 'Roy of the Rovers' stuff" is often used by football writers, commentators and fans when describing displays of great skill, or surprising results that go against the odds, in reference to the dramatic storylines that were the strip's trademark. Other comic books such as "Eagle", "Valiant", "Warrior", "Viz" and "2000 AD" also flourished. Some comics, such as "Judge Dredd" and other "2000 AD" titles, have been published in a tabloid form. Underground comics and "small press" titles have also appeared in the UK, notably "Oz" and "Escape Magazine".
The content of "Action", another title aimed at children and launched in the mid-1970s, became the subject of discussion in the House of Commons. Although on a smaller scale than similar investigations in the US, such concerns led to a moderation of content published within British comics. Such moderation never became formalized to the extent of promulgating a code, nor did it last long. The UK has also established a healthy market in the reprinting and repackaging of material, notably material originating in the US. The lack of reliable supplies of American comic books led to a variety of black-and-white reprints, including Marvel's monster comics of the 1950s, Fawcett's Captain Marvel, and other characters such as Sheena, Mandrake the Magician, and the Phantom. Several reprint companies became involved in repackaging American material for the British market, notably the importer and distributor Thorpe & Porter. Marvel Comics established a UK office in 1972. DC Comics and Dark Horse Comics also opened offices in the 1990s. The repackaging of European material has occurred less frequently, although "The Adventures of Tintin" and "Asterix" serials have been successfully translated and repackaged in softcover books. The number of European comics available in the UK has increased in the last two decades. The British company Cinebook, founded in 2005, has released English translated versions of many European series.
In the 1980s, a resurgence of British writers and artists gained prominence in mainstream comic books, which was dubbed the "British Invasion" in comic book history. These writers and artists brought with them their own mature themes and philosophy such as anarchy, controversy and politics common in British media. These elements would pave the way for mature and "darker and edgier" comic books and jump start the Modern Age of Comics. Writers included Alan Moore, famous for his "V for Vendetta", "From Hell", "Watchmen", "Marvelman", and "The League of Extraordinary Gentlemen"; Neil Gaiman with "The Sandman" mythos and "Books of Magic"; Warren Ellis, creator of "Transmetropolitan" and "Planetary"; and others such as Mark Millar, creator of "Wanted" and "Kick-Ass". The comic book series "John Constantine, Hellblazer", which is largely set in Britain and starring the magician John Constantine, paved the way for British writers such as Jamie Delano.
In 2000, the English musician Peter Gabriel issued The Story of OVO, released as a CD-booklet-shaped comic book as part of the CD edition, with the title "OVO The Millennium Show". The 2000 Millennium Dome Show was based on it.
At Christmas, publishers repackage and commission material for comic annuals, printed and bound as hardcover A4-size books; "Rupert" supplies a famous example of the British comic annual. DC Thomson also repackages "The Broons" and "Oor Wullie" strips in softcover A4-size books for the holiday season.
On 19 March 2012, the British postal service, the Royal Mail, released a set of stamps depicting British comic book characters and series. The collection featured "The Beano", "The Dandy", "Eagle", "The Topper", "Roy of the Rovers", "Bunty", "Buster", "Valiant", "Twinkle" and "2000 AD".
Spanish comics.
It has been stated that the 13th century "Cantigas de Santa María" could be considered as the first Spanish "comic", although comic books (also known in Spain as "historietas" or "tebeos") made their debut around 1857. The magazine "TBO" was influential in popularizing the medium. After the Spanish Civil War, the Franco regime imposed strict censorship in all media: superhero comics were forbidden and as a result, comic heroes were based on historical fiction (in 1944 the medieval hero "El Guerrero del Antifaz" was created by Manuel Gago and another popular medieval hero, "Capitán Trueno", was created in 1956 by Víctor Mora and Miguel Ambrosio Zaragoza). Two publishing houses — Editorial Bruguera and Editorial Valenciana — dominated the Spanish comics market during its golden age (1950–1970). The most popular comics showed a recognizable style of slapstick humor (influenced by Franco-Belgian authors such as Franquin): Escobar's "Carpanta" and "Zipi y Zape", Vázquez's "Las hermanas Gilda" and "Anacleto," Ibáñez's "Mortadelo y Filemón" and "13. Rue del Percebe" or Jan's "Superlópez". After the end of the Francoist period, there was an increased interest in adult comics with magazines such as "Totem", "El Jueves", "1984", and "El Víbora," and works such as "Paracuellos" by Carlos Giménez.
Spanish artists have traditionally worked in other markets finding great success, either in the American (e.g., Eisner Award winners Sergio Aragonés, Salvador Larroca, Gabriel Hernández Walta, Marcos Martín or David Aja), the British (e.g., Carlos Ezquerra, co-creator of "Judge Dredd") or the Franco-Belgian one (e.g., Fauve d'Or winner or "Blacksad" authors Juan Díaz Canales and Juanjo Guarnido).
Italian comics.
In Italy, comics (known in Italian as "fumetti") made their debut as humor strips at the end of the 19th century, and later evolved into adventure stories. After World War II, however, artists like Hugo Pratt and Guido Crepax exposed Italian comics to an international audience. Popular comic books such as "Diabolik" or the "Bonelli" line—namely "Tex Willer" or "Dylan Dog"—remain best-sellers.
Mainstream comics are usually published on a monthly basis, in a black-and-white digest size format, with approximately 100 to 132 pages. Collections of classic material for the most famous characters, usually with more than 200 pages, are also common. Author comics are published in the French BD format, with an example being Pratt's "Corto Maltese".
Italian cartoonists show the influence of comics from other countries, including France, Belgium, Spain, and Argentina. Italy is also famous for being one of the foremost producers of Walt Disney comic stories outside the US; Donald Duck's superhero alter ego, Paperinik, known in English as Superduck, was created in Italy.
Distribution.
The comic book industry has struggled with distribution issues throughout its history, as numerous mainstream retailers have been hesitant to stock substantial quantities of the most engaging and sought-after comics. The smartphone and the tablet have turned out to be an ideal medium for online distribution.
Digital distribution.
On 13 November 2007, Marvel Comics launched Marvel Digital Comics Unlimited, a subscription service allowing readers to read many comics from Marvel's history online. The service also includes periodic releases of new comics not available elsewhere. With the release of "Avenging Spider-Man" #1, Marvel also became the first publisher to provide free digital copies as part of the print copy of the comic book.
With the growing popularity of smartphones and tablets, many major publishers have begun releasing titles in digital form. The most popular platform is comiXology. Some platforms, such as Graphicly, have shut down.
Comic collections in libraries.
Numerous libraries house extensive collections of comics in the form of graphic novels. This serves as a convenient means for the general public to become acquainted with the medium.
Guinness World Records.
In 2015, the Japanese manga artist Eiichiro Oda was awarded the "Guinness World Records" title for having the "Most copies published for the same comic book series by a single author". His manga series "One Piece", which he writes and illustrates, has been serialized in the Japanese magazine "Weekly Shōnen Jump" since December 1997, and by 2015, 77 collected volumes had been released. "Guinness World Records" reported in their announcement that the collected volumes of the series had sold a total of 320,866,000 units. "One Piece" also holds the "Guinness World Records" title for "Most copies published for the same manga series".
On 5 August 2018, the "Guinness World Records" title for the "Largest comic book ever published" was awarded to the Brazilian comic book "Turma da Mônica — O Maior Gibi do Mundo!", published by Panini Comics Brasil and Mauricio de Sousa Produções. The comic book measures . The 18-page comic book had a print run of 120 copies.
With the July 2021 publication of the 201st collected volume of his manga series "Golgo 13", Japanese manga artist Takao Saito was awarded the "Guinness World Records" title for "Most volumes published for a single manga series." "Golgo 13" has been continuously serialized in the Japanese magazine "Big Comic" since October 1968, which also makes it the oldest manga still in publication.
|
6233
|
38005489
|
https://en.wikipedia.org/wiki?curid=6233
|
Connected space
|
In topology and related branches of mathematics, a connected space is a topological space that cannot be represented as the union of two or more disjoint non-empty open subsets. Connectedness is one of the principal topological properties that distinguish topological spaces.
A subset of a topological space formula_1 is a connected set if it is a connected space when viewed as a subspace of formula_1.
Some related but stronger conditions are path connected, simply connected, and formula_3-connected. Another related notion is locally connected, which neither implies nor follows from connectedness.
Formal definition.
A topological space formula_1 is said to be disconnected if it is the union of two disjoint non-empty open sets. Otherwise, formula_1 is said to be connected. A subset of a topological space is said to be connected if it is connected under its subspace topology. Some authors exclude the empty set (with its unique topology) as a connected space, but this article does not follow that practice.
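As a minimal worked illustration (added here; it is not part of the original article), the two-point discrete space is disconnected, while the Sierpiński space on the same underlying set is connected:
\[
X = \{a, b\},\quad \tau_X = \{\varnothing, \{a\}, \{b\}, X\}: \qquad X = \{a\} \cup \{b\} \text{ is a union of two disjoint non-empty open sets, so } X \text{ is disconnected;}
\]
\[
Y = \{a, b\},\quad \tau_Y = \{\varnothing, \{a\}, Y\}: \qquad \text{the only non-empty open sets are } \{a\} \text{ and } Y, \text{ which are not disjoint, so } Y \text{ is connected.}
\]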
For a topological space formula_1 the following conditions are equivalent:
Historically this modern formulation of the notion of connectedness (in terms of no partition of formula_1 into two separated sets) first appeared (independently) with N.J. Lennes, Frigyes Riesz, and Felix Hausdorff at the beginning of the 20th century. See for details.
Connected components.
Given some point formula_17 in a topological space formula_18 the union of any collection of connected subsets such that each contains formula_17 will once again be a connected subset.
The connected component of a point formula_17 in formula_1 is the union of all connected subsets of formula_1 that contain formula_23 it is the unique largest (with respect to formula_24) connected subset of formula_1 that contains formula_26
The maximal connected subsets (ordered by inclusion formula_24) of a non-empty topological space are called the connected components of the space.
The components of any topological space formula_1 form a partition of formula_1: they are disjoint, non-empty and their union is the whole space.
Every component is a closed subset of the original space. It follows that, in the case where their number is finite, each component is also an open subset. However, if their number is infinite, this might not be the case; for instance, the connected components of the set of the rational numbers are the one-point sets (singletons), which are not open. Proof: Any two distinct rational numbers formula_30 are in different components. Take an irrational number formula_31 and then set formula_32 and formula_33 Then formula_34 is a separation of formula_35 and formula_36. Thus each component is a one-point set.
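To make the placeholder formulas in the proof above concrete, here is one plausible rendering of the separation (the exact sets denoted by the formula markers are an assumption, reconstructed from the surrounding argument): for distinct rationals \(a < b\), choose an irrational \(r\) with \(a < r < b\) and set
\[
U = \{\, q \in \mathbb{Q} : q < r \,\}, \qquad V = \{\, q \in \mathbb{Q} : q > r \,\}.
\]
Both sets are open in \(\mathbb{Q}\), they are disjoint, their union is all of \(\mathbb{Q}\), and \(a \in U\), \(b \in V\), so \(a\) and \(b\) lie in different components.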
Let formula_37 be the connected component of formula_17 in a topological space formula_18 and formula_40 be the intersection of all clopen sets containing formula_17 (called quasi-component of formula_17). Then formula_43 where the equality holds if formula_1 is compact Hausdorff or locally connected.
Disconnected spaces.
A space in which all components are one-point sets is called totally disconnected. Related to this property, a space formula_1 is called totally separated if, for any two distinct elements formula_17 and formula_47 of formula_1, there exist disjoint open sets formula_49 containing formula_17 and formula_51 containing formula_47 such that formula_1 is the union of formula_49 and formula_51. Clearly, any totally separated space is totally disconnected, but the converse does not hold. For example, take two copies of the rational numbers formula_56, and identify them at every point except zero. The resulting space, with the quotient topology, is totally disconnected. However, by considering the two copies of zero, one sees that the space is not totally separated. In fact, it is not even Hausdorff, and the condition of being totally separated is strictly stronger than the condition of being Hausdorff.
Examples.
An example of a space that is not connected is a plane with an infinite line deleted from it. Other examples of disconnected spaces (that is, spaces which are not connected) include the plane with an annulus removed, as well as the union of two disjoint closed disks, where all examples of this paragraph bear the subspace topology induced by two-dimensional Euclidean space.
Path connectedness.
A path-connected space is a stronger notion of connectedness, requiring the structure of a path. A path from a point formula_17 to a point formula_47 in a topological space formula_1 is a continuous function formula_89 from the unit interval formula_90 to formula_1 with formula_92 and formula_93. A path-component of formula_1 is an equivalence class of formula_1 under the equivalence relation which makes formula_17 equivalent to formula_47 if and only if there is a path from formula_17 to formula_47. The space formula_1 is said to be path-connected (or pathwise connected or formula_101-connected) if there is exactly one path-component. For non-empty spaces, this is equivalent to the statement that there is a path joining any two points in formula_1. Again, many authors exclude the empty space.
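A standard worked example (added for clarity; it does not appear in the original text): every convex subset \(C \subseteq \mathbb{R}^n\) is path-connected, because any two points \(x, y \in C\) are joined by the straight-line path
\[
\gamma : [0, 1] \to C, \qquad \gamma(t) = (1 - t)\,x + t\,y,
\]
which is continuous, satisfies \(\gamma(0) = x\) and \(\gamma(1) = y\), and remains inside \(C\) by convexity.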
Every path-connected space is connected. The converse is not always true: examples of connected spaces that are not path-connected include the extended long line formula_103 and the topologist's sine curve.
Subsets of the real line formula_67 are connected if and only if they are path-connected; these subsets are the intervals and rays of formula_67.
Also, open subsets of formula_65 or formula_107 are connected if and only if they are path-connected.
Additionally, connectedness and path-connectedness are the same for finite topological spaces.
Arc connectedness.
A space formula_1 is said to be arc-connected or arcwise connected if any two topologically distinguishable points can be joined by an arc, which is an embedding formula_109. An arc-component of formula_1 is a maximal arc-connected subset of formula_1; or equivalently an equivalence class of the equivalence relation of whether two points can be joined by an arc or by a path whose points are topologically indistinguishable.
Every Hausdorff space that is path-connected is also arc-connected; more generally this is true for a formula_112-Hausdorff space, which is a space where each image of a path is closed. An example of a space which is path-connected but not arc-connected is given by the line with two origins; its two copies of formula_113 can be connected by a path but not by an arc.
Intuition for path-connected spaces does not readily transfer to arc-connected spaces. Let formula_1 be the line with two origins. The following are facts whose analogues hold for path-connected spaces, but do not hold for arc-connected spaces:
Local connectedness.
A topological space is said to be locally connected at a point formula_17 if every neighbourhood of formula_17 contains a connected open neighbourhood. It is locally connected if it has a base of connected sets. It can be shown that a space formula_1 is locally connected if and only if every component of every open set of formula_1 is open.
Similarly, a topological space is said to be locally path-connected if it has a base of path-connected sets.
An open subset of a locally path-connected space is connected if and only if it is path-connected.
This generalizes the earlier statement about formula_65 and formula_107, each of which is locally path-connected. More generally, any topological manifold is locally path-connected.
Locally connected does not imply connected, nor does locally path-connected imply path connected. A simple example of a locally connected (and locally path-connected) space that is not connected (or path-connected) is the union of two separated intervals in formula_67, such as formula_128.
A classic example of a connected space that is not locally connected is the so-called topologist's sine curve, defined as formula_129, with the Euclidean topology induced by inclusion in formula_130.
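The placeholder formula_129 presumably stands for the usual definition of the topologist's sine curve; one standard form (stated here as an assumption, since the original formula is not shown) is
\[
T = \left\{ \left( x, \sin\tfrac{1}{x} \right) : 0 < x \le 1 \right\} \cup \{ (0, 0) \},
\]
with the subspace topology from \(\mathbb{R}^2\). Every neighbourhood of the point \((0, 0)\) in \(T\) meets infinitely many oscillations of the curve but contains no connected open neighbourhood, which is why \(T\) fails to be locally connected at that point.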
Set operations.
The intersection of connected sets is not necessarily connected.
The union of connected sets is not necessarily connected, as can be seen by considering formula_131.
Each ellipse is a connected set, but the union is not connected, since it can be partitioned into two disjoint open sets formula_49 and formula_51.
This means that, if the union formula_1 is disconnected, then the collection formula_135 can be partitioned into two sub-collections, such that the unions of the sub-collections are disjoint and open in formula_1. This implies that in several cases, a union of connected sets is necessarily connected. In particular:
The set difference of connected sets is not necessarily connected. However, if formula_144 and their difference formula_145 is disconnected (and thus can be written as a union of two open sets formula_146 and formula_147), then the union of formula_148 with each such component is connected (i.e. formula_149 is connected for all formula_150).
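A minimal worked instance of this statement (added for illustration): take \(X = \mathbb{R}\) and \(Y = \{0\}\), both connected. Their difference
\[
X \setminus Y = (-\infty, 0) \cup (0, \infty)
\]
is disconnected, with open pieces \((-\infty, 0)\) and \((0, \infty)\), and the union of \(Y\) with each piece, namely \((-\infty, 0]\) and \([0, \infty)\), is indeed connected, as the statement predicts.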
Graphs.
Graphs have path connected subsets, namely those subsets for which every pair of points has a path of edges joining them.
However, it is not always possible to find a topology on the set of points which induces the same connected sets. The 5-cycle graph (and any formula_3-cycle with formula_157 odd) is one such example.
As a consequence, a notion of connectedness can be formulated independently of the topology on a space. To wit, there is a category of connective spaces consisting of sets with collections of connected subsets satisfying connectivity axioms; their morphisms are those functions which map connected sets to connected sets. Topological spaces and graphs are special cases of connective spaces; indeed, the finite connective spaces are precisely the finite graphs.
However, every graph can be canonically made into a topological space, by treating vertices as points and edges as copies of the unit interval (see topological graph theory#Graphs as topological spaces). Then one can show that the graph is connected (in the graph theoretical sense) if and only if it is connected as a topological space.
Stronger forms of connectedness.
There are stronger forms of connectedness for topological spaces, for instance:
In general, any path connected space must be connected but there exist connected spaces that are not path connected. The deleted comb space furnishes such an example, as does the above-mentioned topologist's sine curve.
|
6235
|
28481209
|
https://en.wikipedia.org/wiki?curid=6235
|
Cell nucleus
|
The cell nucleus (plural: nuclei) is a membrane-bound organelle found in eukaryotic cells. Eukaryotic cells usually have a single nucleus, but a few cell types, such as mammalian red blood cells, have no nuclei, and a few others including osteoclasts have many. The main structures making up the nucleus are the nuclear envelope, a double membrane that encloses the entire organelle and isolates its contents from the cellular cytoplasm; and the nuclear matrix, a network within the nucleus that adds mechanical support.
The cell nucleus contains nearly all of the cell's genome. Nuclear DNA is often organized into multiple chromosomes – long strands of DNA dotted with various proteins, such as histones, that protect and organize the DNA. The genes within these chromosomes are structured in such a way as to promote cell function. The nucleus maintains the integrity of genes and controls the activities of the cell by regulating gene expression.
Because the nuclear envelope is impermeable to large molecules, nuclear pores are required to regulate nuclear transport of molecules across the envelope. The pores cross both nuclear membranes, providing a channel through which larger molecules must be actively transported by carrier proteins while allowing free movement of small molecules and ions. Movement of large molecules such as proteins and RNA through the pores is required for both gene expression and the maintenance of chromosomes. Although the interior of the nucleus does not contain any membrane-bound subcompartments, a number of nuclear bodies exist, made up of unique proteins, RNA molecules, and particular parts of the chromosomes. The best-known of these is the nucleolus, involved in the assembly of ribosomes.
Chromosomes.
The cell nucleus contains the majority of the cell's genetic material in the form of multiple linear DNA molecules organized into structures called chromosomes. Each human cell contains roughly two meters of DNA. During most of the cell cycle these are organized in a DNA-protein complex known as chromatin, and during cell division the chromatin can be seen to form the well-defined chromosomes familiar from a karyotype. A small fraction of the cell's genes are located instead in the mitochondria.
There are two types of chromatin. Euchromatin is the less compact DNA form, and contains genes that are frequently expressed by the cell. The other type, heterochromatin, is the more compact form, and contains DNA that is infrequently transcribed. This structure is further categorized into "facultative" heterochromatin, consisting of genes that are organized as heterochromatin only in certain cell types or at certain stages of development, and "constitutive" heterochromatin that consists of chromosome structural components such as telomeres and centromeres. During interphase the chromatin organizes itself into discrete individual patches, called "chromosome territories". Active genes, which are generally found in the euchromatic region of the chromosome, tend to be located towards the chromosome's territory boundary.
Antibodies to certain types of chromatin organization, in particular, nucleosomes, have been associated with a number of autoimmune diseases, such as systemic lupus erythematosus. These are known as anti-nuclear antibodies (ANA) and have also been observed in concert with multiple sclerosis as part of general immune system dysfunction.
Nuclear structures and landmarks.
The nucleus contains nearly all of the cell's DNA, surrounded by a network of fibrous intermediate filaments called the nuclear matrix, and is enveloped in a double membrane called the nuclear envelope. The nuclear envelope separates the fluid inside the nucleus, called the nucleoplasm, from the rest of the cell. The size of the nucleus is correlated to the size of the cell, and this ratio is reported across a range of cell types and species. In eukaryotes the nucleus in many cells typically occupies 10% of the cell volume. The nucleus is the largest organelle in animal cells. In human cells, the diameter of the nucleus is approximately six micrometres (μm).
Nuclear envelope and pores.
The nuclear envelope consists of two membranes, an inner and an outer nuclear membrane, perforated by nuclear pores. Together, these membranes serve to separate the cell's genetic material from the rest of the cell contents, and allow the nucleus to maintain an environment distinct from the rest of the cell. Despite their close apposition around much of the nucleus, the two membranes differ substantially in shape and contents. The inner membrane surrounds the nuclear content, providing its defining edge. Embedded within the inner membrane, various proteins bind the intermediate filaments that give the nucleus its structure. The outer membrane encloses the inner membrane, and is continuous with the adjacent endoplasmic reticulum membrane. As part of the endoplasmic reticulum membrane, the outer nuclear membrane is studded with ribosomes that are actively translating proteins across the membrane. The space between the two membranes is called the perinuclear space, and is continuous with the endoplasmic reticulum lumen.
In a mammalian nuclear envelope there are between 3000 and 4000 nuclear pore complexes (NPCs) perforating the envelope. Each NPC contains an eightfold-symmetric ring-shaped structure at a position where the inner and outer membranes fuse. The number of NPCs can vary considerably across cell types; small glial cells have only a few hundred, while large Purkinje cells have around 20,000. The NPC provides selective transport of molecules between the nucleoplasm and the cytosol. The nuclear pore complex is composed of approximately thirty different proteins known as nucleoporins. The pores are about 60–80 million daltons in molecular mass and consist of around 50 (in yeast) to several hundred proteins (in vertebrates). The pores are 100 nm in total diameter; however, the gap through which molecules freely diffuse is only about 9 nm wide, due to the presence of regulatory systems within the center of the pore. This size selectively allows the passage of small water-soluble molecules while preventing larger molecules, such as nucleic acids and larger proteins, from inappropriately entering or exiting the nucleus. These large molecules must be actively transported into the nucleus instead. Attached to the ring is a structure called the nuclear basket that extends into the nucleoplasm, and a series of filamentous extensions that reach into the cytoplasm. Both structures serve to mediate binding to nuclear transport proteins.
Most proteins, ribosomal subunits, and some RNAs are transported through the pore complexes in a process mediated by a family of transport factors known as karyopherins. Those karyopherins that mediate movement into the nucleus are also called importins, whereas those that mediate movement out of the nucleus are called exportins. Most karyopherins interact directly with their cargo, although some use adaptor proteins. Steroid hormones such as cortisol and aldosterone, as well as other small lipid-soluble molecules involved in intercellular signaling, can diffuse through the cell membrane and into the cytoplasm, where they bind nuclear receptor proteins that are trafficked into the nucleus. There they serve as transcription factors when bound to their ligand; in the absence of a ligand, many such receptors function as histone deacetylases that repress gene expression.
Nuclear lamina.
In animal cells, two networks of intermediate filaments provide the nucleus with mechanical support: The nuclear lamina forms an organized meshwork on the internal face of the envelope, while less organized support is provided on the cytosolic face of the envelope. Both systems provide structural support for the nuclear envelope and anchoring sites for chromosomes and nuclear pores.
The nuclear lamina is composed mostly of lamin proteins. Like all proteins, lamins are synthesized in the cytoplasm and later transported to the nucleus interior, where they are assembled before being incorporated into the existing network of nuclear lamina. Lamins found on the cytosolic face of the membrane, such as emerin and nesprin, bind to the cytoskeleton to provide structural support. Lamins are also found inside the nucleoplasm where they form another regular structure, known as the "nucleoplasmic veil", that is visible using fluorescence microscopy. The actual function of the veil is not clear, although it is excluded from the nucleolus and is present during interphase. Lamin structures that make up the veil, such as LEM3, bind chromatin and disrupting their structure inhibits transcription of protein-coding genes.
Like the components of other intermediate filaments, the lamin monomer contains an alpha-helical domain used by two monomers to coil around each other, forming a dimer structure called a coiled coil. Two of these dimer structures then join side by side, in an antiparallel arrangement, to form a tetramer called a "protofilament". Eight of these protofilaments form a lateral arrangement that is twisted to form a ropelike "filament". These filaments can be assembled or disassembled in a dynamic manner, meaning that changes in the length of the filament depend on the competing rates of filament addition and removal.
Mutations in lamin genes leading to defects in filament assembly cause a group of rare genetic disorders known as "laminopathies". The most notable laminopathy is the family of diseases known as progeria, which causes the appearance of premature aging in those with the condition. The exact mechanism by which the associated biochemical changes give rise to the aged phenotype is not well understood.
Nucleolus.
The nucleolus is the largest of the discrete densely stained, membraneless structures known as nuclear bodies found in the nucleus. It forms around tandem repeats of rDNA, DNA coding for ribosomal RNA (rRNA). These regions are called nucleolar organizer regions (NOR). The main roles of the nucleolus are to synthesize rRNA and assemble ribosomes. The structural cohesion of the nucleolus depends on its activity, as ribosomal assembly in the nucleolus results in the transient association of nucleolar components, facilitating further ribosomal assembly, and hence further association. This model is supported by observations that inactivation of rDNA results in intermingling of nucleolar structures.
In the first step of ribosome assembly, the enzyme RNA polymerase I transcribes rDNA, which forms a large pre-rRNA precursor. This is cleaved into two large rRNA subunits – 5.8S and 28S – and a small rRNA subunit, 18S. The transcription, post-transcriptional processing, and assembly of rRNA occurs in the nucleolus, aided by small nucleolar RNA (snoRNA) molecules, some of which are derived from spliced introns from messenger RNAs encoding genes related to ribosomal function. The assembled ribosomal subunits are the largest structures passed through the nuclear pores.
When observed under the electron microscope, the nucleolus can be seen to consist of three distinguishable regions: the innermost "fibrillar centers" (FCs), surrounded by the "dense fibrillar component" (DFC) (that contains fibrillarin and nucleolin), which in turn is bordered by the "granular component" (GC) (that contains the protein nucleophosmin). Transcription of the rDNA occurs either in the FC or at the FC-DFC boundary, and, therefore, when rDNA transcription in the cell is increased, more FCs are detected. Most of the cleavage and modification of rRNAs occurs in the DFC, while the latter steps involving protein assembly onto the ribosomal subunits occur in the GC.
Splicing speckles.
Speckles are subnuclear structures that are enriched in pre-messenger RNA splicing factors and are located in the interchromatin regions of the nucleoplasm of mammalian cells.
At the fluorescence-microscope level they appear as irregular, punctate structures, which vary in size and shape, and when examined by electron microscopy they are seen as clusters of interchromatin granules. Speckles are dynamic structures, and both their protein and RNA-protein components can cycle continuously between speckles and other nuclear locations, including active transcription sites. Speckles can work with p53 as enhancers of gene activity to directly enhance the activity of certain genes. Moreover, speckle-associating and non-associating p53 gene targets are functionally distinct.
Studies on the composition, structure and behaviour of speckles have provided a model for understanding the functional compartmentalization of the nucleus and the organization of the gene-expression machinery, including splicing snRNPs and other splicing proteins necessary for pre-mRNA processing. Because of a cell's changing requirements, the composition and location of these bodies changes according to mRNA transcription and regulation via phosphorylation of specific proteins. The splicing speckles are also known as nuclear speckles (nuclear specks), splicing factor compartments (SF compartments), interchromatin granule clusters (IGCs), and B snurposomes.
B snurposomes are found in the amphibian oocyte nuclei and in "Drosophila melanogaster" embryos. B snurposomes appear alone or attached to the Cajal bodies in the electron micrographs of the amphibian nuclei. While nuclear speckles were originally thought to be storage sites for the splicing factors, a more recent study demonstrated that organizing genes and pre-mRNA substrates near speckles increases the kinetic efficiency of pre-mRNA splicing, ultimately boosting protein levels by modulation of splicing.
Cajal bodies and gems.
A nucleus typically contains between one and ten compact structures called Cajal bodies or coiled bodies (CB), whose diameter measures between 0.2 μm and 2.0 μm depending on the cell type and species. When seen under an electron microscope, they resemble balls of tangled thread and are dense foci of distribution for the protein coilin. CBs are involved in a number of different roles relating to RNA processing, specifically small nucleolar RNA (snoRNA) and small nuclear RNA (snRNA) maturation, and histone mRNA modification.
Similar to Cajal bodies are Gemini of Cajal bodies, or gems, whose name is derived from the Gemini constellation in reference to their close "twin" relationship with CBs. Gems are similar in size and shape to CBs, and in fact are virtually indistinguishable under the microscope. Unlike CBs, gems do not contain small nuclear ribonucleoproteins (snRNPs), but do contain a protein called survival of motor neuron (SMN) whose function relates to snRNP biogenesis. Gems are believed to assist CBs in snRNP biogenesis, though it has also been suggested from microscopy evidence that CBs and gems are different manifestations of the same structure. Later ultrastructural studies have shown gems to be twins of Cajal bodies with the difference being in the coilin component; Cajal bodies are SMN positive and coilin positive, and gems are SMN positive and coilin negative.
Other nuclear bodies.
Beyond the nuclear bodies described above (the nucleolus, nuclear speckles, and Cajal bodies), the nucleus contains a number of other nuclear bodies. These include polymorphic interphase karyosomal associations (PIKA), promyelocytic leukaemia (PML) bodies, and paraspeckles. Although little is known about a number of these domains, they are significant in that they show that the nucleoplasm is not a uniform mixture, but rather contains organized functional subdomains.
Other subnuclear structures appear as part of abnormal disease processes. For example, the presence of small intranuclear rods has been reported in some cases of nemaline myopathy. This condition typically results from mutations in actin, and the rods themselves consist of mutant actin as well as other cytoskeletal proteins.
PIKA and PTF domains.
PIKA domains, or polymorphic interphase karyosomal associations, were first described in microscopy studies in 1991. Their function remains unclear, though they do not appear to be associated with active DNA replication, transcription, or RNA processing. They have been found to often associate with discrete domains defined by dense localization of the transcription factor PTF, which promotes transcription of small nuclear RNA (snRNA).
PML-nuclear bodies.
Promyelocytic leukemia nuclear bodies (PML-nuclear bodies) are spherical bodies found scattered throughout the nucleoplasm, measuring around 0.1–1.0 μm. They are known by a number of other names, including nuclear domain 10 (ND10), Kremer bodies, and PML oncogenic domains. PML-nuclear bodies are named after one of their major components, the promyelocytic leukemia protein (PML). They are often seen in the nucleus in association with Cajal bodies and cleavage bodies. Pml-/- mice, which are unable to create PML-nuclear bodies, develop normally without obvious ill effects, showing that PML-nuclear bodies are not required for most essential biological processes.
Paraspeckles.
Discovered by Fox et al. in 2002, paraspeckles are irregularly shaped compartments in the interchromatin space of the nucleus. First documented in HeLa cells, where there are generally 10–30 per nucleus, paraspeckles are now known to also exist in all human primary cells, transformed cell lines, and tissue sections. Their name is derived from their distribution in the nucleus; the "para" is short for parallel and the "speckles" refers to the splicing speckles to which they are always in close proximity.
Paraspeckles sequester nuclear proteins and RNA and thus appear to function as a molecular sponge that is involved in the regulation of gene expression. Furthermore, paraspeckles are dynamic structures that are altered in response to changes in cellular metabolic activity. They are transcription dependent and in the absence of RNA Pol II transcription, the paraspeckle disappears and all of its associated protein components (PSP1, p54nrb, PSP2, CFI(m)68, and PSF) form a crescent shaped perinucleolar cap in the nucleolus. This phenomenon is demonstrated during the cell cycle. In the cell cycle, paraspeckles are present during interphase and during all of mitosis except for telophase. During telophase, when the two daughter nuclei are formed, there is no RNA Pol II transcription so the protein components instead form a perinucleolar cap.
Perichromatin fibrils.
Perichromatin fibrils are visible only under the electron microscope. They are located next to transcriptionally active chromatin and are hypothesized to be the sites of active pre-mRNA processing.
Clastosomes.
Clastosomes are small nuclear bodies (0.2–0.5 μm) described as having a thick ring-shape due to the peripheral capsule around these bodies. This name is derived from the Greek "klastos" (κλαστός), broken and "soma" (σῶμα), body. Clastosomes are not typically present in normal cells, making them hard to detect. They form under high proteolytic conditions within the nucleus and degrade once there is a decrease in activity or if cells are treated with proteasome inhibitors. The scarcity of clastosomes in cells indicates that they are not required for proteasome function. Osmotic stress has also been shown to cause the formation of clastosomes. These nuclear bodies contain catalytic and regulatory subunits of the proteasome and its substrates, indicating that clastosomes are sites for degrading proteins.
Function.
The nucleus provides a site for genetic transcription that is segregated from the location of translation in the cytoplasm, allowing levels of gene regulation that are not available to prokaryotes. The main function of the cell nucleus is to control gene expression and mediate the replication of DNA during the cell cycle.
Cell compartmentalization.
The nuclear envelope allows control of the nuclear contents, and separates them from the rest of the cytoplasm where necessary. This is important for controlling processes on either side of the nuclear membrane: In most cases where a cytoplasmic process needs to be restricted, a key participant is removed to the nucleus, where it interacts with transcription factors to downregulate the production of certain enzymes in the pathway. This regulatory mechanism occurs in the case of glycolysis, a cellular pathway for breaking down glucose to produce energy. Hexokinase is an enzyme responsible for the first step of glycolysis, forming glucose-6-phosphate from glucose. At high concentrations of fructose-6-phosphate, a molecule made later from glucose-6-phosphate, a regulator protein removes hexokinase to the nucleus, where it forms a transcriptional repressor complex with nuclear proteins to reduce the expression of genes involved in glycolysis.
In order to control which genes are being transcribed, the cell separates some transcription factor proteins responsible for regulating gene expression from physical access to the DNA until they are activated by other signaling pathways. This prevents even low levels of inappropriate gene expression. For example, in the case of NF-κB-controlled genes, which are involved in most inflammatory responses, transcription is induced in response to a signal pathway such as that initiated when the signaling molecule TNF-α binds to a cell membrane receptor, resulting in the recruitment of signalling proteins and, eventually, activation of the transcription factor NF-κB. A nuclear localisation signal on the NF-κB protein allows it to be transported through the nuclear pore and into the nucleus, where it stimulates the transcription of the target genes.
The compartmentalization allows the cell to prevent translation of unspliced mRNA. Eukaryotic mRNA contains introns that must be removed before being translated to produce functional proteins. The splicing is done inside the nucleus before the mRNA can be accessed by ribosomes for translation. Without the nucleus, ribosomes would translate newly transcribed (unprocessed) mRNA, resulting in malformed and nonfunctional proteins.
Replication.
Replication of DNA has been found to occur in a localised manner within the cell nucleus, taking place during the S phase of interphase. Contrary to the traditional view of replication forks moving along stationary DNA, a concept of "replication factories" has emerged, in which replication forks are concentrated in immobilised 'factory' regions through which the template DNA strands pass like conveyor belts.
Gene expression.
Gene expression first involves transcription, in which DNA is used as a template to produce RNA. In the case of genes encoding proteins, that RNA produced from this process is messenger RNA (mRNA), which then needs to be translated by ribosomes to form a protein. As ribosomes are located outside the nucleus, mRNA produced needs to be exported.
Since the nucleus is the site of transcription, it also contains a variety of proteins that either directly mediate transcription or are involved in regulating the process. These proteins include helicases, which unwind the double-stranded DNA molecule to facilitate access to it; RNA polymerases, which bind to the DNA promoter to synthesize the growing RNA molecule; topoisomerases, which change the amount of supercoiling in DNA, helping it wind and unwind; as well as a large variety of transcription factors that regulate expression.
Processing of pre-mRNA.
Newly synthesized mRNA molecules are known as primary transcripts or pre-mRNA. They must undergo post-transcriptional modification in the nucleus before being exported to the cytoplasm; mRNA that appears in the cytoplasm without these modifications is degraded rather than used for protein translation. The three main modifications are 5' capping, 3' polyadenylation, and RNA splicing. While in the nucleus, pre-mRNA is associated with a variety of proteins in complexes known as heterogeneous ribonucleoprotein particles (hnRNPs). Addition of the 5' cap occurs co-transcriptionally and is the first step in post-transcriptional modification. The 3' poly-adenine tail is only added after transcription is complete.
RNA splicing, carried out by a complex called the spliceosome, is the process by which introns, or regions of DNA that do not code for protein, are removed from the pre-mRNA and the remaining exons connected to re-form a single continuous molecule. This process normally occurs after 5' capping and 3' polyadenylation but can begin before synthesis is complete in transcripts with many exons. Many pre-mRNAs can be spliced in multiple ways to produce different mature mRNAs that encode different protein sequences. This process is known as alternative splicing, and allows production of a large variety of proteins from a limited amount of DNA.
Dynamics and regulation.
Nuclear transport.
The entry and exit of large molecules from the nucleus is tightly controlled by the nuclear pore complexes. Although small molecules can enter the nucleus without regulation, macromolecules such as RNA and proteins require association with karyopherins called importins to enter the nucleus and exportins to exit. "Cargo" proteins that must be translocated from the cytoplasm to the nucleus contain short amino acid sequences known as nuclear localization signals, which are bound by importins, while those transported from the nucleus to the cytoplasm carry nuclear export signals bound by exportins. The ability of importins and exportins to transport their cargo is regulated by GTPases, enzymes that hydrolyze the molecule guanosine triphosphate (GTP) to release energy. The key GTPase in nuclear transport is Ran, which is bound to either GTP or GDP (guanosine diphosphate), depending on whether it is located in the nucleus or the cytoplasm. Whereas importins depend on RanGTP to dissociate from their cargo, exportins require RanGTP in order to bind to their cargo.
Nuclear import depends on the importin binding its cargo in the cytoplasm and carrying it through the nuclear pore into the nucleus. Inside the nucleus, RanGTP acts to separate the cargo from the importin, allowing the importin to exit the nucleus and be reused. Nuclear export is similar, as the exportin binds the cargo inside the nucleus in a process facilitated by RanGTP, exits through the nuclear pore, and separates from its cargo in the cytoplasm.
Specialized export proteins exist for translocation of mature mRNA and tRNA to the cytoplasm after post-transcriptional modification is complete. This quality-control mechanism is important due to these molecules' central role in protein translation. Mis-expression of a protein due to incomplete excision of introns or mis-incorporation of amino acids could have negative consequences for the cell; thus, incompletely modified RNA that reaches the cytoplasm is degraded rather than used in translation.
Assembly and disassembly.
During its lifetime, a nucleus may be broken down or destroyed, either in the process of cell division or as a consequence of apoptosis (the process of programmed cell death). During these events, the structural components of the nucleus — the envelope and lamina — can be systematically degraded.
In most cells, the disassembly of the nuclear envelope marks the end of the prophase of mitosis. However, this disassembly of the nucleus is not a universal feature of mitosis and does not occur in all cells. Some unicellular eukaryotes (e.g., yeasts) undergo so-called closed mitosis, in which the nuclear envelope remains intact. In closed mitosis, the daughter chromosomes migrate to opposite poles of the nucleus, which then divides in two. The cells of higher eukaryotes, however, usually undergo open mitosis, which is characterized by breakdown of the nuclear envelope. The daughter chromosomes then migrate to opposite poles of the mitotic spindle, and new nuclei reassemble around them.
At a certain point during the cell cycle in open mitosis, the cell divides to form two cells. In order for this process to be possible, each of the new daughter cells must have a full set of genes, a process requiring replication of the chromosomes as well as segregation of the separate sets. This occurs by the replicated chromosomes, the sister chromatids, attaching to microtubules, which in turn are attached to different centrosomes. The sister chromatids can then be pulled to separate locations in the cell. In many cells, the centrosome is located in the cytoplasm, outside the nucleus; the microtubules would be unable to attach to the chromatids in the presence of the nuclear envelope. Therefore, in the early stages of the cell cycle, beginning in prophase and lasting until around prometaphase, the nuclear membrane is dismantled. Likewise, during the same period, the nuclear lamina is also disassembled, a process regulated by phosphorylation of the lamins by protein kinases such as the CDC2 protein kinase. Towards the end of the cell cycle, the nuclear membrane is reformed, and around the same time, the nuclear lamina is reassembled by dephosphorylating the lamins.
However, in dinoflagellates, the nuclear envelope remains intact, the centrosomes are located in the cytoplasm, and the microtubules come in contact with chromosomes, whose centromeric regions are incorporated into the nuclear envelope (the so-called closed mitosis with extranuclear spindle). In many other protists (e.g., ciliates, sporozoans) and fungi, the centrosomes are intranuclear, and their nuclear envelope also does not disassemble during cell division.
Apoptosis is a controlled process in which the cell's structural components are destroyed, resulting in death of the cell. Changes associated with apoptosis directly affect the nucleus and its contents, for example, in the condensation of chromatin and the disintegration of the nuclear envelope and lamina. The destruction of the lamin networks is controlled by specialized apoptotic proteases called caspases, which cleave the lamin proteins and, thus, degrade the nucleus' structural integrity. Lamin cleavage is sometimes used as a laboratory indicator of caspase activity in assays for early apoptotic activity. Cells that express mutant caspase-resistant lamins are deficient in nuclear changes related to apoptosis, suggesting that lamins play a role in initiating the events that lead to apoptotic degradation of the nucleus. Inhibition of lamin assembly itself is an inducer of apoptosis.
The nuclear envelope acts as a barrier that prevents both DNA and RNA viruses from entering the nucleus. Some viruses require access to proteins inside the nucleus in order to replicate and/or assemble. DNA viruses, such as herpesviruses, replicate and assemble in the cell nucleus, and exit by budding through the inner nuclear membrane. This process is accompanied by disassembly of the lamina on the nuclear face of the inner membrane.
Disease-related dynamics.
Initially, it was suspected that immunoglobulins in general, and autoantibodies in particular, do not enter the nucleus. There is now a body of evidence that under pathological conditions (e.g. lupus erythematosus) IgG can enter the nucleus.
Nuclei per cell.
Most eukaryotic cell types usually have a single nucleus, but some have no nuclei, while others have several. This can result from normal development, as in the maturation of mammalian red blood cells, or from faulty cell division.
Anucleated cells.
An anucleated cell contains no nucleus and is, therefore, incapable of dividing to produce daughter cells. The best-known anucleated cell is the mammalian red blood cell, or erythrocyte, which also lacks other organelles such as mitochondria, and serves primarily as a transport vessel to ferry oxygen from the lungs to the body's tissues. Erythrocytes mature through erythropoiesis in the bone marrow, where they lose their nuclei, organelles, and ribosomes. The nucleus is expelled during the process of differentiation from an erythroblast to a reticulocyte, which is the immediate precursor of the mature erythrocyte. The presence of mutagens may induce the release of some immature "micronucleated" erythrocytes into the bloodstream. Anucleated cells can also arise from flawed cell division in which one daughter lacks a nucleus and the other has two nuclei.
In flowering plants, this condition occurs in sieve tube elements.
Multinucleated cells.
Multinucleated cells contain multiple nuclei. Most acantharean species of protozoa and some fungi in mycorrhizae have naturally multinucleated cells. Other examples include the intestinal parasites in the genus "Giardia", which have two nuclei per cell. Ciliates have two kinds of nuclei in a single cell, a somatic macronucleus and a germline micronucleus. In humans, skeletal muscle cells, also called myocytes, become multinucleated during development, forming a syncytium; the resulting arrangement of nuclei near the periphery of the cells allows maximal intracellular space for myofibrils. Other multinucleate cells in the human body are osteoclasts, a type of bone cell. Multinucleated and binucleated cells can also be abnormal in humans; for example, cells arising from the fusion of monocytes and macrophages, known as giant multinucleated cells, sometimes accompany inflammation and are also implicated in tumor formation.
A number of dinoflagellates are known to have two nuclei. Unlike other multinucleated cells these nuclei contain two distinct lineages of DNA: one from the dinoflagellate and the other from a symbiotic diatom.
Evolution.
As the major defining characteristic of the eukaryotic cell, the nucleus's evolutionary origin has been the subject of much speculation. Four major hypotheses have been proposed to explain the existence of the nucleus, although none have yet earned widespread support.
The first model, known as the "syntrophic model", proposes that a symbiotic relationship between archaea and bacteria created the nucleus-containing eukaryotic cell. (Organisms of the Archaeal and Bacterial domains have no cell nucleus.) It is hypothesized that the symbiosis originated when ancient archaea, similar to modern methanogenic archaea, invaded and lived within bacteria similar to modern myxobacteria, eventually forming the early nucleus. This theory is analogous to the accepted theory for the origin of eukaryotic mitochondria and chloroplasts, which are thought to have developed from a similar endosymbiotic relationship between proto-eukaryotes and aerobic bacteria. One possibility is that the nuclear membrane arose as a new membrane system following the origin of mitochondria in an archaebacterial host. The nuclear membrane may have served to protect the genome from damaging reactive oxygen species produced by the protomitochondria. The archaeal origin of the nucleus is supported by observations that archaea and eukarya have similar genes for certain proteins, including histones. Observations that myxobacteria are motile, can form multicellular complexes, and possess kinases and G proteins similar to those of eukarya support a bacterial origin for the eukaryotic cell.
A second model proposes that proto-eukaryotic cells evolved from bacteria without an endosymbiotic stage. This model is based on the existence of modern Planctomycetota bacteria that possess a nuclear structure with primitive pores and other compartmentalized membrane structures. A similar proposal states that a eukaryote-like cell, the chronocyte, evolved first and phagocytosed archaea and bacteria to generate the nucleus and the eukaryotic cell.
The most controversial model, known as "viral eukaryogenesis", posits that the membrane-bound nucleus, along with other eukaryotic features, originated from the infection of a prokaryote by a virus. The suggestion is based on similarities between eukaryotes and viruses such as linear DNA strands, mRNA capping, and tight binding to proteins (analogizing histones to viral envelopes). One version of the proposal suggests that the nucleus evolved in concert with phagocytosis to form an early cellular "predator". Another variant proposes that eukaryotes originated from early archaea infected by poxviruses, on the basis of observed similarity between the DNA polymerases in modern poxviruses and eukaryotes. It has been suggested that the unresolved question of the evolution of sex could be related to the viral eukaryogenesis hypothesis.
A more recent proposal, the "exomembrane hypothesis", suggests that the nucleus instead originated from a single ancestral cell that evolved a second exterior cell membrane; the interior membrane enclosing the original cell then became the nuclear membrane and evolved increasingly elaborate pore structures for passage of internally synthesized cellular components such as ribosomal subunits.
History.
The nucleus was the first organelle to be discovered. What is most likely the oldest preserved drawing dates back to the early microscopist Antonie van Leeuwenhoek (1632–1723). He observed a "lumen", the nucleus, in the red blood cells of salmon. Unlike mammalian red blood cells, those of other vertebrates still contain nuclei.
The nucleus was also described by Franz Bauer in 1804 and in more detail in 1831 by Scottish botanist Robert Brown in a talk at the Linnean Society of London. Brown was studying orchids under the microscope when he observed an opaque area, which he called the "areola" or "nucleus", in the cells of the flower's outer layer. He did not suggest a potential function.
In 1838, Matthias Schleiden proposed that the nucleus plays a role in generating cells, and thus introduced the name "cytoblast" ("cell builder"). He believed that he had observed new cells assembling around "cytoblasts". Franz Meyen was a strong opponent of this view, having already described cells multiplying by division and believing that many cells would have no nuclei. The idea that cells can be generated de novo, by the "cytoblast" or otherwise, contradicted work by Robert Remak (1852) and Rudolf Virchow (1855), who decisively propagated the new paradigm that cells are generated solely by cells ("Omnis cellula e cellula"). The function of the nucleus remained unclear.
Between 1877 and 1878, Oscar Hertwig published several studies on the fertilization of sea urchin eggs, showing that the nucleus of the sperm enters the oocyte and fuses with its nucleus. This was the first time it was suggested that an individual develops from a (single) nucleated cell. This was in contradiction to Ernst Haeckel's theory that the complete phylogeny of a species would be repeated during embryonic development, including generation of the first nucleated cell from a "monerula", a structureless mass of primordial protoplasm ("Urschleim"). Therefore, the necessity of the sperm nucleus for fertilization was discussed for quite some time. However, Hertwig confirmed his observation in other animal groups, including amphibians and molluscs. Eduard Strasburger produced the same results for plants in 1884. This paved the way to assign the nucleus an important role in heredity. In 1873, August Weismann postulated the equivalence of the maternal and paternal germ "cells" for heredity. The function of the nucleus as carrier of genetic information became clear only later, after mitosis was discovered and the Mendelian rules were rediscovered at the beginning of the 20th century; the chromosome theory of heredity was therefore developed.
|
6237
|
248739
|
https://en.wikipedia.org/wiki?curid=6237
|
Christmas
|
Christmas is an annual festival commemorating the birth of Jesus Christ, observed primarily on December 25 as a religious and cultural celebration among billions of people around the world. A liturgical feast central to Christianity, Christmas preparation begins on the First Sunday of Advent and it is followed by Christmastide, which historically in the West lasts twelve days and culminates on Twelfth Night. Christmas Day is a public holiday in many countries, is observed religiously by a majority of Christians, as well as celebrated culturally by many non-Christians, and forms an integral part of the annual holiday season.
The traditional Christmas narrative recounted in the New Testament, known as the Nativity of Jesus, says that Jesus was born in Bethlehem, in accordance with messianic prophecies. When Joseph and Mary arrived in the city, the inn had no room, and so they were offered a stable where the Christ Child was soon born, with angels proclaiming this news to shepherds, who then spread the word.
There are different hypotheses regarding the date of Jesus's birth. In the early fourth century, the church fixed the date as December 25, the date of the winter solstice in the Roman Empire. It is nine months after the Annunciation on March 25, also the Roman date of the spring equinox. Most Christians celebrate on December 25 in the Gregorian calendar, which has been adopted almost universally in the civil calendars used in countries throughout the world. However, some Eastern Christian Churches celebrate Christmas on December 25 of the older Julian calendar, which currently corresponds to January 7 in the Gregorian calendar. For Christians, celebrating that God came into the world in the form of man to atone for the sins of humanity is more important than knowing Jesus's exact birth date.
The customs associated with Christmas in various countries have a mix of pre-Christian, Christian, and secular themes and origins. Popular holiday traditions include gift giving; completing an Advent calendar or Advent wreath; Christmas music and caroling; watching Christmas movies; viewing a Nativity play; an exchange of Christmas cards; attending church services; a special meal; and displaying various Christmas decorations, including Christmas trees, Christmas lights, nativity scenes, poinsettias, garlands, wreaths, mistletoe, and holly. Additionally, several related and often interchangeable figures, known as Santa Claus, Father Christmas, Saint Nicholas, and Christkind, are associated with bringing gifts to children during the Christmas season and have their own body of traditions and lore. Because gift-giving and many other aspects of the Christmas festival involve heightened economic activity, the holiday has become a significant event and a key sales period for retailers and businesses. Over the past few centuries, Christmas has had a steadily growing economic effect in many regions of the world.
Etymology.
The English word "Christmas" is a shortened form of 'Christ's Mass'. The word is recorded as in 1038 and in 1131. (genitive ) is from the Greek (, 'Christ'), a translation of the Hebrew (, 'Messiah'), meaning 'anointed'; and is from the Latin , the celebration of the Eucharist.
The form "Christenmas" was also used during some periods, but is now considered archaic and dialectal. The term derives from Middle English , meaning 'Christian mass'. "Xmas" is an abbreviation of "Christmas" found particularly in print, based on the initial letter chi (Χ) in the Greek , although some style guides discourage its use. This abbreviation has a precedent in Middle English (where is another abbreviation of the Greek word).
Other names.
The holiday has had various other English names throughout its history. The Anglo-Saxons referred to the feast as "midwinter", or, more rarely, as (from the Latin below). "Nativity", meaning 'birth', is from the Latin . In Old English, ('Yule') referred to the period corresponding to December and January, which was eventually equated with Christian Christmas. 'Noel' (also 'Nowel' or 'Nowell', as in "The First Nowell") entered English in the late 14th century and is from the Old French or , itself ultimately from the Latin meaning 'birth (day)'.
"Koleda" is the traditional Slavic name for Christmas and the period from Christmas to Epiphany or, more generally, to Slavic Christmas-related rituals, some dating to pre-Christian times.
Nativity.
The gospels of Luke and Matthew describe Jesus as being born in Bethlehem to the Virgin Mary. In the Gospel of Luke, Joseph and Mary travel from Nazareth to Bethlehem in order to be counted for a census, and Jesus is born there and placed in a manger. Angels proclaim him a savior for all people, and shepherds come to adore him. In the Gospel of Matthew, by contrast, magi follow a star to Bethlehem to bring gifts to Jesus, born the king of the Jews. King Herod orders the massacre of all the boys less than two years old in Bethlehem, but the family flees to Egypt and later returns to Nazareth.
History.
Early and medieval era.
In the 2nd century, the "earliest church records" indicate that "Christians were remembering and celebrating the birth of the Lord", an "observance [that] sprang up organically from the authentic devotion of ordinary believers"; although "they did not agree upon a set date". The earliest document to place Jesus's birthday on December 25 is the Chronograph of 354 (also called the Calendar of Filocalus), which also names it as the birthday of Sol Invictus (the 'Invincible Sun').
Liturgical historians generally agree that this part of the text was written in Rome in AD 336. This is consistent with the assertion that the date was formally set by Pope Julius I, bishop of Rome from 337 to 352. Though Christmas did not appear on the lists of festivals given by the early Christian writers Irenaeus and Tertullian, the early Church Fathers John Chrysostom, Augustine of Hippo, and Jerome attested to December 25 as the date of Christmas toward the end of the fourth century. December 25 was the traditional date of the winter solstice in the Roman Empire, where most Christians lived, and the Roman festival (birthday of Sol Invictus) had been held on this date since 274 AD.
In the East, the birth of Jesus was celebrated in connection with the Epiphany on January 6. This holiday was not primarily about Christ's birth, but rather his baptism. Christmas was promoted in the East as part of the revival of Orthodox Christianity that followed the death of the pro-Arian Emperor Valens at the Battle of Adrianople in 378. The feast was introduced in Constantinople in 379, in Antioch by John Chrysostom towards the end of the fourth century, probably in 388, and in Alexandria in the following century. The Georgian Iadgari demonstrates that Christmas was celebrated in Jerusalem by the sixth century.
In the Early Middle Ages, Christmas Day was overshadowed by Epiphany, which in western Christianity focused on the visit of the magi. However, the medieval calendar was dominated by Christmas-related holidays. The forty days before Christmas became the "forty days of St. Martin" (which began on November 11, the feast of St. Martin of Tours), now known as Advent. In Italy, former Saturnalian traditions were attached to Advent. Around the 12th century, these traditions transferred again to the Twelve Days of Christmas (December 25 – January 5); a time that appears in the liturgical calendars as Christmastide or Twelve Holy Days.
In 567, the Council of Tours put in place the season of Christmastide, proclaiming "the twelve days from Christmas to Epiphany as a sacred and festive season, and established the duty of Advent fasting in preparation for the feast". This was done in order to solve the "administrative problem for the Roman Empire as it tried to coordinate the solar Julian calendar with the lunar calendars of its provinces in the east".
The prominence of Christmas Day increased gradually after Charlemagne was crowned Emperor on Christmas Day in 800. King Edmund the Martyr was anointed on Christmas in 855 and King William I of England was crowned on Christmas Day 1066.
By the High Middle Ages, the holiday had become so prominent that chroniclers routinely noted where various magnates celebrated Christmas. King Richard II of England hosted a Christmas feast in 1377 at which 28 oxen and 300 sheep were eaten. The Yule boar was a common feature of medieval Christmas feasts. Caroling also became popular, and was originally performed by a group of dancers who sang. The group was composed of a lead singer and a ring of dancers that provided the chorus. Various writers of the time condemned caroling as lewd, indicating that the unruly traditions of Saturnalia and Yule may have continued in this form. "Misrule"—drunkenness, promiscuity, gambling—was also an important aspect of the festival. In England, gifts were exchanged on New Year's Day (a custom at the royal court), and there was special Christmas ale.
Christmas during the Middle Ages was a public festival that incorporated ivy, holly, and other evergreens. Christmas gift-giving during the Middle Ages was usually between people with legal relationships, such as tenant and landlord. The annual indulgence in eating, dancing, singing, sporting, and card playing escalated in England, and by the 17th century the Christmas season featured lavish dinners, elaborate masques, and pageants. In 1607, King James I insisted that a play be acted on Christmas night and that the court indulge in games. It was during the Reformation in 16th–17th-century Europe that many Protestants changed the gift bringer to the Christ Child or "Christkindl", and the date of giving gifts changed from December 6 to Christmas Eve.
17th and 18th centuries.
Following the Protestant Reformation, many of the new denominations, including the Anglican Church and Lutheran Church, continued to celebrate Christmas. In 1629, the Anglican poet John Milton penned "On the Morning of Christ's Nativity", a poem that has since been read by many during Christmastide. Donald Heinz, a professor at California State University, Chico, states that Martin Luther "inaugurated a period in which Germany would produce a unique culture of Christmas, much copied in North America". Among the congregations of the Dutch Reformed Church, Christmas was celebrated as one of the principal evangelical feasts.
However, in 17th century England, some groups such as the Puritans strongly condemned the celebration of Christmas, considering it a Catholic invention and the "trappings of popery" or the "rags of the Beast". In contrast, the established Anglican Church "pressed for a more elaborate observance of feasts, penitential seasons, and saints' days. The calendar reform became a major point of tension between the Anglican party and the Puritan party". The Catholic Church also responded, promoting the festival in a more religiously oriented form. King Charles I of England directed his noblemen and gentry to return to their landed estates in midwinter to keep up their old-style Christmas generosity. Following the Parliamentarian victory over Charles I during the English Civil War, England's Puritan rulers banned Christmas in 1647.
Protests followed as pro-Christmas rioting broke out in several cities, and for weeks Canterbury was controlled by the rioters, who decorated doorways with holly and shouted royalist slogans. Football, among the sports the Puritans banned on a Sunday, was also used as a rebellious force: when Puritans outlawed Christmas in England in December 1647, the crowd brought out footballs as a symbol of festive misrule. The book "The Vindication of Christmas" (London, 1652) argued against the Puritans, making note of Old English Christmas traditions: dinner, roast apples on the fire, card playing, dances with "plow-boys" and "maidservants", old Father Christmas, and carol singing. During the ban, semi-clandestine religious services marking Christ's birth continued to be held, and people sang carols in secret.
Christmas was restored as a legal holiday in England with the Restoration of King Charles II in 1660 when Puritan legislation was declared null and void, with Christmas again freely celebrated in England. Many Calvinist clergymen disapproved of Christmas celebrations. As such, in Scotland, the Presbyterian Church of Scotland discouraged the observance of Christmas, and though James VI commanded its celebration in 1618, attendance at church was scant. The Parliament of Scotland officially abolished the observance of Christmas in 1640, claiming that the church had been "purged of all superstitious observation of days". Whereas in England, Wales and Ireland Christmas Day is a common law holiday, having been a customary holiday since time immemorial, it was not until 1871 that it was designated a bank holiday in Scotland. Following the Restoration of Charles II, "Poor Robin's Almanack" contained the lines: "Now thanks to God for Charles return, / Whose absence made old Christmas mourn. / For then we scarcely did it know, / Whether it Christmas were or no". The diary of James Woodforde, from the latter half of the 18th century, details the observance of Christmas and celebrations associated with the season over a number of years.
As in England, Puritans in Colonial America staunchly opposed the observation of Christmas. The Pilgrims of New England pointedly spent their first December 25 in the New World working normally. Puritans such as Cotton Mather condemned Christmas both because scripture did not mention its observance and because Christmas celebrations of the day often involved boisterous behavior. Many non-Puritans in New England deplored the loss of the holidays enjoyed by the laboring classes in England. Christmas observance was outlawed in Boston in 1659. The ban on Christmas observance was revoked in 1681 by English governor Edmund Andros, but it was not until the mid-19th century that celebrating Christmas became fashionable in the Boston region.
At the same time, Christian residents of Virginia and New York observed the holiday freely. Pennsylvania Dutch settlers, predominantly Moravian settlers of Bethlehem, Nazareth, and Lititz in Pennsylvania and the Wachovia settlements in North Carolina, were enthusiastic celebrators of Christmas. The Moravians in Bethlehem had the first Christmas trees in America as well as the first Nativity Scenes. Christmas fell out of favor in the United States after the American Revolution, when it was considered an English custom.
George Washington attacked Hessian (German) mercenaries on the day after Christmas during the Battle of Trenton on December 26, 1776, Christmas being much more popular in Germany than in America at this time.
With the atheistic Cult of Reason in power during the era of Revolutionary France, Christian Christmas religious services were banned and the three kings cake was renamed the "equality cake" under anticlerical government policies.
19th century.
In the early 19th century, Christmas festivities and services gradually spread with the rise of the Oxford Movement in the Church of England, which emphasized the centrality of Christmas in Christianity and charity to the poor, along with Washington Irving, Charles Dickens, and other authors emphasizing family, children, kind-heartedness, gift-giving, and Santa Claus (for Irving), or Father Christmas (for Dickens). An indication of how slowly this increased recognition of Christmas spread, however, is seen in the fact that "in twenty of the years between 1790 and 1835, "The Times" did not mention Christmas at all."
In the early-19th century, writers imagined Tudor-period Christmas as a time of heartfelt celebration. In 1835, Thomas Hervey and Robert Seymour published "The Christmas Book" in which they introduced what has been called a "national Christmas narrative." In his book, Hervey asserted: "the revels of merry England are fast subsiding into silence, and her many customs wearing gradually away." In 1843, Charles Dickens wrote the novel "A Christmas Carol", which helped revive the "spirit" of Christmas and seasonal merriment. Its instant popularity played a major role in portraying Christmas as a holiday emphasizing family, goodwill, and compassion.
Dickens sought to construct Christmas as a family-centered festival of generosity, linking "worship and feasting, within a context of social reconciliation". Superimposing his humanitarian vision of the holiday, in what has been termed "Carol Philosophy", Dickens influenced many aspects of Christmas that are celebrated today in Western culture, such as family gatherings, seasonal food and drink, dancing, games, and a festive generosity of spirit. It has been said that Dickens' breakthrough with "A Christmas Carol" was his "ingenious pairing of seasonal fiction and seasonal [book] sales." A prominent phrase from the tale, "Merry Christmas", was popularized following the appearance of the story. This coincided with the appearance of the Oxford Movement and the growth of Anglo-Catholicism, which led a revival in traditional rituals and religious observances.
The term "Scrooge" became a synonym for miser, with the phrase "Bah! Humbug!" becoming emblematic of a dismissive attitude of the festive spirit. In 1843, the first commercial Christmas card was produced by Sir Henry Cole. The revival of the Christmas Carol began with William Sandys's "Christmas Carols Ancient and Modern" (1833), with the first appearance in print of "The First Noel", "I Saw Three Ships", "Hark the Herald Angels Sing" and "God Rest Ye Merry, Gentlemen", popularized in Dickens's "A Christmas Carol".
In Britain, the Christmas tree was introduced in the early 19th century by the German-born Queen Charlotte. In 1832, the future Queen Victoria wrote about her delight at having a Christmas tree, hung with lights, ornaments, and presents placed round it. After her marriage to her German cousin Prince Albert, by 1841 the custom became more widespread throughout Britain. An image of the British royal family with their Christmas tree at Windsor Castle created a sensation when it was published in the "Illustrated London News" in 1848. A modified version of this image was published in "Godey's Lady's Book", Philadelphia in 1850. By the 1870s, putting up a Christmas tree had become common in America.
In America, interest in Christmas had been revived in the 1820s by several short stories by Washington Irving which appear in his "The Sketch Book of Geoffrey Crayon, Gent." and "Old Christmas". Irving's stories depicted harmonious, warm-hearted English Christmas festivities that he experienced while staying in Aston Hall, Birmingham, England, and that had largely been abandoned, and he used the tract "Vindication of Christmas" (1652), with its account of Old English Christmas traditions, which he had transcribed into his journal, as a format for his stories.
In 1822, Clement Clarke Moore wrote the poem "A Visit From St. Nicholas" (popularly known by its first line: "Twas the Night Before Christmas"). The poem helped popularize the tradition of exchanging gifts, and seasonal Christmas shopping began to assume economic importance. This also started the cultural conflict between the holiday's spiritual significance and its associated commercialism that some see as corrupting the holiday. In her 1850 book "The First Christmas in New England", Harriet Beecher Stowe includes a character who complains that the true meaning of Christmas was lost in a shopping spree.
While the celebration of Christmas was not yet customary in some regions in the U.S., Henry Wadsworth Longfellow detected "a transition state about Christmas here in New England" in 1856. "The old puritan feeling prevents it from being a cheerful, hearty holiday; though every year makes it more so". In Reading, Pennsylvania, a newspaper remarked in 1861, "Even our presbyterian friends who have hitherto steadfastly ignored Christmas—threw open their church doors and assembled in force to celebrate the anniversary of the Savior's birth."
The First Congregational Church of Rockford, Illinois, "although of genuine Puritan stock", was 'preparing for a grand Christmas jubilee', a news correspondent reported in 1864. By 1860, fourteen states including several from New England had adopted Christmas as a legal holiday. In 1875, Louis Prang introduced the Christmas card to Americans. He has been called the "father of the American Christmas card". On June 28, 1870, Christmas was formally declared a United States federal holiday.
20th and 21st centuries.
During the First World War and particularly (but not exclusively) in 1914, a series of informal truces took place for Christmas between opposing armies. The truces, which were organised spontaneously by fighting men, ranged from promises not to shoot (shouted at a distance in order to ease the pressure of war for the day) to friendly socializing, gift giving and even sport between enemies. These incidents became a well known and semi-mythologised part of popular memory. They have been described as a symbol of common humanity even in the darkest of situations and used to demonstrate to children the ideals of Christmas.
Under the state atheism of the Soviet Union, after its foundation in 1917, Christmas celebrations—along with other Christian holidays—were prohibited in public. During the 1920s, 1930s, and 1940s, the League of Militant Atheists encouraged school pupils to campaign against Christmas traditions, such as the Christmas tree, as well as other Christian holidays, including Easter; the League established an antireligious holiday to be the 31st of each month as a replacement. At the height of this persecution, in 1929, on Christmas Day, children in Moscow were encouraged to spit on crucifixes as a protest against the holiday. Instead, the importance of the holiday and all its trappings, such as the Christmas tree and gift-giving, was transferred to the New Year. It was not until the dissolution of the Soviet Union in 1991 that the persecution ended and Orthodox Christmas became a state holiday again for the first time in Russia after seven decades.
European History Professor Joseph Perry wrote that likewise, in Nazi Germany, "because Nazi ideologues saw organized religion as an enemy of the totalitarian state, propagandists sought to deemphasize—or eliminate altogether—the Christian aspects of the holiday" and that "Propagandists tirelessly promoted numerous Nazified Christmas songs, which replaced Christian themes with the regime's racial ideologies". In 1991, the Gubbio Christmas Tree, in Italy, 650 meters high and decorated with over 700 lights, entered the Guinness Book of Records as the tallest Christmas tree in the world.
As Christmas celebrations began to spread globally even outside traditional Christian cultures, several Muslim-majority countries began to ban the observance of Christmas, claiming it undermined Islam. In 2023, public Christmas celebrations were cancelled in Bethlehem, the city synonymous with the birth of Jesus. Palestinian leaders of various Christian denominations cited the ongoing Israel–Gaza war in their unanimous decision to cancel celebrations.
Observance and traditions.
Christmas Day is celebrated as a major festival and public holiday in countries around the world, including many whose populations are mostly non-Christian. In some non-Christian areas, periods of former colonial rule introduced the celebration (e.g. Hong Kong); in others, Christian minorities or foreign cultural influences have led populations to observe the holiday. Countries such as Japan, where Christmas is popular despite there being only a small number of Christians, have adopted many of the cultural aspects of Christmas, such as gift-giving, decorations, and Christmas trees. A similar example is in Turkey, being Muslim-majority and with a small number of Christians, where Christmas trees and decorations tend to line public streets during the festival.
Many popular customs associated with Christmas developed independently of the commemoration of Jesus's birth, with some claiming that certain elements are Christianized and have origins in pre-Christian festivals that were celebrated by pagan populations who were later converted to Christianity; other scholars reject these claims and affirm that Christmas customs largely developed in a Christian context. The prevailing atmosphere of Christmas has also continually evolved since the holiday's inception, ranging from a sometimes raucous, drunken, carnival-like state in the Middle Ages, to a tamer family-oriented and children-centered theme introduced in a 19th-century transformation. The celebration of Christmas was banned on more than one occasion within certain groups, such as the Puritans and Jehovah's Witnesses (who do not celebrate birthdays in general), due to concerns that it was too unbiblical. Celtic winter herbs such as mistletoe and ivy, and the custom of kissing under a mistletoe, are common in modern Christmas celebrations in the English-speaking countries.
The pre-Christian Germanic peoples—including the Anglo-Saxons and the Norse—celebrated a winter festival called Yule, held in the late December to early January period, yielding modern English "yule", today used as a synonym for "Christmas". In Germanic language-speaking areas, numerous elements of modern Christmas folk custom and iconography may have originated from Yule, including the Yule log, Yule boar, and the Yule goat. Often leading a ghostly procession through the sky (the Wild Hunt), the long-bearded god Odin is referred to as "the Yule one" and "Yule father" in Old Norse texts, while other gods are referred to as "Yule beings". On the other hand, as there are no reliable existing references to a Christmas log prior to the 16th century, the burning of the Christmas block may have been an early modern invention by Christians unrelated to the pagan practice.
Among countries with a strong Christian tradition, a variety of Christmas celebrations have developed that incorporate regional and local cultures. For example, in eastern Europe Christmas celebrations incorporated pre-Christian traditions such as the Koleda, which shares parallels with the Christmas carol.
Church attendance.
Christmas Day (inclusive of its vigil, Christmas Eve) is a Festival in the Lutheran Churches, a solemnity in the Roman Catholic Church, and a Principal Feast of the Anglican Communion. Other Christian denominations do not rank their feast days but nevertheless place importance on Christmas Eve/Christmas Day, as with other Christian feasts like Easter, Ascension Day, and Pentecost. As such, for Christians, attending a Christmas Eve or Christmas Day church service plays an important part in the recognition of the Christmas season. Christmas, along with Easter, is the period of the highest annual church attendance. A 2010 survey by LifeWay Christian Resources found that six in ten Americans attend church services during this time. In the United Kingdom, the Church of England reported an estimated attendance of 2.5 million people at Christmas services in 2015.
Decorations.
Nativity scenes are known from 10th-century Rome. They were popularised by Saint Francis of Assisi from 1223, quickly spreading across Europe. Different types of decorations developed across the Christian world, dependent on local tradition and available resources, and can vary from simple representations of the crib to far more elaborate sets – renowned manger scene traditions include the colourful in Poland, which imitate Kraków's historical buildings as settings, the elaborate Italian (, and ), or the Provençal crèches in southern France, using hand-painted terracotta figurines called . In certain parts of the world, notably Sicily, living nativity scenes following the tradition of Saint Francis are a popular alternative to static crèches. The first commercially produced decorations appeared in Germany in the 1860s, inspired by paper chains made by children. In countries where a representation of the Nativity scene is very popular, people are encouraged to compete and create the most original or realistic ones. Within some families, the pieces used to make the representation are considered a valuable family heirloom.
The traditional colors of Christmas decorations are red, green, and gold. Red symbolizes the blood of Jesus, which was shed in his crucifixion; green symbolizes eternal life, and in particular the evergreen tree, which does not lose its leaves in the winter; and gold is the first color associated with Christmas, as one of the three gifts of the Magi, symbolizing royalty.
The Christmas tree was first used by German Lutherans in the 16th century, with records indicating that a Christmas tree was placed in the Cathedral of Strassburg in 1539, under the leadership of the Protestant Reformer, Martin Bucer. In the United States, these "German Lutherans brought the decorated Christmas tree with them; the Moravians put lighted candles on those trees". When decorating the Christmas tree, many individuals place a star at the top of the tree symbolizing the Star of Bethlehem, a fact recorded by "The School Journal" in 1897. Professor David Albert Jones of Oxford University writes that in the 19th century, it became popular for people to also use an angel to top the Christmas tree in order to symbolize the angels mentioned in the accounts of the Nativity of Jesus. Additionally, in the context of a Christian celebration of Christmas, the Christmas tree, being evergreen in colour, is symbolic of Christ, who offers eternal life; the candles or lights on the tree represent the Light of the World—Jesus—born in Bethlehem. Christian services for family use and public worship have been published for the blessing of a Christmas tree, after it has been erected. The Christmas tree is considered by some as Christianisation of pagan tradition and ritual surrounding the Winter Solstice, which included the use of evergreen boughs, and an adaptation of pagan tree worship; according to eighth-century biographer Æddi Stephanus, Saint Boniface (634–709), who was a missionary in Germany, took an ax to an oak tree dedicated to Thor and pointed out a fir tree, which he stated was a more fitting object of reverence because it pointed to heaven and it had a triangular shape, which he said was symbolic of the Trinity. The English language phrase "Christmas tree" is first recorded in 1835 and represents an importation from the German language.
Since the 16th century, the poinsettia, a plant native to Mexico, has been associated with Christmas, carrying the Christian symbolism of the Star of Bethlehem; in that country it is known in Spanish as the "Flower of the Holy Night". Other popular holiday plants include holly, mistletoe, red amaryllis, and Christmas cactus. Along with a Christmas tree, the interior of a home may be decorated with these plants, along with garlands and evergreen foliage. The display of Christmas villages has also become a tradition in many homes during this season. The outside of houses may be decorated with lights and sometimes with illuminated sleighs, snowmen, and other Christmas figures. Mistletoe features prominently in European myth and folklore (for example, the legend of Baldr); it is an evergreen parasitic plant that grows on trees, especially apple and poplar, and turns golden when it is dried. It is customary to hang a sprig of mistletoe in the house at Christmas, and anyone standing underneath it may be kissed.
Other traditional decorations include bells, candles, candy canes, stockings, wreaths, and angels. Wreaths and candles in each window are among the more traditional Christmas displays. Christmas wreaths are made up of a concentric assortment of leaves, usually from an evergreen. Candles in each window are meant to demonstrate that Christians believe that Jesus Christ is the ultimate light of the world.
Christmas lights and banners may be hung along streets, music played over loudspeakers, and Christmas trees placed in prominent places. It is common in many parts of the world for town squares and consumer shopping areas to sponsor and display decorations. Rolls of brightly colored paper with secular or religious Christmas motifs are manufactured to wrap gifts. In some countries, Christmas decorations are traditionally taken down on Twelfth Night.
Nativity play.
The tradition of the Nativity scene comes from Italy. One of the earliest representations in art of the nativity was found in the early Christian Roman catacomb of Saint Valentine. It dates to about AD 380. Another, of similar date, is beneath the pulpit in Sant'Ambrogio, Milan.
For the Christian celebration of Christmas, the viewing of the Nativity play is one of the oldest Christmastime traditions, with the first reenactment of the Nativity of Jesus taking place in A.D. 1223 in the Italian town of Greccio. In that year, Francis of Assisi assembled a Nativity scene outside of his church in Italy and children sang Christmas carols celebrating the birth of Jesus.
Each year, this grew larger, and people travelled from afar to see Francis' depiction of the Nativity of Jesus that came to feature drama and music. Nativity plays eventually spread throughout all of Europe, where they remained popular. Christmas Eve and Christmas Day church services often came to feature Nativity plays, as did schools and theatres. In France, Germany, Mexico, and Spain, Nativity plays are often reenacted outdoors in the streets.
Music and carols.
The earliest extant specifically Christmas hymns appear in fourth-century Rome. Latin hymns such as "Veni redemptor gentium", written by Ambrose, Archbishop of Milan, were austere statements of the theological doctrine of the Incarnation in opposition to Arianism. "Corde natus ex parentis" ("Of the Father's love begotten") by the Spanish poet Prudentius (d. 413) is still sung in some churches today. In the 9th and 10th centuries, the Christmas "Sequence" or "Prose" was introduced in North European monasteries, developing under Bernard of Clairvaux into a sequence of rhymed stanzas. In the 12th century the Parisian monk Adam of St. Victor began to derive music from popular songs, introducing something closer to the traditional Christmas carol. Christmas carols in English appear in a 1426 work of John Awdlay, who lists twenty-five "caroles of Cristemas", probably sung by groups of 'wassailers', who went from house to house.
The songs now known specifically as carols were originally communal folk songs sung during celebrations such as "harvest tide" as well as Christmas. It was only later that carols began to be sung in church. Traditionally, carols have often been based on medieval chord patterns, and it is this that gives them their uniquely characteristic musical sound. Some carols like "Personent hodie" and "Good King Wenceslas" can be traced directly back to the Middle Ages. They are among the oldest musical compositions still regularly sung. "Adeste Fideles" (O Come All Ye Faithful) appeared in its current form in the mid-18th century.
The singing of carols increased in popularity after the Protestant Reformation in the Lutheran areas of Europe, as the Reformer Martin Luther wrote carols and encouraged their use in worship, in addition to spearheading the practice of caroling outside the Mass. The 18th-century English reformer Charles Wesley, a founder of Methodism, understood the importance of music to Christian worship. In addition to setting many psalms to melodies, he wrote texts for at least three Christmas carols. The best known was originally entitled "Hark! How All the Welkin Rings", later renamed "Hark! The Herald Angels Sing".
Christmas seasonal songs of a secular nature emerged in the late 18th century. The Welsh melody for "Deck the Halls" dates from 1794, with the lyrics added by Scottish musician Thomas Oliphant in 1862, and the American "Jingle Bells" was copyrighted in 1857. Other popular carols include "The First Noel", "God Rest Ye Merry, Gentlemen", "The Holly and the Ivy", "I Saw Three Ships", "In the Bleak Midwinter", "Joy to the World", "Once in Royal David's City" and "While Shepherds Watched Their Flocks". In the 19th and 20th centuries, African American spirituals and Christmas songs rooted in the spiritual tradition became more widely known. An increasing number of seasonal holiday songs were commercially produced in the 20th century, including jazz and blues variations. In addition, there was a revival of interest in early music, from groups singing folk music, such as The Revels, to performers of early medieval and classical music.
One of the most ubiquitous festive songs is "We Wish You a Merry Christmas", which originated in the West Country of England in the 1930s. Radio coverage of Christmas music has ranged from variety shows of the 1940s and 1950s to modern-day stations that play Christmas music exclusively from late November through December 25. Hollywood movies have featured new Christmas music, such as "White Christmas" in "Holiday Inn" and "Rudolph the Red-Nosed Reindeer". Traditional carols have also been included in Hollywood films, such as "Hark! The Herald Angels Sing" in "It's a Wonderful Life" (1946), and "Silent Night" in "A Christmas Story".
Traditional cuisine.
A special Christmas family meal is traditionally an important part of the holiday's celebration, and the food served varies greatly from country to country. Some regions have special meals for Christmas Eve, such as Sicily, where 12 kinds of fish are served. In the United Kingdom and countries influenced by its traditions, a standard Christmas meal includes turkey, goose or other large bird, gravy, potatoes, vegetables, sometimes bread, and cider. Special desserts are also prepared, such as Christmas pudding, mince pies, fruit cake and Yule log cake.
In Poland and Scandinavia, fish is often used for the traditional main course, but richer meat such as lamb is increasingly served. In Sweden, a special variety of smörgåsbord is common, in which ham, meatballs, and herring play a prominent role. In Germany, France, and Austria, goose and pork are favored. Beef, ham, and chicken in various recipes are popular worldwide. The Maltese traditionally serve "Imbuljuta tal-Qastan", a chocolate and chestnut beverage, after Midnight Mass and throughout the Christmas season. Slovenes prepare the traditional Christmas bread potica; the "bûche de Noël" is served in France and "panettone" in Italy, along with elaborate tarts and cakes. "Panettone", an Italian type of sweet bread and fruitcake originally from Milan, is usually prepared and enjoyed for Christmas and New Year in Western, Southern, and Southeastern Europe, as well as in South America, Eritrea, Australia and North America.
The eating of sweets and chocolates has become popular worldwide, and sweeter Christmas delicacies include the German "stollen", marzipan cake or candy, and Jamaican rum fruit cake. As one of the few fruits traditionally available to northern countries in winter, oranges have been long associated with special Christmas foods. Eggnog is a sweetened dairy-based beverage traditionally made with milk, cream, sugar, and whipped eggs (which gives it a frothy texture). Spirits such as brandy, rum, or bourbon are often added. The finished serving is often garnished with a sprinkling of ground cinnamon or nutmeg.
Cards.
Christmas cards are illustrated messages of greeting exchanged between friends and family members during the weeks preceding Christmas Day. The traditional greeting reads "wishing you a Merry Christmas and a Happy New Year", much like that of the first commercial Christmas card, produced by Sir Henry Cole in London in 1843. The custom of sending them has become popular among a wide cross-section of people with the emergence of the modern trend towards exchanging E-cards.
Christmas cards are purchased in considerable quantities and feature artwork, commercially designed and relevant to the season. The content of the design might relate directly to the Christmas narrative, with depictions of the Nativity of Jesus, or Christian symbols such as the Star of Bethlehem, or a white dove, which can represent both the Holy Spirit and Peace on Earth. Other Christmas cards are more secular and can depict Christmas traditions, figures such as Santa Claus, objects directly associated with Christmas such as candles, holly, and baubles, or a variety of images associated with the season, such as Christmastide activities, snow scenes, and the wildlife of the northern winter.
Commemorative stamps.
A number of nations have issued commemorative stamps at Christmastide. Postal customers will often use these stamps to mail Christmas cards, and they are popular with philatelists. These stamps are regular postage stamps, unlike Christmas seals, and are valid for postage year-round. They usually go on sale sometime between early October and early December and are printed in considerable quantities.
Christmas seals.
Christmas seals were first issued to raise funding to fight and bring awareness to tuberculosis. The first Christmas seal was issued in Denmark in 1904, and since then other countries have issued their own Christmas seals.
Gift giving.
The exchanging of gifts is one of the core aspects of the modern Christmas celebration, making it the most profitable time of year for retailers and businesses throughout the world. On Christmas, people exchange gifts based on the Christian tradition associated with Saint Nicholas, and the gifts of gold, frankincense, and myrrh which were given to the baby Jesus by the Magi. The practice of gift giving in the Roman celebration of Saturnalia may have influenced Christian customs, but on the other hand the Christian "core dogma of the Incarnation, however, solidly established the giving and receiving of gifts as the structural principle of that recurrent yet unique event", because it was the Biblical Magi, "together with all their fellow men, who received the gift of God through man's renewed participation in the divine life". However, Thomas J. Talley holds that the Roman Emperor Aurelian placed the alternate festival on December 25 in order to compete with the growing Christian Church, which had already been celebrating Christmas on that date.
Gift-bearing figures.
A number of figures are associated with Christmas and the seasonal giving of gifts. Among these are Father Christmas, also known as Santa Claus (derived from the Dutch for Saint Nicholas), Père Noël, and the Weihnachtsmann; Saint Nicholas or Sinterklaas; the Christkind; Kris Kringle; Joulupukki; tomte/nisse; Babbo Natale; Saint Basil; and Ded Moroz. The Scandinavian tomte (also called nisse) is sometimes depicted as a gnome instead of Santa Claus.
The best known of these figures today is the red-dressed Santa Claus, of diverse origins. The name 'Santa Claus' can be traced back to the Dutch Sinterklaas ('Saint Nicholas'). Nicholas was a 4th-century Greek bishop of Myra, a city in the Roman province of Lycia, whose ruins lie near modern Demre in southwest Turkey. Among other saintly attributes, he was noted for the care of children, generosity, and the giving of gifts. His feast day, December 6, came to be celebrated in many countries with the giving of gifts.
Saint Nicholas traditionally appeared in bishop's attire, accompanied by helpers, inquiring about the behaviour of children during the past year before deciding whether they deserved a gift or not. By the 13th century, Saint Nicholas was well known in the Netherlands, and the practice of gift-giving in his name spread to other parts of central and southern Europe. At the Reformation in 16th- and 17th-century Europe, many Protestants changed the gift bringer to the Christ Child or Christkindl, corrupted in English to 'Kris Kringle', and the date of giving gifts changed from December 6 to Christmas Eve.
The modern popular image of Santa Claus, however, was created in the United States, and in particular in New York. The transformation was accomplished with the aid of notable contributors including Washington Irving and the German-American cartoonist Thomas Nast (1840–1902). Following the American Revolutionary War, some of the inhabitants of New York City sought out symbols of the city's non-English past. New York had originally been established as the Dutch colonial town of New Amsterdam and the Dutch Sinterklaas tradition was reinvented as Saint Nicholas.
Current tradition in several Latin American countries (such as Venezuela and Colombia) holds that while Santa makes the toys, he then gives them to the Baby Jesus, who is the one who actually delivers them to the children's homes, a reconciliation between traditional religious beliefs and the iconography of Santa Claus imported from the United States.
In Italy's South Tyrol, Austria, the Czech Republic, Southern Germany, Hungary, Liechtenstein, Slovakia, and Switzerland, the Christkind (Ježíšek in Czech, Jézuska in Hungarian and Ježiško in Slovak) brings the presents. Greek children get their presents from Saint Basil on New Year's Eve, the eve of that saint's liturgical feast. The German St. Nikolaus is not identical with the Weihnachtsmann (who is the German version of Santa Claus / Father Christmas). St. Nikolaus wears a bishop's dress and still brings small gifts (usually candies, nuts, and fruits) on December 6 and is accompanied by Knecht Ruprecht. Although many parents around the world routinely teach their children about Santa Claus and other gift bringers, some have come to reject this practice, considering it deceptive.
Multiple gift-giver figures exist in Poland, varying between regions and individual families. St Nicholas (Święty Mikołaj) dominates Central and North-East areas, the Starman (Gwiazdor) is most common in Greater Poland, Baby Jesus (Dzieciątko) is unique to Upper Silesia, with the Little Star (Gwiazdka) and the Little Angel (Aniołek) being common in the South and the South-East. Grandfather Frost (Dziadek Mróz) is less commonly accepted in some areas of Eastern Poland. Across all of Poland, St Nicholas is the gift giver on Saint Nicholas Day on December 6.
Sport.
Christmas during the Middle Ages was a public festival whose annual indulgences included sport. When Puritans outlawed Christmas in England in December 1647, the crowd brought out footballs as a symbol of festive misrule. The Orkney Christmas Day Ba' tradition continues. In the former top tier of English football, home-and-away Christmas Day and Boxing Day double headers were often played, guaranteeing football clubs large crowds by allowing many working people their only chance to watch a game. Champions Preston North End faced Aston Villa on Christmas Day 1889, and the last December 25 fixture in England was in 1965, when Blackpool beat Blackburn Rovers 4–2. One of the most memorable images of the Christmas truce during World War I was the games of football played between the opposing sides on Christmas Day 1914.
More recently, in the United States, both the NFL and the NBA have held fixtures on Christmas Day.
Christmas in China.
During the late Qing dynasty, the "Shanghai News" referred to Christmas by a variety of terms. In 1872, it initially called Christmas "Jesus' birthday", but from 1873 to 1881 it used terms such as "Western countries' Winter Solstice" and "Western peoples' Winter Solstice", before finally settling on "Foreign Winter Solstice" from 1882 onwards. This term was gradually replaced by the now standard term "Festival of the birth of the Holy One" during the early years of the twentieth century.
Scandinavia and the Nordics.
In Scandinavia—Denmark, Norway, Sweden—where Lutheranism is dominant, Christmas ("jul") is celebrated on 24 December. In Sweden, it is traditional for companies to host a Christmas buffet lunch (julbord or jullunch) for their employees a week before Christmas. To prevent food poisoning during the holiday season, Swedish newspapers annually publish reports and laboratory tests warning the public against leaving cold cuts, mayonnaise, and other perishable foods at room temperature. Christmas in Sweden is a time to indulge in festive meals, with roasted ham being the centerpiece of the feast. However, the exact day for enjoying this treat varies across regions, with each area having its own traditions. Another well-established custom in Sweden is tuning in to watch a special Disney television program at precisely 3 p.m. on December 24.
In Norway, the Christmas feast is held on December 24, with each region offering its own special dishes for Christmas dinner. After the meal, "Julenissen" (where "jule" means Christmas and "nissen" refers to a mythical elf in Norwegian folklore) brings gifts to well-behaved children. Following a quiet family gathering on December 25, another grand celebration takes place on Boxing Day, December 26, where children go door-to-door visiting neighbors and receiving treats.
Choice of date.
Theories.
There are several theories as to why December 25 was chosen as the date for Christmas. However, theology professor Susan Roll notes that "no liturgical historian [...] goes so far as to deny that it has any sort of relation with the sun, the winter solstice and the popularity of solar worship in the later Roman Empire". The early Church linked Jesus Christ to the Sun and referred to him as the 'Sun of Righteousness' () prophesied by Malachi. In the early fifth century, Augustine of Hippo and Maximus of Turin preached that it was fitting to celebrate Christ's birth at the winter solstice, because it marked the point when the hours of daylight begin to grow.
The 'history of religions' or 'substitution' theory suggests that the Church chose December 25 as Christ's birthday to appropriate the Roman winter solstice festival (the birthday of Sol Invictus, the 'Invincible Sun'), held on this date since 274 AD, predating the earliest evidence of Christmas on that date. Gary Forsythe, Professor of Ancient History, says that this festival followed "the seven-day period of the Saturnalia (December 17–23), Rome's most joyous holiday season since Republican times, characterized by parties, banquets, and exchanges of gifts". Roll says that "the specific nature of the relation" between Christmas and the "Natalis Solis Invicti" has not yet been "conclusively proven from extant texts".
The 'calculation theory' suggests that December 25 was calculated as nine months after a date chosen for Jesus's conception: 25 March, the Roman date of the spring equinox, which later became the Feast of the Annunciation.
Date according to Julian calendar.
Some jurisdictions of the Eastern Orthodox Church, including those of Russia, Georgia, North Macedonia, Montenegro, Serbia, and Jerusalem, mark feasts using the older Julian calendar. From Christmas 1900 until Christmas 2099 inclusive, there is a difference of 13 days between the Julian calendar and the modern Gregorian calendar. As a result, December 25 on the Julian calendar currently corresponds to January 7 on the calendar used by most governments and people in everyday life. Therefore, the aforementioned Orthodox Christians mark December 25 (and thus Christmas) on the day that is internationally considered to be January 7.
However, following the Council of Constantinople in 1923, other Orthodox Christians, such as those belonging to the jurisdictions of Constantinople, Bulgaria, Greece, Romania, Antioch, Alexandria, Albania, Cyprus, Finland, and the Orthodox Church in America, among others, began using the Revised Julian calendar, which at present corresponds exactly to the Gregorian calendar. Therefore, these Orthodox Christians mark December 25 (and thus Christmas) on the same day that is internationally considered to be December 25.
A further complication is added by the fact that the Armenian Apostolic Church continues the original ancient Eastern Christian practice of celebrating the birth of Christ not as a separate holiday, but on the same day as the celebration of his baptism (Theophany), which is on January 6. This is a public holiday in Armenia, and it is held on the same day that is internationally considered to be January 6, because since 1923 the Armenian Church in Armenia has used the Gregorian calendar.
However, there is also a small Armenian Patriarchate of Jerusalem, which maintains the traditional Armenian custom of celebrating the birth of Christ on the same day as Theophany (January 6), but uses the Julian calendar for the determination of that date. As a result, this church celebrates "Christmas" (more properly called Theophany) on the day that is considered January 19 on the Gregorian calendar in use by the majority of the world.
Following the 2022 invasion of its territory by Russia, Ukraine officially moved its Christmas date from January 7 to December 25, to distance itself from the Russian Orthodox Church that had supported Russia's invasion. This followed the Orthodox Church of Ukraine formally adopting the Revised Julian calendar for fixed feasts and solemnities.
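The 13-day offset discussed above can be illustrated with a short Python sketch (an illustrative aid, not from the article; the function name and the fixed-offset approach are assumptions, and the offset holds only for Julian years 1900–2099 as noted earlier):

from datetime import date, timedelta

JULIAN_TO_GREGORIAN_OFFSET = timedelta(days=13)  # valid for Julian years 1900-2099

def gregorian_date_of_julian_christmas(julian_year: int) -> date:
    """Gregorian date on which December 25 of the given Julian year falls.
    The Julian date is written with its own year-month-day numbers, then shifted."""
    return date(julian_year, 12, 25) + JULIAN_TO_GREGORIAN_OFFSET

print(gregorian_date_of_julian_christmas(2024))  # 2025-01-07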
Table of dates.
There are four different dates used by different Christian groups to mark the birth of Christ, given in the table below.
Economy.
Christmas is typically a peak selling season for retailers in many nations around the world; sales increase dramatically during this time as people purchase gifts, decorations, and supplies to celebrate. In the United States, the "Christmas shopping season" starts as early as October. In Canada, merchants begin advertising campaigns before Halloween (October 31) and step up their marketing following Remembrance Day on November 11. In the UK and Ireland, the Christmas shopping season starts from mid-November, around the time when high street Christmas lights are turned on. A concept devised by retail entrepreneur David Lewis, the first Christmas grotto opened in Lewis's department store in Liverpool, England in 1879. In the United States, it has been calculated that a quarter of all personal spending takes place during the Christmas/holiday shopping season. Figures from the US Census Bureau reveal that expenditure in department stores nationwide rose from $20.8 billion in November 2004 to $31.9 billion in December 2004, an increase of 54 percent. In other sectors, the pre-Christmas increase in spending was even greater, there being a November–December buying surge of 100 percent in bookstores and 170 percent in jewelry stores. In the same year employment in American retail stores rose from 1.6 million to 1.8 million in the two months leading up to Christmas. Industries completely dependent on Christmas include Christmas cards, of which 1.9 billion are sent in the United States each year, and live Christmas trees, of which 20.8 million were cut in the US in 2002. In the UK in 2010, up to £8 billion was expected to be spent online at Christmas, approximately a quarter of total retail festive sales.
In most Western nations, Christmas Day is the least active day of the year for business and commerce; almost all retail, commercial and institutional businesses are closed, and almost all industries cease activity (more than any other day of the year), whether laws require such or not. In England and Wales, the Christmas Day (Trading) Act 2004 prevents all large shops from trading on Christmas Day. Similar legislation was approved in Scotland in 2007. Film studios release many high-budget movies during the holiday season, including Christmas films, fantasy movies or high-tone dramas with high production values, in hopes of maximizing the chance of nominations for the Academy Awards.
One economist's analysis calculates that despite increased overall spending, Christmas is a deadweight loss under orthodox microeconomic theory, because of the effect of gift-giving. This loss is calculated as the difference between what the gift giver spent on the item and what the gift receiver would have paid for the item. It is estimated that in 2001, Christmas resulted in a $4billion deadweight loss in the US alone. Because of complicating factors, this analysis is sometimes used to discuss possible flaws in current microeconomic theory. Other deadweight losses include the effects of Christmas on the environment and the fact that material gifts are often perceived as white elephants, imposing cost for upkeep and storage and contributing to clutter.
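As a rough illustration of the measure described above (illustrative numbers and function name only, not figures from the studies cited), the loss for a single gift is simply the gap between its purchase price and the recipient's valuation:

def deadweight_loss(price_paid, recipient_valuation):
    """The measure described in the text for a single gift: what the giver spent
    minus what the recipient would have paid for the same item."""
    return price_paid - recipient_valuation

# Illustrative numbers only: a gift bought for 50 that the recipient values at 35.
print(deadweight_loss(50.0, 35.0))  # 15.0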
Controversies.
Christmas has been the subject of controversy and attacks from various sources, both Christian and non-Christian. Historically, it was prohibited by Puritans during their ascendancy in the Commonwealth of England (1647–1660) and in Colonial New England, where the Puritans outlawed the celebration of Christmas in 1659 on the grounds that Christmas was not mentioned in Scripture and therefore violated the Reformed regulative principle of worship. The Parliament of Scotland, which was dominated by Presbyterians, passed a series of acts outlawing the observance of Christmas between 1637 and 1690; Christmas Day did not become a public holiday in Scotland until 1871. Today, some conservative Reformed denominations such as the Free Presbyterian Church of Scotland and the Reformed Presbyterian Church of North America likewise reject the celebration of Christmas based on the regulative principle and what they see as its non-Scriptural origin. The celebration of Christmas is also prohibited among Jehovah's Witnesses, as the Governing Body holds that Christmas is pagan in origin and without basis in Scripture. Christmas celebrations have also been prohibited by atheist states such as the Soviet Union and more recently majority Muslim states such as Somalia, Tajikistan and Brunei.
Some Christians and organizations such as Pat Robertson's American Center for Law and Justice cite alleged attacks on Christmas (dubbing them a "war on Christmas"). Such groups claim that any specific mention of the term "Christmas" or its religious aspects is being increasingly censored, avoided, or discouraged by a number of advertisers, retailers, government (prominently schools), and other public and private organizations. One controversy is the occurrence of Christmas trees being renamed Holiday trees. In the U.S. there has been a tendency to replace the greeting "Merry Christmas" with "Happy Holidays", which is considered more inclusive during a season that also includes the Jewish celebration of Hanukkah. In the U.S. and Canada, where the use of the term "Holidays" is most prevalent, opponents have denounced its usage and the avoidance of the term "Christmas" as being politically correct. In 1984, the U.S. Supreme Court ruled in "Lynch v. Donnelly" that a Christmas display (which included a Nativity scene) owned and displayed by the city of Pawtucket, Rhode Island, did not violate the First Amendment. American Muslim scholar Abdul Malik Mujahid has said that Muslims must treat Christmas with respect, even if they disagree with it.
The government of the People's Republic of China officially espouses state atheism, and has conducted antireligious campaigns to this end. In December 2018, officials raided Christian churches prior to Christmastide and coerced them to close; Christmas trees and Santa Clauses were also forcibly removed.
Contraction mapping
https://en.wikipedia.org/wiki?curid=6239
In mathematics, a contraction mapping, or contraction or contractor, on a metric space ("M", "d") is a function "f" from "M" to itself, with the property that there is some real number 0 ≤ "k" < 1 such that for all "x" and "y" in "M",
"d"("f"("x"), "f"("y")) ≤ "k" "d"("x", "y").
The smallest such value of "k" is called the Lipschitz constant of "f". Contractive maps are sometimes called Lipschitzian maps. If the above condition is instead satisfied for
"k" ≤ 1, then the mapping is said to be a non-expansive map.
More generally, the idea of a contractive mapping can be defined for maps between metric spaces. Thus, if ("M", "d") and ("N", "d"') are two metric spaces, then "f" : "M" → "N" is a contractive mapping if there is a constant 0 ≤ "k" < 1 such that
"d"'("f"("x"), "f"("y")) ≤ "k" "d"("x", "y")
for all "x" and "y" in "M".
Every contraction mapping is Lipschitz continuous and hence uniformly continuous (for a Lipschitz continuous function, the constant "k" is no longer necessarily less than 1).
A contraction mapping has at most one fixed point. Moreover, the Banach fixed-point theorem states that every contraction mapping on a non-empty complete metric space has a unique fixed point, and that for any "x" in "M" the iterated function sequence "x", "f" ("x"), "f" ("f" ("x")), "f" ("f" ("f" ("x"))), ... converges to the fixed point. This concept is very useful for iterated function systems where contraction mappings are often used. Banach's fixed-point theorem is also applied in proving the existence of solutions of ordinary differential equations, and is used in one proof of the inverse function theorem.
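As a minimal illustration of the iteration described above, the following Python sketch (not part of the article; the example map, starting point, and tolerance are arbitrary choices) repeatedly applies a contraction on the real line and converges to its unique fixed point:

def fixed_point_iteration(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x, f(x), f(f(x)), ... until successive values differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter iterations")

# f(x) = 0.5*x + 1 is a contraction on the real line with k = 0.5; its fixed point is x = 2.
print(fixed_point_iteration(lambda x: 0.5 * x + 1, x0=0.0))  # approximately 2.0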
Contraction mappings play an important role in dynamic programming problems.
Firmly non-expansive mapping.
A non-expansive mapping with "k" ≤ 1 can be generalized to a firmly non-expansive mapping in a Hilbert space "H" if the following holds for all "x" and "y" in "H":
‖"f"("x") − "f"("y")‖² ≤ ⟨"x" − "y", "f"("x") − "f"("y")⟩
where
"d"("x", "y") = ‖"x" − "y"‖.
This is a special case of "α"-averaged nonexpansive operators with "α" = 1/2. A firmly non-expansive mapping is always non-expansive, via the Cauchy–Schwarz inequality.
The class of firmly non-expansive maps is closed under convex combinations, but not compositions. This class includes proximal mappings of proper, convex, lower-semicontinuous functions, hence it also includes orthogonal projections onto non-empty closed convex sets. The class of firmly nonexpansive operators is equal to the set of resolvents of maximally monotone operators. Surprisingly, while iterating non-expansive maps has no guarantee to find a fixed point (e.g. multiplication by -1), firm non-expansiveness is sufficient to guarantee global convergence to a fixed point, provided a fixed point exists. More precisely, if the fixed-point set Fix("f") = {"x" ∈ "H" : "f"("x") = "x"} is non-empty, then for any initial point "x"0 in "H", iterating
"x""n"+1 = "f"("x""n")
yields convergence to a fixed point "z" in Fix("f"). This convergence might be weak in an infinite-dimensional setting.
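The contrast drawn above can be seen numerically in the following Python sketch (an illustrative aid, not from the article; the rotation example and helper names are assumptions). Multiplication by −1, and more generally any rotation of the plane, is non-expansive but its iterates need not converge, whereas the 1/2-averaged map (x + f(x))/2 — firmly non-expansive by the characterization above — does converge to a fixed point:

import math

def rotate(theta):
    """Rotation of the plane by angle theta: an isometry, hence non-expansive."""
    c, s = math.cos(theta), math.sin(theta)
    return lambda p: (c * p[0] - s * p[1], s * p[0] + c * p[1])

def half_averaged(f):
    """x -> (x + f(x)) / 2, the 1/2-averaged (firmly non-expansive) version of f."""
    return lambda p: ((p[0] + f(p)[0]) / 2, (p[1] + f(p)[1]) / 2)

def iterate(f, p, n=200):
    for _ in range(n):
        p = f(p)
    return p

g = rotate(math.pi / 2)   # only fixed point is the origin; iterates of g just circle it
h = half_averaged(g)      # firmly non-expansive; iterates shrink toward the origin

print(iterate(g, (1.0, 0.0)))  # stays on the unit circle: no convergence
print(iterate(h, (1.0, 0.0)))  # components on the order of 1e-31: converges to (0, 0)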
Subcontraction map.
A subcontraction map or subcontractor is a map "f" on a metric space ("M", "d") such that
"d"("f"("x"), "f"("y")) ≤ "d"("x", "y");
"d"("f"("f"("x")), "f"("x")) < "d"("f"("x"), "x"), unless "x" = "f"("x").
If the image of a subcontractor "f" is compact, then "f" has a fixed point.
Locally convex spaces.
In a locally convex space ("E", "P") with topology given by a set "P" of seminorms, one can define for any "p" ∈ "P" a "p"-contraction as a map "f" such that there is some "k""p" < 1 such that "p"("f"("x") − "f"("y")) ≤ "k""p" "p"("x" − "y"). If "f" is a "p"-contraction for all "p" ∈ "P" and ("E", "P") is sequentially complete, then "f" has a fixed point, given as limit of any sequence "x""n"+1 = "f"("x""n"), and if ("E", "P") is Hausdorff, then the fixed point is unique.
Covalent bond
https://en.wikipedia.org/wiki?curid=6246
A covalent bond is a chemical bond that involves the sharing of electrons to form electron pairs between atoms. These electron pairs are known as shared pairs or bonding pairs. The stable balance of attractive and repulsive forces between atoms, when they share electrons, is known as covalent bonding. For many molecules, the sharing of electrons allows each atom to attain the equivalent of a full valence shell, corresponding to a stable electronic configuration. In organic chemistry, covalent bonding is much more common than ionic bonding.
Covalent bonding also includes many kinds of interactions, including σ-bonding, π-bonding, metal-to-metal bonding, agostic interactions, bent bonds, three-center two-electron bonds and three-center four-electron bonds. The term "covalence" was introduced by Irving Langmuir in 1919, with Nevil Sidgwick using "co-valent link" in the 1920s. Merriam-Webster dates the specific phrase "covalent bond" to 1939, recognizing its first known use. The prefix "co-" (jointly, partnered) indicates that "co-valent" bonds involve shared "valence", as detailed in valence bond theory.
In the molecule H2, the hydrogen atoms share the two electrons via covalent bonding. Covalency is greatest between atoms of similar electronegativities. Thus, covalent bonding does not necessarily require that the two atoms be of the same elements, only that they be of comparable electronegativity. Covalent bonding that entails the sharing of electrons over more than two atoms is said to be delocalized.
History.
The term "covalence" in regard to bonding was first used in 1919 by Irving Langmuir in a "Journal of the American Chemical Society" article entitled "The Arrangement of Electrons in Atoms and Molecules". Langmuir wrote that "we shall denote by the term "covalence" the number of pairs of electrons that a given atom shares with its neighbors."
The idea of covalent bonding can be traced several years before 1919 to Gilbert N. Lewis, who in 1916 described the sharing of electron pairs between atoms (and in 1926 he also coined the term "photon" for the smallest unit of radiant energy). He introduced the "Lewis notation" or "electron dot notation" or "Lewis dot structure", in which valence electrons (those in the outer shell) are represented as dots around the atomic symbols. Pairs of electrons located between atoms represent covalent bonds. Multiple pairs represent multiple bonds, such as double bonds and triple bonds. An alternative form of representation, not shown here, has bond-forming electron pairs represented as solid lines.
Lewis proposed that an atom forms enough covalent bonds to form a full (or closed) outer electron shell. In the diagram of methane shown here, the carbon atom has a valence of four and is, therefore, surrounded by eight electrons (the octet rule), four from the carbon itself and four from the hydrogens bonded to it. Each hydrogen has a valence of one and is surrounded by two electrons (a duet rule) – its own one electron plus one from the carbon. The numbers of electrons correspond to full shells in the quantum theory of the atom; the outer shell of a carbon atom is the "n" = 2 shell, which can hold eight electrons, whereas the outer (and only) shell of a hydrogen atom is the "n" = 1 shell, which can hold only two.
While the idea of shared electron pairs provides an effective qualitative picture of covalent bonding, quantum mechanics is needed to understand the nature of these bonds and predict the structures and properties of simple molecules. Walter Heitler and Fritz London are credited with the first successful quantum mechanical explanation of a chemical bond (molecular hydrogen) in 1927. Their work was based on the valence bond model, which assumes that a chemical bond is formed when there is good overlap between the atomic orbitals of participating atoms.
Types of covalent bonds.
Atomic orbitals (except for s orbitals) have specific directional properties leading to different types of covalent bonds. Sigma (σ) bonds are the strongest covalent bonds and are due to head-on overlapping of orbitals on two different atoms. A single bond is usually a σ bond. Pi (π) bonds are weaker and are due to lateral overlap between p (or d) orbitals. A double bond between two given atoms consists of one σ and one π bond, and a triple bond is one σ and two π bonds.
Covalent bonds are also affected by the electronegativity of the connected atoms which determines the chemical polarity of the bond. Two atoms with equal electronegativity will make nonpolar covalent bonds such as H–H. An unequal relationship creates a polar covalent bond such as with H−Cl. However polarity also requires geometric asymmetry, or else dipoles may cancel out, resulting in a non-polar molecule.
Covalent structures.
There are several types of structures for covalent substances, including individual molecules, molecular structures, macromolecular structures and giant covalent structures. Individual molecules have strong bonds that hold the atoms together, but generally, there are negligible forces of attraction between molecules. Such covalent substances are usually gases, for example, HCl, SO2, CO2, and CH4. In molecular structures, there are weak forces of attraction. Such covalent substances are low-boiling-temperature liquids (such as ethanol), and low-melting-temperature solids (such as iodine and solid CO2). Macromolecular structures have large numbers of atoms linked by covalent bonds in chains, including synthetic polymers such as polyethylene and nylon, and biopolymers such as proteins and starch. Network covalent structures (or giant covalent structures) contain large numbers of atoms linked in sheets (such as graphite), or 3-dimensional structures (such as diamond and quartz). These substances have high melting and boiling points, are frequently brittle, and tend to have high electrical resistivity. Elements that have high electronegativity, and the ability to form three or four electron pair bonds, often form such large macromolecular structures.
One- and three-electron bonds.
Bonds with one or three electrons can be found in radical species, which have an odd number of electrons. The simplest example of a 1-electron bond is found in the dihydrogen cation, H2+. One-electron bonds often have about half the bond energy of a 2-electron bond, and are therefore called "half bonds". However, there are exceptions: in the case of dilithium, the bond is actually stronger for the 1-electron Li2+ than for the 2-electron Li2. This exception can be explained in terms of hybridization and inner-shell effects.
The simplest example of three-electron bonding can be found in the helium dimer cation, He2+. It is considered a "half bond" because it consists of only one shared electron (rather than two); in molecular orbital terms, the third electron is in an anti-bonding orbital which cancels out half of the bond formed by the other two electrons. Another example of a molecule containing a 3-electron bond, in addition to two 2-electron bonds, is nitric oxide, NO. The oxygen molecule, O2, can also be regarded as having two 3-electron bonds and one 2-electron bond, which accounts for its paramagnetism and its formal bond order of 2. Chlorine dioxide and its heavier analogues bromine dioxide and iodine dioxide also contain three-electron bonds.
Molecules with odd-electron bonds are usually highly reactive. These types of bond are only stable between atoms with similar electronegativities.
Dioxygen is sometimes represented as obeying the octet rule with a double bond (O=O) containing two pairs of shared electrons. However the ground state of this molecule is paramagnetic, indicating the presence of unpaired electrons. Pauling proposed that this molecule actually contains two three-electron bonds and one normal covalent (two-electron) bond. The octet on each atom then consists of two electrons from each three-electron bond, plus the two electrons of the covalent bond, plus one lone pair of non-bonding electrons. The bond order is 1+0.5+0.5=2.
Resonance.
There are situations whereby a single Lewis structure is insufficient to explain the electron configuration in a molecule and its resulting experimentally-determined properties, hence a superposition of structures is needed. The same two atoms in such molecules can be bonded differently in different Lewis structures (a single bond in one, a double bond in another, or even none at all), resulting in a non-integer bond order. The nitrate ion is one such example with three equivalent structures. The bond between the nitrogen and each oxygen is a double bond in one structure and a single bond in the other two, so that the average bond order for each N–O interaction is (2 + 1 + 1)/3 = 4/3.
Aromaticity.
In organic chemistry, when a molecule with a planar ring obeys Hückel's rule, where the number of π electrons fit the formula 4"n" + 2 (where "n" is an integer), it attains extra stability and symmetry. In benzene, the prototypical aromatic compound, there are 6 π bonding electrons ("n" = 1, 4"n" + 2 = 6). These occupy three delocalized π molecular orbitals (molecular orbital theory) or form conjugate π bonds in two resonance structures that linearly combine (valence bond theory), creating a regular hexagon exhibiting a greater stabilization than the hypothetical 1,3,5-cyclohexatriene.
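As a small aside (not part of the article), the 4"n" + 2 counting rule above is easy to check programmatically; the function name below is an illustrative assumption:

def satisfies_huckel_rule(pi_electrons: int) -> bool:
    """True if pi_electrons equals 4n + 2 for some non-negative integer n."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

print(satisfies_huckel_rule(6))  # True: benzene, with n = 1
print(satisfies_huckel_rule(4))  # False: a 4 pi-electron ring does not fit 4n + 2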
In the case of heterocyclic aromatics and substituted benzenes, the electronegativity differences between different parts of the ring may dominate the chemical behavior of aromatic ring bonds, which otherwise are equivalent.
Hypervalence.
Certain molecules such as xenon difluoride and sulfur hexafluoride have higher coordination numbers than would be possible due to strictly covalent bonding according to the octet rule. This is explained by the three-center four-electron bond ("3c–4e") model which interprets the molecular wavefunction in terms of non-bonding highest occupied molecular orbitals in molecular orbital theory and resonance of sigma bonds in valence bond theory.
Electron deficiency.
In three-center two-electron bonds ("3c–2e") three atoms share two electrons in bonding. This type of bonding occurs in boron hydrides such as diborane (B2H6), which are often described as electron deficient because there are not enough valence electrons to form localized (2-centre 2-electron) bonds joining all the atoms. However, the more modern description using 3c–2e bonds does provide enough bonding orbitals to connect all the atoms so that the molecules can instead be classified as electron-precise.
Each such bond (2 per molecule in diborane) contains a pair of electrons which connect the boron atoms to each other in a banana shape, with a proton (the nucleus of a hydrogen atom) in the middle of the bond, sharing electrons with both boron atoms. In certain cluster compounds, so-called four-center two-electron bonds also have been postulated.
Quantum mechanical description.
After the development of quantum mechanics, two basic theories were proposed to provide a quantum description of chemical bonding: valence bond (VB) theory and molecular orbital (MO) theory. A more recent quantum description is given in terms of atomic contributions to the electronic density of states.
Comparison of VB and MO theories.
The two theories represent two ways to build up the electron configuration of the molecule. For valence bond theory, the atomic hybrid orbitals are filled with electrons first to produce a fully bonded valence configuration, followed by performing a linear combination of contributing structures (resonance) if there are several of them. In contrast, for molecular orbital theory, a linear combination of atomic orbitals is performed first, followed by filling of the resulting molecular orbitals with electrons.
The two approaches are regarded as complementary, and each provides its own insights into the problem of chemical bonding. As valence bond theory builds the molecular wavefunction out of localized bonds, it is more suited for the calculation of bond energies and the understanding of reaction mechanisms. As molecular orbital theory builds the molecular wavefunction out of delocalized orbitals, it is more suited for the calculation of ionization energies and the understanding of spectral absorption bands.
At the qualitative level, both theories contain incorrect predictions. Simple (Heitler–London) valence bond theory correctly predicts the dissociation of homonuclear diatomic molecules into separate atoms, while simple (Hartree–Fock) molecular orbital theory incorrectly predicts dissociation into a mixture of atoms and ions. On the other hand, simple molecular orbital theory correctly predicts Hückel's rule of aromaticity, while simple valence bond theory incorrectly predicts that cyclobutadiene has larger resonance energy than benzene.
Although the wavefunctions generated by both theories at the qualitative level do not agree and do not match the stabilization energy by experiment, they can be corrected by configuration interaction. This is done by combining the valence bond covalent function with the functions describing all possible ionic structures or by combining the molecular orbital ground state function with the functions describing all possible excited states using unoccupied orbitals. It can then be seen that the simple molecular orbital approach overestimates the weight of the ionic structures while the simple valence bond approach neglects them. This can also be described as saying that the simple molecular orbital approach neglects electron correlation while the simple valence bond approach overestimates it.
Modern calculations in quantum chemistry usually start from (but ultimately go far beyond) a molecular orbital rather than a valence bond approach, not because of any intrinsic superiority in the former but rather because the MO approach is more readily adapted to numerical computations. Molecular orbitals are orthogonal, which significantly increases the feasibility and speed of computer calculations compared to nonorthogonal valence bond orbitals.
Covalency from atomic contribution to the electronic density of states.
Evaluation of bond covalency is dependent on the basis set for approximate quantum-chemical methods such as COOP (crystal orbital overlap population), COHP (Crystal orbital Hamilton population), and BCOOP (Balanced crystal orbital overlap population). To overcome this issue, an alternative formulation of the bond covalency can be provided in this way.
The mass center $cm^{\mathrm{A}}(n,l,m_l,m_s)$ of an atomic orbital with quantum numbers $n$, $l$, $m_l$, $m_s$ for atom A is defined as
$$cm^{\mathrm{A}}(n,l,m_l,m_s) = \frac{\int_{E_0}^{E_1} E \, g^{\mathrm{A}}_{n,l,m_l,m_s}(E) \, dE}{\int_{E_0}^{E_1} g^{\mathrm{A}}_{n,l,m_l,m_s}(E) \, dE}$$
where $g^{\mathrm{A}}_{n,l,m_l,m_s}(E)$ is the contribution of the atomic orbital $(n,l,m_l,m_s)$ of the atom A to the total electronic density of states $g(E)$ of the solid
$$g(E) = \sum_{\mathrm{A}} \sum_{n,l} \sum_{m_l,m_s} g^{\mathrm{A}}_{n,l,m_l,m_s}(E)$$
where the outer sum runs over all atoms A of the unit cell. The energy window $[E_0, E_1]$ is chosen in such a way that it encompasses all of the relevant bands participating in the bond. If the range to select is unclear, it can be identified in practice by examining the molecular orbitals that describe the electron density along with the considered bond.
The relative position of the mass center of the $n_{\mathrm{A}} l_{\mathrm{A}}$ levels of atom A with respect to the mass center of the $n_{\mathrm{B}} l_{\mathrm{B}}$ levels of atom B is given as
$$C_{n_{\mathrm{A}} l_{\mathrm{A}},\, n_{\mathrm{B}} l_{\mathrm{B}}} = -\left| cm^{\mathrm{A}}(n_{\mathrm{A}}, l_{\mathrm{A}}) - cm^{\mathrm{B}}(n_{\mathrm{B}}, l_{\mathrm{B}}) \right|$$
where the contributions of the magnetic and spin quantum numbers are summed. According to this definition, the relative position of the A levels with respect to the B levels is
$$C_{\mathrm{A,B}} = -\left| cm^{\mathrm{A}}(l_{\mathrm{A}}) - cm^{\mathrm{B}}(l_{\mathrm{B}}) \right|$$
where, for simplicity, we omit the dependence on the principal quantum number in the notation referring to $C_{\mathrm{A,B}}$.
In this formalism, the greater the value of $C_{\mathrm{A,B}}$, the higher the overlap of the selected atomic bands, and thus the electron density described by those orbitals gives a more covalent A–B bond. The quantity $C_{\mathrm{A,B}}$ is denoted as the "covalency" of the bond, which is specified in the same units as the energy $E$.
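A minimal numerical sketch of this construction follows (not from the article; the energy grid, Gaussian-shaped projected densities of states, and function names are illustrative assumptions). It computes the two band mass centers by trapezoidal integration over a common energy window and reports their negative separation as the covalency:

import numpy as np

def integrate(y, x):
    """Trapezoidal rule on a 1-D grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def mass_center(energies, pdos):
    """DOS-weighted mean energy (centroid) of one atomic-orbital contribution."""
    return integrate(energies * pdos, energies) / integrate(pdos, energies)

def covalency(energies, pdos_a, pdos_b):
    """C_AB = -|cm_A - cm_B|: band centers lying closer together (larger C) suggest a more covalent bond."""
    return -abs(mass_center(energies, pdos_a) - mass_center(energies, pdos_b))

# Illustrative example: two Gaussian-shaped projected DOS on a shared energy window.
E = np.linspace(-10.0, 5.0, 1501)          # energy window [E0, E1], in eV
g_A = np.exp(-((E + 4.0) ** 2) / 2.0)      # band of atom A centred near -4 eV
g_B = np.exp(-((E + 3.0) ** 2) / 2.0)      # band of atom B centred near -3 eV
print(covalency(E, g_A, g_B))              # approximately -1.0 eV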
Analogous effect in nuclear systems.
An analogous effect to covalent binding is believed to occur in some nuclear systems, with the difference that the shared fermions are quarks rather than electrons. High energy proton-proton scattering cross-section indicates that quark interchange of either u or d quarks is the dominant process of the nuclear force at short distance. In particular, it dominates over the Yukawa interaction where a meson is exchanged. Therefore, covalent binding by quark interchange is expected to be the dominating mechanism of nuclear binding at small distance when the bound hadrons have covalence quarks in common.
Condensation polymer
https://en.wikipedia.org/wiki?curid=6247
In polymer chemistry, condensation polymers are any kind of polymers whose process of polymerization involves a condensation reaction (i.e. a small molecule, such as water or methanol, is produced as a byproduct). Natural proteins as well as some common plastics such as nylon and PETE are formed in this way. Condensation polymers are formed by polycondensation, when the polymer is formed by condensation reactions between species of all degrees of polymerization, or by condensative chain polymerization, when the polymer is formed by sequential addition of monomers to an active site in a chain reaction. The main alternative forms of polymerization are chain polymerization and polyaddition, both of which give addition polymers.
Condensation polymerization is a form of step-growth polymerization. Linear polymers are produced from bifunctional monomers, i.e. compounds with two reactive end-groups. Common condensation polymers include polyesters, polyamides such as nylon, polyacetals, and proteins.
Polyamides.
One important class of condensation polymers are polyamides. They arise from the reaction of carboxylic acid and an amine. Examples include nylons and proteins. When prepared from amino-carboxylic acids, e.g. amino acids, the stoichiometry of the polymerization includes co-formation of water:
n H2N-X-CO2H → [HN-X-C(O)]n + (n-1) H2O
When prepared from diamines and dicarboxylic acids, e.g. the production of nylon 66, the polymerization produces two molecules of water per repeat unit:
n H2N-X-NH2 + n HO2C-Y-CO2H → [HN-X-NHC(O)-Y-C(O)]n + (2n-1) H2O
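As a small worked example of this stoichiometry (an illustrative aid, not from the article; the monomer molar masses are standard textbook values and the function name is an assumption), the molar mass of one nylon 66 repeat unit can be estimated by adding the two monomer masses and subtracting the two water molecules lost per repeat unit:

WATER = 18.02  # g/mol

def repeat_unit_mass(diamine_mass, diacid_mass, waters_lost=2):
    """Molar mass of one repeat unit of a polyamide formed by polycondensation."""
    return diamine_mass + diacid_mass - waters_lost * WATER

# Nylon 66: hexamethylenediamine (116.21 g/mol) + adipic acid (146.14 g/mol)
print(repeat_unit_mass(116.21, 146.14))  # about 226.3 g/mol per repeat unit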
Polyesters.
Another important class of condensation polymers are polyesters. They arise from the reaction of a carboxylic acid and an alcohol. An example is polyethyleneterephthalate, the common plastic PET (recycling #1 in the USA):
n HO-X-OH + n HO2C-Y-CO2H → [O-X-O2C-Y-C(O)]n + (2n-1) H2O
Safety and environmental considerations.
Condensation polymers tend to be more biodegradable than addition polymers. The peptide or ester bonds between monomers can be hydrolysed, especially in the presence of catalysts or bacterial enzymes.
Timeline of computing
https://en.wikipedia.org/wiki?curid=6249
Timeline of computing presents events in the history of computing organized by year and grouped into six topic areas: predictions and concepts, first use and inventions, hardware systems and processors, operating systems, programming languages, and new application areas.
Detailed computing timelines: before 1950, 1950–1979, 1980–1989, 1990–1999, 2000–2009, 2010–2019, 2020–present